Parabolic Anderson Model in a Dynamic Random Environment: Random Conductances

The parabolic Anderson model is defined as the partial differential equation ∂u(x, t)/∂t = κΔu(x, t) + ξ(x, t)u(x, t), x ∈ ℤ^d, t ≥ 0, where κ ∈ [0, ∞) is the diffusion constant, Δ is the discrete Laplacian, and ξ is a dynamic random environment that drives the equation. The initial condition u(x, 0) = u₀(x), x ∈ ℤ^d, is typically taken to be non-negative and bounded. The solution of the parabolic Anderson equation describes the evolution of a field of particles performing independent simple random walks with binary branching: particles jump at rate 2dκ, split into two at rate ξ ∨ 0, and die at rate (−ξ) ∨ 0. In earlier work we looked at the Lyapunov exponents

λ_p(κ) = lim_{t→∞} (1/t) log 𝔼([u(0, t)]^p)^{1/p}, p ∈ ℕ,   λ_0(κ) = lim_{t→∞} (1/t) log u(0, t).

For the former we derived quantitative results on the κ-dependence for four choices of ξ: space-time white noise, independent simple random walks, the exclusion process and the voter model. For the latter we obtained qualitative results under certain space-time mixing conditions on ξ. In the present paper we investigate what happens when κΔ is replaced by Δ_𝓚, where 𝓚 = {𝓚(x, y) : x, y ∈ ℤ^d, x ∼ y} is a collection of random conductances between neighbouring sites replacing the constant conductances κ in the homogeneous model.
We show that the associated annealed Lyapunov exponents λ_p(𝓚), p ∈ ℕ, are given by the formula

λ_p(𝓚) = sup{λ_p(κ) : κ ∈ Supp(𝓚)},

where, for a fixed realisation of 𝓚, Supp(𝓚) is the set of values taken by the 𝓚-field. We also show that for the associated quenched Lyapunov exponent λ_0(𝓚) this formula only provides a lower bound, and we conjecture that an upper bound holds when Supp(𝓚) is replaced by its convex hull. Our proof is valid for three classes of reversible ξ, and for all 𝓚 satisfying a certain clustering property, namely, there are arbitrarily large balls where 𝓚 is almost constant and close to any value in Supp(𝓚). What our result says is that the annealed Lyapunov exponents are controlled by those pockets of 𝓚 where the conductances are close to the value that maximises the growth in the homogeneous setting. In contrast, our conjecture says that the quenched Lyapunov exponent is controlled by a mixture of pockets of 𝓚 where the conductances are nearly constant. Our proof is based on variational representations and confinement arguments.

D. Erhard ⟨D.Erhard@warwick.ac.uk⟩

Introduction and Main Results
Random walks with random conductances have been studied intensively in the literature. For a recent overview, we refer the reader to Biskup [2]. The goal of the present paper is to study the version of the Parabolic Anderson model where the underlying random walk is driven by random conductances, and to investigate the effect on the Lyapunov exponents.

Parabolic Anderson Model with Random Conductances
The parabolic Anderson model with random conductances is the partial differential equation

∂u(x, t)/∂t = Δ_K u(x, t) + ξ(x, t)u(x, t), x ∈ ℤ^d, t ≥ 0,
u(x, 0) = u₀(x), x ∈ ℤ^d. (1.1)

Here, u is an ℝ-valued random field, Δ_K is the discrete Laplacian with random conductances K acting on u as

Δ_K u(x, t) = Σ_{y ∈ ℤ^d, y ∼ x} K(x, y)[u(y, t) − u(x, t)], (1.2)

where {K(x, y) : x, y ∈ ℤ^d, x ∼ y} is a (0, ∞)-valued field of random conductances, x ∼ y means that x and y are neighbours, while ξ is an ℝ-valued random field playing the role of a dynamic random environment that drives the equation. The ξ-field and the K-field are defined on probability spaces (Ω, ℱ, ℙ) and (Ω̃, ℱ̃, ℙ̃), respectively. Throughout the paper we assume that

(1) 0 < c ≤ K(x, y) ≤ C < ∞ for all x, y ∈ ℤ^d, x ∼ y,
(2) K(x, y) = K(y, x) for all x, y ∈ ℤ^d, x ∼ y. (1.5)

The formal solution of (1.1) is given by the Feynman-Kac formula

u(x, t) = E_x[exp{∫_0^t ξ(X^K(s), t − s) ds} u₀(X^K(t))], (1.6)

where X^K = (X^K(t))_{t≥0} is the continuous-time Markov process with generator Δ_K, and E_x is the expectation under the law P_x of X^K given X^K(0) = x. When K ≡ κ ∈ (0, ∞), we write X^K = X^κ. In Section 1.3 we will show that under mild assumptions on ξ the formula in (1.6) is the unique non-negative solution of (1.1). These assumptions are fulfilled for the three classes of ξ that will receive special attention in our paper, which we list next.
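The Feynman-Kac representation (1.6) can be illustrated numerically. The following sketch is not from the paper: it is a toy one-dimensional periodic approximation with a time-independent ξ, and all names in it are ours. It estimates u(x₀, t) by averaging exp(∫ξ)·u₀ over simulated paths of the conductance walk X^K.

```python
import math
import random

def feynman_kac_estimate(u0, xi, K, x0, t, n_paths=200, seed=1):
    """Monte Carlo estimate of u(x0, t) = E_x0[ exp(int_0^t xi(X(s)) ds) u0(X(t)) ]
    for a continuous-time walk X on the discrete circle Z/LZ that crosses the
    edge (x, x+1) at rate K[x].  Toy version: xi is frozen in time."""
    L = len(K)                        # K[x] = conductance of edge (x, x+1 mod L)
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_paths):
        x, s, integral = x0, 0.0, 0.0
        while True:
            rate = K[x] + K[(x - 1) % L]        # total jump rate out of x
            tau = rng.expovariate(rate)         # exponential holding time
            if s + tau >= t:                    # time horizon reached before next jump
                integral += (t - s) * xi(x)
                break
            integral += tau * xi(x)
            s += tau
            # jump right across edge (x, x+1) with prob K[x]/rate, else left
            x = (x + 1) % L if rng.random() < K[x] / rate else (x - 1) % L
        acc += math.exp(integral) * u0(x)
    return acc / n_paths

# Sanity checks that hold path by path, hence exactly:
K = [1.0, 2.0, 0.5, 1.5]
flat = feynman_kac_estimate(lambda x: 1.0, lambda x: 0.0, K, 0, 1.0)   # xi = 0 gives u = 1
const = feynman_kac_estimate(lambda x: 1.0, lambda x: 0.3, K, 0, 1.0)  # xi = c gives u = e^{ct}
```

With ξ ≡ 0 every path contributes exactly u₀ = 1, and with ξ ≡ c every path contributes e^{ct}, so both checks are exact irrespective of the Monte Carlo randomness.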

Choices of Dynamic Random Environments (I) Space-time White Noise
Here ξ is the white-noise field on Ω = ℝ^{ℤ^d}, formally given by ξ(x, t) = ∂W(x, t)/∂t, where W = (W_t)_{t≥0} with W_t = {W(x, t) : x ∈ ℤ^d} is a field of independent standard Brownian motions, and (1.1) is to be understood as an Itô equation.

(II) Independent Random Walks (IIa) Finite System
Here ξ is the Markov process on Ω = {0, . . . , n}^{ℤ^d} given by

ξ(x, t) = Σ_{k=1}^n δ_x(Y_k^ρ(t)), (1.8)

where {Y_k^ρ : 1 ≤ k ≤ n} is a collection of n ∈ ℕ independent continuous-time simple random walks, each jumping at rate 2dρ and starting at the origin.
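Formula (1.8) is easy to simulate. The toy code below (ours, with d = 1) runs the n walks and reads off the occupation field ξ(·, t); since each walk contributes one unit of mass, Σ_x ξ(x, t) = n is conserved.

```python
import random

def occupation_field(n, rho, t, seed=0):
    """Simulate n independent rate-2*d*rho simple random walks on Z (d = 1),
    all started at the origin, and return the field {x: xi(x, t)} as in (1.8)."""
    d = 1
    rng = random.Random(seed)
    field = {}
    for _ in range(n):
        x, s = 0, 0.0
        while True:
            tau = rng.expovariate(2 * d * rho)   # exponential holding time
            if s + tau > t:
                break
            s += tau
            x += rng.choice((-1, 1))             # symmetric nearest-neighbour jump
        field[x] = field.get(x, 0) + 1           # delta_x contribution of this walk
    return field

field = occupation_field(n=20, rho=1.0, t=3.0)
```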

(IIb) Infinite System
Here ξ is the Markov process on Ω = ℕ₀^{ℤ^d} given by

ξ(x, t) = Σ_{y∈ℤ^d} Σ_{j=1}^{N_y} δ_x(Y_j^y(t)), (1.9)

where {Y_j^y : y ∈ ℤ^d, 1 ≤ j ≤ N_y, Y_j^y(0) = y} is an infinite collection of independent continuous-time simple random walks jumping at rate 2d, and (N_y)_{y∈ℤ^d} is a Poisson random field with intensity ν ∈ (0, ∞). The generator L of this process is defined as follows (see Andjel [1]). Let l(x) = e^{−‖x‖}, x ∈ ℤ^d, with ‖·‖ the Euclidean norm. Define the l-norm on Ω as

‖η‖_l = Σ_{x∈ℤ^d} η(x) l(x), (1.10)

and define the sets E_l = {η ∈ Ω : ‖η‖_l < ∞} and L_l = {f : E_l → ℝ Lipschitz continuous}. Then L acts on f ∈ L_l as

(Lf)(η) = Σ_{x∈ℤ^d} Σ_{y∼x} η(x)[f(η^{x,y}) − f(η)], (1.11)

where η^{x,y} is defined by

η^{x,y}(z) = η(z) for z ≠ x, y,  η^{x,y}(x) = η(x) − 1,  η^{x,y}(y) = η(y) + 1. (1.12)

Write μ for the Poisson random field with intensity ν. This is the invariant distribution of the dynamics.
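The configuration update (1.12) is easy to encode and test for particle conservation. The helper below is a toy of ours (configurations as dictionaries on a finite window), not part of the paper.

```python
def move(eta, x, y):
    """Return eta^{x,y} as in (1.12): one particle moved from site x to site y.
    eta is a dict {site: particle count}; requires eta[x] >= 1."""
    assert eta.get(x, 0) >= 1, "no particle at x to move"
    out = dict(eta)                  # copy; the original configuration is untouched
    out[x] = out.get(x, 0) - 1
    out[y] = out.get(y, 0) + 1
    return out

eta = {0: 2, 1: 0, 2: 1}
eta2 = move(eta, 0, 1)               # move one particle from site 0 to site 1
```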

(III) Spin-flip Systems
Here ξ is the Markov process on Ω = {0, 1}^{ℤ^d} whose generator L acts on cylinder functions f as (see Liggett [15, Chapter III])

(Lf)(η) = Σ_{x∈ℤ^d} c(x, η)[f(η^x) − f(η)], (1.13)

where, for a configuration η, c(x, η) is the rate for the spin at x to flip, and

η^x(z) = η(z) for z ≠ x,  η^x(x) = 1 − η(x). (1.14)

We assume that the rates c(x, η) are such that (i) ξ is ergodic and reversible, i.e., there is a probability distribution μ on Ω such that ξ_t converges to μ in distribution as t → ∞ for any choice of ξ₀ ∈ Ω, and (ii) ξ is attractive. We further assume that (iii) ξ₀ has distribution μ.
Let M be the class of continuous non-decreasing functions f on Ω, the latter meaning that f(η) ≤ f(ζ) for all η ≤ ζ. As shown in Liggett [15, Theorems II.2.14 and III.2.13], attractive spin-flip systems preserve the FKG-inequality, i.e., if ξ₀ satisfies the FKG-inequality (e.g. if ξ₀ is distributed according to μ), then so does ξ_t for all t ≥ 0. Examples include the ferromagnetic stochastic Ising model.

Lyapunov Exponents
Our focus will be on the annealed Lyapunov exponents

λ_p(K) = lim_{t→∞} (1/t) log 𝔼([u(0, t)]^p)^{1/p}, p ∈ ℕ, (1.17)

and the quenched Lyapunov exponent λ_0(K) = lim_{t→∞} (1/t) log u(0, t), provided the limits exist. Note that K is fixed, i.e., the annealing and the quenching are with respect to ξ only.
For the binary case Supp(K) = {κ 1 , κ 2 }, the clustering property states that there are two sequences of boxes B 1 (t) and B 2 (t), whose sizes tend to infinity and whose distances to the origin are o(t), such that K(x, y) = κ 1 for all (x, y) ∈ B 1 (t) and K(x, y) = κ 2 for all (x, y) ∈ B 2 (t). Note that if K is i.i.d., then it has the clustering property with probability 1.
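The clustering property for i.i.d. K can be made tangible by a one-dimensional toy computation (ours, not the paper's): in a long i.i.d. sequence of binary conductances, constant runs of logarithmic length appear, which is the 1-d shadow of the arbitrarily large near-constant boxes used in the proof.

```python
import random

def longest_constant_run(values):
    """Length of the longest block of equal consecutive values."""
    best = cur = 1
    for a, b in zip(values, values[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

rng = random.Random(0)
# i.i.d. binary conductances on 100_000 consecutive edges of Z
K = [rng.choice([0.5, 2.0]) for _ in range(100_000)]
run_length = longest_constant_run(K)   # typically around log2(100_000), i.e. about 17
```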
Our main result for the annealed Lyapunov exponents is the following.

Theorem 1.2 Let ξ be as in (I)-(III), and let K have the clustering property. Then for all p ∈ ℕ the limit in (1.17) exists and equals

λ_p(K) = sup{λ_p(κ) : κ ∈ Supp(K)}. (1.20)

To obtain a similar result for the quenched Lyapunov exponent, we need to make a different set of assumptions on ξ: (1) ξ is stationary and ergodic under translations in space and time.

4.
The quenched Lyapunov exponent λ 0 (κ) is continuous in κ as well, but it fails to be non-increasing (it is expected to be unimodal). Hence we do not expect the inequality in (1.21) to be an equality, as in the annealed case. In Section 5 we provide an illustrative example for a decorated version of Z d , i.e., each pair of neighbouring sites of Z d is connected by two edges rather than one, for which the inequality in (1.21) is strict. We conjecture that the following upper bound holds.

Conjecture 1.4
Under the conditions of Theorem 1.3,

λ_0(K) ≤ sup{λ_0(κ) : κ ∈ Conv(Supp(K))},

where Conv(Supp(K)) is the convex hull of Supp(K).

5.
The Feynman-Kac formula shows that understanding the Lyapunov exponents amounts to understanding the large deviation behaviour of the integral of the ξ-field along the trajectory of a random walk in a random environment. Drewitz [6] studies the case where the Laplacian is replaced by a Laplacian with a deterministic drift and ξ is constant in time. There it is proven that the Lyapunov exponent is maximal when the drift is zero.

6.
We expect that, by pushing the method of our proof a bit further, one may relax the boundedness and uniform ellipticity assumption (1.5) on the K-field. However, at this point this seems to be a technical issue only and would not provide much additional insight, so we refrain from doing so.
Outline The outline of the remainder of the paper is as follows. In Section 2 we derive variational formulas for the annealed Lyapunov exponents and use these to derive the rightmost inequality in (2.2), i.e., ≤ in (1.20) and the monotonicity in each coordinate of K. In Section 3 we derive the leftmost inequality in (2.2), i.e., ≥ in (1.20). The proof uses a confinement approximation, showing that the annealed Lyapunov exponent does not change when the random walk in the Feynman-Kac formula (1.6) is confined to a slowly growing box. In Section 4 we turn to the quenched Lyapunov exponent and prove the lower bound in Theorem 1.3 with the help of a confinement approximation. In Section 5 we discuss the failure of the corresponding upper bound by providing a counterexample for a decorated lattice.
In Appendix A we show that the annealed Lyapunov exponents are the same for all initial conditions that are bounded. In Appendix B we prove a technical lemma about the generator of dynamics (IIb).

Annealed Lyapunov Exponents: Preparatory Facts, Variational Representations, Existence and Upper Bound
Section 2.1 contains some preparatory facts. Section 2.2 gives variational representations for λ p (K) for each of the four dynamics (Propositions 2.2-2.5 below) and settles the existence. Section 2.3 explains why these variational representations imply the upper bound. Section 2.4 provides the proof of the variational representations.

Preparatory Facts
The following proposition, whose proof is deferred to Appendix A, shows that the annealed Lyapunov exponents are the same for any bounded initial condition u₀, i.e., without loss of generality we may take u₀ = δ₀ or u₀ ≡ 1.
Consequently, the proof of Theorem 1.2 reduces to the following two inequalities:

sup{λ_p(κ) : κ ∈ Supp(K)} ≤ lim inf_{t→∞} (1/t) log 𝔼([u(0, t)]^p)^{1/p} ≤ lim sup_{t→∞} (1/t) log 𝔼([u(0, t)]^p)^{1/p} ≤ sup{λ_p(κ) : κ ∈ Supp(K)}. (2.2)

We prove the second inequality (upper bound) in the present section and the first inequality (lower bound) in Section 3. For ease of notation we suppress the upper index from the respective Lyapunov exponents.
Before we proceed we make three observations. (I) For ξ space-time white noise, 𝔼([u(0, t)]^p) admits a representation (2.3) in which E₀^{⊗p} is the expectation with respect to p independent random walks X₁^K, . . . , X_p^K, all having generator Δ_K and all starting at 0. (IIa) For ξ finitely many independent simple random walks we have a representation (2.4) which is similar to (2.3). In particular, the proof of the upper bound in Theorem 1.2 is similar for (I) and (IIa). Therefore we will only give the proof for (IIa).
(I)-(III) are reversible, and so (ξ(s))_{0≤s≤t} and (ξ(t − s))_{0≤s≤t} are equal in ℙ-distribution.

Variational Representations
We assume throughout this section that u₀ = δ₀ (see Proposition 2.1).

Proposition 2.2 Let ξ be as in (I).
Then, for all p ∈ ℕ, the variational representation holds, where the functional is given in (2.7).

Proposition 2.3 Let ξ be as in (IIa).
Then, for all p ∈ ℕ, the representation (2.8) holds, where the functional is given in (2.9).

Proposition 2.4 Fix p ∈ ℕ. Let ξ be as in (IIb) and let G(0) be the Green function at the origin of simple random walk jumping at rate 2d. Then, for all 0 < p < 1/G(0), the representation (2.10) holds, where the operators are given in (2.11) and L acts only on the first coordinate of f.

Proposition 2.5 Let ξ be as in (III).
Then, for all p ∈ ℕ, the representation (2.13) holds, where m_p is the counting measure on ℤ^{dp} and the functional is given in (2.14).

Proof of Propositions 2.2-2.5
The proofs are, besides the proof of ≤ in (2.10), essentially straightforward extensions of the proofs of [3, Lemma III.
We only indicate the main steps (and so the arguments in this section are not self-contained).

Proof of Propositions 2.2, 2.3 and 2.5
Proof As mentioned in Section 2.1, the Feynman-Kac formulas for the annealed Lyapunov exponents for white noise and finitely many independent random walks are similar, since the term in (2.3) for white noise matches the corresponding term in (2.4) for finitely many independent random walks. Therefore a slight adaptation of the proof of Proposition 2.3 below is enough to get the corresponding result for ξ being space-time white noise, i.e., ξ as in (I).
The proofs of Propositions 2.3 and 2.5 follow the same line of argument as the proofs of [4, Proposition 2.1] and [10, Proposition 2.2.1], respectively, for K ≡ κ. Below we detail how to adapt the proofs. Consider the Markov process Y = (Y(t))_{t≥0} whose generator combines L₁ and L₂, the generators of (IIa) and (III) respectively, with Δ_K as in (2.11) but acting on the second coordinate of f ∈ ℓ²(m_n ⊗ m_p) and f ∈ L²(μ ⊗ m_p) (if ξ is as in (IIa) and (III) respectively), and with V₁ (as in [4, (16)]) and V₂ (as in [10, (2.2.2)]) given by (2.18) and (2.19). Since L₁ and L₂ are self-adjoint and bounded, and K has compact support and is symmetric, G_K^V is a bounded self-adjoint operator.

Upper Bound
Recall (2.20), and let B_{R(t)} ⊂ ℤ^d be the box of radius R(t) = t log t centered at the origin. Then, for any fixed realization of K, at the cost of a superexponentially small error, it is enough to consider in the Feynman-Kac representation (1.6) only random walk paths that stay in B_{R(t)} until time t. This allows us to use the spectral theorem as in [10, Proposition 2.2.1]. We omit the details.
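The "superexponentially small error" can be checked with a back-of-the-envelope Chernoff bound (our illustration, with made-up parameter values d = 2, κ = 1): to leave the box B_{R(t)} with R(t) = t log t, the walk must make at least t log t jumps, while its number of jumps is Poisson with mean of order t, so (1/t) log P tends to −∞.

```python
import math

def log_poisson_tail_bound(mean, n):
    """log of the Chernoff bound P(Poisson(mean) >= n) <= e^{-mean} (e*mean/n)^n,
    valid for n > mean."""
    assert n > mean
    return -mean + n * (1 + math.log(mean / n))

# jump rate 2*d*kappa with d = 2, kappa = 1; threshold n = t*log(t) jumps
rates = []
for t in (100, 1_000, 10_000):
    mean = 4.0 * t                       # expected number of jumps up to time t
    n = math.ceil(t * math.log(t))       # jumps needed to exit B_{t log t}
    rates.append(log_poisson_tail_bound(mean, n) / t)   # upper bound on (1/t) log P
```

The computed per-time exponential rates decrease without bound as t grows, which is exactly the superexponential decay invoked above.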
Lower Bound Since K is bounded away from zero and infinity, it follows that for any finite K ⊂ ℤ^d there exists C > 0 such that the corresponding comparison estimate holds. Let δ > 0 and take f_δ such that, inserted into the right-hand side of (2.8) (respectively of (2.13)), it approximates the corresponding supremum in (2.8) (respectively in (2.13)) up to a difference δ. It was argued in [10, Proposition 2.2.1] that f_δ may be taken supported on a finite set if ξ is as in (IIa), and likewise if ξ is as in (III).

Proof of Proposition 2.4
Proof We only prove the case p = 1, the extension to general p being straightforward (see also Remark 2.7). The proof of Proposition 2.4 is divided into two steps.
Step 1 We first show that λ 1 (K) is bounded from above by the right-hand side of (2.10). Recall (2.12).

Claim 2.6
There is a sequence of constants C_t, t > 0, with lim_{t→∞} C_t = ∞, such that for all N ∈ ℕ and t > 0 the stated bound holds, where E_{μ,0} denotes expectation with respect to the joint process (ξ, X^K) when ξ is drawn from μ and X^K starts at 0, and where the truncated potential is given in (2.27). Taking the logarithm, dividing by t and letting t → ∞ leads to the desired upper bound.
Before we begin the proof of Claim 2.6 we recall some facts from Gärtner and den Hollander [9]. A slight generalization of [9, Proposition 2.1] yields the representation (2.28), where the function w solves the equation (2.29). We are now ready to prove Claim 2.6. We use ideas from Kipnis and Landim [13, Appendix 1.7]. Recall the uniform ellipticity assumption (1.5) on the K-field. By standard large deviation estimates for the number of jumps of X^K and by (2.28)-(2.30), there is a sequence of constants C_t as in the statement of Claim 2.6 such that (2.31) holds for all t > 0 and N ∈ ℕ. Here, B_{R(t)} denotes the box centered at the origin with side length R(t) = t log t. We now make use of the following fact (which follows from Demuth and van Casteren): L^{V_N} = L + Δ_K + V_N is the generator of the semigroup (2.32). In particular, the function v_t(η, y) = (P_t^{V_N} f)(η, y) satisfies (2.33), where V_N acts as a multiplication operator. Moreover, by (2.33), (2.35) holds for all t > 0, where interchanging the derivative and the scalar product is justified by dominated convergence in combination with Lemma B.1 in Appendix B. Using the Cauchy-Schwarz inequality and the normalization of f, we obtain (2.38). The claim follows by combining (2.31), (2.34) and (2.38).
Step 2 It remains to show that λ 1 (K) is bounded from below by the right-hand side of (2.10). The proof follows the same line of argument as the proof of [10, Proposition 2.2.1] for K ≡ κ. The details to adapt it are left to the reader since they are similar to those given in the proof of the lower bound in Section 2.4.1.

Remark 2.7
To adapt the above proof to general p, note that (2.30) reads in this case as (2.39).

Annealed Lyapunov Exponents: Confinement Approximation and Lower Bound in Theorem 1.2
In Section 3.1 we show that the annealed Lyapunov exponents for K ≡ κ do not change when the random walk in the Feynman-Kac formula (1.6) is confined to a slowly growing box (Proposition 3.1). In Section 3.2 we use this result to prove the lower bound in Theorem 1.2, i.e., sup{λ p (κ) : κ ∈ Supp(K)} ≤ λ p (K). Throughout this section we assume that u 0 = δ 0 , see Proposition 2.1 for a justification of that assumption.

Confinement Approximation

Proposition 3.1 Fix p ∈ ℕ and κ > 0, and let ξ be as in (I)-(III). Fix a non-decreasing function L with lim_{t→∞} L(t) = ∞. Then (3.1) holds.

Proof We write out the proof for the dynamics (I), namely for space-time white noise. Given p independent simple random walks X₁^κ, . . . , X_p^κ, define the quantity in (3.2), where, with a slight abuse of notation, we redefine B_{L(t)}(0) = [−L(t), L(t)]^{dp} ∩ ℤ^{dp}. Pick u ∈ [s, t]. Using that L is non-decreasing, inserting δ₀(X̄^κ(u − s)), and using the Markov property of X̄^κ at time u − s, we see that STWN(s, t) ≥ STWN(s, u) STWN(u, t).
Taking the logarithm, dividing by pnT, and letting n → ∞ followed by T → ∞, we obtain the desired claim.
The proof for (II)-(III) works along the same lines. To use the superadditivity argument as in (3.3) and to get the inequalities in (3.6), the same techniques as in the first step of the proof of Proposition 2.1 in Appendix A may be applied.

Proof of the Lower Bound in Theorem 1.2
We give the proof for (I). The idea of the proof is to restrict the random walk to a box that slowly increases with time such that the K-field is constant on this box. The existence of such a box is guaranteed by the clustering property of K stated in Definition 1.1. Proposition 3.1 then yields that the resulting Lyapunov exponent equals λ p (κ) with κ the value of K on this box.
Proof The proof comes in 2 Steps.
Step 2 We next prove Theorem 1.2 in the general case by reducing it to the setting of Step 1. Recall (2.20). Fix n ∈ ℕ. Given a realization of K, we define a discretization K_n of K by putting, for each x, y ∈ ℤ^d, the value given in (3.14). A slight adaptation of Step 1 yields (3.15). Here, the restriction to the set Supp(K_n) \ {κ*} comes from the fact that ℙ̃(K(x, y) = κ*) = 0 is possible, e.g. when the distribution of K is continuous. By Carmona and Molchanov [3, Proposition III.2.7], κ ↦ λ_p(κ) is continuous, hence the right-hand side of (3.15) converges to sup{λ_p(κ) : κ ∈ Supp(K)} as n → ∞. Hence it suffices to show that lim sup_{n→∞} λ_p(K_n) ≤ λ_p(K).
To do so we borrow ideas from the proof of [11, Theorem 1.2(i)]. First we introduce the notation K̄(x) = Σ_{y∈ℤ^d} K(x, y), x ∈ ℤ^d, and we define K̄_n in a similar fashion. An application of Girsanov's formula yields (3.16) (see König), where N(X^K; t) denotes the number of jumps of the random walk X^K with generator Δ_K up to time t. Note that K_n(x, y)/K(x, y) ≤ 1 for all x ∼ y ∈ ℤ^d and that −∫_0^t [K̄_n(X^K(s)) − K̄(X^K(s))] ds ≤ 2dt/n. Hence, the right-hand side of (3.16) is bounded from above by (3.17). Consequently, (3.16) and (3.17) show that lim sup_{n→∞} λ_p(K_n) ≤ λ_p(K). This finishes the proof. The proof for (II) and (III) is the same as above, with the additional restriction that 0 < p < 1/G(0) for (IIb), and with κ ↦ λ_p(κ) continuous for (II), which allows us to take the limit on the right-hand side of (3.15). The continuity of κ ↦ λ_p(κ) for (III) follows from Proposition 2.5, which still holds when κ is deterministic. Indeed, the variational formula in Proposition 2.5 shows that κ ↦ λ_p(κ) is convex. Since ξ is bounded for (III), so is κ ↦ λ_p(κ), which yields the desired continuity. To obtain the result for (IIb) with p ≥ 1/G(0), for which λ_p(κ) = ∞ for all κ ≥ 0, we note that, averaging u(0, t)^p first with respect to the trajectories Y_j^y present in the definition of ξ, then with respect to the Poisson field (N_y)_{y∈ℤ^d}, and using standard Feynman-Kac identities, an adaptation of the proof of [9, Proposition 2.1] yields the estimate (3.18) involving w̄(0, s), where w̄ solves the equation (3.19). To conclude it suffices to note that by [9, Proposition 2.3] (with the notation r_d = 1/G(0)), t ↦ w̄(0, t) is non-decreasing with lim_{t→∞} w̄(0, t) = ∞.
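The discretization step can be sketched concretely. The code below is our guess at one choice consistent with the bounds just used (the paper's definition (3.14) is not reproduced in this extract): rounding each conductance down to a multiple of 1/n gives K_n ≤ K edge-wise with error below 1/n, hence K̄ − K̄_n ≤ 2d/n at every site.

```python
import math

def discretize(K_edges, n):
    """Hypothetical discretization K_n(e) = floor(n*K(e))/n of a conductance
    field (one value per undirected edge e).  Guarantees K_n <= K and
    K - K_n < 1/n edge-wise, so the 2d-term site sums differ by at most 2d/n."""
    return {e: math.floor(n * k) / n for e, k in K_edges.items()}

# toy field on the four edges incident to the origin in d = 2
K = {('r', (0, 0)): 1.37, ('u', (0, 0)): 0.52, ('l', (0, 0)): 2.04, ('d', (0, 0)): 0.99}
n = 10
Kn = discretize(K, n)
max_err = max(K[e] - Kn[e] for e in K)   # largest edge-wise discretization error
```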

Quenched Lyapunov Exponent: Confinement Approximation and Lower Bound
In Section 4.1 we show that a confinement approximation holds for K ≡ κ. In Section 4.2 we use this result to prove Theorem 1.3.

(4.2)
Pick u ∈ [s, t]. Using that L is non-decreasing and inserting δ₀(X^κ(u − s)) under the expectation in (4.2), we obtain a product lower bound; applying the Markov property of X^κ at time u − s turns this into superadditivity. Since ξ is stationary and ergodic, and the law of the environment between times u + s and u + t, 0 ≤ s ≤ t < ∞, is the same for all u ≥ 0, it follows from Kingman's superadditive ergodic theorem [12] that the limit in (4.5) of (1/t) times the logarithm of the expression in (4.2) exists ℙ-a.s. and in ℙ-mean, and is non-random. Using that ξ is invariant under time shifts, letting n → ∞ followed by T → ∞, using the L¹-convergence in (4.5), and recalling that u₀ = δ₀, we arrive at the sandwich (4.8). The convergence of the rightmost term in (4.8) to the rightmost term in (4.5) can be shown by a direct comparison between these two terms using condition (4) for ξ.
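The deterministic skeleton of this superadditivity argument is Fekete's lemma: if a(m + n) ≥ a(m) + a(n), then a(n)/n converges to sup_n a(n)/n. A minimal numerical check (ours, with an illustrative sequence a(n) = 2n − log(n + 1)):

```python
import math

def a(nsteps):
    """a(n) = 2n - log(n + 1) is superadditive for m, n >= 1:
    a(m+n) - a(m) - a(n) = log((m+1)(n+1)/(m+n+1)) >= 0."""
    return 2 * nsteps - math.log(nsteps + 1)

# verify superadditivity on a small range
gaps = [a(m + n) - a(m) - a(n) for m in range(1, 40) for n in range(1, 40)]
# a(n)/n increases to sup_n a(n)/n = 2
ratios = [a(10 ** k) / 10 ** k for k in range(1, 7)]
```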

Proof of Theorem 1.3
With the help of Proposition 4.1 we can now give the proof of Theorem 1.3.
Proof The proof comes in 3 steps.

Step 2 An application of the Markov property of the random walk at times g_i(t) and t − g_i(t) yields (4.10). Further note that, by Jensen's inequality, the bound (4.11) holds, where the interchange of the expectations is justified by (4.12). A similar computation yields the same lower bound for E[log U₃(t)]. Note that the lower bounds are sublinear in t. To control U₂, note that X^K restricted to the box centered at x(κ_i, t) is distributed as a random walk with diffusion constant κ_i confined to stay in this box. Hence (4.13) holds, so that, by the space-time shift invariance of ξ and Proposition 4.1, we obtain (4.14). Since, by the first step of the proof, we have the representation (4.15) for u(0, t), the bounds (4.10)-(4.14) yield the claim for the case Supp(K) = {κ₁, κ₂}, κ₁, κ₂ ∈ (0, ∞).
Step 3 The strategy to extend the proof to the general case is similar to the second step of the proof of Theorem 1.2 in Section 3.2. However, since we do not know whether κ ↦ λ₀(κ) is continuous, some modifications are needed (see [11, Theorem 1.2(i)], where conditions are provided under which the quenched Lyapunov exponent λ₀(κ) is Lipschitz continuous outside any neighbourhood of zero). Fix n ∈ ℕ and, given a realisation of K, define a discretization K_n of K as in the second step of the proof of Theorem 1.2.
To proceed, let M = sup{λ₀(κ) : κ ∈ Supp(K)}. We claim that the liminf of the right-hand side of (4.17) is bounded from below by M. We distinguish between two cases. If M = ∞, then for each R > 0 there is κ_R ∈ Supp(K) such that λ₀(κ_R) ≥ R. Since κ ↦ λ₀(κ) is lower semi-continuous, for any ε > 0 there is a neighbourhood U_R of κ_R such that λ₀(κ) ≥ λ₀(κ_R) − ε for all κ ∈ U_R. Hence, for all R ≥ 0 and ε > 0, we obtain (4.20). From this we get the claim by letting R → ∞. The case M < ∞ may be treated similarly. It only remains to show that lim sup_{n→∞} λ₀(K_n) ≤ λ₀(K). But this works verbatim as in the second step of the proof of Theorem 1.2.

Quenched Lyapunov Exponent: Failure of Upper Bound
In this section we provide an example where the upper bound fails for a decorated version of ℤ^d. Namely, we show that there is a choice of K for which

λ₀(K) > sup{λ₀(κ) : κ ∈ Supp(K)}. (5.1)

Let (V, E) denote the usual graph associated with ℤ^d, i.e., V = ℤ^d and E = {e(x, y) : x, y ∈ V, x ∼ y} is the set of edges connecting nearest-neighbour vertices of V. We consider (V, E′), a decorated version of (V, E), with

E′ = {(e₁(x, y), e₂(x, y)) : x, y ∈ V, x ∼ y}, (5.2)

i.e., we draw two edges rather than one, say red and green, between every pair of nearest-neighbour vertices of ℤ^d. Pick any K on E′ that has the alternating cluster property, i.e., there exist boxes B_{L(t)}, with lim_{t→∞} L(t) = ∞, on which all red edges have value κ₁ and all green edges have value κ₂. For such K, by the confinement approximation of Proposition 4.1, we have (5.3), where (κ₁, κ₂)_{E′} means that all red edges take value κ₁ and all green edges take value κ₂. In [11] we exhibited a class of dynamic random environments ξ for which lim_{κ→∞} λ₀(κ) = λ₀(0) = E(ξ(0, 0)).
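One way to see the mechanism in concrete terms (our arithmetic illustration, not the paper's proof): a walk on the decorated lattice only feels the sum of the two parallel conductances, so the alternating field (κ₁, κ₂) is indistinguishable from the homogeneous decorated field carrying (κ₁ + κ₂)/2 on every edge, a value lying in Conv(Supp(K)) but not in Supp(K) whenever κ₁ ≠ κ₂.

```python
def pair_rate(kappa_red, kappa_green):
    """Total jump rate between two neighbouring vertices of the decorated
    lattice, whose two parallel edges carry the given conductances."""
    return kappa_red + kappa_green

k1, k2 = 0.5, 2.0
mid = (k1 + k2) / 2                      # midpoint lies in Conv({k1, k2})
alternating = pair_rate(k1, k2)          # the counterexample's alternating field
homogeneous = pair_rate(mid, mid)        # homogeneous decorated comparison model
```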
A.1 Dynamics (I)
where under E^{⊗p}_{ys} the process X̄^κ starts at ys. Abbreviate (in case it is well defined) STWN^y(s, t) = STWN^{y,y}(s, t). It is enough to show the existence of a concave and symmetric function α : ℝ^{dp} → ℝ such that, for all compact K ⊂ ℝ^{dp},

lim_{t→∞} sup_{y ∈ Kt ∩ ℤ^{dp}} |(1/t) log STWN^y(0, t) − α(y/t)| = 0. (A.2)

Indeed, suppose that such a function exists. A short computation shows that α attains a global maximum at zero. Moreover, a standard large deviation estimate for the number of jumps of X̄^κ shows that there is a compact subset K ⊂ ℝ^{dp} such that, on the exponential scale, the contribution of paths leaving Kt is negligible. Hence, given such a set K, it is enough to focus on the contribution coming from those random walk paths with X̄^κ[0, t] ⊆ Kt. Note that necessarily 0 ∈ K. Fix ε > 0. By the approximation property of α in (A.2) we can find a t₀ ≥ 0 such that, for all t ≥ t₀, the required approximation holds, which yields the desired claim.
The proof of the existence of α is divided into 3 Steps.
Step 1 We first show the existence of a function α : ℚ^{dp} → ℝ such that, for all y ∈ ℚ^{dp},

lim_{t→∞, yt ∈ ℤ^{dp}} (1/t) log STWN^y(0, t) = α(y). (A.5)

To that end, we fix y ∈ ℚ^{dp} and take 0 ≤ s < u < t such that ys, yu, yt ∈ ℤ^{dp}. Consequently, t ↦ log STWN^y(s, t) is superadditive for each s as above, and the claim in (A.5) follows.
Step 2 To extend α to a function on ℝ^{dp} and to get uniform convergence on compacts as in (A.2), we show that for any compact subset K ⊂ ℝ^{dp},

lim_{ε↓0} lim sup_{t→∞} sup_{x,y∈K, xt,yt∈ℤ^{dp}, ‖x−y‖≤ε} |(1/t) log STWN^x(0, t) − (1/t) log STWN^y(0, t)| = 0. (A.7)

To that end, we fix ε > 0 and note that for all t > 0 and all y ∈ K such that yt ∈ ℤ^{dp}, STWN^y(0, t) can be decomposed according to the position w(1 − ε)t ∈ ℤ^{dp}, w ∈ ℝ^{dp}, of the walk at time (1 − ε)t (A.8). Moreover, by standard large deviation estimates for the number of jumps of each component of X̄^κ, it is possible to find an R > 0 such that the main contribution to (A.8) comes from those w with w ∈ B_R, where B_R denotes the box centered at the origin with radius R. Consequently, to conclude Step 2 it is enough to show the bound (A.10) uniformly over x, y ∈ K with xt, yt ∈ ℤ^{dp} and ‖x − y‖ ≤ ε. But this follows from the fact that the term appearing under the integral in the exponential in (2.3) is bounded, together with standard estimates on the random walk transition kernel. The details can be found in the proof of [7, Lemma 4.3].
Step 3 Using the results in Steps 1-2, we can conclude the proof as in [7]. We only give a sketch. Because of (A.7), α is continuous on ℚ^{dp} and hence can be extended to a continuous function α : ℝ^{dp} → ℝ. The uniform convergence in (A.2) follows from (A.7) and a compactness argument. Clearly, α is symmetric, i.e., α(x) = α(−x) for all x ∈ ℝ^{dp}, which is a consequence of the symmetry of ξ. It remains to show the concavity of α. For that, fix x, y ∈ ℝ^{dp}, β ∈ (0, 1) and take sequences (t_n)_{n∈ℕ}, (x_n)_{n∈ℕ}, (y_n)_{n∈ℕ} such that lim_{n→∞} t_n = ∞, lim_{n→∞} x_n = x, lim_{n→∞} y_n = y, and βy_n t_n, (1 − β)y_n t_n ∈ ℤ^{dp} for all n ∈ ℕ. Then, constraining X̄^κ to be at position βt_n y_n at time βt_n, we see that

log STWN^{βy_n+(1−β)x_n}(0, t_n) ≥ log STWN^{y_n}(0, βt_n) + log STWN^{y_n, βy_n+(1−β)x_n}(βt_n, t_n). (A.11)

The term on the left-hand side converges to α(βy + (1 − β)x) after division by t_n, the first term on the right-hand side converges to α(y) after division by βt_n, while the second term on the right-hand side converges to α(x) after division by (1 − β)t_n. This yields the existence of a function α as claimed in (A.2), and finishes the proof.

A.2 Dynamics (IIa)
Proof For 0 ≤ s < t < ∞ and y, z ∈ ℝ^{d(n+p)} such that ys, zt ∈ ℤ^{d(n+p)}, define the analogue of (A.1) (A.12), where X̄ = (X₁^κ, . . . , X_p^κ, X₁^ρ, . . . , X_n^ρ). The function α from (A.2) is constructed on ℝ^{d(n+p)} rather than on ℝ^{dp}. The construction is similar to that for dynamics (I) and is therefore omitted.

A.3 Dynamics (IIb)
Recall that the dynamics starts from a Poisson random field on ℤ^d with intensity ν ∈ (0, ∞), and recall the representation derived in Section 2.4.2. For the proof we distinguish between two cases.
The difference with (I) is that we no longer have the same relation as in (A.8). However, by the lines following (2.29), we have the bound w(x, t) ≤ w(0, t) for all x ∈ ℤ^d, t ≥ 0. Moreover, by (2.30), the assumption 0 < p < 1/G(0) yields that w(0, t) is bounded. Hence, we can use large deviation arguments for the random walk to show that the main contribution to (A.13) comes from those random walk paths that stay until time t inside a box of size Rt for a suitably chosen value of R. Moreover, using a comparison valid for all t ≥ 0, ε ∈ (0, 1) and x, y ∈ ℤ^{dp}, where (2.30) gives the last inequality and the relation (A.16) is used throughout the inequalities in (A.18), we can proceed as for (I).
Step 3 This works almost verbatim as for (I). We omit the details.

A.4 Dynamics (III)
Proof The idea of the proof is the same as for (I)-(II), but some additional technical difficulties arise. Write E^{⊗p}_{μ,x} = E_μ ⊗ E_x^{⊗p} for the expectation when (ξ, X̄^κ), with X̄^κ = (X₁^κ, . . . , X_p^κ) a collection of p independent simple random walks jumping at rate 2dκ, has initial distribution (μ, δ_x). For 0 ≤ s < t < ∞ and y, z ∈ ℝ^{dp} such that ys, zt ∈ ℤ^{dp}, define, similarly as in (A.1), the quantity SFS^{y,z}(s, t), and write SFS^y(s, t) = SFS^{y,y}(s, t). As for (I), it is enough to show the existence of a function α : ℝ^{dp} → ℝ such that the approximation property (A.2) holds for all compact subsets K ⊂ ℝ^{dp}. The proof comes in 3 steps.
Step 1 We first show the existence of a function α : ℚ^{dp} → ℝ such that

lim_{t→∞, yt ∈ ℤ^{dp}} (1/t) log SFS^y(0, t) = α(y).
The idea is again to establish the superadditivity of t ↦ log SFS^y(s, t) for all y ∈ ℚ^{dp} such that ys, yt ∈ ℤ^{dp}. In the present context, however, this is a bit more delicate than before, which is why we provide the details. Fix y ∈ ℚ^{dp}, and take 0 ≤ s < u < t < ∞ such that ys, yu, yt ∈ ℤ^{dp}. Constraining the random walk X̄^κ to be at position yu at time u − s, we can use the strong Markov property of (ξ, X̄^κ) at time u − s to get (A.22). Expanding the exponentials, we may rewrite the right-hand side of (A.22) in terms of the quantities

∏ ξ(X̄_i^κ(s_j), s_j) 1l{X̄^κ(t) = y}, n ∈ ℕ, t, s₁, . . . , s_n ≥ 0, y ∈ ℤ^{dp}. (A.25)

Note that, by the non-negativity of ξ, for all n ∈ ℕ, yu ∈ ℤ^{dp} and s₁, . . . , s_n, u − s ≥ 0, the corresponding terms can be bounded in terms of the distribution of ξ at time u − s when ξ starts from μ. But μ is an invariant measure, and so this distribution equals μ. Consequently, the right-hand side of (A.28) reduces to an expression from which the existence of α follows.
Step 2 As in the proof for (I), we want to establish the analogous uniform continuity estimate for any compact subset K ⊂ ℝ^{dp}. The difference with (I) is that we no longer have the same relation as in (A.8). However, because of the boundedness of ξ, we can use a large deviation argument for the random walk to show that the main contribution to (A.19) comes from those random walk paths that stay until time t inside a box of size Rt for a suitably chosen value of R. Moreover, using a comparison valid for all t ≥ 0, ε ∈ (0, 1) and w, x ∈ ℤ^{dp}, we can finish the proof as for (I).
Step 3 Use the techniques from Step 1 to proceed in a similar manner as in Step 3 for (I). We omit the details.

B A Technical Lemma
The following lemma was used in Section 2.4.2.
Lemma B.1 Let L be the generator of the dynamics in (IIb). For N ∈ ℕ, define V_N : ℕ₀^{ℤ^d} × ℤ^d → ℝ by V_N(η, x) = η(x) ∧ N (recall (2.12)), and let P_t^{V_N} be the semigroup of L^{V_N} = L + Δ_K + V_N. Then for every t > 0 there is a g ∈ L¹(ℕ₀^{ℤ^d} × ℤ^d, μ ⊗ m) such that, for all η ∈ ℕ₀^{ℤ^d} and y ∈ ℤ^d,

|L^{V_N} P_t^{V_N} f(η, y) × P_t^{V_N} f(η, y)| ≤ g(η, y), (B.1)

where P_{y,x} denotes the product law of (X^K, Y^x) started from (y, x). The relevant bound takes the form

Σ_{x∈ℤ^d} η(x) P_{y,x}(∃ s ∈ [0, t + 1] : ‖X^K(s) − Y^x(s)‖ = 1) × P_y(∃ s ∈ [t − 1, t + 1] : X^K(s) = 0). (B.6)

We complete the proof by showing that the right-hand side of (B.6) is in L¹(ℕ₀^{ℤ^d} × ℤ^d, μ ⊗ m). To see why, note that integration of the right-hand side of (B.6) over η ∈ ℕ₀^{ℤ^d} yields an upper bound consisting of two terms, I and II. The first property in (1.5) combined with standard large deviation estimates shows that I is finite. To see that II is finite, note that by the Cauchy-Schwarz inequality it suffices to bound

Σ_{y∈ℤ^d} Σ_{k∈ℕ} Σ_{x : ‖x−y‖=k} P_{y,x}(∃ s ∈ [0, t + 1] : ‖X^K(s) − Y^x(s)‖ = 1) × P_y(0 ∈ X^K([t − 1, t + 1])). (B.10)

To proceed, note that for any y ∈ ℤ^d the inner sum Σ_{k∈ℕ} Σ_{x : ‖x−y‖=k} P_{y,x}(∃ s ∈ [0, t + 1] : ‖X^K(s) − Y^x(s)‖ = 1) can be controlled in terms of N(X^K − Y^x, t + 1), the number of jumps of X^K − Y^x (B.11). Thus, the first property in (1.5) combined with standard large deviation estimates shows that the sum in (B.11) is bounded uniformly in y. To conclude, use that

y ↦ P_y(0 ∈ X^K([t − 1, t + 1])) (B.12)

decays faster than exponentially in ‖y‖. This implies that II is finite, and shows that the right-hand side of (B.6) is in L¹(ℕ₀^{ℤ^d} × ℤ^d, μ ⊗ m).