Linear Stochastic Dyadic Model

We discuss a stochastic interacting particle system connected to dyadic models of turbulence, defining suitable classes of solutions and proving their existence and uniqueness. We investigate the regularity of a particular family of solutions, called moderate solutions, and conclude with the existence and uniqueness of invariant measures associated with such moderate solutions.


Introduction
In this paper we consider a stochastic system of interacting particles, introduced and discussed in [14]:

(1)  dX_n = k_{n−1} X_{n−1} ∘ dW_{n−1} − k_n X_{n+1} ∘ dW_n,  n ≥ 1,  X_n(0) = X_n,  X_0(t) ≡ σ,

where k_n := λ^n, with λ > 1, the W_n are independent Brownian motions with ∘ dW denoting Stratonovich stochastic integration, X is a random initial condition, and σ is a nonnegative deterministic forcing, a term not present in [14]. The system is closely related to dyadic models of turbulence, an interesting simplification of the energy cascade phenomenon, which have been extensively studied in the physical literature as well as in the mathematical one. We mention here just some results: in [7-9,11] one can find dyadic models of a form similar to the one here, linearised by means of Girsanov's theorem, while [6,10,13,19,21] deal with other variants of the dyadic models. We refer to the review papers [1,12], too, for further reading and references.
The σ in (1) is a deterministic forcing, a feature that (1) shares with other dyadic models, see for example [15,16]. Such a forcing is usually introduced to provide a steady flow of energy and allow for solutions that are stationary in time: dyadic models, even when formally energy preserving, dissipate energy through so-called anomalous dissipation.
In some cases, one considers dyadic models with additive stochastic forcing, for example [5,18], where the noise is 1-dimensional, and [23], where it acts on all components. The model (1) considered in this paper is itself stochastic, through the random initial condition X, and the infinite-dimensional multiplicative noise (W n ) n , which formally conserves energy. Other examples linked to this one in the literature are the already mentioned [7][8][9]11] and [14].
Unlike classical dyadic models, the particle system (1) considered in this paper is linear. This is not surprising: as already mentioned, stochastic linear dyadic models arise from nonlinear ones through Girsanov's theorem. Moreover, the coefficients in such models are growing exponentially, and the associated operator, though linear, is still nontrivial to deal with. Additionally, as mentioned in [14], systems similar to the one discussed here play a role in modelling quantum spin chains and heat conduction.
The main results in this paper are the following. We define two classes of solutions for our system: proper solutions and moderate ones, a more general class. For both classes we prove existence and uniqueness, but the natural setting for uniqueness is that of moderate solutions, a class that had already been introduced in [14]. On the other hand, we prove that, under very mild assumptions on the initial condition, moderate solutions are much more regular than hinted at by the definition: in particular, they have finite energy for positive times. Finally, we move on to invariant measures. By focusing on the more regular solutions suggested by the regularity theorem just mentioned, we can improve the result in [14], showing that for moderate solutions there exists a unique invariant measure, supported on finite-energy configurations.
Before moving on, let us briefly give the general structure of the paper. We begin with the definition of the model and of proper solutions in Section 2, where we also prove the existence of such solutions. In Section 3 we turn to moderate solutions and their existence and uniqueness. After that, in Section 4, we take an apparent detour, considering a related continuous-time Markov chain, which will allow us to better characterize moderate solutions and show that they are quite regular. Finally, in Section 5, we show existence and uniqueness of invariant measures for our system.

Model and proper solutions
The model studied in this paper is the following linear and formally conservative system of interacting particles, introduced in [14]:

(2)  dX_n = k_{n−1} X_{n−1} ∘ dW_{n−1} − k_n X_{n+1} ∘ dW_n,  n ≥ 1,
     X_n(0) = X_n,  n ≥ 1,
     X_0(t) ≡ σ,  t ≥ 0.
Here, for n ≥ 0, the coefficients are k_n := λ^n for some λ > 1, the W_n are independent Brownian motions on a given filtered probability space (Ω, F, F_t, P), ∘ dW denotes Stratonovich stochastic integration, X is an F_0-measurable random initial condition, and σ ≥ 0 is a constant, deterministic forcing term. We can rewrite each differential equation in Itō form: by (2),

d[X_{n−1}, W_{n−1}] = −k_{n−1} X_n dt, n ≥ 2,  and  d[X_{n+1}, W_n] = k_n X_n dt, n ≥ 1,

so that (2) can be rewritten in the following way:

(3)  dX_1 = σ dW_0 − k_1 X_2 dW_1 − (1/2) k_1² X_1 dt,
     dX_n = k_{n−1} X_{n−1} dW_{n−1} − k_n X_{n+1} dW_n − (1/2)(k_{n−1}² + k_n²) X_n dt,  n ≥ 2.

We consider as initial condition X an F_0-measurable random variable, as mentioned, which will usually take values in some space H^s, for s ∈ R, where

H^s := { x = (x_n)_{n≥1} : ‖x‖²_{H^s} := Σ_{n≥1} k_n^{2s} x_n² < ∞ }.

Remark 1. These spaces have nice properties: they are separable Hilbert spaces. We also have that H^s ⊆ H^p for p < s, with ‖·‖_{H^p} ≤ ‖·‖_{H^s}. Notice, moreover, that H^0 = l². The l² norm is identified with the energy of the configuration, and the spaces H^s may be seen as corresponding to the usual function spaces (see [13] for a thorough explanation in a related model).
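To illustrate the formal energy conservation of the Stratonovich system when σ = 0, here is a minimal numerical sketch (an illustration, not from the paper; truncation level and parameters are arbitrary): a Galerkin truncation integrated with the implicit midpoint rule. Since each stochastic increment enters through a skew-symmetric matrix, the one-step map is orthogonal and the l² norm is conserved exactly.

```python
import numpy as np

def simulate_dyadic(N=8, lam=2.0, T=0.1, steps=400, seed=0):
    """Galerkin truncation of the Stratonovich dyadic system (sigma = 0),
    integrated with the implicit midpoint rule.  The one-step map is
    (I - B/2)^{-1} (I + B/2) with B skew-symmetric, hence orthogonal:
    the l^2 norm of X is conserved pathwise."""
    rng = np.random.default_rng(seed)
    k = lam ** np.arange(1, N)      # couplings k_1 .. k_{N-1} between X_n, X_{n+1}
    dt = T / steps
    X = np.zeros(N)
    X[0] = 1.0                      # deterministic initial condition, unit energy
    I = np.eye(N)
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=N - 1)   # increments of W_1..W_{N-1}
        B = np.zeros((N, N))
        for n in range(N - 1):
            # dX_n gets -k_n X_{n+1} dW_n and dX_{n+1} gets +k_n X_n dW_n (0-based)
            B[n, n + 1] = -k[n] * dW[n]
            B[n + 1, n] = k[n] * dW[n]
        X = np.linalg.solve(I - B / 2, (I + B / 2) @ X)
    return X

X = simulate_dyadic()
print(abs(np.sum(X**2) - 1.0))   # energy drift: zero up to round-off
```

The implicit midpoint rule is chosen precisely because it preserves quadratic invariants of linear systems; an explicit Euler–Maruyama scheme on the Itō form would only conserve the energy in expectation, and only approximately.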
We now introduce the definition of proper solutions, which will be our starting point towards the more general moderate solutions, proposed in [14], which here appear in Definition 8.
Definition 2. Given a filtered probability space (Ω, F , F t , P ), an F 0 -measurable random variable X, taking values in some H s , and a sequence of independent Brownian motions (W n ) n≥0 , we say that a process X = (X n (t)) n≥1,t∈[0,T ] is a componentwise solution with initial condition X, if it has adapted, continuous components and satisfies system (3).
If a componentwise solution X is in L 2 ([0, T ] × Ω; l 2 ) and X n ∈ L 4 ([0, T ] × Ω) for all n ≥ 1, we say that X is a proper solution.
The requirement of finite fourth moments, which appears in the definition of proper solution, is a technical assumption needed in Proposition 5, which shows that the second moments of a proper solution solve a closed system of equations (see also [9], [11]). Fourth moments also play a role in Theorem 3.
Let us now state and prove the following existence result for proper solutions with finite-energy initial conditions.
Theorem 3. For any initial condition X ∈ L⁴(Ω, F_0; l²), there exists at least one proper solution X ∈ L∞([0, T]; L⁴(Ω; l²)). Moreover the energy bound (4) holds and, if σ = 0, then with probability 1 the pathwise bound (5) holds.

Proof. For N ≥ 3, consider the Galerkin approximation of the original problem (3) in R^N, obtained by setting X_{N+1} ≡ 0 and keeping the first N equations, with initial conditions X_n^{(N)}(0) = X_n(0), n = 1, …, N. This system has a strong solution with finite fourth moments, which we consider embedded in l² for simplicity. Computing the evolution of the energy (and dropping the index (N) in the intermediate equations not to burden the notation too much) yields the estimates (6)-(8), which we can also write as (9). Consequently the sequence X^{(N)} is bounded in L∞([0, T]; L⁴(Ω; l²)), which is the dual of the space L¹([0, T]; L^{4/3}(Ω; l²)). Since the latter is separable (see for example [20] for details), the sequential Banach-Alaoglu theorem applies and there is a subsequence X^{(N_k)} which converges in the weak* topology to some limit X* as k → ∞. A fortiori, there is also weak convergence in L^p([0, T] × Ω; l²), for all 1 < p ≤ 4, and in particular for p = 2.
The components X (N ) n for n ≥ 1 belong to L 2 ([0, T ] × Ω; R) and are progressively measurable. The subset of progressively measurable processes is a linear subspace of L 2 which is complete, hence closed in the strong topology. Thus it is closed also in the weak topology. Since X (N k ) n converges to X * n in the weak topology of L 2 ([0, T ] × Ω; R), we conclude that X * n is progressively measurable.
Now we need to pass to the limit in (3). By (6), the processes X^{(N)}_n satisfy the integral form of (3) for N > n. The integral maps involved are linear and (strongly) continuous operators from L²([0, T] × Ω) to L²(Ω), hence they are weakly continuous, so we can pass to the limit (see Remark 4 below) and conclude that the processes X*_n also satisfy system (3). A posteriori, from these integral equations, it follows that there is a modification X of X* such that all its components are continuous, hence X is a componentwise solution in L²([0, T] × Ω; l²).
To conclude that X is a proper solution, we only need to check that the components are in L⁴. For all n ≥ 1 this follows from bound (9) and from the weak lower semicontinuity of the norm, i.e. the fact that in a Banach space, if a sequence converges weakly, then the norm of the limit is bounded by the limit inferior of the norms. Then, to prove the energy bound (4), we take a measurable set D ⊂ [0, T], integrate (8) on D and pass to the limit with the weak lower semicontinuity of the L²(D × Ω; l²) norm. By the arbitrariness of D, the bound (4) must hold for a.e. t. If it still failed for some t_0, then one could find ε > 0 and an integer m such that E Σ_{n≤m} X_n(t_0)² − ε would also exceed the bound, but by the continuity of the trajectories and the finiteness of the sum this would give a contradiction. Finally, to prove the last statement, we follow ideas from [9]. If σ = 0, by (7), we have ‖X^{(N)}(t)‖_{l²} ≤ ‖X^{(N)}(0)‖_{l²} ≤ ‖X(0)‖_{l²} a.s. on all of [0, T] and for all N.
We now integrate the square of this inequality on A := {(t, ω) : ‖X(t)‖_{l²} > ‖X(0)‖_{l²}} ⊂ [0, T] × Ω and pass to the limit with the weak lower semicontinuity of the L²(A; l²) norm, to get that A must be L ⊗ P-negligible. Then for all m ≥ 1 the set {(t, ω) : Σ_{n≤m} X_n(t)² > ‖X(0)‖²_{l²}} is negligible as well, hence, by continuity of trajectories, almost surely Σ_{n≤m} X_n(t)² ≤ ‖X(0)‖²_{l²} for all t ∈ [0, T], and we can conclude by intersecting over all m.
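The Galerkin energy computation used in the proof above can be sketched as follows (a reconstruction from the Stratonovich/Itō structure of the system, not the paper's own display; the truncation is assumed to set X_{N+1} ≡ 0):

```latex
% Sketch: energy balance for the Galerkin truncation.  Summing Ito's
% formula for X_n^2 over n <= N, all interaction terms telescope and
% only the forcing survives:
\begin{aligned}
d\|X^{(N)}(t)\|_{\ell^2}^2
  &= \sum_{n=1}^{N}\bigl(2X_n\,dX_n + d[X_n]\bigr)
   = 2\sigma X_1\,dW_0 + \sigma^2\,dt,\\
\mathbb{E}\,\|X^{(N)}(t)\|_{\ell^2}^2
  &= \mathbb{E}\,\|X^{(N)}(0)\|_{\ell^2}^2 + \sigma^2 t.
\end{aligned}
% For sigma = 0 the remaining martingale term vanishes too, so the
% l^2 norm is conserved pathwise.
```

This linear-in-time growth of the mean energy is exactly the kind of bound that survives the weak* limit, while for σ = 0 the pathwise conservation yields the a.s. inequality used above.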
Remark 4. Passing to the limit in the integral equations is standard but made somewhat tricky by the different spaces involved, so we expand on it here for the sake of completeness. First of all, we fix n ≥ 1. We start from the fact that, for N > n,

X^{(N)}_n(t) = X^{(N)}_n(0) + L_{n,t}(X^{(N)}),

where, for x ∈ L²([0, T] × Ω; l²),

L_{n,t}(x) := ∫_0^t k_{n−1} x_{n−1}(s) dW_{n−1}(s) − ∫_0^t k_n x_{n+1}(s) dW_n(s) − (1/2) ∫_0^t (k_{n−1}² + k_n²) x_n(s) ds.
Since the integral operators are weakly continuous, L_{n,t}(X^{(N)}) → L_{n,t}(X*) in weak-L²(Ω), for all t ∈ [0, T]. On the other hand X^{(N)}_n(0) → X*_n(0) a.s., since by construction it is eventually constant. Therefore, for every t, X^{(N)}_n(t) converges in weak-L²(Ω) to Z(t) := X*_n(0) + L_{n,t}(X*). It is now enough to strengthen the convergence to weak-L²([0, T] × Ω) to conclude that X*_n = Z and hence that it solves the integral equations.
To this end, take any φ ∈ L²([0, T] × Ω). By the Cauchy-Schwarz inequality and the uniform bound given by (8), the pointwise-in-time weak convergence can be integrated in time, and we are done.
We now take a first look at the second moments of a proper solution: they solve a linear system. We will see later on that such property can be used to get useful estimates on the solutions themselves.
Proposition 5. Let X be a proper solution with initial condition X(0) such that E[X_n(0)²] < ∞ for all n ≥ 1, and let u_n(t) := E[X_n(t)²]. Then u ∈ L¹([0, T]; l¹(R₊)) and it satisfies the following linear system:

(10)  u̇_1 = σ² − k_1²(u_1 − u_2),
      u̇_n = k_{n−1}²(u_{n−1} − u_n) − k_n²(u_n − u_{n+1}),  n ≥ 2,

with initial condition u(0).
Proof. It follows from (3), by applying the Itō formula to X_n², that dX_n² is the sum of a stochastic integral term and of the drift (k_{n−1}² X_{n−1}² − (k_{n−1}² + k_n²) X_n² + k_n² X_{n+1}²) dt (with the obvious modification for n = 1). By the definition of proper solution, X_n ∈ L⁴([0, T] × Ω) for all n, so the stochastic integrals above are true martingales, and taking expectations and differentiating we get that the second moments of the components satisfy system (10). The fact that u(0) is the prescribed initial condition is obvious, and u ∈ L¹([0, T]; l¹) follows from the definition of proper solution, since ∫_0^T Σ_n u_n(t) dt = ‖X‖²_{L²([0,T]×Ω; l²)} < ∞.

There is furthermore a unique constant solution to system (10) satisfied by the second moments of proper solutions. This constant solution has an explicit form, as shown in the following result.
Proposition 6. System (10) admits a unique constant solution s = (s_n)_{n≥1} in l¹, given explicitly by s_n = σ² λ^{−2n}/(1 − λ^{−2}), n ≥ 1.

Proof. Assume s = (s_n)_n is such a solution. We write a recursion for the differences of consecutive elements: the first equation gives s_1 − s_2 = σ² k_1^{−2}, and the others give k_n²(s_n − s_{n+1}) = k_{n−1}²(s_{n−1} − s_n). Recall now that k_n = λ^n = k_1^n, so s_n − s_{n+1} = λ^{−2}(s_{n−1} − s_n), and by recursion s_n − s_{n+1} = λ^{−2n} σ² for all n ≥ 1, yielding that for any m > n,

s_n − s_m = σ² Σ_{j=n}^{m−1} λ^{−2j}.

To obtain the explicit form of this solution, we use now the fact that s ∈ l¹, so s_m → 0, and for any n we get

s_n = σ² Σ_{j≥n} λ^{−2j} = σ² λ^{−2n}/(1 − λ^{−2}).

It is immediate to verify that this is in fact a solution in l¹.
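As a quick numerical sanity check (a sketch assuming system (10) has the tridiagonal form written above, with the forcing σ² entering only in the first equation), one can verify that s_n = σ² λ^{−2n}/(1 − λ^{−2}) annihilates the right-hand side:

```python
import numpy as np

lam, sigma = 2.0, 1.5
n = np.arange(1, 40)
k = lam ** n                                      # k_n = lambda^n
s = sigma**2 * lam ** (-2.0 * n) / (1 - lam**-2)  # candidate stationary solution

# Right-hand side of the second-moment system (assumed form, sigma > 0):
#   rhs_1 = sigma^2 - k_1^2 (s_1 - s_2)
#   rhs_n = k_{n-1}^2 (s_{n-1} - s_n) - k_n^2 (s_n - s_{n+1}),  2 <= n <= 38
rhs = np.empty(len(n) - 1)
rhs[0] = sigma**2 - k[0] ** 2 * (s[0] - s[1])
rhs[1:] = k[:-2] ** 2 * (s[:-2] - s[1:-1]) - k[1:-1] ** 2 * (s[1:-1] - s[2:])

print(np.max(np.abs(rhs)))                     # ~ 0 (machine precision)
assert s[0] > s[1] and np.all(np.diff(s) < 0)  # s is decreasing, summable
```

The cancellation works because k_{n−1}²(s_{n−1} − s_n) = λ^{2(n−1)} · σ²λ^{−2(n−1)} = σ², independently of n, so the flux through every link is the same constant σ².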

Remark 7. We will prove uniqueness of the solutions of this system in Theorem 15, in Section 4. Then Proposition 6 will tell us that in system (3), if the initial condition is chosen with second moments of the form just shown, then proper solutions have higher regularity than their definition requires, living in L∞([0, T]; L²(Ω; H^{1−})), and their components have constant second moments. This suggests the existence of invariant measures supported on configurations with H^{1−} regularity, which will in fact be found in Section 5, at the end of the paper.

Moderate solutions
The definition of proper solution given in Definition 2 is in some sense too strong, in particular for the assumptions on X in Theorem 3, so we would like to consider a more general class of solutions. Consequently, we present here the concept of moderate solutions, as introduced in [14] to identify a natural space to prove existence and uniqueness in, with much weaker requirements on initial conditions. Later, in Theorem 25, we will show that moderate solutions are actually almost as regular as proper solutions.
Some of the following results are similar to those in [14], but we include full proofs here nevertheless, given that some details differ, that we have an additional forcing term, and for overall completeness.
Definition 8. We say that a componentwise solution X is a moderate solution with initial condition X(0) if there exists a sequence (X^{(N)})_{N≥1} of proper solutions, with initial conditions X^{(N)}(0) ∈ L⁴(Ω; l²), such that X^{(N)}(0) → X(0) in L²(Ω; H^{−1}) and X^{(N)} → X in L²([0, T] × Ω; H^{−1}). If a moderate solution is in L²([0, T] × Ω; l²), we call it a finite energy (moderate) solution.
Remark 9. Clearly all proper solutions with initial conditions in L²(Ω, H^{−1}) are finite energy solutions, as can be seen by taking the constant sequence of approximants.
The key result to prove existence and uniqueness of moderate solutions is the following lemma, which has a statement similar to Lemma 2.7 from [14] and shares the same proof strategy.

Lemma 10. Let X be a proper solution of the model with σ = 0 whose second moments satisfy system (10). Then, for all t ∈ [0, T], E‖X(t)‖²_{H^{−1}} ≤ E‖X(0)‖²_{H^{−1}}.

Proof. Recall that, by definition, for all t ≥ 0, E‖X(t)‖²_{H^{−1}} = Σ_{n≥1} k_n^{−2} u_n(t), with u_n(t) = E[X_n(t)²]. Summing the equations of system (10) with σ = 0, the terms telescope and for every N ≥ 1 we obtain

Σ_{n≤N} k_n^{−2} u_n(t) ≤ Σ_{n≤N} k_n^{−2} u_n(0) + ∫_0^t u_{N+1}(s) ds.

Passing to the limit as N → ∞, the integral converges to zero, since X ∈ L²([0, T] × Ω; l²), concluding the proof.
We can now prove the uniqueness result for moderate solutions.
Theorem 11. If X and X̃ are two moderate solutions with the same initial condition X(0) ∈ L²(Ω; H^{−1}), then X and X̃ are indistinguishable.

Proof. By Definition 8, it is easy to see that X − X̃ is a moderate solution defined on [0, T] for the model with σ = 0 and with zero initial condition, so without loss of generality we assume σ = 0, X(0) = 0 and X̃ = 0. Let (X^{(N)})_N be approximants as in Definition 8. For all N ≥ 1, X^{(N)} is a proper solution of the model with σ = 0, so by Proposition 5 we can apply Lemma 10, yielding that for all t ∈ [0, T], E‖X^{(N)}(t)‖²_{H^{−1}} ≤ E‖X^{(N)}(0)‖²_{H^{−1}}. Taking the limit for N → ∞, we have that the L²([0, T] × Ω; H^{−1})-norm of X is zero. Finally, by the continuity of trajectories, it is easy to conclude that X(t) = 0 for all t, almost surely.
Corollary 12. Since proper solutions are moderate solutions, uniqueness holds in the class of proper solutions too, whatever the initial condition. This means in particular that the inequalities (4) and (5) hold in general for proper solutions with initial conditions in L 4 (Ω; l 2 ) and that the sequence of approximants (X (N ) ) N ≥1 in Definition 8 is uniquely determined by their initial conditions.
To conclude this section, we now state and prove the existence result for moderate solutions, using once again Lemma 10.
Theorem 13. For all X(0) ∈ L²(Ω; H^{−1}) there exists a moderate solution X with that initial condition, satisfying the H^{−1} bound (11). Moreover the approximants (X^{(N)})_{N≥1} of X can be taken as the unique proper solutions with initial conditions

(12)  X^{(N)}_n(0) := X_n(0) 1_{{n ≤ N}} 1_{{|X_n(0)| ≤ N}},  n ≥ 1.

Proof. By virtue of Theorem 3 and Corollary 12, there exists a unique proper solution Z of (3) with zero initial condition. Below we will exhibit a moderate solution of the model with σ = 0 and initial condition X(0); then, by linearity, adding Z to it will give the required moderate solution.
We can thus assume σ = 0. For N ≥ 1, let X^{(N)}(0) ∈ L∞(Ω; l²) be defined as in (12). Then, by Theorem 3 and Corollary 12, there exists a unique proper solution X^{(N)} with initial condition X^{(N)}(0). In view of Definition 8, we will show that (X^{(N)})_{N≥1} is a Cauchy sequence in L²([0, T] × Ω; H^{−1}). For all M, N ≥ 1, the difference X^{(M)} − X^{(N)} is a proper solution, hence by Proposition 5 we can apply Lemma 10, yielding that

(13)  E‖X^{(M)}(t) − X^{(N)}(t)‖²_{H^{−1}} ≤ E‖X^{(M)}(0) − X^{(N)}(0)‖²_{H^{−1}},  t ∈ [0, T].

Thus, if we can prove the convergence of the sequence of initial conditions in L²(Ω; H^{−1}), we also get the Cauchy property for the sequence (X^{(N)})_{N≥1}. To this end, consider the measurable functions on Ω × N defined by ψ^{(N)}(ω, n) := k_n^{−1}(X_n(0)(ω) − X^{(N)}_n(0)(ω)). Clearly ψ^{(N)} → 0 pointwise as N → ∞, and also in L²(Ω × N) by dominated convergence. On the other hand we have ‖ψ^{(N)}‖²_{L²(Ω×N)} = E‖X(0) − X^{(N)}(0)‖²_{H^{−1}}. Hence X^{(N)}(0) → X(0) in L²(Ω; H^{−1}) as N → ∞; then by (13) the sequence (X^{(N)})_{N≥1} has the Cauchy property and there exists the limit X ∈ L²([0, T] × Ω; H^{−1}).
To conclude the proof of the existence statement, we need to show that X admits a modification which is a componentwise solution, that is, X has a modification with continuous adapted trajectories, which solves system (3). This is completely standard and a simpler version of the argument in the proof of Theorem 3, with strong L 2 convergence in place of weak L 2 convergence.
To prove the bound (11) on the H^{−1} norm, we can notice that Lemma 10 applies to the approximants X^{(N)} and, taking the limit, the same inequality holds for X. It is then enough to recall that when σ > 0 we need to take the auxiliary proper solution Z into account, for which (4) applies; the conclusion follows since the H^{−1} norm is controlled by the l² one.

Regularity of moderate solutions
Now that we have introduced moderate solutions and shown their existence and uniqueness, let us go back to the second moments' system (10) and delve deeper into it. We can associate a Markov chain with our system. This is not surprising, as is the case for other models in the dyadic family (see for example [8,9,11,14]). This associated process will allow us to prove sharper estimates on the norm of solutions, leading us to Theorem 25 at the end of this section, which states that moderate solutions are, in a sense, much more regular than one would expect from the definition.
Let Π be the infinite matrix defined by

Π_{1,1} := −k_1²,  Π_{n,n} := −(k_{n−1}² + k_n²) for n ≥ 2,  Π_{n,n+1} = Π_{n+1,n} := k_n² for n ≥ 1,

and Π_{i,j} := 0 otherwise; in an equivalent way, Π is the symmetric tridiagonal matrix for which system (10) with σ = 0 reads u̇ = uΠ. With this definition, Π is the stable and conservative q-matrix associated to a continuous-time Markov chain on the positive integers (see [4] for a comprehensive discussion). The corresponding Kolmogorov equations are the forward equations ḟ(t) = f(t)Π and the backward equations ḟ(t) = Πf(t). Since Π is symmetric, both the forward and the backward equations are formally equivalent to system (10) with σ = 0. From now on we will refer in particular to the forward equations, because we will be studying the second moments of the finite energy solutions of the original system, which belong to the class L∞([0, T]; l¹(R₊)).
The forward equations are well posed. In particular, it is a general fact (see for example Theorem 2.2 in [4] and references therein) that, for a q-matrix such as Π, there exists a transition function f = (f_{i,j}(t))_{i,j≥1,t≥0} such that, for all i ≥ 1, f_{i,·} is a solution of the forward equations with initial condition δ_{i,·}, and, for all j ≥ 1, f_{·,j} is a solution of the backward equations with initial condition δ_{·,j}. This is called the minimal transition function associated with the q-matrix Π, and it has some nice properties, for example Σ_j f_{i,j}(t) ≤ 1 (which is used in the proof of Theorem 15 below). Its uniqueness depends on the form and properties of Π; in our case classical results (see [4] again) show that there is uniqueness in the class of solutions of the forward equations, while there are infinitely many solutions in the class of solutions of the backward equations.
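The mass loss of the minimal transition function can be seen numerically (an illustrative sketch, not from the paper; truncation level and parameters are arbitrary): truncating Π to its top-left block makes mass reaching the boundary leak out, which mimics the minimal solution. Since the block is symmetric, the matrix exponential can be computed by eigendecomposition.

```python
import numpy as np

lam, N = 2.0, 12
k2 = lam ** (2 * np.arange(1, N + 1))      # k_n^2 = lambda^{2n}, n = 1..N

# Top-left N x N block of the q-matrix Pi: symmetric tridiagonal,
# Pi[0,0] = -k_1^2, Pi[n,n] = -(k_n^2 + k_{n+1}^2), off-diagonals k_n^2.
Pi = np.zeros((N, N))
Pi[0, 0] = -k2[0]
for i in range(1, N):
    Pi[i, i] = -(k2[i - 1] + k2[i])
    Pi[i, i - 1] = Pi[i - 1, i] = k2[i - 1]

# Pi is symmetric, so exp(t Pi) is computed via eigendecomposition; the
# first row approximates the minimal transition function f_{1,.}(t).
w, V = np.linalg.eigh(Pi)
def f_row1(t):
    return (V * np.exp(w * t)) @ V[0]      # e_1^T exp(t Pi): start from state 1

mass = [f_row1(t).sum() for t in (0.1, 0.5, 1.0)]
print(mass)   # strictly between 0 and 1 and decreasing: mass escapes in finite time
```

The strictly decreasing total mass is the finite-dimensional shadow of the explosion of the chain discussed below: with rates growing like λ^{2n}, the process reaches "infinity" in finite time with positive probability.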
Nonetheless, we need a statement of uniqueness in a larger class, because we consider l¹(R) instead of l¹(R₊), and L¹([0, T]; l¹) instead of L∞(R₊; l¹).

Lemma 14. Let ū ∈ l¹ and let f be the minimal transition function of Π. Then

u_j(t) := Σ_{i≥1} ū_i f_{i,j}(t),  j ≥ 1,

defines a solution u of the forward equations in the class L∞(R₊; l¹) with initial condition ū.

Proof. Since f is a transition function, it is non-negative and Σ_{j≥1} f_{i,j}(t) ≤ 1 for all i ≥ 1 and all t ≥ 0, so in particular ‖u(t)‖_{l¹} ≤ ‖ū‖_{l¹} for all t ≥ 0. To conclude, we must check that differentiation commutes with the sum over i, which follows from the bounds above by dominated convergence.

The following theorem mimics results in [4], but requires a new proof nevertheless, as already mentioned, since we are considering different spaces.
Theorem 15. For all T > 0 there is uniqueness of the solution for the forward equations in the class L 1 ([0, T ]; l 1 (R)), for any initial condition. The same holds for system (10), that is, when σ > 0.
Proof. By linearity, suppose by contradiction that u is a nonzero solution in L¹([0, T]; l¹) with null initial condition and σ = 0 (this covers both cases). We start by constructing another solution ũ defined on the whole of [0, ∞). Let τ ≤ T be a time such that u(τ) ≠ 0 but ‖u(τ)‖_{l¹} < ∞. Let ũ = u on [0, τ] and extend it after τ with the minimal transition function f, as in Lemma 14. By Lemma 14, ũ is a solution of the forward equations in the class L¹([0, τ]; l¹) ∩ L∞([τ, ∞); l¹), and in particular we can define the residuals r = (r_i(λ))_{i≥1,λ>0} as

r_i(λ) := ∫_0^∞ e^{−λt} ũ_i(t) dt.

Then, by integrating by parts and using ũ(0) = 0, we get the algebraic relation λr(λ) = r(λ)Π, that is

λ r_1 = k_1²(r_2 − r_1),
λ r_i = k_{i−1}² r_{i−1} − (k_{i−1}² + k_i²) r_i + k_i² r_{i+1},  i ≥ 2.

These can be solved recursively: either r_i = 0 for all i ≥ 1, or r_i/r_1 > 1 for all i ≥ 2. To quickly see this, one can prove by induction on i that r_i/r_1 > r_{i−1}/r_1 ≥ 1: the base case for i = 2 comes from the first equation, while the inductive step comes from the second one, since

k_i²(r_{i+1} − r_i) = λ r_i + k_{i−1}²(r_i − r_{i−1}).

Since r(λ) ∈ l¹, the second alternative is impossible, so r(λ) = 0 for all λ > 0, yielding ũ = 0 and hence a contradiction.
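The recursion for the residuals can be checked numerically (a sketch assuming the component form λr_1 = k_1²(r_2 − r_1), λr_i = k_{i−1}²r_{i−1} − (k_{i−1}² + k_i²)r_i + k_i²r_{i+1}): normalising r_1 = 1, the sequence is strictly increasing, so no nonzero solution can lie in l¹.

```python
import numpy as np

lam = 2.0       # model parameter lambda
res = 1.0       # resolvent parameter (any positive value works)
k2 = lambda n: lam ** (2 * n)       # k_n^2

# Solve res * r = r Pi recursively with the normalisation r_1 = 1:
#   r_2 = r_1 * (1 + res / k_1^2)
#   k_i^2 r_{i+1} = (res + k_{i-1}^2 + k_i^2) r_i - k_{i-1}^2 r_{i-1}
r = [1.0, 1.0 + res / k2(1)]
for i in range(2, 15):
    r.append(((res + k2(i - 1) + k2(i)) * r[i - 1] - k2(i - 1) * r[i - 2]) / k2(i))
r = np.array(r)

print(np.all(np.diff(r) > 0))   # True: r_i strictly increasing, hence r not in l^1
```

The increments satisfy k_i²(r_{i+1} − r_i) = res·r_i + k_{i−1}²(r_i − r_{i−1}) > 0, which is exactly the inductive step of the proof.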
Remark 16. With this proof, l¹ is the best space we can get: if we relax to l^{1−}, we do not get the contradiction, since the r_i's might not explode, and one can actually exhibit nontrivial solutions.

We are now able to characterize the evolution in time of the second moments as a transformation through Π of the second moments at time 0.

Proposition 17. Let σ = 0 and let f be the minimal transition function of Π. If X is the moderate solution with initial condition X(0) ∈ L²(Ω, H^{−1}), then for all j ≥ 1 and t ∈ [0, T],

(14)  E[X_j(t)²] = Σ_{i≥1} ū_i f_{i,j}(t) =: v_j(t),  where ū_i := E[X_i(0)²].

Proof. As a first step, we prove the statement in the case that X(0) ∈ L²(Ω; l²) and X is a proper solution. In this case ū ∈ l¹, Lemma 14 applies, and so v is a solution of the forward equations in L∞(R₊; l¹). On the other hand, by Proposition 5 and since σ = 0, u is a solution of the forward equations in L¹([0, T]; l¹). Both have initial condition ū, so by Theorem 15, u = v.

We turn to the general case of X moderate solution. For N ≥ 1, let X^{(N)}(0) and X^{(N)} be approximating sequences; we can take the initial conditions X^{(N)}(0) in the form presented in (12) without loss of generality, given that the moderate solution is unique by Theorem 11. Let u^{(N)}_j(t) := E[X^{(N)}_j(t)²] and ū^{(N)}_i := E[X^{(N)}_i(0)²]. Notice that X^{(N)}(0) ∈ L∞(Ω; l²), so, by the first step, (14) holds for the approximants X^{(N)}. Taking the limit as N → ∞, ū^{(N)}_i increases monotonically to ū_i for all i ≥ 1, hence for all t ∈ [0, T] the right-hand side converges monotonically to v_j(t). As for the left-hand side, since X^{(N)} → X in L²([0, T] × Ω; H^{−1}), u^{(N)}_j → u_j in L¹([0, T]). By the uniqueness of the limit (in L¹ and pointwise monotone), the identity in (14) is proved, as well as the finiteness, for a.e. t. Since u_j is bounded uniformly on [0, T] by (11) and v^{(N)}_j ≤ u_j, the result extends to all t.
We can now link back to moderate solutions and their connection with the minimal transition function.
Proposition 18. The second moments of a moderate solution always solve system (10) componentwise.
Proof. Let X be a moderate solution with approximating sequence X^{(N)}, N ≥ 1, and let u^{(N)}_j(t) := E[X^{(N)}_j(t)²]. By Proposition 5, each u^{(N)} satisfies the integral form of system (10). Since u^{(N)}_j → u_j a.e. and in L¹, we can pass to the limit in the left-hand side and inside each of the integrals at the right-hand side, yielding that the identity holds for u_j for a.e. t. Now the continuity of the trajectories of X_j, together with the bound given by equation (11), allows us to conclude that u_j is continuous in t. Since the right-hand side is also continuous, we can remove the "a.e." and the statement is proved.

Proposition 17 is the key tool to control the flow of energy between components for moderate solutions, and it ensures that nothing different happens with respect to proper solutions. Estimates on the minimal transition function f will now allow us to compute different norms and get regularity results for the moderate solutions.

4.1. Transition function estimates. Let (S, S, P) be a probability space with a continuous-time Markov chain (ξ_t)_{t≥0} on the positive integers, with the property of being the minimal process associated with Π, that is, P(ξ_t = j | ξ_0 = i) = f_{i,j}(t), where f is the minimal transition function. We will not fix the law of ξ_0, which will not be relevant, as we will always be conditioning on this random variable. The arguments in the following lemmas are very similar to the ones in Lemmas 10 to 14 in [8], but are restated and proven again here, with the right generality for this paper and compatible notation.
Lemma 19. Let f be the minimal transition function and let T_j be the total time spent by ξ in state j, for j ≥ 1. Let moreover E_i denote the expectation with respect to P_i := P(· | ξ_0 = i). Then, for all j ≥ 1,

∫_0^∞ f_{1,j}(t) dt = E_1[T_j] = λ^{−2j}/(1 − λ^{−2}).

Proof. The first equality is trivial, since by Fubini's theorem both terms are equal to E_1[∫_0^∞ 1_{{ξ_t = j}} dt]. We turn to the second one. Let (τ_n)_{n≥0} be the jumping times of ξ, that is τ_0 := 0 and τ_{n+1} := inf{t > τ_n : ξ_t ≠ ξ_{τ_n}}, n ≥ 0.
Let (ζ_n)_{n≥0} denote the discrete-time Markov chain embedded in ξ, that is, ζ_n := ξ_{τ_n} for n ≥ 0. For every state j ≥ 1, let V_j denote the total number of visits to j, V_j := #{n ≥ 0 : ζ_n = j}. Then, by the strong Markov property and conditioning on the initial state ξ_0, V_j is a mixture of a Dirac δ_0 and a geometric random variable; under P_1 the Dirac part vanishes, where we used the fact that the Markov chain is nearest-neighbour and that π_{j−1,j} = 0, with π_{i,j} denoting the probability, starting from i, of never visiting j. For each visit of ξ to site j, the time spent there is an exponential random variable with rate −Π_{j,j} (recall that Π has a negative diagonal). By the strong Markov property again, these variables are independent among themselves and of V_j. Consequently, we only have to compute E_1[V_j]. Notice that for j ≥ 2 the probability of jumping from j to j + 1 is θ := k_j²/(k_{j−1}² + k_j²) = λ²/(1 + λ²), independent of j, while for j = 1 the same quantity is 1. Then ζ is a simple random walk on the positive integers, reflected at 1, with positive drift 2θ − 1. It is now an exercise to prove that E_1[V_j] = (1 + λ²)/(λ² − 1) for j ≥ 2, while E_1[V_1] = λ²/(λ² − 1). Substituting, we can conclude. When the chain starts from i, all states j ≥ i are visited with probability one, and the times T_j have exponential distribution. In particular the following holds.
Corollary 20. For j ≥ 1 the law of T_j, conditional on ξ_0 = 1, is exponential with mean λ^{−2j}/(1 − λ^{−2}).

The minimal process ξ_t is uniquely defined up to the time of the first infinity (also known as the explosion time), τ := Σ_{j≥1} T_j, and after that one can assume that it rests in an absorbing boundary state b outside the usual state space of the positive integers. To estimate the total energy of a solution it will be important to deal with Σ_{j≥1} f_{i,j}(t) ≤ 1, which will be strictly less than 1 when there is a positive probability that the chain has reached b.
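The mean occupation times can be checked by a quick Monte Carlo (an illustrative sketch with arbitrary parameters, not from the paper): simulating the jump chain with right rate k_n² and left rate k_{n−1}², the average total time spent in state j, starting from 1, should match λ^{−2j}/(1 − λ^{−2}).

```python
import numpy as np

def mean_time_in_state(j, lam=2.0, n_paths=10000, right_cutoff=9, seed=0):
    """Monte Carlo estimate of E_1[T_j]: total time the chain spends in
    state j, started from 1.  From n >= 2 the chain jumps right with rate
    k_n^2 = lam^{2n} and left with rate k_{n-1}^2; from 1 it only jumps
    right.  We stop once the state passes right_cutoff, after which a
    return to j is extremely unlikely (strong right drift)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        state, t_j = 1, 0.0
        while state <= right_cutoff:
            right, left = lam ** (2 * state), lam ** (2 * (state - 1))
            rate = right if state == 1 else right + left
            hold = rng.exponential(1.0 / rate)     # holding time in current state
            if state == j:
                t_j += hold
            if state == 1 or rng.random() < right / rate:
                state += 1
            else:
                state -= 1
        total += t_j
    return total / n_paths

est = mean_time_in_state(1)
exact = 2.0 ** -2 / (1 - 2.0 ** -2)    # lambda^{-2j}/(1 - lambda^{-2}) = 1/3 for j = 1
print(est, exact)
```

The cutoff introduces a bias of order λ^{−2·right_cutoff}, far below the Monte Carlo error for these parameters.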
In the following lemma we show that it is enough to prove the strict inequality P(τ > t̄) < 1 for a single time t̄ > 0: it then holds for all positive times.
Lemma 21. Suppose P(τ > t̄) < 1 for some t̄ > 0. Then P(τ > t) < 1 for all t > 0.

Proof. First of all, we notice that P(τ > t) ≤ P(τ > s) for all 0 < s ≤ t, as we can also read from the Chapman-Kolmogorov equations. This tells us that the map t ↦ P(τ > t) is non-increasing, and in particular it is always less than 1 for t ≥ t̄. Now suppose that there exists a t < t̄ such that P(τ > t) = 1. Then, for any 0 < s < t, conditioning on the state at time t − s, we can write P(τ > t) as an average of probabilities of the form P(τ > s | ξ_0 = n); but each such term is still a probability, so all the terms must be equal to 1. In particular, this means that P(τ > s | ξ_0 = n) = 1 for all n.
Finally, we consider longer and longer time horizons: by the Markov property we can keep repeating the argument, so that P(τ > t) = 1 would propagate to arbitrarily large times, contradicting P(τ > t̄) < 1. Hence, for all t < t̄ as well, P(τ > t) < 1.
The following result tells us that, by considering processes conditioned to a starting point in 1, we actually took the worst-case scenario. The proof is a standard exercise in continuous-time Markov chains.
By combining Lemmas 21, 22 and 23, we have that

(16)  Σ_{j≥1} f_{i,j}(t) < 1,  for all t > 0 and all i ≥ 1.

4.2. Moderate solutions are finite energy. We now get to the first application of the results of the previous subsection: we use them in the following proposition, to show anomalous dissipation of the average energy for moderate solutions starting from finite-energy initial conditions. The latter hypothesis will be dropped afterwards.
Proposition 24. Let X be a moderate solution with initial condition X(0) ∈ L²(Ω; l²). Then E‖X(t)‖²_{l²} ≤ E‖X(0)‖²_{l²} + σ²t for all t ∈ [0, T]; moreover, if σ = 0, then E‖X(t)‖²_{l²} < E‖X(0)‖²_{l²} for every t > 0.

Proof. We start from the second statement, so let σ = 0 and fix t > 0. We can rewrite the energy at time t, thanks to Proposition 17 and equations (14) and (15), as

E‖X(t)‖²_{l²} = Σ_j v_j(t) = Σ_i E[X_i(0)²] Σ_j f_{i,j}(t).

Then we can exploit the strict inequalities (16) for all i ≥ 1 to get the result.
Turning to the first statement, by uniqueness and linearity, we can decompose X as the sum of a proper solution with zero initial condition and a moderate solution with zero forcing. Applying what we proved above, bound (4) and the triangle inequality yields the result.
The next result states formally that moderate solutions are "almost" finite energy solutions, in the sense that, whatever the initial condition, they jump into l² immediately (in fact, they jump into H^{1−}).
The points i, ii and iii are immediately verified by substituting suitable values of α and β. As for iv, it is a trivial consequence of the previous ones applied to subsequent time intervals.
This important result has two interesting consequences. First, we can recover a similar bound on the L² norm even in the forced case (i.e. σ > 0); however, in this case, the bound depends on T, too. Second, we can show that the evolution of the L² norm is continuous, as soon as t > 0.
Corollary 26. The assumption that σ = 0 can be dropped from Theorem 25, with the only difference that the constant in the bound depends on T as well.

Proof. By linearity and uniqueness of moderate solutions, we decompose the solution as X = Y + Z, where Z has zero forcing and Y has constant second moments of its components. To this end, let (s_n)_n be as in the statement of Proposition 6, and let Y be the unique proper solution with forcing σ and deterministic initial condition Y(0) defined by Y_n(0) := √s_n. By Proposition 5, the second moments of Y satisfy system (10), so by uniqueness and Proposition 6 the second moments of the components of Y are constant, and thus Y(t) ∈ L²(Ω; H^s) for all t ≥ 0 and all s < 1. By hypothesis X(0) ∈ L²(Ω; H^α). Then Z(0) := X(0) − Y(0) ∈ L²(Ω; H^r) for all r ≤ α, r < 1. Let Z be the moderate solution with no forcing and initial condition Z(0), to which Theorem 25 applies. Then X = Y + Z has the same regularity as Z, and the stated bound follows with β as in that theorem.

Corollary 27. For any moderate solution X and for all s < 1, the L²(Ω; H^s)-norm of X is finite and continuous on (0, T]. In particular, X(t) ∈ l² a.s. for all positive t.
Proof. Fix s < 1, and let ‖·‖ denote the L²(Ω; H^s)-norm. By the last statement of Theorem 25, we know that ‖X(t)‖ is finite for a.e. t ∈ (0, T]. Let (t_n)_n be a sequence of such times converging to some t, and suppose by contradiction that lim_n ‖X(t_n)‖ does not exist or is different from ‖X(t)‖, which may or may not be finite. Then, without loss of generality, we can deduce that there exists a subsequence (n_k)_k and a real number a such that lim sup_k ‖X(t_{n_k})‖ < a < ‖X(t)‖.
Then there exists j_0 such that Σ_{j≤j_0} k_j^{2s} E[X_j(t)²] > a². The left-hand side is a finite sum of second moments, hence it is continuous in t by Proposition 17, yielding that for k large also ‖X(t_{n_k})‖ > a, which is a contradiction.

Invariant measure
This final section deals with invariant measures for the transition semigroup associated with moderate solutions. We prove that there exists one with support on H 1 − ⊂ l 2 which is the unique one among those with support on H −1 .
Let (P_t)_{t≥0} be the transition semigroup associated to moderate solutions, meaning that for all measurable A ⊂ H^{−1}, x ∈ H^{−1}, ϕ ∈ C_b(H^{−1}) and t ≥ 0, we define

P_t(x, A) := P(X^x(t) ∈ A)  and  P_t ϕ(x) := E[ϕ(X^x(t))],

where X^x is the moderate solution with deterministic initial condition x. (Notice that we are not specifying T: the solution can be taken on any interval [0, T], with T ≥ t, and the semigroup is well defined thanks to Theorem 11.)

Theorem 28. The semigroup P_t associated to moderate solutions admits an invariant measure supported on l².
Proof. By Corollary 27, P t (x, l 2 ) = 1 for all t > 0 and x ∈ H −1 , so it makes sense to consider the semigroup restricted to l 2 .
To prove existence, we rely on Corollary 3.1.2 in [17], which states that an invariant measure exists for a Feller Markov semigroup P_t under the assumption that, for some probability measure ν and some sequence of times T_n ↑ ∞, the sequence (R*_{T_n} ν)_{n≥1} is tight, where R*_t is the operator on probability measures associated to P_t, defined by

R*_t ν(A) := ∫_{l²} P_t(x, A) ν(dx)

for every probability measure ν on l² and measurable set A of l².
Let us start with the tightness. Choose ν = δ_0 and let β ∈ (0, 1). The compact set to verify tightness will be the closed ball of radius r in the H^β norm, B(r) := {x ∈ l² : ‖x‖_{H^β} ≤ r}, which is compact under the l² norm. Then, for all T > 0, by the Chebyshev inequality, R*_T ν(B(r)^c) = P(‖X^0(T)‖_{H^β} > r) ≤ r^{−2} E‖X^0(T)‖²_{H^β}. Now Corollary 26 applies, and by (18) there exists a constant C such that R*_T ν(B(r)) ≥ 1 − Cr^{−2} for all T and all r, which proves the tightness.
Let us now move on to the Feller property: to show that it holds, we follow an argument similar to the one hinted at in [14]. For x ∈ l² and σ ∈ R, let X^{x,σ} denote the unique moderate solution with deterministic initial condition x and forcing σ. Then, if x and y are two points in l², we have

(19)  E‖X^{x,σ}(t) − X^{y,σ}(t)‖²_{l²} ≤ ‖x − y‖²_{l²},

where we used uniqueness, linearity (whence the forcing terms cancel out) and Proposition 24. Now consider a sequence x_n → x in l². By equation (19), X^{x_n,σ}(t) → X^{x,σ}(t) in L²(Ω; l²), hence in probability and in law, meaning that for all ϕ ∈ C_b(l²):

P_t ϕ(x_n) = E[ϕ(X^{x_n,σ}(t))] → E[ϕ(X^{x,σ}(t))] = P_t ϕ(x),

which gives us the continuity of the semigroup P_t.
Remark 29. Theorem 28 can be improved to H^{1−} regularity. In fact, again by Corollary 27, P_t(x, H^s) = 1 for all s < 1, t > 0 and x ∈ H^{−1}, so the invariant measure actually has support on H^{1−} := ∩_{s<1} H^s.
To prove the uniqueness of the invariant measure, we use the strategy shown in [5]: we formulate the problem as a Kantorovich mass-transport problem (see for example [2,3,22]) and derive a contradiction from the assumption that two different invariant measures exist.
Theorem 30. There is a unique invariant measure supported on H −1 for the semigroup associated with moderate solutions.
Proof. Let us assume, by contradiction, that there are two different invariant measures µ 1 and µ 2 . We can define the set Γ = Γ(µ 1 , µ 2 ) of admissible transport plans γ from µ 1 to µ 2 , that is the set of joint measures which have the µ i as marginals.
We can also define the functional Φ on Γ in the following way: for γ ∈ Γ,

Φ(γ) := ∫_{l²×l²} ‖x − y‖²_{l²} dγ(x, y),

that is, we take as cost function c(x, y) = ‖x − y‖²_{l²}. We claim that there exists an optimal transport plan for the Kantorovich problem, that is, a γ_0 ∈ Γ such that Φ(γ_0) ≤ Φ(γ) for all γ ∈ Γ.
We are left with the claim. By Theorem 1.5 in [2], it is enough to check that c is lower semicontinuous and bounded from below. To prove the former, consider converging sequences x^{(n)} → x and y^{(n)} → y in H^{−1}. If x − y ∉ l², then ‖x^{(n)} − y^{(n)}‖_{l²} → +∞: otherwise there would be a subsequence bounded in l², and since closed l²-balls are closed in H^{−1}, it would converge to a point of l², against x − y ∉ l². On the other hand, if x − y ∈ l², then for all ε > 0 there exists k such that Σ_{j>k} (x_j − y_j)² < ε. Convergence in H^{−1} implies convergence of components, so there exists n_0 such that for n ≥ n_0 the first k components of x^{(n)} − y^{(n)} are arbitrarily close to those of x − y, yielding that c(x^{(n)}, y^{(n)}) ≥ c(x, y) − ε eventually.
Remark 31. This result only applies to invariant measures for moderate solutions. It is however possible to construct wilder componentwise solutions that are stationary, such as the Gaussian one discussed in [14].