Evolution of concentration under lattice spin-flip dynamics

We consider spin-flip dynamics of configurations in $\{-1,1\}^{\mathbb{Z}^d}$, and study the time evolution of concentration inequalities. For "weakly interacting" dynamics we show that the Gaussian concentration bound is conserved in the course of time and it is satisfied by the unique stationary Gibbs measure. Next we show that, for a general class of translation-invariant spin-flip dynamics, it is impossible to evolve in finite time from a low-temperature Gibbs state towards a measure satisfying the Gaussian concentration bound. Finally, we consider the time evolution of the weaker uniform variance bound, and show that this bound is conserved under a general class of spin-flip dynamics.


Introduction
Concentration inequalities are important tools to understand the fluctuation properties of general observables $f(\sigma_1, \ldots, \sigma_n)$ which are functions of $n$ random variables $(\sigma_1, \ldots, \sigma_n)$, where $n$ is large but finite. For bounded random variables which are independent (or weakly dependent), one can typically obtain so-called Gaussian concentration bounds for the fluctuations of $f(\sigma_1, \ldots, \sigma_n)$ about its expectation. In the context of lattice spin systems, one has, e.g., $\sigma_i \in \{-1,+1\}$ with $i \in [-n,n]^d \cap \mathbb{Z}^d$, and these random variables are distributed according to a Gibbs measure. The "weak dependence" between them means for instance that we are in the Dobrushin uniqueness regime, which is the case at "high enough" temperature for every finite-range potential, or at low temperature with a "high enough" external magnetic field. In this case a Gaussian concentration bound holds [9]. In contrast, regimes of non-uniqueness are known in which only weaker concentration bounds, such as moment bounds, hold [5]. In [6] it is shown that the Gaussian concentration bound implies uniqueness of equilibrium states (translation-invariant Gibbs measures). In [2], many applications of these concentration bounds are given (speed of convergence of the empirical measure in the sense of Kantorovich distance, fluctuation bounds in the Shannon-McMillan-Breiman theorem, fluctuation bounds for the first occurrence of a pattern, etc.).
The Gaussian concentration bound implies volume large deviations for ergodic averages of local observables, i.e., when it holds, the probability that empirical averages of local observables deviate from their expectation is exponentially small in the volume over which the empirical average is taken. This excludes sub-volume large deviations, which in the context of equilibrium systems implies that the Gaussian concentration bound cannot hold in a phase transition regime.
In this paper we are interested in the time evolution of the Gaussian concentration bound under a stochastic evolution. More precisely, we study the following questions in the context of spin-flip dynamics of lattice spin systems:
1. When started from a probability measure satisfying the Gaussian concentration bound, do we have this bound at later times?
2. When started from a probability measure which does not satisfy the Gaussian concentration bound, can this bound be obtained at finite times?
At the end of the paper we study the same questions for a weaker concentration bound, namely the uniform variance bound. The study of time-dependent concentration properties of a measure under a stochastic evolution has several motivations. First, it reveals properties of transient non-equilibrium states: when one heats up or cools down a system, what are the concentration properties of the transient states in the course of this process? As mentioned before, the Gaussian concentration bound is a signature of "high temperature", "strong uniqueness" or "strong mixing". When cooling or heating a high-temperature system, one can ask whether this signature of high-temperature behavior is conserved in the course of time, even if one cannot make sense of intermediate temperatures during the evolution, due to possible Gibbs-non-Gibbs transitions [7]. Conversely, if one heats a system initially at low temperature, can the Gaussian concentration bound hold at finite times, i.e., can one obtain this signature of high-temperature behavior in finite time?
Second, semigroups corresponding to stochastic evolutions are useful interpolation tools, which give access to properties of measures that are not available in explicit (e.g. Gibbsian) form. The study of the time evolution of concentration properties gives insight into the concentration properties of such measures. An example is a spin-flip dynamics associated with two different temperatures, where the stationary distribution is an example of a non-equilibrium steady state about which little explicit information is available, as it will generically not be a Gibbs measure (equilibrium state). If one can show the conservation of the Gaussian concentration bound in the course of such a non-equilibrium time evolution, with constants uniformly bounded in time, then one obtains the Gaussian concentration bound for the non-equilibrium steady state as well.
Third, the study of time-dependent concentration properties is related to the study of Gibbs-non-Gibbs transitions [7]. In the regime where the time-evolved measure is not a Gibbs measure, one would still like to obtain some properties of these non-Gibbsian states. E.g. if one starts a high-temperature dynamics from the low-temperature Ising model with a weak magnetic field, it is known that one can have Gibbs-non-Gibbs transitions. On the other hand, due to the magnetic field, the initial state satisfies the Gaussian concentration inequality, and therefore, if this inequality is conserved in the course of time, one obtains that even in the non-Gibbsian regime the measures still satisfy the Gaussian concentration inequality. One can also start from the low-temperature Ising model in the phase transition regime and run a high-temperature dynamics. Then it is also known that in the course of time the Gibbs property is lost, even if the dynamics eventually converges to a high-temperature Gibbs state. It is then interesting to see whether the non-Gibbsian states reached in the course of the time evolution can already have, at finite times, signatures of the high-temperature behavior of the stationary state, such as the Gaussian concentration bound. In the context of time evolution of Gibbs measures, one generically has two scenarios. In the high-temperature regime, i.e., a high-temperature initial Gibbs measure and a high-temperature dynamics, the time-evolved measure is generically high-temperature Gibbs, and results of this type are proved via some form of high-temperature (cluster, polymer) expansion, see [7], [12]. In the regime where the dynamics is high-temperature and the initial measure is low-temperature, one typically has Gibbs-non-Gibbs transitions, i.e., after a finite time the time-evolved measure is no longer a Gibbs measure, and sometimes (e.g. for independent spin-flip dynamics starting from a low-temperature Ising state with positive small magnetic field) the measure can become Gibbs again.
In the context of time evolution of concentration inequalities, the results so far [4] are restricted to dynamics of diffusive type, in a finite-dimensional context. Here we are interested in the setting of translation-invariant spin-flip dynamics in infinite volume, which is precisely the context of Gibbs-non-Gibbs transitions in [7]. Guided by the intuition coming from this context, one expects that a high-temperature dynamics should conserve the Gaussian concentration bound.
We prove this result in the present paper, using the expansion in [12], i.e., under the condition that the flip rates are sufficiently close to the rates of an independent spin-flip dynamics.
Next we show that whenever one starts from a low-temperature initial state, i.e., in the non-uniqueness regime, then for any finite-range spin-flip dynamics, at any later time the distribution cannot satisfy the Gaussian concentration bound. This can be thought of as a result showing that in finite time one cannot obtain "high-temperature properties" when initially started from a "low-temperature state". This result is proved via an analyticity argument, showing that two different initial measures can never coincide at any finite time, combined with the fact that if a measure satisfies the Gaussian concentration bound, then its lower relative entropy density with respect to any other translation-invariant measure is strictly positive. I.e., the existence of two different time-evolved measures with zero relative entropy density excludes the possibility that one of them satisfies the Gaussian concentration bound.
Finally, we show that a weaker concentration bound, the uniform variance inequality, is generically conserved in the course of quasilocal spin-flip dynamics. This weaker bound, which is also valid for pure phases at low temperatures (such as the low-temperature Ising model), implies that the variance of empirical averages of local observables decays like the inverse of the volume over which the empirical average is taken. In particular, this excludes divergence of the susceptibility, i.e., critical behavior. Our result implies that in the course of a time evolution started from a non-critical state, no critical state can be reached. E.g. if one heats up a low-temperature Ising model, in the limit one obtains a high-temperature state, and in the course of the evolution one never reaches a state which looks like the Ising model at the critical temperature.
The rest of our paper is organized as follows. In section 2 we introduce some basic context and background on Gibbs measures and spin-flip dynamics. In section 3 we show conservation of the Gaussian concentration bound under a strong high-temperature (or weak interaction) condition. In section 4 we prove that the Gaussian concentration bound cannot be obtained in finite time if one starts from an initial Gibbs measure in a non-uniqueness ("low-temperature") regime. In this section we also prove a non-degeneracy result, based on analyticity, which is of independent interest. In section 5 we show conservation of the uniform variance inequality for general quasilocal spin-flip dynamics.

Markovian dynamics
In this section we introduce some basic notation, the definition of the Gaussian concentration bound, and basic concepts about Gibbs measures, spin-flip dynamics and relative entropy. The expert reader can skip this section, or go over it very quickly. We consider the state space of Ising spins on the lattice $\mathbb{Z}^d$, i.e., $\Omega = \{-1,1\}^{\mathbb{Z}^d}$. For elements $\sigma \in \Omega$, called "spin configurations", we denote by $\sigma_i \in \{-1,1\}$ the value of the spin at lattice site $i \in \mathbb{Z}^d$. When we say "a probability measure $\mu$ on $\Omega$", we mean a probability measure on the Borel $\sigma$-field of $\Omega$, equipped with the standard product of discrete topologies, which makes $\Omega$ into a compact metric space. For $\eta \in \Omega$ we denote by $\tau_i \eta$ the shifted or translated configuration, defined via $(\tau_i \eta)_j = \eta_{i+j}$. A function $f: \Omega \to \mathbb{R}$ is called local if it depends only on a finite number of coordinates. By the Stone-Weierstrass theorem, the set of local functions is dense in the Banach space of continuous functions $C(\Omega)$, equipped with the supremum norm. For $f: \Omega \to \mathbb{R}$ we denote by $\tau_i f$ the function defined via $(\tau_i f)(\sigma) = f(\tau_i \sigma)$. For a function $f: \Omega \to \mathbb{R}$ we denote the discrete gradient
$$\nabla_i f(\sigma) = f(\sigma^i) - f(\sigma),$$
where $\sigma^i$ denotes the configuration obtained from $\sigma$ by flipping the symbol at lattice site $i \in \mathbb{Z}^d$. We further denote
$$\delta_i f = \sup_{\sigma \in \Omega} |f(\sigma^i) - f(\sigma)|.$$
We think of $\delta_i f$ as "the Lipschitz constant in the coordinate $\sigma_i$". The symbol $\delta f$ means the collection of $\delta_i f$, $i \in \mathbb{Z}^d$, i.e., the "vector" of Lipschitz constants. For $p \ge 1$ we define
$$\|\delta f\|_p = \Big(\sum_{i \in \mathbb{Z}^d} (\delta_i f)^p\Big)^{1/p}.$$
For a continuous function $f: \Omega \to \mathbb{R}$ and a probability measure $\mu$ on $\Omega$, we will write either $E_\mu(f)$ or $\int f \, d\mu$ for the integral of $f$ with respect to $\mu$.
We can now define what we mean by a Gaussian concentration bound for a given probability measure on $\Omega$.

DEFINITION 2.1 (Gaussian Concentration Bound). A probability measure $\mu$ on $\Omega$ is said to satisfy the Gaussian concentration bound with constant $C > 0$, abbreviated GCB($C$), if for all continuous $f: \Omega \to \mathbb{R}$
$$E_\mu\Big(e^{f - E_\mu(f)}\Big) \le e^{\frac{C}{2}\|\delta f\|_2^2}. \qquad (1)$$

Observe that $\|\delta f\|_2$ is always finite for local functions. Note that a function $f: \Omega \to \mathbb{R}$ is local if and only if there exists a finite subset of $\mathbb{Z}^d$ (depending of course on $f$) such that $\delta_i f = 0$ for all $i$ outside of that subset. For non-local continuous functions, inequality (1) is meaningful only when $\|\delta f\|_2 < +\infty$. By a standard argument (exponential Chebyshev inequality applied to $\lambda f$, $\lambda > 0$, and then optimization over $\lambda$), the bound (1) implies the "sub-Gaussian" concentration inequality
$$\mu\Big(f - E_\mu(f) \ge u\Big) \le e^{-\frac{u^2}{2C\|\delta f\|_2^2}}$$
for all $u > 0$.

REMARK 2.1. The Gaussian concentration bound implies in particular "volume" large-deviation upper bounds for empirical averages. More precisely, for a translation-invariant measure satisfying (1) and a local function $f$, we have
$$\mu\left(\frac{1}{|\Lambda_n|}\sum_{i \in \Lambda_n} \tau_i f - E_\mu(f) \ge u\right) \le e^{-C_f u^2 |\Lambda_n|}$$
with $C_f > 0$, where $\Lambda_n = [-n,n]^d \cap \mathbb{Z}^d$. Therefore, in the context of Gibbs measures (equilibrium states), it is impossible to have the Gaussian concentration bound in the non-uniqueness regime. In this sense, the Gaussian concentration bound can be seen as a signature of a "high-temperature" or "weak-interaction" regime. The Gaussian concentration bound is (strictly) weaker than the log-Sobolev inequality, which in the context of Gibbs measures is known to be equivalent to strong uniqueness conditions [13].
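The sub-Gaussian tail can be checked numerically in the simplest case of independent spins, where a Gaussian concentration bound holds by Hoeffding's lemma. The following sketch uses the empirical average of $n$ independent symmetric $\pm 1$ spins; the value $C = 1/4$ and the convention with $C/2$ in the exponent are assumptions of this illustration, not constants taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, u = 200, 100_000, 0.2

# f(sigma) = empirical average of n independent symmetric +-1 spins;
# each delta_i f = 2/n, hence ||delta f||_2^2 = 4/n
sigma = rng.choice([-1, 1], size=(trials, n))
f = sigma.mean(axis=1)

tail = (f >= u).mean()                       # empirical P(f - E f >= u)
delta_sq = 4.0 / n
C = 0.25                                     # constant for product measures (assumed, via Hoeffding)
bound = np.exp(-u**2 / (2 * C * delta_sq))   # sub-Gaussian bound exp(-u^2 / (2C ||delta f||_2^2))

assert tail <= bound
```

For these parameters the bound equals $e^{-4} \approx 0.018$, while the empirical tail probability is roughly an order of magnitude smaller, as expected from a non-optimized concentration inequality.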

Gibbs measures
In the context of Gibbs measures, the Gaussian concentration bound is satisfied in the so-called high-temperature regime, and more generally in regimes where the unique Gibbs measure is sufficiently close to a product measure, such as the Dobrushin uniqueness regime. In this subsection we provide some basic background material on Gibbs measures which we need in the sequel. We refer to [8] for more details and further background. Let $\mathcal{S}$ denote the set of finite subsets of $\mathbb{Z}^d$. For $\Lambda \subset \mathbb{Z}^d$, we denote by $\mathcal{F}_\Lambda$ the $\sigma$-field generated by $\{\sigma_i, i \in \Lambda\}$.

DEFINITION 2.2. A uniformly absolutely summable potential is a map $U: \mathcal{S} \times \Omega \to \mathbb{R}$ with the following properties:
1. Locality: for all $A \in \mathcal{S}$, the map $\sigma \mapsto U(A, \sigma)$ is $\mathcal{F}_A$-measurable.
2. Uniform absolute summability: $\sup_{i \in \mathbb{Z}^d} \sum_{A \ni i} \sup_{\sigma \in \Omega} |U(A,\sigma)| < +\infty$.

Given a uniformly absolutely summable potential $U$, and $\Lambda \in \mathcal{S}$, we denote by
$$H_\Lambda^\eta(\sigma) = \sum_{A \cap \Lambda \neq \emptyset} U(A, \sigma_\Lambda \eta_{\Lambda^c})$$
the finite-volume Hamiltonian with boundary condition $\eta \in \Omega$, and the corresponding finite-volume Gibbs measure with boundary condition $\eta$ is
$$\mu_\Lambda^\eta(\sigma_\Lambda) = \frac{e^{-H_\Lambda^\eta(\sigma)}}{Z_\Lambda^\eta},$$
where the partition function with boundary condition $\eta$, $Z_\Lambda^\eta = \sum_{\sigma_\Lambda \in \Omega_\Lambda} e^{-H_\Lambda^\eta(\sigma)}$, is the normalizing constant (and where $\Omega_\Lambda = \{-1,1\}^\Lambda$ is the restriction of $\Omega$ to $\Lambda$).

DEFINITION 2.3. Let $U$ be a uniformly absolutely summable potential. A measure $\mu$ is called a Gibbs measure with potential $U$ if its conditional probabilities satisfy
$$\mu(\sigma_\Lambda \mid \mathcal{F}_{\Lambda^c})(\eta) = \mu_\Lambda^\eta(\sigma_\Lambda)$$
for all $\sigma$, and for $\mu$-almost every $\eta$. We will write $\mu \in \mathcal{G}(U)$ to mean that $\mu$ is a Gibbs measure for $U$.
We say that $U$ satisfies the strong uniqueness condition if
$$\sup_{i \in \mathbb{Z}^d} \sum_{A \ni i} (|A| - 1)\, \delta\big(U(A, \cdot)\big) < 2, \qquad (2)$$
where $\delta(U(A,\cdot)) = \sup_{\sigma, \sigma' \in \Omega} |U(A,\sigma) - U(A,\sigma')|$ denotes the oscillation of $U(A,\cdot)$. If $U$ satisfies (2), then the set of Gibbs measures $\mathcal{G}(U)$ is a singleton (unique Gibbs measure, no phase transition). The condition (2) implies the well-known Dobrushin uniqueness condition (cf. [8], Chapter 8).
If $U$ is translation invariant, then $\mathcal{G}(U)$ contains at least one translation-invariant Gibbs measure.
The following result is a particular case of the main theorem in [9], which states that, under the Dobrushin uniqueness condition, one has the Gaussian concentration bound (1).

THEOREM 2.1. Let $U$ satisfy the strong uniqueness condition (2). Then the unique Gibbs measure $\mu \in \mathcal{G}(U)$ satisfies GCB($C$) for some $C > 0$.

From the proof, one easily infers that all the finite-volume Gibbs measures $\mu_\Lambda^\eta$ also satisfy GCB($C$) whenever $U$ satisfies (2), with a constant $C$ that depends neither on the boundary condition $\eta$ nor on the volume $\Lambda$.
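The Dobrushin condition can be checked explicitly in simple cases. The following sketch computes the interdependence coefficients for the one-dimensional nearest-neighbour Ising model at an illustrative inverse temperature $\beta = 0.3$; the coefficient measures how much the single-site conditional law changes when one neighbouring spin is flipped.

```python
import numpy as np

beta = 0.3   # inverse temperature (illustrative choice)

def cond_prob_up(h):
    # single-site conditional P(sigma_0 = +1 | sum of the two neighbours = h)
    return np.exp(beta * h) / (np.exp(beta * h) + np.exp(-beta * h))

# Dobrushin interdependence coefficient for one neighbour j: the largest
# change in the conditional law at the origin when only the spin at j flips
C = max(abs(cond_prob_up(other + 1) - cond_prob_up(other - 1))
        for other in (-1, 1))

assert 2 * C < 1   # Dobrushin uniqueness: the sum over both neighbours is < 1
```

One finds $2C = \tanh(2\beta)$, so uniqueness holds by this criterion whenever $\tanh(2\beta) < 1$, i.e., for this one-dimensional sketch at any finite $\beta$, consistent with the absence of a phase transition in $d = 1$.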

Relative entropy density and large deviations
Translation-invariant Gibbs measures with a translation-invariant uniformly absolutely summable potential satisfy a level-3 large deviation principle with the relative entropy density as rate function [8, Chapter 15]. Let $U$ be a translation-invariant uniformly absolutely summable potential, and let $\mu \in \mathcal{G}(U)$ be a translation-invariant Gibbs measure. Let $\nu$ be a translation-invariant probability measure on $\Omega$. The relative entropy density is defined to be the limit
$$h(\nu \mid \mu) = \lim_{n \to \infty} \frac{1}{|\Lambda_n|}\, H_{\Lambda_n}(\nu \mid \mu), \qquad (3)$$
where $H_{\Lambda_n}(\nu \mid \mu)$ denotes the relative entropy of the restrictions of $\nu$ and $\mu$ to $\mathcal{F}_{\Lambda_n}$. The relative entropy density exists for any translation-invariant Gibbs measure $\mu \in \mathcal{G}(U)$ and any translation-invariant probability measure $\nu$. Moreover, the relative entropy density is the rate function of the so-called level-3 large deviation principle, i.e., in the sense of the large deviation principle, it holds that
$$\mu\big(\sigma_{\Lambda_n} \approx \nu\big) \asymp e^{-|\Lambda_n|\, h(\nu \mid \mu)}.$$
(This is of course an informal statement where "$\approx \nu$" means a neighborhood of $\nu$ in the weak topology, and "$\asymp$" means asymptotic equivalence after taking the logarithm and dividing by $|\Lambda_n|$.) In general, i.e., if $\mu$ is not a Gibbs measure, the limit defining (3) might not exist; in that case we define the lower relative entropy density as
$$h_*(\nu \mid \mu) = \liminf_{n \to \infty} \frac{1}{|\Lambda_n|}\, H_{\Lambda_n}(\nu \mid \mu).$$
The following elementary lemma, which we formulate in the context of a finite set with a Markov transition matrix, shows that the relative entropy is decreasing under the action of a Markov kernel.

LEMMA 2.1. Let $P(x,y)$ be a Markov transition function on a finite set $S$, $x, y \in S$, i.e., $P(x,y) \ge 0$, $\sum_{y \in S} P(x,y) = 1$ for all $x \in S$. Let $\mu, \nu$ be two probability measures on $S$ and let
$$D(\mu \mid \nu) = \sum_{x \in S} \mu(x) \log \frac{\mu(x)}{\nu(x)}$$
denote their relative entropy. Define $\mu P(y) = \sum_{x \in S} \mu(x) P(x,y)$ and similarly $\nu P$. Then we have
$$D(\mu P \mid \nu P) \le D(\mu \mid \nu).$$

PROOF. Define $\mu_{12}(x,y) = \mu(x) P(x,y)$ and similarly $\nu_{12}(x,y) = \nu(x) P(x,y)$. These define two joint distributions of a random variable $(X,Y)$ on $S \times S$. The first marginals of $\mu_{12}$, $\nu_{12}$ are $\mu$, resp. $\nu$, and the second marginals are $\mu P$, resp. $\nu P$. Moreover, because $\sum_{y \in S} P(x,y) = 1$, we get
$$D(\mu_{12} \mid \nu_{12}) = D(\mu \mid \nu).$$
Therefore, by the chain rule for relative entropy (see e.g. Lemma 4.18 in [10]) we obtain
$$D(\mu_{12} \mid \nu_{12}) = D(\mu P \mid \nu P) + \sum_{y \in S} \mu P(y)\, D\big(\mu_{12}(\cdot \mid y) \,\big|\, \nu_{12}(\cdot \mid y)\big).$$
Because $D$ is non-negative, we obtain the desired inequality.
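Lemma 2.1 is easy to verify numerically; a minimal sketch for a random strictly positive transition matrix on a five-point state space (all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
S = 5

def rel_entropy(mu, nu):
    # D(mu | nu) = sum_x mu(x) log(mu(x)/nu(x)), all entries strictly positive
    return float(np.sum(mu * np.log(mu / nu)))

# random strictly positive transition matrix P and measures mu, nu
P = rng.random((S, S)) + 0.1
P /= P.sum(axis=1, keepdims=True)
mu = rng.random(S) + 0.1; mu /= mu.sum()
nu = rng.random(S) + 0.1; nu /= nu.sum()

# relative entropy decreases under the action of the Markov kernel
assert rel_entropy(mu @ P, nu @ P) <= rel_entropy(mu, nu) + 1e-12
```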

Dynamics and generator
The basic question we are interested in is how the inequality GCB(C) is affected by applying a Markovian dynamics to the probability measure µ.
For this dynamics, we consider spin-flip dynamics with flip rates c(i, σ) at site i ∈ Z d satisfying the following assumptions.

Condition A:
1. Boundedness: $\sup_{i \in \mathbb{Z}^d} \sup_{\sigma \in \Omega} c(i, \sigma) < +\infty$.
2. Locality: $\sup_{i \in \mathbb{Z}^d} \sum_{j \in \mathbb{Z}^d} \delta_j c(i, \cdot) < +\infty$.

These conditions ensure existence of the dynamics with generator $L$ defined below in (4). In section 3 we will consider weakly interacting dynamics and need more stringent conditions.

Condition C:
1. Strict positivity: $\inf_{i \in \mathbb{Z}^d, \, \sigma \in \Omega} c(i, \sigma) > 0$.
2. Finite-range property: There exists R > 0 such that c(i, σ) depends only on σ j , for j such that |j − i| ≤ R.
If $c(i, \sigma) = c(0, \tau_i \sigma)$ for all $\sigma \in \Omega$, $i \in \mathbb{Z}^d$, then we say that the flip rates are translation invariant, where we recall the notation $(\tau_i \sigma)_j = \sigma_{i+j}$. The dynamics is defined via the Markov pre-generator $L$ acting on local functions via
$$Lf(\sigma) = \sum_{i \in \mathbb{Z}^d} c(i, \sigma)\, \big(f(\sigma^i) - f(\sigma)\big). \qquad (4)$$
As proved in [11, Chapter 1], under Condition A, the closure of $L$ (in $C(\Omega)$ equipped with the supremum norm) generates a unique Feller process. This process generated by $L$ is denoted $\{\sigma(t), t \ge 0\}$, and $\sigma_i(t)$ denotes the spin at time $t$ at lattice site $i$. We denote by $E_\sigma$ expectation in the process $\{\sigma(t), t \ge 0\}$ starting from $\sigma$, and by $P_\sigma$ the corresponding path-space measure. We denote the semigroup $S(t)f(\sigma) = E_\sigma[f(\sigma(t))]$, which acts as a Markov semigroup of contractions on $C(\Omega)$. Via duality, $S(t)$ acts on probability measures, and for $\mu$ a probability measure on $\Omega$, we denote by $\mu S(t)$ the time-evolved measure, determined by the equation
$$\int S(t) f \, d\mu = \int f \, d(\mu S(t)). \qquad (5)$$
We also introduce the non-linear semigroup
$$V(t) f = \log S(t)\, e^{f}, \qquad (6)$$
which is a family of non-linear operators satisfying the semigroup property, i.e., $V(t+s) = V(t) V(s)$, $s, t \ge 0$. This non-linear semigroup appears naturally in the context of the time evolution of the Gaussian concentration bound.
Finally, notice that $S(t)$ is a linear operator, whereas $V(t)$ is non-linear.
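In finite volume the generator (4) is just a matrix. A minimal sketch, assuming Glauber-type rates for a nearest-neighbour interaction on a small periodic chain (the rate function and all parameters are illustrative), builds this matrix and checks that $e^{tL}$ is a Markov semigroup:

```python
import numpy as np
from itertools import product

N = 4                                     # a ring of N Ising spins (finite-volume sketch)
beta = 0.5
configs = [np.array(s) for s in product([-1, 1], repeat=N)]
idx = {tuple(s): k for k, s in enumerate(configs)}

def rate(i, s):
    # Glauber-type flip rate for a nearest-neighbour interaction (assumed choice)
    h = s[(i - 1) % N] + s[(i + 1) % N]
    return np.exp(-beta * s[i] * h)

# generator matrix of (4): (Lf)(s) = sum_i c(i,s) [f(s^i) - f(s)]
L = np.zeros((2**N, 2**N))
for k, s in enumerate(configs):
    for i in range(N):
        flipped = s.copy()
        flipped[i] *= -1
        c = rate(i, s)
        L[k, idx[tuple(flipped)]] += c
        L[k, k] -= c

assert np.allclose(L.sum(axis=1), 0)      # rows sum to zero: probability is conserved

# S(t) = e^{tL} via a truncated Taylor series; a Markov semigroup has
# non-negative entries and rows summing to one
t = 0.1
St = np.eye(2**N)
term = np.eye(2**N)
for n in range(1, 40):
    term = term @ (t * L) / n
    St = St + term
assert np.allclose(St.sum(axis=1), 1)
assert (St >= -1e-9).all()
```

In infinite volume this matrix picture breaks down, which is why the existence of the process under Condition A has to be established via the closure of the pre-generator, as in [11].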

Some basic facts for spin-flip dynamics
In the study of existence and ergodicity properties of the Markovian dynamics $\{\sigma(t): t \ge 0\}$ an important role is played by the matrix indexed by sites $i, j \in \mathbb{Z}^d$ and defined by
$$\Gamma_{ij} = \delta_j c(i, \cdot) \ \text{ for } j \neq i, \qquad \Gamma_{ii} = 0.$$
We have the pointwise estimate (see [11, Chapter 1])
$$\delta_i\big(S(t) f\big) \le \big(e^{t\Gamma}\, \delta f\big)_i,$$
where $e^{t\Gamma} \delta f$ denotes the bounded operator (in $\ell^1(\mathbb{Z}^d)$) $e^{t\Gamma}$ working on the "column vector" $\delta f$. If the rates are translation invariant, then we have $\Gamma_{ij} = \gamma(j - i)$, i.e., $\Gamma$ acts as a convolution operator, and as a consequence $e^{t\Gamma}$ is a convolution operator as well. The so-called uniform ergodic regime, or "$M < \varepsilon$ regime" (see [11]), with
$$M = \sum_{j \neq 0} \gamma(j), \qquad \varepsilon = \inf_{i \in \mathbb{Z}^d, \, \sigma \in \Omega}\big(c(i,\sigma) + c(i, \sigma^i)\big),$$
is the regime where the dynamics admits a unique invariant measure to which every initial measure converges exponentially fast in the course of time. In that case there exists $\alpha > 0$ such that
$$\|\delta(S(t) f)\|_2 \le e^{-\alpha t}\, \|\delta f\|_2, \qquad (7)$$
see [3, Theorem 3.3]. In general, for a spin-flip dynamics generated by (4), we have that $\Gamma$ is a bounded operator in $\ell^2(\mathbb{Z}^d)$, i.e.,
$$\|\delta(S(t) f)\|_2 \le K(t)\, \|\delta f\|_2 \qquad (8)$$
for some time-dependent constant $K(t) > 0$. Finally, we mention a useful fact about the relative entropy density. Using the elementary Lemma 2.1 and finite-volume approximations, one obtains, for a translation-invariant spin-flip dynamics with rates satisfying Condition A, the implication
$$h_*(\nu \mid \mu) = 0 \implies h_*\big(\nu S(t) \mid \mu S(t)\big) = 0 \ \text{ for all } t \ge 0.$$
This will be used later on, in Section 4.
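The quantities entering the "$M < \varepsilon$" criterion can be computed explicitly for simple translation-invariant rates. The following sketch, assuming nearest-neighbour Glauber-type rates on $\mathbb{Z}$ at a small illustrative $\beta$ (so that $\gamma$ is supported on $|j| = 1$), verifies the uniform ergodicity condition:

```python
import numpy as np
from itertools import product

# Translation-invariant Glauber-type rates on Z with nearest-neighbour
# interaction, c(0, sigma) = exp(-beta * sigma_0 (sigma_{-1} + sigma_1))
# (an assumed illustrative choice of rates)
beta = 0.1

def rate(s_m1, s0, s_p1):
    return np.exp(-beta * s0 * (s_m1 + s_p1))

# epsilon = inf_sigma ( c(0, sigma) + c(0, sigma^0) )
eps = min(rate(a, b, c) + rate(a, -b, c)
          for a, b, c in product([-1, 1], repeat=3))

# gamma(k) = delta_k c(0, .): variation of the rate in the spin at site k
gamma = {-1: 0.0, 1: 0.0}
for a, b, c in product([-1, 1], repeat=3):
    gamma[-1] = max(gamma[-1], abs(rate(a, b, c) - rate(-a, b, c)))
    gamma[1] = max(gamma[1], abs(rate(a, b, c) - rate(a, b, -c)))
M = gamma[-1] + gamma[1]

assert M < eps   # the uniform ergodic "M < epsilon" regime holds at small beta
```

Here one finds $\varepsilon = 2$ and $M = 2(e^{2\beta} - 1)$, so the criterion holds for $\beta$ small, in line with the intuition that weak interaction implies uniform ergodicity.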

Time evolution of the Gaussian concentration bound
In this section we show conservation of the Gaussian concentration bound under weakly interacting spin-flip dynamics, i.e., dynamics sufficiently close to independent spin-flip dynamics.
More precisely, if we start the process $\{\sigma(t): t \ge 0\}$ from a probability measure $\mu$ satisfying GCB($C$), then we are interested in the following questions:
1. Is it the case that under the time evolution $\{\sigma(t), t \ge 0\}$ the time-evolved measure $\mu S(t)$ still satisfies GCB($C_t$), and if yes, how does the constant $C_t$ evolve?
2. If the dynamics admits a unique stationary measure ν, does this measure satisfy GCB(C)?

A general result and conservation of GCB for independent dynamics
We start with the following general result, which states that if the Gaussian concentration bound holds at time $t > 0$ when starting from a Dirac measure $\delta_\sigma$ with a constant that does not depend on $\sigma$, then the Gaussian concentration bound holds at time $t > 0$ when started from any initial measure satisfying the Gaussian concentration bound.

THEOREM 3.1. Let $\{\sigma(t), t \ge 0\}$ be such that for all $\sigma \in \Omega$ the probability measure $\delta_\sigma S(t)$ satisfies GCB($D_t$), where the constant $D_t$ does not depend on $\sigma$. Let $\mu$ be a probability measure satisfying GCB($C_\mu$). Then, for all local functions $f$ we have
$$E_{\mu S(t)}\Big(e^{f - E_{\mu S(t)}(f)}\Big) \le e^{\frac{D_t}{2}\|\delta f\|_2^2 + \frac{C_\mu}{2}\|\delta(S(t) f)\|_2^2}. \qquad (9)$$
As a consequence, we obtain the following results:
1. By (8), $\mu S(t)$ satisfies GCB($C(\mu,t)$) with $C(\mu,t) \le D_t + C_\mu K(t)^2$.
2. In the uniformly ergodic case ($M < \varepsilon$ regime, cf. (7)), there exists $\alpha > 0$ such that $\mu S(t)$ satisfies GCB($C(\mu,t)$) with
$$C(\mu,t) \le D_t + C_\mu\, e^{-2\alpha t}.$$
If furthermore $\sup_t D_t < \infty$, then also the unique stationary measure $\nu$ satisfies GCB($C_\nu$) with $C_\nu \le \sup_t D_t < +\infty$.
PROOF. Start from the left-hand side of (9). Using (5) and (6), we rewrite
$$E_{\mu S(t)}\Big(e^{f - E_{\mu S(t)}(f)}\Big) = \int e^{V(t) f - E_\mu(S(t) f)} \, d\mu \le e^{\frac{D_t}{2}\|\delta f\|_2^2} \int e^{S(t) f - E_\mu(S(t) f)} \, d\mu \le e^{\frac{D_t}{2}\|\delta f\|_2^2 + \frac{C_\mu}{2}\|\delta(S(t) f)\|_2^2}.$$
In the last two steps we first used that $\delta_\sigma S(t)$ satisfies GCB($D_t$), i.e., we have the inequality
$$V(t) f(\sigma) = \log S(t)\, e^{f}(\sigma) \le S(t) f(\sigma) + \frac{D_t}{2}\|\delta f\|_2^2$$
for all $\sigma$. Second, we used the fact that $\mu$ satisfies GCB($C_\mu$). The consequences 1 and 2 now follow immediately.
The following corollary shows that for independent spin-flip dynamics, Gaussian concentration is conserved.

COROLLARY 3.1. Let $\{\sigma(t), t \ge 0\}$ be an independent spin-flip dynamics with flip rates bounded from below. If $\mu$ satisfies GCB($C_\mu$), then for all $t > 0$ the measure $\mu S(t)$ satisfies GCB($C(\mu, t)$) with
$$C(\mu, t) \le D_t + C_\mu\, e^{-2\alpha t} \qquad (10)$$
for some $\alpha > 0$, and with $\sup_t D_t < +\infty$.
PROOF. First notice that if $P$ is a product measure on $\{-1,1\}^{\mathbb{Z}^d}$, then $P$ satisfies GCB($C$) with a constant $C$ that does not depend on the marginal distributions, see [1]. For independent spin-flip dynamics, $\delta_\sigma S(t)$ is a product measure. Therefore, in that case, the assumption of Theorem 3.1 is satisfied, with $D_t$ uniformly bounded as a function of $t$. Furthermore, because the flip rates are assumed to be bounded from below, the process $\{\sigma(t), t \ge 0\}$ is uniformly ergodic, and as a consequence we obtain (10).

Weakly interacting spin-flip dynamics
The result for independent spin-flip dynamics (i.e., Corollary 3.1) can be generalized to a setting of weakly interacting dynamics, which was studied before in [12] in the context of time-evolution of Gibbs measures. The setting is such that the rates are sufficiently close to the rates of independent rate 1 spin-flip dynamics, such that a space-time cluster expansion can be set up. In particular, these conditions imply that there exists a unique invariant measure which is a Gibbs measure in the Dobrushin uniqueness regime.
More precisely, the assumptions on the rates are those of Condition C, with one extra assumption forcing the rates to be close to a constant:
$$\sup_{i \in \mathbb{Z}^d} \sup_{\sigma \in \Omega} |c(i, \sigma) - 1| \le \varepsilon_0, \qquad (11)$$
where $\varepsilon_0 \in (0,1)$ is a constant depending on the dimension, specified in [12]. The important implication of the space-time cluster expansion developed in [12] which we need in our context is the following: the measure $\delta_\sigma S(t)$ is a Gibbs measure which is in the Dobrushin uniqueness regime, uniformly in $t > 0$ and $\sigma$. More precisely, $\delta_\sigma S(t)$ is a Gibbs measure with a uniformly absolutely summable potential $U^t_\sigma$ satisfying
$$\sup_{t > 0}\ \sup_{\sigma \in \Omega}\ \sup_{i \in \mathbb{Z}^d} \sum_{A \ni i} (|A| - 1)\, \delta\big(U^t_\sigma(A, \cdot)\big) < 2. \qquad (12)$$
In fact, in [12] an exponential norm
$$\sup_{i \in \mathbb{Z}^d} \sum_{A \ni i} e^{a|A|}\, \delta\big(U^t_\sigma(A, \cdot)\big),$$
where $a > 0$ is small enough, is shown to be finite, and to go to zero as $\varepsilon_0 \to 0$, which is stronger than (12). Using Theorem 3.1, combined with Theorem 2.1, we obtain the following result.

THEOREM 3.2. Assume the rates satisfy Condition C and (11). If $\mu$ satisfies GCB($C_\mu$), then for all $t > 0$ the measure $\mu S(t)$ satisfies GCB($C(\mu, t)$).

No-go from low-temperature Gibbs measures to Gaussian concentration bound
In this section we consider a complementary regime, i.e., starting from an initial distribution where GCB is not satisfied, such as a translation-invariant Gibbs measure in the non-uniqueness regime. We prove that it is impossible to go from such a Gibbs measure in the non-uniqueness regime towards a probability measure which satisfies GCB($C$) in finite time. One can interpret this result as the fact that one cannot acquire in finite time strong "high-temperature" properties from a low-temperature initial state. We prove this result first for finite-range spin-flip dynamics, and then extend it to infinite range under appropriate conditions. We start with an abstract "non-degeneracy" condition on the Markov semigroup.

DEFINITION 4.1 (Non-degenerate Markov semigroup). We say that the Markov semigroup $(S(t))_{t \ge 0}$ of a spin-flip dynamics is non-degenerate if for every pair of probability measures $\mu \neq \nu$, we have $\mu S(t) \neq \nu S(t)$ for all $t > 0$.
Then we have the following general result, which shows that under the evolution of a non-degenerate semigroup one cannot go from "low temperature" to "high temperature" in finite time.

THEOREM 4.1. Let $\mu^+ \neq \mu^-$ denote two translation-invariant Gibbs measures for the same translation-invariant potential. Assume that the Markov semigroup is non-degenerate. Then for all $t > 0$, $\mu^+ S(t)$ cannot satisfy GCB($C$).
The following lemma shows that independent spin-flip dynamics is non-degenerate.

LEMMA 4.1. The semigroup of rate-1 independent spin-flip dynamics is non-degenerate.

PROOF. Define, for $A \in \mathcal{S}$, $\sigma_A = \prod_{i \in A} \sigma_i$. Then we have $L\sigma_A = -2|A|\sigma_A$ and as a consequence
$$S(t)\, \sigma_A = e^{-2|A|t}\, \sigma_A. \qquad (13)$$
Suppose $\mu \neq \nu$. If $\mu S(t) = \nu S(t)$ for some $t > 0$, then it follows from (13) that
$$e^{-2|A|t} \int \sigma_A \, d\mu = e^{-2|A|t} \int \sigma_A \, d\nu,$$
and therefore $\int \sigma_A \, d\mu = \int \sigma_A \, d\nu$ for all $A \in \mathcal{S}$. Because linear combinations of the functions $\sigma_A$ are uniformly dense in $C(\Omega)$, we conclude that $\mu = \nu$, which leads to a contradiction.
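The eigenvalue identity $L\sigma_A = -2|A|\sigma_A$ can be checked directly on a finite set of sites; a small sketch for rate-1 independent spin flips (the choice $N = 4$ and $A = \{0, 2\}$ is illustrative):

```python
import numpy as np
from itertools import product

N = 4                                        # independent rate-1 spin flips on N sites
A = (0, 2)                                   # a finite set A; sigma_A = prod_{i in A} sigma_i
configs = [np.array(s) for s in product([-1, 1], repeat=N)]

def sigma_A(s):
    return np.prod([s[i] for i in A])

for s in configs:
    # (L sigma_A)(s) = sum_i [sigma_A(s^i) - sigma_A(s)], all rates equal to 1
    Lf = 0.0
    for i in range(N):
        flipped = s.copy()
        flipped[i] *= -1
        Lf += sigma_A(flipped) - sigma_A(s)
    # flipping a site in A reverses the sign of sigma_A, sites outside A do nothing
    assert np.isclose(Lf, -2 * len(A) * sigma_A(s))
```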
In the next subsection, we use analyticity arguments to show non-degeneracy for general translation-invariant finite-range spin-flip dynamics.

Analyticity and non-degeneracy of local spin-flip dynamics
In this section we show that for general finite-range translation-invariant spin-flip dynamics, for $\mu$ a probability measure on $\Omega$ and for a (uniformly) dense set of continuous functions $f$, the map $t \mapsto \int S(t) f \, d\mu$ can be analytically extended to a strip in the complex plane whose width does not depend on $\mu$. This implies non-degeneracy in the sense of Definition 4.1.
We start by setting up the necessary notation. We recall the notation $\sigma_B = \prod_{i \in B} \sigma_i$ for $B$ a finite subset of $\mathbb{Z}^d$. For a finite set $B \subset \mathbb{Z}^d$ we define the associated translation-invariant operator
$$L_B = \sum_{i \in \mathbb{Z}^d} \sigma_{B+i}\, \nabla_i.$$
In case $B = \emptyset$ we make the convention $\sigma_B = 1$, i.e., $L_\emptyset = \sum_{i \in \mathbb{Z}^d} \nabla_i$ is the generator of rate-1 independent spin flips.
A general finite-range translation-invariant spin-flip generator can then be written in terms of these "building block" operators as
$$L_{\mathcal{B}} = \sum_{B \in \mathcal{B}} \lambda(B)\, L_B,$$
where $\mathcal{B}$ is a finite collection of finite subsets of $\mathbb{Z}^d$, and where $\lambda: \mathcal{B} \to \mathbb{R}$.
The lemma is proved.
We can then estimate $L_{\mathcal{B}}^n \sigma_A$. As a consequence,
$$\sum_{n=0}^{\infty} \frac{t^n}{n!} \|L_{\mathcal{B}}^n \sigma_A\|_\infty$$
is a uniformly convergent series for $t < t_0$ with $t_0 = \frac{1}{2M|\mathcal{B}|(|A|+K)}$.
The consequence is immediate from (16).

PROPOSITION 4.1. Let $L_{\mathcal{B}}$ be a finite-range translation-invariant spin-flip generator as above, let $\mu$ be a probability measure on $\Omega$, and let $f$ be a local function. Then the map $\psi_f: t \mapsto \int S(t) f \, d\mu$ extends analytically to a strip around $[0, \infty)$ in the complex plane whose width does not depend on $\mu$.

PROOF. The set of analytic vectors is by definition the set of functions $f$ such that there exists $t > 0$ for which
$$\sum_{n=0}^{\infty} \frac{t^n}{n!} \|L_{\mathcal{B}}^n f\|_\infty$$
is a convergent series. Let us denote by $\mathcal{A}$ the set of analytic vectors. Notice that $\mathcal{A}$ is a vector space. By Lemma 4.3 it follows that $\sigma_A \in \mathcal{A}$ for all finite $A \subset \mathbb{Z}^d$. As a consequence, $\mathcal{A}$ contains all local functions and, as we saw before, the set of local functions is uniformly dense in $C(\Omega)$. For $f \in \mathcal{A}$ the series $\sum_n \frac{t^n}{n!}\|L_{\mathcal{B}}^n f\|_\infty$ converges for $t \le r$, which implies that $\psi_f(z)$ can be extended analytically in $B(0, r) = \{z \in \mathbb{C} : |z| \le r\} \subset \mathbb{C}$.
Now notice that the same holds when we replace $f$ by $S(s)f$, by the contraction property:
$$\|L_{\mathcal{B}}^n\, S(s) f\|_\infty = \|S(s)\, L_{\mathcal{B}}^n f\|_\infty \le \|L_{\mathcal{B}}^n f\|_\infty.$$
More precisely, for all $s$, $\psi_{S(s)f}(\cdot)$ can be extended analytically in $B(0, r)$, where $r$ does not depend on $s$. This implies the statement of the proposition, because, via the semigroup property,
$$\psi_f(s + t) = \int S(t)\, S(s) f \, d\mu = \psi_{S(s)f}(t).$$
The proof is finished.

Generalization to a class of infinite-range dynamics
The assumption of finite range for the translation-invariant flip rates can be replaced by an appropriate decay condition on the rates, which we now specify. We assume that the generator is of the form
$$L_{\mathcal{B}} = \sum_{B \in \mathcal{B}} \lambda(B)\, L_B,$$
where as before $L_B = \sum_{i \in \mathbb{Z}^d} \sigma_{B+i}\, \nabla_i$. We assume now that $\mathcal{B}$ is an infinite set of finite subsets of $\mathbb{Z}^d$ and that we have the bound
$$\sum_{B \in \mathcal{B}, \, |B| = k} |\lambda(B)| \le c\, \psi(k), \qquad (18)$$
where $c \in (0, +\infty)$ is a constant and where $\psi(k)$ is a positive measure on the natural numbers such that for some $u > 0$
$$\sum_{k=0}^{\infty} e^{uk}\, \psi(k) = F(u) < +\infty.$$
In the following lemma we obtain a bound which allows us to estimate $\|L_{\mathcal{B}}^n \sigma_A\|_\infty$; in its proof we use that $v^n e^{-v}/n! < 1$ for all $v > 0$ and $n$.
We can then show that the bound (15) of Lemma 4.3 still holds.
PROPOSITION 4.3. Assume (18). Then for every finite $A \subset \mathbb{Z}^d$ and all $n$,
$$\|L_{\mathcal{B}}^n \sigma_A\|_\infty \le n!\, \kappa^n$$
for some $\kappa > 0$. As a consequence, local functions are analytic vectors, and the Markovian dynamics generated by $L_{\mathcal{B}}$ is non-degenerate.
PROOF. We estimate as in the proof of Lemma 4.3, using (18). Let $u > 0$ be as in (18), and let $A \neq \emptyset$. Then
$$\|L_{\mathcal{B}}^n \sigma_A\|_\infty \le |A|^n\, 2^n\, c^n\, e^{u}\, n!\, u^{-n}\, F(u)^n \le n!\, \kappa^n$$
for some $0 < \kappa < +\infty$. With this bound, we can proceed as in the proof of the finite-range case (Lemma 4.3, Propositions 4.1, 4.2).

Uniform variance bound
In this section we consider the time-dependent behavior of a weaker concentration inequality, which we call the "uniform variance bound". In the context of Gibbs measures, contrary to GCB, this inequality can still hold in the non-uniqueness regime (for the ergodic equilibrium states); see [5] for a proof of this inequality for the low-temperature pure phases of the Ising model.

DEFINITION 5.1 (Uniform Variance Bound). We say that $\mu$ satisfies the uniform variance bound with constant $C$ (abbreviated UVB($C$)) if for all continuous $f: \Omega \to \mathbb{R}$
$$\mathrm{Var}_\mu(f) \le C\, \|\delta f\|_2^2. \qquad (19)$$

Notice that, in contrast with the Gaussian concentration bound, the inequality (19) is homogeneous, i.e., if (19) holds for $f$, then for all $\lambda \in \mathbb{R}$ it also holds for $\lambda f$. Furthermore, if (19) holds for a subset of continuous functions which is uniformly dense in $C(\Omega)$ (such as the set of local functions), then it holds for all $f \in C(\Omega)$ by standard approximation arguments. This implies that if we can show the validity of (19) for a set of functions $D \subset C(\Omega)$ such that $\bigcup_{\lambda \in [0, +\infty)} \lambda D$ contains all local functions, we obtain the validity of (19) for all $f \in C(\Omega)$.
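For product measures, a uniform variance bound of the form (19) follows from the Efron-Stein inequality with constant $1/4$. The following sketch verifies this exactly, by exhaustive enumeration, for an arbitrary observable under the uniform measure on $\{-1,1\}^6$ (an assumed special case; the constant $1/4$ comes from Efron-Stein, not from the text):

```python
import numpy as np
from itertools import product

# Efron-Stein for independent +-1 spins: Var(f) <= (1/4) sum_i (delta_i f)^2
rng = np.random.default_rng(2)
n = 6
configs = np.array(list(product([-1, 1], repeat=n)))
f = rng.random(2**n)                 # an arbitrary observable f : Omega -> R

var = f.var()                        # exact variance under the uniform measure
delta_sq = 0.0
weights = 2 ** np.arange(n - 1, -1, -1)
for i in range(n):
    flipped = configs.copy()
    flipped[:, i] *= -1
    j = ((flipped + 1) // 2) @ weights   # index of sigma^i in the enumeration
    delta_sq += np.max(np.abs(f - f[j])) ** 2   # (delta_i f)^2

assert var <= 0.25 * delta_sq
```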
The following proposition shows that a weak form of Gaussian concentration is equivalent to the uniform variance bound.

DEFINITION 5.2 (Weak Gaussian Concentration Bound). We say that a probability measure $\mu$ satisfies the weak Gaussian concentration bound with constant $C$ if for every continuous $f: \Omega \to \mathbb{R}$ there exists $\lambda_0(f) > 0$ such that for all $0 \le \lambda \le \lambda_0(f)$
$$E_\mu\Big(e^{\lambda(f - E_\mu(f))}\Big) \le e^{\frac{C}{2}\lambda^2 \|\delta f\|_2^2}. \qquad (20)$$

PROPOSITION 5.1. A probability measure $\mu$ satisfies the weak Gaussian concentration bound for some constant if and only if it satisfies UVB($C$) for some $C > 0$.

PROOF. Assume (20). As $\lambda \downarrow 0$ we have
$$E_\mu\Big(e^{\lambda(f - E_\mu(f))}\Big) = 1 + \frac{\lambda^2}{2}\mathrm{Var}_\mu(f) + o(\lambda^2), \qquad e^{\frac{C}{2}\lambda^2\|\delta f\|_2^2} = 1 + \frac{C}{2}\lambda^2 \|\delta f\|_2^2 + o(\lambda^2),$$
and therefore $\mathrm{Var}_\mu(f) \le C \|\delta f\|_2^2$, which is the uniform variance bound. Conversely, assume that the uniform variance bound holds, and let $f: \Omega \to \mathbb{R}$ be a continuous function. Then use the elementary inequality
$$e^{\lambda x} - 1 - \lambda x \le \frac{\lambda^2 e\, x^2}{2}, \quad \text{valid for } 0 \le \lambda x \le 1,$$
together with $e^x \ge 1 + x$, to conclude that for $\lambda$ with $\lambda \|f - E_\mu(f)\|_\infty \le 1$
$$E_\mu\Big(e^{\lambda(f - E_\mu(f))}\Big) \le 1 + \frac{\lambda^2 e}{2}\mathrm{Var}_\mu(f) \le e^{\frac{eC}{2}\lambda^2 \|\delta f\|_2^2},$$
i.e., the weak Gaussian concentration bound holds with constant $eC$. The following theorem is the analogue of Theorem 3.1 for the uniform variance bound.

THEOREM 5.1. Let $\{\sigma(t), t \ge 0\}$ be such that for all $\sigma \in \Omega$ the probability measure $\delta_\sigma S(t)$ satisfies UVB($D_t$), where $D_t$ does not depend on $\sigma$. Let $\mu$ satisfy UVB($C_\mu$). Then $\mu S(t)$ satisfies UVB($C(\mu,t)$) with $C(\mu,t) \le D_t + C_\mu K(t)^2$.
PROOF. Let $f: \Omega \to \mathbb{R}$ be a continuous function. Then we compute, using (8):
$$\mathrm{Var}_{\mu S(t)}(f) = \int \Big(S(t) f^2 - (S(t) f)^2\Big)\, d\mu + \mathrm{Var}_\mu\big(S(t) f\big) \le D_t\, \|\delta f\|_2^2 + C_\mu\, \|\delta(S(t) f)\|_2^2 \le \Big(D_t + C_\mu K(t)^2\Big)\|\delta f\|_2^2,$$
where in the first inequality we used that $S(t) f^2(\sigma) - (S(t) f(\sigma))^2 = \mathrm{Var}_{\delta_\sigma S(t)}(f) \le D_t \|\delta f\|_2^2$ for all $\sigma$, together with UVB($C_\mu$) for $\mu$. The theorem is proved.
COROLLARY 5.1. Assume that the spin-flip rates satisfy the weak interaction condition of Section 3.2, then the dynamics conserves the uniform variance bound.
PROOF. Under the weak interaction condition, δ σ S(t) satisfies GCB(C) with a constant that does not depend on σ. By Proposition 5.1 δ σ S(t) satisfies UVB(C) with a constant that does not depend on σ. The conclusion follows from Theorem 5.1.
The following theorem shows that the high-temperature condition of Corollary 5.1 is not necessary; in fact, the uniform variance inequality is robust under any local spin-flip dynamics, i.e., under Condition C of Section 2.3.

THEOREM 5.2. Assume the flip rates satisfy Condition C, and denote $\hat{c} = \sup_{i, \sigma} c(i, \sigma)$. If $\mu$ satisfies UVB($C_\mu$), then $\mu S(t)$ satisfies UVB($C(\mu, t)$) for all $t > 0$.

PROOF. One shows that $\delta_\sigma S(t)$ satisfies UVB($C$) with $C = 2\hat{c} \int_0^t K(s)^2\, ds$ not depending on $\sigma$. Via Theorem 5.1, we obtain the statement of the theorem.

REMARK 5.2. Remark that we did not use the finite-range character of the spin-flip rates, nor the translation invariance. I.e., as soon as the flip rates are uniformly bounded, and are such that the Markovian dynamics with these rates can be defined, we obtain that the uniform variance bound is conserved in the course of time.
Finally, we show the analogue of Theorem 3.1 for more general inequalities, including moment inequalities. We say that $\mu$ satisfies the $(F, J, C)$ inequality if for all continuous $f: \Omega \to \mathbb{R}$
$$E_\mu\Big(F\big(f - E_\mu(f)\big)\Big) \le J\big(C\, \|\delta f\|_2\big),$$
where $F: \mathbb{R} \to \mathbb{R}$ and $J$ is an increasing function. To fit the examples we saw so far: UVB($a$) corresponds to $F(x) = x^2$, $J(x) = x^2$, $C = \sqrt{a}$, whereas GCB($a$) corresponds to $F(x) = e^x$, $J(x) = e^{x^2/2}$, $C = \sqrt{a}$. More general moment inequalities correspond to $F(x) = |x|^p$, $J(x) = |x|^p$. The following theorem is then the analogue of Theorem 3.1 for the $(F, J, C)$ inequality.

THEOREM 5.3. Assume that $\delta_\sigma S(t)$ satisfies the $(F, J, C)$ inequality with constant $C$ that does not depend on $\sigma$. Then if $\mu$ satisfies the $(F, J, C_\mu)$ inequality, so does $\mu S(t)$ for all $t > 0$.
PROOF. We write, using $p_t(\sigma, d\eta)$ for the transition probability measure starting from $\sigma$, and abbreviating $\int f \, d\mu S(t) =: \mu(t, f)$. Here, in the last two steps, we used (8), combined with the fact that $J$ is increasing.