In the previous chapter we examined the central limit theorem of additive functionals of the simple exclusion process. We investigate in this one the asymptotic behavior of a distinguished particle.

Recall the set-up introduced in Chap. 5. Fix a finite range probability measure p on ℤd such that p(0)=0 and denote by {η t :t≥0} the exclusion process on \(\mathbb{X} = \{0,1\}^{\mathbb{Z}^{d}}\) associated to p. Consider a configuration η with a particle at the origin. Tag this particle and denote by Z t , t≥0, its position at time t. Due to the presence of the other particles, {Z t :t≥0} is not by itself a Markov process, but the pair {(Z t ,η t ):t≥0} is.

We are interested in this chapter in the asymptotic behavior of Z t . If there were no other particles, Z t would evolve as a continuous time random walk for which the law of large numbers, the central limit theorem and the corresponding invariance principle are well known. The interaction of the tagged particle with the other particles, called from now on the environment, clearly alters the dynamics of the tagged particle, and the study of its evolution requires new tools.

The Markov process (Z t ,η t ) has a very special feature which enables the analysis of the asymptotic behavior of the tagged particle. Consider the evolution of the environment as seen from the position of the tagged particle. This means that we set the origin to be at the position of the tagged particle, and observe from there the evolution of the remaining particles. To respect this rule, each time the tagged particle jumps by x, we translate the entire configuration by −x to keep the tagged particle at the origin. A fundamental and quite particular property of this dynamics is that it possesses a very simple family of stationary states, the Bernoulli product measures with density 0≤α≤1 on \(\mathbb{Z}^{d}_{*} = \mathbb{Z}^{d}\setminus\{0\}\), which we denote by \(\nu^{*}_{\alpha}\).

A law of large numbers for the tagged particle starting from \(\nu^{*}_{\alpha}\) is easy to infer. The mean displacement of the tagged particle is \(\mathfrak{m} = \sum_{x\in\mathbb{Z}^{d}} x p(x)\). Since \(\nu^{*}_{\alpha}\) is stationary, the site to which the tagged particle chooses to jump is empty with probability 1−α. We thus expect a mean displacement in the stationary state to be equal to \((1-\alpha)\mathfrak{m}\). This is in fact the content of the first main theorem in this chapter which states that, in the stationary state with density α, Z t /t converges a.s. to \((1-\alpha)\mathfrak {m}\) as t→∞.
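This heuristic is easy to probe numerically. The sketch below is our own illustration, not part of the text's formal development: it simulates a totally asymmetric nearest-neighbor exclusion (p(1)=1, so \(\mathfrak{m}=1\)) on a finite ring rather than on the infinite lattice, started from the uniform measure with n=αN particles, which is stationary on the ring. The empirical speed of a tagged particle should then be close to (1−α)·1.

```python
import random

def tasep_tagged_speed(num_sites=200, density=0.5, num_events=200_000, seed=7):
    """Monte Carlo estimate of the tagged-particle speed for a totally
    asymmetric exclusion process (p(1)=1) on a ring -- a finite-volume
    stand-in for the infinite-volume process of the text."""
    rng = random.Random(seed)
    n = int(num_sites * density)
    occupied = [True] * n + [False] * (num_sites - n)
    rng.shuffle(occupied)                     # uniform = stationary on the ring
    positions = [i for i, occ in enumerate(occupied) if occ]
    tagged = 0                                # index of the tagged particle
    displacement = 0
    time = 0.0
    for _ in range(num_events):
        time += rng.expovariate(n)            # total jump rate is n
        k = rng.randrange(n)                  # each particle rings at rate 1
        site = positions[k]
        target = (site + 1) % num_sites
        if not occupied[target]:              # exclusion rule: jump only if empty
            occupied[site] = False
            occupied[target] = True
            positions[k] = target
            if k == tagged:
                displacement += 1             # tagged particle advanced by 1
    return displacement / time
```

With the parameters above the estimate should fall near (1−α)=0.5 (the exact ring value is (N−n)/(N−1), which converges to 1−α as N↑∞).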

The second main result establishes a central limit theorem for the position of the tagged particle: \((Z_{t} - (1-\alpha)\mathfrak {m})/\sqrt{t}\) converges in distribution to a mean zero Gaussian random vector with a covariance matrix D(α) which depends on the surrounding density of particles. So far, this result has been proved in the mean zero case, and in the asymmetric case in dimension d≥3. It is conjectured to hold also in dimensions 1 and 2 in the asymmetric case, but there even the finiteness of the asymptotic variance has not yet been established.

1 The Exclusion Process as Seen from a Tagged Particle

Consider an exclusion process {η t :t≥0} on \(\mathbb{X}\) satisfying the assumptions of Sect. 5.1. Denote by {τ x :x∈ℤd} the group of translations on \(\mathbb{X}\) so that

$$ (\tau_x \eta) (y) = \eta(x+y) $$

for x, y in ℤd and a configuration η in \(\mathbb {X}\). The action of the translation group is naturally extended to functions and measures.

Consider a configuration η with a particle at the origin. Tag this particle and denote by Z t , t≥0, its position at time t. Let {ξ t :t≥0} be the state of the process as seen from the position of this tagged particle, \(\xi_{t} = \tau_{Z_{t}} \eta_{t}\), and notice that the origin always stays occupied, \(\xi_{t}(0)=(\tau_{Z_{t}} \eta_{t})(0) = \eta_{t}(Z_{t})=1\). In particular, we may consider ξ t either as a configuration of \(\mathbb{X}\) with a particle at the origin or as a configuration of \(\mathbb{X}^{*}=\{0,1\}^{\mathbb{Z}^{d}_{*}}\), where \(\mathbb{Z}^{d}_{*} = \mathbb{Z}^{d} \setminus\{0\}\). We adopt here the latter convention and denote by the Greek letter ξ the configurations of \(\mathbb{X}^{*}\).

The evolution of ξ t corresponds to a Feller process which we now describe. Assume that \(\mathbb{X}^{*}\) is endowed with the product topology which turns \(\mathbb{X}^{*}\) into a metrizable, compact space. Denote by \(C(\mathbb{X}^{*})\) the space of continuous functions on \(\mathbb {X}^{*}\), regarded as a Banach space with norm

$$ \Vert f\Vert= \sup_{\xi\in\mathbb{X}^*} \big|f(\xi)\big|, $$

and by \(\mathcal{C}\subset C({{\mathbb{X}}^{*}})\) the space of functions which depend on the configuration ξ only through a finite number of coordinates, called the cylinder functions.

Let \(\{\theta_{x} : x\in\mathbb{Z}^{d}_{*}\}\) be the shift operators on \(\mathbb{X}^{*}\) defined as follows:

$$ (\theta_x \xi) (y) = \begin{cases} \xi(x) & \text{if } y=-x,\\ \xi(y+x) & \text{otherwise}. \end{cases} $$

This means that θ x ξ stands for the configuration where the tagged particle, sitting at the origin, is first transferred to site x and then all the configuration is translated by −x.
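On configurations with finitely many particles the action of θ x can be written out explicitly. The following sketch is our own illustration (configurations encoded as frozensets of occupied sites of \(\mathbb{Z}^{d}_{*}\), sites as integer tuples); it implements the displayed formula verbatim:

```python
def theta(x, occupied):
    """Apply the tagged-particle shift theta_x to a configuration of X*
    given as a frozenset of occupied sites of Z^d_* (integer tuples).
    Implements (theta_x xi)(y) = xi(x) if y == -x, xi(y + x) otherwise."""
    minus_x = tuple(-c for c in x)
    new_occ = set()
    # site -x of the new configuration carries the old value at x
    if x in occupied:
        new_occ.add(minus_x)
    # every other site y of the new configuration reads the old value at y + x
    for site in occupied:
        y = tuple(c - cx for c, cx in zip(site, x))
        if any(y) and y != minus_x:   # y must lie in Z^d_* and differ from -x
            new_occ.add(y)
    return frozenset(new_occ)
```

For instance, in d=1, applying θ 1 to the configuration with particles at 2 and 3 (site 1 empty, so the tagged particle may jump there) yields particles at 1 and 2: the tagged particle moved to site 1 and the whole picture was recentered.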

Let \(\mathcal{L}\) be the operator defined on \(\mathcal{C}\) as the sum \(\mathcal{L} = \mathcal{L}_{0} + \mathcal{L}_{\theta}\), where \(\mathcal{L}_{0}\), \(\mathcal{L}_{\theta}\) are given by

$$\begin{aligned} (\mathcal{L}_{0} f)(\xi) &= \sum_{x,y\in\mathbb{Z}^{d}_{*}} p(y-x)\, \xi(x)\bigl[1-\xi(y)\bigr] \bigl\{ f\bigl(\sigma^{x,y}\xi\bigr) - f(\xi)\bigr\} , \\ (\mathcal{L}_{\theta} f)(\xi) &= \sum_{z\in\mathbb{Z}^{d}_{*}} p(z)\bigl[1-\xi(z)\bigr] \bigl\{ f(\theta_{z}\xi) - f(\xi)\bigr\} . \end{aligned}$$

(6.1)

The first operator takes into account the jumps of the environment, while the second one corresponds to the jumps of the tagged particle.

It is not difficult to check that the operator \(\mathcal{L}\) defined on \(\mathcal{C}\) is a Markov pre-generator (Liggett, 1985, Definition I.2.1). By the proof of Liggett (1985, Theorem I.3.9) the closure of \(\mathcal{L}\), still denoted by \(\mathcal{L}\), is a Markov generator and \(\mathcal{C}\) is a core for \(\mathcal{L}\). Denote by {S(t):t≥0} the strictly Markovian, Feller semigroup on \(C(\mathbb{X}^{*})\) associated to the generator \(\mathcal{L}\) through the Hille–Yosida theorem.

Let \(D([0,\infty), \mathbb{X}^{*})\) be the space of r.c.l.l. trajectories \(\xi:[0,\infty) \to\mathbb{X}^{*}\) and let {Π t :t≥0} be the canonical projections defined by Π t (ξ)=ξ t . We shall represent by \(\mathcal{F}^{0}\) the smallest σ-algebra on \(D([0,\infty), \mathbb{X}^{*})\) which renders the projections Π s , s≥0, measurable, and by \(\mathcal{F}^{0}_{t}\) the smallest σ-algebra relative to which all the mappings Π s , 0≤s≤t, are measurable. Denote by \(\{\mathbb{P}_{\xi}: \xi\in\mathbb{X}^{*}\}\) the normal Markov process associated to the semigroup {S(t):t≥0}. Expectation with respect to ℙ ξ is represented by \(\mathbb{E}_{\xi}\).

For 0≤α≤1, let \(\nu_{\alpha}^{*}\) be the Bernoulli product measure on \(\mathbb{X}^{*}\) with marginals given by

$$ \nu_\alpha^* \bigl\{\xi: \xi(x) =1\bigr\} = \alpha $$

for x in \(\mathbb{Z}^{d}_{*}\).

Proposition 6.1

The Bernoulli measures \(\{\nu^{*}_{\alpha}, 0\le\alpha\le1\}\) are invariant for the Markov process {ξ t :t≥0}.

Proof

By a simple change of variables, for any cylinder functions f, g and any \(z \in\mathbb{Z}^{d}_{*}\),

$$ \int f(\theta_z \xi) g(\xi) \bigl[1-\xi(z)\bigr] \nu^*_\alpha(d \xi) = \int f(\xi) g(\theta_{-z}\xi) \bigl[1-\xi(-z)\bigr] \nu^*_\alpha (d \xi) . $$

It follows from this identity and from (5.4) that

$$ \langle g, \mathcal{L} f\rangle_{\nu_{\alpha}^{*}} = \langle \mathcal{L}_{p^{*}} g, f\rangle_{\nu_{\alpha}^{*}} $$
(6.2)

for all cylinder functions f, g. In this formula, \(\langle \cdot, \cdot \rangle_{\nu_{\alpha}^{*}}\) represents the scalar product in \(L^{2}(\nu_{\alpha}^{*})\) and \(\mathcal{L}_{p^{*}}\) the generator \(\mathcal{L}\) defined in (6.1) with p* in place of p, where p*(z)=p(−z). In particular, \(\int \mathcal{L}f\, d\nu_{\alpha}^{*} = 0\) for any cylinder function f, which proves the proposition in view of Liggett (1985, Proposition I.2.13). □

Note that the probability measures \(\nu^{*}_{\alpha}\) are invariant for the operators \(\mathcal{L}_{0}\), \(\mathcal{L}_{\theta}\) taken individually if and only if the probability measure p is symmetric.

For a probability measure μ on \(\mathbb{X}^{*}\), denote by ℙ μ the measure on \((D([0,\infty),\mathbb{X}^{*}), \mathcal{F}^{0})\) given by ∫ℙ ξ μ(dξ). Expectation with respect to ℙ μ is denoted by \(\mathbb{E}_{\mu}\). For 0≤α≤1, denote by \((D([0,\infty),\mathbb{X}^{*}), \mathcal{F}, \mathbb{P}_{\nu^{*}_{\alpha}}, \{\mathcal{F}_{t} : t\ge 0\})\) the usual augmentation of the filtered space \((D([0,\infty),\mathbb{X}^{*}), \mathcal{F}^{0}, \mathbb{P}_{\nu^{*}_{\alpha}}, \{\mathcal{F}^{0}_{t} : t\ge 0\})\). By Theorem 8.11 and Proposition 8.12 of Chap. 1 of Blumenthal and Getoor (1968), {ξ t :t≥0} is a strong Markov process with respect to the augmented filtration.

As in the previous chapter, the semigroup {S(t):t≥0} extends to a Markov semigroup on \(L^{2}(\nu^{*}_{\alpha})\) whose generator \(\mathcal{L}_{\nu^{*}_{\alpha}}\) is the closure of \(\mathcal{L}\) in \(L^{2}(\nu^{*}_{\alpha})\). Since the density α remains fixed, to keep notation simple we denote \(\mathcal{L}_{\nu^{*}_{\alpha}}\) by \(\mathcal{L}\), and by \(\mathcal{D}(\mathcal{L})\) the domain of \(\mathcal{L}\) in \(L^{2}(\nu^{*}_{\alpha})\). We adopt the same convention with the operator \(\mathcal{L}_{p^{*}}\) introduced above, and use the same notation \(\mathcal{L}_{p^{*}}\) to represent the generator acting on \(C(\mathbb{X}^{*})\) and on \(L^{2}(\nu^{*}_{\alpha})\).

Denote by \(\mathcal{L}^{*}\) the adjoint of \(\mathcal{L}\) in \(L^{2}(\nu^{*}_{\alpha})\). The next result follows from the proof of Lemma 5.2 and from (6.2).

Lemma 6.2

The adjoint operator \(\mathcal{L}^{*}\) is a generator and \(\mathcal{L}^{*} = \mathcal{L}_{p^{*}}\). In particular, \(\mathcal{C}\) is a common core for \(\mathcal{L}\) and \(\mathcal{L}^{*}\), and \(\mathcal{L}\) is self-adjoint with respect to each \(\nu^{*}_{\alpha}\) when p is symmetric.

Recall that s(⋅) (resp. a(⋅)) stands for the symmetric (resp. anti-symmetric) part of the probability measure p. For any cylinder function f,

$$ \langle f, (-\mathcal{L}) f\rangle_{\nu_{\alpha}^{*}} =: \mathcal{D}(f) = \mathcal{D}_{0}(f) + \mathcal{D}_{\theta}(f) , $$
(6.3)

where

$$\begin{aligned} \mathcal{D}_{0}(f) &= \frac{1}{2} \sum_{x,y\in\mathbb{Z}^{d}_{*}} s(y-x) \int \xi(x)\bigl[1-\xi(y)\bigr] \bigl(T^{x,y} f\bigr)(\xi)^{2}\, \nu_{\alpha}^{*}(d\xi) , \\ \mathcal{D}_{\theta}(f) &= \frac{1}{2} \sum_{z\in\mathbb{Z}^{d}_{*}} s(z) \int \bigl[1-\xi(z)\bigr] \bigl(T^{z} f\bigr)(\xi)^{2}\, \nu_{\alpha}^{*}(d\xi) . \end{aligned}$$

In this formula and below, for a function g in \(L^{2}(\nu^{*}_{\alpha})\),

$$ \bigl(T^{x,y} g\bigr) (\xi) = g\bigl(\sigma^{x,y}\xi\bigr) - g(\xi),\quad \bigl(T^{z} g\bigr) (\xi) = g( \theta_z\xi) - g(\xi). $$

Since (T x,y f)(ξ) vanishes unless ξ(x)≠ξ(y) and since T x,y f=T y,x f, we may rewrite \(\mathcal{D}_{0}(f)\) as

$$ \mathcal{D}_{0}(f) = \frac{1}{4} \sum_{x,y\in\mathbb{Z}^{d}_{*}} s(y-x) \int \bigl(T^{x,y} f\bigr)(\xi)^{2}\, \nu_{\alpha}^{*}(d\xi) . $$
The explicit formulas for the Dirichlet forms written above hold also for functions f in the domain \(\mathcal{D}(\mathcal{L})\) of the generator, and the series defined on the right-hand side converge absolutely. This can be proved by an approximation argument (cf. Liggett, 1985, Lemma IV.4.3).

Observe that, in contrast with what the notation suggests, in the case where the probability measure p(⋅) is not symmetric, \(\mathcal{D}_{0}(f)\) is not equal to \(\langle (-\mathcal{L}_{0}) f, f\rangle_{\nu_{\alpha}^{*}}\). Denote by \(\mathcal{S}_{0}\) the generator \(\mathcal{L}_{0}\) introduced in (6.1) with the probability measure p(⋅) replaced by its symmetric part s(⋅). A straightforward computation shows that

$$ \mathcal{D}_{0}(f) = \bigl\langle (-\mathcal{S}_{0}) f, f\bigr\rangle_{\nu_{\alpha}^{*}} \ne \bigl\langle (-\mathcal{L}_{0}) f, f\bigr\rangle_{\nu_{\alpha}^{*}} , \quad f\in\mathcal{C} . $$
(6.4)

In particular, if p is not symmetric, \(\mathcal{S}_{0}\) is not the symmetric part of the operator \(\mathcal{L}_{0}\), even though the symmetric part of \(\mathcal{L} = \mathcal{L}_{0} + \mathcal{L}_{\theta}\) is \(\mathcal{S}_{0} + \mathcal{S}_{\theta}\), where \(\mathcal{S}_{\theta}\) is the generator \(\mathcal{L}_{\theta}\) defined in (6.1) with the probability measure p replaced by its symmetric part s.

We claim that the measures \(\{\nu_{\alpha}^{*} : 0\le\alpha\le1\}\) are ergodic in all but one degenerate case.

Theorem 6.3

Assume that d≥2, or that d=1 and p(⋅) is not nearest neighbor: ∑ x≠±1 p(x)>0. Then, for any 0≤α≤1, \(\nu^{*}_{\alpha}\) is ergodic for \(\mathcal{L}\).

Proof

The proof follows closely the one of Theorem 5.3. Fix a function \(f\in L^{2}(\nu^{*}_{\alpha})\) invariant for the (L 2 extension of the) semigroup generated by \(\mathcal{L}\). Then, f is in the domain of \(\mathcal{L}\) and \(\mathcal{L}f=0\). By multiplying both sides of this equation by f and integrating, we obtain that

$$ \mathcal{D}_{0}(f) + \mathcal{D}_{\theta}(f) = \langle f, (-\mathcal{L}) f\rangle_{\nu_{\alpha}^{*}} = 0 , $$

so that, in particular, \(\mathcal{D}_{0}(f) = 0\).

Under our assumptions (d≥2 or d=1 and p is not nearest neighbor), the support of s(⋅) generates \(\mathbb{Z}_{*}^{d}\). Hence, for any \(x,y \in\mathbb{Z}_{*}^{d}\)

$$ f\bigl(\sigma^{x,y} \xi\bigr) = f(\xi) \quad\nu^*_\alpha\hbox{-a.e.} $$

By De Finetti’s theorem we conclude that f is constant \(\nu^{*}_{\alpha}\)-a.e. □

Remark 6.4

Without any further mention, we exclude from now on the degenerate one-dimensional case with only nearest neighbor jumps.

The main results of this chapter state a law of large numbers and a central limit theorem for the position of the tagged particle:

Theorem 6.5

Fix 0≤α≤1. Then, as t↑∞, Z t /t converges \(\mathbb{P}_{\nu_{\alpha}^{*}}\)-almost surely to \([1-\alpha] \mathfrak {m}\), where \(\mathfrak{m} = \sum_{x\in\mathbb{Z}^{d}} x p(x)\).

Theorem 6.6

Assume that \(\mathfrak{m} =0\) or that d≥3. Then, under \(\mathbb {P}_{\nu _{\alpha}^{*}}\),

$$ \frac{Z_{t} - (1-\alpha) \mathfrak{m} t}{\sqrt{t}} $$

converges in distribution, as t↑∞, to a mean zero Gaussian random vector with covariance matrix denoted by D(α).

As a quadratic form, D(α) is strictly positive and finite: there exists a strictly positive and finite constant C 0, depending only on p, such that

$$ C_0^{-1} \alpha (1-\alpha) |\mathfrak{a}|^2 \le \mathfrak{a} \cdot D(\alpha) \mathfrak{a} \le C_0 (1- \alpha) |\mathfrak{a}|^2 $$

for all \(\mathfrak{a}\) in \(\mathbb{R}^{d}\).

The matrix D(α) is called the self-diffusion matrix of the exclusion process. In this chapter, we adopt the convention that C 0 stands for a constant which depends only on the transition probability p and which may change from line to line.

Remark 6.7

  1. (i)

The result holds in the stronger form introduced in Definition 2.6: conditioned on the initial configuration, the random vector \([Z_{t} - (1-\alpha) \mathfrak{m} t]/\sqrt{t}\) converges in \(L^{1}(\mathbb {P}_{\nu_{\alpha}^{*}})\) to a mean zero Gaussian random vector with covariance matrix D(α).

  2. (ii)

Denote by \(Z^{N}_{t}\), N≥1, t≥0, the rescaled position of the tagged particle speeded up by N:

    $$ Z^N_t = \frac{Z_{tN} - t N (1-\alpha) \mathfrak{m}}{\sqrt{N}} \cdot $$

    An invariance principle holds: Under \(\mathbb{P}_{\nu_{\alpha}^{*}}\), the sequence of processes \(\{Z^{N}_{t} : t\ge0\}\) converges in D([0,∞),ℝd) to a d-dimensional Brownian motion with diffusion matrix D(α).

  3. (iii)

    It is conjectured that this theorem holds also in dimensions 1 and 2 in the case \(\mathfrak{m} \neq0\), but this is still an open problem. Actually, in these cases we are not even able to prove that the asymptotic variance is finite.

The proof of Theorem 6.6 follows the ideas discussed in Chap. 5 for additive functionals of simple exclusion processes. Some complications arise, however, due to the presence of the very non-local operators associated to the translations.

2 Elementary Martingales

It is useful to represent the position of the tagged particle in terms of elementary orthogonal martingales associated to the jumps of the process.

For z such that p(z)>0 and for 0≤s<t, denote by \(N^{z}_{[s,t]}\) the total number of jumps of the tagged particle from the origin to z in the time interval [s,t]. In the same way, for x, y in \(\mathbb{Z}^{d}_{*}\) such that p(yx)>0, denote by \(N^{x,y}_{[s,t]}\) the total number of jumps of a particle from x to y in the time interval [s,t]. Let \(N^{z}_{t} = N^{z}_{[0,t]}\), \(N^{x,y}_{t} = N^{x,y}_{[0,t]}\).

Lemma 6.8

For x, y, z in \(\mathbb{Z}_{*}^{d}\) such that p(z)>0, p(y−x)>0, let

$$ M^{z}_{t} = N^{z}_{t} - \int_{0}^{t} p(z) \bigl[1-\xi_{s}(z)\bigr]\, ds , \qquad M^{x,y}_{t} = N^{x,y}_{t} - \int_{0}^{t} p(y-x)\, \xi_{s}(x) \bigl[1-\xi_{s}(y)\bigr]\, ds . $$

Then \(\{M^{z}_{t} : p(z)>0\}\), \(\{M^{x,y}_{t} : x,y\in\mathbb{Z}^{d}_{*} , p(y-x)>0\}\) are orthogonal martingales with quadratic variations \(\langle M^{z}\rangle_{t}\), \(\langle M^{x,y}\rangle_{t}\) given by

$$ \bigl\langle M^z\bigr\rangle_t = \int _0^t p(z) \bigl[1-\xi_s(z)\bigr] \,ds,\qquad \bigl\langle M^{x,y}\bigr\rangle_t = \int _0^t p(y-x) \xi_s(x) \bigl[1- \xi_s(y)\bigr]\, ds. $$

Proof

Fix x,y, z in \(\mathbb{Z}_{*}^{d}\) such that p(z)>0, p(yx)>0. It is easy to check that \((\xi_{t}, N^{z}_{t}, N^{x,y}_{t})\) is a Markov process on \(\mathbb{X}^{*}\times\mathbb{Z} \times\mathbb{Z}\) with generator \({{\mathcal{L}}_{x,y,z}}\) given by

$$\begin{aligned} (\mathcal{L}_{x,y,z} f)(\xi,k,j) ={}& p(y-x)\, \xi(x)\bigl[1-\xi(y)\bigr] \bigl\{ f\bigl(\sigma^{x,y}\xi, k, j+1\bigr) - f(\xi,k,j)\bigr\} \\ &+ p(z)\bigl[1-\xi(z)\bigr] \bigl\{ f(\theta_{z}\xi, k+1, j) - f(\xi,k,j)\bigr\} \\ &+ \sum_{x',y'\in\mathbb{Z}^{d}_{*}} p(y'-x')\, \xi(x')\bigl[1-\xi(y')\bigr] \bigl\{ f\bigl(\sigma^{x',y'}\xi, k, j\bigr) - f(\xi,k,j)\bigr\} \\ &+ \sum_{z'\ne z} p(z')\bigl[1-\xi(z')\bigr] \bigl\{ f(\theta_{z'}\xi, k, j) - f(\xi,k,j)\bigr\} . \end{aligned}$$

In this formula the first sum is carried over all pairs (x′,y′) in \(\mathbb{Z}^{d}_{*} \times\mathbb{Z}^{d}_{*}\) different from (x,y). Dynkin’s formula applied to the functions F 1(ξ,k,j)=k, F 2(ξ,k,j)=j shows that \(M^{z}_{t}\), \(M^{x,y}_{t}\) are martingales with quadratic variation as stated in the lemma.

To show that these martingales are orthogonal, we need to prove that \(M^{z}_{t} M^{x,y}_{t}\) is a martingale. Let W z (ξ)=p(z)[1−ξ(z)], W x,y (ξ)=p(yx)ξ(x)[1−ξ(y)]. Dynkin’s formula applied to F(ξ,k,j)=kj gives that

$$ N^z_t N^{x,y}_t - \int _0^t \bigl\{ N^z_s W_{x,y}(\xi_s) + N^{x,y}_s W_z(\xi_s) \bigr\}\, ds $$

is a martingale. Rewriting the processes \(N^{z}_{t}\), \(N^{x,y}_{t}\) as \(M^{z}_{t} + \int_{0}^{t} W_{z}(\xi_{s}) ds\), \(M^{x,y}_{t} + \int_{0}^{t} W_{x,y}(\xi_{s}) ds\) and integrating by parts we obtain that

$$\begin{aligned} M^{z}_{t} M^{x,y}_{t} &+ \int_{0}^{t} \bigl\{ M^{z}_{s} W_{x,y}(\xi_{s}) + M^{x,y}_{s} W_{z}(\xi_{s}) \bigr\}\, ds \\ &+ \int_{0}^{t} W_{x,y}(\xi_{s})\, ds \int_{0}^{t} W_{z}(\xi_{s})\, ds - \int_{0}^{t} \bigl\{ N^{z}_{s} W_{x,y}(\xi_{s}) + N^{x,y}_{s} W_{z}(\xi_{s}) \bigr\}\, ds \end{aligned}$$

is a martingale. Expressing the martingales M z, M x,y inside the first time integral in terms of the jump processes N z, N x,y we obtain that

$$\begin{aligned} M^{z}_{t} M^{x,y}_{t} &- \int_{0}^{t} \int_{0}^{s} W_{z}(\xi_{r})\, dr\, W_{x,y}(\xi_{s})\, ds \\ &- \int_{0}^{t} \int_{0}^{s} W_{x,y}(\xi_{r})\, dr\, W_{z}(\xi_{s})\, ds + \int_{0}^{t} W_{x,y}(\xi_{s})\, ds \int_{0}^{t} W_{z}(\xi_{s})\, ds \end{aligned}$$

is a martingale. An integration by parts shows that the integrals cancel so that \(M^{z}_{t} M^{x,y}_{t}\) is a martingale as claimed.

The same argument applies to any pair of distinct martingales in the set \(\{M^{z}_{t} : p(z)>0\} \cup\{M^{x,y}_{t} : x,y\in\mathbb{Z}^{d}_{*} , p(y-x)>0\}\). This concludes the proof of the lemma. □

These martingales associated to the jumps of the Markov process are called the elementary martingales because any martingale given by Dynkin’s formula can be written in terms of them: Fix a cylinder function \(f\colon\mathbb{X}^{*}\to\mathbb{R}\). By Dynkin’s formula,

$$ M^{f}_{t} = f(\xi_{t}) - f(\xi_{0}) - \int_{0}^{t} (\mathcal{L} f)(\xi_{s})\, ds $$
(6.5)

is a martingale. Since f is a cylinder function, rewriting f(ξ t )−f(ξ 0) as the sum of all differences arising from a jump to or from a site contained in the support of f in the time interval [0,t], we obtain that

$$ f(\xi_t) - f(\xi_0) = \sum _{x,y \in\mathbb{Z}_*^d} \int_0^t \bigl(T^{x,y} f\bigr) (\xi_{s-}) \, d N^{x,y}_s + \sum_{z \in\mathbb{Z}_*^d} \int_0^t \bigl(T^z f\bigr) (\xi_{s-}) \, d N^{z}_s $$

so that

$$ M_t^f = \sum _{x,y \in\mathbb{Z}_*^d} \int_0^t \bigl(T^{x,y} f\bigr) (\xi_{s-}) \, d M^{x,y}_s + \sum_{z \in\mathbb{Z}_*^d} \int_0^t \bigl(T^z f\bigr) (\xi_{s-}) \, d M^{z}_s . $$
(6.6)

In (6.6) and below we set \(M^{z}_{t} = 0\) if p(z)=0 and \(M^{x, y}_{t} = 0\) if p(y−x)=0.

By (2.16), we know that

$$ \mathbb{E}_{\nu_{\alpha}^{*}}\bigl[\bigl(M^{f}_{t}\bigr)^{2}\bigr] = 2t\, \mathcal{D}(f) . $$
(6.7)

We derive this identity below, taking advantage of the representation of \(M^{f}_{t}\) in terms of the elementary martingales. As we shall see at the end of this section, this computation allows us to describe the limits of \(L^{2}(\mathbb{P}_{\nu^{*}_{\alpha}})\)-Cauchy sequences of martingales \(M^{f}_{t}\).

Since the elementary martingales are orthogonal, the quadratic variation \(\langle M^{f}\rangle_{t}\) of the martingale M f defined by (6.6) is given by

$$\begin{aligned} \bigl\langle M^{f} \bigr\rangle_{t} ={}& \sum_{x,y\in\mathbb{Z}^{d}_{*}} p(y-x) \int_{0}^{t} \xi_{s}(x)\bigl[1-\xi_{s}(y)\bigr] \bigl(T^{x,y} f\bigr)(\xi_{s})^{2}\, ds \\ &+ \sum_{z\in\mathbb{Z}^{d}_{*}} p(z) \int_{0}^{t} \bigl[1-\xi_{s}(z)\bigr] \bigl(T^{z} f\bigr)(\xi_{s})^{2}\, ds \end{aligned}$$

so that

$$ \mathbb{E}_{\nu_{\alpha}^{*}}\bigl[\bigl(M^{f}_{t}\bigr)^{2}\bigr] = t \sum_{x,y\in\mathbb{Z}^{d}_{*}} p(y-x)\, E_{\nu_{\alpha}^{*}}\bigl[ \xi(x)\bigl[1-\xi(y)\bigr] \bigl(T^{x,y} f\bigr)(\xi)^{2} \bigr] + t \sum_{z\in\mathbb{Z}^{d}_{*}} p(z)\, E_{\nu_{\alpha}^{*}}\bigl[ \bigl[1-\xi(z)\bigr] \bigl(T^{z} f\bigr)(\xi)^{2} \bigr] . $$

To obtain the expression of the Dirichlet form given in (6.3), we need to replace p by s and to remove the indicator ξ(x)[1−ξ(y)] in the first sum. Since

$$ \bigl(T^{x,y}f\bigr)(\xi) = \bigl(T^{y,x}f\bigr)(\xi) , \qquad \bigl(T^{x,y}f\bigr)\bigl(\sigma^{x,y}\xi\bigr) = -\bigl(T^{x,y}f\bigr)(\xi) , \qquad \bigl(T^{z}f\bigr)(\theta_{-z}\xi) = -\bigl(T^{-z}f\bigr)(\xi) . $$
(6.8)

the change of variables ξ′=θ z ξ, ξ″=σ x,y ξ permits us to replace p by s. Observing now that

$$ \bigl(T^{x,y}f\bigr) (\xi)^2 \bigl\{\xi(x) \bigl[1-\xi(y)\bigr] + \xi(y)\bigl[1-\xi(x)\bigr] \bigr\} = \bigl(T^{x,y}f\bigr) (\xi)^2 , $$
(6.9)

we obtain (6.7).
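For the reader's convenience, the two replacements just performed can be spelled out in two displays; this is a routine verification using only the invariance of \(\nu^{*}_{\alpha}\) under the exchanges \(\sigma^{x,y}\) together with (6.8) and (6.9):

```latex
% Step 1: the change of variables xi'' = sigma^{x,y} xi preserves nu*_alpha
% and, by (6.8), only changes the sign of (T^{x,y}f); squaring removes it:
\int \xi(x)\bigl[1-\xi(y)\bigr]\bigl(T^{x,y}f\bigr)(\xi)^2\,\nu^*_\alpha(d\xi)
   = \int \xi(y)\bigl[1-\xi(x)\bigr]\bigl(T^{x,y}f\bigr)(\xi)^2\,\nu^*_\alpha(d\xi),
% so averaging the (x,y) and (y,x) terms replaces p(y-x) by s(y-x).
% Step 2: symmetrizing once more and applying (6.9) drops the indicator:
\sum_{x,y\in\mathbb{Z}^d_*} s(y-x)
   \int \xi(x)\bigl[1-\xi(y)\bigr]\bigl(T^{x,y}f\bigr)(\xi)^2\,\nu^*_\alpha(d\xi)
 = \frac12 \sum_{x,y\in\mathbb{Z}^d_*} s(y-x)
   \int \bigl(T^{x,y}f\bigr)(\xi)^2\,\nu^*_\alpha(d\xi).
```

The translation part is handled in the same way, with the change of variables ξ′=θ z ξ in place of ξ″=σ x,y ξ.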

We claim that the representation (6.6) extends to functions in the domain \(\mathcal{D}(\mathcal{L}).\)

Lemma 6.9

Fix a function u in the domain \(\mathcal{D}(\mathcal{L}).\) Then, the martingale \(M^{u}_{t}\), defined by (6.5) with u in place of f, can be represented as in (6.6):

$$ M^u_t = \sum_{x,y \in\mathbb{Z}_*^d} \int _0^t \bigl(T^{x,y} u\bigr) ( \xi_{s-}) \, d M^{x,y}_s + \sum _{z \in\mathbb{Z}_*^d} \int_0^t \bigl(T^z u\bigr) (\xi_{s-}) \, d M^{z}_s . $$

Proof

Fix such a function u. Since u belongs to the domain of the generator and since the space of cylinder functions forms a core for the generator in \(L^{2}(\nu^{*}_{\alpha})\), there exists a sequence of cylinder functions {f n :n≥1} such that f n , \(\mathcal{L}f_{n}\) converge in \(L^{2}(\nu^{*}_{\alpha})\) to u, \(\mathcal{L}u\), respectively. In particular, for every t>0, the martingale \(M^{f_{n}}_{t}\), defined by (6.5) with f n in place of f, converges in \(L^{2}(\mathbb{P}_{\nu^{*}_{\alpha}})\), as n↑∞, to the martingale \(M^{u}_{t}\), defined by (6.5) with u in place of f.

In view of (6.3), (6.6) with u in place of f defines a martingale in \(L^{2}(\nu^{*}_{\alpha})\) because the partial sums form a Cauchy sequence in \(L^{2}(\nu^{*}_{\alpha})\). Denote this martingale by \(m^{u}_{t}\).

To show that \(M^{u}_{t} = m^{u}_{t}\), it is enough to show that \(M^{f_{n}}_{t}\) converges in \(L^{2}(\mathbb{P}_{\nu^{*}_{\alpha}})\) to \(m^{u}_{t}\). Since f n is a cylinder function, the martingale \(M^{f_{n}}_{t}\) can be represented through the elementary martingales M z, M x,y by (6.6). Since the martingales M z, M x,y are orthogonal, by the computations performed right after (6.7),

$$ \frac{1}{t}\, \mathbb{E}_{\nu_{\alpha}^{*}}\bigl[\bigl(M^{f_{n}}_{t} - m^{u}_{t}\bigr)^{2}\bigr] = 2\, \mathcal{D}(f_{n} - u) . $$

This expression vanishes as n↑∞ by the choice of the sequence {f n :n≥1}. This concludes the proof of the lemma. □

3 The Spaces \({{\mathcal{H}}_{1}}\) and \({{\mathcal{H}}_{-1}}\)

In this section, we examine the \(L^{2}(\mathbb{P}_{\nu^{*}_{\alpha}})\) limits of the martingales \(M^{f}_{t}\), where f is a cylinder function. As in Sect. 2.2, denote by \(\mathcal{H}_{1}\) the Hilbert space generated by the space of cylinder functions endowed with the scalar product \(\langle f, (-\mathcal{L}) g\rangle_{\nu_{\alpha}^{*}}\). Let ∥⋅∥1 be the norm in \(\mathcal{H}_{1}\). We have seen in (6.3) that

$$ \Vert f\Vert_{1}^{2} = \mathcal{D}_{0}(f) + \mathcal{D}_{\theta}(f) $$

for functions f in \(\mathcal{C}.\) This identity extends to the domain \(\mathcal{D}(\mathcal{L})\) because \(\mathcal{C}\) forms a core for \(\mathcal{L}\).

Denote by \({{\mathcal{H}}_{-1}}\) the dual space of \({{\mathcal{H}}_{1}}\) defined as in (2.6) with \(\nu_{\alpha}^{*}\) in place of π. Recall that the \({{\mathcal{H}}_{-1}}\) norm is defined by the variational formula

$$ \Vert f\Vert_{-1}^{2} = \sup_{g\in\mathcal{C}} \bigl\{ 2 \langle f, g\rangle_{\nu_{\alpha}^{*}} - \Vert g\Vert_{1}^{2} \bigr\} , \quad f\in L^{2}\bigl(\nu_{\alpha}^{*}\bigr) . $$

As in (2.8), (2.9), it is easy to check from this variational formula that for every function f in \(\mathcal{H}_{1}\) and every function g in \(L^{2}(\nu_{\alpha}^{*}) \cap \mathcal{H}_{-1}\),

$$ \big\vert\langle f,g\rangle_{\nu_\alpha^*} \big\vert\le \Vert f\Vert_1 \Vert g\Vert_{-1} . $$
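The inequality is obtained from the variational formula by optimizing over scalar multiples of a test function; we spell this out for cylinder f (the general case follows by approximating f in \(\mathcal{H}_{1}\)):

```latex
% Take h = a f, with a real, as test function in the variational formula
% for the H_{-1} norm of g:
\Vert g\Vert_{-1}^2 \;\ge\; 2a\,\langle g, f\rangle_{\nu^*_\alpha}
   - a^2\,\Vert f\Vert_1^2 \qquad \text{for every } a\in\mathbb{R}.
% The right-hand side is maximized at a = \langle g,f\rangle / \Vert f\Vert_1^2,
% which yields the claimed Cauchy--Schwarz-type bound:
\langle g, f\rangle_{\nu^*_\alpha}^2 \;\le\; \Vert f\Vert_1^2\,\Vert g\Vert_{-1}^2 .
```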

The same variational formula shows that a function in \(L^{2}(\nu_{\alpha}^{*})\) belongs to \(\mathcal{H}_{-1}\) if and only if there exists a finite constant C 1 such that

$$ \langle f, g\rangle_{\nu_\alpha^*} \le C_1 \Vert g \Vert_{1} $$
(6.10)

for every g in \(\mathcal{C}\). In this case, ∥f∥−1≤C 1.

Denote by \(\mathbb{L}^{2}(\nu^{*}_{\alpha})\) the space of sequences \(\varPsi= \{\varPsi_{z} : \mathbb{X}^{*} \to\mathbb{R} ; p(z)>0\} \times\{\varPsi_{x,y} : \mathbb{X}^{*} \to\mathbb{R} ; x,y \in\mathbb{Z}^{d}_{*} , p(y-x)>0\}\) of \(L^{2}(\nu^{*}_{\alpha})\) functions such that

$$ \sum_{z\in\mathbb{Z}^d_*} p(z) E_{\nu^*_\alpha} \bigl[ \bigl[1- \xi(z)\bigr] \varPsi_z (\xi)^2 \bigr] + \sum _{x,y \in\mathbb{Z}^d_*} p(y-x) E_{\nu^*_\alpha} \bigl[ \xi(x) \bigl[1-\xi(y) \bigr] \varPsi_{x,y} (\xi)^2 \bigr] $$

is finite. \(\mathbb{L}^{2}(\nu^{*}_{\alpha})\) is endowed with the scalar product 〈⋅,⋅〉 defined by

$$\begin{aligned} \langle \varPsi, \varPhi\rangle ={}& \sum_{z\in\mathbb{Z}^{d}_{*}} p(z)\, E_{\nu_{\alpha}^{*}}\bigl[ \bigl[1-\xi(z)\bigr] \varPsi_{z}(\xi)\, \varPhi_{z}(\xi) \bigr] \\ &+ \sum_{x,y\in\mathbb{Z}^{d}_{*}} p(y-x)\, E_{\nu_{\alpha}^{*}}\bigl[ \xi(x)\bigl[1-\xi(y)\bigr] \varPsi_{x,y}(\xi)\, \varPhi_{x,y}(\xi) \bigr] . \end{aligned}$$

A function u in the domain \(\mathcal{D}(\mathcal{L})\) induces a sequence in \(\mathbb{L}^{2}(\nu^{*}_{\alpha})\), denoted by Ψ u and given by \(\varPsi^{u}_{z} = T^{z} u\), \(\varPsi^{u}_{x,y} = T^{x,y} u\). Moreover, for Ψ in \(\mathbb {L}^{2}(\nu^{*}_{\alpha})\), M Ψ defined by (6.6), with Ψ z , Ψ x,y in place of T z f, T x,y f, defines a square integrable martingale such that \(\mathbb{E}_{\nu^{*}_{\alpha}}[(M^{\varPsi}_{t})^{2}] = t \langle\varPsi, \varPsi\rangle\).

In view of (6.8), (6.9), denote by \(\mathbb{L}_{0}^{2}(\nu^{*}_{\alpha})\) the closed subspace of \(\mathbb{L}^{2}(\nu^{*}_{\alpha})\) composed of all sequences Ψ such that, \(\nu^{*}_{\alpha}\)-a.s.,

$$ \varPsi_{x,y} = \varPsi_{y,x} , \qquad \varPsi_{x,y}\bigl(\sigma^{x,y}\xi\bigr) = - \varPsi_{x,y}(\xi) , \qquad \varPsi_{z}(\theta_{-z}\xi) = - \varPsi_{-z}(\xi) . $$

(6.11)

Repeating the computation performed right after (6.7) and taking advantage of the relations (6.11), we obtain that

$$\begin{aligned} t^{-1}\, \mathbb{E}_{\nu_{\alpha}^{*}}\bigl[\bigl(M^{\varPsi}_{t}\bigr)^{2}\bigr] = \langle \varPsi, \varPsi\rangle ={}& \frac{1}{2} \sum_{x,y\in\mathbb{Z}^{d}_{*}} s(y-x) \int \varPsi_{x,y}(\xi)^{2}\, \nu_{\alpha}^{*}(d\xi) \\ &+ \sum_{z\in\mathbb{Z}^{d}_{*}} s(z) \int \bigl[1-\xi(z)\bigr] \varPsi_{z}(\xi)^{2}\, \nu_{\alpha}^{*}(d\xi) . \end{aligned}$$
(6.12)

Lemma 6.10

Consider a sequence of cylinder functions {f n :n≥1} which forms a Cauchy sequence in \({{\mathcal{H}}_{1}}.\) Then, there exists Ψ in \(\mathbb{L}^{2}_{0}(\nu^{*}_{\alpha})\) such that \(M^{f_{n}}_{t}\) converges to \(M^{\varPsi}_{t}\) in \(L^{2}(\mathbb{P}_{\nu^{*}_{\alpha}})\) for all t≥0.

Proof

A sequence {f n :n≥1} of cylinder functions forms a Cauchy sequence in \(\mathcal{H}_{1}\) if and only if \(\{\varPsi^{f_{n}} : n\ge1\}\) forms a Cauchy sequence in \(\mathbb{L}^{2}(\nu^{*}_{\alpha})\). In particular, \(\varPsi^{f_{n}}\) converges in \(\mathbb{L}^{2}(\nu^{*}_{\alpha})\) to some Ψ which belongs to \(\mathbb{L}^{2}_{0}(\nu^{*}_{\alpha})\) because this space is closed. By (6.12), \(M^{f_{n}}_{t}\) converges to \(M^{\varPsi}_{t}\) in \(L^{2}(\mathbb{P}_{\nu^{*}_{\alpha}})\). □

4 Law of Large Numbers

In this section, we prove Theorem 6.5. Recall the jump processes \(\{N_{t}^{z} : t\ge0\}\) introduced in Sect. 6.2. The position at time t of the tagged particle is obtained by multiplying, for each z, the number of jumps of size z by z and summing:

$$ Z_t= \sum_{z\in\mathbb{Z}^d_*} z N_t^z = \sum_{z\in \mathbb{Z}^d_*} z M_t^z+ \int_0^t \tilde{V}(\xi_s)\, ds , $$
(6.13)

where \(\tilde{V}\) is the cylinder function given by

$$ \tilde{V}(\xi) = \sum_{x\in\mathbb{Z}^d_*} x p(x) \bigl[1-\xi (x)\bigr]. $$

Under the stationary measure \(\mathbb{P}_{\nu_{\alpha}^{*}}\), the quadratic variation of the vector-valued martingale \(M_{t} = \sum_{x\in\mathbb {Z}^{d}_{*}} x M_{t}^{x}\) is bounded by C 0 t. In particular, as t↑∞, M t /t converges to 0, \(\mathbb{P}_{\nu_{\alpha}^{*}}\)-a.s. (cf. Feller, 1971, Theorem VII.9.3). On the other hand, since \(\mathbb{P}_{\nu_{\alpha}^{*}}\) is ergodic, as t↑∞, \(t^{-1}\int_{0}^{t} \tilde{V}(\xi_{s})\, ds\) converges a.s. to \(E_{\nu^{*}_{\alpha}} [\tilde{V}] = \mathfrak{m} (1- \alpha)\). This proves the theorem.
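Both ingredients of this argument reduce to one-line computations under \(\nu^{*}_{\alpha}\):

```latex
% Mean of V-tilde: each factor 1 - xi(x) has expectation 1 - alpha, so
E_{\nu^*_\alpha}\bigl[\tilde V\bigr]
   = \sum_{x\in\mathbb{Z}^d_*} x\, p(x)\,(1-\alpha) = (1-\alpha)\,\mathfrak{m}.
% Quadratic variation bound: by orthogonality of the elementary martingales,
\mathbb{E}_{\nu^*_\alpha}\bigl[\,|M_t|^2\,\bigr]
   = \sum_{x\in\mathbb{Z}^d_*} |x|^2\, p(x)
     \int_0^t \mathbb{E}_{\nu^*_\alpha}\bigl[1-\xi_s(x)\bigr]\,ds
   \;\le\; t \sum_{x\in\mathbb{Z}^d_*} |x|^2\, p(x),
% and the last sum is finite because p has finite range.
```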

5 Central Limit Theorem

In this section, we prove Theorem 6.6 following the strategy presented in Chap. 2. Recall the definition of Z t , the position of the tagged particle. By (6.13),

$$ Z_t - (1-\alpha) t \mathfrak{m} = \sum _{z\in\mathbb {Z}^d_*} z M_{t}^z + \int _0^{t} V(\xi_s)\, ds , $$

where V is the mean zero cylinder function

$$ V(\xi) = \tilde{V}(\xi) - (1-\alpha) \mathfrak{m} = \sum _{x\in\mathbb{Z}^d} x p(x) \bigl\{\alpha- \xi(x)\bigr\} . $$
(6.14)

We have seen in Chap. 2 that the proof of a central limit theorem for additive functionals of Markov processes relies on bounds in \(\mathcal{H}_{-1}\). Fix a vector \(\mathfrak{a}\in\mathbb{R}^{d}\) and let \(V_{\mathfrak{a}} = \mathfrak{a} \cdot V\). Denote by u λ the solution of the resolvent equation

$$ \lambda u_{\lambda} - \mathcal{L} u_{\lambda} = V_{\mathfrak{a}} . $$
(6.15)

In this section, we prove Theorem 6.6 assuming that for every vector \(\mathfrak{a}\in\mathbb{R}^{d}\),

(6.16)

The six measure-preserving transformations are \(\sigma^{-e_{2}, e_{1}}\), \(\sigma^{-e_{1}, -e_{1}-e_{2}}\), \(\sigma^{e_{1}+e_{2}, e_{2}}\), \(\theta_{-e_{2}}\), \(\theta_{-e_{1}}\), \(\theta_{e_{1}+e_{2}}\). We have to estimate \(\langle f, (-\mathcal{L}_{3} - \mathcal{L}_{\theta}) g\rangle_{\nu_{\alpha}^{*}}\). This expression consists of six terms. To fix ideas, consider the first one, which is multiplied by 1−ξ(e 1). A simple computation shows that

$$ \theta_{-e_1} \circ\sigma^{-e_1, -e_1-e_2} \circ\theta_{e_1+e_2} \circ\sigma^{e_1+e_2, e_2} \circ\theta_{-e_2} \circ\sigma^{-e_2, e_1} \xi= \xi, $$

provided ξ(e 1)=0. Similar identities hold for the other five cases. We may therefore write the generator \(\mathcal{L}_{\theta} + \mathcal{L}_{3}\) as the generator L appearing in the statement of Lemma 5.8 and deduce that

where n=6. Here again we may replace the generator \(\mathcal{L}_{3}+\mathcal{L}_{\theta}\) by \(\mathcal{L}\) on the right-hand side. This bound, together with (6.23), proves the sector condition for the mean zero exclusion process as seen from the tagged particle. □
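The composition identity above lends itself to a mechanical check. Below is a minimal sketch in which configurations are encoded as finite sets of occupied sites of \(\mathbb{Z}^{2}_{*}\); the conventions for \(\sigma^{x,y}\) (exchange of occupation variables) and \(\theta_{z}\) (environment shift after a tagged-particle jump by z, taken here as \((\theta_{z}\xi)(x)=\xi(x+z)\) for x≠−z and \((\theta_{z}\xi)(-z)=\xi(z)\)) are assumptions spelled out in the comments, not definitions quoted from the text.

```python
# Sanity check (under the stated conventions) of the identity
#   theta_{-e1} o sigma^{-e1,-e1-e2} o theta_{e1+e2} o sigma^{e1+e2,e2}
#     o theta_{-e2} o sigma^{-e2,e1} (xi) = xi    whenever xi(e1) = 0.
import random

E1, E2 = (1, 0), (0, 1)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def neg(u):
    return (-u[0], -u[1])

def sigma(xi, x, y):
    """Exchange the occupation variables at sites x and y."""
    s = set(xi)
    ox, oy = x in s, y in s
    s.discard(x)
    s.discard(y)
    if oy:
        s.add(x)
    if ox:
        s.add(y)
    return frozenset(s)

def theta(xi, z):
    """Environment seen from the tagged particle after a jump by z:
    (theta_z xi)(x) = xi(x+z) for x != -z, (theta_z xi)(-z) = xi(z)."""
    s = {add(w, neg(z)) for w in xi if w != z}
    if z in xi:
        s.add(neg(z))
    return frozenset(s)

def composed(xi):
    """Apply the six transformations, rightmost first."""
    xi = sigma(xi, neg(E2), E1)
    xi = theta(xi, neg(E2))
    xi = sigma(xi, add(E1, E2), E2)
    xi = theta(xi, add(E1, E2))
    xi = sigma(xi, neg(E1), neg(add(E1, E2)))
    xi = theta(xi, neg(E1))
    return xi

random.seed(0)
sites = [(i, j) for i in range(-3, 4) for j in range(-3, 4)
         if (i, j) not in ((0, 0), E1)]   # origin excluded, e1 kept empty
for _ in range(200):
    xi = frozenset(w for w in sites if random.random() < 0.4)
    assert composed(xi) == xi
print("identity verified on 200 random configurations")
```

On random finite configurations with ξ(e 1)=0 the composed map returns the initial configuration, as the identity predicts.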

6 Duality

If \(\mathfrak{m} \neq0\), the proof of the central limit theorem for the position of the tagged particle relies on the dual spaces and operators introduced in Sect. 5.4. The presence of the tagged particle and the corresponding shift operators create new difficulties.

For n≥0, denote by \(\mathcal{E}_{n}^{*}\) the collection of subsets of \(\mathbb{Z}_{*}^{d}\) with n points and let \(\mathcal{E}^{*}=\bigcup_{n\ge 0}\mathcal{E}_{n}^{*}\). For each A in \(\mathcal{E}_{n}^{*}\), let Ψ A be the cylinder function

$$ \varPsi_A (\xi) = \prod_{x\in A} \frac{ \xi(x) -\alpha}{ \sqrt{\chi(\alpha)}} , $$

where χ(α)=α(1−α). By convention, \(\varPsi_{\varnothing}=1\). The class \(\{\varPsi_{A}, A\in\mathcal{E}^{*}\}\) forms an orthonormal basis of \(L^{2} (\nu^{*}_{\alpha})\). For each n≥0, denote by \(\mathcal{A}_{n}\) the subspace of \(L^{2}(\nu^{*}_{\alpha})\) generated by \(\{\varPsi_{A}, A\in\mathcal{E}_{n}^{*}\}\), so that \(L^{2}(\nu_{\alpha}^{*})=\oplus_{n\ge 0}\mathcal{A}_{n}\). Let \(\mathcal{G}_{n}=\oplus_{0\le k\le n}\mathcal{A}_{k}\). Functions in \(\mathcal{G}_{n}\setminus\mathcal{G}_{n-1}\) are said to have degree n, functions in \(\mathcal{G}=\bigcup_{n\ge 0}\mathcal{G}_{n}\) to have finite degree, while functions in \(\mathcal{A}_{n}\) are said to be monomials of degree n.
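Orthonormality follows directly from the product structure of \(\nu^{*}_{\alpha}\): under this measure the variables ξ(x) are independent with mean α and variance χ(α), so for A, B in \(\mathcal{E}^{*}\),

```latex
E_{\nu^*_\alpha}[\varPsi_A \varPsi_B]
  = \prod_{x\in A\cap B} E_{\nu^*_\alpha}\Bigl[\tfrac{(\xi(x)-\alpha)^2}{\chi(\alpha)}\Bigr]
    \prod_{x\in A\,\triangle\, B} E_{\nu^*_\alpha}\Bigl[\tfrac{\xi(x)-\alpha}{\sqrt{\chi(\alpha)}}\Bigr]
  = \mathbf{1}\{A=B\},
```

since each factor over the symmetric difference has mean zero.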

Consider a cylinder function f. Since \(\{\varPsi_{A} : A\in\mathcal{E}^{*}\}\) is a basis of \(L^{2}(\nu^{*}_{\alpha})\), there exists a finitely supported function \(\mathfrak{f}:\mathcal{E}^{*}\to \mathbb{R}\) such that

$$ f=\sum_{A\in\mathcal{E}^{*}}\mathfrak{f}(A)\varPsi_{A}=\sum_{n\ge 0}\sum_{A\in\mathcal{E}_{n}^{*}}\mathfrak{f}(A)\varPsi_{A} . $$

The function \(\mathfrak{f}\) represents the Fourier coefficients of the cylinder function f and is denoted by \(\mathfrak{F} f\) when we want to stress its dependence on f. \(\mathfrak{f}:{{\mathcal{E}}^{*}}\to \mathbb{R}\) is a function of finite support because f is a cylinder function. Moreover, the Fourier coefficients \(\mathfrak{f}(A)\) depend not only on f but also on the density α: \(\mathfrak{f}(A) = \mathfrak{f} (\alpha,A)\). Note finally that \(\mathfrak{f}(\varnothing) = E_{\nu^{*}_{\alpha}} [f]\).

Denote by Π n the orthogonal projection on \(\mathcal{A}_{n}\), as well as the restriction to \(\mathcal{E}_{n}^{*}\) of a finitely supported function \(\mathfrak{f}:\mathcal{E}^{*}\to \mathbb{R}\): \((\varPi_{n}\mathfrak{f})(A)=\mathfrak{f}(A)\mathbf{1}\{A\in\mathcal{E}_{n}^{*}\}\). With this definition, \(\mathfrak{F} (\varPi_{n} f) = \varPi_{n} \mathfrak{F} f\).

Denote by ℭ the space of finitely supported functions \(\mathfrak{f}:\mathcal{E}^{*}\to \mathbb{R}\), by μ ⋆ the counting measure on \(\mathcal{E}^{*}\) and by \(\langle\cdot, \cdot\rangle_{\mu_{\star}}\) the scalar product in L 2(μ ⋆). For any two cylinder functions \(f=\sum_{A\in\mathcal{E}^{*}}\mathfrak{f}(A)\varPsi_{A}\), \(g=\sum_{A\in\mathcal{E}^{*}}\mathfrak{g}(A)\varPsi_{A}\),

$$ \langle f,g\rangle_{\nu_{\alpha}^{*}}=\sum_{A,B\in\mathcal{E}^{*}}\mathfrak{f}(A)\mathfrak{g}(B)\langle\varPsi_{A},\varPsi_{B}\rangle_{\nu_{\alpha}^{*}}=\sum_{A\in\mathcal{E}^{*}}\mathfrak{f}(A)\mathfrak{g}(A)=\langle\mathfrak{f},\mathfrak{g}\rangle_{\mu_{\star}} . $$

In particular, the map \(\mathfrak{F}: L^{2}(\nu^{*}_{\alpha}) \to L^{2}(\mu_{\star})\) is an isomorphism.

To examine how the generator acts on the Fourier coefficients, recall the definition of the set A x,y introduced in (5.15). To represent the shift operator, for A in \(\mathcal{E}^{*}\), denote by θ y A the set defined by

$$ \theta_y A = \begin{cases} A + y & \text{if } -y \notin A ,\\ (A+y)_{0,y} & \text{if } -y \in A, \end{cases} $$
(6.24)

where B+z is the set {x+z : x∈B}. Therefore, if −y belongs to A, we first translate A by y (obtaining a new set which contains the origin), and then we remove the origin and add the site y. Of course, \(\theta_{y}:\mathcal{E}_{n}^{*}\to \mathcal{E}_{n}^{*}\) is a one-to-one function, and a straightforward computation shows that \(\varPsi_{A} (\theta_{y} \xi) = \varPsi_{\theta_{y} A}(\xi)\) for every configuration \(\xi\in\mathbb{X}^{*}\).
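In the nontrivial case −y∈A the identity \(\varPsi_{A}(\theta_{y}\xi)=\varPsi_{\theta_{y}A}(\xi)\) can be checked factor by factor, using the convention (assumed here, consistent with (6.24)) that \((\theta_{y}\xi)(x)=\xi(x+y)\) for x≠−y and \((\theta_{y}\xi)(-y)=\xi(y)\):

```latex
\varPsi_A(\theta_y\xi)
 = \frac{(\theta_y\xi)(-y)-\alpha}{\sqrt{\chi(\alpha)}}
   \prod_{x\in A,\ x\neq -y} \frac{\xi(x+y)-\alpha}{\sqrt{\chi(\alpha)}}
 = \frac{\xi(y)-\alpha}{\sqrt{\chi(\alpha)}}
   \prod_{z\in (A+y)\setminus\{0\}} \frac{\xi(z)-\alpha}{\sqrt{\chi(\alpha)}}
 = \varPsi_{(A+y)_{0,y}}(\xi),
```

because \((A+y)_{0,y}=((A+y)\setminus\{0\})\cup\{y\}\) when −y∈A.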

Fix a cylinder function f. A straightforward computation shows that

$$ \mathcal{L}f=\sum_{A\in\mathcal{E}^{*}}(\mathfrak{L}_{*,\alpha}\mathfrak{F}f)(A)\varPsi_{A} , $$

where

$$ \mathfrak{L}_{\ast, \alpha} = \mathfrak{L}_{0,\alpha} + \mathfrak{L}_{\theta,\alpha} $$

and

$$ \begin{gathered} \mathfrak{L}_{0,\alpha} = \mathfrak{S}_{\star} + (1-2\alpha)\, \mathfrak{N}_{\star} + \sqrt{\chi(\alpha)}\, \mathfrak{J}_{0,+} + \sqrt{\chi(\alpha)}\, \mathfrak{J}_{0,-} , \\ \mathfrak{L}_{\theta,\alpha} = \alpha\, \mathfrak{L}_{\theta,p,1} + (1-\alpha)\, \mathfrak{L}_{\theta,p,2} + \sqrt{\chi(\alpha)}\, \mathfrak{J}_{\theta,p,+} + \sqrt{\chi(\alpha)}\, \mathfrak{J}_{\theta,p,-} . \end{gathered} $$

The operators \(\mathfrak{S}_{\star}\), \(\mathfrak{N}_{\star}\), \(\mathfrak{J}_{0,+}\), \(\mathfrak{J}_{0,-}\) are given by

(6.25)

for any function \(\mathfrak{f}:\mathcal{E}^{*}\to \mathbb{R}\) of finite support. Note that the generator \(\mathfrak{S}_{\star}\) defined above is different from the generator \(\mathfrak{S}_{\star}\) of Sect. 5.4 because the summation is carried over sites in \(\mathbb{Z}^{d}_{*}\). The operators \(\mathfrak {L}_{\theta, p, 1}\), \(\mathfrak{L}_{\theta, p, 2}\), \(\mathfrak{J}_{\theta,p,+}\), \(\mathfrak{J}_{\theta,p,-}\) are defined as follows:

$$ \everymath{\displaystyle} \begin{array}{l} (\mathfrak{L}_{\theta, p, 1} \mathfrak{f}) (A) = \sum_{x\in A} p(x) \bigl\{ \mathfrak{f} (\theta_{-x} A) - \mathfrak {f}(A) \bigr\} , \\[9pt] (\mathfrak{L}_{\theta, p, 2} \mathfrak{f}) (A) = \sum _{x\notin A} p(x) \bigl\{ \mathfrak{f} (\theta_{-x} A) - \mathfrak{f}(A) \bigr\} , \\[9pt] (\mathfrak{J}_{\theta, p, +} \mathfrak{f}) (A) = \sum _{x\in A} p(x) \bigl\{ \mathfrak{f} \bigl(A\backslash\{x\}\bigr) - \mathfrak{f}\bigl([\theta_{-x} A]\backslash \{-x\}\bigr) \bigr\} , \\[9pt] (\mathfrak{J}_{\theta, p, -} \mathfrak{f}) (A) = \sum _{x \notin A} p(x) \bigl\{ \mathfrak{f} \bigl(A\cup\{x\}\bigr) - \mathfrak{f}\bigl([\theta_{-x} A]\cup\{-x\}\bigr) \bigr\} \end{array} $$
(6.26)

for any function \(\mathfrak{f}:\mathcal{E}^{*}\to \mathbb{R}\) of finite support. In these equations the origin never appears in the summation because p(0)=0. We denote by \(\mathfrak{L}_{\theta, q, 1}\), \(\mathfrak {L}_{\theta, q, 2}\), \(\mathfrak{J}_{\theta, q, +}\), \(\mathfrak{J}_{\theta, q, -}\) the operators \(\mathfrak{L}_{\theta, p, 1}\), \(\mathfrak{L}_{\theta, p, 2}\), \(\mathfrak{J}_{\theta, p, +}\), \(\mathfrak{J}_{\theta, p, -}\) defined above with p replaced by q, where \(q=p^{*}\), s or a, and \(p^{*}(x)=p(-x)\).

Up to this point we have proved that

$$ \mathfrak{F}\mathcal{L}=\mathfrak{L}_{*,\alpha}\mathfrak{F} . $$

To proceed, we rewrite the operator \(\mathfrak{L}_{\ast, \alpha}\) as a sum of operators which are either symmetric or anti-symmetric. Notice first that \(\mathfrak{S}_{\star}\) is a symmetric operator in L 2(μ ) which preserves the degree of a function. In contrast with the previous chapter, we shall see that it does not carry all the symmetric part of the operator \(\mathfrak{L}_{\ast, \alpha}\).

The operators \(\mathfrak{N}_{\star}\), \(\mathfrak{L}_{\theta, p, 1}\) and \(\mathfrak{L}_{\theta, p, 2}\) also preserve the degree of functions. The special role played by the origin introduces some complications: \(\mathfrak {N}_{\star}\) is not anti-symmetric, as its notation suggests, but carries a piece which is compensated by \(\mathfrak{L}_{\theta, p, 1}\) and \(\mathfrak {L}_{\theta, p, 2}\). More precisely, let \(U:\mathcal{E}^{*}\to \mathbb{R}\) be given by

$$ U(A) = \sum_{x\in A} a(x) . $$

Note that U vanishes on sets A which do not contain points close to the origin. Denote by \(\mathfrak{N}_{\star}^{*}\), \(\mathfrak {L}^{*}_{\theta, p, 1}\) and \(\mathfrak{L}^{*}_{\theta, p, 2}\) the adjoints of \(\mathfrak{N}_{\star}\), \(\mathfrak{L}_{\theta, p, 1}\) and \(\mathfrak{L}_{\theta, p, 2}\), respectively, in L 2(μ ). An elementary computation shows that for every finitely supported function \(\mathfrak{f}\text{:}{{\mathcal{E}}^{*}}\to \mathbb{R}\)

$$ \begin{gathered} (\mathfrak{N}_{\star}^{*}\mathfrak{f})(A)=-(\mathfrak{N}_{\star}\mathfrak{f})(A)-2U(A)\mathfrak{f}(A), \\ (\mathfrak{L}_{\theta,p,1}^{*}\mathfrak{f})(A)=(\mathfrak{L}_{\theta,p^{*},1}\mathfrak{f})(A)-2U(A)\mathfrak{f}(A), \\ (\mathfrak{L}_{\theta,p,2}^{*}\mathfrak{f})(A)=(\mathfrak{L}_{\theta,p^{*},2}\mathfrak{f})(A)+2U(A)\mathfrak{f}(A). \end{gathered} $$

The derivations of these identities rely on the facts that ∑ x∈A a(x)=−∑ x∉A a(x) and ∑ x,y∈A a(y−x)=0, which hold because a is anti-symmetric.
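Both facts are one-line consequences of the anti-symmetry a(−x)=−a(x) (recall a(0)=0, so the sum of a over \(\mathbb{Z}^{d}_{*}\) vanishes):

```latex
\sum_{x\in\mathbb{Z}^d_*} a(x)=0
 \ \Longrightarrow\
 \sum_{x\in A} a(x) = -\sum_{x\notin A} a(x),
 \qquad
 \sum_{x,y\in A} a(y-x) = -\sum_{x,y\in A} a(x-y) = -\sum_{x,y\in A} a(y-x) ,
```

the last equality by exchanging the roles of x and y, so that the double sum equals its own negative and must vanish.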

It follows from the previous computation that the symmetric and anti-symmetric parts of the operator \(\mathfrak{L}_{\theta, p, 1} - \mathfrak{N}_{\star}\) are \(\mathfrak{L}_{\theta, s, 1}\) and \(\mathfrak{L}_{\theta, a, 1} - \mathfrak{N}_{\star}\), respectively, while the symmetric and anti-symmetric parts of the operator \(\mathfrak{L}_{\theta, p, 2} + \mathfrak{N}_{\star}\) are \(\mathfrak{L}_{\theta, s, 2}\) and \(\mathfrak{L}_{\theta, a, 2} + \mathfrak{N}_{\star}\), respectively. It is therefore natural to rewrite the operator \((1-2 \alpha) \mathfrak{N}_{\star}+ \alpha \mathfrak{L}_{\theta, p, 1} + (1-\alpha) \mathfrak{L}_{\theta, p, 2} \) as

$$ \alpha(\mathfrak{L}_{\theta, p, 1} - \mathfrak{N}_\star) + (1-\alpha) (\mathfrak{L}_{\theta, p, 2} + \mathfrak{N}_\star). $$

Furthermore, \(\mathfrak{L}_{\theta, s, 1}\) and \(\mathfrak{L}_{\theta , s, 2}\) are symmetric operators in L 2(μ ) which preserve the degrees of functions, while the operators \(\mathfrak{L}_{\theta, a, 1} - \mathfrak{N}_{\star}\), \(\mathfrak{L}_{\theta, a, 2} + \mathfrak{N}_{\star}\) are anti-symmetric operators in L 2(μ ) which preserve the degrees of functions.

We turn now to the operators which do not preserve the degree of a function: \(\mathfrak{J}_{0, +}\), \(\mathfrak{J}_{0,-}\), \(\mathfrak {J}_{\theta, p, +}\), \(\mathfrak{J}_{\theta, p, -}\). The adjoint of these operators can be written as

$$ \begin{gathered} (\mathfrak{J}_{0,+}^{*}\mathfrak{f})(A)=-(\mathfrak{J}_{0,-}\mathfrak{f})(A)-2\sum_{y\notin A} a(y)\,\mathfrak{f}(A\cup\{y\}), \\ (\mathfrak{J}_{0,-}^{*}\mathfrak{f})(A)=-(\mathfrak{J}_{0,+}\mathfrak{f})(A)-2\sum_{y\in A} a(y)\,\mathfrak{f}(A\backslash\{y\}), \\ (\mathfrak{J}_{\theta,p,+}^{*}\mathfrak{f})(A)=(\mathfrak{J}_{\theta,p^{*},-}\mathfrak{f})(A)+2\sum_{y\notin A} a(y)\,\mathfrak{f}(A\cup\{y\}), \\ (\mathfrak{J}_{\theta,p,-}^{*}\mathfrak{f})(A)=(\mathfrak{J}_{\theta,p^{*},+}\mathfrak{f})(A)+2\sum_{y\in A} a(y)\,\mathfrak{f}(A\backslash\{y\}). \end{gathered} $$

Note that the indices ± on the left-hand side are changed to ∓ on the right-hand side. It follows from these identities that

$$ \mathfrak{J}^*_{\theta, s, +} = \mathfrak{J}_{\theta, s, -}, \quad \{ \mathfrak{J}_{0, +} + \mathfrak{J}_{\theta, a, +}\}^* = - \{ \mathfrak{J}_{0, -} + \mathfrak{J}_{\theta, a, -}\} . $$

Therefore, \(\mathfrak{J}_{\theta, s, +} +\mathfrak{J}_{\theta, s, -}\) is symmetric in L 2(μ ), while \(\mathfrak{J}_{0, +} + \mathfrak{J}_{0, -} + \mathfrak{J}_{\theta, a, +} +\mathfrak{J}_{\theta, a, -}\) is anti-symmetric.

Let \(\mathfrak{J}_{+} = \mathfrak{J}_{0,+} + \mathfrak{J}_{\theta, p, +}\), \(\mathfrak{J}_{-} = \mathfrak{J}_{0,-} + \mathfrak{J}_{\theta, p, -}\). With this notation we have that

$$ \mathfrak{L}_{\ast, \alpha}= \mathfrak{S}_\star + \alpha\{ \mathfrak{L}_{\theta, p, 1} - \mathfrak{N}_\star\} + (1-\alpha) \{ \mathfrak{L}_{\theta, p, 2} + \mathfrak {N}_\star\} + \sqrt{\chi( \alpha)} \{ \mathfrak{J}_{+} + \mathfrak {J}_{-} \}. $$
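As a consistency check, expanding \(\mathfrak{J}_{\pm}\) and collecting the degree-preserving terms recovers the decomposition \(\mathfrak{L}_{*,\alpha}=\mathfrak{L}_{0,\alpha}+\mathfrak{L}_{\theta,\alpha}\):

```latex
\mathfrak{S}_\star
 + \alpha\{\mathfrak{L}_{\theta,p,1}-\mathfrak{N}_\star\}
 + (1-\alpha)\{\mathfrak{L}_{\theta,p,2}+\mathfrak{N}_\star\}
 + \sqrt{\chi(\alpha)}\{\mathfrak{J}_{+}+\mathfrak{J}_{-}\}
 = \underbrace{\mathfrak{S}_\star + (1-2\alpha)\mathfrak{N}_\star
   + \sqrt{\chi(\alpha)}\,(\mathfrak{J}_{0,+}+\mathfrak{J}_{0,-})}_{\mathfrak{L}_{0,\alpha}}
 + \underbrace{\alpha\,\mathfrak{L}_{\theta,p,1} + (1-\alpha)\,\mathfrak{L}_{\theta,p,2}
   + \sqrt{\chi(\alpha)}\,(\mathfrak{J}_{\theta,p,+}+\mathfrak{J}_{\theta,p,-})}_{\mathfrak{L}_{\theta,\alpha}} ,
```

since the coefficients of \(\mathfrak{N}_{\star}\) combine as \(-\alpha+(1-\alpha)=1-2\alpha\).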

The Spaces \(\pmb{\mathfrak{H}_{0,1}}\) and \(\pmb{\mathfrak{H}_{0,-1}}\)

Denote by ℌ0,1 the Hilbert space generated by the finitely supported functions endowed with the scalar product \(\langle \mathfrak {f}, (-\mathfrak{S}_{\star}) \mathfrak{g}\rangle_{\mu_{\star}}\). We use the same notation ∥⋅∥0,1 to denote the norm of the Hilbert space ℌ0,1 and the norm of the Hilbert space \({{\mathcal{H}}_{0,1}}\) introduced at the end of Sect. 6.5. An elementary computation shows that

$$ \Vert\mathfrak{f}\Vert_{0,1}^{2}=\langle\mathfrak{f},(-\mathfrak{S}_{\star})\mathfrak{f}\rangle_{\mu_{\star}}=\frac{1}{4}\sum_{x,y\in\mathbb{Z}_{*}^{d}}s(y-x)\sum_{A\in\mathcal{E}^{*}}\bigl\{\mathfrak{f}(A_{x,y})-\mathfrak{f}(A)\bigr\}^{2} $$
(6.27)

for all finitely supported functions \(\mathfrak{f}:{{\mathcal{E}}^{*}}\to \mathbb{R}.\)

Let ℌ0,−1 be the dual space of ℌ0,1, given by (2.6) with μ ⋆, ℭ in place of π, \(\mathcal{C}\). Here also ∥⋅∥0,−1 stands for both the norm of ℌ0,−1 and the norm of \(\mathcal{H}_{0,-1}\).

Fix a cylinder function f in \(\mathcal{C}\) and recall that we denote by \({{\mathcal{S}}_{0}}\) the generator \({{\mathcal{L}}_{0}}\) with the probability measure p replaced by its symmetric part s. An elementary computation shows that \(\mathfrak{F}\mathcal{S}_{0}f=\mathfrak{S}_{\star}\mathfrak{F}f\) so that

$$ \mathfrak{F}\mathcal{S}_{0}=\mathfrak{S}_{\star}\mathfrak{F} . $$

Hence, since \(\mathfrak{F}\) is an isomorphism from \(L^{2}(\nu^{*}_{\alpha})\) to L 2(μ ), for any cylinder function f,

$$ \Vert f\Vert_{0,1}^{2}=\langle f,(-\mathcal{S}_{0})f\rangle_{\nu_{\alpha}^{*}}=\langle\mathfrak{f},(-\mathfrak{S}_{\star})\mathfrak{f}\rangle_{\mu_{\star}}=\Vert\mathfrak{f}\Vert_{0,1}^{2} , $$
(6.28)

where \(\mathfrak{f} = \mathfrak{F} f\). Therefore \(\mathfrak{F}\) is also an isomorphism from \(\mathcal{H}_{0,1}\) to ℌ0,1. By duality these identities extend to \(\mathcal{H}_{0,-1}\), ℌ0,−1. For every function \(f \in L^{2}(\nu^{*}_{\alpha})\),

$$ \Vert f\Vert_{0,-1}^2= \Vert\mathfrak{F} f \Vert_{0,-1}^2 . $$
(6.29)

The isomorphism from \(L^{2}(\nu^{*}_{\alpha})\) to L 2(μ ⋆) and the relation \(\mathfrak{F}\mathcal{L}=\mathfrak{L}_{*,\alpha}\mathfrak{F}\) also give that

$$ \langle f,(-\mathcal{L})f\rangle_{\nu_{\alpha}^{*}}=\langle\mathfrak{F}f,(-\mathfrak{L}_{*,\alpha})\mathfrak{F}f\rangle_{\mu_{\star}} $$
(6.30)

for all cylinder functions f, an identity needed later.

7 The Asymmetric Case in Dimension d≥3

In this section, we prove Theorem 6.6 for asymmetric exclusion processes in dimension d≥3. We have seen in Sect. 6.5 that the proof is reduced to the inspection of conditions (6.16).

This section is divided into two parts. We first prove that the cylinder function \(V_{\mathfrak{a}}\) belongs to \(\mathcal{H}_{0,-1}\) in dimensions d≥3. This part is based on the estimates of the Green function of transient Markov processes presented in Sect. 5.7. In the second part of the section we prove that the second condition in (6.16) holds, following the strategy presented in Sect. 2.7.4.

Lemma 6.15

Assume that d≥3. Then, the cylinder function \(V_{\mathfrak{a}}\) introduced in (6.14) belongs to \(\mathcal{H}_{0,-1}\) and \(\Vert V_{\mathfrak{a}} \Vert_{0,-1}^{2} \le C_{0} \alpha(1-\alpha) |\mathfrak{a}|^{2}\).

Proof

The cylinder function \(V_{\mathfrak{a}}\) can be written in the orthonormal basis as \(- \sqrt{\chi(\alpha)} \sum_{x\in\mathbb{Z}^{d}} x\cdot\mathfrak{a}\, p(x) \varPsi_{\{x\}}\) and therefore belongs to the space \(\mathcal{A}_{1}\) of monomials of degree 1. Since functions of different degrees are orthogonal both in \(L^{2}(\nu_{\alpha}^{*})\) and in \(\mathcal{H}_{0,-1}\), in the variational formula which defines the ∥⋅∥0,−1 norm we may restrict the supremum to cylinder functions of degree 1:

$$ \Vert V_{\mathfrak{a}}\Vert_{0,-1}^{2}=\sup_{f\in\mathcal{C}}\bigl\{2\langle V_{\mathfrak{a}},f\rangle_{\nu_{\alpha}^{*}}-\Vert f\Vert_{0,1}^{2}\bigr\}=\sup_{f\in\mathcal{C}\cap\mathcal{A}_{1}}\bigl\{2\langle V_{\mathfrak{a}},f\rangle_{\nu_{\alpha}^{*}}-\Vert f\Vert_{0,1}^{2}\bigr\} . $$

Denote the Fourier coefficients of the cylinder function f by \(\mathfrak{f}:{{\mathcal{E}}^{*}}\to \mathbb{R}\) so that \(f = \sum_{x} \mathfrak{f} (\{ x\}) \varPsi_{\{x\}}\). With this notation the previous variational formula can be written as

$$ \sup_{\mathfrak{f}} \biggl\{ - 2 \sqrt{\chi(\alpha)} \sum _{x} x\cdot\mathfrak{a} p(x) \mathfrak{f} \bigl(\{x\} \bigr) - D_0(\mathfrak {f}) \biggr\}. $$

In this formula the supremum is carried over all finitely supported functions \(\mathfrak{f}:\mathcal{E}_{1}^{*}\to \mathbb{R}\) and \(D_{0}(\mathfrak{f})\) corresponds to the Dirichlet form of a symmetric random walk on \(\mathbb{Z}^{d}_{*}\) in which a jump from x to y occurs at rate (1/2)s(yx):

$$ D_0(\mathfrak{f}) = \frac{1}{4} \sum_{x,y\in\mathbb{Z}^d_*} s(y-x) \bigl\{\mathfrak{f}\bigl(\{y\}\bigr) - \mathfrak{f} \bigl(\{x\}\bigr) \bigr\}^2. $$

By Schwarz inequality, the linear term in the variational formula is less than or equal to

$$ A \chi(\alpha) \sum_{x} (x\cdot \mathfrak{a})^2 p(x) + \frac{1}{A} \sum _{x} p(x) \mathfrak{f} \bigl(\{x\}\bigr)^2 $$

for all A>0. By Proposition 5.23, since the transition probability p has finite range, the second term is less than or equal to

$$ \frac{1}{A} \sup_{y\in\mathbb{Z}^d_*} G_0(y,y) D_0( \mathfrak{f}) , $$

where G 0(x,y) stands for the Green function of the random walk with Dirichlet form D 0 defined above. By Proposition 5.25, the Green function G 0 can be estimated by the Green function of the random walk on ℤd which jumps from x to y at rate (1/2)s(yx). The previous expression is thus less than or equal to \(C_{0} A^{-1} D_{0}(\mathfrak{f})\), for some finite constant C 0 depending only on p. Choosing A=C 0 we conclude the proof of the lemma. □
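Spelling out the final step: inserting the two bounds into the variational formula and choosing A=C 0 makes the Dirichlet-form contributions cancel,

```latex
2\sqrt{\chi(\alpha)} \Bigl| \sum_x x\cdot\mathfrak{a}\, p(x)\, \mathfrak{f}(\{x\}) \Bigr| - D_0(\mathfrak{f})
 \;\le\; A\,\chi(\alpha)\sum_x (x\cdot\mathfrak{a})^2 p(x)
   + \frac{C_0}{A}\, D_0(\mathfrak{f}) - D_0(\mathfrak{f})
 \;\overset{A=C_0}{=}\; C_0\,\chi(\alpha)\sum_x (x\cdot\mathfrak{a})^2 p(x) ,
```

and the last sum is bounded by \(|\mathfrak{a}|^{2}\sum_{x}|x|^{2}p(x)\), which is finite since p has finite range; this yields the bound \(C'_{0}\,\alpha(1-\alpha)|\mathfrak{a}|^{2}\) claimed in Lemma 6.15.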

We now turn to the proof of the second condition in (6.16). Let \(\mathfrak{u}_{\lambda}= \mathfrak{F} u_{\lambda}\), λ>0. Since \(\mathfrak{F}\mathcal{L}=\mathfrak{L}_{*,\alpha}\mathfrak{F}\), applying the operator \(\mathfrak{F}\) on both sides of the resolvent equation (6.15), we obtain that

$$ \lambda\mathfrak{u}_\lambda- \mathfrak{L}_{\ast, \alpha} \mathfrak{u}_\lambda= \mathfrak{V}_{\mathfrak{a}} , $$
(6.31)

where \(\mathfrak{V}_{\mathfrak{a}} = \mathfrak{F} V_{\mathfrak {a}}\). Since \(\mathfrak{L}_{*,\alpha}\mathfrak{u}_{\lambda}=\mathfrak{F}\mathcal{L}u_{\lambda}\), by (6.29), \(\Vert\mathcal{L}u_{\lambda}\Vert_{0,-1}=\Vert\mathfrak{L}_{*,\alpha}\mathfrak{u}_{\lambda}\Vert_{0,-1}\). Thus, in view of (6.21), to prove the second condition in (6.16) it is enough to show that

$$ \sup_{0<\lambda\le1} \Vert\mathfrak{L}_{\ast, \alpha} \mathfrak {u}_\lambda \Vert_{0,-1} <\infty. $$
(6.32)

We place ourselves in the set-up of Sect. 2.7.4, where μ ⋆ stands for π, and we have the following correspondence between the operators:

$$ \begin{gathered} B_{0}\to\alpha\bigl\{\mathfrak{L}_{\theta,p,1}-\mathfrak{N}_{\star}\bigr\}+(1-\alpha)\bigl\{\mathfrak{L}_{\theta,p,2}+\mathfrak{N}_{\star}\bigr\}, \\ S_{0}\to\mathfrak{S}_{\star},\qquad L_{+}\to\sqrt{\chi(\alpha)}\,\mathfrak{J}_{+},\qquad L_{-}\to\sqrt{\chi(\alpha)}\,\mathfrak{J}_{-}. \end{gathered} $$

By Lemma 2.23, (6.32) holds if we show that conditions (2.39), (2.40), (2.45) and (2.50) are in force. The first condition is trivially satisfied. In the present context the other three conditions read as follows.

There exists a finite constant C 0 such that

$$ 0 \le \langle\mathfrak{f}, -\mathfrak{S}_\star \mathfrak {f}\rangle_{\mu_\star} \le C_0 \bigl\langle \mathfrak{f}, (-\mathfrak{L}_{\ast, \alpha}) \mathfrak {f}\bigr \rangle_{\mu_\star} $$
(6.33)

for all functions \(\mathfrak{f}:{{\mathcal{E}}^{*}}\to \mathbb{R}\) of finite support.

There exist β<1 and a finite constant C 0 such that for all n≥0,

$$ \langle\mathfrak{J}_{\pm}\mathfrak{f},\mathfrak{g}\rangle_{\mu_{\star}}^{2}\le C_{0}\, n^{2\beta}\, \langle-\mathfrak{S}_{\star}\mathfrak{f},\mathfrak{f}\rangle_{\mu_{\star}}\, \langle-\mathfrak{S}_{\star}\mathfrak{g},\mathfrak{g}\rangle_{\mu_{\star}} $$
(6.34)

for any finitely supported functions \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R},\) \(\mathfrak{g}\text{:}\mathcal{E}_{n+1}^{*}\to \mathbb{R}.\)

As we have seen in Sect. 5.6, condition (2.50) is not expected to hold, but can be replaced by the following one. There exists a finite constant C 0 such that

$$ \| \varPi_n \mathfrak{N}_\star \mathfrak{u}_\lambda\|_{0,-1}^2 \le C_0 n \| \mathfrak{V}_{\mathfrak{a}}\|_{0,-1}^2 + C_0 n^3 \sum_{j=n-1}^{n+1} \Vert\varPi_j \mathfrak{u}_\lambda\Vert_{0,1}^2 $$
(6.35)

for all n≥1; and identical bounds with \(\mathfrak{L}_{\theta, p , j}\), j=1, 2, in place of \(\mathfrak{N}_{\star}\), because in our context the asymmetric part of the generator which keeps the degree is the operator \(\alpha \{ \mathfrak{L}_{\theta, p, 1} - \mathfrak{N}_{\star}\} + (1-\alpha) \{ \mathfrak{L}_{\theta, p , 2} + \mathfrak{N}_{\star}\}\).

In the remaining part of this section we prove assertions (6.33), (6.34), (6.35).

Condition (6.33)

This condition is straightforward since the operators \(\mathfrak {S}_{\star}\), \(\mathfrak{L}_{\ast, \alpha}\) are generators and therefore non-positive: For every finitely supported function \(\mathfrak{f}:{{\mathcal{E}}^{*}}\to \mathbb{R}\text{,}\) by (6.28), (6.30),

$$ 0\le\langle-\mathfrak{S}_{\star}\mathfrak{f},\mathfrak{f}\rangle_{\mu_{\star}}=\langle-\mathcal{S}_{0}f,f\rangle_{\nu_{\alpha}^{*}}\le\langle-\mathcal{L}f,f\rangle_{\nu_{\alpha}^{*}}=\langle\mathfrak{f},(-\mathfrak{L}_{*,\alpha})\mathfrak{f}\rangle_{\mu_{\star}} , $$

where f is the cylinder function \(f(\xi)=\sum_{A\in\mathcal{E}^{*}}\mathfrak{f}(A)\varPsi_{A}(\xi)\).

Condition (6.34)

We start with a graded sector condition on the symmetric diagonal operators.

Lemma 6.16

Fix j=1, 2. There exists a constant C 0, depending only on p(⋅), such that for all n≥1

$$ \langle -\mathfrak{L}_{\theta, s, j} \mathfrak{f}, \mathfrak{f} \rangle_{\mu_\star} \le C_0 n \langle- \mathfrak{S}_\star\mathfrak{f}, \mathfrak{f} \rangle_{\mu_\star} $$

for all finitely supported functions \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R}\).

Proof

Set j=1 and fix n≥1 and a finitely supported function \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R}.\) An elementary computation shows that

$$ \langle-\mathfrak{L}_{\theta, s, 1} \mathfrak{f}, \mathfrak{f} \rangle_{\mu_\star} = (1/2) \sum_{z\in\mathbb{Z}_*^d} s(z) \sum_{A\ni z} \bigl\{\mathfrak {f}(\theta_{-z} A) - \mathfrak{f}(A)\bigr\}^2. $$

Since the sum over z is finite, we need to estimate for each z

$$ \sum_{A\in\mathcal{E}_{n}^{*}}\bigl\{\mathfrak{f}(\theta_{-z}A)-\mathfrak{f}(A)\bigr\}^{2} $$

in terms of the Dirichlet form \(\langle-\mathfrak{S}_{\star}\mathfrak {f}, \mathfrak{f} \rangle_{\mu_{\star}}\) in which only exchanges of sites are allowed.

We can assume without loss of generality that z=(1,0,…,0), the unit vector in the direction of the first coordinate axis. The problem is now reduced to the following: we are given a function \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R}.\) We think of \(\mathcal{E}_{n}^{*}\) as a graph with edges \(\mathbb{E}^{*}_{n}\), defined as

$$ \mathbb{E}^*_n = \bigl\{ (A,A_{x,y}) : s(y-x)>0 \bigr\}. $$

The Dirichlet form can be written as

$$ \langle-\mathfrak{S}_\star\mathfrak{f}, \mathfrak{f} \rangle_{\mu _\star} = \frac{1}{2} \sum_{e\in{\mathbb{E}^*}_n} s(e) \big|(\delta\mathfrak{f}) (e)\big|^2 $$

where \((\delta\mathfrak{f})(e) = \mathfrak{f}(A_{x,y}) - \mathfrak{f}(A)\), if e=(A,A x,y ) and s(e)=s(y−x).

We are claiming an estimate of the form

$$ \sum_{A\in\mathcal{E}_{n}^{*}}\bigl|\mathfrak{f}(\theta_{-z}A)-\mathfrak{f}(A)\bigr|^{2}\le C_{0}\, n\,\langle-\mathfrak{S}_{\star}\mathfrak{f},\mathfrak{f}\rangle_{\mu_{\star}} $$

with a constant C 0 independent of n.

It is clear that for any set A one can move from A to \(\theta_{-z} A\) along edges of the graph, using only the edges in \(\mathbb{E}^{*}_{n}\). We will verify that for every \(A\in \mathcal{E}_{n}^{*}\) we can assign a set of edges \({\mathbb{E}^{*}}_{A}\subset \mathbb{E}^{*}_{n}\) such that (i) for every A one can use the edges of \({\mathbb{E}^{*}}_{A}\) to go from A to \(\theta_{-z} A\), (ii) for any A there are at most n edges in \({\mathbb{E}^{*}}_{A}\), and (iii) the subsets \(\{{\mathbb{E}^{*}}_{A}\}\) are mutually disjoint as A varies over \(\mathcal{E}_{n}^{*}\). Then it is easy to see that

$$ \big|\mathfrak{f} (\theta_{-z} A) - \mathfrak{f} (A)\big|^2 = \biggl\vert\,\sum_{e\in{\mathbb{E}^*}_A}(\delta\mathfrak{f} ) (e) \biggr\vert^2 \le n \sum_{e\in{\mathbb{E}^*}_A} \big\vert(\delta \mathfrak {f}) (e)\big\vert^2 $$

and summing over \(A\in \mathcal{E}_{n}^{*},\) because \({\mathbb{E}^{*}}_{A}\) are disjoint, we can establish the lemma.

To construct the paths from A to \(\theta_{-z}A\) we totally order the points of \(\mathbb{Z}^{d}_{*}\) by lexicographic ordering. We say that \(w=(w_{1},\ldots,w_{d})\in{\mathbb{Z}^{d}_{*}}\) is positive if, either w 1>0; or w 1=0,…,w j−1=0 and w j >0 for some 2≤j≤d. The total ordering declares y>x if y−x is positive. Let the set \(A\in \mathcal{E}_{n}^{*}\) consist of the n points (x 1,x 2,…,x n ) of \(\mathbb{Z}^{d}_{*}\). We can assume that they are ordered so that x 1>x 2>⋯>x n . Then \(\theta_{-z} A= (x_{1}^{*},\ldots,x_{n}^{*})\) where \(x_{j}^{*}=x_{j}-z\) unless x j =z, in which case \(x_{j}^{*}=x_{j}-2z=-z\). Assume, without loss of generality, that z<0 in this ordering (replacing z by −z amounts to relabeling A as \(\theta_{-z}A\)). We use the edges in \(\mathbb{E}^{*}_{n}\) to shift successively each x i to \(x_{i}^{*}\), starting from x 1, proceeding in order, and ending with x n . Any edge that is used goes from some A 1 to \(A_{2}=\sigma^{x_{i},x^{*}_{i}}A_{1}\). Since the shifts were made in lexicographic order, we can determine without ambiguity which points of A 1 have already been shifted and which have not. In other words, the paths from any two different A to the corresponding \(\theta_{-z}A\) do not share a common edge. It is also clear that exactly n edges are used. Any edge corresponds to \(x_{j}^{*}-x_{j}=-z\) or −2z.
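A small illustration of the construction (in d=2, with hypothetical data chosen for the example): take z=−e 1, so z<0, and the two-point set A={e 1,−e 1}. Then

```latex
% ordered points x_1 = e_1 > x_2 = -e_1 = z, targets
% x_1^* = x_1 - z = 2e_1, \qquad x_2^* = x_2 - 2z = -z = e_1 ,
A = \{e_1, -e_1\}
  \ \xrightarrow{\ \sigma^{e_1,\,2e_1}\ }\ \{2e_1, -e_1\}
  \ \xrightarrow{\ \sigma^{-e_1,\,e_1}\ }\ \{2e_1, e_1\} = \theta_{-z}A .
```

The first edge is a jump of size −z, the second of size −2z; this is precisely why an auxiliary generator allowing jumps of size −2z is needed to close the estimate.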

We have thus proved that

$$ \langle -\mathfrak{L}_{\theta, s, j} \mathfrak{f}, \mathfrak{f} \rangle_{\mu_\star} \le n \langle-\hat{\mathfrak{S}}_\star \mathfrak{f}, \mathfrak{f} \rangle_{\mu_\star} $$

where \(\hat{\mathfrak{S}}_{\star}\) allows jumps of size −2z. It is however an easy matter to show that

$$ \langle-\hat{\mathfrak{S}}_\star\mathfrak{f}, \mathfrak{f} \rangle_{\mu_\star} \le C_0 \langle-\mathfrak{S}_\star \mathfrak{f}, \mathfrak{f} \rangle_{\mu_\star} $$

for some finite constant C 0 depending only on the probability p(⋅). This concludes the proof of the lemma for j=1. The proof for j=2 is identical. □

It follows from the variational formula for the ℌ0,−1 norm, from the symmetry of the operators \(\mathfrak{L}_{\theta, s ,j}\) and from the previous lemma that there exists a finite constant C 0 such that for all n≥1,

$$ \Vert\mathfrak{L}_{\theta, s ,j} \mathfrak{f} \Vert_{0,-1}^2 \le C_0 n^2 \Vert\mathfrak{f} \Vert_{0,1}^2 $$
(6.36)

for j=1, 2 and all finitely supported functions \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R}.\)

We are now in a position to show that the off-diagonal operators \(\mathfrak{J}_{+}\), \(\mathfrak{J}_{-}\) satisfy the graded sector condition (6.34) with β=1/2.

Lemma 6.17

Assume that d≥3. There exists a finite constant C 0, depending only on the probability p(⋅), such that for every n≥0,

$$ \langle\mathfrak{J}_+ \mathfrak{f}, \mathfrak{g}\rangle_{\mu _\star}^2 \le C_0 n \langle-\mathfrak{S}_\star\mathfrak{f}, \mathfrak {f}\rangle_{\mu_\star} \langle-\mathfrak{S}_\star \mathfrak{g}, \mathfrak{g}\rangle_{\mu _\star} $$

for any finitely supported functions \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R},\) \(\mathfrak{g}:\mathcal{E}_{n+1}^{*}\to \mathbb{R}.\) A similar bound holds for \(\mathfrak{J}_{-}\).

Proof

To fix ideas we prove the estimate for the operator \(\mathfrak {J}_{\theta, s, +}\). The proof for \(\mathfrak{J}_{\theta, a, +}\) is identical and the one for \(\mathfrak{J}_{0, +}\) is similar to the proof of Lemma 5.15.

Fix n≥0 and consider two finitely supported functions \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R},\) \(\mathfrak{g}:\mathcal{E}_{n+1}^{*}\to \mathbb{R}.\) From the explicit form of the operator \(\mathfrak{J}_{\theta, s, +}\), we obtain that \(\mathfrak{J}_{\theta, s, +} \mathfrak{f}(\{x\})=0\) for all \(x\in\mathbb{Z}^{d}_{*}\). We may therefore assume that n≥1. In this case,

$$ \bigl|\langle\mathfrak{J}_{\theta,s,+}\mathfrak{f},\mathfrak{g}\rangle_{\mu_{\star}}\bigr|\le\sum_{A\in\mathcal{E}_{n+1}^{*}}\sum_{y\in A}s(y)\,\bigl|\mathfrak{f}(A\backslash\{y\})-\mathfrak{f}\bigl([\theta_{-y}A]\backslash\{-y\}\bigr)\bigr|\,\bigl|\mathfrak{g}(A)\bigr| . $$

The elementary inequality 2abλa 2+λ −1 b 2 gives that the previous expression is less than or equal to

$$ \frac{\lambda}{2}\sum_{A\in\mathcal{E}_{n+1}^{*}}\sum_{y\in A}s(y)\bigl\{\mathfrak{f}(A\backslash\{y\})-\mathfrak{f}\bigl([\theta_{-y}A]\backslash\{-y\}\bigr)\bigr\}^{2}+\frac{1}{2\lambda}\sum_{y\in\mathbb{Z}_{*}^{d}}s(y)\,\mathfrak{g}_{1}(y)^{2} $$
(6.37)

for every λ>0, where \(\mathfrak{g}_{1}(y)^{2} = \sum_{A \ni y} \mathfrak{g}(A)^{2}\). We estimate separately these two expressions.

The change of variables B=A∖{y} permits us to rewrite the first term in (6.37) as \(\lambda\langle (-\mathfrak {L}_{\theta, s, 2}) \mathfrak{f} , \mathfrak{f} \rangle_{\mu_{\star}}\). By Lemma 6.16, this term is bounded by \(C_{0} \lambda n \langle (-\mathfrak{S}_{\star}) \mathfrak{f} , \mathfrak{f} \rangle_{\mu_{\star}}\).

On the other hand, since s generates an irreducible transient Markov process on \(\mathbb{Z}_{*}^{d}\) and since s(⋅) has finite range, by Propositions 5.23 and 5.25,

$$ \sum_{y\in\mathbb{Z}_*^d} s(y) \mathfrak{g}_1(y)^2 \le C_0 \sum_{x,y\in\mathbb{Z}_*^d} s(y-x) \bigl\{ \mathfrak{g}_1(y) - \mathfrak{g}_1(x) \bigr \}^2 . $$

A change of variables B=A x,y gives that

$$ \mathfrak{g}_1(y) = \biggl\{ \sum_{A\ni x, y} \mathfrak{g}(A)^2 + \sum_{A\ni x, A\not\ni y} \mathfrak{g}(A_{x,y})^2 \biggr\}^{1/2}. $$

Since

$$ \mathfrak{g}_1(x) = \biggl\{ \sum_{A\ni x, y} \mathfrak{g}(A)^2 + \sum_{A\ni x, A\not\ni y} \mathfrak{g}(A)^2 \biggr\}^{1/2}, $$

by Schwarz inequality,

$$ \bigl\{ \mathfrak{g}_1(y) - \mathfrak{g}_1(x) \bigr \}^2 \le \sum_{A\ni x, A\not\ni y} \bigl\{ \mathfrak{g}(A_{x,y}) - \mathfrak{g}(A) \bigr\}^2 . $$

Therefore, in view of (6.27),

$$ \sum_{y\in\mathbb{Z}_*^d} s(y) \mathfrak{g}_1(y)^2 \le C_0\bigl\langle (-\mathfrak{S}_\star) \mathfrak{g}, \mathfrak{g} \bigr\rangle_{\mu_\star} $$

for some finite constant C 0 depending only on p(⋅).

In view of (6.37), to conclude the proof of the lemma, it remains to optimize over λ>0. □

It follows from the variational formula for the ℌ0,−1 norm and from this lemma that there exists a finite constant C 0 such that for every n≥1,

$$ \Vert\mathfrak{J}_\pm\mathfrak{f} \Vert_{0,-1}^2 \le C_0 n \Vert\mathfrak{f} \Vert_{0,1}^2 $$
(6.38)

for all finitely supported functions \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R}.\)

Condition (6.35)

There are three diagonal operators: \(\mathfrak{L}_{\theta, p , 1}\), \(\mathfrak{L}_{\theta, p , 2}\) and \(\mathfrak{N}_{\star}\). It follows from (6.36) that \(\mathfrak{L}_{\theta, s , j}\), j=1, 2, trivially satisfies Condition (6.35). The next lemma shows that \(\mathfrak{L}_{\theta, a , 1}\) also satisfies a graded sector condition.

Lemma 6.18

Assume that d≥3. There exists a finite constant C 0, depending only on the probability p(⋅), such that for every n≥0,

$$ \langle\mathfrak{L}_{\theta, a , 1} \mathfrak{f}, \mathfrak {g} \rangle_{\mu_\star}^2 \le C_0 n \langle- \mathfrak{S}_\star\mathfrak{f}, \mathfrak {f}\rangle_{\mu_\star} \langle-\mathfrak{S}_\star\mathfrak{g}, \mathfrak{g}\rangle_{\mu _\star} $$

for any finitely supported functions \(\mathfrak{f}\), \(\mathfrak{g}:\mathcal{E}_{n}^{*}\to \mathbb{R}.\)

Proof

The proof is similar to, and slightly simpler than, that of Lemma 6.17. Details are left to the reader, who should keep in mind that |a(y)|≤s(y). □

It follows from the variational formula for the ℌ0,−1 norm and from this lemma that there exists a finite constant C 0 such that for all n≥0,

$$ \Vert\mathfrak{L}_{\theta, a , 1} \mathfrak{f} \Vert_{0,-1}^2 \le C_0 n \Vert \mathfrak{f} \Vert_{0,1}^2 $$
(6.39)

for all finitely supported functions \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R}.\) In particular, Condition (6.35) holds also for \(\mathfrak {L}_{\theta, a , 1}\) and \(\mathfrak{L}_{\theta, p, 1}\).

The proof of the previous lemma does not apply to \(\mathfrak {L}_{\theta, a , 2}\) because in the definition of this operator the sum is carried over sites not in A. Actually, such an estimate is not expected to hold for \(\mathfrak{L}_{\theta, a , 2}\) and for \(\mathfrak{N}_{\star}\), as explained in the beginning of Sect. 5.6.

The proof of Condition (6.35) for the operators \(\mathfrak {N}_{\star}\), \(\mathfrak{L}_{\theta, a , 2}\) is similar to the one of Lemma 5.20 and relies on the comparison of the operators \(\mathfrak{S}_{\star}\), \(\mathfrak{N}_{\star}\), \(\mathfrak{L}_{\theta, a , 2}\) with generators associated to the evolution of independent random walks, which can be examined through Fourier analysis.

Lemma 6.19

There exists a finite constant C 0 depending only on p(⋅) such that for all n≥1,

$$ \| \varPi_n \mathfrak{N}_\star\mathfrak{u}_\lambda \|_{0,-1}^2 \le C_0 n \| \mathfrak{V}_{\mathfrak{a}}\|_{0,-1}^2 + C_0 n^3 \sum_{j=n-1}^{n+1} \Vert\varPi_j \mathfrak{u}_\lambda\Vert_{0,1}^2 . $$

An identical bound holds if we replace \(\mathfrak{N}_{\star}\) by \(\mathfrak{L}_{\theta, a , 2}\).

Proof

The proof of this lemma is similar to the one of Lemma 5.20 and divided into several steps. Let \(\mathfrak{w}_{1} = \mathfrak{V}_{\mathfrak{a}} + \sqrt{\chi(\alpha)} \{ {\mathfrak {J}}_{+} + {\mathfrak{J}}_{-} \} {\mathfrak{u}}_{\lambda}+ \alpha \mathfrak{L}_{\theta, p, 1} {\mathfrak{u}}_{\lambda}+ (1-\alpha) \mathfrak{L}_{\theta, s, 2} {\mathfrak{u}}_{\lambda}\) so that

$$ \lambda{\mathfrak{u}}_\lambda- \bigl\{ \mathfrak{S}_\star+ (1- 2\alpha) \mathfrak{N}_\star+ (1-\alpha) \mathfrak{L}_{\theta, a, 2} \bigr\} {\mathfrak {u}}_\lambda= \mathfrak{w}_1 . $$
(6.40)

By (6.36), (6.38), (6.39), there exists a finite constant C 0 such that

$$ \|\varPi_n \mathfrak{w}_1 \|^2_{0,-1} \le 2 \|\varPi_n \mathfrak{V}_\mathfrak{a} \|^2_{0, -1} + C_0 n^2 \sum_{j=n-1}^{n+1} \|\varPi_j {\mathfrak{u}}_\lambda\|^2_{0,1} $$
(6.41)

for all n≥1.

The operator \({\mathfrak{S}_{\star}} + (1-2\alpha) {\mathfrak {N}_{\star}} + (1-\alpha ) \mathfrak{L}_{\theta, a , 2}\) does not change the degree of a function. We may therefore examine equation (6.40) on each set \(\mathcal{E}_{n}^{*}\):

$$ \lambda{\mathfrak{u}}_{\lambda,n} - \bigl\{{\mathfrak{S}_\star} + (1-2\alpha) \mathfrak{N}_\star+ (1-\alpha) \mathfrak{L}_{\theta, a , 2} \bigr\} {\mathfrak{u}}_{\lambda,n} = \varPi_n \mathfrak{w}_1 , $$

where \({\mathfrak{u}}_{\lambda,n} = \varPi_{n} {\mathfrak{u}}_{\lambda }\). Since n is fixed until estimate (6.44), we omit the operator Π n in the following formulas.

The main idea of this proof is to approximate the operator \({\mathfrak {S}_{\star}} + (1-2\alpha) \mathfrak{N}_{\star}+ (1-\alpha) \mathfrak{L}_{\theta, a , 2}\) by a convolution operator which can be analyzed through Fourier transforms. Fix n≥1 and let \({{\mathfrak{X}}_{n}}={{({{\mathbb{Z}}^{d}})}^{n}}.\) Note that we include the origin of ℤd in \({{\mathfrak{X}}_{n}}.\) We consider a set A in \(\mathcal{E}_{n}^{*}\) as an equivalence class of n! n-tuples of distinct points of ℤd. A function \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R}\) can be lifted to a symmetric function \(\varXi^{\star}{\mathfrak{f}}\) on \({{\mathfrak{X}}_{n}}\) which vanishes on \({{\mathfrak{X}}_{n}}\backslash \mathcal{E}_{n}^{*}:\)

$$ \bigl(\varXi^\star{\mathfrak{f}}\bigr) (x_1, \dots, x_n) = \begin{cases} {\mathfrak{f}} (\{ x_1, \dots, x_n \}) & \text{if } x_i \neq x_j \text{ for } i \neq j \text{ and } x_i \neq 0 , \\ 0 & \text{otherwise}. \end{cases} $$
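As a toy illustration (not from the text), the lifting \(\varXi^{\star}\) can be sketched for d=1, where sites are integers and the origin is 0; the name `lift` and the sample function below are ours:

```python
def lift(f, n):
    """Lift f, defined on n-point subsets of Z without the origin
    (given as frozensets), to a symmetric function on n-tuples
    which vanishes on X_n outside E_n^*."""
    def lifted(*xs):
        # vanish if two coordinates coincide or a coordinate sits at the origin
        if len(xs) != n or len(set(xs)) < n or 0 in xs:
            return 0.0
        return f(frozenset(xs))
    return lifted

# example: f(A) = sum of the sites of A, lifted with n = 2
g = lift(lambda A: float(sum(A)), 2)
```

By construction the lift is symmetric in its arguments and vanishes on diagonal configurations and on configurations containing the origin, which are the two properties of \(\varXi^{\star}\mathfrak{f}\) used in the sequel.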

The operators \({\mathfrak{S}_{\star}}\), \({\mathfrak{N}_{\star}}\), \(\mathfrak{L}_{\theta, a , 2}\) can also be extended in a natural way to \({{\mathfrak{X}}_{n}}.\) Consider on \({{\mathfrak{X}}_{n}}\) the operators \(\mathfrak{S}^{o}\), \(\mathfrak{N}^{o}\) defined in the previous chapter by (5.34) and \({\mathfrak{L}}_{\theta}^{o}\) defined by

$$ \bigl({\mathfrak{L}}^o_\theta{\mathfrak{f}}\bigr) ( \mathbf{x}) = \sum_{z\in\mathbb{Z}^d} a(z) \bigl\{ { \mathfrak{f}} (x_1- z, \dots, x_n-z) - {\mathfrak{f}} ( \mathbf {x}) \bigr\} . $$

In this formula and below, x=(x 1,…,x n ) is an element of \({{\mathfrak{X}}_{n}},\) so that each x j belongs to ℤd. Recall the norms \(\Vert\cdot\Vert_{\mathfrak{X}_{n},1}\) and \(\Vert\cdot\Vert_{\mathfrak{X}_{n},-1}\) defined by (5.35) and (5.36).

Lifting the resolvent equation (6.40) to \({{\mathfrak{X}}_{n}}\) and adding and subtracting \({\mathfrak{S}^{o}} \varXi^{\star}{\mathfrak{u}}_{\lambda}+ (1-2\alpha) {\mathfrak{N}^{o}} \varXi^{\star}{\mathfrak{u}}_{\lambda}+ (1-\alpha) {\mathfrak{L}}^{o}_{\theta}\varXi^{\star}{\mathfrak{u}}_{\lambda}\), we obtain that

$$ \lambda\varXi^\star{\mathfrak{u}}_\lambda- \bigl\{ {\mathfrak{S}^o} + (1-2\alpha) {\mathfrak{N}^o} + (1- \alpha) {\mathfrak{L}}^o_\theta \bigr\} \varXi^\star{ \mathfrak{u}}_\lambda= \mathfrak{w}_2 , $$
(6.42)

where \(\mathfrak{w}_{2}\) is equal to \(\varXi^{\star}\mathfrak{w}_{1}\) plus the remainder

$$ \bigl\{ \varXi^\star \mathfrak{S}_\star - \mathfrak{S}^o \varXi^\star \bigr\} \mathfrak{u}_\lambda + (1-2\alpha) \bigl\{ \varXi^\star \mathfrak{N}_\star - \mathfrak{N}^o \varXi^\star \bigr\} \mathfrak{u}_\lambda + (1-\alpha) \bigl\{ \varXi^\star \mathfrak{L}_{\theta, a, 2} - \mathfrak{L}^o_\theta \varXi^\star \bigr\} \mathfrak{u}_\lambda . $$

We claim that \(\mathfrak{w}_{2}\) has finite \({{\mathcal{H}}_{-1}}({{\mathfrak{X}}_{n}})\) norm. Indeed, for each n≥1, by (6.46) and Lemma 6.21 below, there exists a finite constant C 0 such that

$$ \begin{aligned} \bigl\Vert \varXi^\star \varPi_n \mathfrak{w}_1 \bigr\Vert_{\mathfrak{X}_n,-1}^2 &\le \Vert \varPi_n \mathfrak{w}_1 \Vert_{0,-1}^2 , \\ \bigl\Vert \varXi^\star \mathfrak{S}_\star \varPi_n \mathfrak{u}_\lambda - \mathfrak{S}^o \varXi^\star \varPi_n \mathfrak{u}_\lambda \bigr\Vert_{\mathfrak{X}_n,-1}^2 &\le C_0 n^2 \Vert \varPi_n \mathfrak{u}_\lambda \Vert_{0,1}^2 , \\ \bigl\Vert \varXi^\star \mathfrak{N}_\star \varPi_n \mathfrak{u}_\lambda - \mathfrak{N}^o \varXi^\star \varPi_n \mathfrak{u}_\lambda \bigr\Vert_{\mathfrak{X}_n,-1}^2 &\le C_0 n^2 \Vert \varPi_n \mathfrak{u}_\lambda \Vert_{0,1}^2 , \\ \bigl\Vert \varXi^\star \mathfrak{L}_{\theta, a, 2} \varPi_n \mathfrak{u}_\lambda - \mathfrak{L}^o_\theta \varXi^\star \varPi_n \mathfrak{u}_\lambda \bigr\Vert_{\mathfrak{X}_n,-1}^2 &\le C_0 n^2 \Vert \varPi_n \mathfrak{u}_\lambda \Vert_{0,1}^2 , \end{aligned} $$

so that

$$ \Vert \varPi_n \mathfrak{w}_2 \Vert_{\mathfrak{X}_n,-1}^2 \le 2 \Vert \varPi_n \mathfrak{w}_1 \Vert_{0,-1}^2 + C_0 n^2 \Vert \varPi_n \mathfrak{u}_\lambda \Vert_{0,1}^2 $$
(6.43)

for some finite constant C 0.

It remains to examine the resolvent equation (6.42) through Fourier analysis. Let \(\mathbb{T}_{n,d} = [-\pi, \pi]^{nd}\) and denote by \(\widehat{\mathfrak{u}}_{\lambda}\colon\mathbb{T}_{n,d} \to\mathbb {C}\) the Fourier transform of \(\varXi^{\star}\mathfrak{u}_{\lambda}\):

$$ \widehat{\mathfrak{u}}_\lambda (\mathbf{k}) = \sum_{\mathbf{x}\in\mathfrak{X}_n} \bigl( \varXi^\star \mathfrak{u}_\lambda \bigr) (\mathbf{x}) \, e^{i \, \mathbf{x} \cdot \mathbf{k}} . $$

In this formula, \(\mathbf{x}\cdot\mathbf{k} = \sum_{1\le j\le n} x_j \cdot k_j\). It follows from the resolvent equation (6.42) that \(\widehat{\mathfrak{u}}_{\lambda}\) is the solution of

$$ \lambda\widehat{\mathfrak{u}}_\lambda(\mathbf{k}) - \bigl\{\widehat{ \mathfrak{S}^o} (\mathbf{k}) + (1-2\alpha) \widehat{ \mathfrak{N}^o} (\mathbf{k}) +(1-\alpha) \widehat{ \mathfrak{L}}^o_\theta(\mathbf{k}) \bigr\} \widehat{\mathfrak{u}}_\lambda( \mathbf{k}) = \widehat{{\mathfrak{w}}}_2(\mathbf{k}) , $$

where \(\widehat{\mathfrak{S}^{o}}\), \(\widehat{\mathfrak{N}^{o}}\), \(\widehat{\mathfrak{L}}^{o}_{\theta}\) are the functions associated to the operators \({\mathfrak{S}^{o}}\), \({\mathfrak{N}^{o}}\), \(\mathfrak {L}^{o}_{\theta}\):

$$ \begin{aligned} \widehat{- \mathfrak{S}^o} (\mathbf{k}) &= \sum_{\substack{1\le j\le n \\ z\in\mathbb{Z}^d}} s(z) \bigl\{ 1 - \cos (k_j \cdot z) \bigr\} , \\ \widehat{- \mathfrak{N}^o} (\mathbf{k}) &= i \sum_{\substack{1\le j\le n \\ z\in\mathbb{Z}^d}} a(z) \sin (k_j \cdot z) , \\ \widehat{\mathfrak{L}^o_\theta} (\mathbf{k}) &= i \sum_{z\in\mathbb{Z}^d} a(z) \sin \Bigl( \sum_{j=1}^n k_j \cdot z \Bigr) . \end{aligned} $$
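As a small numerical sanity check (a toy with d=1 and an illustrative nearest-neighbor kernel; the function name is ours, not notation from the text), the first symbol \(\widehat{-\mathfrak{S}^{o}}\) can be evaluated directly; note that it is nonnegative on the torus:

```python
import math

def s_hat_minus(k, s):
    """Fourier symbol of -S^o for a d = 1 toy kernel.

    k: tuple (k_1, ..., k_n) of frequencies in [-pi, pi];
    s: dict z -> s(z), with s symmetric and s(0) absent.
    Returns sum over j and z of s(z) * (1 - cos(k_j * z)).
    """
    return sum(rate * (1.0 - math.cos(kj * z))
               for kj in k for z, rate in s.items())

s = {1: 0.5, -1: 0.5}                   # nearest-neighbor symmetric rates
value = s_hat_minus((math.pi, 0.0), s)  # 0.5*(1-cos(pi)) + 0.5*(1-cos(-pi))
```

The symbol vanishes at k=0 and stays nonnegative, which is what makes the \(\mathcal{H}_{-1}(\mathfrak{X}_{n})\) formula below well defined.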

The \({{\mathcal{H}}_{-1}}({{\mathfrak{X}}_{n}})\) norm of a function \(\mathfrak{v}:{{\mathfrak{X}}_{n}}\to \mathbb{R}\) has a simple and explicit expression in terms of the Fourier transform:

$$ \Vert \mathfrak{v} \Vert_{\mathfrak{X}_n,-1}^2 = \frac{1}{n! \, (2\pi)^{nd}} \int_{\mathbb{T}_{n,d}} \frac{\bigl| \widehat{\mathfrak{v}} (\mathbf{k}) \bigr|^2}{\widehat{- \mathfrak{S}^o} (\mathbf{k})} \, d\mathbf{k} . $$

Since \(\varXi^{\star}{\mathfrak{u}}_{\lambda}\) is the solution of the resolvent equation (6.42), for every λ>0,

$$ \Vert \mathfrak{N}^o \varXi^\star \mathfrak{u}_\lambda \Vert_{\mathfrak{X}_n,-1}^2 = \frac{1}{n! \, (2\pi)^{nd}} \int_{\mathbb{T}_{n,d}} \biggl| \frac{\widehat{\mathfrak{N}^o} (\mathbf{k})}{\lambda - \widehat{\mathfrak{S}^o} (\mathbf{k}) - (1-2\alpha) \widehat{\mathfrak{N}^o} (\mathbf{k}) - (1-\alpha) \widehat{\mathfrak{L}^o_\theta} (\mathbf{k})} \biggr|^2 \frac{\bigl| \widehat{\mathfrak{w}}_2 (\mathbf{k}) \bigr|^2}{\widehat{- \mathfrak{S}^o} (\mathbf{k})} \, d\mathbf{k} . $$

It follows from the explicit formulae for the functions \(\widehat {\mathfrak{S}^{o}}\), \(\widehat{\mathfrak{N}^{o}}\), \(\widehat{\mathfrak {L}}^{o}_{\theta}\) and a Taylor expansion for |k| small that the previous expression is bounded by

$$ \frac{C_0}{n! \, (2\pi)^{nd}} \int_{\mathbb{T}_{n,d}} \frac{\bigl| \widehat{\mathfrak{w}}_2 (\mathbf{k}) \bigr|^2}{\widehat{- \mathfrak{S}^o} (\mathbf{k})} \, d\mathbf{k} = C_0 \Vert \mathfrak{w}_2 \Vert_{\mathfrak{X}_n,-1}^2 $$

for some finite constant C 0. We have thus proved that

$$ \Vert \mathfrak{N}^o \varXi^\star \mathfrak{u}_\lambda \Vert_{\mathfrak{X}_n,-1}^2 \le C_0 \Vert \mathfrak{w}_2 \Vert_{\mathfrak{X}_n,-1}^2 . $$
(6.44)

In the same way, for every λ>0, \(\Vert \mathfrak{L}^{o}_{\theta} \varXi^{\star} \mathfrak{u}_{\lambda} \Vert_{\mathfrak{X}_{n},-1}^{2}\) is equal to

$$ \begin{aligned} & \frac{1}{n! \, (2\pi)^{nd}} \int_{\mathbb{T}_{n,d}} \biggl| \frac{\widehat{\mathfrak{L}^o_\theta} (\mathbf{k})}{\lambda - \widehat{\mathfrak{S}^o} (\mathbf{k}) - (1-2\alpha) \widehat{\mathfrak{N}^o} (\mathbf{k}) - (1-\alpha) \widehat{\mathfrak{L}^o_\theta} (\mathbf{k})} \biggr|^2 \frac{\bigl| \widehat{\mathfrak{w}}_2 (\mathbf{k}) \bigr|^2}{\widehat{- \mathfrak{S}^o} (\mathbf{k})} \, d\mathbf{k} \\ &\quad \le \frac{C_0}{n! \, (2\pi)^{nd}} \int_{\mathbb{T}_{n,d}} \frac{\bigl| \widehat{\mathfrak{w}}_2 (\mathbf{k}) \bigr|^2}{\widehat{- \mathfrak{S}^o} (\mathbf{k})} \, d\mathbf{k} = C_0 \Vert \mathfrak{w}_2 \Vert_{\mathfrak{X}_n,-1}^2 \end{aligned} $$

for some finite constant C 0 so that

$$ \Vert \mathfrak{L}^o_\theta \varXi^\star \mathfrak{u}_\lambda \Vert_{\mathfrak{X}_n,-1}^2 \le C_0 \Vert \mathfrak{w}_2 \Vert_{\mathfrak{X}_n,-1}^2 . $$

We may now conclude the proof of Lemma 6.19. Fix n≥1. By (6.46), Lemma 6.21 and (6.44), there exists a finite constant C 0, which may change from line to line, such that

$$ \begin{aligned} \Vert \varPi_n \mathfrak{N}_\star \mathfrak{u}_\lambda \Vert_{0,-1}^2 = \Vert \mathfrak{N}_\star \varPi_n \mathfrak{u}_\lambda \Vert_{0,-1}^2 &\le C_0 n \Vert \varXi^\star \mathfrak{N}_\star \varPi_n \mathfrak{u}_\lambda \Vert_{\mathfrak{X}_n,-1}^2 \\ &\le C_0 n \bigl\{ n^2 \Vert \varPi_n \mathfrak{u}_\lambda \Vert_{0,1}^2 + \Vert \mathfrak{N}^o \varXi^\star \varPi_n \mathfrak{u}_\lambda \Vert_{\mathfrak{X}_n,-1}^2 \bigr\} \\ &\le C_0 \bigl\{ n^3 \Vert \varPi_n \mathfrak{u}_\lambda \Vert_{0,1}^2 + n \Vert \varPi_n \mathfrak{w}_2 \Vert_{\mathfrak{X}_n,-1}^2 \bigr\} . \end{aligned} $$

In particular, by (6.43) and (6.41),

$$ \begin{aligned} \Vert \varPi_n \mathfrak{N}_\star \mathfrak{u}_\lambda \Vert_{0,-1}^2 &\le C_0 \bigl\{ n^3 \Vert \varPi_n \mathfrak{u}_\lambda \Vert_{0,1}^2 + n \Vert \varPi_n \mathfrak{w}_1 \Vert_{0,-1}^2 \bigr\} \\ &\le C_0 \biggl\{ n^3 \sum_{j=n-1}^{n+1} \Vert \varPi_j \mathfrak{u}_\lambda \Vert_{0,1}^2 + n \Vert \varPi_n \mathfrak{V}_\mathfrak{a} \Vert_{0,-1}^2 \biggr\} . \end{aligned} $$

A similar set of inequalities holds for \(\mathfrak{L}_{\theta, a, 2}\) in place of \(\mathfrak{N}_{\star}\). This concludes the proof of the lemma. □

Estimates on Liftings

We close this section with some results used in the previous proof.

Lemma 6.20

There exists a finite constant C 0 such that for any n≥1 and any finitely supported function \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R},\)

$$ \Vert \mathfrak{f} \Vert_{0,1}^2 \le \Vert \varXi^\star \mathfrak{f} \Vert_{\mathfrak{X}_n,1}^2 \le C_0 n \Vert \mathfrak{f} \Vert_{0,1}^2 . $$

Proof

The first inequality is elementary and follows from the explicit formulae for the respective \(\mathcal{H}_{1}\) norms. The only difference between the two expressions is that some gradients which are present in the \({{\mathcal{H}}_{1}}({{\mathfrak{X}}_{n}})\) norm do not appear in the ℌ0,1 norm.

To prove the second inequality, let

$$ W(A)= \sum_{x,y\in A} s(y-x) + \sum _{x\in A} s(x). $$

We also denote by W the lifted function \(\varXi^{\star} W\). A simple computation shows that there exists a finite constant C 0 such that

$$ \big| \varXi^\star{\mathfrak{S}_\star} { \mathfrak{f}} (\mathbf{x}) - {\mathfrak{S}^o} \varXi^\star{ \mathfrak{f}} (\mathbf{x}) \big| \leq C_0 W (\mathbf{x}) \big| \varXi^\star{\mathfrak{f}}(\mathbf{x})\big| $$
(6.45)

for every x in \(\mathcal{E}_{n}^{*}\) and every \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R}.\)

We are now in a position to prove the second bound. By definition,

$$ \Vert \varXi^\star \mathfrak{f} \Vert_{\mathfrak{X}_n,1}^2 = - \frac{1}{n!} \sum_{\mathbf{x}\in\mathfrak{X}_n} \bigl( \varXi^\star \mathfrak{f} \bigr) (\mathbf{x}) \bigl( \mathfrak{S}^o \varXi^\star \mathfrak{f} \bigr) (\mathbf{x}) . $$

Since \(\varXi^{\star}{\mathfrak{f}}\) vanishes outside \(\mathcal{E}_{n}^{*},\) we may restrict the sum to \(\mathcal{E}_{n}^{*}.\) Now, adding and subtracting \((\varXi^{\star}{\mathfrak{S}_{\star}} {\mathfrak{f}})(\mathbf{x})\) in this expression and recalling (6.45), we obtain that

$$ \begin{aligned} \Vert \varXi^\star \mathfrak{f} \Vert_{\mathfrak{X}_n,1}^2 &\le \Vert \mathfrak{f} \Vert_{0,1}^2 + \frac{C_0}{n!} \sum_{\mathbf{x}\in\mathcal{E}_n^*} W (\mathbf{x}) \bigl\{ \varXi^\star \mathfrak{f} (\mathbf{x}) \bigr\}^2 \\ &= \Vert \mathfrak{f} \Vert_{0,1}^2 + C_0 \sum_{A\in\mathcal{E}_n^*} W (A) \, \mathfrak{f} (A)^2 . \end{aligned} $$

By Lemma 6.22, the second term of the previous formula is bounded by \(C_{0} n \|{\mathfrak{f}}\|_{0,1}^{2}\), which concludes the proof of the lemma. □

It follows from this result and from the variational formulas for the \(\mathcal{H}_{-1}\) norms that there exists a finite constant C 0 such that for any n≥1 and any finitely supported function \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R},\)

$$ \frac{1}{C_0 n} \Vert \mathfrak{f} \Vert_{0,-1}^2 \le \Vert \varXi^\star \mathfrak{f} \Vert_{\mathfrak{X}_n,-1}^2 \le \Vert \mathfrak{f} \Vert_{0,-1}^2 . $$
(6.46)

Lemma 6.21

There exists a finite constant C 0 depending only on p(⋅) such that

$$ \begin{aligned} \bigl\Vert \varXi^\star \mathfrak{S}_\star \mathfrak{f} - \mathfrak{S}^o \varXi^\star \mathfrak{f} \bigr\Vert_{\mathfrak{X}_n,-1}^2 &\le C_0 n^2 \Vert \mathfrak{f} \Vert_{0,1}^2 , \\ \bigl\Vert \varXi^\star \mathfrak{N}_\star \mathfrak{f} - \mathfrak{N}^o \varXi^\star \mathfrak{f} \bigr\Vert_{\mathfrak{X}_n,-1}^2 &\le C_0 n^2 \Vert \mathfrak{f} \Vert_{0,1}^2 , \\ \bigl\Vert \varXi^\star \mathfrak{L}_{\theta, a, 2} \mathfrak{f} - \mathfrak{L}^o_\theta \varXi^\star \mathfrak{f} \bigr\Vert_{\mathfrak{X}_n,-1}^2 &\le C_0 n^2 \Vert \mathfrak{f} \Vert_{0,1}^2 \end{aligned} $$

for all n≥1 and all functions \(\mathfrak{f}:\mathcal{E}_{n}^{*}\to \mathbb{R}.\)

Proof

We prove the first and the third estimates and leave to the reader the details of the second. Fix n≥1 and a finitely supported function \(\mathfrak{h}:{{\mathfrak{X}}_{n}}\to \mathbb{R}.\) We need to estimate the scalar product

$$ \frac{1}{n!} \sum_{\mathbf{x}\in\mathfrak{X}_n} \mathfrak{h} (\mathbf{x}) \bigl\{ \varXi^\star \mathfrak{S}_\star \mathfrak{f} (\mathbf{x}) - \mathfrak{S}^o \varXi^\star \mathfrak{f} (\mathbf{x}) \bigr\} $$
(6.47)

in terms of the norm of \(\mathfrak {h}\) and the ℌ0,1 norm of \({\mathfrak{f}}\). There are two possible cases. Either x belongs to \(\mathcal{E}_{n}^{*}\) or x does not belong to \(\mathcal{E}_{n}^{*}.\)

In the first case, by (6.45), the expression inside braces in the previous formula is absolutely bounded by \(C_{0} W (\mathbf{x}) |\varXi^{\star}{\mathfrak{f}} (\mathbf{x})|\) for some finite constant C 0. Therefore, the corresponding piece in the previous formula is bounded above by

$$ \frac{C_0}{n!} \sum_{\mathbf{x}\in\mathcal{E}_n^*} W (\mathbf{x}) \bigl| \mathfrak{h} (\mathbf{x}) \bigr| \bigl| \varXi^\star \mathfrak{f} (\mathbf{x}) \bigr| \le \frac{C_0}{n! \, \ell} \sum_{\mathbf{x}\in\mathcal{E}_n^*} W (\mathbf{x}) \, \mathfrak{h} (\mathbf{x})^2 + C_0 \ell \sum_{A\in\mathcal{E}_n^*} W (A) \, \mathfrak{f} (A)^2 $$

for every ℓ>0.

If x does not belong to \(\mathcal{E}_{n}^{*},\) the corresponding piece of the scalar product writes

$$ - \frac{1}{n!} \sum_{\substack{\mathbf{x}\in\mathfrak{X}_n\setminus\mathcal{E}_n^* \\ z\in\mathbb{Z}^d , \, 1\le j\le n}} s(z) \, \mathfrak{h} (\mathbf{x}) \, \varXi^\star \mathfrak{f} (\mathbf{x} + z e_j) $$

because in this case \(\varXi^{\star}{\mathfrak{S}_{\star}} {\mathfrak{f}} (\mathbf{x}) = \varXi^{\star}{\mathfrak{f}} (\mathbf{x}) = 0\). In this formula, x+z e j =(x 1,…,x j−1,x j +z,x j+1,…,x n ) stands for the point of \({{\mathfrak{X}}_{n}}\) obtained from x by translating its j-th coordinate by z. Since \(\varXi^{\star}{\mathfrak{f}}\) vanishes outside \(\mathcal{E}_{n}^{*},\) it is implicit in the previous formula that the sum is restricted to all x such that x+z e j belongs to \(\mathcal{E}_{n}^{*}.\) Since \(\mathbf{x}+z{{\mathbf{e}}_{j}}\in \mathcal{E}_{n}^{*}\) and \(\mathbf{x}\notin \mathcal{E}_{n}^{*},\) either x j =x k for some k≠j or x j =0. In particular, since 2ab≤ℓa 2+ −1 b 2 for every >0, a change of variables shows that the previous sum is bounded above by

$$ \frac{1}{n! \, \ell} \sum_{\mathbf{x}\in\mathfrak{X}_n} \mathfrak{h} (\mathbf{x})^2 \, \widetilde{W} (\mathbf{x}) + \frac{\ell}{n!} \sum_{\mathbf{x}\in\mathfrak{X}_n} \varXi^\star \mathfrak{f} (\mathbf{x})^2 \, W (\mathbf{x}) , $$
(6.48)

where \(\widetilde{W} (\mathbf{x}) = \sum_{j\neq k} \mathbf{1}\{ x_{j} = x_{k}\} + \sum_{j} \mathbf{1}\{x_{j}=0\}\). We may of course replace the sum over \({{\mathfrak{X}}_{n}}\) by a sum over \(\mathcal{E}_{n}^{*}\) in the second term, losing the factor n!.

Adding together all previous estimates, we obtain that the scalar product (6.47) is bounded above by

$$ \frac{C_0}{n! \, \ell} \sum_{\mathbf{x}\in\mathfrak{X}_n} \mathfrak{h} (\mathbf{x})^2 \bigl\{ \widetilde{W} (\mathbf{x}) + W (\mathbf{x}) \bigr\} + C_0 \ell \sum_{A\in\mathcal{E}_n^*} \mathfrak{f} (A)^2 \, W (A) . $$

By Lemma 6.22, the second term is less than or equal to \(C_{0} n \ell \Vert{\mathfrak{f}} \Vert_{0,1}^{2}\) for some finite constant C 0. On the other hand, a simple adaptation of the proof of the same lemma gives that the first term is bounded by \(C_{1} n \ell^{-1} \Vert \mathfrak{h} \Vert_{\mathfrak{X}_{n},1}^{2}.\) To conclude the proof, it remains to minimize over and to recall the variational formula for the \(\mathcal{H}_{-1}\) norm of a function.
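Spelled out (a sketch of the omitted computation, with C 1 the constant just introduced), the previous two bounds give, for every ℓ>0,

$$ 2 \bigl\langle \mathfrak{h} , \varXi^\star \mathfrak{S}_\star \mathfrak{f} - \mathfrak{S}^o \varXi^\star \mathfrak{f} \bigr\rangle \le 2 C_0 n \ell \, \Vert \mathfrak{f} \Vert_{0,1}^2 + 2 C_1 n \ell^{-1} \Vert \mathfrak{h} \Vert_{\mathfrak{X}_n,1}^2 . $$

Choosing \(\ell = 2 C_{1} n\), so that the coefficient of \(\Vert \mathfrak{h} \Vert_{\mathfrak{X}_{n},1}^{2}\) becomes 1, the variational formula yields

$$ \bigl\Vert \varXi^\star \mathfrak{S}_\star \mathfrak{f} - \mathfrak{S}^o \varXi^\star \mathfrak{f} \bigr\Vert_{\mathfrak{X}_n,-1}^2 = \sup_{\mathfrak{h}} \bigl\{ 2 \bigl\langle \mathfrak{h} , \varXi^\star \mathfrak{S}_\star \mathfrak{f} - \mathfrak{S}^o \varXi^\star \mathfrak{f} \bigr\rangle - \Vert \mathfrak{h} \Vert_{\mathfrak{X}_n,1}^2 \bigr\} \le 4 C_0 C_1 \, n^2 \, \Vert \mathfrak{f} \Vert_{0,1}^2 , $$

which is the n2 dependence claimed in the statement of the lemma.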

We turn now to the third estimate. As above, fix a finitely supported function \(\mathfrak{h}:{{\mathfrak{X}}_{n}}\to \mathbb{R}.\) Observe that \(\varXi^{\star}\mathfrak{L}_{\theta, a, 2} \mathfrak{f} (\mathbf{x}) = \mathfrak{L}^{o}_{\theta} \varXi^{\star}\mathfrak{f} (\mathbf{x})\) if x belongs to \(\mathcal{E}_{n}^{*}.\) On the other hand, if \(\mathbf{x}\notin \mathcal{E}_{n}^{*},\) this difference is equal to \(- \sum_{z} a(z) \, \varXi^{\star}\mathfrak{f}(x_{1}-z, \dots, x_{n}-z)\). Therefore

$$ \begin{aligned} & \frac{1}{n!} \sum_{\mathbf{x}\in\mathfrak{X}_n} \mathfrak{h} (\mathbf{x}) \bigl\{ \varXi^\star \mathfrak{L}_{\theta, a, 2} \mathfrak{f} (\mathbf{x}) - \mathfrak{L}^o_\theta \varXi^\star \mathfrak{f} (\mathbf{x}) \bigr\} \\ &\quad = - \frac{1}{n!} \sum_{\substack{\mathbf{x}\notin\mathcal{E}_n^* \\ z\in\mathbb{Z}^d}} a(z) \, \mathfrak{h} (\mathbf{x}) \, \varXi^\star \mathfrak{f} (x_1 - z, \dots, x_n - z) . \end{aligned} $$

Since \(\varXi^{\star}\mathfrak{f}\) vanishes on \({(\mathcal{E}_{n}^{*})}^{c}\) and since x does not belong to \(\mathcal{E}_{n}^{*},\) we must have x i =0 for some i for the previous term to be different from 0. Introducing the indicator function ∑1≤in 1{x i =0} and proceeding as in the first part of the proof, we estimate the absolute value of the previous sum by (6.48). This concludes the proof of the lemma. □

Lemma 6.22

Let \(W:{{\mathcal{E}}^{*}}\to \mathbb{R}\) be defined by W(A)=∑ x,yA s(yx)+∑ xA s(x). There exists a finite constant C 0, depending only on p(⋅), such that

$$ \sum_{A\in\mathcal{E}_n^*} \mathfrak{g} (A)^2 \, W (A) \le C_0 n \bigl\langle - \mathfrak{S}_\star \mathfrak{g} , \mathfrak{g} \bigr\rangle_{\mu_\star} $$

for all n≥1 and all finitely supported functions \(\mathfrak{g}:\mathcal{E}_{n}^{*}\to \mathbb{R}.\)

The proof of this lemma is similar to the one of Lemma 5.18 and left to the reader. Instead of employing the transience of the symmetric random walk on ℤd, d≥3, one uses the transience of the symmetric walk which avoids the origin stated in Lemma 5.26 and the estimates of the Green function presented in Proposition 5.25.

8 The Self-diffusion Matrix

In this section, we obtain a finite upper bound and a strictly positive lower bound for the diffusion matrix D(α) seen as a quadratic form.

Proposition 6.23

There exists a strictly positive and finite constant C 0, depending only on p, such that

$$ C_0^{-1} \alpha(1-\alpha) |\mathfrak{a}|^2 \le \mathfrak{a} \cdot D(\alpha) \mathfrak{a} \le C_0 (1- \alpha) |\mathfrak{a}|^2 $$

for all \(\mathfrak{a}\) in \(\mathbb{R}^{d}\).

The proof of this proposition is divided into several steps. We first derive a variational formula for the self-diffusion matrix in the symmetric case, denoted by D s (α).

Lemma 6.24

Assume that p(x)=p(−x). For all \(\mathfrak{a}\) in \(\mathbb{R}^{d}\),

$$ \begin{aligned} \mathfrak{a} \cdot D_s(\alpha) \mathfrak{a} = \inf_{u} \biggl\{ & \sum_{z\in\mathbb{Z}_*^d} s(z) \int \bigl[ 1 - \xi(z) \bigr] \bigl\{ z \cdot \mathfrak{a} + \bigl( T^z u \bigr) (\xi) \bigr\}^2 \, \nu_\alpha^* (d\xi) \\ & + \frac{1}{2} \sum_{x,y\in\mathbb{Z}_*^d} s(y-x) \int \bigl( T^{x,y} u \bigr) (\xi)^2 \, \nu_\alpha^* (d\xi) \biggr\} , \end{aligned} $$

where the infimum is taken over all cylinder functions u.

Proof

Fix \(\mathfrak{a}\) in ℝd. By (6.19), Lemma 6.11 and Lemma 6.12, the asymptotic variance of \(t^{-1/2} Z_{t} \cdot\mathfrak {a}\) is equal to \(\lim_{t \to\infty} \lim_{\lambda\to0} t^{-1} \mathbb {E}_{\nu_{\alpha}^{*}} [ (M_{t} + m^{\lambda}_{t})^{2} ]\). By the representation of the martingales M t , \(m^{\lambda}_{t}\) in terms of the elementary martingales, this expectation is equal to

$$ \sum_{z\in\mathbb{Z}_*^d} s(z) \int \bigl[ 1 - \xi(z) \bigr] \bigl\{ z \cdot \mathfrak{a} + \bigl( T^z u_\lambda \bigr) (\xi) \bigr\}^2 \, \nu_\alpha^* (d\xi) + \frac{1}{2} \sum_{x,y\in\mathbb{Z}_*^d} s(y-x) \int \bigl( T^{x,y} u_\lambda \bigr) (\xi)^2 \, \nu_\alpha^* (d\xi) , $$

where u λ is the solution of the resolvent equation (6.15). Expand the square \(\{ z\cdot\mathfrak{a} + (T^{z} u_{\lambda})(\xi)\}^{2}\). The contribution of the term which does not depend on u λ is equal to \((1-\alpha) \mathfrak{a} \cdot \sigma^{2} \mathfrak{a}\), where σ 2 is the symmetric matrix with entries \(\sigma^{2}_{i,j} = \sum_{x} x_{i} x_{j} p(x)\), since s=p. A change of variables and the symmetry of s(⋅) show that the cross term is equal to

$$ 2 \sum_{z\in\mathbb{Z}_*^d} s(z) \int \bigl[ 1 - \xi(z) \bigr] (z \cdot \mathfrak{a}) \bigl( T^z u_\lambda \bigr) (\xi) \, \nu_\alpha^* (d\xi) = - 4 \langle V_\mathfrak{a} , u_\lambda \rangle_{\nu_\alpha^*} . $$
(6.49)

Therefore, by the explicit formula for the Dirichlet forms obtained right after (6.3), the asymptotic variance of \(t^{-1/2} Z_{t} \cdot\mathfrak{a}\) is equal to

$$ (1-\alpha) \mathfrak{a} \cdot\sigma^2 \mathfrak{a} - 2 \lim_{\lambda\to0} \bigl\{ 2\langle V_\mathfrak{a} , u_\lambda \rangle_{\nu_\alpha^*} - \Vert u_\lambda\Vert_1^2 \bigr\} . $$

By (2.24) and Sect. 2.7.1, both \(\langle V_{\mathfrak{a}} , u_{\lambda}\rangle_{\nu_{\alpha}^{*}}\) and \(\Vert u_{\lambda}\Vert_{1}^{2}\) converge to \(\Vert V_{\mathfrak{a}}\Vert^{2}_{-1}\) as λ↓0. Therefore,

$$ \mathfrak{a} \cdot D_s(\alpha) \mathfrak{a} = (1- \alpha) \mathfrak{a} \cdot\sigma^2 \mathfrak{a} - 2 \Vert V_\mathfrak{a}\Vert^2_{-1}. $$
(6.50)

It remains to recall the variational formula for the \({{\mathcal{H}}_{-1}}\) norm of a cylinder function and to repeat the computations presented at the beginning of the proof in the opposite order to conclude the lemma. □

The second step in the proof of Proposition 6.23 consists in obtaining a lower bound for the self-diffusion coefficient in the symmetric case. For the sake of completeness, we also present an upper bound.

Lemma 6.25

Assume that p(x)=p(−x). There exists a strictly positive constant C 0, depending only on p, such that

$$ C_0 \alpha (1-\alpha) \mathfrak{a} \cdot \sigma^2 \mathfrak {a} \le \mathfrak{a} \cdot D_s( \alpha) \mathfrak{a} \le (1-\alpha) \mathfrak{a} \cdot \sigma^2 \mathfrak{a} $$

for all \(\mathfrak{a}\) in \(\mathbb{R}^{d}\).

Proof

The upper bound follows from (6.50). In view of this identity, to prove the lower bound it is enough to show that

$$ 2 \| V_{\mathfrak{a}}\|^2_{-1} \le C_1 (1-\alpha) \mathfrak{a} \cdot\sigma^2 \mathfrak{a} $$
(6.51)

for some constant C 1<1.

We claim that there exists a finite constant C 0 such that

$$ \sup_f \bigl\{ 2 \langle f , V_\mathfrak{a} \rangle_{\nu_\alpha^*} - \mathcal{D}_0 (f) \bigr\} \le \frac{1}{2} \, (1-\alpha) \, \mathfrak{a} \cdot \sigma^2 \mathfrak{a} , \qquad \sup_f \bigl\{ 2 \langle f , V_\mathfrak{a} \rangle_{\nu_\alpha^*} - \mathcal{D}_\theta (f) \bigr\} \le C_0 \, \alpha (1-\alpha) \, \mathfrak{a} \cdot \sigma^2 \mathfrak{a} , $$
(6.52)

where the supremum is carried over all cylinder functions f.

To prove the first inequality, recall (6.49) and apply Schwarz inequality to obtain that \(2 \langle f, V_{\mathfrak{a}}\rangle_{\nu _{\alpha}^{*}}\) is bounded above by

$$ (1/2) (1-\alpha) \sum_{x\in\mathbb{Z}^d_*} s(x) (x \cdot \mathfrak{a})^2 + (1/2) \sum_{x\in\mathbb{Z}^d_*} s(x) \int\bigl(T^x f\bigr) (\xi)^2 \nu_\alpha^*(d\xi). $$

The first term is equal to \((1/2) (1-\alpha) \mathfrak{a} \cdot \sigma^{2} \mathfrak{a}\), while the second term is equal to \({{\mathcal{D}}_{0}}(f).\) This proves the first inequality in (6.52). The second inequality follows from (6.22) and the strict ellipticity of the matrix σ 2.

We are now in a position to prove (6.51). Since

$$ \Vert V_\mathfrak{a} \Vert_{-1}^2 = \sup_f \bigl\{ 2 \langle f , V_\mathfrak{a} \rangle_{\nu_\alpha^*} - \mathcal{D}_0 (f) - \mathcal{D}_\theta (f) \bigr\} , $$

we may split \(2 \langle f, V_{\mathfrak{a}}\rangle_{\nu_{\alpha}^{*}}\) optimally in two pieces and recall (6.52) to obtain (6.51) with C 1=2C 0 α(1+2C 0 α)−1. Some elementary algebra permits us to conclude. □
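For completeness, here is a sketch of this algebra, assuming the two bounds in (6.52) are \(A = \tfrac{1}{2} (1-\alpha)\, \mathfrak{a}\cdot\sigma^{2}\mathfrak{a}\) (as derived above) and \(B = C_{0}\, \alpha (1-\alpha)\, \mathfrak{a}\cdot\sigma^{2}\mathfrak{a}\). Splitting \(2\langle f, V_{\mathfrak{a}}\rangle_{\nu_{\alpha}^{*}} = 2\gamma \langle f, V_{\mathfrak{a}}\rangle_{\nu_{\alpha}^{*}} + 2(1-\gamma) \langle f, V_{\mathfrak{a}}\rangle_{\nu_{\alpha}^{*}}\) and using the quadratic homogeneity of each piece under \(f \mapsto c f\),

$$ \Vert V_\mathfrak{a} \Vert_{-1}^2 \le \inf_{0 \le \gamma \le 1} \bigl\{ \gamma^2 A + (1-\gamma)^2 B \bigr\} = \frac{A B}{A + B} = \frac{C_0 \alpha}{1 + 2 C_0 \alpha} \, (1-\alpha) \, \mathfrak{a} \cdot \sigma^2 \mathfrak{a} , $$

so that \(2 \Vert V_{\mathfrak{a}} \Vert_{-1}^{2} \le C_{1} (1-\alpha)\, \mathfrak{a}\cdot\sigma^{2}\mathfrak{a}\) with \(C_{1} = 2 C_{0} \alpha (1 + 2 C_{0} \alpha)^{-1} < 1\), which is precisely (6.51).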

Proof of Proposition 6.23

We first derive the upper bound which is easier. In view of (6.19) and Lemma 6.12, to obtain an upper bound on the variance of \(t^{-1/2} Z_{t}\cdot\mathfrak{a}\), we just need to estimate the variances of the martingales M t and m t . On the one hand, by definition (6.18) of the martingale M t and by the explicit formulas for the quadratic variations of the elementary martingales \(M^{z}_{t}\),

$$ \mathbb{E}_{\nu_\alpha^*} \bigl[ M^2_t \bigr] \le C_0 t (1-\alpha) |\mathfrak{a}|^2 . $$

On the other hand, by Lemma 6.11,

$$ \mathbb{E}_{\nu_\alpha^*} \bigl[ m_t^2 \bigr] = \lim_{\lambda\to0} \mathbb{E}_{\nu_\alpha^*} \bigl[ \bigl(m^\lambda_t \bigr)^2 \bigr] = \lim_{\lambda\to0} 2 t \Vert u_\lambda \Vert_1^2, $$

where u λ is the solution of the resolvent equation (6.15). By (2.15) and by Lemma 6.13, this last expression is less than or equal to \(2 t \Vert V_{\mathfrak{a}} \Vert_{-1}^{2} \le C_{0} t \alpha(1-\alpha) |\mathfrak{a}|^{2}\). This concludes the proof of the upper bound.

We now turn to the lower bound. In view of the previous lemma, it is enough to show that the self-diffusion coefficient of the asymmetric exclusion process is bounded below by the self-diffusion coefficient of the symmetric process: D(α)≥D s (α).

Recall formula (6.20) for the diffusion coefficient D(α). Since Ψ is the limit in \(\mathbb{L}^{2}(\nu_{\alpha}^{*})\) of the sequence Ψ λ,

$$ \begin{aligned} \mathfrak{a} \cdot D(\alpha) \mathfrak{a} \ge \inf_{u} \biggl\{ & \sum_{z\in\mathbb{Z}_*^d} s(z) \int \bigl[ 1 - \xi(z) \bigr] \bigl\{ z \cdot \mathfrak{a} + \bigl( T^z u \bigr) (\xi) \bigr\}^2 \, \nu_\alpha^* (d\xi) \\ & + \frac{1}{2} \sum_{x,y\in\mathbb{Z}_*^d} s(y-x) \int \bigl( T^{x,y} u \bigr) (\xi)^2 \, \nu_\alpha^* (d\xi) \biggr\} , \end{aligned} $$

where the infimum is taken over all cylinder functions u. By Lemma 6.24, the right-hand side is just \(\mathfrak{a} \cdot D_{s}(\alpha) \mathfrak{a}\). This concludes the proof of the proposition. □

9 Comments and References

The invariance of the Bernoulli measures \(\nu^{*}_{\alpha}\) for the process seen from the tagged particle was proved by Spitzer (1970). The ergodicity of the measures \(\nu^{*}_{\alpha}\) and the law of large numbers stated in Theorem 6.5 are due to Saada (1987). Kipnis and Varadhan (1986) proved the invariance principle for a tagged particle in symmetric exclusion processes and showed that the diffusion matrix is non-degenerate. This result has been extended to the mean zero case by Varadhan (1995), and to the asymmetric case in dimension d≥3 by Sethuraman et al. (2000). We refer to Sect. 5.8 for comments and references on the duality method, on the estimates of the asymmetric diagonal and off-diagonal terms of the generator, and on the removal of the hard core interaction used in the proof of Lemma 6.19. We followed here Landim et al. (2001), where the reader can find a variational formula for the diffusion coefficient in the asymmetric case.

A central limit theorem for a tagged particle has been proved along the lines described in this chapter in several different contexts.

Nearest Neighbor, Asymmetric Case in Dimension 1

In the totally asymmetric case, p(1)=1, Kesten, cited in Spitzer (1970), observed that under the stationary state \(\mathbb {P}_{\nu^{*}_{\alpha}}\) the position of the tagged particle, Z t , is a Markov process with mean one exponential jump rates. It follows from this remark that Z t /t converges a.s. to 1−α and that \([Z_{t} - (1-\alpha)t]/\sqrt{t}\) converges in distribution to a mean zero Gaussian random variable with variance (1−α)2. Kipnis (1986) extended this result to the asymmetric, nearest neighbor case, p(−1)+p(1)=1, p(1)≠1/2, proving that Z t /t converges a.s. to \((1-\alpha) \mathfrak{m}\) and that \([Z_{t} - (1-\alpha)\mathfrak{m} t]/\sqrt{t}\) converges in distribution to a mean zero Gaussian random variable with a strictly positive variance.
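Kesten's remark lends itself to a quick numerical check. The sketch below, which assumes nothing from the book beyond the model itself, runs a Gillespie simulation of the totally asymmetric exclusion process on a ring (a standard finite-volume stand-in for ℤ) and estimates the tagged particle's velocity, which should be close to 1−α; all function and parameter names are our own illustrative choices.

```python
import random

def tasep_tagged_velocity(n_sites=200, alpha=0.3, t_max=300.0, seed=7):
    """Estimate Z_t/t for the tagged particle in stationary-like TASEP on a ring."""
    rng = random.Random(seed)
    # Bernoulli(alpha)-like initial state, with the tagged particle at site 0.
    occupied = [site == 0 or rng.random() < alpha for site in range(n_sites)]
    tagged, displacement, t = 0, 0, 0.0
    while True:
        # Each particle carries a rate-1 clock; only jumps to an empty
        # right neighbor are allowed, so the total rate is the number
        # of particle-hole pairs.
        movable = [x for x in range(n_sites)
                   if occupied[x] and not occupied[(x + 1) % n_sites]]
        t += rng.expovariate(len(movable))
        if t > t_max:
            return displacement / t_max
        x = rng.choice(movable)          # uniform among the active clocks
        occupied[x] = False
        occupied[(x + 1) % n_sites] = True
        if x == tagged:                  # track the tagged particle's jumps
            tagged = (x + 1) % n_sites
            displacement += 1
```

On a finite ring the stationary velocity is (N−K)/(N−1) for K particles, which is close to 1−α=0.7 for these parameters; a single run of this sketch returns an estimate within a few percent of that value.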

Hard Spheres

Tanemura (1989) considers a reversible system of infinitely many hard balls with the same diameter moving on ℝd by random jumps under the hard core condition. The jump rates depend on the configuration. The author proves a central limit theorem for a tagged particle provided the density of particles is sufficiently small. This latter condition is imposed to ensure the ergodicity of the process. Osada (1998a,b) proves an invariance principle for a tagged particle among infinitely many hard core Brownian balls and shows that the self-diffusion matrix is strictly positive.

Exclusion Processes and Related Models

Carlson et al. (1993a,b) prove an invariance principle for a tagged particle in a one-dimensional exclusion process where a particle jumps from site x to site x+y at rate c(|y|) if all sites between x and x+y are occupied, where c is a non-increasing function. When c decays slowly, they show that the variance of the limiting Brownian motion diverges as the density increases to one.

An invariance principle for a tagged particle in exclusion processes with degenerate rates is proved in Bertini and Toninelli (2004) and Toninelli and Biroli (2004). The self-diffusion matrix is shown to be always positive for cooperative models.

Sethuraman (2007) examines the central limit theorem for a tagged particle in asymmetric zero-range processes.

Mechanical Systems

A classical problem in mathematical physics consists in deriving Brownian motion from Hamiltonian principles. There is a huge bibliography on the subject. The martingale approximation presented in this chapter has been used by Szász and Tóth (1987a,b) to study the following problem. They consider a one-dimensional system of a Brownian particle of fixed mass interacting via elastic collisions with an infinite ideal gas of bath particles of mass 1. Letting the mass of the Brownian particle increase at an appropriate rate as the scaling parameter changes, the authors prove convergence to Brownian motion.

In this book we prove convergence to Brownian motion. Certain models, however, such as the tagged particle in the one-dimensional nearest neighbor symmetric exclusion process or the tagged particle in an exclusion process with long range jumps, exhibit a different asymptotic behavior.

Convergence to Fractional Brownian Motion

Consider the tagged particle in the one-dimensional nearest neighbor symmetric exclusion process. This is the case not covered by Theorem 6.3 and not examined in this chapter. Arratia (1983) proved that \(X_{t}/t^{1/4}\) converges in distribution to a normal random variable with mean 0 and variance \((1-\alpha)/\alpha \sqrt{2/\pi}\). Based on the approach developed by Brox and Rost (1984) to prove the equilibrium fluctuations for the density field, Rost and Vares (1985) showed that the finite dimensional distributions of the tagged particle converge to those of a fractional Brownian motion with Hurst parameter 1/4. Peligrad and Sethuraman (2008) completed the proof of the functional central limit theorem by establishing the tightness of the process. Jara and Landim (2006, 2008) proved the convergence to a Gaussian process of the finite dimensional distributions of the position of the tagged particle starting from a local equilibrium state. In the second article the authors considered an exclusion process with bond disorder. This is one of the few examples where a non-equilibrium fluctuation result has been proved. With similar methods, Gonçalves and Jara (2008) examined the tagged particle in the exclusion process with variable diffusion coefficient.

Subdiffusive Behavior

Shiga (1988) examined the evolution of a tagged particle in exclusion models on ℤ in which more than one particle jumps simultaneously. He proved that the position of the tagged particle is asymptotically Gaussian with variance equal to \(Ct^{1/a}\), where a∈(1,2] depends on the rates at which particles jump.

Convergence to Lévy Processes

Jara (2009) proved that a tagged particle in an equilibrium exclusion process with long range jumps converges to a symmetric, α-stable Lévy process. When the initial state is a local equilibrium, the tagged particle converges to the solution of a martingale problem involving the solution of the hydrodynamic equation.

Rod Dynamics

In the exclusion model, a “rod” is a large particle which occupies several lattice sites. In the symmetric case, Alexander and Lebowitz (1994) prove that the rescaled position of the rod converges to a Brownian motion and determine by computer simulations how the variance depends on the length of the rod. In the asymmetric case, Alexander and Lebowitz (1990, 1994) show that the velocity of the rod increases with its length, which is counter-intuitive since we would expect the size to restrain the displacement.

Einstein Relation

Consider particles evolving in time and observe the displacement of one of them, called the tagged particle. Denote by X(t) its position at time t and assume that this displacement process converges weakly to a centered d-dimensional Brownian motion with covariance matrix D, when space and time are appropriately scaled. Perturb the process by adding a small external force of intensity f felt only by the tagged particle. Under reasonable conditions, the tagged particle is expected to converge in the same time scale to a Brownian motion with the same covariance matrix, but with a drift of the form Mf, where M is called the mobility. The Einstein relation predicts that the diffusivity and the mobility are related by the equation M=(1/2kT)D, where T is the temperature and k the Boltzmann constant.

Lebowitz and Rost (1994) proved the validity of the Einstein relation for a tagged particle in a system of interacting diffusions, for a random walk with random conductances, and for a particle in a random potential whose velocity is governed by a Langevin equation.

Loulakis (2002, 2005) computed the mobility of the tagged particle in the symmetric and in the mean zero asymmetric simple exclusion process in dimension d≥3. While the Einstein relation holds for the symmetric simple exclusion process, it fails to hold in the mean zero asymmetric case.

Landim et al. (1998a) considered on ℤ a tagged particle jumping with rate p>1/2 to the right and rate 1−p to the left, evolving in a sea of particles moving according to the nearest neighbor symmetric simple exclusion process. Denote by X t the position of the tracer particle at time t. Assuming that the tracer particle starts at the origin, while the remaining particles start from a product measure with density α, the authors prove that \(X_{t}/\sqrt{t}\) converges in probability to v α (p), and that

$$ \lim_{p\downarrow1/2}\frac{v_\alpha(p)}{p-(1-p)} = \frac{1-\alpha}{\alpha} \sqrt{\frac{2}{\pi}}\cdot $$

When p=1/2, Arratia (1983) proved that \(X_{t}/t^{1/4}\) converges in distribution to a normal random variable with mean 0 and variance \((1-\alpha)/\alpha \sqrt{2/\pi}\). This establishes the Einstein relation for this process. In contrast with the previous models, the drift is not rescaled in this last example.
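Spelled out, the identity behind this statement is the following (our transcription, writing \(\sigma^{2}_{\alpha}\) for the limiting variance in Arratia's theorem at p=1/2):

$$ \lim_{p\downarrow 1/2}\frac{v_{\alpha}(p)}{p-(1-p)} \;=\; \frac{1-\alpha}{\alpha}\sqrt{\frac{2}{\pi}} \;=\; \sigma^{2}_{\alpha} , $$

so the mobility of the weakly driven tracer coincides with the variance of the undriven one, which is the content of the Einstein relation in this anomalous, \(t^{1/4}\) setting.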

Stability of the Effective Diffusion

Consider a tagged particle in the exclusion process on a d-dimensional torus of length N with K particles. Denote by \(\sigma^{2}_{N,K}\) the effective diffusion of the limiting Brownian motion. Landim et al. (2002) proved that in the symmetric case, \(\sigma^{2}_{N,K}\) converges, as N↑∞ and \(K/N^{d}\to\alpha\), to the effective diffusion of the Brownian motion obtained as the limit of a tagged particle in the exclusion process on ℤd under the stationary measure \(\nu_{\alpha}\). The argument reduces essentially to the proof of the Liouville D-property, discussed in Chap. 1, for symmetric exclusion processes. Jara (2006) extends the result to mean zero exclusion processes in every dimension and to asymmetric exclusion processes in dimension d≥3. Caputo and Ioffe (2003) examined this problem for random walks with random conductances. In the same spirit, Owhadi (2003) investigated the approximation of the effective conductivity of bounded elliptic random operators by periodization.