1 Introduction

Informally, the exclusion process is an interacting particle system consisting of a collection of continuous-time dependent random walks moving on the lattice \({\mathbb {Z}}^d\): A particle at x waits an exponential(1) time and then chooses to displace to \(x+y\) with translation-invariant probability p(y). If, however, \(x+y\) is already occupied, the jump is suppressed and the clock is reset. The process \(\eta _t = \{\eta _t(x): x\in {\mathbb {Z}}^d\} \in \{0,1\}^{{\mathbb {Z}}^d}\) for \(t\ge 0\) is a Markov process which keeps track of the occupied locations on \({\mathbb {Z}}^d\). These systems have been much investigated since the 1970s, when they were introduced as models of queues, traffic, fluid flow, etc. In particular, the model has proved useful and fundamental in the context of statistical physics [16, 17, 29].
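
To fix ideas, the dynamics just described can be sketched in a few lines of illustrative code. Since exponential clocks are memoryless, suppressing a jump and resetting the clock is the same as simply discarding the attempt. The simulation below is only a toy sketch of ours, on a finite ring rather than \({\mathbb {Z}}^d\) and with a truncated kernel; all function and parameter names are our own.

```python
import random

def simulate_exclusion(L, occupied, p, t_max, seed=0):
    """Toy exclusion process on the ring Z/LZ.

    occupied : set of occupied sites; p : dict displacement -> probability.
    Each particle carries a rate-1 exponential clock; by memorylessness,
    a suppressed jump (target occupied) is equivalent to doing nothing."""
    rng = random.Random(seed)
    occ = set(occupied)
    ys = list(p)
    ws = [p[y] for y in ys]
    t = 0.0
    while True:
        t += rng.expovariate(len(occ))      # total rate = number of particles
        if t > t_max:
            return occ
        x = rng.choice(sorted(occ))          # the particle whose clock rings
        y = rng.choices(ys, weights=ws)[0]   # proposed displacement ~ p
        target = (x + y) % L
        if target not in occ:                # exclusion: else jump is suppressed
            occ.remove(x)
            occ.add(target)

# long-range kernel p(y) proportional to |y|^{-(1+alpha)}, truncated at |y| <= 10
alpha = 1.5
raw = {y: abs(y) ** -(1 + alpha) for y in range(-10, 11) if y != 0}
Z = sum(raw.values())
p = {y: w / Z for y, w in raw.items()}

final = simulate_exclusion(L=50, occupied=set(range(0, 50, 2)), p=p, t_max=5.0)
print(len(final))  # 25: particle number is conserved
```

The printout illustrates the 'mass-conservative' nature of the dynamics noted below: the particle count never changes.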

The exclusion model has many invariant measures, being ‘mass-conservative’ with no birth or death. In fact, there is a one parameter family of Bernoulli product invariant measures \(\nu _\rho \), indexed by the ‘mass density’ \(\rho \in [0,1]\) (cf. Chapter VIII in [16]). Here, under \(\nu _\rho \), particles are placed at lattice points \(x\in {\mathbb {Z}}^d\) independently with probability \(\rho \). Throughout the paper, we fix a density \(\rho \in (0,1)\) and begin the process under \(\nu _\rho \).

The study of the fluctuations of occupation times of a vertex, or a local region, or more generally that of additive functionals in exclusion particle systems on \({\mathbb {Z}}^d\), starting from an invariant measure \(\nu _\rho \) has a long history going back to [11] and [13]. When the infinitesimal interactions are ‘finite-range’, that is when p is compactly supported, several interesting dependencies on the dimension d, the density \(\rho \), and the type of underlying single particle transition probability \(p=p(\cdot )\) have been found. In particular, for the asymmetric exclusion model, when \(\rho =1/2\), connections with ‘Kardar–Parisi–Zhang’ (KPZ) class variance orders of the space-time bulk mass density of the process have been made (cf. Sect. 1.4 below).

The purpose of this article is to ask what happens if the system has ‘long-range’ interactions, that is, say, when \(p(\cdot )\) has a long tail, proportional to \(|\cdot |^{-(d+\alpha )}\) for \(\alpha >0\). Such systems are of interest in models with anomalous diffusion, a subject of recent interest (cf. [1, 6] and references therein). In the particle systems context, symmetric long-range exclusion processes have been studied with respect to tagged particles [10]. However, in the asymmetric context, there appears to be little work on long-range processes. We note that the ‘long-range’ systems considered in this article are not those systems, with the same name, where at rate 1 a particle hops to the nearest empty location found by iterating a random walk kernel (cf. [2]).

What are the variance orders and scaled centered limits of the occupation time at a vertex or more general additive functionals, and how do they relate to d, \(\rho \), \(\alpha \) and the structure of p? In particular, one wonders, under asymmetric long-range infinitesimal interactions, if there are still connections with ‘KPZ’ exponent orders, and if so how to interpret them. Can one infer the notion of ‘long-range KPZ’ exponent orders, which to our knowledge have not before been considered?

To discuss these questions and to put our work in better context, we first develop connections with ‘second-class’ particles and \(H_{-1}\) norms in the setting of occupation times at the origin, and then discuss previous ‘finite-range’ literature afterwards.

Let \(\eta _s(0)\) be the indicator of a particle at the origin at time s with respect to the process, and let \(\Gamma (t) = \int _0^t f(\eta _s)ds\) with \(f(\eta ) = \eta (0)-\rho \) be the centered occupation time up to time t. Let \(a^2_t = {\mathbb {E}}_\rho ([\Gamma (t)]^2)\) be the variance starting from \(\nu _\rho \).

1.1 Connection with a ‘second-class’ particle

The variance may be computed from a standard argument. By stationarity of \(\nu _\rho \) and changing variables,

$$\begin{aligned} a^2_t &= 2\int _0^t \int _0^s {\mathbb {E}}_\rho [f(\eta _u)f(\eta _0)]\,du\,ds\\ &= 2t\int _0^t \left( 1- s/t\right) {\mathbb {E}}_\rho [f(\eta _s)f(\eta _0)]\,ds. \end{aligned}$$

Now, the covariance, or ‘two-point’ function as it is sometimes called, may be rewritten: since \(\rho = {\mathbb {P}}_\rho (\eta _s(0)=1)\) for \(s\ge 0\), by Bayes’s formula,

$$\begin{aligned} {\mathbb {E}}_\rho [f(\eta _s)f(\eta _0)] &= {\mathbb {E}}_\rho [\eta _s(0)\eta _0(0)] - \rho ^2\\ &= \rho \left\{ {\mathbb {E}}_\rho [\eta _s(0)|\eta _0(0)=1] - {\mathbb {E}}_\rho [\eta _s(0)]\right\} \\ &= \rho (1-\rho )\left\{ {\mathbb {E}}_\rho [\eta _s(0)|\eta _0(0)=1] - {\mathbb {E}}_\rho [\eta _s(0)|\eta _0(0)=0]\right\} . \end{aligned}$$

From the basic coupling, which compares two exclusion systems starting from \(\eta _0\) and \(\eta '_0\), a configuration which ‘flips’ the value at the origin, that is \(\eta '_0(x) = \eta _0(x)\) for \(x\ne 0\) and \(\eta '_0(0)= 1-\eta _0(0)\), we can track the location of the discrepancy \(R_s\), initially at the origin, for times \(s\ge 0\). The dynamics of the discrepancy, or ‘second-class’ particle, is that it moves from location x to \(x+y\) at time s with rate \(p(y)(1-\eta _s(x+y)) + p(-y)\eta _s(x+y)\). The interpretation is that it jumps as any other particle in the system, corresponding to the part \(p(y)(1-\eta _s(x+y))\); but, also it must move if one of the other particles jumps to its location, corresponding to the part \(p(-y)\eta _s(x+y)\). Hence,

$$\begin{aligned} {\mathbb {E}}_\rho [\eta _s(0)|\eta _0(0)=1] - {\mathbb {E}}_\rho [\eta _s(0)|\eta _0(0)=0] = \bar{{\mathbb {P}}}_\rho (R_s = 0) \end{aligned}$$

where \(\bar{{\mathbb {P}}}_\rho \) is the coupled measure. See Section VIII.2 in [16] for more discussion on the basic coupling.

Putting these observations together, we have

$$\begin{aligned} a^2_t= 2t\int _0^t \left( 1-s/t\right) \bar{{\mathbb {P}}}_\rho (R_s = 0) ds, \end{aligned}$$

roughly t times the expected occupation time of the second-class particle at the origin.
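
The change of variables behind the first display can be checked numerically. The following sketch (our own, with a stand-in covariance \(c(u)=e^{-u}\) in place of \({\mathbb {E}}_\rho [f(\eta _u)f(\eta _0)]\)) confirms that \(2\int _0^t\int _0^s c(u)\,du\,ds = 2t\int _0^t(1-s/t)c(s)\,ds\), which for this choice equals \(2(t-1+e^{-t})\) in closed form.

```python
import math

def lhs(c, t, n=4000):
    # 2 * \int_0^t \int_0^s c(u) du ds, trapezoid rule for both integrals
    h = t / n
    inner = [0.0]                      # inner[k] ~ \int_0^{kh} c(u) du
    for k in range(1, n + 1):
        inner.append(inner[-1] + 0.5 * h * (c((k - 1) * h) + c(k * h)))
    return 2 * sum(0.5 * h * (inner[k] + inner[k + 1]) for k in range(n))

def rhs(c, t, n=4000):
    # 2t * \int_0^t (1 - s/t) c(s) ds
    h = t / n
    g = lambda s: (1 - s / t) * c(s)
    return 2 * t * sum(0.5 * h * (g(k * h) + g((k + 1) * h)) for k in range(n))

c = lambda u: math.exp(-u)            # stand-in two-point function
t = 3.0
print(abs(lhs(c, t) - rhs(c, t)) < 1e-3,
      abs(rhs(c, t) - 2 * (t - 1 + math.exp(-t))) < 1e-3)  # True True
```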

1.2 Connection with ‘\(H_{-1}\)’ norms

Instead of dealing directly with \(a^2_t\), one might consider the Laplace transform \(L_\lambda = \int _0^\infty e^{-\lambda t} a^2_t dt\) and its behavior as \(\lambda \downarrow 0\). By a formal Tauberian ansatz, \(t^{-1}L_{t^{-1}} \sim t^{-1}\int _0^t a^2_u du \sim a^2_t\). Moreover, the object \(L_\lambda \), after two integrations by parts, may be written as

$$\begin{aligned} L_\lambda &= \frac{2}{\lambda ^2}\int _0^\infty e^{-\lambda t} {\mathbb {E}}_\rho [f(\eta _t)f(\eta _0)]\,dt\\ &= \frac{2}{\lambda ^{2}}{\mathbb {E}}_\rho [f(\eta _0)u_\lambda (\eta _0)] \end{aligned}$$

where \(u_\lambda (\eta ) = \int _0^\infty e^{-\lambda t}T_tf(\eta )\,dt = (\lambda - {{\mathscr {L}}})^{-1}f(\eta )\) and \(T_t\) and \({{\mathscr {L}}}\) are the process semigroup and generator respectively. The term \(\{{\mathbb {E}}_\rho [f(\eta ) (\lambda - {{\mathscr {L}}})^{-1}f(\eta )]\}^{1/2}\) is well defined for \(f\in {\mathbb {L}}^2(\nu _\rho )\) and can be written in variational form, in terms of \(H_1\) and \(H_{-1}\) (semi-)norms and the symmetric and anti-symmetric decomposition of \({{\mathscr {L}}} = {{\mathscr {S}}} + {{\mathscr {A}}}\), which may be leveraged in bounding \(L_\lambda \). Moreover, a useful test for when \(a^2_t = O(t)\) is that the \(H_{-1}\) norm \(\Vert f\Vert _{-1}<\infty \). See Sect. 3.1 for a more comprehensive treatment.
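
As a sanity check of the two integrations by parts, the following toy computation of ours again takes the stand-in covariance \(c(t)=e^{-t}\), for which \(a^2_t = 2(t-1+e^{-t})\) in closed form, and compares the direct Laplace transform with the resolvent-type expression \(\frac{2}{\lambda ^2}\int _0^\infty e^{-\lambda t}c(t)\,dt\).

```python
def L_direct(lam):
    # Laplace transform of a_t^2 = 2(t - 1 + e^{-t}), term by term:
    # L[t] = 1/lam^2, L[1] = 1/lam, L[e^{-t}] = 1/(lam + 1)
    return 2 * (1 / lam**2 - 1 / lam + 1 / (lam + 1))

def L_resolvent(lam):
    # (2/lam^2) * \int_0^infty e^{-lam t} c(t) dt with c(t) = e^{-t}
    return (2 / lam**2) / (lam + 1)

for lam in (0.01, 0.1, 1.0):
    print(round(L_direct(lam) / L_resolvent(lam), 9))  # 1.0 each time
```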

1.3 Finite-range models: symmetric and mean-zero transitions

When p is symmetric, \(p(\cdot ) = p(-\cdot )\), the transition rates of the second-class particle from x to \(x+y\) reduce to \(p(y)(1-\eta _s(x+y)) + p(-y)\eta _s(x+y) = p(y)\). Hence, marginally, the second-class particle moves as a symmetric random walk. In this case, \(\bar{{\mathbb {P}}}_\rho (R_s=0)\) can be explicitly estimated. When p is finite-range, along similar lines, it was shown in [11] that

$$\begin{aligned} a^2_t = \left\{ \begin{array}{ll} O(t^{3/2}) &{} \hbox {in }d=1\\ O(t\log (t)) &{} \hbox {in }d=2\\ O(t) &{} \hbox {in }d\ge 3.\end{array}\right. \end{aligned}$$

Moreover, in the above scales, the functional CLT in the uniform topology was shown in [11, 23]:

$$\begin{aligned} \frac{1}{a_N}\Gamma (Nt) \ \xrightarrow [N\rightarrow \infty ]{} \ \left\{ \begin{array}{ll} {\mathbb {B}}_{3/4}(t) &{} \hbox {in }d=1\\ {\mathbb {B}}(t) &{} \hbox {in }d\ge 2. \end{array}\right. \end{aligned}$$
(1.1)

Here, \({\mathbb {B}}_{H}\) is fractional Brownian motion with Hurst parameter H and \({\mathbb {B}}={\mathbb {B}}_{1/2}\) is standard Brownian motion.

We remark that similar claims on the Laplace transform \(L_\lambda \) hold, by different methods, when p is finite-range, asymmetric and mean-zero, \(\sum xp(x)=0\). Also, corresponding CLTs and scaling limits have been shown [9, 23, 30].

1.4 Finite-range models: asymmetric transitions and KPZ exponents

When p is finite-range and has a drift, \(m=\sum xp(x) \ne 0\), although the second-class particle \(R_s\) is not a random walk, it has a mean drift of \((1-2\rho )m\) under \(\bar{{\mathbb {P}}}_\rho \) (cf. [3] and references therein). In analogy with random walks, the second-class particle should be transient exactly when \(\rho \ne 1/2\). Partly based on this intuition, it was proved for \(\rho \ne 1/2\) in \(d\ge 1\) that \(a^2_t = O(t)\), and also the functional CLT \(N^{-1/2}\Gamma (Nt) \Rightarrow {\mathbb {B}}(t)\) (cf. [7, 22, 23]).

However, now fix \(\rho =1/2\) for the remainder of the subsection. This case interestingly connects with ‘Kardar–Parisi–Zhang’ (KPZ) behavior and exponents of driven diffusive systems. In this situation, the process macroscopic characteristic speed \((1-2\rho )\sum xp(x)\) vanishes. By the same sort of calculation presented above in Sect. 1.1, the variance of the second-class particle can be written in terms of the ‘diffusivity’ of the system:

$$\begin{aligned} \rho (1-\rho )\bar{{\mathbb {E}}}_\rho \left| R_t\right| ^2 = \sum |x|^2 {\mathbb {E}}_\rho \left[ (\eta _t(x) - \rho )(\eta _0(0) - \rho )\right] =: \ D(t) \end{aligned}$$

which in \(d=1\) is related to the variance of the ‘height’ function for an associated interface which is in the KPZ class (cf. Chapter 5 in [29] and [21] for definition of the height function and more discussion).

In [5], it was conjectured that

$$\begin{aligned} D(t) = \left\{ \begin{array}{ll} O(t^{4/3}) &{}\quad \hbox {in }d=1\\ O(t(\log (t))^{2/3}) &{}\quad \hbox {in }d=2\\ O(t) &{}\quad \hbox {in }d\ge 3.\end{array}\right. \end{aligned}$$

This has been proved, in Tauberian form, by various techniques and discussed in more detail in [4, 8, 15, 20, 21], and [31].

Then, allowing a Gaussian ansatz, \(\bar{{\mathbb {P}}}_\rho (R_t = 0)\) should decay as \(O(t^{-2/3})\) in \(d=1\), \(O(t^{-1}(\log (t))^{1/3})\) in \(d=2\), and \(O(t^{-d/2})\) in \(d\ge 3\). Although these local limit type estimates have not been shown, they would imply that the occupation time variance should satisfy the same estimates as for D(t) above. However, in \(d\ge 3\), when \(\rho =1/2\), the conclusion \(a^2_t = O(t)\) is known [23, 26].
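
To see how the Gaussian ansatz feeds into the variance in \(d=1\), one can plug the decay \(\bar{{\mathbb {P}}}_\rho (R_s = 0) \approx s^{-2/3}\) into the formula \(a^2_t= 2t\int _0^t(1-s/t)\bar{{\mathbb {P}}}_\rho (R_s = 0)\,ds\) of Sect. 1.1; for the pure power \(s^{-2/3}\) the exact answer is \(\tfrac{9}{2}t^{4/3}\). A numerical sketch of ours (the substitution \(s=v^3\) removes the integrable singularity at 0):

```python
def a2(t, n=10000):
    # 2t * \int_0^t (1 - s/t) s^{-2/3} ds, computed after substituting s = v^3:
    # the integrand becomes 3 (1 - v^3/t) on [0, t^{1/3}], which is smooth
    T = t ** (1 / 3)
    h = T / n
    f = lambda v: 3 * (1 - v ** 3 / t)
    return 2 * t * sum(0.5 * h * (f(k * h) + f((k + 1) * h)) for k in range(n))

for t in (10.0, 100.0, 1000.0):
    print(round(a2(t) / t ** (4 / 3), 4))  # 4.5 each time: a_t^2 = O(t^{4/3})
```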

Although the conjecture in \(d\le 2\) for the order of \(a^2_t\) has not been substantiated, the following \(H_{-1}\) estimates have been found in [7] and [25]: As \(\lambda \downarrow 0\),

$$\begin{aligned} \begin{aligned} C\lambda ^{-9/4}&\le L_\lambda \ \le \ C^{-1}\lambda ^{-5/2} \quad \hbox { in }d=1\\ C\lambda ^{-2}\log |\log (\lambda )|&\le L_\lambda \ \le \ C^{-1}\lambda ^{-2}|\log (\lambda )| \quad \hbox { in }d=2 \end{aligned} \end{aligned}$$
(1.2)

with an improvement in the second line lower bound of \(C\lambda ^{-2}|\log (\lambda )|^{1/2}\) when the p-drift, \(\sum xp(x)\), lies on a coordinate axis. These Tauberian bounds formally imply that

$$\begin{aligned} Ct^{5/4} &\le a^2_t \ \le \ C^{-1}t^{3/2} \quad \hbox { in }d=1\\ Ct\log (\log (t)) &\le a^2_t \ \le \ C^{-1}t\log (t) \quad \hbox { in }d=2. \end{aligned}$$
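
The formal Tauberian correspondence used here is \(\int _0^\infty e^{-\lambda t}t^{\gamma }\,dt = \Gamma (\gamma +1)\lambda ^{-(\gamma +1)}\), so that \(a^2_t \approx t^{5/4}\) matches \(L_\lambda \approx \lambda ^{-9/4}\) and \(t^{3/2}\) matches \(\lambda ^{-5/2}\). A quick numerical check of the \(\gamma =5/4\) case (our own sketch):

```python
import math

def laplace_power(gamma, lam, n=200000, t_max=2000.0):
    # trapezoid approximation of \int_0^infty e^{-lam t} t^gamma dt
    # (the tail beyond t_max is negligible for lam = 0.5)
    h = t_max / n
    f = lambda t: math.exp(-lam * t) * t ** gamma
    return sum(0.5 * h * (f(k * h) + f((k + 1) * h)) for k in range(n))

gamma, lam = 5 / 4, 0.5                               # a_t^2 ~ t^{5/4}
exact = math.gamma(gamma + 1) * lam ** -(gamma + 1)   # Gamma(9/4) lam^{-9/4}
print(round(laplace_power(gamma, lam) / exact, 4))    # 1.0
```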

1.5 Finite-range models: General additive functionals and \(H_{-1}\) norms

Besides the occupation function, one can consider the additive functional \(\Gamma _f(t) = \int _0^t f(\eta _s)ds\) for a general class of ‘local’ mean-zero functions, \({\mathbb {E}}_\rho [f]=0\). That is, by ‘local’, we mean f is compactly supported: \(f(\eta )\) depends only on the variables \(\eta (x)\) for \(x\in \Lambda \subset {\mathbb {Z}}^d\) and \(\Lambda \) is a finite set. Let \(\sigma ^2_t(f) = {\mathbb {E}}_\rho (\Gamma _f(t))^2\).

One may ask for which functions f is \(\sigma ^2_t(f) = O(t)\), that is, the variance is of ‘diffusive’ order. When p is finite-range, there is a dimension-dependent characterization of such f’s depending on the ‘degree’ or ‘smoothness’ of the functions (cf. Proposition 2.1). In particular, for the symmetric process, we have seen \(f(\eta ) = \eta (0)-\rho \) in dimensions \(d\le 2\) is not smooth enough.

When \(\sigma ^2_t(f)\) is not ‘diffusive’, divergence orders have been found for symmetric and mean-zero processes (cf. Proposition 2.2) and bounds for the asymmetric model (cf. Proposition 2.3).

Functional CLTs in diffusive scale, converging to Brownian motion, for \(\Gamma _f(t)\) when \(\sigma ^2_t(f) = O(t)\) have been shown (cf. [13, 19, 27], and [25] and references therein for statements and more discussion). When p is mean-zero and f is a degree 1 function (such as the occupation function \(f(\eta ) = \eta (0)-\rho \)), in \(d=1\), a functional CLT in anomalous scale has been proved [9]. Otherwise, characterizing the fluctuations of \(\Gamma _f(t)\) is open.

1.6 Long-range transitions and main results

We will take p to be ‘long-range’ if its symmetrization \(2^{-1}(p(x)+p(-x))\) is proportional to \(|x|^{-(d+\alpha )}\) for \(\alpha >0\). This natural choice introduces the parameter \(\alpha \) which controls the order of moments allowed. We also consider several types of asymmetries, both ‘short’ and ‘long’, detailed in the next section.

When \(\alpha >2\), p has two moments; in this case, we show that the asymptotics of the occupation time \(\Gamma (\cdot )\) behaves as if p were finite-range (cf. Theorem 2.4). Also, when \(0<\alpha <1\) or \(d\ge 3\), the random walk generated by p is transient [28]; in this case, we prove that the long time behavior of \(\Gamma (\cdot )\) is diffusive (cf. part of Theorems 2.6, 2.11, 2.12).

Our main interest is when \(1\le \alpha \le 2\) and \(d\le 2\). When p is symmetric, one of our main results is to derive a fractional Brownian motion scaling limit in \(d=1\) for \(\Gamma (\cdot )\) in scale \(a_t = O(t^{1-(2\alpha )^{-1}})\), corresponding to Hurst parameter \(H= 1-(2\alpha )^{-1}\). This microscopic derivation of a collection of fractional BMs, in a range of Hurst parameters, generalizes the \(H=3/4\) limit when p is finite-range. In \(d\le 2\), other additive functional variance divergence orders and CLTs are also found (cf. Theorems 2.8, 2.9, and 2.11). We also observe that most of these results hold for a class of long-range mean-zero processes as well.
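
The match between the scale and the Hurst parameter is elementary arithmetic: \(\mathrm{Var}({\mathbb {B}}_H(t)) = t^{2H}\), so a variance of order \(t^{2-1/\alpha }\) forces \(H = 1-(2\alpha )^{-1}\), and \(\alpha =2\) recovers \(H=3/4\). A small check (our own illustration):

```python
def hurst(alpha):
    # Hurst parameter matching the occupation-time scale a_t = t^{1 - 1/(2 alpha)}
    return 1 - 1 / (2 * alpha)

def fbm_cov(s, t, H):
    # covariance of fractional Brownian motion B_H
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))

for alpha in (1.25, 1.5, 1.75, 2.0):
    H = hurst(alpha)
    # Var(B_H(t)) = t^{2H} agrees with the variance order t^{2 - 1/alpha}
    assert abs(fbm_cov(3.0, 3.0, H) - 3.0 ** (2 - 1 / alpha)) < 1e-12

print(hurst(2.0))  # 0.75: the finite-range Hurst parameter 3/4
```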

However, when p is asymmetric with a ‘drift’ (an example is when p(x) is proportional to \(\mathbf{1}_{(x_i>0: 1\le i\le d)}|x|^{-d-\alpha }\)), other new phenomena appear. In particular, in \(d=1\) when \(\rho =1/2\), we observe a curious transition point at \(\alpha = 3/2\). When \(\alpha \le 3/2\), we show the variance \(a^2_t\) is of the same Tauberian order as if p were symmetric. In particular, when \(\alpha =3/2\), we prove \(a^2_t = O(t^{4/3})\) in the Tauberian sense (cf. Theorem 2.14).

However, as \(\alpha \) increases, the jump distribution is less heavy-tailed, and one expects less mixing: the process becomes more volatile and more susceptible to ‘traffic jams’. In fact, we propose for a large class of exclusion systems that \(L_\lambda \) and \(a^2_t\) should increase as \(\alpha \) increases. In support, we verify this intuition for mean-zero type processes (cf. Theorem 2.18).

Moreover, we conjecture, from (1) this intuition, (2) the statement \(a^2_t = O(t^{4/3})\), in the Tauberian sense, when \(\alpha = 3/2\) and \(\rho =1/2\), (3) the result \(a^2_t\) is of the same Tauberian order as for finite-range processes when \(\alpha >2\), and (4) the belief for \(d=1\) finite-range processes with drift that also \(a^2_t = O(t^{4/3})\), that we have \(a^2_t = O(t^{4/3})\) in the Tauberian sense for all \(\alpha \ge 3/2\) in \(d=1\) (cf. Conjecture 2.17). We note superdiffusive lower and upper bounds, consistent with this conjecture, are given in Theorem 2.14.

We remark the apparent dichotomy in the behavior of \(a^2_t\) when variously \(\alpha \le 3/2\) and \(\alpha >3/2\) in \(d=1\) for \(\rho =1/2\) suggests a novel extension of the scope of the KPZ class behavior to long-range models. This topic and supporting results are discussed more in Sects. 2.5, 2.6.

In dimension \(d=2\) when \(\rho =1/2\), analogously, we show for \(\alpha \le 2\) that \(a^2_t\) is of the same Tauberian order as in the symmetric case (Theorems 2.12, 2.15). Here, it seems, the KPZ class behavior does not extend to \(\alpha \le 2\). As in the finite-range case, what is expected for \(\alpha >2\) is that \(a^2_t = O(t(\log (t))^{2/3})\).

In addition, when \(\rho \ne 1/2\), since the process characteristic speed drifts away from the origin, one expects \(a^2_t = O(t)\). This is indeed the case and stated in Theorem 2.12 for almost all values of \(\alpha \) and \(d\le 2\).

We also consider the variance of \(\Gamma _f(t)\) for general local functions f, and find an \(\alpha \), \(\rho \), d-dependent characterization of when \(\sigma ^2_t(f) = O(t)\) (Theorems 2.6, 2.12), and also exceptional orders (Theorems 2.14, 2.15). Corresponding functional CLTs are also given for the symmetric model (cf. Theorem 2.11) and remarked upon for the asymmetric process (cf. Remark 2.13).

The methods of the article make use of a combination of martingale CLT, ‘duality’, and \(H_{-1}\) norm variational formula arguments. In particular, part of the arguments nontrivially generalize, to long-range models, the works [7, 11, 23] in the finite-range setting. On the other hand, some new tools such as the sector inequality in Lemma 4.2, which may be of interest itself, are developed.

1.6.1 Notation and plan of the article

The canonical basis of \({\mathbb {R}}^d\) and coordinates of a vertex \(x\in {\mathbb {R}}^d\) are denoted by \(e_i\) and \(x_i\) for \(1\le i\le d\) respectively. The usual scalar product between x and y in \({\mathbb {R}}^d\) is denoted by \(x \cdot y\) and the corresponding norm by \(| \cdot |\).

Define the relations ‘\(\approx \)’, ‘\(\sim \)’, ‘\(\preccurlyeq \)’, ‘\(\succcurlyeq \)’ and note the usual conventions ‘\(O(\cdot )\)’ and ‘\(o(\cdot )\)’ between sequences \(a(s) \ge 0\) and \(b(s)>0\):

  • \(a(s) \approx b(s)\) when \(0< \liminf _{s \rightarrow \infty } a(s)/b(s)\) and \(\limsup _{s \rightarrow \infty } a(s)/b(s) <\infty \),

  • \(a(s) \sim b(s)\) when \(\lim _{s \rightarrow \infty } a(s)/b(s)\) exists and \(0 <\lim _{s \rightarrow \infty } a(s)/b(s) <\infty \),

  • \(a(s)= O(b(s))\) when \(\limsup _{s \rightarrow \infty } a(s)/b(s) < \infty \),

  • \(a(s)=o(b(s))\) when \(\limsup _{s \rightarrow \infty } a(s)/b(s)=0\),

  • \(a(s)\preccurlyeq b(s)\) when \(a(s) = O(b(s))\), and

  • \(a(s)\succcurlyeq b(s)\) when \(b(s) = O(a(s))\).

Sometimes, the parameter s will denote the time t which tends to infinity. At other times, \(s = \lambda \), a parameter we will send to 0, and the relations above are defined accordingly.

In the next section, we more carefully define the model and discuss the results. In Sect. 3, we give notions of \(H_{-1}\) norms, ‘duality’ with respect to the (asymmetric) exclusion process, ‘free particle’ approximations, and other basic estimates useful in the proofs. In Sect. 4, finite and long-range \(H_{-1}\) norm comparison results, as well as the monotonicity result Theorem 2.18, are proved. In Sects. 5 and 6, we prove the main results for symmetric and asymmetric long-range exclusion processes respectively. Finally, in the Appendix, some more technical computations are collected.

2 Definitions and main results

Let \(\alpha >0\) and let \(p(\cdot )\) be a transition function on \({\mathbb {Z}}^d\) such that for any \(y \in {\mathbb {Z}}^d\),

$$\begin{aligned} p(y)= \frac{\gamma (y)}{|y|^{d +\alpha }}, \quad \gamma (y)= \sum _{\sigma =\pm } \; c\sum _{i=1}^d b_i^\sigma (y) \, \mathbf{1}_{\sigma (y \cdot e_i) >0} \end{aligned}$$

and \(p(0)=0\). Here, c is a normalization constant and \(\{b_i^{\pm }(y): 1\le i\le d, y\in {\mathbb {Z}}^d\}\) are nonnegative real numbers, which are bounded \(b_i^{\pm }(\cdot )\le \bar{b}\), such that \((p(\cdot ) + p(-\cdot ))/2\) is irreducible.
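
As a concrete illustration of this kernel (ours, in \(d=1\) with constant \(b^{\pm }\) as in the (LA) case below, and with a finite truncation standing in for the full normalization over \({\mathbb {Z}}\)):

```python
def build_p(alpha, b_plus, b_minus, M):
    # p(y) = c * gamma(y) / |y|^{1 + alpha} on Z \ {0}, truncated at |y| <= M;
    # gamma(y) = b_plus for y > 0 and b_minus for y < 0 (constant b_i^{+/-})
    raw = {y: (b_plus if y > 0 else b_minus) / abs(y) ** (1 + alpha)
           for y in range(-M, M + 1) if y != 0}
    c = 1 / sum(raw.values())          # normalization constant
    return {y: c * w for y, w in raw.items()}

p = build_p(alpha=1.5, b_plus=1.5, b_minus=0.5, M=2000)
assert abs(sum(p.values()) - 1.0) < 1e-9       # p is a probability
assert p[3] > p[-3]                            # asymmetry from b+ > b-
print(round(p[1000] / p[500], 4))  # 0.1768 = 2^{-(1+alpha)}: the tail exponent
```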

The symmetric and antisymmetric parts of p are denoted respectively by s and a where \(s(y)= (p(y)+p(-y))/2\) and \(a(y)= (p(y)-p(-y))/2\). The mean of p, equal to the mean of a, is defined by \(m=\sum _{y \in {\mathbb {Z}}^d} y p(y)\in {\mathbb {R}}^d\) if it converges.

We now distinguish several types of natural asymmetric long-range probabilities:

(LA):

(Long asymmetric range) There are constants \(b^\sigma _i\ge 0\) such that \(b^\sigma _i(y) \equiv b^\sigma _i\),

$$\begin{aligned} \min _{1\le i \le d}b^+_i\wedge b^-_i>0 \quad \hbox {and}\quad \sum _{i=1}^d |b_i^+ - b_i^-| > 0. \end{aligned}$$
(SA):

(Short asymmetric range) There is an \(R<\infty \) and \(b_i>0\) such that \(b^+_i(y) = b^-_i(y) = b_i\) for \(|y|> R\), and \(\sum _{|y|\le R}yp(y) \ne 0\). Here, a is finite-range, but jumps of all large sizes are supported by p.

(NNA):

(Nearest-neighbor asymmetry) A particular case of the short asymmetric range probability is when \(R=1\) and the asymmetry is nearest-neighbor.

(MZA):

(Mean-zero asymmetry) Another case of the short asymmetric range probability is when \(\sum _{|y|\le R}yp(y)=0\), but p is not symmetric.

We will on occasion make comparisons with respect to the more studied ‘finite-range’ jump probability, for which symmetric, mean-zero asymmetric and asymmetric versions can be analogously defined.

(FR):

(Finite range) There is an \(R<\infty \) such that for all \(1\le i\le d\), \(b^+_i(y)=b^-_i(y) = 0\) for \(|y|>R\). As before, to avoid sublattice periodicity, we assume the symmetric part s is irreducible.

(FR-NN):

(Nearest-neighbor) A case of the finite-range probability is when \(R=1\). Here, necessarily \(s(e_i)>0\) for \(1\le i\le d\).

Most of our focus, to make a choice, is on the long asymmetric range model (LA), and for the remainder of the article p denotes such a probability. However, some comparisons with other types of probabilities are made in Sect. 2.2. In the following, quantities with respect to the different types of probabilities will be denoted with corresponding superscripts; in this respect, (S) signifies the jump probability is s.

The corresponding d-dimensional exclusion process is a Markov process \(\{\eta _t\, ; \, t \ge 0\}\), with state space \(\Omega =\{0,1\}^{\mathbb {Z}^d}\), whose generator acts on local functions \(f:\Omega \rightarrow {\mathbb {R}}\) as

$$\begin{aligned} \mathscr {L} f(\eta ) = \sum _{x,y \in {\mathbb {Z}}^d} p(y)\eta (x)(1-\eta (x+y)) \nabla _{x,x+y} f(\eta ), \end{aligned}$$

where \(\nabla _{x,x+y}f(\eta ) = f(\eta ^{x,x+y})-f(\eta )\) and

$$\begin{aligned} \eta ^{x,x+y}(z) = \left\{ \begin{array}{l@{\quad }l} \eta (x+y), &{} z=x\\ \eta (x), &{} z=x+y\\ \eta (z), &{} z\ne x,x+y.\\ \end{array} \right. \end{aligned}$$

We will denote by \(T_t\) the associated semigroup.
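
The action of the generator on a local function can be spelled out on a finite configuration. The sketch below (our own; sites absent from the dictionary are treated as empty, which suffices for local f and a truncated p) implements the swap \(\eta ^{x,x+y}\) and the sum defining \({\mathscr {L}}f\).

```python
def swap(eta, x, y):
    # eta^{x, x+y}: exchange the occupation numbers at x and x + y
    out = dict(eta)
    out[x], out[x + y] = eta.get(x + y, 0), eta.get(x, 0)
    return out

def generator(f, eta, p):
    # (Lf)(eta) = sum_{x,y} p(y) eta(x)(1 - eta(x+y)) [f(eta^{x,x+y}) - f(eta)]
    return sum(p[y] * eta.get(x, 0) * (1 - eta.get(x + y, 0))
               * (f(swap(eta, x, y)) - f(eta))
               for x in eta for y in p)

eta = {0: 1, 1: 1, 2: 0, 3: 0}           # particles at 0 and 1
p = {1: 0.75, -1: 0.25}                  # nearest-neighbor asymmetric example
assert swap(swap(eta, 1, 1), 1, 1) == eta            # the swap is an involution
f = lambda e: e.get(0, 0)                            # occupation at the origin
print(generator(f, eta, p))  # -0.25: only the jump 0 -> -1 changes f
```

Note the jump from 0 to 1 is excluded since site 1 is occupied, so the only contribution comes from the particle at 0 jumping left at rate p(-1).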

As mentioned in the introduction, for every \(\rho \in [0,1]\), the Bernoulli product measure \(\nu _\rho \) with density \(\rho \) is invariant for \(\{ \eta _t \, ; \, t\ge 0\}\). Let \({{\mathbb {P}}}_{\rho }\) be the law of the process \(\{\eta _t\, ; \, t \ge 0\}\) starting from \(\nu _{\rho }\). We denote by \({{\mathbb {E}}}_\rho \), as it will be clear by context, the expectation with respect to both \(\nu _\rho \) and \({\mathbb {P}}_\rho \). We will also use the notation \(\langle f,g\rangle _\rho := {\mathbb {E}}_\rho [fg]\).

One may compute that the \({\mathbb {L}}^2(\nu _\rho )\) adjoint \({{\mathscr {L}}}^*\) itself is an exclusion generator with reversed jump probability \(p^*(\cdot ) = p(-\cdot )\). When \(p=s\), the \({\mathbb {L}}^2(\nu _\rho )\) process generator \({{\mathscr {L}}}\) and semigroup \(T_t\) are reversible. The construction and basic properties of this Markov process can be found in Chapter I, VIII in [16]; its extension to \({\mathbb {L}}^2(\nu _\rho )\), with a core including local functions, follows from the development in Section IV.4 in [16].

Recall the additive functional for this process

$$\begin{aligned} \Gamma _f(t)=\int _0^{t} f(\eta _s)ds, \end{aligned}$$

where \(f: \Omega \rightarrow {\mathbb {R}}\) is a local function, and its variance \(\sigma _t^2 (f)\) with respect to the stationary measure \(\nu _{\rho }\) with density \(\rho \). We now define the ‘limiting variance’ \(\sigma ^2 (f)\) by

$$\begin{aligned} \sigma ^2 (f) = \limsup _{t \rightarrow \infty } t^{-1} \sigma _t^2 (f). \end{aligned}$$

A local function f such that \(\sigma ^2(f)<\infty \), or equivalently \(\sigma _t^2 (f) \le C t\) for a constant \(C>0\) independent of t, is said to be admissible.

Define the Laplace transform of \(\sigma ^2_t(f)\) as \(L_{f} (\lambda ) = \int _0^{\infty } e^{-\lambda t} \sigma _t^2 (f) dt\). We observe that if f is admissible then \(\lambda ^{2}L_f(\lambda )\) is uniformly bounded as \(\lambda \downarrow 0\).

The behavior of the variance \(\sigma _t^2 (f)\) and of \(L_f(\lambda )\) is much related to the degree of f. Define \(\mu _f(z) =\int f \,d\nu _{z}\), the mean of f with respect to \(\nu _z\). For \(i\ge 1\), the \(i^\mathrm{{th}}\) derivative of a function g is denoted by \(g^{(i)}\).

Definition 2.1

Let \({{\mathrm{deg}}}(f)\) be the degree of the local function f, with respect to \(\nu _\rho \), that is, the integer \(i\ge 0\) such that \(\mu _f^{(i)}(\rho ) \ne 0\) and \(\mu _f^{(j)}(\rho )=0\) for any \(j<i\). If \(\mu _f^{(j)}(\rho )=0\) for all \(j \in {\mathbb {N}}_0\), we say \({{\mathrm{deg}}}(f)=\infty \).

For a finite subset \(A\subset {\mathbb {Z}}^d\) with cardinality |A|, let \(\Phi _A(\eta ):=\prod _{i\in A} (\eta (i)-\rho )\). Then, \(\Phi _A\) is a degree |A| function and \(\mu _{\Phi _A}(z) = (z-\rho )^{|A|}\). All local, mean-zero functions f, \({\mathbb {E}}_\rho [f]=0\), can be decomposed in terms of \(\{\Phi _A: A\subset {\mathbb {Z}}^d\}\): Since the occupation variables are at most 1,

$$\begin{aligned} f(\eta ) = \sum _{n\ge 1}\sum _{|A|=n} c(A)\Phi _A(\eta ), \end{aligned}$$

in terms of coefficients c(A) where all sums are finite. In particular, if f is a degree i local function then \(\mu _f(z)\) is a degree i polynomial.

Moreover, we may conclude,

$$\begin{aligned} \begin{array}{ll} \hbox {if } {{\mathrm{deg}}}(f)=1, &{} \hbox {then }\sum _{|A|=1}c(A) \ne 0\\ \hbox {if } {{\mathrm{deg}}}(f) =2, &{} \hbox {then }\sum _{|A|=2}c(A) \ne 0 \quad \text{ and } \quad \sum _{|A|=1}c(A)= 0\\ \hbox {if } {{\mathrm{deg}}}(f)\ge 3, &{} \hbox {then }\sum _{|A|=1}c(A) = \sum _{|A|=2}c(A)=0. \end{array} \end{aligned}$$
(2.1)
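
Definition 2.1 can be checked mechanically on small examples. The sketch below (ours) computes \({{\mathrm{deg}}}(f)\) from the polynomial \(\mu _f(z)\): for instance \(\mu _{\Phi _A}(z)=(z-\rho )^{|A|}\) has degree |A|, while \(f=\eta (0)\eta (1)-\rho ^2\), whose mean is \(z^2-\rho ^2\), has degree 1 since \(\mu _f'(\rho )=2\rho \ne 0\).

```python
def poly_eval(coeffs, x):
    # coeffs[k] is the coefficient of z^k in mu_f(z)
    return sum(c * x ** k for k, c in enumerate(coeffs))

def poly_diff(coeffs):
    return [k * c for k, c in enumerate(coeffs)][1:]

def degree_at(coeffs, rho):
    # Definition 2.1: the smallest i with mu_f^{(i)}(rho) != 0
    i = 0
    while coeffs and abs(poly_eval(coeffs, rho)) < 1e-12:
        coeffs, i = poly_diff(coeffs), i + 1
    return i

rho = 0.5
assert degree_at([-rho, 1.0], rho) == 1               # f = eta(0) - rho
assert degree_at([rho**2, -2*rho, 1.0], rho) == 2     # Phi_{{0,1}}: mu = (z-rho)^2
print(degree_at([-rho**2, 0.0, 1.0], rho))  # 1: f = eta(0)eta(1) - rho^2
```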

It will be helpful, before stating our main long-range results in Sects. 2.2–2.6, to state precisely some of the work on finite-range systems.

2.1 Previous work on (FR) models

Admissibility has been previously characterized for exclusion with finite range probabilities \(p^{(FR)}\) in [7, 23, 27].

Proposition 2.1

Suppose \(p^{(FR)}\) is mean-zero. Then, a local function f is admissible exactly when

$$\begin{aligned} {{\mathrm{deg}}}(f) \ \ge \ \left\{ \begin{array}{ll} 3&{} \hbox {in } d=1\\ 2&{} \hbox {in } d=2\\ 1&{} \hbox {in } d\ge 3.\end{array}\right. \end{aligned}$$

But when \(p^{(FR)}\) has a drift, \(\sum xp^{(FR)}(x)\ne 0\), then f is admissible exactly when

$$\begin{aligned} {{\mathrm{deg}}}(f) \ \ge \ \left\{ \begin{array}{ll} 1&{} \hbox { if } \rho \ne 1/2 \hbox { or } d\ge 3\\ 2&{} \hbox { if } \rho = 1/2 \hbox { and } d\le 2. \end{array}\right. \end{aligned}$$

In the exceptional cases, the following is known. We remark that when \(p^{(FR)}\) is symmetric, \(L_f(\lambda )\) and \(\approx \) below can be replaced by \(\sigma ^2_t(f)\) and \(\sim \) respectively; see [19, 23, 27] for more details and refinements.

Proposition 2.2

Suppose \(p^{(FR)}\) is mean-zero and f is local. Then, in \(d=1\),

$$\begin{aligned} L_f(\lambda )\ \approx \ \left\{ \begin{array}{l@{\quad }l} \lambda ^{-5/2} &{} \hbox {if } {{\mathrm{deg}}}(f) =1\\ \lambda ^{-2}|\log (\lambda )| &{} \hbox {if }{{\mathrm{deg}}}(f)=2\\ \lambda ^{-2}&{} \hbox {if }{{\mathrm{deg}}}(f)\ge 3.\end{array}\right. \end{aligned}$$

In \(d=2\),

$$\begin{aligned} L_f(\lambda ) \ \approx \ \left\{ \begin{array}{l@{\quad }l} \lambda ^{-2}|\log (\lambda )| &{} \hbox {if }{{\mathrm{deg}}}(f)=1\\ \lambda ^{-2}&{} \hbox {if }{{\mathrm{deg}}}(f)\ge 2. \end{array}\right. \end{aligned}$$

In \(d\ge 3\),

$$\begin{aligned} L_f(\lambda )\ \approx \ \lambda ^{-2}. \end{aligned}$$

When \(p^{(FR)}\) has a drift, \(\rho =1/2\) and \({{\mathrm{deg}}}(f)=1\), the behavior of \(\sigma ^2_t(f)\) is conjectured to be of the same orders, \(t^{4/3}\) in \(d=1\) and \(t(\log (t))^{2/3}\) in \(d=2\), as with respect to the occupation time function \(f_0(\eta ) = \eta (0)-1/2\) discussed in the introduction.

On the other hand, the bounds on \(L_\lambda = L_{f_0}(\lambda )\) given in (1.2) extend to degree 1 functions [7, 25].

Proposition 2.3

Suppose \(p^{(FR)}\) has a drift, \(\sum xp^{(FR)}(x)\ne 0\), \(\rho =1/2\), and f is local and \({{\mathrm{deg}}}(f)=1\). Then,

$$\begin{aligned} C\lambda ^{-9/4} &\le L_f(\lambda ) \ \le \ C^{-1}\lambda ^{-5/2} \quad \hbox {in } d=1\\ C\lambda ^{-2}\log |\log (\lambda )| &\le L_f(\lambda ) \ \le \ C^{-1}\lambda ^{-2}|\log (\lambda )| \quad \hbox {in } d=2. \end{aligned}$$

Also, in \(d=2\), when in addition \(\sum xp^{(FR)}(x)\) is on a coordinate axis, the lower bound can be replaced by \(\lambda ^{-2}|\log (\lambda )|^{1/2}\).

2.2 Finite/long-range and other comparisons

We now compare Tauberian variances \(L_f\), with respect to (LA) long-range processes, and \(L_f^{(FR)}\) when \(\alpha >2\), that is when p has strictly more than 2 moments. We remark the results of Theorem 2.4 hold also with respect to comparisons between \(L^{(\cdot )}_f\), for all the types of long-range jump probabilities mentioned before, and \(L^{(FR)}_f\).

Theorem 2.4

Let f be a local function. Then, for \(\alpha >2\) and \(d\ge 1\), when \(\sum yp(y) = c\sum yp^{(FR)}(y)\) for a constant \(c\ne 0\), we have

$$\begin{aligned} L_f(\lambda ) \ \approx \ L_f^{(FR)}(\lambda ). \end{aligned}$$

We remark, in \(d=1\), the ‘parallel’ condition \(\sum yp(y) = c\sum y p^{(FR)}(y)\) for a nonzero c is the same as \(\sum yp(y)=\sum yp^{(FR)}(y) = 0\) or both \(\sum yp(y)\), \(\sum yp^{(FR)}(y)\ne 0\). When \(\alpha >2\), the long-range exclusion dynamics has properties similar to those of a finite-range, parallel process. In particular, one may apply results for finite-range processes when \(\alpha >2\).

However, when \(\alpha >0\), Tauberian variances for long-range (MZA) models are comparable with their symmetric long-range counterparts.

Theorem 2.5

Let f be a local function. Then, for \(\alpha >0\) and \(d\ge 1\), with respect to long-range (MZA) processes, we have

$$\begin{aligned} L^{(MZA)}_f(\lambda ) \ \approx \ L_f^{(S)}(\lambda ). \end{aligned}$$

2.3 Symmetric jumps

We now consider the symmetric process, when \(p(\cdot )=s(\cdot )\) corresponds to the symmetrization of a (LA) long-range jump probability. Results in this section also hold for symmetrizations of (SA) jump probabilities, with similar proofs. We first characterize admissibility of local functions.

Theorem 2.6

Consider the symmetric long-range exclusion process in dimension d. We have the following characterization of admissibility.

  • \(d=1\): Every local function f such that:

    1. \({{\mathrm{deg}}}(f) \ge 3\) is admissible,

    2. \({{\mathrm{deg}}}(f) =2\) is admissible if \(\alpha <2\),

    3. \({{\mathrm{deg}}}(f) =1\) is admissible if \(\alpha <1\).

  • \(d=2\): Every local function f such that:

    1. \({{\mathrm{deg}}}(f) \ge 2\) is admissible,

    2. \({{\mathrm{deg}}}(f)=1\) is admissible if \(\alpha <2\).

  • \(d \ge 3\): Every local function with \({{\mathrm{deg}}}(f)\ge 1\) is admissible.

Remark 2.7

In terms of variance asymptotics, the following observation reduces the consideration of a general local degree 1 function f to that of the occupation time function \(\eta (0) - \rho \). Indeed, note that \(g=f -\mu _f'(\rho )(\eta (0) - \rho )\) is at least a degree 2 function. When \(d=1\) and \(\alpha <2\), we have \(\sigma ^2_t(g) = O(t)\) by Theorem 2.6. Hence, if \(\sigma ^2_t(\eta (0)-\rho )\) is superdiffusive in growth, it is the dominant term in the decomposition \(f = g +\mu _f'(\rho )(\eta (0)-\rho )\).

Similarly, noting (2.1), a degree k function f can be written as \(f = h + \frac{1}{k!}\mu _f^{(k)}(\rho )\Phi _A\) where \(|A|=k\) and h is now at least a degree \(k+1\) function. Hence, one deduces \(\sigma ^2_t(f) \sim \sigma ^2_t(\Phi _A)\) when \(\sigma ^2_t(\Phi _A)\) dominates \(\sigma ^2_t(h)\).

Next, the following results give the variance behavior for exceptional functions f in terms of dimension d. As discussed earlier, when \(\alpha >2\), the orders match those for the symmetric finite-range model (cf. Theorem 2.4).

Theorem 2.8

Let f be a local degree 1 function. It holds that

  • In \(d=1\)

    $$\begin{aligned} \sigma _t^2 (f) \ \sim \ \left\{ \begin{array}{l@{\quad }l} t, &{} \hbox {if } \alpha <1 \\ t\log (t), &{} \hbox {if } \alpha =1\\ t^{2-1/\alpha }, &{} \hbox {if } 1<\alpha <2\\ t^{3/2}(\log (t))^{-1/2},&{} \hbox {if } \alpha = 2 \\ t^{3/2},&{} \hbox {if }\alpha >2.\end{array}\right. \end{aligned}$$
  • In \(d=2\)

    $$\begin{aligned} \sigma _t^2 (f) \ \sim \ \left\{ \begin{array}{l@{\quad }l} t, &{} \hbox {if } \alpha <2 \\ t\log (\log (t)), &{} \hbox {if } \alpha =2 \\ t \log (t),&{} \hbox {if } \alpha >2.\end{array}\right. \end{aligned}$$
  • In \(d\ge 3\),

    $$\begin{aligned} \sigma _t^2 (f) \ \sim t, \quad \text {for all} \quad \alpha . \end{aligned}$$

Theorem 2.9

Let \(d=1\) and let f be a local degree 2 function. Then, as \(\lambda \downarrow 0\), we have

$$\begin{aligned} L_f(\lambda )\ \approx \ \left\{ \begin{array}{l@{\quad }l} \lambda ^{-2} |\log (\lambda )| &{} \hbox {if } \alpha >2\\ \lambda ^{-2}\log |\log (\lambda )| &{} \hbox {if } \alpha =2. \end{array}\right. \end{aligned}$$

Remark 2.10

When \({{\mathrm{deg}}}(f)=2\), we expect variance asymptotics \(\sigma _t^2 (f) \sim t \log (t)\) if \(\alpha >2\) and \(\sigma _t^2 (f) \sim t\log (\log (t))\) if \(\alpha =2\). In the nearest-neighbor case, these asymptotics are shown in [27] by computing the Green’s function of a system of two interacting exclusion particles, a computation which seems more difficult when jumps are not nearest-neighbor.

The following convergence results hold. Recall \({\mathbb {B}}_H\) denotes fractional Brownian motion with Hurst exponent H, and \({\mathbb {B}}={\mathbb {B}}_{1/2}\) is standard Brownian motion.

Theorem 2.11

  1. (i)

    If f is an admissible function then we have weak convergence in the uniform topology:

    $$\begin{aligned} \frac{1}{\sigma _{N} (f) } \Gamma _f (tN) \xrightarrow [N\rightarrow {\infty }]{}\ {\mathbb {B}}(t). \end{aligned}$$
  2. (ii)

    If f is a (non-admissible) function of degree 1, we have the following weak convergences in the uniform topology

    • In \(d=1\)

      $$\begin{aligned} \frac{1}{\sigma _{N} (f) } \Gamma _f (tN) \xrightarrow [N\rightarrow {\infty }]{}\ \left\{ \begin{array}{l@{\quad }l} {\mathbb {B}}(t), &{} \hbox {if } \alpha = 1\\ {\mathbb {B}}_{1-1/{2\alpha }}(t), &{} \hbox {if } 1<\alpha < 2\\ {\mathbb {B}}_{3/4} (t),&{} \hbox {if } \alpha \ge 2.\end{array}\right. \end{aligned}$$
    • In \(d= 2\), for all \(\alpha \ge 2\),

      $$\begin{aligned} \frac{1}{\sigma _{N} (f) } \Gamma _f (tN) \xrightarrow [N\rightarrow {\infty }]{}\ {\mathbb {B}}(t). \end{aligned}$$
  3. (iii)

    If f is a (non-admissible) function of degree 2, i.e. \(\alpha \ge 2\) and \(d=1\), then for any \(t>0\), we have the one-time CLT, convergence in law

    $$\begin{aligned} \frac{1}{\sigma _{N} (f) } \Gamma _f (tN) \xrightarrow [N\rightarrow {\infty }]{}\ \mathscr {N}(t) \end{aligned}$$

    where \(\mathscr {N}(t)\) is a centered normal variable with variance t.

The last part is weaker than the previous parts of Theorem 2.11, as the exact asymptotics of \(\sigma _{tN}(f)\) have not been found (cf. Remark 2.10).

2.3.1 Mean-zero (MZA) processes

We make a few remarks on (MZA) systems and note that all statements in Theorems 2.6 and 2.9 hold for these processes. In addition, statements in Theorem 2.8, interpreted in the Tauberian sense, that is with respect to the asymptotics of \(L_f(\lambda ) = \int _0^\infty e^{-\lambda t}\sigma ^2_t(f)dt\), also hold for (MZA) processes.

Indeed, by the bound \(\sigma ^2_t(f) \le 5t^{-1}L^{(S)}_f(t^{-1})\), which follows from the \(H_{-1}\) norm Lemmas 3.1 and 3.2 (cf. Corollary 3.3), and admissibility for the symmetric process in Theorem 2.6, the same admissibility statements follow for (MZA) systems. Also, the Tauberian variance statements for the symmetric process transfer to (MZA) processes by Theorem 2.5.

Finally, we remark that the statement in Part (i) of Theorem 2.11 also holds for (MZA) systems, by the method in [30] for finite-range mean-zero systems, since \(a^{(MZA)}\) is the anti-symmetric part of a finite-range mean-zero jump probability. Otherwise, the fluctuations have not been considered.

2.4 Asymmetric jumps

We now consider (LA) asymmetric processes with long-range probability p, which require more delicate considerations than in the symmetric situation.

We remark, however, that all results of this subsection also hold for long-range (SA) models with short-range asymmetries, with similar proofs.

Theorem 2.12

Consider the asymmetric long-range exclusion process in dimension d. We have the following characterization of admissibility.

  • \(d=1\): Every local function f such that:

    1. \({{\mathrm{deg}}}(f) \ge 3\) is admissible,

    2. \({{\mathrm{deg}}}(f) = 2\) is admissible if \(\alpha \ne 2\),

    3. \({{\mathrm{deg}}}(f)=1\) is admissible if \(\rho \ne 1/2\) and \(\alpha \ne 1, 2\) or if \(\rho =1/2\) and \(\alpha <1\).

  • \(d=2\): Every local function f such that:

    1. \({{\mathrm{deg}}}(f) \ge 2\) is admissible,

    2. \({{\mathrm{deg}}}(f)=1\) is admissible if and only if \(\rho \ne 1/2\), for all \(\alpha \), or \(\rho =1/2\) and \(\alpha <2\).

  • \(d \ge 3\): Every local function such that \({{\mathrm{deg}}}(f)\ge 1\) is admissible.

Remark 2.13

Cases left open by our methods are the boundary cases when \(d=1\), \(\alpha =1,2\), \(\rho \ne {1/2}\) and \({{\mathrm{deg}}}(f)=1\), or when \(d=1\), \(\alpha =2\) and \({{\mathrm{deg}}}(f)=2\), for which we conjecture that such functions are admissible. Moreover, we show later in Theorems 2.14 and 2.15 that functions not satisfying either the assumptions of Theorem 2.12 or the two cases above are not admissible.

When all mean-zero local functions are admissible, that is when \(\alpha <1\) in \(d=1\), \(\alpha <2\) in \(d=2\), or \(d\ge 3\), the CLT display in Part (i) of Theorem 2.11 holds by the same argument as for Corollary 2.1 in [22]. Otherwise, the fluctuation limits for \(\Gamma _f\) have not been characterized.

The next results give upper and lower bounds on \(L_f(\lambda )\) in exceptional non-admissible situations. Formal estimates on \(\sigma ^2_t(f)\) can be recovered by the formal Tauberian relation \(\sigma ^2_t(f) \sim t^{-1}L_f(t^{-1})\).
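To illustrate the formal Tauberian relation, one can check numerically (a sketch with a hypothetical exponent, not code from the paper) that a pure power variance \(\sigma ^2_t(f) = t^a\) has Laplace transform \(\Gamma (a+1)\lambda ^{-(a+1)}\), so that \(t^{-1}L_f(t^{-1})\) recovers \(\sigma ^2_t(f)\) up to the constant \(\Gamma (a+1)\):

```python
import math
from math import gamma

# Hedged numerical sketch (not from the paper): if sigma^2_t(f) = t^a, then
# L_f(lambda) = int_0^inf e^{-lambda t} t^a dt = Gamma(a+1) * lambda^{-(a+1)},
# so t^{-1} L_f(t^{-1}) = Gamma(a+1) * t^a recovers sigma^2_t(f) up to a
# constant, as in the formal Tauberian relation.

def laplace_of_power(a, lam, n=200000):
    """Trapezoid-rule approximation of int_0^inf e^{-lam t} t^a dt."""
    T = 50.0 / lam                 # the integrand is negligible beyond ~50/lam
    h = T / n
    total = 0.0
    for i in range(1, n + 1):      # the t = 0 endpoint contributes 0 for a > 0
        w = 0.5 if i == n else 1.0
        total += w * math.exp(-lam * i * h) * (i * h) ** a
    return total * h

a, t = 4.0 / 3.0, 10.0             # hypothetical KPZ-type exponent, sample time
print(laplace_of_power(a, 1.0 / t) / t)   # numerically close to Gamma(a+1) * t^a
print(gamma(a + 1.0) * t ** a)
```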

Theorem 2.14

Consider the asymmetric long-range exclusion process in dimension \(d=1\) with \(\alpha \ge 1\) and \(\rho =1/2\). Let f be a local function of degree one.

  • When \(\alpha =1\), as \(\lambda \downarrow 0\),

    $$\begin{aligned} L_f(\lambda ) \ \sim \ \lambda ^{-2}|\log (\lambda )|. \end{aligned}$$
  • When \(1 < \alpha \le 3/2\), as \(\lambda \downarrow 0\),

    $$\begin{aligned} L_f(\lambda ) \ \sim \ \lambda ^{1/\alpha -3}. \end{aligned}$$
  • When \(3/2 \le \alpha <2\), there exists a constant C such that for all small \(\lambda \),

    $$\begin{aligned} C^{-1} \lambda ^{-1/(2\alpha ) -2} \ \le \ L_f(\lambda ) \ \le \ C \lambda ^{1/\alpha -3 } \end{aligned}$$
  • When \(\alpha =2\), there exists a constant C such that for all small \(\lambda \)

    $$\begin{aligned} C^{-1} \lambda ^{-9/4} {|\log (\lambda ) |^{1/4}} \ \le \ L_f(\lambda ) \ \le \ C \frac{\lambda ^{-5/2} }{\sqrt{|\log (\lambda )| }}. \end{aligned}$$
  • When \(\alpha >2\), let \(L^{(FR)}_f(\lambda )\) correspond to \(p^{(FR)}\) with a drift, \(\sum xp^{(FR)}(x) \ne 0\). Then, by Theorem 2.4, \(L_f(\lambda ) \approx L^{(FR)}_f(\lambda )\), and the bounds in Proposition 2.3 hold.

Theorem 2.15

Consider the asymmetric long-range exclusion process in dimension \(d=2\) with \(\alpha \ge 2\) and \(\rho =1/2\). Let f be a local function of degree one.

  • When \(\alpha =2\), as \(\lambda \downarrow 0\),

    $$\begin{aligned} L_f(\lambda ) \ \approx \ \lambda ^{-2}\log (|\log (\lambda )|). \end{aligned}$$
  • When \(\alpha >2\), let \(L^{(FR)}_f(\lambda )\) correspond to \(p^{(FR)}\) with a drift, \(\sum xp^{(FR)}(x) \ne 0\). Then, by Theorem 2.4, \(L_f(\lambda ) \approx L^{(FR)}_f(\lambda )\), and the bounds in Proposition 2.3 hold.

Remark 2.16

We note that all upper bounds in Theorems 2.14 and 2.15 hold in the Abelian sense: That is, \(\sigma ^2_t(f) \le 5 t^{-1}L^{(S)}_f(t^{-1})\) by the \(H_{-1}\) norm result Corollary 3.3, combined with the variance bounds for the symmetric long-range process in Theorem 2.8.

2.5 A conjecture and partial monotonicity argument

As remarked in the Introduction, with respect to finite-range asymmetric exclusion processes, when \(\rho = 1/2\) and \(\sum yp^{(FR)}(y)\ne 0\), it is believed that the occupation time variance satisfies \(\sigma ^2_t(\eta (0)-1/2) \approx t^{4/3}\) in \(d=1\) and \(\approx t(\log (t))^{2/3}\) in \(d=2\). Given Theorem 2.4, these are the same orders conjectured for the variance, in the Tauberian sense, for the long-range asymmetric exclusion process when \(\alpha >2\) in \(d=1,2\).

Now, as \(\alpha \) increases, the jump probability p becomes less heavy-tailed. Correspondingly, because of the exclusion dynamics, particles which are bunched together disperse more slowly and traffic jams are more likely to persist. In particular, it is known that the occupation time at the origin has positively associated increments in time [23]. One feels consequently that the origin occupation time becomes more volatile as \(\alpha \) grows, that is, that \(\alpha \mapsto {\mathbb {E}}_{\rho }[(\int _0^t f_0(\eta _s) ds)^2]= \sigma ^2_t(f_0)\) and \(\alpha \mapsto L_{f_0}(\lambda )\), in terms of their orders, are increasing functions of \(\alpha \), where \(f_0(\eta ) = \eta (0)-\rho \).

Recall, also, when \(\alpha = 3/2\) and \(\rho =1/2\), the order of the variance \(\sigma ^2_t(f_0)\), in both the symmetric and asymmetric cases, in the Tauberian sense, is \(O(t^{4/3})\), the same order believed under asymmetric finite-range dynamics. These comments form the basis of the following conjecture.

Conjecture 2.17

For \(\rho =1/2\), with respect to long-range asymmetric exclusion dynamics such that \(m = \sum yp(y)\ne 0\), the Tauberian variance satisfies

$$\begin{aligned} L_{f_0}(\lambda ) = \int _0^\infty e^{-\lambda t}\sigma ^2_t(f_0)dt \ \approx \ \left\{ \begin{array}{ll}\lambda ^{-7/3} &{} \hbox { in }d=1 \,\mathrm{and}\,\alpha \ge 3/2\\ \lambda ^{-2}|\log (\lambda )|^{2/3} &{} \hbox { in }d=2\,\mathrm{and}\,\alpha >2.\end{array}\right. \end{aligned}$$

Correspondingly, when \(\rho =1/2\), this type of approximation would formally imply \(\sigma ^2_t(f_0) \approx t^{4/3}\) in \(d=1\) for \(\alpha \ge 3/2\), and \(\sigma ^2_t(f_0)\approx t(\log (t))^{2/3}\) in \(d=2\) for \(\alpha >2\).

In support of the conjecture, consider \(d\ge 1\) long-range models, with short-range mean-zero asymmetries, where the jump rate \(p^\alpha \) is of the form \(p^\alpha = s_\alpha + a\). Here, a is a finite-range anti-symmetric jump rate with mean zero, \(\sum ya(y)=0\), and \(s_\alpha (y) = c_\alpha \mathbf{1}_{y\ne 0} |y|^{-(d+\alpha )}\), where \(c_\alpha \) is the normalization. For a local function f, let \(L^\alpha _f\) be the corresponding Tauberian variance.

Theorem 2.18

For \(0<\alpha <\beta \), \(\rho \in [0,1]\) and \(\lambda >0\), there is a constant \(C = C(d,\alpha ,\beta , a)\) such that \(L^\alpha _f(\lambda ) \le CL^\beta _f(\lambda )\).

Remark 2.19

We conjecture that the same monotonicity statement holds for \(d=1\) (SA) long-range processes with nonzero drift and \(f = f_0\), when \(\rho =1/2\), where the jump rate \(p^{(SA), \alpha } = s^\alpha + a\) and \(\sum x a(x)\ne 0\). Suppose indeed such a monotonicity statement holds. Then, (1) \(L^{(SA), \alpha }_{f_0}(\lambda ) \ge C_1 L^{(SA), 3/2}_{f_0}(\lambda )\ge C_2\lambda ^{-7/3}\) by Theorem 2.14, when \(\alpha \ge 3/2\) and \(\rho =1/2\), and (2) \(L^{(SA),\alpha }_{f_0}(\lambda ) \le C_3 L^{(SA), 2+\varepsilon }_{f_0}(\lambda ) \le C_4 L^{(FR)}_{f_0}(\lambda )\), by Theorem 2.4, when \(\alpha \le 2\) and \(\varepsilon >0\). Recall also that (3) \(L^{(SA),\alpha }_{f_0}(\lambda ) \approx L^{(FR)}_{f_0}(\lambda )\) when \(\alpha >2\) by Theorem 2.4. Then, by (1), (2) and (3), to show Conjecture 2.17 for (SA) processes with drift, it would be enough to prove for \(\rho =1/2\) that \(L^{(FR)}_{f_0}(\lambda ) \le C \lambda ^{-7/3}\), an estimate which is expected.

2.6 Role of \(\alpha = 3/2\)

Given Conjecture 2.17, it seems the long-range parameter value \(\alpha = 3/2\) is a change-point for the occupation time dynamics of \(d=1\) asymmetric exclusion with jump probability p when \(\rho = 1/2\). On the one hand, for \(\alpha \le 3/2\), the occupation time variance behaves as that under the symmetric dynamics (cf. Theorems 2.8, 2.14). On the other hand, for \(\alpha \ge 3/2\), it would seem the variance acts as that under an asymmetric finite-range (FR) model.

That the occupation time variance orders are computed exactly, namely 1 for \(0<\alpha \le 1\) and \(2-1/\alpha \) for \(1\le \alpha \le 3/2\) in \(d=1\) (cf. Theorem 2.14), in particular a power of 4/3 for \(\alpha = 3/2\), is one of the few exact calculations of the fluctuations of asymmetric particle systems across process characteristics. Technically, the symmetric part of the generator \({{\mathscr {L}}}\) “dominates” the anti-symmetric part exactly when \(0<\alpha <3/2\). At \(\alpha =3/2\), they are of the same order, and exact computations can be made.

To try to understand a more physical basis for the phenomenon, one might consider the hydrodynamic space-time scaling limit for the empirical particle density in \(d=1\). In finite-range asymmetric processes, the empirical measure \((1/N)\sum _{x\in {\mathbb {Z}}}\eta _{Nt}(x) \delta _{x/N}\) is known to converge to the entropic solution of

$$\begin{aligned} \partial _t \rho + m\nabla \left( \rho (1-\rho )\right) = 0; \quad \rho (0,x) = \rho _0(x), \end{aligned}$$
(2.2)

when the initial configurations have ‘profile’ \(\rho _0\) (cf. [12] for statements and details). When the process begins in the invariant measure \(\nu _\rho \), fluctuations of the empirical measure should be governed by an equation taking input from a Taylor expansion of (2.2) around the constant density \(\rho \) (cf. [29]).

The first and second derivatives of the flux \(F(\rho ) = m\rho (1-\rho )\) are \(m(1-2\rho )\) and \(-2m\). When \(\rho \ne 1/2\), the first derivative is dominant, meaning there is an underlying drift of the ‘bulk’ of particles. In this case, particles do not return to the origin often. Accordingly, one expects, as is known, that the finite-range occupation time fluctuations are diffusive.

However, when \(\rho =1/2\), the drift vanishes and the second order derivative is dominant. This is the core of a physical ‘reason’ why the finite-range occupation time fluctuations are expressed in terms of KPZ exponents.
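As a small numerical illustration (not from the paper; the values of m and the step h are arbitrary choices), central finite differences recover these flux derivatives and show the drift coefficient vanishing exactly at \(\rho =1/2\):

```python
# Flux F(rho) = m * rho * (1 - rho) from the hydrodynamic equation (2.2);
# central differences recover F'(rho) = m(1 - 2 rho) and F''(rho) = -2m.
m, h = 3.0, 1e-4

def F(rho):
    return m * rho * (1.0 - rho)

def d1(rho):  # central first difference
    return (F(rho + h) - F(rho - h)) / (2.0 * h)

def d2(rho):  # central second difference
    return (F(rho + h) - 2.0 * F(rho) + F(rho - h)) / h**2

print(d1(0.5))  # drift coefficient m(1 - 2 rho) vanishes at rho = 1/2
print(d1(0.3))  # nonzero drift m(1 - 2 rho) = 1.2 away from rho = 1/2
print(d2(0.4))  # second derivative -2m = -6.0 at any rho
```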

In the long-range asymmetric setting, when \(\alpha >1\), the mean \(m\) is finite. A formal calculation in \(d=1\), no matter the value of \(\alpha >1\), gives again that \((1/N)\sum _{x\in {\mathbb {Z}}} \eta _{Nt}(x)\delta _{x/N}\) converges to the solution of (2.2). Then, if \(\rho \ne 1/2\), one should expect, as is proven here, that the occupation time fluctuations are diffusive. However, when \(\rho =1/2\), although one can understand that the occupation time fluctuations should be different, without a more refined scaling analysis, the role of \(\alpha =3/2\) is not revealed by the above hydrodynamic heuristics.

At this point, when \(\rho \ne 1/2\) and \(d=1\), we conjecture the same scaling behavior, as in Theorem 2.14 and Sect. 2.5, for the occupation time of the vertex in the moving frame with process characteristic velocity \(m(1-2\rho )\), that is for \(\int _0^t (\eta _s(\lfloor m(1-2\rho ) s\rfloor )-\rho )ds\), when one observes occupation in the frame of the motion of the ‘bulk’ particles. The \({H}_{-1}\) methods of the article should give (non-optimal) variance upper bounds, although lower bounds seem more difficult to obtain.

One can also ask about the fluctuations of the occupation time at the origin, when starting in ‘flat’ initial conditions, where say particles and holes are placed deterministically in a repeating regular pattern. One suspects that the behavior should be the same as when starting from \(\nu _\rho \) where \(\rho \) is the asymptotic initial density of particles, although this is open in the context of our techniques which use the invariance of \(\nu _\rho \).

Finally, it would also be of interest to explore further the proposed ‘extension’ of the KPZ class to other long-range models when \(3/2\le \alpha \le 2\). One feels that it is perhaps a generic feature of a large class of mass-conservative particle systems.

3 Tools

The goal of this section is to develop, in the context of general (LA) long-range processes, \(H_{-1}\) norm estimates, generalized ‘duality’ decompositions, ‘free particle’ approximations, and other technical bounds useful in the sequel. We refer the reader to [7, 14, 23] for more discussion of the material in the finite-range context.

3.1 Resolvent norms

Denote the symmetric and antisymmetric parts of \(\mathscr {L}\) by \(\mathscr {S}\) and \(\mathscr {A}\), respectively:

$$\begin{aligned} \mathscr {S}:=\frac{\mathscr {L}+\mathscr {L}^*}{2} \quad \hbox {and}\quad \mathscr {A}:=\frac{\mathscr {L}-\mathscr {L}^*}{2}. \end{aligned}$$

A straightforward calculation shows that \({{\mathscr {S}}}\) itself generates the symmetric exclusion process with jump probability s: On local functions,

$$\begin{aligned} {{\mathscr {S}}}f(\eta ) = \sum _{x,y\in {\mathbb {Z}}^d} s(y) \left[ f(\eta ^{x,x+y}) - f(\eta )\right] . \end{aligned}$$

The corresponding Dirichlet form \(\langle f, -\mathscr {L}f\rangle _\rho \), acting on local functions, after a calculation, is given by

$$\begin{aligned} \langle f, -\mathscr {L}f \rangle _{\rho }= \langle f,{-}\mathscr {S}f\rangle _{\rho } = \frac{1}{2}\sum _{x,y\in {\mathbb {Z}}^d} s(y){\mathbb {E}}_\rho \left[ \left( f(\eta ^{x,x+y}) {-} f(\eta )\right) ^2\right] \ \ge \ 0. \end{aligned}$$
(3.1)

In particular, \(-{{\mathscr {S}}}\) is a nonnegative operator.

We now define the following resolvent norms. Fix \(\lambda >0\) and consider \((\lambda -\mathscr {S})^{-1}:{\mathbb {L}}^2(\nu _\rho )\rightarrow {{\mathbb {L}}^2(\nu _{\rho })}\) where, in terms of the semigroup \(T^{(S)}_t\) for the symmetric process generated by \({{\mathscr {S}}}\),

$$\begin{aligned} (\lambda -\mathscr {S})^{-1}f(\zeta ):=\int _{0}^{\infty }e^{-\lambda t}T^{(S)}_tf(\zeta )dt. \end{aligned}$$

Denote by \({H}_{1,\lambda }\) the closure of local functions f such that \(\Vert f\Vert _{1,\lambda }^2:=\langle f,(\lambda -\mathscr {S})f\rangle _{\rho }<\infty \). Let \({H}_{-1,\lambda }\) be its topological dual with respect to \({\mathbb {L}}^2(\nu _{\rho })\) and let \(\Vert \cdot \Vert _{-1,\lambda }\) be its norm. One has

$$\begin{aligned} \Vert f\Vert _{-1,\lambda }= & {} \sup \left\{ \langle f, \phi \rangle _\rho /\Vert \phi \Vert _{1,\lambda }: \ \phi \ \mathrm{local}\right\} , \quad \hbox {and}\\ \Vert f\Vert _{-1,\lambda }^2= & {} \langle f, (\lambda - {{\mathscr {S}}})^{-1}f\rangle _\rho \ = \ \int _0^\infty e^{-\lambda t}\langle f, T^{(S)}_t f\rangle _\rho \, dt. \end{aligned}$$

Analogously, let \({H}_1\) be the closure over local f such that \(\Vert f\Vert _1^2:=\langle f,-\mathscr {S}f\rangle _{\rho }<\infty \). Denote \({H}_{-1}\) as its topological dual with respect to \({\mathbb {L}}^2({\nu }_{\rho })\) and \(\Vert \cdot \Vert _{-1}\) its norm, namely \(\Vert f\Vert _{-1} = \sup \{\langle f, \phi \rangle _\rho /\Vert \phi \Vert _1: \ \phi \ \mathrm{local}\}\).

By the formulas, we have \(\Vert f\Vert _{1,\lambda } \ge \Vert f\Vert _1\) and so \(\Vert f\Vert _{-1,\lambda }\le \Vert f\Vert _{-1}\). Moreover, as \(T^{(S)}_t\) is reversible with respect to \(\nu _\rho \), \(\langle f, T^{(S)}_t f\rangle _\rho = \langle T^{(S)}_{t/2} f, T^{(S)}_{t/2}f\rangle _\rho \ge 0\). Hence, the limit \(\lim _{\lambda \downarrow 0} \Vert f\Vert _{-1,\lambda } = \Vert f\Vert _{-1}\) exists, which may be infinite.

The resolvent \((\lambda - {{\mathscr {L}}})^{-1}:{\mathbb {L}}^2(\nu _\rho ) \rightarrow {\mathbb {L}}^2(\nu _\rho )\), given by

$$\begin{aligned} (\lambda - {{\mathscr {L}}})^{-1} f(\zeta ) = \int _0^\infty e^{-\lambda t}T_t f(\zeta ) dt, \end{aligned}$$

with respect to the (asymmetric) generator \({{\mathscr {L}}}\) and semigroup \(T_t\), will be important in many arguments. Observe that by a simple integration by parts and stationarity of the process, we may relate the Tauberian variance \(L_f(\lambda )\) to the quadratic form with respect to \((\lambda - {{\mathscr {L}}})^{-1}\):

$$\begin{aligned} L_f (\lambda )= & {} \int _0^\infty e^{-\lambda t}\sigma ^2_t(f)dt \nonumber \\= & {} 2\int _0^\infty e^{-\lambda t}\int _0^t\int _0^s \langle f, T_{s-u}f\rangle _\rho \, du\, ds\, dt\nonumber \\= & {} \frac{2}{\lambda ^2} \; \langle f, (\lambda -{{\mathscr {L}}})^{-1} f \rangle _{\rho }. \end{aligned}$$
(3.2)
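The identity (3.2) can be sanity-checked in a toy scalar setting (illustrative only, not part of the paper's argument): replace \({{\mathscr {L}}}\) by multiplication by \(-\mu \) on \({\mathbb {R}}\) and take \(f=1\), so that \(\langle f, T_{s-u}f\rangle = e^{-\mu (s-u)}\) and the right side of (3.2) becomes \((2/\lambda ^2)(\lambda +\mu )^{-1}\):

```python
import math

# Toy scalar check of (3.2) (illustrative only): take the "generator" to be
# multiplication by -mu on the reals and f = 1, so <f, T_{s-u} f> = e^{-mu(s-u)}.
# Then sigma^2_t = 2 int_0^t int_0^s e^{-mu(s-u)} du ds
#               = (2/mu) * (t - (1 - e^{-mu t})/mu),
# and (3.2) predicts L(lambda) = (2/lambda^2) * (lambda + mu)^{-1}.

def variance(t, mu):
    return (2.0 / mu) * (t - (1.0 - math.exp(-mu * t)) / mu)

def laplace_variance(lam, mu, n=400000):
    T = 60.0 / lam                  # truncate where e^{-lam t} is negligible
    h = T / n
    total = 0.0
    for i in range(1, n + 1):       # variance(0) = 0, so the t = 0 term drops
        w = 0.5 if i == n else 1.0
        total += w * math.exp(-lam * i * h) * variance(i * h, mu)
    return total * h

lam, mu = 0.5, 2.0
print(laplace_variance(lam, mu))        # numerically close to the prediction
print((2.0 / lam ** 2) / (lam + mu))    # (2/lambda^2)(lambda + mu)^{-1} = 3.2
```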

As discussed in [23],

$$\begin{aligned} \left[ \frac{1}{2}\left( (\lambda - {{\mathscr {L}}})^{-1} + (\lambda - {{\mathscr {L}}}^*)^{-1}\right) \right] ^{-1} = (\lambda - {{\mathscr {L}}}^*)(\lambda - {{\mathscr {S}}})^{-1}(\lambda - {{\mathscr {L}}}) =: Q, \end{aligned}$$

the point being that one can symmetrize in the inner product \(\langle f, (\lambda - {{\mathscr {L}}})^{-1}f\rangle _\rho \) and interpret it as the dual form with respect to the operator Q. Since \(\langle f, Qf\rangle _\rho = \langle (\lambda - {{\mathscr {L}}})f, (\lambda - {{\mathscr {S}}})^{-1}(\lambda - {{\mathscr {L}}})f\rangle _\rho \ge 0\) for all local f, we see that Q and \(Q^{-1}\) are nonnegative symmetric operators which admit square roots. Hence, we may apply Schwarz’s inequality to obtain

$$\begin{aligned} L_{f+g}(\lambda ) \ \le \ 2L_f(\lambda ) + 2L_g(\lambda ). \end{aligned}$$
(3.3)

We now recall a basic estimate, proved in [23].

Lemma 3.1

For \(t>0\) and \(f\in {{\mathbb {L}}^2(\nu _{\rho })}\) such that \({\mathbb {E}}_\rho [f]=0\), we have

$$\begin{aligned} {\mathbb {E}}_{\rho }\left[ \left( \Gamma _f(t)\right) ^2\right]\le & {} {10\, t \, \langle f , (1/t -{{\mathscr {L}}})^{-1} f \rangle _{\rho }} = 5t^{-1}L_f(t^{-1}). \end{aligned}$$

In [23], the following sup variational form for the quadratic form is proved. The inf variational form is an equivalent relation.

Lemma 3.2

Let \(f:\Omega \rightarrow {\mathbb {R}}\) be a local function and let \(\lambda >0\). Then,

$$\begin{aligned} \langle f,(\lambda -\mathscr {L})^{-1}f\rangle _{\rho }= & {} \sup _{g}\left\{ 2\langle f,g\rangle _{\rho }-\Vert g\Vert _{1,\lambda }^2-\Vert \mathscr {A}g\Vert _{-1,\lambda }^2 \right\} \\= & {} \inf _{g}\left\{ \Vert f+\mathscr {A}g\Vert _{-1,\lambda }^2+\Vert g\Vert _{1,\lambda }^2\right\} , \end{aligned}$$

where the supremum and the infimum are taken over local functions g. In particular, by taking \(g\equiv 0\), we have

$$\begin{aligned} \langle f,(\lambda -\mathscr {L})^{-1}f\rangle _{\rho } \ \le \ \langle f,(\lambda -\mathscr {S})^{-1}f\rangle _{\rho }. \end{aligned}$$

We remark that, although these variational formulas are quite difficult to compute, by restricting the supremum or the infimum to the class of degree one functions, that is, linear combinations of the functions \(\{\eta (x) -\rho : x \in {\mathbb {Z}}^d\}\), we can sometimes extract interesting lower and upper bounds.

Putting things together, we obtain the following estimate which bounds the variance, with respect to the process generated by \({{\mathscr {L}}}\), in terms of the symmetric part \({{\mathscr {S}}}\).

Corollary 3.3

For \(t>0\) and \(f\in {{\mathbb {L}}^2(\nu _{\rho })}\) such that \({\mathbb {E}}_\rho [f]=0\), we have

$$\begin{aligned} {\mathbb {E}}_{\rho }\left[ \left( \Gamma _f(t)\right) ^2\right] \ \le \ {10\, t \, \Vert f \Vert ^2_{-1,t^{-1}}} = 5t^{-1}L_f^{(S)}(t^{-1}). \end{aligned}$$
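For the reader’s convenience, the corollary can be traced as a chain (a sketch combining Lemma 3.1 with the \(g\equiv 0\) bound of Lemma 3.2):

$$\begin{aligned} {\mathbb {E}}_{\rho }\left[ \left( \Gamma _f(t)\right) ^2\right] \ \le \ 10\, t \, \langle f , (t^{-1} -{{\mathscr {L}}})^{-1} f \rangle _{\rho } \ \le \ 10\, t \, \langle f , (t^{-1} -{{\mathscr {S}}})^{-1} f \rangle _{\rho } \ = \ 10\, t \, \Vert f\Vert ^2_{-1,t^{-1}}, \end{aligned}$$

while, by (3.2) applied to the symmetric process, \(10\, t\, \Vert f\Vert ^2_{-1,t^{-1}} = 5\,t^{-1}L^{(S)}_f(t^{-1})\).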

3.2 Duality

We now detail certain ‘duality’ decompositions which often help simplify calculations. For finite subsets \(A \subset {\mathbb {Z}}^d\), let \(\Psi _A\) be the function

$$\begin{aligned} \Psi _A = \prod _{x \in A} \frac{\eta (x) -\rho }{\sqrt{\chi (\rho )}}, \end{aligned}$$

where \(\chi (\rho )=\rho (1-\rho )\). The collection \(\{ \Psi _A \; : \; A \subset {\mathbb {Z}}^d\}\) forms an orthonormal basis of \({\mathbb {L}}^2 (\nu _{\rho })\).
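As a quick illustration (a standalone sketch, not from the paper; the box size and the value of \(\rho \) are arbitrary choices), one can verify the orthonormality \({\mathbb {E}}_\rho [\Psi _A\Psi _B] = \mathbf{1}_{A=B}\) by exact enumeration on a small box:

```python
import itertools, math

# Exact-enumeration check (illustrative; the box {0,1,2,3} and rho = 0.3 are
# arbitrary choices) that E_rho[Psi_A Psi_B] = 1_{A = B} for
# Psi_A = prod_{x in A} (eta(x) - rho)/sqrt(chi(rho)), chi(rho) = rho(1 - rho).

rho = 0.3
chi = rho * (1.0 - rho)
n_sites = 4

def psi(A, eta):
    out = 1.0
    for x in A:
        out *= (eta[x] - rho) / math.sqrt(chi)
    return out

def inner(A, B):
    total = 0.0
    for eta in itertools.product([0, 1], repeat=n_sites):
        w = 1.0
        for v in eta:                      # product Bernoulli(rho) weight
            w *= rho if v == 1 else 1.0 - rho
        total += w * psi(A, eta) * psi(B, eta)
    return total

print(inner((0, 2), (0, 2)))   # close to 1
print(inner((0, 2), (1,)))     # close to 0
```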

Let \({{\mathscr {E}}}_n = \{A\subset {\mathbb {Z}}^d: |A|=n\}\) be the class of subsets of \({\mathbb {Z}}^d\) with \(n\ge 1\) points. Let also \({{\mathscr {H}}}_n\) be the set of functions \({F}: {{\mathscr {E}}}_n \rightarrow {\mathbb {R}}\) such that \(\sum _{|A|=n} F^2 (A) <\infty \); when \(n=0\), \({{\mathscr {H}}}_0\) denotes the space of constants. Denote also, for \(n\ge 1\), \({M}_n\) as the space of ‘n-point’ functions f of the form \(f=\sum _{|A|=n} {{\mathfrak {f}}} (A) \Psi _A\) with \({{\mathfrak {f}}} \in {{\mathscr {H}}}_n\); for \(n=0\), as before, \({M}_0\) denotes the space of constants. We have thus the orthogonal decomposition

$$\begin{aligned} {\mathbb {L}}^2 (\nu _{\rho }) = \oplus _{n\ge 0} {M}_n. \end{aligned}$$

Functions \({{\mathfrak {f}}}\) in \({{\mathscr {H}}}_n\) can be identified with a symmetric function \({{\mathfrak {f}}}: \chi _n \backslash D_n \rightarrow {\mathbb {R}}\) where \(\chi _n =({\mathbb {Z}}^d)^n\) and \(D_n=\{ (x_1, \ldots ,x_n) \in ({{\mathbb {Z}}^d})^n \, ; \, \exists i \ne j \; \text {such that} \; x_i =x_j\}\) via \({{\mathfrak {f}}} (x_1, \ldots ,x_n) := {{\mathfrak {f}}} (\{ x_1, \ldots ,x_n\})\). In the sequel, we will use this identification implicitly.

We now decompose the generator \({{\mathscr {L}}}\) on the basis \(\{ \Psi _A \; : \; A \subset {\mathbb {Z}}^d\}\). Given a subset A of \({\mathbb {Z}}^d\) and \(x,y \in {\mathbb {Z}}^d\), denote by \(A_{x,y}\) the set \(A_{x,y} =A \backslash \{ x \} \cup \{ y \}\) if \(x \in A\) and \(y \notin A\), by \(A_{x,y} =A \backslash \{ y \} \cup \{x \}\) if \( x \notin A\) and \(y \in A\), and by \(A_{x,y} =A\) otherwise. Let also \(\mathscr {E}:=\bigcup _{n\ge 0}\mathscr {E}_n\). Then,

$$\begin{aligned} {{\mathscr {L}}} f= & {} \sum _{A \in {{\mathscr {E}}}} ({{\mathfrak {L}}} {{\mathfrak {f}}})(A) \Psi _A, \\ {{\mathscr {S}}} f= & {} \sum _{A \in {{\mathscr {E}}}} ({{\mathfrak {S}}} {{\mathfrak {f}}})(A) \Psi _A,\\ {{\mathscr {A}}} f= & {} \sum _{A \in {{\mathscr {E}}}} ({{\mathfrak {A}}} {{\mathfrak {f}}})(A) \Psi _A, \end{aligned}$$

where

$$\begin{aligned} {{\mathfrak {L}}}= {{\mathfrak {S}}} +{{\mathfrak {A}}}\quad \hbox {and}\quad {{\mathfrak {S}}}={{\mathfrak {L}}}^1, \ \ {{\mathfrak {A}}} = (1-2\rho ) {{\mathfrak {L}}}^2 + 2 \sqrt{\chi (\rho )} ({{\mathfrak {L}}}^+ -{{\mathfrak {L}}}^-), \end{aligned}$$

and

$$\begin{aligned} ({{\mathfrak {L}}}^1 {{\mathfrak {f}}}) (A)= & {} (1/2) \sum _{x,y \in {\mathbb {Z}}^d} s(y-x) \left[ {{\mathfrak {f}}} (A_{x,y} ) -{{\mathfrak {f}}} (A)\right] ,\\ ({{\mathfrak {L}}}^2 {{\mathfrak {f}}}) (A)= & {} \sum _{x \in A, y \notin A} a(y-x) \left[ {{\mathfrak {f}}} (A_{x,y} ) -{{\mathfrak {f}}} (A)\right] ,\\ ({{\mathfrak {L}}}^{-} {{\mathfrak {f}}}) (A)= & {} \sum _{x \notin A, y \notin A} a(y-x) {{\mathfrak {f}}} (A \cup \{ x \} ),\\ ({{\mathfrak {L}}}^{+} {{\mathfrak {f}}}) (A)= & {} \sum _{x \in A, y \in A} a(y-x) {{\mathfrak {f}}} (A \backslash \{ y \} ). \end{aligned}$$

The operator \({{\mathfrak {S}}}\), which generates the dual symmetric exclusion process, takes \({{\mathscr {H}}}_n\) to \({{\mathscr {H}}}_n\) for \(n\ge 0\). Its restriction to \({{\mathscr {H}}}_n\) is the generator of the set of n particles interacting by the exclusion rule with the jump probability s. This property represents the classical self-duality of the symmetric exclusion process [16].

Since the spaces \(\{M_n: n\ge 0\}\) are orthogonal and \({{\mathscr {S}}}\) leaves each \(M_n\) invariant, for \(f\in M_n\) and \(g\in M_m\) with \(n\ne m\), we have \(\Vert f + g\Vert ^2_{1,\lambda } = \Vert f\Vert ^2_{1,\lambda } + \Vert g\Vert ^2_{1,\lambda }\). Similarly, from the sup-variational formula in Lemma 3.2, we have

$$\begin{aligned} \Vert f+g\Vert ^2_{-1,\lambda } = \Vert f\Vert ^2_{-1,\lambda } + \Vert g\Vert ^2_{-1,\lambda }. \end{aligned}$$
(3.4)

Although self-duality is not valid in the asymmetric setting, the decomposition of the generator gives an extension of the duality relations. Note that the operators \({{\mathfrak {L}}}^1\) and \({{\mathfrak {L}}}^2\) preserve the degree of functions, but that \({{\mathfrak {L}}}^+\) and \({{\mathfrak {L}}}^-\) respectively increase and decrease the degree by 1. The operator \({{\mathfrak {A}}}\) has a decomposition of the form

$$\begin{aligned} {{\mathfrak {A}}} = \sum _{n \ge 1}\left( {{\mathfrak {A}}}_{n-1\, n} +{{\mathfrak {A}}}_{n \, n} +{{\mathfrak {A}}}_{n \, n+1}\right) , \end{aligned}$$

where \({{\mathfrak {A}}}_{n \, m}\) is the projection onto \({{\mathscr {H}}}_m\) of the restriction of \({{\mathfrak {A}}}\) to \({{\mathscr {H}}}_n\).

Later on, we will primarily consider functions of degree 1 and degree 2. We note the following action of the operators \({{\mathfrak {A}}}_{11}=(1-2\rho ) {{\mathfrak {B}}}_{11}\) and \({{\mathfrak {A}}}_{12} =2 {\sqrt{\chi (\rho )}} {{\mathfrak {B}}}_{12}\):

$$\begin{aligned} ({{\mathfrak {B}}}_{11} {{\mathfrak {f}}}) (x)= & {} \sum _{y\in {\mathbb {Z}}^d} a (y-x) \left[ {{\mathfrak {f}}} (y) -{{\mathfrak {f}}} (x)\right] ,\\ ({{\mathfrak {B}}}_{12} {{\mathfrak {f}}} ) (\{ x,y\})= & {} a (y-x) \left[ {{\mathfrak {f}}} (x) - {{\mathfrak {f}}} (y) \right] . \end{aligned}$$
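These one- and two-point actions are simple enough to code directly; the following sketch (with a hypothetical nearest-neighbor antisymmetric rate a, not one fixed by the paper) illustrates them in \(d=1\):

```python
# Illustrative d = 1 implementation (the nearest-neighbor antisymmetric rate
# a(1) = 1/2, a(-1) = -1/2 is a hypothetical choice, not fixed by the paper).

a = {1: 0.5, -1: -0.5}                 # antisymmetric: a(-y) = -a(y)

def B11(f, x):
    # (B11 f)(x) = sum_y a(y - x) [f(y) - f(x)], summing over y = x + z
    return sum(az * (f(x + z) - f(x)) for z, az in a.items())

def B12(f, x, y):
    # (B12 f)({x, y}) = a(y - x) [f(x) - f(y)]
    return a.get(y - x, 0.0) * (f(x) - f(y))

sq = lambda x: x * x
print(B11(sq, 2))       # 0.5*(9 - 4) - 0.5*(1 - 4) = 4.0
print(B12(sq, 2, 3))    # 0.5*(4 - 9) = -2.5
```

For this choice of a, \(({{\mathfrak {B}}}_{11} {{\mathfrak {f}}})(x) = \frac{1}{2}\left( {{\mathfrak {f}}}(x+1)-{{\mathfrak {f}}}(x-1)\right) \), a discrete antisymmetric derivative.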

3.3 Approximation by free particles

We now discuss ‘free particle’ approximations through which n-particle exclusion interactions can be estimated in terms of n ‘free’ or independent particles. For a local function \(f= \sum _{|A|=n} {{\mathfrak {f}}}(A) \Psi _A \in {M}_n\), the \(H_{1,\lambda }\) norm can be written in terms of the dual function \({{\mathfrak {f}}}\in {{\mathscr {H}}}_n\):

$$\begin{aligned} \Vert f\Vert _{1,\lambda }^2 = \lambda \sum _{|A|=n} {{\mathfrak {f}}}^2 (A) + \sum _{u,v \in {\mathbb {Z}}^d} \sum _{|A|=n} s(v-u) \left[ {{\mathfrak {f}}}(A_{u,v}) -{{\mathfrak {f}}} (A) \right] ^2. \end{aligned}$$
(3.5)

Similarly, the \(H_{-1,\lambda }\) norm of f can be written in terms of \({{\mathfrak {f}}}\).

Because of the exclusion interaction, it is not easy, even for simple functions, to compute these norms. The idea then is to compare them to corresponding norms without the exclusion, that is, for a system composed of free particles. Observe that there exists a positive constant \(K_0\) such that

$$\begin{aligned} K_0^{-1} s_0 (\cdot ) \ \le \ s (\cdot ) \ \le \ K_0 s_0 (\cdot ) \end{aligned}$$
(3.6)

where \(s_0\) is the symmetric probability, defined for \(y\in {\mathbb {Z}}^d\) by

$$\begin{aligned} s_0 (y) = \frac{c_0}{|y|^{d+\alpha }}, \end{aligned}$$

where \(c_0\) is a normalization constant.
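As a quick numerical illustration (not part of the paper), the normalization constant \(c_0\) can be approximated by truncating the lattice sum defining \(s_0\); the function name and truncation radius below are our own choices.

```python
# Numerical sketch (illustrative only): approximate the constant c_0
# that makes s_0(y) = c_0/|y|^(d+alpha) a probability on Z^d \ {0},
# by truncating the lattice sum to the box [-R, R]^d.
import itertools

def c0_approx(d, alpha, R=200):
    total = 0.0
    for y in itertools.product(range(-R, R + 1), repeat=d):
        if any(y):  # skip the origin
            total += sum(c * c for c in y) ** (-(d + alpha) / 2)
    return 1.0 / total

# In d = 1, the sum is 2*zeta(1+alpha); e.g. for alpha = 2,
# c_0 = 1/(2*zeta(3)) ~ 0.416.
print(c0_approx(1, 2.0))
```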

The \({{\mathbb {H}}}_{1, \mathrm {free}, \lambda }\)-norm of the symmetric function \(F:\chi _n\rightarrow {\mathbb {R}}\) is defined by

$$\begin{aligned} \Vert F\Vert _{1, \mathrm{{free}},\lambda }^2 = \lambda \frac{1}{n!} \sum _{\mathbf{x}} F^2 (\mathbf{x}) + \frac{1}{n!} \sum _{j=1}^n \sum _{z\in {\mathbb {Z}}^d } \sum _{\mathbf{x}} s_0 (z) \left[ F({\mathbf{x}} +z{\mathbf{e}}_j) -F({\mathbf{x}}) \right] ^2 \end{aligned}$$

where \(\mathbf{x}+ z \mathbf{e}_j=(x_1, \ldots , x_{j-1}, x_j +z, x_{j+1}, \ldots , x_n)\). If \(n=1\), the formula reduces to

$$\begin{aligned} \Vert F\Vert _{1,\mathrm{{free}},\lambda }^2 = {\lambda } \sum _{x \in {\mathbb {Z}}^d} F^2 (x) + \sum _{z,x \in {\mathbb {Z}}^d } s_0 (z-x) \left[ F(z) -F(x) \right] ^2. \end{aligned}$$

When \(n=2\), it is given by

$$\begin{aligned} \Vert F\Vert _{1,\mathrm{{free}},\lambda }^2 = \frac{\lambda }{2} \sum _{x,y \in {\mathbb {Z}}^d} F^2 (x,y) + \sum _{z,x,y \in {\mathbb {Z}}^d } s_0 (z-x) \left[ F(z,y) -F(x,y) \right] ^2. \end{aligned}$$

The \({{\mathbb {H}}}_{-1, \mathrm {free},\lambda }\)-norm of the symmetric function \(G: \chi _n \rightarrow {\mathbb {R}}\) is defined by

$$\begin{aligned} \Vert G\Vert _{-1, \mathrm{{free}},\lambda }^2 = \sup _{F: \chi _n \rightarrow {\mathbb {R}}} \left\{ \frac{2}{n!} \sum _{\mathbf{x}} F(\mathbf{x}) G(\mathbf{x}) - \Vert F \Vert _{1, \mathrm{{free},\lambda } }^2 \right\} . \end{aligned}$$

To \({{\mathfrak {f}}}\in {{\mathscr {H}}}_n\), we associate a symmetric function \({\tilde{{\mathfrak {f}}}}: \chi _n \rightarrow {\mathbb {R}}\) which coincides with \({{\mathfrak {f}}}\) outside \(D_n\) and is defined for \((x_1, \ldots ,x_n) \in D_n\) by

$$\begin{aligned} {\tilde{{\mathfrak {f}}}} (x_1, \ldots ,x_n)= {\mathbf E} \left[ {{\mathfrak {f}}} (X_1(T), \ldots , X_n (T))\right] \end{aligned}$$

where \(\mathbf{E}\) is the expectation with respect to the law of n independent simple symmetric random walks \((X_1 (t), \ldots ,X_n(t))_{t \ge 0}\) on \({\mathbb {Z}}^d\) starting from \((x_1, \ldots , x_n)\) and T is the hitting time of \(\chi _n \backslash D_n\). For example, if \({{\mathfrak {f}}} \in {{\mathscr {H}}}_2\) then

$$\begin{aligned} {\tilde{{\mathfrak {f}}}} (x,y)= \left\{ \begin{array}{ll} {{{\mathfrak {f}}}} (\{x,y\}) &{} {\text { if } } x \ne y,\\ (2d)^{-1} \sum _{i=1}^d \left( {{\mathfrak {f}}}(\{x + e_i, x\}) +{{\mathfrak {f}}}(\{x-e_i,x\}) \right) &{} {\text { if }} x = y. \end{array}\right. \end{aligned}$$
(3.7)
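As a sanity check (not from the paper), formula (3.7) in \(d=1\) can be confirmed by direct simulation: two independent walks started together leave the diagonal at their first jump, landing on a uniformly chosen neighboring pair. The test function and sample size below are arbitrary choices of ours.

```python
# Monte Carlo sketch (illustrative only): check formula (3.7) in d = 1.
import random

def f(pair):                      # an arbitrary symmetric test function
    x, y = sorted(pair)
    return x * x + 3 * y

def tilde_f_mc(x, n_samples=200_000, seed=0):
    """Estimate the hitting-time extension at the diagonal point (x, x)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        pos = [x, x]
        while pos[0] == pos[1]:            # run until off the diagonal
            pos[rng.randrange(2)] += rng.choice((-1, 1))
        acc += f(tuple(pos))
    return acc / n_samples

exact = 0.5 * (f((0, 1)) + f((-1, 0)))    # formula (3.7) at x = 0, d = 1
print(tilde_f_mc(0), exact)
```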

To the symmetric function \(F:\chi _n \rightarrow {\mathbb {R}}\), we also associate the function \({{\mathfrak {W}}}_n {F} : \chi _n\rightarrow {\mathbb {R}}\) which coincides with F outside \(D_n\) and is equal to 0 on \(D_n\).

Lemma 3.4

Let \(n \ge 1\). There exists a constant \(C_{n,d}\) independent of \(\lambda \) such that for \(f \in M_n\) and its dual function \({{\mathfrak {f}}}\in {{\mathscr {H}}}_n\) we have

$$\begin{aligned} C_{n,d}^{-1} \Vert {\tilde{{\mathfrak {f}}}} \Vert _{1, \mathrm{{free}},\lambda }^2 \ \le \ \Vert f\Vert _{1,\lambda }^2 \ \le \ C_{n,d} \Vert {\tilde{{\mathfrak {f}}}} \Vert _{1,\mathrm{{free}},\lambda }^2. \end{aligned}$$

It follows that

$$\begin{aligned} \Vert f\Vert _{-1,\lambda }^2 \ \le \ C_{n,d} \Vert { {{\mathfrak {W}}}_n { \tilde{{\mathfrak {f}}}} } \Vert _{-1,\mathrm{{free}, \lambda }}^2. \end{aligned}$$

Proof

We only give the proof of the first claim for \(n=2\) to reduce notation; the argument for general \(n\ge 1\) is similar. The second claim is a consequence of the first one: Inputting \(\langle f, \phi \rangle _\rho = (1/2)\sum _{x,y\in {\mathbb {Z}}^d}({{\mathfrak {W}}}_2{\tilde{{\mathfrak {f}}}})(x,y) \tilde{\phi }(x,y)\) and \(\Vert \phi \Vert ^2_{1,\lambda } \ge C^{-1}_{2,d}\Vert \tilde{\phi }\Vert ^2_{1, \mathrm{free}, \lambda }\) into the variational formula for \(\Vert f\Vert ^2_{-1,\lambda }\) given in Lemma 3.2, and noting the definition of \(\Vert \cdot \Vert ^2_{-1, \mathrm{free}, \lambda }\) above, the second claim follows. See the proof of Theorem 3.2 in [7] for more details. In the following, let C be a positive constant independent of \(\lambda \) whose value may change from line to line.

The first term in (3.5), noting (3.7), can be bounded by Schwarz’s inequality:

$$\begin{aligned} C^{-1} \sum _{x,y\in {\mathbb {Z}}^d} {\tilde{{\mathfrak {f}}}}^2 (x,y) \ \le \ \sum _{x\ne y} {{\mathfrak {f}}}^2 (\{x,y\}) \ \le \ \sum _{x,y\in {\mathbb {Z}}^d} {\tilde{{\mathfrak {f}}}}^2 (x,y). \end{aligned}$$

With respect to the second term in (3.5), noting (3.6) and that \({{\mathfrak {f}}}\) and \(\tilde{{\mathfrak {f}}}\) coincide off the diagonal, we have trivially

$$\begin{aligned}&\sum _{z,x,y \in {\mathbb {Z}}^d } s(z-x) \left[ {{{\mathfrak {f}}}} (\{z,y\}) -{{{\mathfrak {f}}}}(\{x,y\}) \right] ^2 \mathbf{1}_{z \ne y, z \ne x, x \ne y} \\&\qquad \le C \sum _{z,x,y \in {\mathbb {Z}}^d } s_0 (z-x) \left[ {\tilde{{\mathfrak {f}}}} (z,y) -{\tilde{{\mathfrak {f}}}}(x,y) \right] ^2. \end{aligned}$$

On the other hand, to show

$$\begin{aligned}&\sum _{z,x,y\in {\mathbb {Z}}^d} s_0(z-x)\left[ {\tilde{{\mathfrak {f}}}} (z,y) -{\tilde{{\mathfrak {f}}}} (x,y)\right] ^2 \\&\qquad \le \ C \sum _{z,x,y\in {\mathbb {Z}}^d}s(z-x)\left[ {\tilde{{\mathfrak {f}}}} (z,y) -{\tilde{{\mathfrak {f}}}} (x,y)\right] ^2 \mathbf{1}_{z\ne y, z\ne x, x\ne y} \end{aligned}$$

it is enough to verify

$$\begin{aligned}&\sum _{x\ne y} s_{0} (y-x) \left[ {\tilde{{\mathfrak {f}}}} (y,y) -{\tilde{{\mathfrak {f}}}} (x,y)\right] ^2 \\&\qquad \le C\sum _{z,x,y \in {\mathbb {Z}}^d } s(z-x) \left[ {{{\mathfrak {f}}}} (\{z,y\}) -{{{\mathfrak {f}}}}(\{x,y\}) \right] ^2\mathbf{1}_{z\ne y, z\ne x, x\ne y}. \end{aligned}$$

To this end, by Schwarz’s inequality, we have

$$\begin{aligned}&\sum _{x\ne y} s_{0} (y-x) \left[ {\tilde{{\mathfrak {f}}}} (y,y) -{\tilde{{\mathfrak {f}}}} (x,y)\right] ^2\\&\quad \le \ C \sum _{i=1}^d \sum _{x \ne y} s_0 (y{-}x) \left\{ \left[ {{\mathfrak {f}}}(\{y + e_i, y\}) {-}{{\mathfrak {f}}}(\{x,y\}) \right] ^2 + \left[ {{\mathfrak {f}}}(\{y {-} e_i, y\}) {-}{{\mathfrak {f}}}(\{x,y\}) \right] ^2\right\} . \end{aligned}$$

Since \(\sup _{i =1, \ldots ,d} \sup _{z \ne 0, \pm e_i} s_0 (z)/s_0 (z \pm e_i) \le C\) and \(\sum _{i=1}^d 1 =d\), the right-hand side above is bounded by

$$\begin{aligned} C \sum _{x,y,z\in {\mathbb {Z}}^d} s_0 (z-x) \left[ {{\mathfrak {f}}}(\{z, y\}) -{{\mathfrak {f}}} (\{x,y\}) \right] ^2 \mathbf{1}_{x \ne y, x \ne z, z\ne y}, \end{aligned}$$

as desired. \(\square \)

3.4 Fourier estimates

Let \({\mathbb {T}}^d =[0,1)^d\) be the d-dimensional torus. Denote the Fourier transform of the function \(\psi \in {\mathbb {L}}^2 (\chi _n)\) by \({\widehat{\psi }}\): For \((s_1, \ldots ,s_n) \in ({\mathbb {T}}^d)^n\),

$$\begin{aligned} {\widehat{\psi }} (s_1, \ldots ,s_n) = \frac{1}{\sqrt{n!}} \sum _{(x_1, \ldots , x_n) \in \chi _n} e^{2 \pi i(x_1 \cdot s_1 + \cdots + x_n \cdot s_n)} \psi (x_1, \ldots ,x_n). \end{aligned}$$

As the ‘free’ dynamics consists of independent random walks moving with jump probability \(s_0\), the \({{\mathbb {H}}}_{1,\mathrm{free}, \lambda }\)-norm of \(\psi \) is

$$\begin{aligned} \Vert \psi \Vert ^2_{1, \mathrm{free}, \lambda } = \frac{1}{(2\pi )^{nd}}\int _{({\mathbb {T}}^d)^n} \left( \lambda + \sum _{i=1}^n\theta _d(s_i;s_0(\cdot )) \right) |\hat{\psi }(s_1,\ldots , s_n)|^2 ds_1\ldots ds_n. \end{aligned}$$

Also, the \({{\mathbb {H}}}_{-1, \mathrm{{free}},\lambda }\)-norm of \(\psi \) is written as

$$\begin{aligned} \Vert \psi \Vert _{-1, \mathrm{{free}},\lambda }^2 = \frac{1}{(2\pi )^{nd}}\int _{({{\mathbb {T}}^d})^n} \frac{|{\hat{\psi }} (s_1, \ldots ,s_n)|^2}{\lambda + \sum _{i=1}^n\theta _d (s_i;s_0(\cdot )) }\, {d {s}_1 \ldots d{s}_n}. \end{aligned}$$
(3.8)

Here, for \(u \in {\mathbb {T}}^d\) and a symmetric transition function \(r:{\mathbb {Z}}^d \rightarrow [0,1]\),

$$\begin{aligned} \theta _d (u;r(\cdot )) = 2 \sum _{z \in {\mathbb {Z}}^d} r(z) \sin ^2 (\pi u\cdot z). \end{aligned}$$
(3.9)
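As an illustrative numerical sketch (not part of the paper), the symbol \(\theta _d\) of (3.9) can be evaluated by truncating the lattice sum; here we take \(r\) proportional to \(s_0\), since the constant \(c_0\) only rescales \(\theta _d\), and the truncation radius is our own choice.

```python
# Numerical sketch (illustrative only): evaluate theta_d(u; r) of (3.9)
# for r proportional to s_0, truncating the lattice sum to [-R, R]^d,
# and observe that theta_d vanishes exactly on the corner set C_d.
import itertools, math

def theta(u, d, alpha, R=60):
    total = 0.0
    for z in itertools.product(range(-R, R + 1), repeat=d):
        if any(z):
            r0 = sum(c * c for c in z) ** (-(d + alpha) / 2)
            phase = math.pi * sum(ui * zi for ui, zi in zip(u, z))
            total += 2.0 * r0 * math.sin(phase) ** 2
    return total

d, alpha = 2, 1.5
print(theta((0.0, 0.0), d, alpha))   # corner of C_d: zero
print(theta((1.0, 1.0), d, alpha))   # corner of C_d: u.z is an integer
print(theta((0.5, 0.5), d, alpha))   # interior point: strictly positive
```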

When ‘free’ particle \(H_{\pm 1}\) norms are used in the sequel, \(r = s_0(\cdot )\). However, in the proof of the functional CLT in Theorem 2.11, \(r = s(\cdot )\), the symmetric part of p given by

$$\begin{aligned} s(z) = \frac{c \gamma (z)}{{|z|}^{d+\alpha }}, \quad \mathrm{and }\quad \gamma (z)=\sum _{j=1}^d \frac{b_j^+ + b_j^-}{2}\mathbf{1}_{z \cdot e_j \ne 0}. \end{aligned}$$

Note that \(s_0\) is a special case of the more general formulation of s.

We now state an estimate used throughout the proofs. Let \({{\mathscr {C}}}_d\) be the set of extremal points of \([0,1]^d\),

$$\begin{aligned} {{\mathscr {C}}}_d=\{ \sigma _1 e_1 + \cdots + \sigma _d e_d \, ; \, \sigma _i \in \{0,1\} \}. \end{aligned}$$
(3.10)

We note that \(\theta _d(u;s(\cdot ))\) is smooth, even, positive on \({\mathbb {T}}^{d} \backslash {{\mathscr {C}}}_d\) and vanishes exactly on \({{\mathscr {C}}}_d\).

Lemma 3.5

Let \(\gamma _0= \frac{1}{2} \sum _{j=1}^d (b_j^+ + b_j^-)\). The function \(\theta _d = \theta _d(\cdot ;s(\cdot ))\) is bounded above by a positive constant. For \(u\in {\mathbb {T}}^d\) and \(w \in {{\mathscr {C}}}_d\), \(\theta _d(u-w) = \theta _d(u)\) and, as \(u-w\rightarrow 0\),

$$\begin{aligned} \theta _d(u-w)\ = \ J(d,\alpha ) F_{\alpha } (u-w) + o(F_{\alpha } (u-w) ) \end{aligned}$$

where

$$\begin{aligned} F_{\alpha } (x)= {\left\{ \begin{array}{ll} |x|^{\alpha } \quad &{}\text {if}\, \alpha <2\\ |x|^2\log (|x|) \quad &{}\text {if}\, \alpha =2\\ |x|^{2} \quad &{}\text {if}\, \alpha >2 \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} J(d,\alpha )= {\left\{ \begin{array}{ll} c_0\gamma _0\int _{ q \in {\mathbb {R}}^d } \frac{\sin ^2 \left( \pi q_1 \right) }{|q|^{d+\alpha }}dq \quad &{}\text { if } \alpha <2\\ -\frac{ c_0\gamma _0\pi ^2}{d} \quad &{}\text {if}\, \alpha =2\\ \frac{c_0\gamma _0\pi ^2}{d (\alpha -2)} \quad &{}\text {if}\, \alpha >2. \end{array}\right. } \end{aligned}$$

Proof

By periodicity of \(\theta _d\), we can restrict the proof to the case \(w=0\). Since \(s_0\) is a radial function, we can write \(\theta _d (u)\) as

$$\begin{aligned} \theta _d (u) = c_0| u|^{\alpha } \left[ |u|^{d} \sum _{z \ne 0} \frac{\gamma (|u|z) }{| \,|u| z\, |^{d+ \alpha }}\, \sin ^2 \left( \pi \frac{u}{|u|} \cdot |u| z \right) \right] . \end{aligned}$$

This is equivalent in order, as u vanishes, to

$$\begin{aligned} c_0\gamma _0|u|^{\alpha }\int _{|q| \ge |u| } \frac{1}{|q|^{d+\alpha }} \, \sin ^2 \left( \pi \frac{u}{|u|} \cdot q \right) \, dq= c_0 \gamma _0|u|^{\alpha } \int _{|q| \ge |u| } \frac{1}{|q|^{d+\alpha }} \, \sin ^2 \left( \pi q_1 \right) \, dq. \end{aligned}$$

Here, the equality follows from the invariance of the Lebesgue measure under the orthogonal group.

If \(\alpha <2\), the last integral is convergent. However, for \(\alpha \ge 2\), the integral diverges as u vanishes:

$$\begin{aligned} \int _{|q| \ge |u| } \frac{1}{|q|^{d+\alpha }} \, \sin ^2 ( \pi q_1) \, dq \ \sim \ \left\{ \begin{array}{ll} \frac{\pi ^2}{d (\alpha -2)} |u|^{2 -\alpha } &{} \ \ \mathrm{if}\,\alpha >2\\ -\frac{\pi ^2}{d} \log (|u|) &{} \ \ \mathrm{if}\,\alpha =2. \end{array}\right. \end{aligned}$$

\(\square \)
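As a numerical check (not part of the paper) of the blow-up rate for \(\alpha > 2\), one can compare the integral at \(u\) and \(u/2\) in \(d=1\): since it grows like \(|u|^{2-\alpha }\), the ratio should approach \(2^{\alpha -2}\) as \(u\rightarrow 0\). The grid parameters below are our own choices.

```python
# Numerical sketch (illustrative only): in d = 1 and for alpha > 2, the
# integral int_{|q| >= u} sin^2(pi q)/|q|^(1+alpha) dq grows like
# u^(2-alpha) as u -> 0; the ratio I(u/2)/I(u) approaches 2^(alpha-2).
import math

def integral(u, alpha, q_max=50.0, n=100_000):
    # substitute q = u*exp(t) and apply the midpoint rule in t, so the
    # grid is log-spaced and resolves the singular region near q = u
    t_max = math.log(q_max / u)
    h = t_max / n
    total = 0.0
    for i in range(n):
        q = u * math.exp((i + 0.5) * h)
        total += math.sin(math.pi * q) ** 2 * q ** (-alpha)  # integrand * q
    return 2.0 * h * total   # factor 2: two-sided integral over |q| >= u

alpha, u = 2.5, 1e-4
print(integral(u / 2, alpha) / integral(u, alpha), 2 ** (alpha - 2))
```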

3.5 One point function lower bounds

The following lower bound will be useful in the proof of Theorems 2.14 and 2.15, and may be skipped on first reading. We estimate the variational formulas of the resolvent norms given in Lemma 3.2 with respect to the occupation function \(\Psi _{\{0\}}\).

Recall the decomposition of the probability \(p = s + a\) and the notation in Sect. 3.4. Let \(\theta _d = \theta _d(\cdot ;s_0(\cdot ))\) and

$$\begin{aligned} F^d_{\lambda , \rho } (u):= & {} [ \lambda + \theta _d (u) ] + (1-2\rho )^2 \frac{|{\hat{a}} (u)|^2 }{\lambda + \theta _d (u)} \\&+ \chi (\rho ) \sum _{V \in {{\mathscr {C}}}_d} \int _{ s\in D_V (u)} \frac{|{\hat{a}} (s) +{\hat{a}} (u-s)|^2 }{\lambda + \theta _d (s) + \theta _d (u-s) } \, ds, \end{aligned}$$

where

$$\begin{aligned} D_V(u) := \left\{ s \in [0,1)^d, \, (u-s+V) \in [0,1)^d \right\} , \end{aligned}$$
(3.11)

and

$$\begin{aligned} I_d (\lambda ,\rho ) := \int _{{\mathbb {T}}^d} \frac{1}{F^d_{\lambda , \rho } (u)}\; du. \end{aligned}$$
(3.12)

Proposition 3.6

There exists a constant C, not depending on \(\lambda \), such that

$$\begin{aligned} \left\langle (\lambda -{{\mathscr {L}}} )^{-1} \Psi _{\{0\}}, \Psi _{\{0\}} \right\rangle _\rho \ \ge \ C I_d(\lambda ,\rho ). \end{aligned}$$

Proof

The first step is to use the sup-variational formula in Lemma 3.2 to express

$$\begin{aligned} \left\langle (\lambda -{{\mathscr {L}}} )^{-1} \Psi _{\{0\}}, \Psi _{\{0\}} \right\rangle _\rho = \sup _{g} \left\{ 2 \langle \Psi _{\{0\}}, g \rangle - \Vert g\Vert _{1, \lambda }^2 - \Vert {{\mathscr {A}}} g \Vert _{-1, \lambda }^2\right\} . \end{aligned}$$

The second step is to restrict the supremum to functions \(g= \sum _{x \in {\mathbb {Z}}^d} {{\mathfrak {g}}} (x) \Psi _{\{x\}}\) in \({M}_{1}\) to obtain a lower bound. By the orthogonality relation (3.4) and Lemma 3.4, we have

$$\begin{aligned} \Vert g \Vert _{1,\lambda }^2\le & {} C \Vert {{\mathfrak {g}}} \Vert _{1, \mathrm{free}, \lambda }^2 = C \left[ \lambda \sum _{x} {{\mathfrak {g}}}^2 (x) + \sum _{x,y} s_{0} (y-x) \left[ {{\mathfrak {g}}} (y) -{{\mathfrak {g}}} (x) \right] ^2 \right] \nonumber \\ \Vert {{\mathscr {A}}} g \Vert _{-1,\lambda }^2= & {} \left\| \sum _{|A|=1}({{\mathfrak {A}}}_{1,1}{{\mathfrak {g}}})(A)\Psi _A\right\| _{-1,\lambda }^2 + \left\| \sum _{|A|=2}({{\mathfrak {A}}}_{1,2}{{\mathfrak {g}}})(A)\Psi _A\right\| _{-1,\lambda }^2\nonumber \\\le & {} C \left[ \Vert {{\mathfrak {W}}}_1 {{\mathfrak {A}}}_{1,1} {{\mathfrak {g}}} \Vert _{-1, \mathrm{{free}}, \lambda }^2 + \Vert {{\mathfrak {W}}}_2 {{\mathfrak {A}}}_{1,2} {{\mathfrak {g}}} \Vert _{-1, \mathrm{{free}}, \lambda }^2 \right] . \end{aligned}$$
(3.13)

Recall that the operators \({{\mathfrak {T}}}_{1,1}:={{\mathfrak {W}}}_1 {{\mathfrak {A}}}_{1,1}\) and \({{\mathfrak {T}}}_{1,2}:={{\mathfrak {W}}}_2 {{\mathfrak {A}}}_{1,2}\) produce functions defined on \({\mathbb {Z}}^d\) and \(({\mathbb {Z}}^d)^2\) respectively, and are given by

$$\begin{aligned} ({{\mathfrak {T}}}_{1,1} {{\mathfrak {g}}}) (x)= & {} (1-2\rho ) \sum _{y\in {\mathbb {Z}}^d} a(y -x) \left[ {{\mathfrak {g}}} (y) -{{\mathfrak {g}}} (x) \right] , \\ ({{\mathfrak {T}}}_{1,2} {{\mathfrak {g}}} )(x,y)= & {} \sqrt{\chi (\rho )} a(y -x) \left[ {{\mathfrak {g}}} (x) -{{\mathfrak {g}}} (y) \right] . \end{aligned}$$

It follows that

$$\begin{aligned}&C\left\langle (\lambda -{{\mathscr {L}}} )^{-1} \Psi _{\{0\}}, \Psi _{\{0\}}\right\rangle _\rho \nonumber \\&\quad \ge \ \sup _{ {{\mathfrak {g}}}} \left\{ 2 {{\mathfrak {g}}} (0) - \lambda \sum _{x\in {\mathbb {Z}}^d} {{\mathfrak {g}}}^2 (x) - \sum _{x,y\in {\mathbb {Z}}^d} s_{0} (y-x) \left[ {{\mathfrak {g}}} (y) -{{\mathfrak {g}}} (x) \right] ^2\nonumber \right. \\&\qquad \left. - \Vert {{\mathfrak {T}}}_{1,1} {{\mathfrak {g}}}\Vert _{-1, \mathrm{{free}}, \lambda }^2 - \Vert {{\mathfrak {T}}}_{1,2} {{\mathfrak {g}}}\Vert _{-1, \mathrm{{free}}, \lambda }^2 \right\} \end{aligned}$$
(3.14)

We now express the terms in this formula via the Fourier transform of \({{\mathfrak {g}}}\). The term \(\Vert {\mathfrak {g}}\Vert _{1, \mathrm{free}, \lambda }^2\) is written in terms of the Fourier transform \(\widehat{{\mathfrak {g}}}\) in the display above (3.8). Also, \({\mathfrak {g}}(0) = \int _{{\mathbb {T}}^d}\widehat{{\mathfrak {g}}}(s)\,ds\).

In addition, as a is anti-symmetric,

$$\begin{aligned} \widehat{ {{\mathfrak {T}}}_{1,1} {{\mathfrak {g}}} }\, (s)= & {} - (1-2\rho ) \, \widehat{a}(s) \, \widehat{{\mathfrak {g}}} (s), \\ \widehat{ {{\mathfrak {T}}}_{1,2} {{\mathfrak {g}}} } \, (s,t)= & {} -\sqrt{\chi (\rho )} \left[ \widehat{a}(s) + \widehat{a}(t) \right] \, \widehat{{\mathfrak {g}}} (s+t). \end{aligned}$$

Recall \({{\mathscr {C}}}_d = \{ \sigma _1 e_1 + \cdots +\sigma _d e_d \, ; \, \sigma _i \in \{0,1\} \}\subset {\mathbb {Z}}^d\). Observe that the set \([0,2)^d\) is equal to the disjoint union of the sets \([0,1)^d + V\) over \(V \in {{\mathscr {C}}}_d\). Then, by periodicity of \(\widehat{{\mathfrak {g}}}\), \(\theta _d\) and \({\hat{a}}\), we have

$$\begin{aligned} \Vert {{\mathfrak {T}}}_{1,2} {{\mathfrak {g}}}\Vert _{-1, \mathrm{{free}}, \lambda }^2= \chi (\rho ) \int _{[0,1)^d} |\widehat{{\mathfrak {g}}} (u)|^2 \left[ \sum _{V \in {{\mathscr {C}}}_d} \int _{ s\in D_V (u)} \frac{|{\hat{a}} (s) +{\hat{a}} (u-s)|^2 }{\lambda + \theta _d (s) + \theta _d (u-s) } \, ds \right] \, du. \end{aligned}$$

The term \(\Vert {{\mathfrak {T}}}_{1,1} {{\mathfrak {g}}}\Vert _{-1, \mathrm{{free}}, \lambda }^2\) is given in terms of the Fourier transform \(\widehat{ {{\mathfrak {T}}}_{1,1} {{\mathfrak {g}}} }\) in (3.8).

Because \({\mathfrak {g}}\) is a real function, \(\hat{{{\mathfrak {g}}}}(u) = \sum _x e^{2\pi i u\cdot x} {\mathfrak {g}}(x)\) has an even real part and an odd imaginary part. To obtain a lower bound on (3.14), we maximize, over the set of square integrable complex functions \(\varphi : {\mathbb {T}}^d \rightarrow {\mathbb {C}}\) with even real part and odd imaginary part, the expression

$$\begin{aligned} \int _{{\mathbb {T}}^d} \left\{ 2 {\varphi } (u) - F^d_{\lambda , \rho } (u) \, | \varphi (u)|^2\right\} \, du. \end{aligned}$$
(3.15)

Note that \(\int _{{\mathbb {T}}^d} \mathrm{Im}\,\varphi (u)\,du = 0\), and also, for \(A>0\), that \(\sup _{x,y \in {\mathbb {R}}} [2x-A (x^2 + y^2)] = 1/ A\) is realized at \(x= 1/ A\) and \(y=0\). Hence, the supremum in (3.15) is attained at \(\varphi = 1/ F^d_{\lambda , \rho }\), and its value is \(I_d(\lambda ,\rho )\).\(\square \)
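The pointwise maximization used in this last step can be confirmed numerically (an illustrative check, not part of the paper): for \(A>0\), \(\sup _{x,y}[2x - A(x^2+y^2)] = 1/A\), attained at \(x=1/A\), \(y=0\). The grid below is an arbitrary choice.

```python
# Numerical sketch (illustrative only): grid search for the maximum of
# 2x - A(x^2 + y^2); the exact answer is 1/A at (x, y) = (1/A, 0).
A = 3.0
steps = 801
best, best_xy = float("-inf"), None
for i in range(steps):
    for j in range(steps):
        x = -2.0 + 4.0 * i / (steps - 1)
        y = -2.0 + 4.0 * j / (steps - 1)
        val = 2 * x - A * (x * x + y * y)
        if val > best:
            best, best_xy = val, (x, y)
print(best, best_xy)   # ~ 1/3, attained near (1/3, 0)
```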

4 Comparison results: proofs of Theorems 2.4, 2.5 and 2.18

We first prove two preliminary results for (LA) long-range models, before proving the main theorems at the end of the section. Denote by \(\Vert \cdot \Vert _{\pm 1, (FR)}\) and \(\Vert \cdot \Vert _{\pm 1, (FR-NN)}\) the \({H}_{\pm 1}\)-norms defined in terms of \({{\mathscr {S}}}^{(FR)}\) and \({{\mathscr {S}}}^{(FR-NN)}\) respectively.

Lemma 4.1

For \(\alpha >2\), and \(d\ge 1\), there exist constants \(C=C(p,d),D=D(p,d)>0\) such that on local functions \(\varphi \),

$$\begin{aligned} \begin{aligned} C^{-1} \; \Vert \varphi \Vert _{1,(FR)}^2&\le \Vert \varphi \Vert _1^2 \le C \Vert \varphi \Vert _{1,(FR)}^2\\ D^{-1} \; \Vert \varphi \Vert _{-1,(FR)}^2&\le \Vert \varphi \Vert _{-1}^2 \le D\Vert \varphi \Vert _{-1,(FR)}^2. \end{aligned} \end{aligned}$$
(4.1)

Remark 4.2

As the proof of Lemma 4.1 will show, the inequalities \(C^{-1}\Vert \varphi \Vert _{1, (FR)}^2 \le \Vert \varphi \Vert ^2_1\) and \(\Vert \varphi \Vert ^2_{-1} \le D\Vert \varphi \Vert ^2_{-1, (FR)}\) hold for all \(\alpha >0\). Only the proofs of the other inequalities in the display make use of the assumption \(\alpha >2\).

Proof

The second line of (4.1) follows from the first and the definition of the \(H_{-1}\) norms.

To prove the first line of (4.1), we now give a reduction: As \(s^{(FR)}\) is irreducible, Lemma 3.7 in [24] states that \(\Vert \cdot \Vert _{\pm 1, (FR)}\) and \(\Vert \cdot \Vert _{\pm 1, (FR-NN)}\) are equivalent. Hence, we need only to show (4.1) with respect to \(p^{(FR-NN)}\).

Recall the Dirichlet form \(\Vert \varphi \Vert _{1}^2=\sum _{x,y\in \mathbb {Z}^d}s(y)D_{x,x+y}(\varphi )\). Similarly, \( \Vert \varphi \Vert _{1,(FR-NN)}^2= \sum _{x\in \mathbb {Z}^d}\sum _{i=1}^d s^{(FR-NN)}(e_i)D_{x,x+e_i}(\varphi )\). Here, for \(u,v\in {\mathbb {Z}^d}\), \(D_{u,v}(\varphi )={\mathbb {E}}_\rho (\varphi (\eta ^{u,v})-\varphi (\eta ))^2\).

We now argue in \(d=1\), and remark later on modifications to \(d\ge 2\). The left inequality in (4.1) is trivial since \(s^{(FR-NN)}(1)=2^{-1}\), \(s(1)=c2^{-1}(b_1^++b_1^-)>0\) and so \(\Vert \varphi \Vert _{1}^2\ge \frac{s(1)}{s^{(FR-NN)}(1)} \Vert \varphi \Vert _{1,(FR-NN)}^2\).

For the right inequality in (4.1), consider the bond \((x,x+y)\) for \(y>0\), and rewrite \(\eta ^{x,x+y}\) as a sequence of nearest-neighbor exchanges: Exchange in succession the values on the bonds \((x, x+1)\), \((x+1, x+2)\), and so on up to \((x+y-1,x+y)\), so that the value initially at x now sits at \(x+y\). Then exchange on the bonds \((x+y-2, x+y-1)\), and so on down to \((x,x+1)\). This returns the value initially at \(x+y\) to x, and shifts the values at the intermediate points back to their initial states.

Then, by the invariance of \(\nu _\rho \), adding and subtracting the \(2y -1\) intermediate terms and applying the Schwarz inequality, the Dirichlet bond is bounded as \(D_{x,x+y}(\varphi ) \le 2y\sum _{z=x}^{x+y-1} D_{z,z+1}(\varphi )\). Since \(\alpha >2\), we have \(\sum y^2s(y)<\infty \) and

$$\begin{aligned} \Vert \varphi \Vert _1^2\le & {} \sum _y 2ys(y)\sum _x\sum _{z=x}^{x+y-1}D_{z,z+1}(\varphi ) \le \left( \sum _y 2y^2s(y)\right) \sum _x D_{x,x+1}(\varphi ) \\\le & {} \ s^{(FR-NN)}(1)^{-1} \left( \sum _y 2y^2s(y)\right) \Vert \varphi \Vert ^2_{1, (FR-NN)}. \end{aligned}$$

In \(d\ge 2\), the proof of the left inequality in (4.1) is similar, as \(s^{(FR-NN)}(e_i), s(e_i)>0\) for \(1\le i\le d\). For the right inequality, an exchange over the bond \((x,x+y)\) is decomposed by nearest-neighbor exchanges first on bonds \((x,x+e_1)\) to \(((x_1+y_1-1,x_2), (x_1+y_1,x_2))\), and then from \(((x_1+y_1,x_2), (x_1+y_1,x_2+1))\) to \(x+y\). Then, as before in the \(d=1\) argument, exchanges are made on the vertical and horizontal lines to bring the value at \(x+y\) to x, and shift back other values. The analysis is now analogous with more notation (cf. Appendix 3.3 in [12]).\(\square \)
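The nearest-neighbor decomposition used in the proof can be sketched in code (an illustrative check, not part of the paper): the exchange \(\eta ^{x,x+y}\) coincides with the composition of the \(2y-1\) nearest-neighbor exchanges described above.

```python
# Illustrative sketch (not from the paper): the long-range exchange
# eta^{x,x+y} as a composition of 2y - 1 nearest-neighbor exchanges.
def nn_path(x, y):
    """Bonds (z, z+1) in the order used in the proof: y steps up, y-1 back."""
    up = [(z, z + 1) for z in range(x, x + y)]
    down = [(z, z + 1) for z in range(x + y - 2, x - 1, -1)]
    return up + down

def apply_exchanges(config, bonds):
    config = list(config)
    for u, v in bonds:
        config[u], config[v] = config[v], config[u]
    return config

x, y = 2, 5
config = list(range(10))                      # labeled values on sites 0..9
swapped = apply_exchanges(config, nn_path(x, y))
direct = list(range(10))
direct[x], direct[x + y] = direct[x + y], direct[x]
print(swapped == direct, len(nn_path(x, y)) == 2 * y - 1)   # True True
```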

We will say that the ‘drift’ of an exclusion generator \({{\mathscr {L}}}_0\) with transition function \(p_0(\cdot )\) on \({\mathbb {Z}}^d\) is the vector \(\sum _{y\in {\mathbb {Z}}^d} yp_0(y)\). Let \({{\mathscr {L}}}\) denote an (LA) long-range generator with jump probability \(p(\cdot )\), and let \({{\mathscr {L}}}^1\) generate a finite-range nearest-neighbor (FR-NN) exclusion process where

$$\begin{aligned} ({{\mathscr {L}}}^1 f)(\eta ) = \sum _{z\in {\mathbb {Z}}^d} \sum _{i=1}^d |m_i|\eta (z+ {{\mathrm{sgn}}}(m_i)e_i)(1-\eta (z))\nabla _{z,z+{{\mathrm{sgn}}}(m_i)e_i}f(\eta ). \end{aligned}$$

Note that the drifts of \({{\mathscr {L}}}\) and \({{\mathscr {L}}}^1\) equal \(m = \sum yp(y)\) and \(-m\) respectively.

Lemma 4.3

Suppose \(\alpha >2\), \(d\ge 1\) and consider the exclusion process generated by \( \tilde{{{\mathscr {L}}}}={{\mathscr {L}}} +{{\mathscr {L}}}^1\). Then, \(\tilde{{\mathscr {L}}}\) satisfies a sector condition: There exists a constant \(C=C(p,d,\alpha )\) such that on local functions \(\varphi , \psi :\Omega \rightarrow {\mathbb {R}}\) we have

$$\begin{aligned} \langle (-\tilde{{{\mathscr {L}}}})\varphi , \psi \rangle _{\rho } \ \le \ C\, \Vert \varphi \Vert _{1, (FR-NN)}\, \Vert \psi \Vert _{1, (FR-NN)}. \end{aligned}$$
(4.2)

Remark 4.4

We remark that (4.2) is a generalization, to the long-range setting, of the finite-range sector inequality in Lemma 5.2 of [30]: Let \(\widehat{{\mathscr {L}}}\) be the generator of a finite-range mean-zero process. Then,

$$\begin{aligned} \langle (-\widehat{{{\mathscr {L}}}})\varphi , \psi \rangle _{\rho } \le C\, \Vert \varphi \Vert _{1, (FR)}\, \Vert \psi \Vert _{1, (FR)}. \end{aligned}$$
(4.3)

Proof

We will prove the result in \(d=1\) and will assume that \(p(\cdot )\) is ‘totally asymmetric to the right’, that is, p is supported on the integers \(z>0\). The same proof holds when p is ‘totally asymmetric to the left’, that is, when p is supported on the negative integers. Since a general transition function p is a linear combination of such probabilities, the desired inequality in Lemma 4.3 then follows.

In \(d\ge 2\), a similar but more notationally involved argument, decomposing a jump from x to \(x+y\) into jumps parallel to the axes, also applies.

The idea is to show that \(\tilde{{\mathscr {L}}}\) can be decomposed into a finite sum of operators, corresponding to smaller jump sizes, where each operator satisfies a sector inequality as in (4.2) with the same right-hand side. With such inequalities in hand, (4.2) would follow.

In the following, as has been our convention, we will denote the adjoint generators in \({\mathbb {L}}^2(\nu _\rho )\) by the superscript \(*\). An adjoint \(\mathscr {N}^*\) will move particles in the opposite direction with the same rates and increments as \(\mathscr {N}\).

To this end, let \(\beta \ge 0\) be a real number. For integers \(k>0\), define \(\bar{k}_\beta = \lfloor k/\lfloor k^\beta \rfloor \rfloor \) and \(\hat{k}_\beta = \lfloor k^\beta \rfloor \). Let \(w:{\mathbb {Z}}\rightarrow {\mathbb {R}}\) be a ‘weight’ function such that \(0\le w(k)\le \bar{k}_\beta \) for \(k>0\). Define

$$\begin{aligned} (\mathscr {N}^{\beta , w} f)(\eta ) = \sum _{x\in {\mathbb {Z}}} \sum _{k>0} p(k) w(k) \eta (x)\left( 1-\eta (x + \hat{k}_\beta )\right) \nabla _{x,x+\hat{k}_\beta }f(\eta ). \end{aligned}$$

Here, particles are moved to the right, for index \(k>0\), in steps of size \(\hat{k}_\beta \). When \(\beta = 0\) and \(w(k)\equiv \bar{k}_\beta \equiv k\), since \(\hat{k}_\beta \equiv 1\), we have \(\mathscr {N}^{\beta , w, *} = {{\mathscr {L}}}^{1}\). Also, if \(\beta = 1\) and \(w(k)\equiv \bar{k}_\beta \equiv 1\), since \(\hat{k}_\beta \equiv k\), then \(\mathscr {N}^{\beta , w} = {{\mathscr {L}}}\).
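The quantities \(\bar{k}_\beta \) and \(\hat{k}_\beta \), together with the two special cases just noted, can be sketched as follows (an illustrative check, not part of the paper):

```python
# Illustrative sketch (not from the paper): kbar_beta = floor(k/floor(k^beta))
# and khat_beta = floor(k^beta), with the special cases beta = 0 and beta = 1.
import math

def khat(k, beta):
    return math.floor(k ** beta)

def kbar(k, beta):
    return math.floor(k / khat(k, beta))

for k in range(1, 200):
    assert khat(k, 0.0) == 1 and kbar(k, 0.0) == k    # beta = 0: unit steps
    assert khat(k, 1.0) == k and kbar(k, 1.0) == 1    # beta = 1: one full jump
    # kbar steps of size khat never overshoot k
    assert kbar(k, 0.5) * khat(k, 0.5) <= k
print("ok")
```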

For \(0\le \gamma \le \beta \), define

$$\begin{aligned} (\mathscr {N}^{\beta ,\gamma ,w} f)(\eta )= & {} \sum _{x\in {\mathbb {Z}}} \sum _{k>0} p(k) w(k) \left\{ \lfloor \hat{k}_\beta /\hat{k}_\gamma \rfloor \eta (x)\left( 1-\eta (x + \hat{k}_\gamma )\right) \nabla _{x,x+\hat{k}_\gamma }f(\eta )\right. \\&\left. +\, \eta \left( x+ \lfloor \hat{k}_\beta /\hat{k}_\gamma \rfloor \hat{k}_\gamma \right) (1-\eta (x+\hat{k}_\beta )) \nabla _{x+ \lfloor \hat{k}_\beta /\hat{k}_\gamma \rfloor \hat{k}_\gamma , x+\hat{k}_\beta }f(\eta )\right\} . \end{aligned}$$

The operator \(\mathscr {N}^{\beta , \gamma , w}\) is in some sense a ‘truncation’ of \(\mathscr {N}^{\beta , w}\) in that for index \(k>0\) the particle jump size is truncated to at most \(\hat{k}_\gamma \).

A direct computation shows that the operator \(\mathscr {N}^{\beta ,w} + \mathscr {N}^{\beta ,\gamma ,w, *}\) can be written in terms of certain ‘loops’, where a particle moves from x to \(x+ \hat{k}_\beta \) and then back in increments of \(\hat{k}_\gamma \), except possibly for one step:

$$\begin{aligned} \left[ \mathscr {N}^{\beta ,w} + \mathscr {N}^{\beta ,\gamma ,w, *}\right] \varphi (\eta )= & {} \sum _{x\in {\mathbb {Z}}}\sum _{k>0} p(k)w(k)\mathscr {N}^{\beta ,\gamma }_{x,k}\varphi (\eta ) \quad \mathrm{where}\\ \mathscr {N}^{\beta ,\gamma }_{x,k}\varphi (\eta )= & {} \eta (x)(1-\eta (x+\hat{k}_\beta ))\nabla _{x,x+\hat{k}_\beta }\varphi (\eta )\\&+\sum _{y=0}^{\lfloor \hat{k}_\beta /\hat{k}_\gamma \rfloor -1} \eta \left( x+(y+ 1)\hat{k}_\gamma \right) \left( 1-\eta \left( x+y\hat{k}_\gamma \right) \right) \\&\quad \times \nabla _{x+y\hat{k}_\gamma ,x+(y+ 1)\hat{k}_\gamma }\varphi (\eta )\\&+ \eta (x+\hat{k}_\beta )\left( 1-\eta \left( x+ \lfloor \hat{k}_\beta /\hat{k}_\gamma \rfloor \hat{k}_\gamma \right) \right) \\&\quad \times \nabla _{x+ \lfloor \hat{k}_\beta /\hat{k}_\gamma \rfloor \hat{k}_\gamma , x+\hat{k}_\beta }\varphi (\eta ), \end{aligned}$$

with the convention that the empty sum \(\sum _{y=0}^{-1} = 0\).

It will be convenient to work with generalizations of the above operators: For \(k>0\), let \(u_\beta (k)\) be an integer such that \(0\le u_\beta (k)\le \hat{k}_\beta \). Define

$$\begin{aligned} \mathscr {N}^{gen, u_\beta , w}f(\eta )= & {} \sum _{x\in {\mathbb {Z}}}\sum _{k>0} p(k)w(k) \eta (x) \left( 1{-} \eta (x+ u_\beta (k))\right) \nabla _{x, x+ u_\beta (k)}f(\eta ) \ \ \mathrm{and }\\ \mathscr {N}^{gen, u_\beta , \gamma ,w}f(\eta )= & {} \sum _{x\in {\mathbb {Z}}}\sum _{k>0} p(k)w(k) \left\{ \lfloor u_\beta (k)/\hat{k}_{\gamma }\rfloor \eta (x)\left( 1{-}\eta (x+\hat{k}_{\gamma })\right) \nabla _{x,x+\hat{k}_{\gamma }}f(\eta ) \right. \\&\left. + \eta \left( x+\lfloor u_\beta (k)/\hat{k}_{\gamma }\rfloor \hat{k}_\gamma \right) \left( 1{-}\eta (x + u_\beta (k)\right) \nabla _{x + \lfloor u_\beta (k)/\hat{k}_{\gamma }\rfloor \hat{k}_{\gamma }, x+u_\beta (k)}f(\eta )\right\} . \end{aligned}$$

Again, \(\mathscr {N}^{gen, u_\beta , \gamma , w}\) is a truncation of \(\mathscr {N}^{gen, u_\beta , w}\). If \(u_\beta (k)<\hat{k}_\gamma \), then all jumps in \(\mathscr {N}^{gen, u_\beta , \gamma , w}\) are rightward jumps of size \(u_\beta (k)\) for \(k>0\); if \(\gamma =0\), then all jumps are rightward jumps of size 1. Write

$$\begin{aligned}&\left[ \mathscr {N}^{gen, u_\beta , w} + \mathscr {N}^{gen, u_\beta ,\gamma , w, *}\right] \varphi (\eta ) = \sum _{x\in {\mathbb {Z}}}\sum _{k>0} p(k)w(k) \mathscr {N}^{gen, u_\beta , \gamma }_{x,k}\varphi (\eta )\,\mathrm{where}\\&\quad \mathscr {N}^{gen,u_\beta ,\gamma }_{x,k}\varphi (\eta ) = \eta (x)(1-\eta (x+u_\beta (k)))\nabla _{x,x+u_\beta (k)}\varphi (\eta )\\&\qquad +\sum _{y=0}^{\lfloor u_\beta (k)/\hat{k}_\gamma \rfloor -1} \eta \left( x+(y+ 1)\hat{k}_\gamma \right) \left( 1-\eta \left( x+y\hat{k}_\gamma \right) \right) \\&\qquad \times \nabla _{x+y\hat{k}_\gamma ,x+(y+ 1)\hat{k}_\gamma }\varphi (\eta )\\&\qquad +\, \eta (x+u_\beta (k))\left( 1-\eta \left( x+ \lfloor u_\beta (k)/\hat{k}_\gamma \rfloor \hat{k}_\gamma \right) \right) \\&\qquad \times \, \nabla _{x+ \lfloor u_\beta (k)/\hat{k}_\gamma \rfloor \hat{k}_\gamma , x+u_\beta (k)}\varphi (\eta ). \end{aligned}$$

Note, by construction, that the drift of \(\mathscr {N}^{gen, u_\beta , w} + \mathscr {N}^{gen, u_\beta , \gamma , w, *}\) vanishes.

Claim. Let \(0\le \beta , \gamma \le 1\) and let \(r_\beta (\cdot )\) be a weight function such that \(\sup _{k>0} r_\beta (k)/k^{1-\beta }<\infty \). Then, for \(\beta \ge \gamma >\beta - (\alpha -2)\), there is a constant \(C = C(\alpha , \beta , \gamma )\) such that

$$\begin{aligned} \left\langle -\left[ \mathscr {N}^{gen, u_\beta , r_\beta } + \mathscr {N}^{gen, u_\beta , \gamma , r_\beta , *}\right] \varphi , \psi \right\rangle _{\rho } \ \le \ C\, \Vert \varphi \Vert _{1, (FA-NN)}\, \Vert \psi \Vert _{1, (FA-NN)}.\nonumber \\ \end{aligned}$$
(4.4)

Assuming (4.4), which is proved at the end, we now prove the sector inequality (4.2).

Step 1. Let \(\beta =1\) and \(w_1(k)\equiv \bar{k}_\beta \equiv 1\). Then, \(\mathscr {N}^{\beta ,w_1} = {{\mathscr {L}}}\). By (4.4), \(\mathscr {N}^{\beta , w_1} + \mathscr {N}^{\beta , \gamma _1, w_1, *}\) satisfies a sector inequality when \(\beta \ge \gamma _1>\beta - (\alpha -2)\) and \(\gamma _1\ge 0\). When \(\alpha >3\), since \(1 - (\alpha -2)<0\), we may take \(\gamma _1 =0\). In this case, \(\mathscr {N}^{1,w_1} + \mathscr {N}^{1,0,w_1, *} = {{\mathscr {L}}} + {{\mathscr {L}}}^1\), and the desired sector inequality already follows. The reader can now skip to the proof of (4.4).

However, when \(2<\alpha \le 3\), fix \(\gamma _1 = 1 - (\alpha -2)/2\). We need only prove a sector inequality for \(-\mathscr {N}^{1,\gamma _1, w_1,*} + {{\mathscr {L}}}^{1}\), or equivalently \(\mathscr {N}^{1,\gamma _1, w_1} - {{\mathscr {L}}}^{1, *}\). Decompose \({{\mathscr {L}}}^1 = {{\mathscr {S}}}^1 + {{\mathscr {A}}}^1\) into symmetric and anti-symmetric parts. A sector inequality holds for the self-adjoint generator \({{\mathscr {S}}}^1\): By Schwarz inequality, \(\langle (-{{\mathscr {S}}}^1)\varphi , \psi \rangle _\rho = \langle (-{{\mathscr {S}}}^1)^{1/2}\varphi , (-{{\mathscr {S}}}^1)^{1/2}\psi \rangle _\rho \le C \Vert \varphi \Vert _{1, (FA-NN)}\Vert \psi \Vert _{1, (FA-NN)}\).

Hence, since \({{\mathscr {S}}}^1 = {{\mathscr {S}}}^{1,*}\) and \({{\mathscr {A}}}^1 = -{{\mathscr {A}}}^{1,*}\), and therefore \(-{{\mathscr {L}}}^{1, *} = {{\mathscr {L}}}^1 -2{{\mathscr {S}}}^1\), it will be enough to show a sector inequality for \(\mathscr {N}^{1,\gamma _1, w_1} + {{\mathscr {L}}}^1\), where all jumps, for index \(k>0\), are of length at most \(\hat{k}_{\gamma _1}\).

We will apply (4.4) in the sequel to make further reductions in terms of the jump sizes.

Step 2. More generally, for \(0\le \beta \le 1\), let the weight w and \(u_\beta \) be such that \(0\le w(k)\le \bar{k}_\beta \) and \(0\le u_\beta (k)\le \hat{k}_\beta \) for \(k>0\). Suppose now \(\beta \ge \gamma \ge \beta - (\alpha -2)\) and \(\gamma \ge 0\). We may write \(\mathscr {N}^{gen,u_\beta ,\gamma ,w} = \mathscr {N}^{\gamma , w'} + \mathscr {N}^{gen, u_{\gamma },w}\) where \(w'(k) = v(k; w(k), u_\beta (k), \gamma ) \le \bar{k}_{\gamma }\) and \(u_{\gamma }(k) = q(k; u_\beta (k), \gamma ) \le \hat{k}_{\gamma }\) for \(k>0\). Here,

$$\begin{aligned} v(k; w(k),u(k), \gamma )&:= w(k) \lfloor u(k)/\hat{k}_{\gamma }\rfloor \quad \text {and}\\ q(k; u(k),\gamma )&:= u(k) - \lfloor u(k)/\hat{k}_{\gamma }\rfloor \hat{k}_{\gamma }. \end{aligned}$$

Since \(w(k)\le \bar{k}_\beta \le \bar{k}_\gamma \), notice that \(\mathscr {N}^{\gamma , w'}\) and \(\mathscr {N}^{gen, u_\gamma , w}\) are in form \(\mathscr {N}^{gen, u^{+}_\gamma , w^+}\) where \(u^+_\gamma (k) \le \hat{k}_\gamma \) and \(w^+(k)\le \bar{k}_\gamma \) for \(k>0\). Then, by (4.4), when \(\gamma \ge \pi \ge \gamma - (\alpha -2)\) and \(\pi \ge 0\), a sector inequality holds for both \(\mathscr {N}^{\gamma , w'} + \mathscr {N}^{gen, \gamma , \pi , w', *}\) and \(\mathscr {N}^{gen, u_\gamma , w} + \mathscr {N}^{gen, u_\gamma , \pi , w, *}\).

Hence, to prove a sector inequality for \(\mathscr {N}^{gen, u_\beta , \gamma , w} + b{{\mathscr {L}}}^1\), where b is the constant such that the drift of \(\mathscr {N}^{gen, u_\beta , \gamma , w} + b{{\mathscr {L}}}^1\) vanishes, that is

$$\begin{aligned} \sum _{k>0} p(k)w(k) \left[ \lfloor u_\beta (k)/\hat{k}_\gamma \rfloor \hat{k}_\gamma + q(k; u_\beta , \gamma )\right] - b\sum _{k>0} k p(k) = 0, \end{aligned}$$

by the discussion in Step 1 with respect to the sector inequality for \({{\mathscr {S}}}^1\), it is enough to show a sector inequality for \(\mathscr {N}^{gen, \gamma , \pi , w'} + \mathscr {N}^{gen, u_\gamma , \pi , w} + b{{\mathscr {L}}}^1\). In particular, it is sufficient to show a sector inequality for \(\mathscr {N}^{gen, \gamma , \pi , w'} + b_1{{\mathscr {L}}}^1\) and \(\mathscr {N}^{gen, u_\gamma , \pi , w} + b_2{{\mathscr {L}}}^1\), where \(b_1, b_2\) are such that the drifts of the two operators vanish. Note, by construction, that \(\mathscr {N}^{gen, u_\beta , \gamma , w}\) and \(\mathscr {N}^{gen, \gamma , \pi , w'} + \mathscr {N}^{gen, u_\gamma , \pi , w}\) have the same drift, and so necessarily \(b_1+b_2 = b\).

When \(\gamma - (\alpha -2)\ge 0\), fix \(\pi = \gamma - (\alpha -2)/2\). In this case, since both truncated operators \(\mathscr {N}^{gen, \gamma , \pi , w'}\) and \(\mathscr {N}^{gen, u_\gamma , \pi , w}\) are in form \(\mathscr {N}^{gen, u^\#_\gamma , \pi , w^\#}\) where \(u^\#_\gamma \le \hat{k}_\gamma \) and weight \(w^\#(k) \le \bar{k}_\gamma \) for \(k>0\), we can now repeat the above analysis with parameters \((\gamma , \pi )\) in place of \((\beta , \gamma )\) to obtain a further reduction.

Step 3. We will need only iterate the above procedure \(\ell -1\) times, with respect to parameters \((1,\gamma _1), (\gamma _1, \gamma _2), \ldots , (\gamma _{\ell -1},\gamma _\ell )\) such that \(\gamma _{i+1} = \gamma _i - (\alpha -2)/2\) for \(1\le i\le \ell -2\), and \(\gamma _{\ell -1}\ge \gamma _\ell >\gamma _{\ell -1}- (\alpha -2)\) and \(\gamma _\ell \ge 0\). Here, \(\ell \ge 2\) is the smallest integer satisfying \(\gamma _{\ell -1} - (\alpha -2)=1-(\ell -1)(\alpha -2)/2 - (\alpha -2)< 0\).

At this point, to show (4.2), we need only show a sector inequality for \(2^\ell \) operators, each of the form \(\mathscr {N}^{gen, u_{\gamma _{\ell -1}}, \gamma _{\ell }, w_{\ell -1}} + b{{\mathscr {L}}}^1\), where b is such that the drift vanishes, \(0\le u_{\gamma _{\ell -1}}(k) \le \hat{k}_{\gamma _{\ell -1}}\), weight \(0\le w_{\ell -1}(k)\le \bar{k}_{\gamma _{\ell -1}}\) and jump sizes are less than \(\hat{k}_{\gamma _\ell }\) for \(k>0\).

We may select \(\gamma _\ell =0\), in which case, by definition, all jumps in \(\mathscr {N}^{gen, u_{\gamma _{\ell -1}}, \gamma _\ell , w_{\ell -1}} + b{{\mathscr {L}}}^1\) are of size at most 1, the jumps in \(\mathscr {N}^{gen, u_{\gamma _{\ell -1}}, \gamma _\ell , w_{\ell -1}}\) and \(b{{\mathscr {L}}}^1\) being to the right and left respectively. Since the operator has zero drift, it equals \(2b{{\mathscr {S}}}^1\). The sector inequality follows now by Schwarz inequality as in Step 1. \(\square \)
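As a concrete illustration (not part of the proof), the number of truncation rounds \(\ell \) can be computed directly from \(\alpha \); the sketch below simply iterates the defining inequality:

```python
def num_steps(alpha):
    # smallest integer l >= 2 with 1 - (l-1)*(alpha-2)/2 - (alpha-2) < 0,
    # i.e. gamma_{l-1} - (alpha-2) < 0 where gamma_i = 1 - i*(alpha-2)/2
    assert 2 < alpha <= 3
    l = 2
    while 1 - (l - 1) * (alpha - 2) / 2 - (alpha - 2) >= 0:
        l += 1
    return l

# the closer alpha is to 2, the more truncation rounds are required
assert num_steps(3.0) == 2
assert num_steps(2.5) == 4
```

In particular, \(\ell \) blows up as \(\alpha \downarrow 2\), reflecting that each round shortens jumps only by the factor \(k^{(\alpha -2)/2}\).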

Proof of (4.4)

We first collect some observations. In the following, C may be a constant which changes from line to line.

  (1)

    For fixed \(x\in {\mathbb {Z}}\) and \(k>0\), when \(u_\beta (k)>0\), the operator \(\mathscr {N}^{gen, u_\beta , \gamma }_{x,k}\) can be viewed as a totally asymmetric nearest-neighbor exclusion generator on a ring \(\Lambda _{x,k}\) of \(\kappa (k):=\lfloor u_\beta (k)/\hat{k}_\gamma \rfloor +2\le Ck^{\beta - \gamma }\) sites \(y_0 =y_{\kappa (k)}=x\), \(y_1 = x+ u_\beta (k)\), \(y_2 = x+\lfloor u_\beta (k)/\hat{k}_\gamma \rfloor \hat{k}_\gamma \), \(y_3 = x + \big (\lfloor u_\beta (k)/\hat{k}_\gamma \rfloor -1\big )\hat{k}_\gamma \), ..., \(y_{\kappa (k)-1} = x + \hat{k}_\gamma \). When \(u_\beta (k)=0\), \(\mathscr {N}^{gen, u_\beta , \gamma }_{x,k} \equiv 0\).

  (2)

    When \(u_\beta (k)>0\), as \(\mathscr {N}^{gen, u_\beta ,\gamma }_{x,k}\) is a generator, the function \(\mathscr {N}^{gen, u_\beta ,\gamma }_{x,k}\varphi \) is mean-zero with respect to each ring-canonical invariant measure \(\nu _\rho ^{(j, \zeta )}:=\nu _\rho \{\cdot |\sum _{i=0}^{\kappa (k)-1} \eta (y_i) = j, \{\eta (z): z\not \in \Lambda _{x,k}\}=\zeta \}\) for \(0\le j \le \kappa (k)\) and outside configurations \(\zeta \); when \(j=0\) or \(\kappa (k)\), there is no motion and \(\mathscr {N}^{gen, u_\beta , \gamma }_{x,k}\varphi \equiv 0\). When \(\Lambda _{x,k}\) has \(1\le j\le \kappa (k)-1\) particles, \(\nu _\rho ^{(j,\zeta )}\) is also the unique invariant measure for the symmetrized process with generator \(\mathscr {N}^s_{x,k}\). The smallest eigenvalue of \(-\mathscr {N}^s_{x,k}\) is 0, corresponding to constant eigenfunctions; in particular, \(\mathscr {N}^{gen, u_\beta , \gamma }_{x,k}\varphi \) is orthogonal to this eigenspace. Also, the spectral gap of \(-\mathscr {N}^s_{x,k}\) on the ring with \(1\le j\le \kappa (k)-1\) particles is bounded below by \(K/{\kappa (k)}^2\), where K is a universal constant, in particular not depending on j [18]. Then,

    $$\begin{aligned}&E_{\nu ^{(j,\zeta )}_\rho }\left[ \left( {-}\mathscr {N}^{gen, u_\beta , \gamma }_{x,k}\varphi \right) \psi \right] = E_{\nu ^{(j,\zeta )}_\rho }\left[ \left( {-}\mathscr {N}^s_{x,k}\right) ^{{-}1/2}\left( -\mathscr {N}^{gen, u_\beta , \gamma }_{x,k}\varphi \right) \left( {-}\mathscr {N}^s_{x,k}\right) ^{1/2}\psi \right] \\&\quad \le \ K^{-1/2}\kappa (k)\Vert \mathscr {N}^{gen, u_\beta , \gamma }_{x,k}\varphi \Vert _{{\mathbb {L}}^2(\nu ^{(j,\zeta )}_\rho )}E_{\nu ^{(j,\zeta )}_\rho }\left[ \psi \left( -\mathscr {N}^s_{x,k}\psi \right) \right] ^{1/2}. \end{aligned}$$

    Note that the ring-Dirichlet form

    $$\begin{aligned} E_{\nu _\rho ^{(j,\zeta )}}\left[ \psi (-\mathscr {N}^s_{x,k}\psi )\right] = \frac{1}{4}\sum _{i=0}^{\kappa (k)-1}\Vert \nabla _{y_i,y_{i+1}}\psi \Vert ^2_{{\mathbb {L}}^2(\nu _\rho ^{(j,\zeta )})}. \end{aligned}$$

    Then, we have \(\langle -\mathscr {N}^{gen, u_\beta , \gamma }_{x,k}\varphi , \psi \rangle _\rho \) equals

    $$\begin{aligned} E_{\nu _\rho }\left[ \sum _{j=0}^{\kappa (k)} \nu _\rho \left( \sum _{i=0}^{\kappa (k)-1}\eta (y_i) = j\big | \{\eta (z): z\not \in \Lambda _{x,k}\}= \zeta \right) E_{\nu _\rho ^{(j,\zeta )}}\left[ \left( -\mathscr {N}^{gen,u_\beta ,\gamma }_{x,k}\varphi \right) \psi \right] \right] \end{aligned}$$

    which is less than

    $$\begin{aligned}&\le \ C\kappa (k)E_{\nu _\rho }\left[ \sum _{j=0}^{\kappa (k)} \nu _\rho \left( \sum _{i=0}^{\kappa (k)-1}\eta (y_i) = j\big | \{\eta (z): z\not \in \Lambda _{x,k}\}=\zeta \right) \right. \nonumber \\&\quad \times \left. \left\| \mathscr {N}^{gen, u_\beta , \gamma }_{x,k}\varphi \right\| _{{\mathbb {L}}^2(\nu _\rho ^{(j,\zeta )})}\left[ \sum _{i=0}^{\kappa (k)-1}\Vert \nabla _{y_i,y_{i+1}}\psi \Vert ^2_{{\mathbb {L}}^2(\nu _\rho ^{(j,\zeta )})}\right] ^{1/2}\right] \nonumber \\&\quad \le \ C\kappa (k) \left\| \mathscr {N}^{gen,u_\beta , \gamma }_{x,k}\varphi \right\| _{{\mathbb {L}}^2(\nu _\rho )}\left[ \sum _{i=0}^{\kappa (k)-1}D_{y_i,y_{i+1}}(\psi )\right] ^{1/2}. \end{aligned}$$
    (4.5)

    Here, in the last line, we have used the notation \(\Vert \nabla _{a,b}f\Vert ^2_{{\mathbb {L}}^2(\nu _\rho )} = D_{a,b}(f)\) (introduced in Lemma 4.1), and the relation \(2ab = \inf _{\varepsilon >0} \{\varepsilon a^2 + \varepsilon ^{-1}b^2\}\) to recover the \({\mathbb {L}}^2(\nu _\rho )\) norms.

  (3)

    When \(u_\beta (k)>0\), by standard inequalities, \(\Vert \mathscr {N}^{gen,u_\beta ,\gamma }_{x,k}\varphi \Vert ^2_{{\mathbb {L}}^2(\nu _\rho )}\) is less than

    $$\begin{aligned}&2\left\| \eta (y_0)(1{-}\eta (y_{1}))\nabla _{y_0, y_{1}}\varphi \right\| ^2_{{\mathbb {L}}^2(\nu _\rho )} + 2\left\| \sum _{i=1}^{\kappa (k){-}1} \eta (y_i)(1-\eta (y_{i+1}))\nabla _{y_i, y_{i+1}}\varphi \right\| ^2_{{\mathbb {L}}^2(\nu _\rho )}\\&\quad \le \ 2\left\| \nabla _{y_0, y_{1}}\varphi \right\| ^2_{{\mathbb {L}}^2(\nu _\rho )}+ 2(\kappa (k)-1)\sum _{i=1}^{\kappa (k)-1} \left\| \nabla _{y_i, y_{i+1}}\varphi \right\| ^2_{{\mathbb {L}}^2(\nu _\rho )}. \end{aligned}$$

    In other words,

    $$\begin{aligned} \left\| \mathscr {N}^{gen,u_\beta ,\gamma }_{x,k}\varphi \right\| ^2_{{\mathbb {L}}^2(\nu _\rho )} \ \le \ 2D_{y_0, y_1}(\varphi ) + 2(\kappa (k)-1)\sum _{i=1}^{\kappa (k)-1} D_{y_i, y_{i+1}}(\varphi ).\nonumber \\ \end{aligned}$$
    (4.6)
  (4)

    As in the proof of Lemma 4.1, we have for \(a,b,x\in {\mathbb {Z}}\) with \(a<b\) that \(D_{x+a,x+b}(f) \le |b-a|\sum _{i=a}^{b-1} D_{x+i,x+i+1}(f)\) and

    $$\begin{aligned} \sum _{x\in {\mathbb {Z}}} D_{x+a, x+b}(f) \ \le (b-a)^2 \sum _{x\in {\mathbb {Z}}} D_{x,x+1}(f) \ \le \ C(b-a)^2 \Vert f\Vert _{1, (FA-NN)}^2.\nonumber \\ \end{aligned}$$
    (4.7)

    Finally, we combine the estimates in (1)–(4). For each \(k>0\), through the relation \(2ab = \inf _{\varepsilon _k>0} \{\varepsilon _k a^2 + \varepsilon _k^{-1} b^2\}\) and noting (4.5) and (4.6), we have

    $$\begin{aligned}&\sum _{x\in {\mathbb {Z}}} p(k)r_\beta (k) \left\langle -\mathscr {N}^{gen, u_\beta ,\gamma }_{x,k}\varphi , \psi \right\rangle _\rho \\&\le C\sum _{x\in {\mathbb {Z}}}p(k)r_\beta (k) \left\{ \varepsilon _k{\kappa (k)}^2\left[ D_{y_0, y_1}(\varphi ) + (\kappa (k)-1)\sum _{i=1}^{\kappa (k)-1}D_{y_i,y_{i+1}}(\varphi )\right] \right. \\&\quad \left. + \varepsilon _k^{-1}\left[ D_{y_0,y_1}(\psi ) + \sum _{i=1}^{\kappa (k)-1}D_{y_i,y_{i+1}}(\psi )\right] \right\} . \end{aligned}$$

    Now, note that \(|y_0 - y_1|\le k^\beta \) and \(|y_i - y_{i+1}|\le k^\gamma \) for \(1\le i\le \kappa (k)-1\). Recall also that \(0\le \gamma \le \beta \le 1\), \(\kappa (k)\le Ck^{\beta -\gamma }\) and \(r_\beta (k)\le Ck^{1-\beta }\) for \(k>0\). Then, noting (4.7), the last display is less than

    $$\begin{aligned}&Cp(k)r_\beta (k)\left\{ \varepsilon _k (k^{\beta - \gamma })^2\left[ k^{2\beta }\Vert \varphi \Vert ^2_{1, (FA-NN)} + (k^{\beta - \gamma })^2k^{2\gamma }\Vert \varphi \Vert ^2_{1, (FA-NN)}\right] \right. \\&\quad \left. + \varepsilon _k^{-1}\left[ k^{2\beta }\Vert \psi \Vert ^2_{1, (FA-NN)} + k^{\beta - \gamma }k^{2\gamma } \Vert \psi \Vert ^2_{1, (FA-NN)}\right] \right\} . \end{aligned}$$

    Optimizing over \(\varepsilon _k\), we have the upper bound:

    $$\begin{aligned}&Cp(k)r_\beta (k)\left[ (k^{\beta - \gamma })^2\left( k^{2\beta } + (k^{\beta - \gamma })^2 k^{2\gamma }\right) \right. \nonumber \\&\left. \qquad \times \left( k^{2\beta } + k^{\beta - \gamma }k^{2\gamma }\right) \right] ^{1/2}\Vert \varphi \Vert _{1, (FA-NN)}\Vert \psi \Vert _{1, (FA-NN)}\\&\quad \le \ C p(k) k^{1-\beta } k^{3\beta - \gamma }\Vert \varphi \Vert _{1, (FA-NN)}\Vert \psi \Vert _{1, (FA-NN)}\\&\quad =C p(k) k^{1+2\beta -\gamma }\Vert \varphi \Vert _{1, (FA-NN)}\Vert \psi \Vert _{1,(FA-NN)}. \end{aligned}$$

    To finish, we will need to sum over \(k>0\). Recall that \(p(k) = ck^{-1-\alpha }\) for \(k>0\). When \(1+2\beta -\gamma <\alpha \), the sum \(\sum _{k>0} p(k) k^{1+2\beta -\gamma } <\infty \). Since \(\beta \le 1\), this relation is satisfied when \(\gamma > \beta - (\alpha -2)\), and (4.4) is verified. \(\square \)
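As a numerical aside, the \(K/{\kappa (k)}^2\) gap scaling used in observation (2) is already visible for a single particle: the symmetric nearest-neighbor walk on a ring of \(\kappa \) sites (jump rate 1/2 to each neighbor) has spectral gap exactly \(1-\cos (2\pi /\kappa )\). This one-particle sketch illustrates the scaling only; the uniform-in-j bound for the exclusion ring is the content of [18]:

```python
import math

def ring_gap(kappa):
    # eigenvalues of minus the symmetric nearest-neighbor walk generator
    # (rate 1/2 per neighbor) on a ring of kappa sites are
    # 1 - cos(2*pi*m/kappa), m = 0, ..., kappa-1; the gap is the smallest
    # nonzero one, attained at m = 1
    return 1 - math.cos(2 * math.pi / kappa)

# 1 - cos(x) >= 2 x^2 / pi^2 on [0, pi] gives gap >= 8 / kappa^2, a K/kappa^2
# lower bound with K = 8
for kappa in (2, 4, 8, 16, 32, 64, 128):
    assert ring_gap(kappa) >= 8 / kappa ** 2
```

Since \(1-\cos (2\pi /\kappa )\sim 2\pi ^2/\kappa ^2\) as \(\kappa \rightarrow \infty \), the quadratic decay is sharp.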

Proof of Theorem 2.4

For local functions f, we first compare \(L_f\) with \(L^{(FR-NN)}_f\), corresponding to generators \({{\mathscr {L}}}\) and \({{\mathscr {L}}}^{(FR-NN)}\) with the same drift. Recall \({{\mathscr {L}}}^1\) defined before Lemma 4.3. Writing \({{\mathscr {L}}}^1 = {{\mathscr {S}}}^1 + {{\mathscr {A}}}^1\) in terms of symmetric and anti-symmetric parts, we may take \({{\mathscr {L}}}^{(FR-NN)} = {{\mathscr {S}}}^1 - {{\mathscr {A}}}^1\). In the following, the constant C may change line to line. Recall from Lemma 3.2 that

$$\begin{aligned} L_f(\lambda ) = 2\lambda ^{-2}\sup _\varphi \left\{ 2\langle f,\varphi \rangle _\rho - \langle \varphi , (\lambda - {{\mathscr {S}}})\varphi \rangle _\rho - \langle {{\mathscr {A}}}\varphi , (\lambda - {{\mathscr {S}}})^{-1}{{\mathscr {A}}}\varphi \rangle _\rho \right\} . \end{aligned}$$

The inner product \(\langle \varphi , (\lambda - {{\mathscr {S}}})\varphi \rangle _\rho \le C \langle \varphi , (\lambda - {{\mathscr {S}}}^1)\varphi \rangle _\rho \) by Lemma 4.1.

Decompose \(\tilde{{{\mathscr {L}}}}={{\mathscr {L}}} +{{\mathscr {L}}}^1 = {{\mathscr {A}}} + {{\mathscr {A}}}^1 + {{\mathscr {S}}} + {{\mathscr {S}}}^1\). Then, by the triangle inequality, with respect to the \(\Vert \cdot \Vert _{-1}\) norm,

$$\begin{aligned}&\left\langle {{\mathscr {A}}}\varphi , (\lambda - {{\mathscr {S}}})^{-1}{{\mathscr {A}}}\varphi \right\rangle _\rho \nonumber \\&\quad \le \ 3\left\langle {{\mathscr {A}}}^1\varphi , (\lambda - {{\mathscr {S}}})^{-1}{{\mathscr {A}}}^1\varphi \right\rangle _\rho + 3\left\langle \tilde{{\mathscr {L}}}\varphi , (\lambda - {{\mathscr {S}}})^{-1}\tilde{{\mathscr {L}}}\varphi \right\rangle _\rho \nonumber \\&\qquad + 3\left\langle [ {{\mathscr {S}}} + {{\mathscr {S}}}^1]\varphi , (\lambda - {{\mathscr {S}}})^{-1}[{{\mathscr {S}}} + {{\mathscr {S}}}^1]\varphi \right\rangle _\rho . \end{aligned}$$
(4.8)

The second inner product is bounded

$$\begin{aligned} \left\langle \tilde{{\mathscr {L}}}\varphi , (\lambda - {{\mathscr {S}}})^{-1}\tilde{{\mathscr {L}}}\varphi \right\rangle _\rho&\le \ C\Vert \varphi \Vert _{1, (FR-NN)} \Vert (\lambda - {{\mathscr {S}}})^{-1}\tilde{{\mathscr {L}}}\varphi \Vert _{1, (FR-NN)}\\&\le \ C\Vert \varphi \Vert _{1, (FR-NN)} \Vert (\lambda - {{\mathscr {S}}})^{-1}\tilde{{\mathscr {L}}}\varphi \Vert _{1}\\&\le \ C\Vert \varphi \Vert _{1, (FR-NN)} \Vert (\lambda - {{\mathscr {S}}})^{-1}\tilde{{\mathscr {L}}}\varphi \Vert _{1,\lambda }\\&= \ C\Vert \varphi \Vert _{1, (FR-NN)} \Vert \tilde{{\mathscr {L}}}\varphi \Vert _{-1,\lambda }. \end{aligned}$$

In the first line, approximating \((\lambda - {{\mathscr {S}}})^{-1}\tilde{{\mathscr {L}}}\varphi \) by local functions in \({\mathbb {L}}^2(\nu _\rho )\), Lemma 4.3 is used. In the second line, Lemma 4.1 is employed. The third line uses \(\Vert \cdot \Vert _1 \le \Vert \cdot \Vert _{1,\lambda }\). The fourth line follows by definition of the \(\Vert \cdot \Vert _{-1,\lambda }\) norm. Dividing through by \(\Vert \tilde{{\mathscr {L}}}\varphi \Vert _{-1,\lambda } = \langle \tilde{{\mathscr {L}}}\varphi , (\lambda - {{\mathscr {S}}})^{-1}\tilde{{\mathscr {L}}}\varphi \rangle _\rho ^{1/2}\), we obtain \(\Vert \tilde{{\mathscr {L}}}\varphi \Vert _{-1,\lambda } \le C \Vert \varphi \Vert _{1,(FR-NN)}\). Then, the second inner product is less than \(C\Vert \varphi \Vert ^2_{1,\lambda , (FR-NN)}=C\langle \varphi , (\lambda - {{\mathscr {S}}}^1)\varphi \rangle _\rho \).

The third inner product is bounded

$$\begin{aligned} \left\langle [ {{\mathscr {S}}} + {{\mathscr {S}}}^1]\varphi , (\lambda - {{\mathscr {S}}})^{-1}[{{\mathscr {S}}} + {{\mathscr {S}}}^1]\varphi \right\rangle _\rho \ \le \ 2\Vert {{\mathscr {S}}}\varphi \Vert ^2_{-1,\lambda } + 2 \Vert {{\mathscr {S}}}^1\varphi \Vert ^2_{-1,\lambda }. \end{aligned}$$

By Lemma 4.1 and \(\Vert \cdot \Vert _{-1,\lambda }\le \Vert \cdot \Vert _{-1}\), we have

$$\begin{aligned} \Vert {{\mathscr {S}}}^1\varphi \Vert _{-1,\lambda } \ \le \ \Vert {{\mathscr {S}}}^1\varphi \Vert _{-1} \ \le \ C\Vert {{\mathscr {S}}}^1\varphi \Vert _{-1, (FR-NN)}. \end{aligned}$$

Then, \(\Vert {{\mathscr {S}}}^1\varphi \Vert ^2_{-1, (FR-NN)} \le \Vert \varphi \Vert ^2_{1, (FR-NN)} \le \langle \varphi , (\lambda - {{\mathscr {S}}}^1)\varphi \rangle _\rho \) by the Schwarz inequality and the definition of the \({H}_{-1}\) norm. Similarly, \(\Vert {{\mathscr {S}}}\varphi \Vert ^2_{-1,\lambda } \le \langle \varphi , (\lambda - {{\mathscr {S}}})\varphi \rangle _\rho \) and, by Lemma 4.1, \(\langle \varphi , (\lambda - {{\mathscr {S}}})\varphi \rangle _\rho \le C\langle \varphi , (\lambda - {{\mathscr {S}}}^1)\varphi \rangle _\rho \). Hence, together, the third inner product is less than \(C \langle \varphi , (\lambda -{{\mathscr {S}}}^1)\varphi \rangle _\rho \).

Now, inserting into the variational formula, we obtain \(L_f(\lambda ) \ge 2\lambda ^{-2}\sup _\varphi \{2\langle f, \varphi \rangle _\rho - C\langle \varphi , (\lambda - {{\mathscr {S}}}^1)\varphi \rangle _\rho \} \ge C^{-1}L_f^{(FR-NN)}(\lambda )\). Analogously, we bound \(L_f^{(FR-NN)}(\lambda ) \ge C' L_f(\lambda )\), in terms of a constant \(C'\), starting from the variational formula for \(L_f^{(FR-NN)}(\lambda )\).

Now consider \({{\mathscr {L}}}^+ = c{{\mathscr {L}}}^{(FR-NN)}\), which has drift \(c\sum yp(y)\). Since the factor c represents a speed-up of time, a calculation with (3.2) shows that \(L^+_f(c\lambda ) = c^{-3}L^{(FR-NN)}_f(\lambda )\). Hence, \(L^+_f\approx L^{(FR-NN)}_f\) by considering again the variational formulas in Lemma 3.2.

Finally, let \({{\mathscr {L}}}^{(FR)}\) be a finite-range generator with drift \(c\sum yp(y)\). Then \({{\mathscr {L}}}^\# = {{\mathscr {S}}}^+ + {{\mathscr {A}}}^+ - {{\mathscr {A}}}^{(FR)}\) is a finite-range mean-zero generator. By similar arguments as above, with the finite-range equivalence \(\Vert \cdot \Vert _{\pm 1, (FR)}\) and \(\Vert \cdot \Vert _{\pm 1, (FR-NN)}\) (cf. Lemma 3.7 in [24]) in place of Lemma 4.1, and the ‘finite range’ sector inequality (4.3) in place of Lemma 4.3, we conclude \(L_f^+ \approx L_f^{(FR)}\). \(\square \)

Proof of Theorem 2.5

The argument is a long-range adaptation of Lemma 4.4 in [23]. Since \(L^{(MZA)}_f(\lambda ) = 2\lambda ^{-2}\langle f, (\lambda - {{\mathscr {L}}}^{(MZA)})^{-1}f\rangle _\rho \) and \(\langle f, (\lambda - {{\mathscr {L}}}^{(MZA)})^{-1}f\rangle _\rho \le \langle f, (\lambda - {{\mathscr {S}}})^{-1}f\rangle _\rho \) by (3.2) and Lemma 3.2, we have \(L^{(MZA)}_f \le L^{(S)}_f\).

To obtain a lower bound \(L^{(MZA)}_f \ge C L^{(S)}_f\), we follow the proof of Theorem 2.4. Consider the finite-range mean-zero operator \(\widehat{{\mathscr {L}}} = {{\mathscr {S}}}^1 + {{\mathscr {A}}}^{(MZA)}\) where \({{\mathscr {S}}}^1\) is a symmetric (FR-NN) generator. We may bound

$$\begin{aligned}&\left\langle {{\mathscr {A}}}^{(MZA)}\varphi , (\lambda - {{\mathscr {S}}})^{-1}{{\mathscr {A}}}^{(MZA)}\varphi \right\rangle _\rho \\&\quad \le \ 2\left\langle \widehat{{\mathscr {L}}}\varphi , (\lambda - {{\mathscr {S}}})^{-1}\widehat{{\mathscr {L}}}\varphi \right\rangle _\rho + 2 \left\langle {{\mathscr {S}}}^1\varphi , (\lambda - {{\mathscr {S}}})^{-1}{{\mathscr {S}}}^1\varphi \right\rangle _\rho . \end{aligned}$$

In the following, the constant C may change line to line.

We recall now, as noted in Remark 4.2 for \(\alpha >0\), that

$$\begin{aligned} \Vert \varphi \Vert _{1, (FR-NN)} \le C \Vert \varphi \Vert _1 \quad \hbox {and}\quad \Vert \varphi \Vert _{-1}\le C \Vert \varphi \Vert _{-1, (FR-NN)}. \end{aligned}$$
(4.9)

Hence, using (4.3) (instead of Lemma 4.3), and (4.9) (instead of Lemma 4.1), we may plug into the sequence of steps after (4.8), to find \(\langle \widehat{{\mathscr {L}}}\varphi , (\lambda - {{\mathscr {S}}})^{-1}\widehat{{\mathscr {L}}}\varphi \rangle _\rho \le C\Vert \varphi \Vert ^2_{1, (FR-NN)}\) and \(\langle {{\mathscr {S}}}^1\varphi , (\lambda - {{\mathscr {S}}})^{-1}{{\mathscr {S}}}^1\varphi \rangle _\rho \le C\Vert \varphi \Vert ^2_{1, (FR-NN)}\). These right-hand sides are further bounded by \(C\Vert \varphi \Vert ^2_1\) using (4.9) again.

Then, \(\langle {{\mathscr {A}}}^{(MZA)}\varphi , (\lambda - {{\mathscr {S}}})^{-1}{{\mathscr {A}}}^{(MZA)}\varphi \rangle _\rho \le C \langle \varphi , (\lambda - {{\mathscr {S}}})\varphi \rangle _\rho \). By Lemma 3.2,

$$\begin{aligned} L^{(MZA)}_f(\lambda )&= 2\lambda ^{-2}\sup _\varphi \left\{ 2\langle f,\varphi \rangle _\rho - \langle \varphi , (\lambda - {{\mathscr {S}}})\varphi \rangle _\rho \right. \\&\quad \left. - \left\langle {{\mathscr {A}}}^{(MZA)}\varphi , (\lambda - {{\mathscr {S}}})^{-1}{{\mathscr {A}}}^{(MZA)}\varphi \right\rangle _\rho \right\} \\&\ge \ 2\lambda ^{-2}\sup _\varphi \left\{ 2\langle f,\varphi \rangle _\rho - (1+C)\langle \varphi , (\lambda - {{\mathscr {S}}})\varphi \rangle _\rho \right\} = (1+C)^{-1}L^{(S)}_f(\lambda ). \end{aligned}$$

Hence, \(L^{(MZA)}_f\approx L^{(S)}_f\). \(\square \)
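The constant-pulling step used above (and in the proof of Theorem 2.4), substituting \(\varphi \rightarrow \varphi /(1+C)\) in the supremum, is the elementary scaling of a quadratic variational problem; in scalar form:

```python
# scalar model of the substitution phi -> phi/K in the variational formula:
# sup_x { 2 f x - K a x^2 } = K^{-1} sup_x { 2 f x - a x^2 }, for a, K > 0
f, a, K = 1.7, 0.9, 3.5
sup_scaled = f * f / (K * a)   # maximum of 2fx - K a x^2, attained at x = f/(K a)
sup_plain = f * f / a          # maximum of 2fx - a x^2, attained at x = f/a
assert abs(sup_scaled - sup_plain / K) < 1e-12
```

The same one-line computation justifies the factors \(C^{-1}\), \((1+C)^{-1}\) and \(C_1\) appearing in these proofs.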

Proof of Theorem 2.18

Write \({{\mathscr {L}}}^\gamma = {{\mathscr {S}}}^\gamma + {{\mathscr {A}}}\) where \({{\mathscr {S}}}^\gamma \) is symmetric and \({{\mathscr {A}}}\) is a finite-range mean-zero anti-symmetric operator. Recall \(\alpha <\beta \). The crux of the argument is that, since \(s^\alpha (y)\ge (c_\alpha /c_\beta )s^\beta (y)\) for \(y\in {\mathbb {Z}}^d\), noting (3.1), the Dirichlet forms satisfy \(\langle \varphi , -{{\mathscr {S}}}^\alpha \varphi \rangle _\rho \ge (c_\alpha /c_\beta )\langle \varphi , -{{\mathscr {S}}}^\beta \varphi \rangle _\rho \). Then, since \(\min (1, c_\alpha /c_\beta )\ge (1+c_\beta /c_\alpha )^{-1}\),

$$\begin{aligned} \left\langle \varphi , (\lambda - {{\mathscr {S}}}^\alpha )\varphi \right\rangle _\rho \ \ge \ (1+c_\beta /c_\alpha )^{-1}\left\langle \varphi , (\lambda - {{\mathscr {S}}}^\beta )\varphi \right\rangle _\rho . \end{aligned}$$
(4.10)

Let \(L^{(S),\gamma }_f(\lambda ) = 2\lambda ^{-2}\langle f, (\lambda - {{\mathscr {S}}}^\gamma )^{-1}f\rangle _\rho \) denote the Tauberian variance with respect to the process generated by \({{\mathscr {S}}}^\gamma \). By the formula \(\langle f, (\lambda - {{\mathscr {S}}}^\gamma )^{-1}f\rangle _\rho = \sup _\varphi \{ 2\langle f, \varphi \rangle _\rho - \langle \varphi , (\lambda - {{\mathscr {S}}}^\gamma )\varphi \rangle _\rho \}\) in Lemma 3.2, and (4.10), we have \(L^{(S),\alpha }_f \le C_1L^{(S),\beta }_f\) where \(C_1 = 1+c_\beta /c_\alpha \).

Now, by Theorem 2.5, \(L^\gamma _f \approx L^{(S),\gamma }_f\). Hence, \(L^\alpha _f \approx L^{(S),\alpha }_f \le C_1L^{(S),\beta }_f \approx L^\beta _f\), to finish. \(\square \)
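The pointwise kernel bound driving (4.10) is elementary: with \(s^\gamma (y)=c_\gamma |y|^{-d-\gamma }\) and \(\alpha <\beta \), one has \(s^\alpha (y)/s^\beta (y) = (c_\alpha /c_\beta )|y|^{\beta -\alpha }\ge c_\alpha /c_\beta \) for \(|y|\ge 1\). A direct check in \(d=1\), with placeholder normalizing constants:

```python
c_alpha, c_beta = 0.7, 1.3      # placeholder normalizing constants (illustration)
alpha, beta, d = 1.5, 2.5, 1    # alpha < beta

for y in range(1, 1000):
    s_a = c_alpha * y ** (-d - alpha)   # heavier-tailed kernel
    s_b = c_beta * y ** (-d - beta)
    # s^alpha dominates (c_alpha/c_beta) s^beta pointwise on |y| >= 1
    assert s_a >= (c_alpha / c_beta) * s_b
```

Summing this pointwise bound against squared gradients gives the Dirichlet form comparison stated above.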

5 Proof of results: symmetric jumps

The proofs of Theorems 2.6, 2.8 and 2.9 are based on the self-duality property of the exclusion process, and follow from several computations. On the other hand, the proof of Theorem 2.11 follows the martingale approximation scheme in [11, 19] and [23] for the finite-range case. Nevertheless, several estimates are different because of the presence of the heavy tails of the symmetric (LA) jump probability \(p(\cdot )=s(\cdot )\). In this section, we abbreviate \(\theta _d = \theta _d(\cdot ; s(\cdot ))\) (cf. Sect. 3.4).

5.1 Proof of Theorem 2.6

By the basis decomposition in Sect. 3.2, a local, mean-zero function can be written as

$$\begin{aligned} f = \sum _{n\ge 1}\sum _{|A|=n} {{\mathfrak {f}}}(A)\Psi _A \end{aligned}$$

where \(A\subset {{\mathscr {E}}}\) and all sums are finite. Let \(n\ge 1\) be such that \(\alpha \wedge 2 < nd\) and suppose \({{\mathrm{deg}}}(f)=n\). By Remark 2.1: (1) if \(n=1\), then \(\sum _{|A|=1}{{\mathfrak {f}}}(A) \ne 0\); (2) if \(n=2\), then \(\sum _{|A|=1}{{\mathfrak {f}}}(A) = 0\) and \(\sum _{|A|=2}{{\mathfrak {f}}}(A)\ne 0\); (3) if \(n\ge 3\), then \(\sum _{|A|=1}{{\mathfrak {f}}}(A) = \sum _{|A|=2}{{\mathfrak {f}}}(A) =0\). Our goal will be to show that f is admissible, thereby completing the proof.

Note that \(\sum _{|A|=k}{{\mathfrak {f}}}(A)\mathbf{1}_A\) is the dual form of \(\sum _{|A|=k}{{\mathfrak {f}}}(A)\Psi _A\) for \(k\ge 1\). To show f is admissible, it is enough to show in case (1) that \(\mathbf{1}_A\) is admissible for \(|A|\ge 1\); in case (2), it is enough to prove \(\sum _{|A|=1}{{\mathfrak {f}}}(A) \mathbf{1}_A\) and \(\mathbf{1}_A\) for \(|A|\ge 2\) are admissible; in case (3), we need to show \(\sum _{|A|=1}{{\mathfrak {f}}}(A)\mathbf{1}_A\), \(\sum _{|A|=2}{{\mathfrak {f}}}(A)\mathbf{1}_A\) and \(\mathbf{1}_A\) for \(|A|\ge 3\) are admissible.

To show \(\mathbf{1}_A\) for \(|A|\ge n\) is admissible in the various cases, by Lemma 3.1 we need only bound \(\Vert \mathbf{1}_A\Vert _{-1,\lambda }\) uniformly as \(\lambda \downarrow 0\). By Lemma 3.4, it is sufficient to prove

$$\begin{aligned} {\limsup }_{\lambda \rightarrow 0} \Vert {{\mathfrak {W}}}_n {\widetilde{\mathbf{1}}_{A}} \Vert _{-1, \lambda , \mathrm{free}} \ < \ \infty . \end{aligned}$$
(5.1)

Since the function \(g={{\mathfrak {W}}}_n {\widetilde{\mathbf{1}}_A}=1\) when \(\{x_1, \ldots ,x_n\} =A\) and vanishes otherwise, its Fourier transform is bounded. Thus, expressing the \({{\mathbb {H}}}_{-1, \lambda , \mathrm{free}}\)-norm in Fourier space (cf. (3.8)), the display (5.1) follows if we show that

$$\begin{aligned} {\limsup }_{\lambda \rightarrow 0} \int _{({{\mathbb {T}}^d})^n} \frac{d {k}_1 \ldots d{k}_n}{\lambda + \theta _d (k_1) +\cdots + \theta _d (k_n) } \ < \ \infty . \end{aligned}$$

The integrand can only diverge for \((k_1, \ldots ,k_n)\) close to a point in \({{\mathscr {C}}}_d \times \cdots \times {{\mathscr {C}}}_d\). It is straightforward to check that all divergences are the same as for \((k_1, \ldots , k_n)\) close to \((0,\ldots ,0)\). Standard analysis, using Lemma 3.5, which estimates \(\theta _d(k)\), shows the bound (5.1) when \(\alpha \wedge 2<nd\).
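Numerically, the dichotomy is visible in a model radial integral: near the origin \(\theta _d(k)\asymp |k|^{\alpha \wedge 2}\) by Lemma 3.5, so in radial coordinates the integral behaves like \(\int _0^1 r^{nd-1}(\lambda + r^{\alpha \wedge 2})^{-1}\,dr\), which stays bounded as \(\lambda \downarrow 0\) precisely when \(\alpha \wedge 2 < nd\). A midpoint-rule sketch with sample exponents (a caricature of the true symbol, for illustration only):

```python
def radial_integral(a, nd, lam, steps=200_000):
    # midpoint rule for the model integral \int_0^1 r^{nd-1} / (lam + r^a) dr
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * h
        total += h * r ** (nd - 1) / (lam + r ** a)
    return total

# convergent case: a = alpha ∧ 2 = 1.5 < nd = 2, values saturate as lam -> 0
v4 = radial_integral(1.5, 2, 1e-4)
v6 = radial_integral(1.5, 2, 1e-6)
assert v6 - v4 < 0.1 * v4

# divergent case: a = 2 >= nd = 1, the integral blows up as lam -> 0
g4 = radial_integral(2, 1, 1e-4)
g6 = radial_integral(2, 1, 1e-6)
assert g6 > 5 * g4
```

The divergent case corresponds to a single-site function in \(d=1\), matching the non-admissible degree-1 situation below.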

But, when \(\sum _{|A|=\ell }{{\mathfrak {f}}}(A) =0\), the square of the Fourier transform of  \({{\mathfrak {W}}}_\ell {\widetilde{\sum _{|A|=\ell }{{\mathfrak {f}}}(A)\mathbf{1}_A}}\) behaves quadratically near points in \(({{\mathscr {C}}}_d)^\ell \), for instance of order \(|k_1|^2 + \cdots + |k_\ell |^2\) near the origin. Since at these points, by Lemma 3.5, \(\sum _{i=1}^\ell \theta _d(k_i)\) is of larger or equal order when \(\alpha \wedge 2<nd\), the norm \(\Vert {{\mathfrak {W}}}_\ell {\widetilde{\sum _{|A|=\ell }{{\mathfrak {f}}}(A)\mathbf{1}_A}}\Vert _{-1,\lambda , \mathrm{free}}\) converges as \(\lambda \downarrow 0\).

Combining these estimates, we conclude f is admissible in all cases. \(\square \)

5.2 Proof of Theorem 2.9

Let \(f(\eta )= (\eta (0) -\rho ) (\eta (1) -\rho )= \chi (\rho ) \Psi _{\{0,1\}}\), whose dual function is \({{\mathfrak {f}}} = \chi (\rho ) \mathbf{1}_{\{0,1\}}\). By our assumption \({{\mathscr {L}}}={{\mathscr {S}}}\), Remark 2.7, the admissibility of functions of degree strictly larger than 2 (Theorem 2.6), and (3.2), we need only show

$$\begin{aligned} \left\langle f, (\lambda - {{\mathscr {L}}})^{-1} f\right\rangle _{\rho } = \left\langle f, (\lambda - {{\mathscr {S}}})^{-1}f\right\rangle _\rho = \Vert f \Vert _{-1,\lambda }^2 \ \approx \ \left\{ \begin{array}{ll} |\log \lambda | &{} \ \ \mathrm{if}\,\alpha >2\\ \log |\log (\lambda )| &{} \ \ \mathrm{if}\,\alpha =2.\end{array}\right. \end{aligned}$$

Further, by Lemma 3.4, we need only to show this estimate with \(\Vert f \Vert _{-1, \lambda }\) replaced by \(\Vert {{\mathfrak {W}}}_2 {\tilde{{\mathfrak {f}}}} \Vert _{-1, \lambda , \mathrm{{free}}}\). Observe, by (3.7), that \(({{\mathfrak {W}}}_2 {\tilde{{\mathfrak {f}}}}) (x,y) = \chi (\rho ) \left[ \mathbf{1}_{x=0, y=1} +\mathbf{1}_{x=1, y=0}\right] \) and its Fourier transform is \(\chi (\rho )\left[ e^{2\pi is_1} + e^{2\pi i s_2}\right] \). Then, by (3.8), it is enough to show

$$\begin{aligned} \int _{{\mathbb {T}}^2} \frac{1}{\lambda +\theta _1 (s_1) +\theta _1 (s_2) } \, {ds_1\, ds_2} \ \approx \ \left\{ \begin{array}{ll} |\log \lambda | &{} \ \ \mathrm{if}\,\alpha >2\\ \log |\log (\lambda )| &{} \ \ \mathrm{if}\,\alpha =2.\end{array}\right. \end{aligned}$$

as \(\lambda \downarrow 0\). This is accomplished using Lemma 3.5 and standard analysis. \(\square \)
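As a numerical illustration of the \(\alpha >2\) case, model \(\theta _1(s)\) by \(s^2\) near the origin (an approximation, not the exact symbol): the integral then gains a fixed increment per decade of \(\lambda \), the signature of \(|\log \lambda |\) growth:

```python
def model_integral(lam, n=1000):
    # midpoint rule for \int_{[0,1]^2} ds1 ds2 / (lam + s1^2 + s2^2),
    # modelling the alpha > 2 case where theta_1(s) ~ s^2 near 0
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x2 = ((i + 0.5) * h) ** 2
        for j in range(n):
            y = (j + 0.5) * h
            total += h * h / (lam + x2 + y * y)
    return total

i2, i3, i4 = (model_integral(l) for l in (1e-2, 1e-3, 1e-4))
d1, d2 = i3 - i2, i4 - i3
# successive decades of lam add nearly equal increments: log-divergence
assert abs(d2 - d1) < 0.1 * d1
```

In polar coordinates the near-origin contribution is \((\pi /4)\log ((\lambda +1)/\lambda )\), so each decade adds roughly \((\pi /4)\log 10\approx 1.81\).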

5.3 Proof of Theorem 2.8

By Remark 2.7, the lower order of variance for degree 2 functions in Theorem 2.9, and admissibility of functions of at least degree 3 in Theorem 2.6, we need only consider \(f(\eta ) =\eta (0) -\rho \). Recall from (3.2) that the Laplace transform \(L_{f} (\cdot ) \) of \(\sigma ^2_f(t)\) is given by \(L_{f} (\lambda ) = 2 \lambda ^{-2} \langle f, (\lambda -{{\mathscr {S}}})^{-1} f \rangle _{\rho }\), since \({{\mathscr {L}}}={{\mathscr {S}}}\).

Write \(f= {\sqrt{\chi (\rho )}} \Psi _{\{0\}}\in M_1\) and consider its dual function \({{\mathfrak {f}}}=\sqrt{\chi (\rho )}\mathbf{1}_{\{0\}}\in {{\mathscr {H}}}_1\). Identifying cardinality 1 subsets of \({\mathbb {Z}}^d\) with points in \({\mathbb {Z}}^d\), we see that the generator \({{\mathfrak {S}}}\) restricted to \({{\mathscr {H}}}_1\) is nothing but the generator of a random walk on \({{\mathbb {Z}}}^d\) with kernel s. Then,

$$\begin{aligned} L_{f} (\lambda )&= 2 \chi (\rho ) \lambda ^{-2} (\lambda -{ {\mathfrak {S}}})^{-1} (\{0\}, \{ 0 \})\\&= 2 \chi (\rho ) \lambda ^{-2} \int _{{\mathbb {T}}^d} \frac{du}{\lambda + \theta _d (u)}\\&= 2\chi (\rho )\lambda ^{-2}\int _{0}^{\infty } e^{-\lambda t} \left[ \int _{{\mathbb {T}}^d} e^{-\theta _d (u) t} du \right] \, dt, \end{aligned}$$

using Fubini’s Theorem for the last line.

After two integrations by parts, we recover the variance

$$\begin{aligned} \sigma _t^2 (f) = 2 \chi (\rho ) \int _{{\mathbb {T}}^d} \, \frac{ \theta _d (u) \, t -1 + e^{-\theta _d (u)t}}{\theta ^2_d (u)} du. \end{aligned}$$
(5.2)

Now, by Lemma 7.1, which analyzes (5.2), we obtain Theorem 2.8. \(\square \)
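The two integrations by parts amount to the scalar identity \(\int _0^\infty e^{-\lambda t}(\theta t - 1 + e^{-\theta t})\,dt = \theta ^2/(\lambda ^2(\lambda +\theta ))\), applied under the integral in u; a quick numerical confirmation (midpoint rule, with the tail beyond T negligible for these parameters):

```python
import math

def laplace_lhs(theta, lam, T=150.0, n=300_000):
    # midpoint rule for \int_0^T e^{-lam t} (theta t - 1 + e^{-theta t}) dt;
    # the integrand is nonnegative since e^{-x} >= 1 - x
    h = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += h * math.exp(-lam * t) * (theta * t - 1 + math.exp(-theta * t))
    return total

for theta, lam in ((0.5, 1.0), (2.0, 0.3)):
    rhs = theta ** 2 / (lam ** 2 * (lam + theta))
    assert abs(laplace_lhs(theta, lam) - rhs) < 1e-3 * rhs
```

Integrating the identity against \(2\chi (\rho )\theta _d(u)^{-2}\,du\) recovers (5.2) from the Laplace transform \(L_f\).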

5.4 Proof of Theorem 2.11

The functional CLT follows from a combination of arguments. In particular, since the symmetric exclusion process starting from \(\nu _\rho \) is reversible, part (i) follows from the Kipnis-Varadhan theorem [13]. Also, the proof of part (iii) is the same as in Section 3.2 in Kipnis [11] given the scalings in Theorem 2.9.

However, part (ii) is more involved as the long-range character of the process needs to be addressed.

5.4.1 Proof of Theorem 2.11, (ii)

Let f be a local function of degree 1. Again, by Remark 2.7 and the lower order variance growth of degree 2 or more functions in Theorem 2.8, it is enough to prove the result for the function \(f(\eta )=\eta (0) -\rho \). In the following, we denote \(\bar{\eta }(x):= \eta (x)-\rho \).

Recall the notation from the introduction: \(a_N=\sigma _{N} (f)\). In order to show that \(A^{(N)}_t :=a^{-1}_N \Gamma _{f} (tN)\) converges in the uniform topology as \(N\uparrow \infty \), it is sufficient to show tightness in the sup-norm and convergence of the finite-dimensional distributions. Tightness is established with the same argument as for Theorem 1.2 in [23] with respect to the finite-range limit (1.1). Also, by the Markov property and the scalings in Theorem 2.14, convergence of finite-dimensional distributions to \({\mathbb {B}}(t)\) when \(d=1, \alpha =1\) or \(d=2, \alpha \ge 2\), to \({\mathbb {B}}_{1-1/2\alpha }(t)\) when \(d=1, 1< \alpha <2\), and to \({\mathbb {B}}_{3/4} (t)\) when \(d=1, \alpha \ge 2\) follows from the convergence of the marginal sequence \(A^{(N)}_t\) to a Gaussian limit. We now sketch how to obtain this marginal convergence.

Let \(T>0\) be fixed. Suppose there is a function \(v^T_s\) such that for \(s \in [0,T]\),

$$\begin{aligned} (\partial _s + {\mathscr {L}}) v^T_s(\eta ) = -\bar{\eta }(0) \end{aligned}$$

and \(v_T^T =0\). Then, by Dynkin’s formula

$$\begin{aligned} {\mathscr {M}}^T_t = v^T_t(\eta _t) - v^T_0(\eta _0) - \int _0^t (\partial _s + {\mathscr {L}}) v^T_s(\eta _s)ds \end{aligned}$$

is a centered martingale and

$$\begin{aligned} \int _0^T \bar{\eta }_s(0) ds = v^T_0(\eta _0) + {\mathscr {M}}^T_T. \end{aligned}$$
(5.3)

Moreover, by the martingale property, \(v^T_0(\eta _0)\) and \({\mathscr {M}}^T_T\) are uncorrelated since \({{\mathscr {M}}}_0^T=0\). Then, \(a^2_T={\mathbb {E}}_\rho [\Gamma ^2_{f}(T)]\) is the sum of the variances of these terms. Define the limiting variances, assuming they converge,

$$\begin{aligned} \sigma ^2_{1,T}\ :=\ \lim _{N\rightarrow \infty } {\mathbb {E}}_\rho \left( \frac{1}{a_{N}} \, v^{TN}_0(\eta _0)\right) ^2\quad \hbox {and}\quad \sigma ^2_{2,T} \ :=\ \lim _{N\rightarrow \infty } {\mathbb {E}}_\rho \left( \frac{1}{a_{N}} {\mathscr {M}}_{TN}^{TN}\right) ^2. \end{aligned}$$

Write

$$\begin{aligned}&\left| {\mathbb {E}}_\rho \left[ e^{itA^{(N)}_T} - e^{-\frac{t^2}{2}(\sigma ^2_{1,T} + \sigma ^2_{2,T})}\right] \right| \\&\quad \le \ {\mathbb {E}}_\rho \left| {\mathbb {E}}_{\eta _0}\left[ e^{\frac{it}{a_{N}} \, {\mathscr {M}}_{TN}^{TN}} - e^{-\frac{t^2}{2}\sigma ^2_{2,T}}\right] \right| + \left| {\mathbb {E}}_\rho \left[ e^{\frac{it}{a_{N}} \, v^{TN}_0(\eta _0)} - e^{-\frac{t^2}{2}\sigma ^2_{1,T}}\right] \right| . \end{aligned}$$

Later, in Lemmas 5.1 and 5.2, we show \(\sigma ^2_{1,T}\) and \(\sigma ^2_{2,T}\) indeed converge, and that the first and second terms above vanish, finishing the marginal convergence argument.

To make rigorous this sketch, we first establish the martingale decomposition (5.3). Let \(p_t (y)\) be the continuous-time transition probability of the random walk on \({\mathbb {Z}} ^d\), starting at the origin, with translation-invariant symmetric rates \(p(x,x+y):= p(y)=s(y)\). Define

$$\begin{aligned} u_t(x) = \int _0^t p_s(x) ds, \end{aligned}$$

the Green’s function, which satisfies

$$\begin{aligned} \partial _t u_t = \Delta u_t + \delta _0 \end{aligned}$$

where \(\Delta \) is the generator of the random walk, \(\Delta f(x) = \sum _{y\in \mathbb {Z}^d} p(y) (f(x+y)-f(x))\).
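The relation \(\partial _t u_t = \Delta u_t + \delta _0\) (here \(\partial _t u_t = p_t\)) can be checked numerically from the Fourier representations of \(p_t\) and \(u_t\); the nearest-neighbour choice \(p(\pm 1)=1/2\), giving \(\theta (k)=2\sin ^2(\pi k)\), is an assumption made only to have an explicit symbol:

```python
import numpy as np

K, t = 4096, 5.0
k = np.arange(K) / K                          # Fourier grid on the torus [0, 1)
theta = 2.0 * np.sin(np.pi * k) ** 2
p_hat = np.exp(-theta * t)                    # transition probability transform
u_hat = np.where(theta > 0,                   # (1 - e^{-theta t})/theta, = t at theta = 0
                 -np.expm1(-theta * t) / np.where(theta > 0, theta, 1.0), t)

def inv(g_hat, x):
    # rectangle rule for int_0^1 e^{-2 pi i k x} g_hat(k) dk (exact up to aliasing)
    return float(np.real(np.mean(g_hat * np.exp(-2j * np.pi * k * x))))

u = {x: inv(u_hat, x) for x in range(-31, 32)}
# check p_t(x) = Delta u_t(x) + delta_0(x) on a window of sites
err = max(abs(inv(p_hat, x) - ((u[x + 1] + u[x - 1]) / 2 - u[x] + (x == 0)))
          for x in range(-30, 31))
```

The discrepancy `err` is at the level of quadrature round-off.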

We now verify that we may take \(v^T_t(\eta ) = U_t^T(\eta )\), where

$$\begin{aligned} U_t^T(\eta ) \, = \ \sum _{x\in \mathbb {Z}^d} u_{T-t}(x) \bar{\eta } (x). \end{aligned}$$

Indeed, write

$$\begin{aligned} \partial _s U^T_s= & {} - \sum _{x\ne 0} \Delta u_{T-s}(x)\bar{\eta } (x)- (\Delta u_{T-s}(0) + 1)\bar{\eta } (0)\\= & {} -\sum _{x\in \mathbb {Z}^d} \Delta u_{T-s}(x) \bar{\eta } (x)- \bar{\eta } (0) \ \; = \ \; - {{\mathscr {L}}} U_s^T - \bar{\eta } (0) \end{aligned}$$

noting \(U^T_t(\eta ^{x,x+y}) - U^T_t(\eta ) = (u_{T-t}(x+y) - u_{T-t}(x))(\eta (x) - \eta (x+y))\), \(p(\cdot )=s(\cdot )\) and

$$\begin{aligned} {\mathscr {L}} U^T_t(\eta )= & {} \sum _{x,y\in \mathbb {Z}^d}p(y) \eta (x)(1-\eta (x+y))\left( u_{T-t}(x+y)-u_{T-t}(x)\right) \nonumber \\= & {} \sum _{x,y\in {\mathbb {Z}}^d}p(y) \left( u_{T-t}(x+y) - u_{T-t}(x)\right) \eta (x). \end{aligned}$$
(5.4)
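The cancellation behind the second line of (5.4) — for symmetric \(p\), the term carrying \(\eta (x)\eta (x+y)\) vanishes under the substitution \(x \mapsto x+y\), \(y \mapsto -y\) — can be checked exactly on a finite ring; the nearest-neighbour jump law and ring size below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 50
eta = rng.integers(0, 2, size=L)              # an arbitrary 0/1 configuration
u = rng.normal(size=L)                        # an arbitrary real test function
jumps = [(1, 0.5), (-1, 0.5)]                 # symmetric p(+-1) = 1/2

# generator acting on the degree-one function, with exclusion constraint
lhs = sum(py * eta[x] * (1 - eta[(x + y) % L]) * (u[(x + y) % L] - u[x])
          for x in range(L) for y, py in jumps)
# the same sum with the exclusion factor dropped
rhs = sum(py * (u[(x + y) % L] - u[x]) * eta[x]
          for x in range(L) for y, py in jumps)
```

The two sums agree to machine precision, for every configuration.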

Observe that \(U_T^T(\eta ) \equiv 0\), since \(u_0(x)=0\) for all \(x\in \mathbb {Z}^d\). Hence, (5.3) follows and

$$\begin{aligned} \int _0^T \bar{\eta }_s(0) ds= & {} U_0^T(\eta _0) + {\mathscr {M}} _T^T. \end{aligned}$$

Lemma 5.1

We have

$$\begin{aligned} \frac{1}{a_N} \;U_0^{NT}(\eta _0)= \frac{1}{a_N} \sum _{x \in {\mathbb {Z}}^d} u_{NT}(x) \bar{\eta }_0(x) \end{aligned}$$
(5.5)

converges weakly as \(N\uparrow \infty \) to a centered Normal variable with limiting variance \(\sigma ^2_{1,T}\). When \(0<\alpha \le 1\) in \(d=1\) or \(\alpha \ge 2\) in \(d=2\), \(\sigma _{1,T}^2= 0\). But, for \(\alpha >1\) in \(d=1\), \(0<\sigma ^2_{1,T}<\infty \).

Proof

The Fourier transform of \(u_{t} (\cdot )\) is given by

$$\begin{aligned} {\hat{u}} _t (k) = \int _0^t e^{-(1-{\hat{p}} (k)) s } ds \end{aligned}$$

for \(k \in {\mathbb {T}}^d\) where \({\hat{p}}(k) = \sum _{y \in {\mathbb {Z}}^d} p(y) e^{2 \pi ik \cdot y} \) is the Fourier transform of \(p(\cdot ) = s(\cdot )\). By symmetry of \(s(\cdot )\), the fact that \(1- \cos (2\pi k \cdot y) =2 \sin ^2 (\pi k \cdot y)\), and definition of \(\theta _d\) in (3.9), we have

$$\begin{aligned} 1- {\hat{p}}(k) = 2 \sum _{y \in {\mathbb {Z}}^d } s(y) \sin ^2 (\pi k \cdot y) = \theta _d (k). \end{aligned}$$

Thus, we obtain

$$\begin{aligned} \hat{u}_t(k)= \frac{1- e^{-\theta _d (k) t} }{\theta _d (k)} \end{aligned}$$
(5.6)

and as a consequence

$$\begin{aligned} u_t (x) = \int _{{\mathbb {T}}^d} e^{-2i \pi k \cdot x} \left[ \frac{1- e^{-\theta _d (k) t} }{\theta _d (k)} \right] dk. \end{aligned}$$
(5.7)

By Parseval’s relation, \({\mathbb {E}}_\rho [(\eta (x) - \rho )^2] = \rho (1-\rho ) = \chi (\rho )\), and the equation for \(a_N^2=\sigma ^2_N(f)\) in (5.2), the variance of \(a^{-1}_N U_0^{NT}(\eta _0)\) under \(\nu _\rho \) is equal to

$$\begin{aligned} \frac{\chi (\rho )}{a^2_N} \sum _{x \in {\mathbb {Z}}^d} |{u}_{TN}|^2 (x)= & {} \chi (\rho ) \int _{{\mathbb {T}}^d} {\left[ \frac{1- e^{-\theta _d (k) TN}}{\theta _d (k) }\right] ^2 dk} \nonumber \\&\cdot \left[ 2\chi (\rho )\int _{{\mathbb {T}}^d}\frac{\theta _d(u)N - 1 + e^{-\theta _d(u)N}}{\theta ^2_d(u)}du\right] ^{-1}. \end{aligned}$$
(5.8)
  1. (i)

    If \(d=1\) and \(\alpha =1\), by the scaling relation \(a^2_N\sim N\log (N)\), \(\theta _1(k)\sim |k|\) (cf. Lemma 3.5), and simple computation, the variance (5.8) vanishes as \(N\uparrow \infty \). Therefore, (5.5) converges in distribution to the Dirac mass centered at 0.

  2. (ii)

    If \(d=1\) and \(1<\alpha <2\), recall \(a_N \sim N^{1-1/2\alpha }\). By (5.6), we have

    $$\begin{aligned} \sum _{x \in {\mathbb {Z}}} |u_t (x)|^2= & {} \int _{0}^1 \left[ \frac{1- e^{-\theta _1 (k) t} }{\theta _1 (k)} \right] ^2 dk\\= & {} 2 \int _{0}^{1/2} \left[ \frac{1{-} e^{-\theta _1 (k) t} }{\theta _1 (k)} \right] ^2 dk = 2t^{2-1/\alpha } \int _{0}^{t^{1/\alpha } /2} \left[ \frac{1- e^{-t \theta _1 (\ell t^{-1/\alpha }) } }{t\theta _1 (\ell t^{-1/\alpha })} \right] ^2 d\ell . \end{aligned}$$

    By Lemma 3.5 and dominated convergence, we have, as \(t\uparrow \infty \),

    $$\begin{aligned} \sum _{x \in {\mathbb {Z}}} |u_t (x)|^2 \ \sim \ 2t^{2- 1/\alpha } \int _0^{\infty } \left[ \frac{1-e^{-a_1 (\alpha ) \ell ^{\alpha }}}{a_1 (\alpha ) \ell ^{\alpha } }\right] ^2 d\ell , \end{aligned}$$
    (5.9)

    where the constant \(a_1 (\alpha )\) is such that \(\theta _1 (k) \sim a_1 (\alpha ) |k|^{\alpha }\) as \(k\downarrow 0\). We also note that a similar argument shows, for \(x \in {\mathbb {Z}}\) and \(t>0\), that

    $$\begin{aligned} |u_{t} (x) | \ \le \ \int _{0}^{1} \left| \frac{1- e^{-\theta _1 (k) t} }{\theta _1 (k)} \right| dk = O \left( t^{1-1/\alpha }\right) . \end{aligned}$$
    (5.10)

By (5.9) and the asymptotics of \(a_N\), one concludes that the limit \(\sigma ^2_{1, T}\) of (5.8) as \(N\uparrow \infty \) exists and lies in \((0,\infty )\).

    Now, for \(\beta \in {\mathbb {R}}\), we have

    $$\begin{aligned}&\log \left[ \int d\nu _{\rho } (\eta ) \, \exp \left( \frac{i \beta }{a_N } \sum _{x \in {\mathbb {Z}}} u_{NT} (x) {\bar{\eta }} (x) \right) \right] \\&\quad = \log \left[ \prod _{x \in {\mathbb {Z}}} \int d\nu _{\rho } (\eta ) \, \exp \left( \frac{i \beta }{a_N } u_{NT} (x) {\bar{\eta }} (x) \right) \right] \\&\quad = \log \left[ \prod _{x \in {\mathbb {Z}}} \left[ 1 -\frac{\beta ^2}{2 a_{N}^2 } u_{NT} (x)^2 + O(a_N^{-3} |u_{NT} (x)|^3 )\right] \right] . \end{aligned}$$

    Since \(\sum _{x} |u_{NT} (x)|^3 \le (\sum _{x} |u_{NT} (x)|^2 ) \sup _{x} |u_{NT} (x)| = O(a_N^2 N^{1-1/\alpha })\) and \(e^{-z} = 1-z + O(z^2)\) as \(|z|\downarrow 0\), by (5.9) and (5.10), we get

    $$\begin{aligned} \lim _{N\rightarrow \infty } \int d\nu _{\rho } (\eta ) \, \exp \left( \frac{i \beta }{a_N } \sum _{x \in {\mathbb {Z}}} u_{NT} (x) {\bar{\eta }} (x) \right) = \exp \left( - \sigma _{1,T}^2 \beta ^2 /2\right) . \end{aligned}$$
  3. (iii)

    If \(d=1\) and \(\alpha > 2\), the argument is similar to the case when \(1<\alpha <2\). If \(\alpha =2\), using the substitution \(k = \beta _t u\) with \(t\beta ^2_t|\log \beta _t| = 1\) and \(\beta _t =O((t\log (t))^{-1/2})\), the proof is also analogous.

  4. (iv)

    If \(d=2\) and \(\alpha \ge 2\), as when \(d=1\) and \(\alpha =1\), noting the scaling relation for \(a^2_N\) in Theorem 2.8 and that \(\theta _d(k)\sim |k|^2\) for \(\alpha >2\) and \(\theta _d(k)\sim |k|^2|\log |k||\) for \(\alpha =2\) by Lemma 3.5, the limit of the variance in (5.8) vanishes and (5.5) converges to the Dirac mass at 0.

\(\square \)
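The mechanism in case (ii) — an exact factorization of the characteristic function over independent Bernoulli sites, with the third-order terms controlled by \(\sum _x |u_{NT}(x)|^3\) — can be illustrated as follows; the weights below are an artificial stand-in for \(u_{NT}(\cdot )\), not the actual Green's function:

```python
import numpy as np

rho, b = 0.3, 1.7
x = np.arange(1, 200_001)
w = (x + 1000.0) ** -0.7          # slowly decaying stand-in weights; max w / a is small
a = np.sqrt(rho * (1 - rho) * np.sum(w ** 2))   # normalize so the limit variance is 1
t = b * w / a
# exact one-site characteristic function of eta(x) - rho under Bernoulli(rho)
phi = rho * np.exp(1j * t * (1 - rho)) + (1 - rho) * np.exp(-1j * t * rho)
err = abs(np.prod(phi) - np.exp(-b ** 2 / 2))   # distance to the Gaussian limit
```

Since \(\sum _x |t_x|^3\) is small here, the product of one-site factors is already close to \(e^{-b^2/2}\).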

Lemma 5.2

For \(T>0\) and \(\beta \in {\mathbb {R}}\), the limiting variance satisfies \(0<\sigma ^2_{2,T}<\infty \) and

$$\begin{aligned} \lim _{N\rightarrow \infty }{\mathbb {E}}_\rho \left| {\mathbb {E}}_{\eta _0}\left[ e^{i\beta \frac{1}{a_N} {{\mathscr {M}}}_{TN}^{TN}} - e^{-\frac{\beta ^2}{2}\sigma ^2_{2,T}}\right] \right| = 0. \end{aligned}$$
(5.11)

Proof

Although \(U^T_s\) is not a local function, by standard approximations, the quadratic variation of the martingale \({{\mathscr {M}}}^T_t\) is \(\int _0^t \left\{ {{\mathscr {L}}}(U^T_s)^2 - 2U^T_s{{\mathscr {L}}}U^T_s\right\} ds\). Recalling \(p(\cdot ) = s(\cdot )\), the integrand may be computed as

$$\begin{aligned} {{\mathscr {L}}}(U^T_s)^2 - 2U^T_s{{\mathscr {L}}}U^T_s= \sum _{x,y\in {\mathbb {Z}}^d}p(y-x)\left( u_{T-s}(y) - u_{T-s}(x)\right) ^2\eta _s(x)\left( 1-\eta _s(y)\right) . \end{aligned}$$

Hence, the variance \(\sigma ^2_{2,T}\) is given by

$$\begin{aligned} \lim _{N\uparrow \infty }\frac{1}{a^2_N}{\mathbb {E}}_\rho \left( {{\mathscr {M}}}^{TN}_{TN}\right) ^2= & {} \lim _{N\uparrow \infty }\frac{\rho (1-\rho )}{a^2_N}\int _0^{TN} \sum _{x,y\in {\mathbb {Z}}^d}p(y-x)\left( u_{TN-s}(y)- u_{TN-s}(x)\right) ^2 ds\nonumber \\= & {} \lim _{N\uparrow \infty }\frac{2\rho (1-\rho )}{a^2_N} \int _0^{TN} \int _{{\mathbb {T}}^d}\theta _d(k)|\hat{u}_{TN -s}(k)|^2dk\, ds\nonumber \\ \end{aligned}$$
(5.12)

using a form of Parseval’s relation: The random walk Dirichlet form

$$\begin{aligned} \frac{1}{2}\sum _{x,y\in {\mathbb {Z}}^d}p(y-x)\left( u_{TN-s}(y) - u_{TN-s}(x)\right) ^2= & {} -\left\langle u_{TN-s},\Delta u_{TN-s}\right\rangle \\= & {} \int _{{\mathbb {T}}^d}\theta _d(k)|\hat{u}_{TN-s}(k)|^2dk. \end{aligned}$$

Then, noting the explicit form of \(\hat{u}_t\) in (5.6), Lemma 3.5, and the asymptotics of \(a_N\) (cf. Theorem 2.8), the limit exists and is positive, by standard analysis as used in the proof of Lemma 5.1.

Now, by Feynman–Kac’s formula, for \(\beta \in {\mathbb {R}}\), the process

$$\begin{aligned} {{\mathscr {N}}}_t^{T,\beta } = \exp \left\{ i \beta U_t^T (\eta _t) - i \beta U_0^T (\eta _0) - \int _0^t e^{- i \beta U_s^T (\eta _s)} (\partial _s +{{\mathscr {L}}}) e^{i \beta U_s^T (\eta _s)} ds \right\} , \end{aligned}$$

for \(0\le t\le T\), is a martingale with expectation 1. By the form of \(U^T\) and (5.4), we have

$$\begin{aligned} e^{- i \beta U_s^T (\eta _s)} (\partial _s +{{\mathscr {L}}}) e^{i \beta U_s^T (\eta _s)} = i \beta (\partial _s + {{\mathscr {L}}}) U_s^T (\eta _s) + A(\beta , s,T) \end{aligned}$$

with \(A(\beta , s, T)\) equal to

$$\begin{aligned} \sum _{x,y} p(y-x) \left[ e^{i \beta (u_{T-s} (y) -u_{T-s} (x))} - i\beta (u_{T-s} (y) -u_{T-s} (x)) -1 \right] \eta _s (x) (1-\eta _s (y)). \end{aligned}$$

We have to show that

$$\begin{aligned}&{{\mathbb {E}}}_{\rho }\left| {\mathbb {E}}_{\eta _0} \left[ \exp \left( i \beta \frac{{{\mathscr {M}}}_{TN}^{TN}}{a_N}\right) -\exp \left( -\sigma _{2,T}^2 \beta ^2 /2\right) \right] \right| \\&\quad ={{\mathbb {E}}}_{\rho } \left| {\mathbb {E}}_{\eta _0} \left[ {{\mathscr {N}}}_{TN}^{TN,\beta /a_N } \left\{ \exp \left[ -\int _0^{NT} A\left( \frac{\beta }{a_N }, s,NT \right) ds \right] - \exp \left( -\sigma _{2,T}^2 \beta ^2 /2\right) \right\} \right] \right| \end{aligned}$$

vanishes as \(N\uparrow \infty \).

Note, for \(x,t\in {\mathbb {R}}\),

$$\begin{aligned} \left| e^{i tx} - 1- it x +{x^2 t^2}/2\right| \le Ct^2 x^2 \min ( 1, |t x|) \end{aligned}$$
(5.13)

and that \(a^{-1}_N\sup _x|u_{NT-s}|(x)\rightarrow 0\) by (5.10), \(a_N\)-asymptotics in Theorem 2.8 and straightforward computations.
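The elementary bound (5.13) in fact holds with \(C=1\): for \(s=tx\), one has \(|e^{is}-1-is+s^2/2| \le |s|^3/6\) and also \(|e^{is}-1-is| \le s^2/2\), so the left side is at most \(s^2\min (1,|s|)\). A brute-force check:

```python
import numpy as np

s = np.linspace(-50.0, 50.0, 2_000_001)
lhs = np.abs(np.exp(1j * s) - 1 - 1j * s + s ** 2 / 2)
rhs = s ** 2 * np.minimum(1.0, np.abs(s))     # (5.13) with C = 1
margin = float(np.max(lhs - rhs))             # should never be positive
```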

With this estimate, there exists a constant \(C>0\) such that

$$\begin{aligned} \left| {{\mathscr {N}}}_{TN}^{TN, \beta / a_N }\right|\le & {} \exp \left[ \int _0^{NT} \left| \, A\left( \frac{\beta }{a_N } , s,NT \right) \right| ds\, \right] \nonumber \\\le & {} \exp \left\{ \frac{C\beta ^2}{a^2_N } \int _0^{NT} \sum _{x,y} p(y-x) [ u_{NT-s}(y) -u_{NT -s} (x) ]^2 ds \right\} \nonumber \\= & {} \exp \left\{ \frac{C\beta ^2}{2a^2_N } \int _0^{NT} \left( \int _{{\mathbb {T}}^d} \theta _d (k) | \, {\hat{u}}_{NT -s} (k) \, |^2 \, dk \right) ds \right\} , \end{aligned}$$
(5.14)

where the second inequality comes from a Taylor expansion and the equality from the Parseval relation for the \(\Delta \)-Dirichlet form.

As the variance in (5.12) converges, the quantity \( \int _0^{NT} |A(a_N^{-1} \beta , s,NT) | ds\) and (5.14) are uniformly bounded in N. Therefore, the problem reduces to showing that

$$\begin{aligned} \lim _{N \rightarrow \infty } \, \int _0^{NT} A\left( \frac{\beta }{a_N } , s,NT \right) ds = \frac{\sigma _{2,T}^2 \beta ^2}{2} \end{aligned}$$
(5.15)

in probability under \({{\mathbb {P}}}_{\rho }\).

Then, to prove (5.15), noting (5.13), it is sufficient to show, in probability, that

$$\begin{aligned} \lim _{N \rightarrow \infty } \, \frac{1}{a_N^2 } \int _0^{NT} \left[ \sum _{x,y\in {\mathbb {Z}}^d} b_N (s,x,y) \, \eta _s (x) (1-\eta _s (y) ) \right] ds = \sigma _{2,T}^2 \end{aligned}$$

where

$$\begin{aligned} b_N (s,x,y) = p(y -x) (u_{NT -s} (y) -u_{NT-s} (x))^2. \end{aligned}$$

This statement, by the form (5.12) of \(\sigma ^2_{2,T}\), would follow if we can replace \(\eta _s(x)(1-\eta _s(y))\) by \(\rho (1-\rho )\) in \({\mathbb {L}}^2({\mathbb {P}}_\rho )\):

$$\begin{aligned} \lim _{N \rightarrow \infty } \, \frac{1}{a_N^2 } \int _0^{NT} \left[ \sum _{x,y\in {\mathbb {Z}}^d} b_N(s,x,y) \, \left\{ \eta _s (x) (1{-}\eta _s (y) ) {-} \rho (1{-}\rho ) \right\} \right] ds = 0. \end{aligned}$$
(5.16)

To prove (5.16), after squaring terms, since \((a_N^{-2}\int _0^{NT} \sum _{x,y}b_N(s,x,y)ds)^2\) converges in (5.12), we need only show the covariance

$$\begin{aligned} {\mathbb {E}}_\rho \left[ \left\{ \eta _s (x) (1-\eta _s (y) ) - \rho (1-\rho ) \right\} \left\{ \eta _u (z) (1-\eta _u (w) ) - \rho (1-\rho ) \right\} \right] \end{aligned}$$

vanishes uniformly in \(x,y,z,w\) as \(|u-s|\uparrow \infty \). As

$$\begin{aligned} \eta (\ell )(1{-}\eta (k)) {-} \rho (1-\rho ) = \ (1{-}\rho )(\eta (\ell ){-}\rho ) {-}\rho (\eta (k)-\rho ) {-} (\eta (\ell ){-}\rho )(\eta (k){-}\rho ), \end{aligned}$$

by a calculation using the duality process decompositions in Sect. 3.2, namely the symmetric semigroup action

$$\begin{aligned} T_t \prod _{i=1}^n (\eta (x_i)-\rho ) = \sum _{|A|=n} p_t^{(n)}\left( \left\{ x_1,\ldots , x_n\right\} , A\right) \prod _{y\in A}(\eta (y)-\rho ), \end{aligned}$$

the covariance is bounded by

$$\begin{aligned} C(\rho )\left\{ p^{(1)}_{|u-s|}(x,z) + p^{(1)}_{|u-s|}(x,w) + p^{(1)}_{|u-s|}(y,z) + p^{(1)}_{|u-s|}(y,w) + p^{(2)}_{|u-s|}\left( (x,y), (z,w)\right) \right\} \end{aligned}$$

where \(p^{(n)}\) is the continuous-time transition probability of n particles in symmetric simple exclusion for \(n\ge 1\). By Corollary VIII.1.9 in [16], we have the bound

$$\begin{aligned} p^{(2)}_v\left( (k_1,k_2), (\ell _1,\ell _2)\right) \ \le \ C\sum _{i,j = 1}^2 p^{(1)}_v(k_i,\ell _j). \end{aligned}$$

As \(p^{(1)}_v(k,\ell ) = p^{(1)}_v(0,k{-}\ell )\), to show the covariance vanishes, we show \(\lim _{v\uparrow \infty }p^{(1)}_v(0,k) = 0\) uniformly in k.

To this end, we bound \(p^{(1)}_v(0,k)^2 = p^{(1)}_v(0,k)p^{(1)}_v(k,0) \le p^{(1)}_{2v}(0,0)\) uniformly in k. But,

$$\begin{aligned} p^{(1)}_v(0,0) = \int _{{\mathbb {T}}^d} e^{-v(1-\hat{p}(k))}dk\ =\ \int _{{\mathbb {T}}^d} e^{-v\theta _d(k)}dk. \end{aligned}$$

Since for \(\alpha \ge 1\), by Lemma 3.5, \(\theta _d(k) \ge C|k|^2\) near the zeroes of \(\theta _d\), we have \(p^{(1)}_v(0,0) \le C'v^{-1/2}\), which shows the covariance vanishes uniformly. \(\square \)
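The final decay \(p^{(1)}_v(0,0) = O(v^{-1/2})\) can be observed numerically; the nearest-neighbour kernel \(\theta _1(k) = 2\sin ^2(\pi k)\) in \(d=1\) is again an illustrative choice:

```python
import numpy as np

def p00(v, K=200_000):
    # p_v(0,0) = int_0^1 exp(-v * theta_1(k)) dk with theta_1(k) = 2 sin^2(pi k)
    k = (np.arange(K) + 0.5) / K
    return float(np.sum(np.exp(-2.0 * v * np.sin(np.pi * k) ** 2)) / K)

# under v^{-1/2} decay, quadrupling v should halve the return probability
ratio = p00(100.0) / p00(400.0)
```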

6 Proof of results: asymmetric jumps

The proofs of the results for the (LA) long-range asymmetric model rely on several ingredients, among them estimation of variational formulas for \(L_f(\lambda )\), which we have prepared for in Sect. 3.5, and several technical results collected in the “Appendix”.

6.1 Proof of Theorem 2.12

We first make a few reductions. By Corollary 3.3, the variance \(\sigma ^2_f(t) \le 5 t^{-1} L^{(S)}_f(t^{-1})\). Then, by Theorem 2.6, which bounds \(L^{(S)}_f(\lambda )\), all statements in Theorem 2.12 follow modulo a few exceptions in \(d\le 2\). In \(d=1\), we still need to show (a) admissibility when \({{\mathrm{deg}}}(f)=1\), \(\alpha \in (1,2)\cup (2,\infty )\) and \(\rho \ne 1/2\), and (b) admissibility when \({{\mathrm{deg}}}(f)= 2\), \(\alpha > 2\), \(\rho \in [0,1]\). In \(d=2\), the case not obtained is (c) admissibility when \({{\mathrm{deg}}}(f)=1\), \(\alpha \ge 2\) and \(\rho \ne 1/2\).

When \(\alpha >2\), by Lemma 3.1, \(\sigma ^2_f(t) \le 10t^{-1}L_f(t^{-1})\) and by Theorem 2.4, \(L_f(\lambda )\approx L^{(FR)}_f(\lambda )\) with respect to a jump probability \(p^{(FR)}\) with a drift. Also, by Proposition , when \(\rho \ne 1/2\), \(\lambda ^2L_f^{(FR)}(\lambda )\) is bounded as \(\lambda \downarrow 0\) for all local f. Hence, \(\lambda ^2L_f(\lambda )\) is also bounded and \(\sigma ^2_f(t)= O(t)\) when \(\rho \ne 1/2\) in \(d=1,2\), and so parts (a) and (c) in these cases also hold. Also, by Proposition , for local functions f with degree \({{\mathrm{deg}}}(f)=2\), and any \(0\le \rho \le 1\), we know \(\lambda ^2L^{(FR)}_f(\lambda )\) is bounded as \(\lambda \downarrow 0\). Therefore, \(\lambda ^2L_f(\lambda )\) is also bounded and \(\sigma ^2_f(t)=O(t)\), establishing part (b).

What remains, to conclude the proof of Theorem 2.12, is to show admissibility of degree one functions when

  1. (A)

    \(\alpha \in (1,2)\), \(d=1\) and \(\rho \ne 1/2\), and

  2. (B)

    \(\alpha =2\), \(d=2\) and \(\rho \ne 1/2\).

By Remark 2.7 and the already proven admissibility of functions of at least degree 2 in these cases (A) and (B), it is sufficient to focus on the degree 1 function \(f(\eta )= \eta (0) -\rho \).

For the rest of the section, we recall that \(\theta _d = \theta _d(\cdot ; s_0(\cdot ))\) (cf. Sect. 3.4) in all the formulas.

6.1.1 Proof of (A)

To prove \(f(\eta )=\eta (0)-\rho \) is admissible, by Lemma 3.1, we need to bound \(\langle f, (t^{-1}-{{\mathscr {L}}})^{-1}f\rangle _\rho \). Then, by Lemma 3.2, using the inf form, to get an upper bound, we restrict the infimum to the set of functions g of degree one. By the estimate (3.13), in the ‘free particle’ formulation, we have

$$\begin{aligned}&\inf _{{\text {g of degree one}}}\left\{ \Vert \eta (0)-\rho +\mathscr {A}g\Vert _{-1,\lambda }^2+\Vert g\Vert _{1,\lambda }^2\right\} \\&\qquad \le \inf _\varphi \left\{ \Vert \delta _0+\mathfrak {T}_{1,1}\varphi \Vert _{-1,\lambda ,\mathrm{{free}}}^2+\Vert \mathfrak {T}_{1,2}\varphi \Vert _{-1,\lambda ,\mathrm{{free}}}^2+\Vert \varphi \Vert _{1,\lambda , \mathrm{{free}}}^2\right\} , \end{aligned}$$

which is further expressed, in terms of the Fourier transform \(\hat{\varphi }\), as

$$\begin{aligned}&\inf _{\hat{\varphi }}\left\{ \int _{0}^1\frac{|1+(1-2\rho )\hat{a}(u)\hat{\varphi }(u)|^2}{\lambda +\theta _1(u)}du+\int _{0}^1(\lambda +\theta _1(u))|\hat{\varphi }(u)|^2 du\right. \nonumber \\&\quad \left. + \chi (\rho )^2\int _{0}^1|\hat{\varphi }(u)|^2\int _{0}^1\frac{|\hat{a}(s)+\hat{a}(u-s)|^2}{\lambda +\theta _1(s)+\theta _1(u-s)}ds \; du\right\} . \end{aligned}$$
(6.1)

We note, as \(\varphi \) is real, \({\hat{\varphi }}\) is a complex function with even real part and odd imaginary part. The previous infimum is taken over this set of complex functions.

Now, for real numbers \(b,c>0\) and \(a\ne 0\), we observe

$$\begin{aligned} \inf _{z\in {\mathbb {C}}} \left\{ \frac{|1+iaz|^2}{b}+c|z|^2\right\} = \frac{1}{b+\frac{a^2}{c}} \end{aligned}$$

and the infimum is realized at \(z=i a/(bc+a^2)\). In our case, we have

$$\begin{aligned} ia= & {} (1-2\rho )\hat{a}(u),\\ b= & {} \lambda +\theta _1(u)\\ c= & {} \lambda +\theta _1(u)+\chi (\rho )^2\int _0^1\frac{|\hat{a} (s)+\hat{a}(u-s)|^2}{\lambda +\theta _1(s)+\theta _1(u-s)}ds. \end{aligned}$$
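The elementary infimum identity can be verified directly: writing \(z=x+iy\), the functional is a convex quadratic in \((x,y)\), and completing the square yields the claimed minimizer and minimum. A quick numerical confirmation, with sample values for \(a, b, c\):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = 1.3, 0.7, 2.1                 # sample values with a != 0 and b, c > 0
f = lambda z: abs(1 + 1j * a * z) ** 2 / b + c * abs(z) ** 2
zstar = 1j * a / (b * c + a ** 2)       # claimed minimizer
fmin = 1.0 / (b + a ** 2 / c)           # claimed minimum value
gap = abs(f(zstar) - fmin)              # should vanish
# f is a convex quadratic in (Re z, Im z): no random probe should do better
probes = zstar + 0.5 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
best_probe = min(f(z) for z in probes)
```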

Then, the infimum (6.1) is realized for the function

$$\begin{aligned} {\hat{\varphi }} (u)= -\frac{G^{(1)}_{\lambda ,\rho } (u)}{(1-2\rho ){\hat{a}} (u) \left[ \lambda + \theta _1 (u) +G^{(1)}_{\lambda , \rho } (u)\right] }, \end{aligned}$$

where \(G^{(1)}_{\lambda ,\rho } \) is given by

$$\begin{aligned} G^{(1)}_{\lambda ,\rho }(u)= \frac{(1-2\rho )^2|\hat{a}(u)|^2}{\lambda +\theta _1(u)+\chi (\rho )^2\int _0^1\frac{|\hat{a} (s)+\hat{a}(u-s)|^2}{\lambda +\theta _1(s)+\theta _1(u-s)}ds}. \end{aligned}$$

Noting that \(G^{(1)}_{\lambda , \rho }\) is even, we see \({\hat{\varphi }} (u)\) has odd imaginary part and zero real part.

Therefore, we obtain the infimum (6.1) is equal to

$$\begin{aligned} \int _{0}^1 \frac{1}{\lambda +\theta _1(u)+G^{(1)}_{\lambda ,\rho }(u) }du. \end{aligned}$$

We split the above integral over the \(u\)-regions \([0,\delta ]\), \([\delta , 1-\delta ]\) and \([1-\delta , 1]\) for \(\delta >0\) small. The contributions from the first and last regions are the same, while the integral over the middle region is O(1), independent of \(\lambda \), since \(\theta _1\) vanishes only on \({{\mathscr {C}}}_1\).

By Lemma 7.2, \(\sup _{s\in {\mathbb {T}}}|\hat{a}(s) + \hat{a}(u-s)|^2 \le Cu^2\). Also, by Lemma 7.4, for \(1<\alpha <2\),

$$\begin{aligned} \int _{(0,\delta )\cup (1-\delta ,1)} \frac{ds}{\lambda +\theta _1(s) + \theta _1(s-u)} \ \le \ C_0(\lambda + u^\alpha /C_1)^{1/\alpha -1}. \end{aligned}$$

On the other hand, \(\int _\delta ^{1-\delta }(\lambda +\theta _1(s) + \theta _1(s-u))^{-1}ds = O(1)\) not depending on \(\lambda \).

Hence, there exist \(\kappa _0,\kappa _1>0\) such that for any \(0<u\le \delta \)

$$\begin{aligned} G^{(1)}_{\lambda ,\rho }(u)\ \ge \ {\frac{\kappa _0u^2}{\lambda +u^\alpha +u^2[(\lambda +\kappa _1 u^\alpha )^{1/\alpha -1}+1]}}. \end{aligned}$$

Therefore

$$\begin{aligned} \int _{0}^\delta \frac{du}{\lambda +\theta _1(u)+G^{(1)}_{\lambda ,\rho }(u)}\ \le \ {\int _{0}^{\delta }\frac{du}{\lambda +u^\alpha +\frac{\kappa _0u^2}{\lambda +u^\alpha +u^2[1+(\lambda +\kappa _1 u^\alpha )^{1/\alpha -1}]}}}=: J(\lambda ), \end{aligned}$$

where

$$\begin{aligned} \limsup _{\lambda \rightarrow 0}J(\lambda )= \int _{0}^\delta \frac{du}{u^\alpha +\frac{\kappa _0u^2}{u^\alpha +u^2+\kappa _1^{1/\alpha -1} u^{3-\alpha }}}. \end{aligned}$$
  • If \(1<\alpha <3/2\), as \(u\rightarrow 0\),

    $$\begin{aligned} u^\alpha +\frac{\kappa _0u^2}{u^\alpha +u^2+\kappa _1^{1/\alpha -1} u^{3-\alpha }}\ \sim \ \kappa _0 u^{2-\alpha } \end{aligned}$$

    because \(3-\alpha >\alpha \) and \(\alpha >2-\alpha \).

  • If \(3/2<\alpha <2\), as \(u\rightarrow 0\),

    $$\begin{aligned} u^\alpha +\frac{\kappa _0u^2}{u^\alpha +u^2+\kappa _1^{1/\alpha -1} u^{3-\alpha }}\ \sim \ \frac{\kappa _0}{\kappa _1^{1/\alpha -1}} u^{\alpha -1} \end{aligned}$$

    because \(3-\alpha <\alpha <2\) and \(\alpha -1<\alpha \).

  • If \(\alpha =3/2\), as \(u\rightarrow 0\),

    $$\begin{aligned} u^\alpha +\frac{\kappa _0u^2}{u^\alpha +u^2+\kappa _1^{1/\alpha -1} u^{3-\alpha }}\ \sim \frac{\kappa _0}{1+\kappa _1^{-1/3}} u^{1/2}. \end{aligned}$$

In all these cases, \(\limsup _{\lambda \rightarrow 0} J(\lambda )\) is finite, finishing the proof of part (A). \(\square \)
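The dominant balances above can be checked numerically: with \(\kappa _0=\kappa _1=1\) as sample values, the local log-log slope of the denominator of the integrand near \(u=0\) should be \(2-\alpha \) for \(1<\alpha <3/2\) and \(\alpha -1\) for \(3/2<\alpha <2\), i.e. \(0.75\) for both \(\alpha =1.25\) and \(\alpha =1.75\):

```python
import numpy as np

def D(u, alpha):
    # denominator of the integrand of lim J(lambda), with kappa_0 = kappa_1 = 1
    return u ** alpha + u ** 2 / (u ** alpha + u ** 2 + u ** (3 - alpha))

def loglog_slope(alpha, u=1e-6):
    # local slope of log D versus log u over one decade near u
    return float(np.log(D(10 * u, alpha) / D(u, alpha)) / np.log(10.0))

s1 = loglog_slope(1.25)   # dominant balance predicts 2 - alpha = 0.75
s2 = loglog_slope(1.75)   # dominant balance predicts alpha - 1 = 0.75
```

In both cases the exponent is below 1, which is why \(\limsup _{\lambda \rightarrow 0}J(\lambda )\) is finite.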

6.1.2 Proof of (B)

We proceed as in Sect. 6.1.1, and note that it suffices to show

$$\begin{aligned} \limsup _{\lambda \rightarrow 0} \int _{{\mathbb {T}}^2} \frac{1}{\lambda + \theta _2 (u) + G^{(2)}_{\lambda ,\rho } (u)} \, du \ < \ \infty \end{aligned}$$
(6.2)

where

$$\begin{aligned} G^{(2)}_{\lambda ,\rho } (u) =\frac{(1-2\rho )^2|\hat{a}(u)|^2}{\lambda +\theta _2 (u)+\chi (\rho )^2\int _{{\mathbb {T}}^2} \frac{|\hat{a} (s)+\hat{a}(u-s)|^2}{\lambda +\theta _2(s)+\theta _2(u-s)}ds}. \end{aligned}$$

We split the integral appearing in (6.2) into five parts according to whether u is close to one of the four points in \({{\mathscr {C}}}_2\) or not. The integral corresponding to the exceptional region is O(1), independent of \(\lambda \), as in part (A). The four remaining integrals can all be treated similarly, and we restrict ourselves to the integral corresponding to the small ball \(\{ u \in {\mathbb {T}}^2 \, ; \, |u| \le \delta \}\) where \(\delta >0\) is small.

In the sequel C is a positive constant, which can depend on \(\delta \) but not on \(\lambda \), changing line to line. By Lemmas 7.2 and 3.5, we have

$$\begin{aligned} \int _{|u| \le \delta } \frac{1}{\lambda + \theta _2 (u) + G^{(2)}_{\lambda ,\rho } (u)} \, du \ \le \ \int _{|u| \le \delta } \frac{1}{\lambda + C |u|^2 |\log |u| | + C H_{\lambda ,\rho } (u)} \, du \end{aligned}$$

where, recalling m is the mean of p,

$$\begin{aligned} H_{\lambda ,\rho } (u) = \frac{|u \cdot m|^2}{\lambda + C |u|^2 | \log |u| | + C | u|^2 \int _{{\mathbb {T}}^2} \frac{ds}{\lambda + \theta _2 (s) + \theta _2 (s-u)}}. \end{aligned}$$

We now write

$$\begin{aligned} \int _{{\mathbb {T}}^2} \frac{ds}{\lambda + \theta _2 (s) +\theta _2(s-u)} = \sum _{w \in {{\mathscr {C}}}_2 } \int _{|s-w| \le \delta /2} \frac{ds}{\lambda + \theta _2 (s) + \theta _2 (s-u)} + R_{\delta } (\lambda ), \end{aligned}$$

where \(\sup _{\lambda >0} R_\delta (\lambda ) \le C\) since \(\theta _2\) is positive and vanishes only on \({{\mathscr {C}}}_2\). Similarly, as \(|u|\le \delta \), all integrals in the sum over \(w \in {{\mathscr {C}}}_2\) are of the same order as the integral on the domain \(\{|s| \le \delta /2\}\). By Lemma 3.5 and the fact, for |x| small, that \(|x|^2 |\log |x|| \ge |x|^2\), it follows

$$\begin{aligned} \int _{{\mathbb {T}}^2} \frac{ds}{\lambda + \theta _2 (s) + \theta _2 (s-u)}\le & {} C\int _{|s| \le \delta /2} \frac{ds}{\lambda + |s|^2 + |s-u|^2} +C\\\le & {} C\int _{|s| \le \delta /2} \frac{ds}{\lambda + |s|^2 + |u|^2} +C\\\le & {} C \left| \log ( \lambda + |u|^2) \right| + C, \end{aligned}$$

where the second inequality is obtained from \(|x|^2/4 \le (|y|^2 + |x-y|^2)/2\) and the third from direct computations.
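For completeness, the “direct computations” behind the third inequality amount to the following polar-coordinate evaluation, with \(C(\delta )\) a constant depending only on \(\delta \):

```latex
\int_{|s| \le \delta/2} \frac{ds}{\lambda + |s|^2 + |u|^2}
  \ =\ 2\pi \int_0^{\delta/2} \frac{r \, dr}{\lambda + r^2 + |u|^2}
  \ =\ \pi \log\left( 1 + \frac{(\delta/2)^2}{\lambda + |u|^2} \right)
  \ \le\ \pi \left| \log ( \lambda + |u|^2 ) \right| + C(\delta).
```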

Substituting into \(H_{\lambda ,\rho }\) and noting again \(|x|^2 | \log |x| | \ge |x|^2\) for small |x|, we get

$$\begin{aligned} H_{\lambda ,\rho } (u)\ge & {} \frac{|u \cdot m|^2}{\lambda + C |u|^2 | \log |u| | + C |u|^2 \left| \log ( \lambda + |u|^2) \right| } \\\ge & {} \frac{|u \cdot m|^2}{\lambda + C |u|^2 \left| \log ( \lambda + |u|^2) \right| }. \end{aligned}$$

Fix \(\varepsilon \in (0,1)\) and observe, for \(\delta \) sufficiently small, that \(\sup _{|t| \le \delta }\{ |t|^{\varepsilon } |\log |t|| \} \le 1\). Then,

$$\begin{aligned} H_{\lambda ,\rho } (u) \ \ge \ \frac{|u \cdot m|^2}{\lambda + C \left( \lambda + |u|^2 \right) ^{1-\varepsilon } }, \end{aligned}$$

and we arrive at an upper bound for the integral in (6.2) given by

$$\begin{aligned} C \int _{|u| \le \delta } \left[ \lambda +|u|^{2} + \frac{|u \cdot m|^2}{\lambda + C(\lambda +|u|^2)^{1-\varepsilon } }\right] ^{-1} du. \end{aligned}$$
(6.3)

We can assume \(m=(m_1,m_2) \in {\mathbb {R}}^2\) is such that \(m_1 \ne 0, m_2 \ne 0\), and so \(|u \cdot m|^2 \ge C |u|^2\). Otherwise, we choose a rotation \(R_{-\theta }\) by an angle \(-\theta \), with \(\theta \in (0,2\pi )\), such that \(R_{-\theta } m\) satisfies the previous condition, and change variables \(v=R_{\theta } u\) in the above integral (6.3). Thus, an upper bound of (6.3) is

$$\begin{aligned}&C \int _{|u| \le \delta } \left[ \lambda +|u|^{2} + \frac{|u|^2}{\lambda + C (\lambda +|u|^2)^{1-\varepsilon } }\right] ^{-1} du \\&\quad \le \ C \int _{|u| \le \delta } \left[ \lambda +|u|^{2} + \frac{|u|^2}{C (\lambda +|u|^2)^{1-\varepsilon } }\right] ^{-1} du, \end{aligned}$$

where we note \(\lambda \le \lambda ^{1- \varepsilon } \le (\lambda +|u|^2)^{1-\varepsilon }\) for all small \(\lambda \).

Through polar coordinates, we are left to show

$$\begin{aligned} \limsup _{\lambda \rightarrow 0} \int _{0}^{\delta } \frac{r}{\lambda + r^2 + C \frac{r^2}{(\lambda +r^2)^{1-\varepsilon }} } \, dr \ < \ \infty . \end{aligned}$$

Changing variables \(v= \lambda ^{-1/2} r\), the integral becomes

$$\begin{aligned}&\int _0^{\delta \lambda ^{-1/2}} \frac{v}{1+v^2 + C\lambda ^{\varepsilon -1} \frac{v^2}{(1+ v^2)^{1-\varepsilon }}} dv\\&\quad \le \ \int _0^1 v dv + C \lambda ^{1- \varepsilon } \int _1^{\delta \lambda ^{-1/2}} \frac{ (1+v^2)^{1-\varepsilon }}{v} dv = O(1). \end{aligned}$$

This finishes the proof of (B). \(\square \)
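The boundedness of the last integral as \(\lambda \downarrow 0\) can also be seen numerically, with \(\varepsilon =1/2\), \(\delta =0.1\) and \(C=1\) as sample values:

```python
import numpy as np

def I(lam, eps=0.5, delta=0.1, n=2_000_000):
    # midpoint rule on (0, delta * lambda^{-1/2}) for the integral above
    top = delta / np.sqrt(lam)
    v = (np.arange(n) + 0.5) * (top / n)
    den = 1 + v ** 2 + lam ** (eps - 1) * v ** 2 / (1 + v ** 2) ** (1 - eps)
    return float(np.sum(v / den) * (top / n))

vals = [I(10.0 ** -p) for p in (2, 4, 6)]   # stays O(1) as lambda shrinks
```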

6.2 Proof of Theorem 2.14

Only the results for \(\alpha \le 2\) need proof. The upper bounds are obtained using Corollary 3.3 and Theorem 2.8. Indeed, for completeness, we discuss the case \(1<\alpha <2\), the rest being similar. From Theorem 2.8 we have that \(\sigma ^2_t(f)\sim t^{2-1/\alpha }\). Then, by the change of variables \(\lambda t=s\), we obtain

$$\begin{aligned} L_f(\lambda ) = \int _0^{\infty }e^{-\lambda t}\sigma _t^2(f)dt\ \le \ \lambda ^{1/\alpha -3}\int _0^{\infty }e^{-s}s^{2-1/\alpha }ds = O(\lambda ^{1/\alpha - 3}). \end{aligned}$$
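The change of variables \(\lambda t = s\) gives \(\int _0^{\infty }e^{-\lambda t}t^{2-1/\alpha }dt = \Gamma (3-1/\alpha )\,\lambda ^{1/\alpha -3}\); a quick numerical check at the sample value \(\alpha =3/2\):

```python
import numpy as np
from math import gamma

alpha, lam, h = 1.5, 0.01, 0.002
t = (np.arange(2_000_000) + 0.5) * h           # midpoint Riemann sum on (0, 4000)
numeric = float(np.sum(np.exp(-lam * t) * t ** (2 - 1 / alpha)) * h)
exact = gamma(3 - 1 / alpha) * lam ** (1 / alpha - 3)
rel_err = abs(numeric - exact) / exact
```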

To address the lower bounds, we first note a bound for degree 2 functions g in \(d=1\). When \(\alpha <2\), by the admissibility Theorem 2.12, such a g is admissible. When \(\alpha =2\), by Lemma 3.2 and Theorem 2.9, the Tauberian variance \(L_g(\lambda )\le L^{(S)}_g(\lambda )\le C \lambda ^{-2}|\log \lambda |\), which is of smaller order than the desired lower bound for degree 1 functions in this situation; in fact, we believe g is admissible in this case (cf. Remark 2.13), although this is not needed here.

Hence, decompose a local degree 1 function f as \(f=\Psi _{\{0\}} + g\). By the inequality \(L_{\Psi _{\{0\}}}(\lambda ) \le 2L_f(\lambda )+ 2L_g(\lambda )\) in (3.3), we need only prove the lower bound for the specific one-point function \(f = \Psi _{\{0\}}\). Recall the notation in Sect. 3.5, which is used throughout this subsection.

Noting (3.2), we apply Proposition 3.6 and estimate the integral \(I_{1} (\lambda , 1/2)\) there, which serves as a lower bound for \(\langle \Psi _{\{0\}}, (\lambda - {{\mathscr {L}}})^{-1}\Psi _{\{0\}}\rangle _\rho \). For this purpose, we restrict the integration domain of the integral \(I_1(\lambda , 1/2)\) in (3.12) to a small neighborhood of 0, say \((0,\delta )\), for \(\delta >0\) small. Note, since u is very small, the domains \(D_V\) for \(V\in {\mathscr {C}}_1\) (cf. (3.11)) take the form

$$\begin{aligned} D_0 (u) =[0,u], \quad D_1 (u)= [u,1]. \end{aligned}$$

Since \(\rho =1/2\) and \(d=1\), it follows from Lemma 7.2 that the sum of the two integrals, over the domains \(D_0\) and \(D_1\), appearing in the definition of \(F^1_{\lambda ,1/2}\) in (3.12) with respect to the integral \(I_1(\lambda , 1/2)\), is of order

$$\begin{aligned} b_\alpha (u) \int _0^1 \frac{ds}{\lambda + \theta _1 (s) + \theta _1 (s-u)}. \end{aligned}$$
(6.4)

where

$$\begin{aligned} b_{\alpha } (u)\ =\ {\left\{ \begin{array}{ll} \sin ^2 (\pi u) \log ^2 (u), \quad &{}{\text {if}}\, \alpha =1,\\ \sin ^{2} (\pi u), \quad &{}{\text {if}}\, \alpha >1. \end{array}\right. } \end{aligned}$$

We rewrite the integral in (6.4) as the sum of the integrals over \([0,\delta ]\), \([\delta , 1-\delta ]\) and \([1-\delta , 1]\). By periodicity of \(\theta _1\), the integral on \([1-\delta ,1]\) is the same as that over \([0,\delta ]\). Also, the integral on \([\delta ,1-\delta ]\) is O(1), independent of \(\lambda \), as \(\theta _1\) vanishes only at 0 and 1. However, in Lemma 7.4 in the “Appendix”, \(\alpha \)-dependent bounds are given for the integral \(\int _0^{\delta } (\lambda + \theta _1 (s) + \theta _1 (s-u))^{-1}ds\).

We now substitute these estimates for the integral into the formula for \(I_1(\lambda , 1/2)\).

  1. (i)

    For \(\alpha =1\), since \(b_1 (u) = \sin ^2 (\pi u) \log ^2 (u) \sim \pi ^2 u^2 \log ^2 (u)\) for \(u \sim 0\), we have, for some positive constants \(C_0,C_1\),

    $$\begin{aligned} I_1 (\lambda , 1/2) \ \succcurlyeq \ \int _0^{\delta } \frac{du}{\lambda + u + u^{2} \log ^2 (u) \left[ 1+ C_0 \log \left( 1+ \frac{C_1}{\lambda + u/C_1}\right) \right] }. \end{aligned}$$

    To show that the last integral is of the same order as \(\int _{0}^{\delta } (\lambda +u)^{-1} du =\log (1+ \delta /\lambda )\), it suffices to verify that the difference

    $$\begin{aligned} R_{\lambda }\ :=\ \int _{0}^{\delta } \frac{u^{2} \log ^2 (u) \left[ 1+ C_0 \log \left( 1+ \frac{C_1}{\lambda + u/C_1}\right) \right] }{(\lambda +u)\left\{ \lambda + u + u^{2} \log ^2 (u) \left[ 1+ C_0 \log \left( 1+ \frac{C_1}{\lambda + u/C_1}\right) \right] \right\} }du = o(|\log \lambda |). \end{aligned}$$

    To this end, note that the denominator of the integrand is bounded below by \((\lambda +u)^2\). For small \(\varepsilon \in (0,1)\), since \(u^{2} \log ^2 (u) =O(u^{2-\varepsilon })\) for small u, the numerator is bounded above by a constant times \(u^{2-\varepsilon } |\log (\lambda )|\). Then, by the change of variables \(u = \lambda v\), we have

    $$\begin{aligned} R_{\lambda } \ \le \ C |\log (\lambda )| \int _{0}^{\delta } \frac{u^{2-\varepsilon }}{(\lambda +u)^2} du = O(\lambda ^{1-\varepsilon }|\log (\lambda )|). \end{aligned}$$
  (ii)

    For \(\alpha \in (1,2)\), since \(b_\alpha (u)=\sin ^2 (\pi u) \sim \pi ^2 u^2\) for \(u \sim 0\), it follows, for positive constants \(C_0,C_1\), that

    $$\begin{aligned} I_1 (\lambda , 1/2) \ \succcurlyeq \ \int _0^{\delta } \frac{du}{\lambda + |u|^{\alpha } + C_0 u^{2}(1+ (\lambda + u^{\alpha } /C_1)^{1/\alpha -1}) }. \end{aligned}$$
    • Assume that \(1<\alpha \le 3/2\). Changing variables \(u = \lambda ^{1/\alpha }z\), and noting when \(\alpha \le 3/2\) and \(\lambda \le 1\) that \(\lambda ^{3/\alpha -2} \le 1\), we have

    • Assume that \(3/2 \le \alpha < 2\). Changing variables \(u=\lambda ^{1-1/(2\alpha )} z\), similarly,
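    In the first sub-case, the display following the substitution does not survive in this text; as a sketch (reconstructed from the displayed lower bound on \(I_1(\lambda , 1/2)\) above, with schematic constants, and not necessarily the authors' exact display), writing \(u = \lambda ^{1/\alpha } z\) so that \(\lambda + u^{\alpha } = \lambda (1+z^{\alpha })\),

    $$\begin{aligned} I_1 (\lambda , 1/2)&\succcurlyeq&\lambda ^{1/\alpha -1} \int _0^{\delta \lambda ^{-1/\alpha }} \frac{dz}{1 + z^{\alpha } + C_0 z^{2}\left( \lambda ^{2/\alpha -1} + \lambda ^{3/\alpha -2}(1+z^{\alpha }/C_1)^{1/\alpha -1}\right) }\\&\succcurlyeq&\lambda ^{1/\alpha -1} \int _0^{\delta \lambda ^{-1/\alpha }} \frac{dz}{1 + z^{\alpha } + C_0 z^{2}\left( 1 + (1+z^{\alpha }/C_1)^{1/\alpha -1}\right) }, \end{aligned}$$

    since \(\lambda ^{2/\alpha -1}, \lambda ^{3/\alpha -2} \le 1\) in this range of \(\alpha \) and \(\lambda \). As the last integrand decays like \(z^{\alpha -3}\) with \(3-\alpha >1\), the integral converges as \(\lambda \downarrow 0\), suggesting the order \(I_1(\lambda , 1/2) \succcurlyeq \lambda ^{-(1-1/\alpha )}\).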

  (iii)

    For \(\alpha =2\), since \(b_2 (u) =\sin ^2 (\pi u) \sim \pi ^2 u^2\) for \(u\sim 0\), changing variables \(u=\lambda ^{3/4}z\), we have

    where

    The term \(R(\lambda , z)\) is of order \(|\log \lambda |^{-1/2}\) for \(0\le z\le \lambda ^{-1/8}\). Hence,

    $$\begin{aligned} I_1(\lambda , 1/2)&\succcurlyeq&\lambda ^{-1/4} \int _0^{ \lambda ^{-1/8}} \frac{dz}{1 + \kappa z^{2}|\log \lambda |^{-1/2}} \ \succcurlyeq \ \lambda ^{-1/4}|\log \lambda |^{1/4}. \end{aligned}$$
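    The last estimate follows from an explicit evaluation: using \(\int _0^Z (1+az^2)^{-1} dz = a^{-1/2} \arctan (a^{1/2} Z)\) with \(a = \kappa |\log \lambda |^{-1/2}\),

    $$\begin{aligned} \int _0^{\lambda ^{-1/8}} \frac{dz}{1 + \kappa z^{2}|\log \lambda |^{-1/2}} = \frac{|\log \lambda |^{1/4}}{\sqrt{\kappa }}\, \arctan \left( \sqrt{\kappa }\, |\log \lambda |^{-1/4} \lambda ^{-1/8}\right) , \end{aligned}$$

    and the arctangent tends to \(\pi /2\) as \(\lambda \downarrow 0\), since \(\lambda ^{-1/8} |\log \lambda |^{-1/4} \rightarrow \infty \); hence the integral is of order \(|\log \lambda |^{1/4}\).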

\(\square \)
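As a side numerical check, not part of the proof, the elementary closed form \(\int _0^{\delta } (\lambda +u)^{-1} du = \log (1+\delta /\lambda )\) used in case (i) can be compared against a midpoint Riemann sum; the sample values of \(\lambda \) and \(\delta \) below are illustrative.

```python
import math

def midpoint_integral(lam: float, delta: float, n: int = 200_000) -> float:
    """Midpoint Riemann sum for the integral of 1/(lam + u) over [0, delta]."""
    h = delta / n
    return sum(h / (lam + (i + 0.5) * h) for i in range(n))

lam, delta = 1e-3, 0.1                      # sample values with lam << delta
closed_form = math.log(1 + delta / lam)     # = log(1 + delta/lam)
numeric = midpoint_integral(lam, delta)
assert abs(numeric - closed_form) < 1e-4
# The value grows like |log(lam)| as lam -> 0, matching the logarithmic
# order of the lower bound for I_1(lambda, 1/2) when alpha = 1.
```
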

6.3 Proof of Theorem 2.15

The only statement to prove is the first one. The desired upper bound is a consequence of Corollary 3.3 and Theorem 2.8. For the lower bound, again by Remark 2.7 and the admissibility of functions of degree 2 or more in \(d=2\) given in Theorem 2.12, we need only focus on \(f = \Psi _{\{0\}}\).

We begin as in the proof of Theorem 2.14: with \(\alpha =2\) and \(\rho =1/2\), to find a lower bound on \(L_f(\lambda )\), we use (3.2) and estimate the integral \(I_2(\lambda , 1/2)\) in (3.12), which yields a lower bound for \(\langle \Psi _{\{0\}}, (\lambda - {{\mathscr {L}}})^{-1}\Psi _{\{0\}}\rangle _\rho \). We restrict the domain of integration in \(I_2(\lambda , 1/2)\) to a small box \([0,\delta ]^2\) with \(\delta >0\) small.

For \(u \in [0,\delta ]^2\), by the periodicity of \(\theta _2\) and \(\hat{a}\) in each direction, and by Lemma 7.2, we bound the term in \(F^2_{\lambda , 1/2}\) (cf. (3.12)) by

$$\begin{aligned}&\sum _{V \in {{\mathscr {C}}}_2} \int _{s \in D_V (u)} \frac{| {\hat{a}} (s) + {\hat{a}} (u-s)|^2}{\lambda + \theta _2 (s) + \theta _2 (u-s)} ds \le 4\int _{{\mathbb {T}}^2} \frac{| {\hat{a}} (s) + {\hat{a}} (u-s)|^2}{\lambda + \theta _2 (s) + \theta _2 (u-s)} ds\\&\quad \preccurlyeq |u|^2 \int _{{\mathbb {T}}^2} \frac{1}{\lambda + \theta _2 (s) + \theta _2 (u-s)} ds. \end{aligned}$$

We split the region of integration into five parts: the four sets \(\{s \in {\mathbb {T}}^2 \, ; \, |s-w| \le \delta /2\}\) for \(w\in {{\mathscr {C}}}_2\), and the complement of their union. The integral over the complement is O(1) uniformly in \(\lambda \), since \(\theta _2\) vanishes exactly on \({{\mathscr {C}}}_2\). Moreover, by the periodicity of \(\theta _2\) in each direction, the integrals over the four remaining regions are all equal. Thus, by Lemma 3.5, the inequality \(|x|^2 |\log |x|| \ge |x|^2\) for small |x|, and \(|x|^2/4\le (|y|^2 + |x-y|^2)/2\), we have

$$\begin{aligned} \int _{{\mathbb {T}}^2} \frac{1}{\lambda + \theta _2 (s) + \theta _2 (u-s)} ds&\preccurlyeq&1 + \int _{|s| \le \delta /2} \frac{1}{\lambda + |s|^2|\log |s|| + |u-s|^2|\log |u{-}s||} ds \\&\preccurlyeq&1+\int _{|s| \le \delta /2} \frac{1}{\lambda + |s|^2 + |u-s|^2} ds\\&\preccurlyeq&\left| \log (\lambda +|u|^2) \right| . \end{aligned}$$
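One way to see the final logarithmic bound (a sketch with schematic constants): by the identity \(|s|^2 + |u-s|^2 = 2|s-u/2|^2 + |u|^2/2\), setting \(w = s-u/2\) and \(A = \lambda + |u|^2/2\), and passing to polar coordinates,

$$\begin{aligned} \int _{|s| \le \delta /2} \frac{ds}{\lambda + |s|^2 + |u-s|^2} \ \le \ \int _{|w| \le 2\delta } \frac{dw}{A + 2|w|^2} \ = \ 2\pi \int _0^{2\delta } \frac{r\, dr}{A + 2r^2} \ = \ \frac{\pi }{2} \log \left( 1 + \frac{8\delta ^2}{A}\right) , \end{aligned}$$

which is \(\preccurlyeq |\log (\lambda + |u|^2)|\) since \(A \ge (\lambda + |u|^2)/2\).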

Finally, by Lemma 3.5 again, together with the inequalities \(|u|^2 \le |u|^2 |\log |u||\) and \( |u|^2 |\log (\lambda +|u|^2)| \le |u|^2 |\log |u|^2 |\) for small |u|, we obtain the lower bound,

\(\square \)