1 Introduction

A long-standing problem in mathematical physics concerns the rigorous derivation of the macroscopic evolution equations of the conserved quantities in Newtonian particle systems. By assuming a stochastic dynamics instead of a deterministic one, the problem becomes mathematically tractable and some answers can be given successfully. Over the last four decades, both mathematicians and theoretical physicists have made many fruitful and relevant contributions to this problem, and one of the challenges in the spotlight has been the derivation of the well-known hydrodynamic limit from stochastic interacting particle systems, as well as the characterization of the fluctuations of locally conserved quantities around that limit. In this framework, many types of partial differential equations (PDEs) and stochastic PDEs (SPDEs) have been studied and derived from several underlying random dynamics; the nature of these equations depends on the type of the underlying dynamics. The hydrodynamic limit consists in showing that the empirical measure associated to each conserved quantity converges to a deterministic measure which is absolutely continuous with respect to the Lebesgue measure and whose density is a solution to a PDE, the hydrodynamic equation. Since the limit is deterministic, probabilistically speaking the hydrodynamic limit is a law of large numbers for the conserved quantity(ies) of the system, whereas the fluctuations constitute a central limit theorem, since the limit is random and described by a solution to an SPDE. These two derivations are done through a scaling limit procedure where the scaling parameter n connects the macroscopic space, a continuous space where the solutions of the macroscopic equations are defined, to the microscopic space, a discrete space where the random system evolves according to a prescribed dynamics.
Since two scales for space are considered, two scales for time naturally emerge: a macroscopic time t and a microscopic time \(tn^a\), where the value of the parameter a depends strongly on the underlying dynamics. For microscopic systems with only one conservation law, there is no ambiguity in the choice of the fluctuation fields that one should look at. Nevertheless, for multi-component systems there are many ways of defining the fluctuation fields associated to the conserved quantities; moreover, a special feature of these systems is that different time scales might coexist, which never occurs for systems with only one conservation law.

In [42], with a focus on anharmonic chains of oscillators, the theory of nonlinear fluctuating hydrodynamics (NLFH) was developed for the equilibrium time-correlations of the conserved quantities of that model (which has several conservation laws), and analytical predictions were made based on a mode-coupling approximation. Roughly speaking, the approach of [42] starts at the macroscopic level, i.e. it assumes that a hyperbolic system of conservation laws governs the macroscopic evolution of the empirical conserved quantities. One then adds a diffusion term and a dissipation term to the system of coupled PDEs and linearizes the system at second order with respect to the equilibrium averages of the conserved quantities. A fundamental role is played by the normal modes, i.e. the eigenvectors of the linearized equation; these modes evolve with different velocities and in different time scales. The modes may be described by different forms of anomalous super-diffusion or by standard diffusion, and this description depends on the value of certain coupling constants. These coupling constants fix the value of the quadratic terms in the equation, i.e. the terms in the evolution equation of each quantity which are written as products of the conserved quantities, and they determine the limiting processes that one should obtain. For systems with two conservation laws all the possible limits are summarized in the tables in the appendix, see Tables 1, 2 and 3. Surprisingly, besides the usual diffusive behavior, several forms of anomalous behavior can be obtained, either by means of fractional behavior (but only for very particular values of the fractional power) or by the Kardar-Parisi-Zhang (KPZ) behavior.

The KPZ behavior, which has been extensively derived from microscopic systems and is the one we refer to in this article, is characterized through the KPZ equation or its companion, the stochastic Burgers (SB) equation, which we now briefly describe. The KPZ equation, introduced in [31], is the following SPDE:

$$\begin{aligned} \partial _t h = \nu \partial _x^2 h + \tau (\partial _x h)^2 + \sqrt{D} \dot{W}. \end{aligned}$$

Above, \(t\in [0,\infty )\) and \(x\in \mathbb {R}\) denote the temporal and spatial variables, respectively. In addition, \(\nu ,D>0\) and \(\tau \in \mathbb {R}\) are constants and \(\dot{W}=\dot{W}(t,x)\) denotes the one-dimensional space-time white noise. The KPZ equation is conjectured to be a universal SPDE describing the fluctuations of randomly growing interfaces of one-dimensional stochastic dynamics close to a stationary state. Throughout this paper, we focus on the one-dimensional case for space. Moreover, the tilt \(u=\partial _xh\) formally satisfies the following stochastic Burgers (SB) equation:

$$\begin{aligned} \partial _t u = \nu \partial _x^2 u + \tau \partial _x u^2 + \sqrt{D} \partial _x \dot{W}. \end{aligned}$$
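Indeed, the SB equation can be obtained, at a purely formal level, by differentiating the KPZ equation in the space variable and substituting \(u=\partial _x h\):

```latex
\partial_t u = \partial_x(\partial_t h)
  = \partial_x\big( \nu \partial_x^2 h + \tau (\partial_x h)^2 + \sqrt{D}\,\dot{W} \big)
  = \nu \partial_x^2 u + \tau \partial_x u^2 + \sqrt{D}\,\partial_x \dot{W}.
```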

What is notable about the above KPZ/SB equations is their universality: they have been derived from various types of microscopic systems [10, 16, 19, 21, 26, 28, 29]. Whereas these results concern the scalar-valued case, a vector-valued version, which macroscopically corresponds to a coupled system of KPZ/SB equations, has also been studied [2, 13, 25] for a system of interacting diffusion processes.

Table 1 Classification (I)
Table 2 Classification (II)
Table 3 Classification (III)

Above we mentioned that fractional behavior has been predicted by NLFH theory for the normal modes of systems with several conservation laws. According to the predictions of [42], the fractional behavior in systems with two conservation laws is given by an \(\alpha \)-Lévy process with \(\alpha \in \{3/2,5/3, \text {Gold}\}\), where \( \text {Gold}=(1+\sqrt{5})/2\) is the golden mean. The fact that only these exponents appear in the limit is still not well understood.

In this paper, we consider a model of interacting oscillators which was introduced in [9] and that we refer to as the BS model. The dynamics results from the superposition of a Hamiltonian dynamics (depending on a given potential, say \(V:\mathbb {R}\rightarrow \mathbb {R}\), and whose strength is regulated by a sequence \(\alpha _n\)) and a noise that exchanges variables at nearest-neighbor positions. This model admits two conserved quantities, volume and energy, and anomalous diffusion phenomena have been observed for these quantities. The paper [3] pointed out that the model exhibits anomalous behavior when the potential is given by the so-called Toda lattice potential (a.k.a. the Kac-van Moerbeke potential) \(V(\eta )=e^{-\eta }-1+\eta \). Additionally, [4, 5] identified the limiting process of the energy fluctuation field, in the case of the harmonic potential \(V(\eta )=\eta ^2/2\), as a 3/2-Lévy process, while the volume fluctuation field has diffusive behavior. In fact, the limit behavior of the energy fluctuation field is diffusive in the regime where the noise is stronger than the Hamiltonian dynamics, but when the Hamiltonian dynamics is stronger the behavior is described by a skewed 3/2-fractional Lévy process; see Fig. 1, where it is assumed that \(\alpha _n=O(n^{-\kappa })\) and the system is sped up in the time scale \(tn^a\):

Fig. 1

\(V(\eta )\) (Energy) fluctuations

In [8] it was proved that the last result remains valid in an anharmonic setting where the harmonic potential is perturbed by a weak quartic term, i.e. for \(V_n(\eta )=\eta ^2/2+\gamma _n \eta ^4/4\) with \(\gamma _n=O(n^{-1/4})\).

More recently, [1] analyzed, in the case of the Toda lattice potential, the limiting behavior of the fluctuation field of another conserved quantity, corresponding to the derivative of the energy, i.e. \(V'(\eta )\): the SB equation was derived as the limiting object when the Hamiltonian dynamics is as strong as the exchange noise dynamics, and diffusive behavior when the noise is stronger. The results for this quantity are summarized in Fig. 2.

Fig. 2

\(V'(\eta )\) fluctuations

In the regime where the Hamiltonian dynamics is stronger, it is believed that the behavior of the quantity \(V'(\eta )\) should be given in terms of the so-called KPZ fixed point (KPZ-fp), constructed in [35]. For the other normal mode, only diffusive behavior has been obtained, and only in the regime where the intensity of the noise is stronger than the Hamiltonian dynamics; all the other regimes are still open.

As we have seen above, depending on the chosen potential, the limit behavior of the conserved quantities can be diffusive, KPZ or 3/2-Lévy. With this in mind, our motivation in this article is to see how universal the aforementioned behavior is. Therefore, in this paper, we study the above model of interacting oscillators driven by a general form of the potential V in the high-temperature regime. To clarify our idea, let \(\beta >0\) be the inverse temperature of the system and consider the infinite temperature limit \(\beta \rightarrow 0\). Then, the rescaled potential \(V_\beta (\eta )=\beta ^{-2}V(\beta \eta )\) is expanded as

$$\begin{aligned} V_\beta (\eta ) = \frac{1}{2} V^{\prime \prime }(0) \eta ^2 + \frac{1}{6} \beta V^{(3)}(0) {\eta ^3} + \cdots , \end{aligned}$$
(1.1)

provided that the potential satisfies the normalizing condition \(V(0)=V^\prime (0)=0\). From this simple Taylor expansion argument, we can extract the harmonic potential as the principal object, with a small error term as \(\beta \rightarrow 0\). We thus expect that the previous results can be recovered in greater generality, since the above argument holds for an almost arbitrary potential, as long as it satisfies some regularity condition, for instance. Moreover, note that the potential possibly carries a nonlinear perturbation proportional to \(\beta \), which we may call an asymmetry since it comes from a cubic function. From this observation, it is also expected that results stemming from a nonlinearity can be re-derived from our model. Our main results state that these two expectations are indeed justified.
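For completeness, the expansion (1.1) follows directly from the definition \(V_\beta (\eta )=\beta ^{-2}V(\beta \eta )\) and Taylor's theorem at the origin, using \(V(0)=V'(0)=0\):

```latex
V_\beta(\eta) = \beta^{-2} V(\beta\eta)
  = \beta^{-2}\Big( \tfrac{1}{2} V''(0)\,\beta^2\eta^2
      + \tfrac{1}{6} V^{(3)}(0)\,\beta^3\eta^3 + O(\beta^4\eta^4) \Big)
  = \tfrac{1}{2} V''(0)\,\eta^2 + \tfrac{1}{6}\,\beta\, V^{(3)}(0)\,\eta^3 + O(\beta^2).
```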

In this paper, we consider the limiting behavior of the volume and energy fluctuation fields. A natural object to consider is the pair of volume and energy fluctuation fields themselves. To obtain a nonlinear term in the limit, however, we are obliged to take moving frames with different velocities for the two fields. This makes it difficult to obtain a closed system of limiting equations, and only a result for linear fluctuations is obtained (see Theorem 2.5). Instead, we study a linear combination of the volume and energy fluctuation fields, i.e. the scalar-valued field associated to the quantity \(\eta +\mathfrak {u}V(\eta )\) (\(\mathfrak {u}\) is a constant to be fixed), with a common velocity. We are then in the context of NLFH theory and can consider the fields of the normal modes associated to the system. It turns out that we have two possible choices of linear combination, i.e. two choices for the value of \(\mathfrak {u}\) and the corresponding velocities, whose associated fluctuation fields converge either to solutions of the Ornstein–Uhlenbeck equation, in the weak asymmetry regime; or to the SB equation or a 3/2-Lévy fractional diffusion equation, in the stronger asymmetry regime.

One could be tempted to use the NLFH theory developed in [42] to set up the correct linear combination and respective velocity, and then to obtain the corresponding predictions on the limiting equations for each field. Nevertheless, the starting point would be to consider the column matrix given by the average flux of each conserved quantity. Unfortunately, we are not able to perform these computations in great generality, because we cannot rewrite these average currents in terms of the averages of the conserved quantities, and as a consequence we cannot pose the problem correctly. In [42] NLFH theory was applied to the BS model for a fixed potential and, despite the difficulties we just mentioned, some numerical simulations were done and predictions for the normal modes were obtained; for more on this see the appendix. To stand on exact results, what we do instead is to apply Dynkin's formula to a generic field written as a linear combination of the fields of energy and volume, i.e. \(\eta + \mathfrak {u} V(\eta )\), both evolving in the same reference frame with a given velocity v. The choice of \(\mathfrak {u}\) and of the velocity v of the reference frame is such that, in Dynkin's formula, the lower-order terms with respect to the conserved quantities vanish. From this observation, we can infer the correct fields to look at and the respective velocities. Since we are concerned with two conservation laws, we have two unknowns \((\mathfrak {u},v)\) to obtain from a system of two equations. By eliminating these diverging terms in Dynkin's formula, what remains are higher-order terms that are, in principle, easier to control. Nevertheless, our infinitesimal generator, when acting on the conserved quantities, energy and volume, adds more terms to the equations and, as a consequence, in the evolution of each quantity we get a hierarchy of equations that has to be carefully estimated and then properly truncated.
To that end, we use two different methods. The first one is based on the second-order Boltzmann–Gibbs principle, which is used to derive the crossover from the Ornstein–Uhlenbeck equation to the SB equation (Theorem 2.13). The second-order Boltzmann–Gibbs principle was introduced in [19] to treat quadratic fields of the exclusion process, and was extended to many other models, for instance in [12, 20, 21, 22, 23]. This is the same argument that was used to obtain the diagram in Fig. 2 for the fluctuations of \(V'(\eta )\), and we obtain exactly the same behavior for a generic potential satisfying mild assumptions.

The second method, which we employ for the derivation of the fractional behavior (Theorem 2.16), is to consider higher-order fields of the quantities that appear in the evolution equations and to properly control the new terms in the expansion. This strategy was developed in [4] to analyze the anomalous behavior of the energy for the harmonic potential, and in [8] to show the persistence of the universal behavior observed for the harmonic potential when the potential is perturbed by a quartic term. In our specific model we consider a quadratic field associated to the quantity \(V'(\eta )\), i.e. a two-dimensional field given in terms of the quantities \(V'(\eta )\), and by looking at the evolution of this field we get new quantities, written in terms of both volume and energy, that we have to truncate to see the leading terms. In the case of the harmonic potential the evolution of the quadratic field is quite simple, since it only involves the energy and the quadratic field itself. In this sense the hierarchy is finite: we start from the energy field, in whose evolution we see the quadratic field of the other quantity, namely the volume, and in the evolution of the quadratic field of the volume we get back to the energy field. Then the only difficulty is to properly link the equations of the two fields, but this can be done by properly choosing test functions that solve a Poisson equation. In our model this can also be done, but one has to be careful since other terms appear in the evolution equations. Since we can tune the value of the inverse temperature \(\beta \), we are able to control these new terms and truncate the hierarchy in one step, as for the harmonic potential.

The two fluctuation fields that we consider, namely the fields associated to the quantities \(\eta +\mathfrak {u} V(\eta )\) for the choices \(\mathfrak {u} =V^{(3)}(0)\beta \) and \(\mathfrak {u} =-1/\lambda \) (where \(\lambda \) is the average of \(V'(\eta )\) with respect to the stationary measure), asymptotically behave as the fields of \(V'(\eta )\) and \(V(\eta )\); for this reason our results extend those on the fluctuations of \(V'(\eta )\) for the Toda lattice potential and on the fluctuations of the energy \(V(\eta )\) for the harmonic potential. We therefore show that the SB behavior and the 3/2-Lévy behavior are universal.

Finally, we note the appearance of fractional diffusions in many other contexts, for example in a 1-d infinite chain of coupled harmonic oscillators [14, 34, 40]. In [40] it is proved that the density of the energy distribution (by means of the Wigner distribution) has a space-time evolution given by a linear phonon Boltzmann equation, whose solution, when properly scaled, becomes in the limit a solution of the fractional diffusion equation with exponent 5/3. In [14] the same model as in [40] is considered, but the particles are subject to the action of a magnetic field of intensity B. The case \(B=0\) was studied in [30], where it was proved that the energy super-diffuses as a 3/2-fractional diffusion, while if \(B\ne 0\) it is described by a 5/3-fractional diffusion, see [40]. In [14] the intensity of the magnetic field needed to pass from one regime to the other is quantified, and the transition mechanism is described in terms of a Lévy process that interpolates between the two fractional universality classes: 3/2-Lévy and 5/3-Lévy. As we can see from the previous examples, by either changing the dynamics or changing the potential, the limiting laws can take a variety of forms, and they are universal in the sense that they can be obtained from a general collection of microscopic dynamics and do not depend on the special features of the underlying microscopic dynamics but only on their phenomenology. The crossover that has been established between those universal laws can be obtained by tuning certain parameters of the microscopic dynamics, which permit a comparison of one dynamics with respect to another. When one dominates, the system falls in one universality class, and it can cross to another universality class when the other dynamics dominates instead. With the previous examples, we have just seen a possible way to cross from the Edwards-Wilkinson (EW) class [17], i.e. Gaussian fluctuations, to the KPZ universality class, by considering the fluctuation field of \(V'(\eta )\) in the case of the Toda lattice potential and by changing the parameter regulating the Hamiltonian dynamics. When both dynamics are tuned with the same strength, we see the SB equation connecting both universality classes. We highlight that the crossover from the SB equation to the (conjectured) KPZ fixed point is still missing. Moreover, we have seen how to cross from the EW class to the 3/2-Lévy class, and these classes are connected by a process that is obtained as the superposition of the two dynamics. There are other mechanisms to cross between the EW and 3/2-Lévy classes, see for example [6, 8]. In [8] a perturbation of the quadratic potential by a small anharmonicity of quartic form (a version of the Fermi-Pasta-Ulam (FPU) chain) is considered, and it is shown that for a certain strength of the quartic potential the limit fluctuations continue to behave as 3/2-Lévy, i.e. the same behavior as for the purely quadratic potential. In Fig. 3, we show a cartoon in \((1+1)\)-dimensions representing some universality classes that have already been obtained from microscopic dynamics. According to [37, 38], for multi-component systems with several conservation laws there can be a family of universality classes given by \(\alpha \)-Lévy processes with an exponent \(\alpha \) written as the quotient of consecutive Fibonacci numbers. However, we have less understanding of the other universality classes, whose existence is predicted by NLFH, and there are still open problems. As future work, we are strongly interested in characterizing, in a mathematically rigorous way, the limit behavior of the fluctuations in the possible universality classes.

Fig. 3

Classification of some universality classes

1.1 Organization of the paper

In Sect. 2, we give a precise definition of our model and state our main results. Our first result concerns the pair of fluctuation fields corresponding to the two conserved quantities: volume and energy. After that, we consider two possible choices of linear combinations of these original fields, from which the SB equation and the 3/2-Lévy fractional diffusion equation are derived. Section 3 is devoted to the computation of Dynkin's martingale decomposition for the original fluctuation fields and to the proof of the first result. Then, in Sect. 4, we present a way, based on the expression of Dynkin's martingale, to choose the linear combination of the original fields so as to cancel some diverging terms, provided the asymmetric part of the generator is larger than the value considered in the first result for the original fluctuation fields. The proofs of the convergence to the SB equation and to the 3/2-Lévy fractional diffusion will be given in Sect. 5 and Sect. 6, respectively. In the appendix we discuss the predictions from NLFH.

1.2 Notation

Given two real-valued functions f and g depending on a variable \(u\in \mathbb {R}^d\), we write \(f(u)\lesssim g(u)\) if there exists a constant \(C>0\) such that \(f(u)\le C g(u)\) for any u. Moreover, we write \(f=O(g)\) (resp. \(f=o(g)\)) in a neighborhood of \(u_0\) if there exists \(C>0\) such that \(|f|\le C|g|\) in that neighborhood (resp. \(\lim _{u\rightarrow u_0}f(u)/g(u)=0\)). Sometimes it will be convenient to make precise the dependence of the constant C on some extra parameter, which will be done by the standard notation \(C(\lambda )\) if \(\lambda \) is the extra parameter. Finally, we denote by \(\langle \cdot , \cdot \rangle _{L^2(\mathbb {R})}\) the inner product in \(L^2(\mathbb {R})\), i.e. for any \(f,g\in L^2(\mathbb {R})\)

$$\begin{aligned} \langle f, g\rangle _{L^2(\mathbb {R})} :=\int _{\mathbb {R}} f(x) g(x) dx, \end{aligned}$$

and by \(\Vert \cdot \Vert _{L^2(\mathbb {R})}\) the \(L^2(\mathbb {R})\)-norm, i.e. \( \Vert f \Vert _{L^2(\mathbb {R})} :=( \langle f, f\rangle _{L^2(\mathbb {R})} )^{1/2}\).

2 Statement of Results

2.1 Model description

Here we define our model and state our main results. Let \(\mathscr {X}=\mathbb {R}^{\mathbb {Z}}\) be the space of configurations of oscillators, whose elements are denoted by \(\eta = (\eta _j)_{j \in \mathbb {Z}}\). For each \(\eta \in \mathscr {X}\) and \(\alpha >0\), define

$$\begin{aligned} ||| \eta |||_\alpha = \sum _{j\in \mathbb {Z}} | \eta _j| e^{-\alpha |j|}, \end{aligned}$$

and let \(\Omega _\alpha \) be the set of configurations \(\eta \in \mathscr {X}\) such that \(|||\eta |||_\alpha <+\infty \). The normed space \((\Omega _\alpha ,|||\cdot |||_\alpha )\) is a Banach space. Set \(\Omega =\cap _{\alpha >0} \Omega _\alpha \). Note that the space \(\Omega \) is a complete metric space with respect to the distance

$$\begin{aligned} d(\eta _1,\eta _2) = \sum _{\ell \in \mathbb {N}} 2^{-\ell } \min \{ 1, ||| \eta _1 -\eta _2 |||_{1/\ell }\}. \end{aligned}$$

In this paper, we consider the dynamics of oscillators on the configuration space \(\Omega \) driven by a general potential V, which satisfies the following assumptions.

Assumption 2.1

Assume that a smooth convex non-negative function \(V:\mathbb {R}\rightarrow \mathbb {R}\) satisfies \(V(0)=V'(0)=0\) and \(V^{\prime \prime }(0)=1\). Moreover, assume that for each \(k\in \{0,\ldots ,5\}\) the derivative \(V^{(k)}\) has at most exponential growth, that is, there exists a constant \(\gamma _V>0\) such that

$$\begin{aligned} \max _{0\le k\le 5} \sup _{\eta \in \mathbb {R}} \big | e^{-\gamma _V |\eta |} V^{(k)}(\eta ) \big | <+\infty . \end{aligned}$$

Here we used the convention \(V^{(0)}=V\).
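To illustrate the scope of Assumption 2.1, the following small Python sketch (an illustration only, not part of the assumption) checks numerically, on a grid, that the Toda lattice potential \(V(\eta )=e^{-\eta }-1+\eta \) of Example 2.4 satisfies the exponential-growth condition with the candidate constant \(\gamma _V=1\), using that \(V^{(k)}(\eta )=(-1)^k e^{-\eta }\) for \(k\ge 2\):

```python
import math

def toda_derivative(k, x):
    """k-th derivative of the Toda potential V(x) = exp(-x) - 1 + x."""
    if k == 0:
        return math.exp(-x) - 1 + x
    if k == 1:
        return 1 - math.exp(-x)
    return (-1) ** k * math.exp(-x)  # V^(k)(x) = (-1)^k e^{-x} for k >= 2

gamma_V = 1.0  # candidate growth constant
grid = [j / 10 for j in range(-200, 201)]
# max over 0 <= k <= 5 of |e^{-gamma_V |x|} V^(k)(x)| on the grid
bound = max(
    abs(math.exp(-gamma_V * abs(x)) * toda_derivative(k, x))
    for k in range(6)
    for x in grid
)
# the quantity stays bounded (close to 1), consistent with Assumption 2.1
```

The grid and the value of \(\gamma _V\) are ad hoc choices for this check; the supremum in the assumption is of course over all of \(\mathbb {R}\).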

In what follows, we fix a nonlinear function V satisfying Assumption 2.1 and for each \(\beta >0\) we set \(V_{\beta }(\eta )=\beta ^{-2}V(\beta \eta )\). Throughout this paper, we are interested in the case when \(\beta \) is small, which means that the temperature of the system goes to infinity. In this regime, by the Taylor expansion (1.1), the scaled potential \(V_{\beta }\) converges pointwise to the harmonic potential \(\eta ^2/2\) as \(\beta \rightarrow 0\). Now, we define the dynamics. Let \(n>0\) be a scaling parameter and let \(L = S + \alpha _n A\) with \(\alpha _n=O(n^{-\kappa })\), where the operators A and S act on a smooth local function \(f:\Omega \rightarrow {\mathbb {R}}\) as

$$\begin{aligned} A f(\eta ) = \sum _{j \in \mathbb {Z}} \big ( V^\prime _{\beta _n}(\eta _{j-1}) - V^\prime _{\beta _n}(\eta _{j+1}) \big ) \partial _{\eta _j} f(\eta ) = \sum _{j \in \mathbb {Z}} (\xi _{j-1} - \xi _{j+1})\partial _{\eta _j} f(\eta ), \end{aligned}$$

and

$$\begin{aligned} S f(\eta ) =\frac{1}{2} \sum _{j \in \mathbb {Z}} \big ( f(\eta ^{j,j+1}) - f(\eta ) \big ), \end{aligned}$$

respectively. Above we introduced the notation

$$\begin{aligned} \xi _j=V_\beta ^\prime (\eta _j). \end{aligned}$$
(2.1)

We also introduced the notation \(\eta ^{j,j+1}\) for the configuration obtained from \(\eta \) by exchanging the occupation variables at sites j and \(j+1\): \((\eta ^{j,j+1})_k=\eta _{j+1}\) when \(k=j\), \((\eta ^{j,j+1})_k=\eta _{j}\) when \(k=j+1\), and \((\eta ^{j,j+1})_k=\eta _{k}\) otherwise. Moreover, let \(\theta (n) = n^a\) be a time scaling factor for some \(a>0\) and set \(L_n = \theta (n)L\). To work in the high-temperature regime, we take the inverse temperature in such a way that

$$\begin{aligned} \beta =\beta _n\rightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \). In [9] it was assumed that the potential \(V:{\mathbb {R}}\rightarrow {\mathbb {R}}\) is a non-negative smooth function, such that \(Z_{\beta _n, b,\lambda }\) is well defined for \(b>0\) and \(\lambda \in {\mathbb {R}}\), and moreover that the potential satisfies \(0\le V^{\prime \prime }\le C\) for some \(C>0\). We observe that these conditions are sufficient to have a well-defined dynamics, but they are not necessary. For example, it is shown in [3] that the Toda lattice potential \(V(\eta )=e^{-\eta }-1+\eta \) gives rise to a well-defined dynamics on \(\Omega \), with some restrictions on the parameters of the stationary states, which we will describe below. In the whole article, we assume that our choice of potential is such that the infinite-volume dynamics is well-defined. Let \(\{ \eta _j(t):t\ge 0, j\in \mathbb {Z}\}\) be the Markov process on \(\Omega \) generated by \(L_n\), where we omit the dependence on n.
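The conservation of volume \(\sum _j\eta _j\) and energy \(\sum _jV_\beta (\eta _j)\) by both parts of the generator can be checked on a toy version of the dynamics. The following minimal Python sketch (a finite periodic chain with the harmonic potential as a stand-in, not the infinite-volume dynamics defined above) verifies that the exchange noise preserves both quantities exactly, and that the Hamiltonian drift \(\xi _{j-1}-\xi _{j+1}\) both sums to zero and is orthogonal to the energy gradient, by telescoping:

```python
def V(x):
    """Harmonic stand-in for the potential V_beta."""
    return 0.5 * x * x

def Vp(x):
    """Derivative V', so that xi_j = V'(eta_j)."""
    return x

def hamiltonian_drift(eta):
    """Drift of A at site j: xi_{j-1} - xi_{j+1}, on a periodic chain."""
    n = len(eta)
    xi = [Vp(e) for e in eta]
    return [xi[(j - 1) % n] - xi[(j + 1) % n] for j in range(n)]

def exchange(eta, j):
    """Noise S: swap the variables at sites j and j+1 (periodic)."""
    out = list(eta)
    k = (j + 1) % len(eta)
    out[j], out[k] = out[k], out[j]
    return out

eta = [0.3, -1.2, 0.7, 2.0, -0.4]
drift = hamiltonian_drift(eta)
swapped = exchange(eta, 1)
# sum_j (xi_{j-1} - xi_{j+1}) telescopes to 0 -> volume is conserved by A
# sum_j xi_j (xi_{j-1} - xi_{j+1}) = 0        -> energy is conserved by A
```

The chain length and configuration are arbitrary illustrative choices; the same telescoping identities are what lie behind the conservation laws in infinite volume.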

2.2 Invariant measure and static estimates

Similarly to the interacting diffusion case [16, 25, 29], the product Gibbs measure associated with the potential \(V_\beta \), whose common marginal is given by

$$\begin{aligned} \nu _{\beta , {\textsf{b}},\lambda }(d\eta _j) = \frac{1}{Z_{\beta , {\textsf{b}},\lambda }} \exp (-\mathsf bV_\beta (\eta _j) + \lambda \eta _j) d\eta _j, \end{aligned}$$
(2.2)

is invariant for this process. Above, \({\textsf{b}}>0\), \(\lambda \in \mathbb {R}\) are constants and

$$\begin{aligned} Z_{\beta , \mathsf b,\lambda }:=\int _{-\infty }^{+\infty }\exp (-\mathsf bV_\beta (\eta )+\lambda \eta ) d\eta , \end{aligned}$$
(2.3)

is the normalizing constant, called the partition function, which makes \(\nu _{\beta , {\textsf{b}},\lambda }\) a probability measure, and \(\beta >0\) is the inverse temperature, whose value is taken in such a way that \(Z_{\beta ,{\textsf{b}},\lambda }\) is finite and thus the measure \(\nu _{\beta ,{\textsf{b}},\lambda }\) is well-defined. The next lemma assures that when the inverse temperature \(\beta \) is sufficiently small, an assumption that we will impose throughout the paper, the partition function \(Z_{\beta ,{\textsf{b}},\lambda }\) is finite and a uniform exponential moment bound holds. In what follows, we assume for simplicity that the inverse temperature satisfies \(\beta <1\).

Lemma 2.2

Fix \({\textsf{b}}>0\) and \(\lambda \in \mathbb {R}\). For any \(\gamma >0\), there exist \(C_\gamma >0\) and \(\beta _c=\beta _c(\gamma )>0\) such that

$$\begin{aligned} \sup _{\beta<\beta _c} E_{\nu _{\beta ,{\textsf{b}},\lambda }} \big [e^{\gamma |\eta _j|} \big ] < C_\gamma , \end{aligned}$$
(2.4)

where \(E_{\mu }\) denotes the expectation with respect to a probability measure \(\mu \).

Proof

First we show that the partition function has a uniform bound. We choose \(K_0>0\) in such a way that \(K_0\ge 4|\lambda |/{\textsf{b}}\). Moreover, we assume that \(\beta >0\) is sufficiently small, so that

$$\begin{aligned} \beta M_* K_0^2 \le K_0, \end{aligned}$$

where we set \(M_*= \sup _{\delta \in (-1,1)} \big | V^{(3)}(\delta K_0) \big |\). Recall that the Taylor expansion for \(V_\beta \) yields

$$\begin{aligned} V'_{\beta }(K_0) = K_0 + (\beta /2) V^{(3)}(\delta \beta K_0) K_0^2 \end{aligned}$$

for some \(\delta =\delta (K_0)\in (0,1)\). Recalling that we assumed \(\beta <1\), we get the bound

$$\begin{aligned} V^\prime _\beta (K_0) \ge K_0 - (\beta /2) M_* K_0^2 \ge K_0/2. \end{aligned}$$

Similarly, we have that

$$\begin{aligned} - V'_\beta (-K_0) \ge K_0/2. \end{aligned}$$

On the other hand, recalling the convexity and the non-negativity of V, we have that

$$\begin{aligned} V_\beta (K_1) \ge V_\beta (K_2) + V_\beta ^\prime (K_2)(K_1-K_2) \ge V_\beta ^\prime (K_2) (K_1-K_2), \end{aligned}$$

for any \(K_1,K_2\in \mathbb {R}\). As a consequence, for each \(\eta \in \mathbb {R}\), taking \(K_1=\eta \) and \( K_2= \textrm{sgn}(\eta ) K_0\) in the above bound, and noting that \(\textrm{sgn}(\eta ) V_\beta ^\prime (\textrm{sgn}(\eta ) K_0)\ge K_0/2\ge 2{\textsf{b}}^{-1}|\lambda |\), while the non-negativity of \(V_\beta \) covers the case \(|\eta |\le K_0\), we have

$$\begin{aligned} V_\beta (\eta ) \ge 2{\textsf{b}}^{-1}|\lambda |(|\eta |-K_0). \end{aligned}$$

Hence, the partition function has the bound

$$\begin{aligned} Z_{\beta ,{\textsf{b}},\lambda } = \int _{\mathbb {R}} e^{-\mathsf bV_\beta (\eta ) + \lambda \eta } d\eta \lesssim \int _{\mathbb {R}} e^{-2|\lambda \eta | + \lambda \eta } d\eta . \end{aligned}$$

In particular, the density function \(\varphi _{\beta ,\mathsf b,\lambda }(\eta ):=\exp ( -\mathsf bV_\beta (\eta )+\lambda \eta )\) inside the integral of \(Z_{\beta ,{\textsf{b}},\lambda }\) is bounded by an \(L^1(\mathbb {R})\)-function, which is independent of \(\beta \). Therefore, noting that \(\varphi _{\beta ,{\textsf{b}},\lambda }(\eta )\rightarrow \exp (-\mathsf b|\eta |^2/2 + \lambda \eta )\) as \(\beta \rightarrow 0\) for each \(\eta \in \mathbb {R}\), we have

$$\begin{aligned} \lim _{\beta \rightarrow 0} {Z_{\beta ,\textsf{b},\lambda }} = \int _{\mathbb {R}} \lim _{\beta \rightarrow 0}\varphi _{\beta ,{\textsf{b}},\lambda }(\eta )d\eta = \sqrt{2\pi /{\textsf{b}}}\, e^{\lambda ^2/(2{\textsf{b}})}, \end{aligned}$$

by the dominated convergence theorem.

Now, we estimate the exponential moment

$$\begin{aligned} E_{\nu _{\beta ,{\textsf{b}},\lambda }}\big [ e^{\gamma |\eta _j| }\big ] = \frac{1}{Z_{\beta ,{\textsf{b}},\lambda }} \int _{\mathbb {R}} e^{ - \mathsf bV_\beta (\eta _j) + \gamma |\eta _j|+ \lambda \eta _j} d\eta _j. \end{aligned}$$

By repeating the same argument for the bound of the partition function, we see that the density \(\exp \big (-\mathsf bV_\beta (\eta _j)+ \gamma |\eta _j|+\lambda \eta _j \big ) \) is bounded by an \(L^1(\mathbb {R})\) function which is independent of \(\beta \). Hence, combining with the boundedness of the partition function, we complete the proof. \(\square \)
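The limiting value of the partition function is a standard Gaussian integral, \(\int _{\mathbb {R}} e^{-{\textsf{b}}\eta ^2/2+\lambda \eta }\,d\eta =\sqrt{2\pi /{\textsf{b}}}\,e^{\lambda ^2/(2{\textsf{b}})}\), obtained by completing the square. A small numerical sketch (with illustrative values \({\textsf{b}}=1\), \(\lambda =1/2\), and a composite Simpson rule on a truncated domain) confirms this:

```python
import math

b, lam = 1.0, 0.5  # illustrative parameter values, for the check only

def simpson(f, a, c, n=2000):
    """Composite Simpson rule on [a, c] with an even number n of subintervals."""
    h = (c - a) / n
    s = f(a) + f(c)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# truncate the integral to [-30, 30]; the Gaussian tail beyond is negligible
numeric = simpson(lambda x: math.exp(-b * x * x / 2 + lam * x), -30.0, 30.0)
closed_form = math.sqrt(2 * math.pi / b) * math.exp(lam * lam / (2 * b))
```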

Example 2.3

Here we consider the case where the nonlinear function V is given by the so-called FPU-\(\alpha \) potential, which has the form \(V(\eta )=\eta ^2/2+ \alpha \eta ^3 + \eta ^4/4\) with \(\alpha \in \mathbb {R}\). This potential is convex if, and only if, \(3\alpha ^2 \le 1\), and it satisfies all the conditions in Assumption 2.1. Moreover, the partition function \(Z_{\beta ,{\textsf{b}},\lambda }\) is finite if \(\beta ^{-1}\ge 24{\textsf{b}}^{-1}|\lambda \alpha |\) and the uniform moment bound (2.4) holds with \(\beta _c={\textsf{b}}(24(|\lambda |+\gamma )|\alpha |)^{-1}\).
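The convexity criterion follows from the discriminant of \(V''\); a quick numerical sanity check (not part of the proof):

```python
import numpy as np

# V(x) = x^2/2 + alpha*x^3 + x^4/4  gives  V''(x) = 1 + 6*alpha*x + 3*x^2,
# a quadratic in x with positive leading coefficient; it is nonnegative for
# all x iff its discriminant 36*alpha^2 - 12 <= 0, i.e. iff 3*alpha^2 <= 1.
def fpu_convex(alpha, grid=np.linspace(-10, 10, 100001)):
    return bool(np.all(1 + 6 * alpha * grid + 3 * grid**2 >= 0))

print(fpu_convex(0.5), fpu_convex(0.6))  # 3*(0.5)^2 <= 1, while 3*(0.6)^2 > 1
```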

Example 2.4

As another example, the Toda lattice potential \(V(\eta )=e^{-\eta }-1+\eta \) satisfies all the conditions in Assumption 2.1 and its invariant measure is a log-gamma distribution, which is well-defined when \(\beta ^{-1}> {\textsf{b}}^{-1}|\lambda |\). Moreover, \(E_{\nu _{\beta ,{\textsf{b}},\lambda }}[e^{\gamma |\eta |}]\) is bounded by a constant independent of \(\beta \), provided \(\beta ^{-1}>{\textsf{b}}^{-1}(\gamma +|\lambda |)\). In other words, the assertion (2.4) holds with \(\beta _c={\textsf{b}}(\gamma +|\lambda |)^{-1}\).
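The uniformity in \(\beta \) of this exponential moment can be illustrated numerically (a sketch only; the rescaling \(V_\beta (\eta )=\beta ^{-2}V(\beta \eta )\), \({\textsf{b}}=1\), and the particular values of \(\lambda ,\gamma \) below are assumptions made for the experiment):

```python
import numpy as np

# Exponential moment E[exp(gamma*|eta|)] under nu_{beta,1,lambda} for the
# Toda potential V(eta) = exp(-eta) - 1 + eta, with the assumed rescaling
# V_beta(eta) = beta^{-2} V(beta*eta); here 1/beta > gamma + |lambda| holds.
LAM, GAM = 0.3, 0.4
eta = np.linspace(-40.0, 40.0, 400001)

def moment(beta):
    Vb = (np.exp(-beta * eta) - 1 + beta * eta) / beta**2
    w = np.exp(-Vb + LAM * eta)                 # unnormalized density
    return np.sum(w * np.exp(GAM * np.abs(eta))) / np.sum(w)

vals = [moment(b) for b in (0.05, 0.1, 0.2)]
print(vals)  # all of the same order: bounded uniformly in beta
```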

In the sequel, we consider fixed values of \({\textsf{b}},\lambda \) and take \(\beta \) sufficiently small, so that the measure \(\nu _{\beta ,{\textsf{b}},\lambda }\) is well-defined and the uniform exponential moment bound (2.4) holds with \(\gamma =2\gamma _V\). This uniform exponential moment bound, combined with the at most exponential growth of the derivatives of the potential, will be used to bound the error terms of Taylor expansions. For instance, by Taylor’s theorem, we have

$$\begin{aligned} \xi _j = \eta _j + \frac{1}{2}V^{(3)}(0) \beta \eta _j^2 + \varepsilon _j, \quad \varepsilon _j=\frac{1}{3!}\beta ^2 V^{(4)}(\delta \beta \eta _j) \eta _j^3, \end{aligned}$$
(2.5)

for some \(\delta =\delta (\eta _j)\in (0,1)\). Recall that the derivative \(V^{(4)}(\cdot )\) has at most exponential growth, which yields the bound \(\varepsilon _j \lesssim \beta ^2 e^{2\gamma _V |\eta _j|}\) where \(\gamma _V\) is the constant in Assumption 2.1. Thus, we have the bound

$$\begin{aligned} E_{\nu _{\beta ,{\textsf{b}},\lambda }}[\varepsilon _j] \le \beta ^2 E_{\nu _{\beta ,{\textsf{b}},\lambda }}\big [ e^{2\gamma _V|\eta _j|} \big ] \lesssim \beta ^2, \end{aligned}$$

where we used (2.4) in the second inequality. Such an argument will be used repeatedly to show that the error terms of the Taylor expansion are negligible, by taking \(\beta \) sufficiently small. (See for example the statement after (4.4) below.)
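To see the expansion (2.5) concretely, consider the FPU-\(\alpha \) potential of Example 2.3 and assume \(\xi _j=V_\beta '(\eta _j)=\beta ^{-1}V'(\beta \eta _j)\) (the rescaling is an assumption of this sketch); then \(V^{(3)}(0)=6\alpha \) and the remainder is exactly \(\beta ^2\eta _j^3\):

```python
import numpy as np

# For V(x) = x^2/2 + alpha*x^3 + x^4/4 we have V'(x) = x + 3*alpha*x^2 + x^3,
# hence (assuming V_beta(x) = beta^{-2} V(beta*x))
#   xi = beta^{-1} V'(beta*eta) = eta + 3*alpha*beta*eta^2 + beta^2*eta^3,
# so the remainder eps in (2.5) equals beta^2 * eta^3 for this potential.
ALPHA, ETA = 0.3, 1.7

def remainder_over_beta2(beta):
    xi = (beta * ETA + 3 * ALPHA * (beta * ETA) ** 2 + (beta * ETA) ** 3) / beta
    eps = xi - (ETA + 0.5 * (6 * ALPHA) * beta * ETA**2)
    return eps / beta**2

for beta in (1e-1, 1e-2, 1e-3):
    print(beta, remainder_over_beta2(beta))  # stabilizes near ETA**3
```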

2.3 Main results

In the sequel, we fix \(\lambda \in \mathbb {R}\) and set \({\textsf{b}}=1\) for simplicity. Then we take \(\beta =\beta _n\) depending on the scaling parameter n and we simply write

$$\begin{aligned} \nu _n = \nu _{\beta _n, {\textsf{b}}, \lambda } \end{aligned}$$

for this setting. We fix a time horizon T. We consider our interacting oscillator model \(\{ \eta _j(t);j \in \mathbb {Z} \}\) with generator \(L_n\), starting from the invariant measure \(\nu _n\). We denote by \({D} ([0,T],\Omega ) \) the space of càdlàg (right-continuous and with left limits) trajectories taking values in \(\Omega \). Let \({\mathbb {P}}_n\) be the probability measure on \( {D} ([0,T],\Omega ) \) which is induced by \(\nu _n\) and let \({\mathbb {E}}_n\) denote the expectation with respect to \({\mathbb {P}}_n\). The above interacting oscillator model admits two conserved quantities

$$\begin{aligned} \sum _{j \in \mathbb {Z}} \eta _j \quad \text {and}\quad \sum _{j \in \mathbb {Z}} \zeta _j, \end{aligned}$$

which are referred to as volume and energy, respectively, where

$$\begin{aligned} \zeta _j = V_\beta (\eta _j). \end{aligned}$$
(2.6)

Let \({\mathcal {S}}({\mathbb {R}})\) be the space of Schwartz functions and \(\mathcal {S}'({\mathbb {R}})\) its dual, i.e. the set of linear continuous functionals defined on \({\mathcal {S}}({\mathbb {R}})\) and taking real values. Let \(D([0,T],{\mathcal {S}}'({\mathbb {R}}))\) be the space of càdlàg (right-continuous and with left limits) trajectories in \({\mathcal {S}}'({\mathbb {R}})\).

Then, as natural objects to investigate, we introduce the volume and the energy fluctuation fields as elements of \( D([0,T],\mathcal {S}^\prime (\mathbb {R}))\) that are defined for each \(\varphi \in \mathcal {S}(\mathbb {R})\) in the following manner:

$$\begin{aligned} \mathcal {V}^n_t (\varphi ) = \frac{1}{\sqrt{n}} \sum _{j \in \mathbb {Z}} \overline{\eta }_j(t) T^-_{f_1t} \varphi ^n_j, \end{aligned}$$
(2.7)

and

$$\begin{aligned} \mathcal {E}^n_t (\varphi ) = \frac{1}{\sqrt{n}} \sum _{j \in \mathbb {Z}} \overline{\zeta }_j(t) T^-_{f_2t} \varphi ^n_j. \end{aligned}$$
(2.8)

Here we used the bar notation over a random variable to mean the deviation from its expectation with respect to the invariant measure \(\nu _n\): \(\overline{\eta }_j=\eta _j - E_{\nu _n}[\eta _j]\), for instance. In addition, \(T^-_\cdot \) denotes a shift operator \(T^-_{v}\varphi ^n_j = \varphi ^n_{j-v}\) for each \(v\in \mathbb {R}\), and \(\varphi ^n_j = \varphi (j/n)\). In the above definition, \(f_1=f_1(n)\) and \(f_2=f_2(n)\) are constants depending on n, which may take different values for volume and energy.
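A plain static consistency check for this normalization (a sketch; the i.i.d. unit-variance configuration and the Gaussian test function are assumptions made for illustration): at \(t=0\), the variance of \(\mathcal {V}^n_0(\varphi )\) is the Riemann sum \(n^{-1}\sum _j \varphi (j/n)^2\), which converges to \(\Vert \varphi \Vert _{L^2(\mathbb {R})}^2\).

```python
import numpy as np

# Variance of the volume field at t = 0 when the eta_j are i.i.d. with unit
# variance: Var(V^n_0(phi)) = (1/n) * sum_j phi(j/n)^2, a Riemann sum for
# the squared L^2 norm of phi.
n = 500
j = np.arange(-10 * n, 10 * n + 1)   # truncation of the sum; phi decays fast
phi = lambda x: np.exp(-x**2)
var = np.sum(phi(j / n) ** 2) / n
print(var, np.sqrt(np.pi / 2))       # ||exp(-x^2)||_{L^2}^2 = sqrt(pi/2)
```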

In what follows, the velocities \(f_1\) and \(f_2\) are carefully calibrated, depending on the time scale \(n^a\), the weak asymmetry \(\alpha _n\) and the inverse temperature \(\beta _n\), which also depends on n. Our first result is concerned with the limiting behavior of the pair of fluctuation fields \(\mathcal {Z}^n=(\mathcal {V}^n,\mathcal {E}^n)\).

Theorem 2.5

(Linear fluctuations). Let \(\mathcal {V}^n\) and \(\mathcal {E}^n\) be the volume and the energy fluctuation fields defined by (2.7) and (2.8), respectively. We take \(\theta (n)=n^2\), \(\alpha _n=O(n^{-\kappa })\), \(\beta _n=O(n^{-\delta })\) and \((f_1,f_2)=(2\theta (n)\alpha _n,0)\), and assume \(\kappa > 1/2\) and \(\kappa + \delta > 1\). Moreover, assume \(\lambda =0\). Then, the pair of fluctuation fields \((\mathcal {V}^n, \mathcal {E}^n)\) converges in distribution in \(D([0,T],\mathcal {S}^\prime (\mathbb {R})^2)\) to some \((u^1,u^2)\), which satisfies the following system of uncorrelated stochastic heat equations with additive noise:

$$\begin{aligned} \begin{aligned} \partial _t u^i = \frac{1}{2}\partial _x^2 u^i + \sigma ^i \partial _x \dot{W}^i. \end{aligned} \end{aligned}$$

Here \(\dot{W}^1=\dot{W}^1(t,x)\) and \(\dot{W}^2=\dot{W}^2(t,x)\) denote independent space-time white-noises and we set \(\sigma ^1=1\) and \(\sigma ^2=1/\sqrt{2}\).

Remark 2.6

When \(\lambda \ne 0\), a diverging term survives in the time evolution of the pair of fluctuation fields; see the terms of degree one in the second line of (3.5). At present we do not know how to treat these terms, and therefore we impose the condition \(\lambda =0\).

Remark 2.7

The analysis in the sub-diffusive time scale, i.e., \(\theta (n)=n^a\) with \(a<2\), and the derivation of some linear drift term, as explored in [1, Remark 2], are possible extensions. However, this is not necessary for our goal of deriving anomalous behavior, so we do not pursue these cases here.

In contrast, when \(\kappa \le 1/2\), we expect to derive a non-trivial SPDE. Indeed, in this sub-critical regime, we can choose distinct moving frames \(f_1\) and \(f_2\) so that nonlinear terms survive in the limit. These nonlinear terms, however, cannot be written in terms of the fluctuation fields, since we might have different frame velocities for volume and energy. To avoid this difficulty, we instead consider linear combinations of the volume and energy fluctuations with a common velocity \(v_n\), as follows:

$$\begin{aligned} \mathcal {X}^n_t(\mathfrak {u}_n;\varphi ) = \frac{1}{\sqrt{n}} \sum _{j\in {\mathbb {Z}}} \big ( \overline{\eta }_j(t) + \mathfrak {u}_n \overline{\zeta }_j(t) \big ) T^-_{v_nt} \varphi ^n_j. \end{aligned}$$
(2.9)

We will choose the constant \(\mathfrak {u}_n\) and the velocity \(v_n\) in such a way that we can characterize the limiting behavior of the new field \(\mathcal {X}^n\). We show that this can be accomplished in the following two cases:

$$\begin{aligned} \begin{aligned}&\text {(i) } \mathfrak {u}_n=\mathfrak {u}_n^1:=c_3\beta _n, \quad v_n=v^1_n :=\theta (n)\alpha _n (2+2\lambda c_3 \beta _n), \\&\text {(ii) } \mathfrak {u}_n=\mathfrak {u}_n^2:=-1/\lambda ~( \lambda \ne 0), \quad v_n= v^2_n:=0. \end{aligned} \end{aligned}$$
(2.10)

Here and in what follows, we use the short-hand notation

$$\begin{aligned} c_k=V^{(k)}(0). \end{aligned}$$
(2.11)

Remark 2.8

Note that we can consider the fluctuation fields associated with \(\zeta _j + \mathfrak {u}_n\eta _j\), instead of (2.9). Then, the role of \(\mathfrak {u}_n\) becomes that of \(\mathfrak {u}_n^{-1}\), so that the condition \(\lambda \ne 0\) in the case (ii) can be removed. For this reason, we will consider the fluctuation fields multiplied by \(\lambda \) in the case (ii).

Recall the Taylor expansion (2.5). Then, we notice that the fluctuation field of the conserved quantity \(\eta _j + c_3\beta _n \zeta _j\) in the case (i) roughly matches that of \(\xi _j=V'_\beta (\eta _j)\). On the other hand, for the case (ii), recall that \(E_{\nu _n}[\xi _j]=\lambda \) and that we set \({\textsf{b}}=1\) throughout. Then, we have

$$\begin{aligned} \begin{aligned} (\overline{\xi }_j)^2 = \lambda ^2 -2\lambda (\eta _j - \lambda ^{-1} \zeta _j) + \varepsilon _j, \end{aligned} \end{aligned}$$

where \(\varepsilon _j\) is an error term which satisfies the bound \(\varepsilon _j \lesssim \beta _n e^{2\gamma _V | \eta _j|}\). Hence the fluctuations of \(\overline{\eta }_j -\lambda ^{-1}\overline{\zeta }_j\) asymptotically match those of \((\overline{\xi }_j)^2\) when \(\beta _n\) is small. Our main theorems are concerned with the derivation of the SBE and the 3/2-Lévy anomalous diffusion for the fluctuations of \(\xi _j\) and \((\overline{\xi }_j)^2\), respectively. Let us recall the notion of the stationary energy solution of the stochastic Burgers equation which was introduced in [19]. Let \(\nu ,D > 0\) and \(\Lambda \in \mathbb {R}\) be fixed constants and consider the \((1+1)\)-dimensional stochastic Burgers equation

$$\begin{aligned} \partial _t u = \nu \partial _x^2 u + \Lambda \partial _x u^2 + \sqrt{D} \partial _x \dot{W}. \end{aligned}$$
(2.12)

We begin with the definition of stationarity.

Definition 2.9

We say that an \(\mathcal {S}^\prime (\mathbb {R})\)-valued process \(u = \{ u_t: t \in [0,T] \} \) satisfies condition (S) if for all \(t \in [0,T]\), the random variable \(u_t\) has the same distribution as the space white-noise with variance \(D/(2\nu )\).

For a process \(u = \{ u_t: t \in [0,T]\}\) satisfying the condition (S), we define

$$\begin{aligned} \mathcal {A}^\varepsilon _{ s, t } (\varphi ) = \int _s^t \int _{\mathbb {R} } u_r (\iota _\varepsilon (x; \cdot ) )^2 \partial _x \varphi (x ) dx dr. \end{aligned}$$
(2.13)

for every \(0 \le s < t \le T \), \(\varphi \in \mathcal {S} (\mathbb {R} ) \) and \(\varepsilon > 0 \). Here we defined the function \(\iota _\varepsilon (x; \cdot ): \mathbb {R} \rightarrow \mathbb {R} \) by \(\iota _{ \varepsilon } (x; y) = \varepsilon ^{ - 1 } \textbf{1}_{ [ x, x + \varepsilon ) } (y) \) for each \(x \in \mathbb {R} \) and \(\varepsilon >0\). Although the function \(\iota _\varepsilon (x;\cdot )\) does not belong to the Schwartz space, the quantity (2.13) is well-defined when u satisfies the condition (S).
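The pairing with \(\iota _\varepsilon (x;\cdot )\) is simply the local average \(\varepsilon ^{-1}\int _x^{x+\varepsilon }u(y)dy\). A minimal numerical sketch (with a smooth u, an assumption made purely for illustration, since in (2.13) u is a distribution):

```python
import numpy as np

# Pairing u with iota_eps(x; .) gives the local average
# eps^{-1} * int_x^{x+eps} u(y) dy, which converges to u(x) at rate O(eps)
# for smooth u.  Here u = sin and x = 0.7.
x = 0.7
errs = []
for eps in (0.1, 0.01, 0.001):
    y = np.linspace(x, x + eps, 10001)
    avg = np.mean(np.sin(y))          # ~ eps^{-1} * int_x^{x+eps} sin(y) dy
    errs.append(abs(avg - np.sin(x)))
print(errs)  # decreases roughly linearly in eps
```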

Definition 2.10

Let \(u = \{ u_t:t \in [0,T]\}\) be a process satisfying the condition (S). We say that the process u satisfies the energy estimate (EC) if there exists a constant \(\kappa > 0\) such that for any \(\varphi \in \mathcal {S} (\mathbb {R} )\), any \(0 \le s < t \le T\) and any \(0< \delta< \varepsilon < 1 \),

$$\begin{aligned} {{\mathbb {E}}} \big [ \big | \mathcal {A}^\varepsilon _{ s, t } (\varphi ) - \mathcal {A}^\delta _{ s, t } (\varphi ) \big |^2 \big ] \le \kappa \varepsilon (t-s) \Vert \partial _x \varphi \Vert ^2_{ L^2(\mathbb {R})}. \end{aligned}$$

Here \(\mathbb {E}\) denotes the expectation with respect to the measure of a probability space where the process u lives.

Then the following result is proved in [19].

Proposition 2.11

Assume \(\{ u_t:t\in [0,T]\} \) satisfies the conditions (S) and (EC). Then there exists an \(\mathcal {S}^\prime (\mathbb {R} )\)-valued process \(\{ \mathcal {A}_t: t \in [0, T ] \} \) with continuous trajectories such that

$$\begin{aligned} \mathcal {A}_t (\varphi ) = \lim _{ \varepsilon \rightarrow 0 } \mathcal {A}^\varepsilon _{ 0, t } (\varphi ), \end{aligned}$$

in \(L^2 \) for every \(t \in [0,T]\) and \(\varphi \in \mathcal {S}(\mathbb {R})\).

In view of the last proposition, interpreting the singular term \(\partial _x u^2 \) as the process \(\mathcal {A}\), we can define a solution of (2.12) as follows.

Definition 2.12

We say that an \(\mathcal {S}^\prime (\mathbb {R})\)-valued process \(u=\{u (t, \cdot ): t\in [0,T] \}\) is a stationary energy solution of the stochastic Burgers equation (2.12) if

  1. (1)

    The process u satisfies the conditions (S) and (EC).

  2. (2)

    For all \(\varphi \in \mathcal {S} (\mathbb {R} )\), the process

    $$\begin{aligned} u_t(\varphi ) - u_0 (\varphi ) - \nu \int _0^t u_s (\partial _x^2 \varphi ) ds + \Lambda \mathcal {A}_t (\varphi ), \end{aligned}$$

    is a martingale with quadratic variation \(D \Vert \partial _x \varphi \Vert ^2_{ L^2 (\mathbb {R} ) } t \) where \(\mathcal {A}_\cdot \) is the process obtained in Proposition 2.11.

  3. (3)

    For all \(\varphi \in \mathcal {S} (\mathbb {R} )\), writing \(\hat{u}_t = u_{T-t}\) and \(\hat{ \mathcal {A} }_t = - (\mathcal {A}_T - \mathcal {A}_{ T- t })\), the process

    $$\begin{aligned} \hat{u}_t (\varphi ) - \hat{u}_0 (\varphi ) - \nu \int _0^t \hat{u}_s (\partial _x^2 \varphi ) ds + \Lambda \hat{\mathcal {A}}_t (\varphi ), \end{aligned}$$

    is a martingale with quadratic variation \(D \Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})}t\).

Then there exists a unique-in-law stationary energy solution of (2.12). Existence was shown in [19] and then uniqueness was proved in [24].

Theorem 2.13

(The SBE regime). Let \(\mathfrak {u}_n=\mathfrak {u}_n^1=c_3 \beta _n\) and \(v_n =v_n^1= \theta (n)\alpha _n (2+2\lambda c_3\beta _n)\). We consider the diffusive scaling \(\theta (n)=n^2\), and assume \(\lim _{n\rightarrow \infty }\sqrt{n}\alpha _n\beta _n =1\) and \(\lim _{n\rightarrow \infty } n\beta _n^4=0\). Moreover, we assume \(c_4-c_3^2=0\). Let \(\mathcal {X}^n\) be the fluctuation field defined by (2.9). Then, \(\mathcal {X}^n\) converges in distribution in \(D([0,T],\mathcal {S}^\prime (\mathbb {R}))\) to the stationary energy solution of the stochastic Burgers equation

$$\begin{aligned} \begin{aligned} \partial _t u = \frac{1}{2} \partial _x^2 u - c_3 \partial _x u^2 + \partial _x \dot{W}, \end{aligned} \end{aligned}$$
(2.14)

where \(\dot{W}=\dot{W}(t,x)\) denotes the one-dimensional space-time white-noise.

Remark 2.14

If \(c_3=0\), then \({\mathfrak {u}}_n^1=0\) above and the quantity we are looking at is just the volume, but in that case the limit is given by an Ornstein–Uhlenbeck process. We also observe that, independently of the value of \(c_3\), if \(\lim _{n\rightarrow \infty }\sqrt{n}\alpha _n\beta _n =0\), then the limit is again an Ornstein–Uhlenbeck process.

Remark 2.15

Note that in the Taylor expansion of \(\xi _j\), which is defined in (2.1), the quantity \(c_4-c_3^2\) is the coefficient of the cubic term, proportional to \(\eta _j^3\); see (4.4) below. If the condition \(c_4-c_3^2=0\) does not hold, then another term may appear in the limit. However, it is not clear how to write this cubic term in terms of the volume and energy fluctuation fields; we therefore avoid the issue by choosing that coefficient to be zero, but it is certainly an interesting question that we leave for future work.

Our second result is concerned with the case (ii) of (2.10). For \(j\in \mathbb {Z}\), we define the correlation function

$$\begin{aligned} S_j(t) = \frac{1}{2} \mathbb {E}_n \big [ \big (\overline{\zeta }_j(0) -\lambda \overline{\eta }_j(0) \big ) \big (\overline{\zeta }_j(t) -\lambda \overline{\eta }_j(t) \big ) \big ]. \end{aligned}$$
(2.15)

We denote by \(k_* \in \{ 3,4,\ldots \}\cup \{\infty \}\) the smallest integer \(k\ge 3\) such that \(V^{(k)}(0)\ne 0\). For instance, when V is the FPU-\(\alpha \) potential given in Example 2.3, we have \(k_*=3\) when \(\alpha \ne 0\), whereas \(k_*=4\) when \(\alpha =0\). Note that when \(k_*=\infty \) the nonlinear function \(V(\eta )\) is proportional to the purely harmonic potential \(\eta ^2/2\), and moreover for any \(\beta \in {\mathbb {R}}\) we have \(\beta ^{-2}V(\beta \cdot )=V(\cdot )\). The next result was shown in [8] when the nonlinear potential V is the purely harmonic one given above, and we extend it to generic potentials.

Theorem 2.16

(The 3/2-Lévy regime). Let \(\theta (n)=n^a\) and \(\alpha _n=\gamma n^{-\kappa }\) with \(\gamma >0\), \(\kappa \ge 0\) and \(a=\min \{ 3/2+3\kappa /2, 2\}\). Assume \(\beta _n = O(n^{-b})\) with \(b\ge 1/(2k_*-4)\) when \(k_*\ge 4\) whereas \(b\ge 1/4\) when \(k_*=3\). Let \(\mathcal {X}^n\) be the fluctuation field defined by

$$\begin{aligned} \mathcal {X}^n_t(\varphi ) = \frac{1}{\sqrt{n}}\sum _{j\in \mathbb {Z}} (\overline{\zeta }_j - \lambda \overline{\eta }_j) \varphi (j/n), \end{aligned}$$

for each \(\varphi \in \mathcal {S}(\mathbb {R})\). Let \(f,g\) be smooth functions on \(\mathbb {R}\) with compact support. Then,

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{2} {\mathbb {E}}_n \big [\mathcal {X}^n_0(g) \mathcal {X}^n_t(f) \big ] = \iint _{\mathbb {R}^2} f(x)g(y) P^{\gamma ,\kappa }_t(x-y) dx dy, \end{aligned}$$

where \(\{P^{\gamma ,\kappa }_t(x):t\ge 0, x\in \mathbb {R}\}\) is the fundamental solution of the equation \(\partial _t u= \mathbb {L}_{\gamma ,\kappa }u\), with

$$\begin{aligned} \mathbb {L}_{\gamma ,\kappa } = \frac{1}{2}\textbf{1}_{\kappa \ge 1/3}\Delta - \gamma ^{3/2} \textbf{1}_{\kappa \le 1/3} \mathscr {L}, \end{aligned}$$
(2.16)

with \(\mathscr {L}=-\frac{1}{\sqrt{2}}[(-\Delta )^{3/4}-\nabla (-\Delta )^{1/4}]\).

Remark 2.17

We observe that the result of the last theorem shows (as long as the value of b satisfies the assumptions of the theorem) that the second mode has exactly the same behavior as the energy in the case of a harmonic potential \(V(\eta )=\eta ^2/2\), i.e. the same diagram as in Fig. 1.

Remark 2.18

In [8], a perturbation given by a quartic function, namely \(V(\eta )=\eta ^2/2+ \overline{\gamma } \eta ^4\), is studied, and the authors proved that the same equation driven by the operator (2.16) is derived provided \(\overline{\gamma }\) decays faster than \(n^{-1/4}\) for large n. Noting that \(\overline{\gamma }=\beta _n^2\), this bound is better than the one we obtained in Theorem 2.16, though their proof relies on the exact form of the potential. Additionally, we expect that Theorem 2.16 remains valid for any order of the quartic perturbation.

Remark 2.19

The interested reader can find a heuristic argument for the value of the critical exponent \(\kappa =1/3\) appearing in the statement of Theorem 2.16 in the introduction of [5]. The critical scale \(\alpha _n=n^{-1/3}\) is obtained by solving a macroscopic differential equation which well approximates the time evolution of the correlation function.

3 Proof of Theorem 2.5

3.1 The martingale decomposition

In the sequel, we define discrete derivative operators as follows.

$$\begin{aligned} \begin{aligned}&\nabla ^{1,n} \varphi ^n_j = n(\varphi ^n_{j+1} - \varphi ^n_{j}), \quad \nabla ^{2,n} \varphi ^n_j = \frac{n}{2}(\varphi ^n_{j+1} - \varphi ^n_{j-1}), \\&\Delta ^n \varphi ^n_j = n^2 (\varphi ^n_{j+1} + \varphi ^n_{j-1} - 2 \varphi ^n_j). \end{aligned} \end{aligned}$$
(3.1)

Here, note that both \(\nabla ^{1,n}\) and \(\nabla ^{2,n}\) approximate the continuous derivative \(\partial _x\), though with different rates of convergence. Indeed, with the help of Taylor's theorem we can show that \(\nabla ^{1,n}\varphi ^n_j-\partial _x\varphi ^n_j=O(n^{-1})\) while \(\nabla ^{2,n}\varphi ^n_j- \partial _x\varphi ^n_j = O(n^{-2})\). In this section, we give a proof of Theorem 2.5, assuming \(\kappa >1/2\), namely, that the strength of the asymmetry is sufficiently weak.
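These two rates can be observed directly (a quick numerical sketch, not part of the proof):

```python
import numpy as np

# Convergence rates of the discrete derivatives in (3.1) at a fixed
# macroscopic point x = j/n: the one-sided gradient nabla^{1,n} is O(1/n),
# the centered gradient nabla^{2,n} is O(1/n^2).
phi, dphi, x = np.sin, np.cos, 0.3
e1, e2 = {}, {}
for n in (10, 100, 1000):
    e1[n] = abs(n * (phi(x + 1 / n) - phi(x)) - dphi(x))                # nabla^{1,n}
    e2[n] = abs((n / 2) * (phi(x + 1 / n) - phi(x - 1 / n)) - dphi(x))  # nabla^{2,n}
print(e1, e2)  # e1 shrinks ~1/n, e2 shrinks ~1/n^2
```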

Our starting point is a martingale decomposition for the pair of fluctuation fields \((\mathcal {V}^n,\mathcal {E}^n)\). Hereafter we set \(\mathcal {Z}^n_t = (\mathcal {V}^n_t, \mathcal {E}^n_t)\) and recall (2.7) and (2.8). To compute the correlation between two martingales associated with \(\mathcal {V}^n\) and \(\mathcal {E}^n\), similarly to [1], we apply Dynkin’s formula, see, for example, Lemma A.1.5.1 of [32], to \(\mathcal {Z}^n_t (\overrightarrow{\varphi }) = \mathcal {V}^n_t(\varphi _1) + \mathcal {E}^n_t(\varphi _2)\) for each \(\overrightarrow{\varphi }=(\varphi _1,\varphi _2)\in \mathcal {S}(\mathbb {R})^2\). Then, we have that

$$\begin{aligned} \mathcal {N}^n_t (\overrightarrow{\varphi }) = \mathcal {Z}^n_t(\overrightarrow{\varphi }) - \mathcal {Z}^n_0 (\overrightarrow{\varphi }) - \int _0^t (\partial _s + L_n)\mathcal {Z}^n_s (\overrightarrow{\varphi })ds, \end{aligned}$$

and \(\mathcal {N}^n_t(\overrightarrow{\varphi })^2 -\langle \mathcal {N}^n(\overrightarrow{\varphi })\rangle _t\), where

$$\begin{aligned} \langle \mathcal {N}^n (\overrightarrow{\varphi })\rangle _t&= \int _0^t \big ( L_n \mathcal {Z}^n_s (\overrightarrow{\varphi })^2 - 2 \mathcal {Z}^n_s(\overrightarrow{\varphi }) L_n \mathcal {Z}^n_s (\overrightarrow{\varphi }) \big ) ds \nonumber \\&= \frac{\theta (n)}{2n^3} \int _0^t \sum _{j \in \mathbb {Z}} (\eta _{j}(s)- \eta _{j+1}(s))^2 \big ( \nabla ^{1,n} T^-_{f_1s} \varphi _1(j/n) \big )^2 ds\nonumber \\&\quad + \frac{\theta (n)}{2n^3} \int _0^t \sum _{j \in \mathbb {Z}} (\zeta _{j}(s) - \zeta _{j+1}(s))^2 \big ( \nabla ^{1,n} T^-_{f_2s} \varphi _2(j/n) \big )^2 ds\nonumber \\&\quad + \frac{\theta (n)}{n^3} \int _0^t \sum _{j \in \mathbb {Z}} (\eta _{j}(s) - \eta _{j+1}(s)) (\zeta _{j}(s) - \zeta _{j+1}(s))\, \nabla ^{1,n} T^-_{f_1s} \varphi _1(j/n)\, \nabla ^{1,n} T^-_{f_2s} \varphi _2(j/n)\, ds \end{aligned}$$
(3.2)

are martingales with respect to the natural filtration of the process. In what follows, we give a generic computation for any \(\lambda \) and at the final step we set \(\lambda =0\) to show Theorem 2.5. Since the limiting measure is product and homogeneous,

$$\begin{aligned} \begin{aligned} \lim _{n\rightarrow \infty } E_{\nu _n}[(\eta _j-\eta _{j+1})(\zeta _j - \zeta _{j+1})]&= E_{\eta _j \sim \mathcal {N}(\lambda ,1)} [(\eta _j-\eta _{j+1})(\eta _j^2/2-\eta _{j+1}^2/2)]\\&= E_{\eta _j \sim \mathcal {N}(\lambda ,1)} [\eta _j^3]-E_{\eta _j \sim \mathcal {N}(\lambda ,1)} [\eta _{j+1}\eta _j^2]\\&=\lambda ^3+3\lambda -\lambda (\lambda ^2+1)=2\lambda . \end{aligned} \end{aligned}$$

Here the limiting procedure is justified as we mentioned in Sect. 2.2. Similarly, we have that

$$\begin{aligned} \begin{aligned} \lim _{n\rightarrow \infty } E_{\nu _n}[(\eta _j-\eta _{j+1})^2]&=2 \textrm{Var}_{\eta _j \sim \mathcal {N}(\lambda ,1)}[\eta _j] =2, \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \lim _{n\rightarrow \infty } E_{\nu _n}[(\zeta _j-\zeta _{j+1})^2]&=\frac{1}{2} E_{\eta _j \sim \mathcal {N}(\lambda ,1)}[\eta _j^4 - \eta _j^2 \eta _{j+1}^2] \\&= \frac{1}{2}[(\lambda ^4+6\lambda ^2+3) - (\lambda ^2+1)^2] = 2\lambda ^2+1, \end{aligned} \end{aligned}$$

where we used the fact that \(E[X^{2m}]=(2m-1)!!\) when X is drawn from the standard normal distribution. In particular, when \(\lambda =0\) the expectation of the third term on the rightmost side of (3.2) vanishes as \(n\rightarrow +\infty \) and the noise terms are diagonalized. In summary, under the diffusive scaling \(\theta (n)=n^2\), we have that

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {E}_n [\langle \mathcal {N}^n(\overrightarrow{\varphi }) \rangle _t] = t \Vert \partial _x \varphi _1\Vert ^2_{L^2(\mathbb {R})} + \frac{2\lambda ^2+1}{2} t \Vert \partial _x \varphi _2\Vert ^2_{L^2(\mathbb {R})} + 2\lambda t \langle \partial _x \varphi _1, \partial _x \varphi _2 \rangle _{L^2(\mathbb {R})}. \nonumber \\ \end{aligned}$$
(3.3)
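The Gaussian moment identities behind these three limiting coefficients can be checked by Gauss–Hermite quadrature (a numerical sketch, independent of the proof):

```python
import numpy as np

# Moments of N(lam, 1) via Gauss-Hermite quadrature, and the limiting
# covariances used above: E[(eta_j - eta_{j+1})^2] = 2,
# E[(zeta_j - zeta_{j+1})^2] = 2*lam^2 + 1 and the cross term 2*lam,
# for independent eta_j, eta_{j+1} ~ N(lam, 1) and zeta = eta^2/2.
lam = 0.7
t, w = np.polynomial.hermite.hermgauss(40)
E = lambda f: np.sum(w * f(lam + np.sqrt(2) * t)) / np.sqrt(np.pi)

m2, m3, m4 = E(lambda x: x**2), E(lambda x: x**3), E(lambda x: x**4)
var_grad_eta  = 2 * (m2 - lam**2)   # = 2
var_grad_zeta = 0.5 * (m4 - m2**2)  # = 2*lam^2 + 1
cross         = m3 - lam * m2       # = lam^3 + 3*lam - lam*(lam^2+1) = 2*lam
print(var_grad_eta, var_grad_zeta, cross)
```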

Now, we compute the action of the generator \(L_n\) on \(\mathcal {Z}^n\). First, for the symmetric part, we get

$$\begin{aligned} \begin{aligned}&\int _0^t \theta (n)S \mathcal {V}^n_s (\varphi )ds = \frac{\theta (n)}{2n^{5/2}} \int _0^t \sum _{j \in \mathbb {Z}} {\overline{\eta }}_j(s) \Delta ^n T^-_{f_1s}\varphi ^n_j ds, \\&\int _0^t \theta (n)S \mathcal {E}^n_s (\varphi )ds = \frac{\theta (n)}{2n^{5/2}} \int _0^t \sum _{j \in \mathbb {Z}} \overline{\zeta }_j(s) \Delta ^n T^-_{f_2s}\varphi ^n_j ds, \end{aligned} \end{aligned}$$

for each \(\varphi \in \mathcal {S}(\mathbb {R})\) where we used the short-hand notation \(\varphi ^n_j=\varphi (j/n)\). In particular, the symmetric part S should always be accelerated by the diffusive scaling \(\theta (n)=n^2\) in order to obtain a non-trivial limit.

Next, we consider the anti-symmetric part of the generator. The action of the anti-symmetric part, after some rearrangement, is calculated as follows.

Lemma 3.1

We have that

$$\begin{aligned} \int _0^t (\partial _s + \theta (n) \alpha _n A) \mathcal {V}^n_s(\varphi ) ds= & {} \frac{\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \bigg (2 \overline{\xi }_j(s) - \frac{f_1(n)}{\theta (n)\alpha _n} \overline{\eta }_j(s) \bigg ) \partial _x T^-_{f_1s} \varphi ^n_jds\\{} & {} + E^{1,n}_t, \end{aligned}$$

and

$$\begin{aligned} \begin{aligned}&\int _0^t (\partial _s + \theta (n) \alpha _n A) \mathcal {E}^n_s(\varphi ) ds \\&\quad = \frac{\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \bigg (\overline{\xi }_{j+1}(s) \overline{\xi }_j(s) + 2 \lambda \overline{\xi }_j - \frac{f_2(n)}{\theta (n)\alpha _n} \overline{\zeta }_j(s) \bigg ) \partial _x T^-_{f_2s} \varphi ^n_jds + E^{2,n}_t, \end{aligned} \end{aligned}$$

where,

$$\begin{aligned} \mathbb {E}_n \bigg [\sup _{0\le t\le T} \big | E^{1,n}_t \big |^2 \bigg ] \lesssim T^2 \frac{\theta (n)^2 \alpha _n^2}{n^6} \quad \quad \text {and}\quad \quad \mathbb {E}_n \bigg [\sup _{0\le t\le T} \big | E^{2,n}_t \big |^2 \bigg ] \lesssim T^2 \frac{\theta (n)^2 \alpha _n^2}{n^4}. \nonumber \\ \end{aligned}$$
(3.4)

Proof

We begin with the computation for \(\mathcal {V}^n\). For each \(\varphi \in \mathcal {S}(\mathbb {R})\), note that

$$\begin{aligned} \begin{aligned}&\int _0^t (\partial _s + \theta (n) \alpha _n A) \mathcal {V}^n_s(\varphi )ds \\&\quad = \frac{\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} 2\overline{\xi }_j(s) \nabla ^{2,n} T^-_{f_1s}\varphi ^n_j ds - \frac{1}{\sqrt{n}} \int _0^t \sum _{j \in \mathbb {Z}} \frac{f_1(n)}{n} \overline{\eta }_j(s) \partial _x T^-_{f_1s}\varphi ^n_j ds \\&\quad = \frac{\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \bigg (2\overline{\xi }_j(s) - \frac{f_1(n)}{\theta (n)\alpha _n} \overline{\eta }_j(s) \bigg ) \partial _x T^-_{f_1s} \varphi ^n_j ds + E^{1,n}_t, \end{aligned} \end{aligned}$$

where \(E^{1,n}_t\) satisfies the desired estimate (3.4). In the last line we replaced the discrete derivative by the continuous one, at a cost of order \(O(\theta (n)^2 \alpha _n^2 n^{-6})\) in variance, which is estimated as follows. Recalling the definition of \(\nabla ^{2,n}\) in (3.1), Taylor's theorem yields \(|\nabla ^{2,n}\varphi ^n_j - \partial _x \varphi ^n_j |\lesssim n^{-2} \Vert \partial _x^3\varphi \Vert _{L^\infty (\mathbb {R})}\), and thus by the Cauchy–Schwarz inequality

$$\begin{aligned} \begin{aligned}&\mathbb {E}_n \bigg [\sup _{0\le t\le T} \bigg | \frac{\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j\in \mathbb {Z}} \overline{\xi }_j(s) T^-_{f_1s} \big (\nabla ^{2,n} \varphi ^n_j - \partial _x \varphi ^n_j \big ) ds \bigg |^2 \bigg ] \\&\quad \lesssim \frac{\theta (n)^2\alpha _n^2T}{n^3} \int _0^T \sum _{j \in \mathbb {Z}} \mathbb {E}_n \big [ \overline{\xi }_j(s)^2 \big ] T^-_{f_1s} \big (\nabla ^{2,n} \varphi ^n_j - \partial _x \varphi ^n_j \big )^2 ds \lesssim T^2 \frac{\theta (n)^2\alpha _n^2}{n^{6}}. \end{aligned} \end{aligned}$$

Here we used the fact that \(E_{\nu _n} [(\overline{\xi }_j)^2]\) is bounded by a constant which is independent of n by Lemma 2.2. On the other hand, for the energy fluctuation field we have that

$$\begin{aligned} \begin{aligned}&\int _0^t (\partial _s + \theta (n)\alpha _n A)\mathcal {E}^n_s(\varphi )ds \\&\quad = \frac{\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \xi _j(s) \xi _{j+1}(s) \nabla ^{1,n} T^-_{f_2s} \varphi ^n_j ds - \frac{1}{\sqrt{n}} \int _0^t \sum _{j \in \mathbb {Z}} \frac{f_2(n)}{n}\overline{\zeta }_j(s)\partial _x T^-_{f_2s} \varphi ^n_j ds\\&\quad = \frac{\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \bigg ( \overline{\xi }_{j}(s) \overline{\xi }_{j+1}(s) + \lambda (\overline{\xi }_j(s) + \overline{\xi }_{j+1}(s)) \bigg ) \nabla ^{1,n} T^-_{f_2s} \varphi ^n_j ds \\&\qquad - \frac{1}{\sqrt{n}} \int _0^t \sum _{j \in \mathbb {Z}} \frac{f_2(n)}{n}\overline{\zeta }_j(s)\partial _x T^-_{f_2s} \varphi ^n_j ds. \end{aligned} \end{aligned}$$

Similarly to the volume fluctuation field, we replace the discrete derivative by the continuous one; since the variables are all centered, the error term of this replacement satisfies the bound in the assertion. Note that the cost of this replacement is worse than the previous one, since now \(|\nabla ^{1,n}\varphi ^n_j - \partial _x \varphi ^n_j|\lesssim n^{-1}\). Finally, by summation by parts, the last display equals

$$\begin{aligned} \begin{aligned}&\quad \frac{\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \bigg ( \overline{\xi }_{j}(s) \overline{\xi }_{j+1}(s) + 2\lambda \overline{\xi }_j(s) - \frac{f_2(n)}{\theta (n)\alpha _n} \overline{\zeta }_j(s) \bigg ) \partial _x T^-_{f_2s} \varphi ^n_j ds + E^{2,n}_t, \end{aligned} \end{aligned}$$

where \(E^{2,n}_t\) satisfies the bound (3.4). This is justified as follows:

$$\begin{aligned} \begin{aligned}&\mathbb {E}_n \bigg [\sup _{0\le t \le T} \bigg | \int _0^t \frac{\theta (n)\alpha _n}{n^{3/2}}\sum _{j\in \mathbb {Z}} (\overline{\xi }_{j+1}(s)- \overline{\xi }_{j}(s)) \partial _x T^-_{f_2s} \varphi ^n_j ds \bigg |^2 \bigg ]\\&\quad = \mathbb {E}_n \bigg [\sup _{0\le t \le T} \bigg | \int _0^t \frac{\theta (n)\alpha _n}{n^{5/2}} \sum _{j\in \mathbb {Z}} \overline{\xi }_{j}(s) T^-_{f_2s} \nabla ^{1,n} \partial _x \varphi ^n_j ds \bigg |^2 \bigg ]\\&\quad \le \frac{T\theta (n)^2\alpha _n^2}{n^5} \int _0^T \mathbb {E}_n \bigg [ \bigg ( \sum _{j \in \mathbb {Z}} \overline{\xi }_j(s) \nabla ^{1,n} \partial _x T^-_{f_2s} \varphi ^n_j \bigg )^2 \bigg ] ds \lesssim \frac{T^2\theta (n)^2\alpha _n^2}{n^4} \Vert \partial _x^2 \varphi \Vert ^2_{L^2(\mathbb {R})}. \end{aligned} \end{aligned}$$

\(\square \)
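The discrete summation by parts used in the proof is the elementary identity below; a generic numerical check for sequences vanishing outside a finite window:

```python
import numpy as np

# Summation by parts on Z: if psi, phi vanish outside a finite window, then
#   sum_j (psi_{j+1} - psi_j) * phi_j = - sum_j psi_j * (phi_j - phi_{j-1}),
# since the boundary terms vanish.
rng = np.random.default_rng(0)
psi = np.zeros(100); psi[10:90] = rng.normal(size=80)  # compact support
phi = np.zeros(100); phi[10:90] = rng.normal(size=80)

lhs = np.sum((psi[1:] - psi[:-1]) * phi[:-1])
rhs = -np.sum(psi[1:] * (phi[1:] - phi[:-1]))
print(abs(lhs - rhs))  # equal up to floating-point rounding
```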

With these representations for the action of the generator at hand, we have that

$$\begin{aligned} \begin{aligned}&\int _0^t (\partial _s + \theta (n) \alpha _n A) \mathcal {Z}^n_s (\overrightarrow{\varphi }) ds \\&\quad = \frac{\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \bigg (2\overline{\xi }_j(s) - \frac{f_1(n)}{\theta (n)\alpha _n} \overline{\eta }_j(s) \bigg ) \partial _x T^-_{f_1s} \varphi ^{n,1}_j ds \\&\qquad + \frac{\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \bigg ( \overline{\xi }_{j}(s) \overline{\xi }_{j+1}(s) + 2\lambda \overline{\xi }_j (s) - \frac{f_2(n)}{\theta (n)\alpha _n} \overline{\zeta }_j(s) \bigg ) \partial _x T^-_{f_2s} \varphi ^{n,2}_j ds + E^n_t, \end{aligned} \end{aligned}$$
(3.5)

where \(E^n_t\) satisfies the bound

$$\begin{aligned} \mathbb {E}_n\bigg [ \sup _{0\le t\le T} \big | E^n_t \big |^2 \bigg ] \lesssim T^2 \frac{\theta (n)^2\alpha _n^2}{n^4}, \end{aligned}$$

which vanishes as \(n\rightarrow \infty \) provided \(\alpha _n=o_n(1)\) and \(\theta (n)=n^2\). Now, recall the assumptions in Theorem 2.5 and the Taylor expansion (2.5). We note that the first term on the right-hand side of (3.5) is estimated, for any \(\varphi \), as

$$\begin{aligned} \mathbb {E}_n \bigg [\sup _{0\le t \le T} \bigg | \frac{2\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j\in \mathbb {Z}} (\overline{\xi }_j -\overline{\eta }_j) (s) \partial _x T^-_{f_1s} \varphi ^n_j ds \bigg |^2 \bigg ] \lesssim T^2 \frac{\beta _n^2\theta (n)^2\alpha _n^2}{n^2} \Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})}, \end{aligned}$$

which vanishes under the assumptions of Theorem 2.5: \(\theta (n)=n^2\) and \(n\beta _n =o_n(1)\). On the other hand, the quantity

$$\begin{aligned} \int _0^t \sum _{j\in \mathbb {Z}} \overline{\xi }_j(s) \overline{\xi }_{j+1}(s) \partial _x T^-_{f_2s} \varphi ^{n,2}_j ds, \end{aligned}$$

remains non-trivial, yet does not diverge, as \(n\rightarrow \infty \). This will be verified in the SBE regime with the help of the second-order Boltzmann–Gibbs principle, see Proposition 5.7. In particular, the above quadratic term, which is multiplied by \(\theta (n)\alpha _nn^{-3/2}\), vanishes in \(L^2\) when \(\theta (n)=n^2\) and \(\kappa >1/2\). Therefore, under the additional assumptions \(\lambda =0\) and \(f_2=0\), the quadratic term in (3.5) vanishes in \(L^2\) as \(n\rightarrow \infty \).

Hence, the anti-symmetric part in (3.5) is negligible in the limit \(n\rightarrow \infty \), and we have the martingale decomposition

$$\begin{aligned} \mathcal {Z}^n_t (\overrightarrow{\varphi }) = \mathcal {Z}^n_0 (\overrightarrow{\varphi }) + \int _0^t \mathcal {Z}^n_s (\partial _x^2 \overrightarrow{\varphi }) ds + \mathcal {N}^n_t (\overrightarrow{\varphi }) + {{\tilde{E}}^n_t}, \end{aligned}$$

where \({\tilde{E}}^n_t\) is an error term which is negligible in the sense that

$$\begin{aligned} \lim _{n\rightarrow \infty } {\mathbb {E}}_n \Big [\sup _{0\le t\le T} \big |\tilde{E}^n_t\big |^2 \Big ] = 0. \end{aligned}$$

From the above decomposition, we can deduce the assertions of Theorem 2.5. We omit the remaining steps, since the argument closely follows the approach of [1]; we refer the interested reader to that article for details.

4 Choice of Fluctuation Fields

In this section, we find the linear combination of the volume and energy fluctuations, from which we derive the stochastic Burgers equation and the 3/2-Lévy anomalous diffusion. Throughout this section, we consider \(\theta (n)=n^2\). We apply Dynkin’s formula, see, for example, [32, Lemma A.1.5.1], for \(\mathcal {X}^n(\cdot ) = \mathcal {X}^n(\mathfrak {u}_n;\cdot )\) defined by (2.9) with a common velocity \(v_n\). Then, for each \(\varphi \in \mathcal {S}(\mathbb {R})\),

$$\begin{aligned} \mathcal {M}^n_t(\varphi ) = \mathcal {X}^n_t (\varphi ) - \mathcal {X}^n_0 (\varphi ) -\int _0^t (\partial _s + L_n) \mathcal {X}^n_s(\varphi ) ds, \end{aligned}$$

and \(\mathcal {M}^n_t(\varphi )^2 -\langle \mathcal {M}^n (\varphi ) \rangle _t\) where

$$\begin{aligned} \langle \mathcal {M}^n(\varphi )\rangle _t = \int _0^t \big ( L_n \mathcal {X}^n_s(\varphi )^2 - 2 \mathcal {X}^n_s(\varphi ) L_n \mathcal {X}^n_s(\varphi ) \big ) ds, \end{aligned}$$

are martingales with respect to the natural filtration of the process. From straightforward but lengthy computations,

$$\begin{aligned} \begin{aligned} \langle \mathcal {M}^n (\varphi ) \rangle _t&= \frac{\theta (n)}{2n^3} \int _0^t \sum _{j \in \mathbb {Z}} (\eta _j(s) -\eta _{j+1}(s))^2 \big ( \nabla ^{n,1} T^-_{v_ns} \varphi ^n_j \big )^2 ds \\&\quad + \frac{(\mathfrak {u}_n)^2 \theta (n)}{2n^3} \int _0^t \sum _{j \in \mathbb {Z}} (\zeta _j(s) -\zeta _{j+1}(s))^2 \big ( \nabla ^{n,1} T^-_{v_ns} \varphi ^n_j \big )^2 ds \\&\quad + \frac{\mathfrak {u}_n\theta (n)}{n^3} \int _0^t \sum _{j\in \mathbb {Z}} (\eta _j(s) - \eta _{j+1}(s)) (\zeta _j(s) -\zeta _{j+1}(s)) (\nabla ^{n,1} T^-_{v_ns}\varphi ^n_j)^2 ds. \end{aligned} \end{aligned}$$
(4.1)

We proceed by computing the action of the generator on \(\mathcal {X}^n\). First, for the symmetric part, we have that

$$\begin{aligned} \int _0^t \theta (n) S \mathcal {X}^n_s(\varphi ) ds = \frac{\theta (n)}{2n^{5/2}} \int _0^t \sum _{j\in \mathbb {Z}} (\overline{\eta }_j + \mathfrak {u}_n \overline{\zeta }_j ) \Delta ^n T^-_{v_ns} \varphi ^n_j ds, \end{aligned}$$

which can be replaced by \(\frac{\theta (n)}{2n^2}\int _0^t \mathcal {X}^n_s(\partial _x^2 \varphi ) ds\) at a cost of order \(O(\theta (n)^2 n^{-6})\), i.e.:

$$\begin{aligned} \begin{aligned} \mathbb {E}_n \bigg [ \sup _{0\le t \le T} \bigg | \int _0^t \theta (n)S\mathcal {X}^n_s(\varphi )ds -\frac{\theta (n)}{2n^2} \int _0^t \mathcal {X}^n_s(\partial _x^2 \varphi ) ds \bigg |^2 \bigg ] \lesssim T^2 \frac{\theta (n)^2}{n^6}. \end{aligned} \end{aligned}$$

On the other hand, the action of the anti-symmetric part can be computed as follows.

Lemma 4.1

We have that

$$\begin{aligned} \begin{aligned}&\int _0^t (\partial _s + \theta (n) \alpha _n A) \mathcal {X}^n_s(\varphi ) ds\\&\quad = \frac{\mathfrak {u}_n \theta (n) \alpha _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \overline{\xi }_j(s) \overline{\xi }_{j+1}(s) \partial _x T^-_{v_ns} \varphi ^n_j ds \\&\qquad + \bigg ( (2+2\lambda \mathfrak {u}_n) \frac{\theta (n)\alpha _n}{n^{3/2}} - \frac{v_n}{n^{3/2}} \bigg ) \int _0^t \sum _{j \in \mathbb {Z}} \overline{\eta }_j \partial _x T^-_{v_ns} \varphi ^n_j ds \\&\qquad + \bigg ( (2+2\lambda \mathfrak {u}_n) \frac{\theta (n)\alpha _n c_3\beta _n}{n^{3/2}} - \frac{\mathfrak {u}_nv_n}{n^{3/2}} \bigg ) \int _0^t \sum _{j \in \mathbb {Z}} \overline{\zeta }_j \partial _x T^-_{v_ns} \varphi ^n_j ds \\&\qquad + (2+2\lambda \mathfrak {u}_n) \frac{\theta (n)\alpha _n (c_4-c_3^2)\beta _n^2}{3n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \zeta _j^n(s) \eta ^n_j(s) \partial _x T^-_{v_ns} \varphi ^n_j ds + E^n_t, \end{aligned} \end{aligned}$$
(4.2)

where \(E^n_t\) satisfies the bound

$$\begin{aligned} \mathbb {E}_n \bigg [\sup _{0\le t\le T} \big | E^n_t \big |^2 \bigg ] \lesssim T^2 \bigg ( (2+2\lambda \mathfrak {u}_n)^2 \frac{\theta (n)^2\alpha _n^2 \beta _n^6}{n^2} \vee \frac{\theta (n)^2\alpha _n^2}{n^6} \vee \frac{(\mathfrak {u}_n)^2\theta (n)^2\alpha _n^2}{n^4} \bigg ). \end{aligned}$$

Proof

Recalling that from Lemma 3.1, we have

$$\begin{aligned} \begin{aligned}&\int _0^t \theta (n)\alpha _n A \mathcal {X}^n_s(\varphi ) ds\\&\quad = \frac{\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \big ( (2+ 2\lambda \mathfrak {u}_n)\overline{\xi }_j(s) + \mathfrak {u}_n \overline{\xi }_j(s)\overline{\xi }_{j+1}(s) \big ) \partial _x T^-_{v_ns} \varphi ^n_j ds + E^n_t, \end{aligned} \end{aligned}$$

where the error term \(E^n_\cdot \) is, by an abuse of notation, denoted as in the assertion; it satisfies the bound

$$\begin{aligned} \mathbb {E}_n\bigg [ \sup _{0\le t\le T} \big | E^n_t \big |^2 \bigg ] \lesssim T^2 \bigg ( \frac{\theta (n)^2\alpha _n^2}{n^6} \vee \frac{(\mathfrak {u}_n)^2\theta (n)^2\alpha _n^2}{n^4} \bigg ). \end{aligned}$$

Hence,

$$\begin{aligned} \begin{aligned}&\int _0^t (\partial _s + \theta (n) \alpha _n A) \mathcal {X}^n_s(\varphi ) ds\\&\quad = \frac{\theta (n)\alpha _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \big ( (2+2\lambda \mathfrak {u}_n) \overline{\xi }_j(s) + \mathfrak {u}_n \overline{\xi }_j(s)\overline{\xi }_{j+1}(s) \big ) \partial _x T^-_{v_ns} \varphi ^n_j ds \\&\qquad - \frac{v_n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \big ( \overline{\eta }_j(s) + \mathfrak {u}_n \overline{\zeta }_j(s) \big ) \partial _x T^-_{v_ns} \varphi ^n_j ds + E^n_t. \end{aligned} \end{aligned}$$
(4.3)

To proceed further, we make use of the following Taylor expansion up to order \(\beta _n^3\). Recall that \(c_k=V^{(k)}(0)\). By Taylor’s theorem, we have that

$$\begin{aligned} \begin{aligned} \xi _j - \eta _j = c_3 \beta _n \zeta _j + \frac{c_4-c_3^2}{3} \beta _n^2 \zeta _j \eta _j + \varepsilon _j, \end{aligned} \end{aligned}$$
(4.4)

where \(\varepsilon _j\) depends on the fifth derivative of the potential. As a consequence of Assumption 2.1, we note that

$$\begin{aligned} \mathbb {E}_n\bigg [ \sup _{0\le t \le T} \bigg | \int _0^t \sum _{j\in \mathbb {Z}} \overline{\varepsilon }_j \varphi ^n_j ds \bigg |^2 \bigg ] \lesssim T^2n\beta _n^6 \Vert \varphi \Vert ^2_{L^2(\mathbb {R})}, \end{aligned}$$

with the help of (2.4) on the uniform moment bound. Now we substitute the expansion (4.4) in the first linear term on the right-hand side of (4.3). Then, we obtain the desired expression with an additional cost whose variance is proportional to \(\beta _n^6\). \(\square \)

Note that the assumption \(c_4-c_3^2=0\) forces the last term of (4.2) to vanish. In addition, to cancel the linear fluctuations, which are divergent in our regime, we choose \(\mathfrak {u}_n\) and \(v_n\) in such a way that

$$\begin{aligned} {\left\{ \begin{array}{ll} \begin{aligned} &{} \theta (n)\alpha _n(2+2\lambda \mathfrak {u}_n) - v_n = 0,\\ &{} \theta (n)\alpha _n c_3 \beta _n (2+2\lambda \mathfrak {u}_n) - {{\mathfrak {u}}_n}v_n = 0. \end{aligned} \end{array}\right. } \end{aligned}$$
(4.5)

Namely, the constant \({\mathfrak {u}}_n\) should satisfy

$$\begin{aligned} \mathfrak {u}_n (2+2\lambda \mathfrak {u}_n) = c_3 \beta _n (2+2\lambda \mathfrak {u}_n). \end{aligned}$$

This quadratic equation has two solutions, \(\mathfrak {u}_n^1=c_3 \beta _n\) and \(\mathfrak {u}_n^2 = - 1/\lambda \), which give \(v^1_n = \theta (n)\alpha _n (2+2\lambda c_3 \beta _n)\) and \(v^2_n= 0\), respectively. Therefore, when the relationship (4.5) is satisfied, the second and the third terms on the right-hand side of (4.2) cancel. In the next section we analyse the convergence of the fluctuation fields for each of these choices.
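For completeness, the two roots can be read off from the factorization

$$\begin{aligned} \mathfrak {u}_n (2+2\lambda \mathfrak {u}_n) - c_3 \beta _n (2+2\lambda \mathfrak {u}_n) = (2+2\lambda \mathfrak {u}_n)(\mathfrak {u}_n - c_3 \beta _n) = 0, \end{aligned}$$

so either \(\mathfrak {u}_n = c_3\beta _n\) or \(2+2\lambda \mathfrak {u}_n = 0\); substituting each root into the first equation of (4.5) gives the corresponding velocity \(v_n\).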

5 Proof of Theorem 2.13: The SB Equation

5.1 The martingale decomposition

We consider the case \(\mathfrak {u}_n=\mathfrak {u}_n^1= c_3 \beta _n\) and \(v_n=v_n^1=\theta (n)\alpha _n (2+2\lambda c_3 \beta _n)\) under the diffusive scaling \(\theta (n)=n^2\). In this section, we are concerned with the following fluctuation field given in (2.9) with \(\mathfrak {u}_n=\mathfrak {u}^1_n\) and \(v_n=v^1_n\), i.e.,

$$\begin{aligned} \mathcal {X}^n_t(c_3\beta _n; \varphi ) = \frac{1}{\sqrt{n}} \sum _{j \in \mathbb {Z}} (\overline{\eta }_j(t) + c_3 \beta _n \overline{\zeta }_j(t)) T^-_{v_n^1 t} \varphi ^n_j. \end{aligned}$$

We hereafter write \(\mathcal {X}^n_t(c_3\beta _n;\varphi )=\mathcal {X}^n_t(\varphi )\) for simplicity. In this case, by (4.2), the action of the anti-symmetric part of the generator is computed as follows.

$$\begin{aligned} \begin{aligned} \int _0^t (\partial _s + \theta (n)\alpha _n A)\mathcal {X}^n_s(\varphi ) ds&= \frac{\theta (n)\alpha _n c_3 \beta _n}{n^{3/2}} \int _0^t \sum _{j \in \mathbb {Z}} \overline{\xi }_j(s) \overline{\xi }_{j+1}(s) \nabla ^{1,n} T^-_{v_n^1 s} \varphi ^n_j ds + E^n_t, \end{aligned} \end{aligned}$$

where the remainder term \(E^n_t\) satisfies the bound

$$\begin{aligned} \mathbb {E}_n \bigg [\sup _{0\le t\le T} \big | E^n_t\big |^2 \bigg ] \lesssim \frac{\theta (n)^2\alpha _n^2\beta _n^6}{n^2} \vee \frac{\theta (n)^2\alpha _n^2}{n^6} \vee \frac{\beta _n^2\theta (n)^2\alpha _n^2}{n^4}. \end{aligned}$$
(5.1)

The action of the anti-symmetric part of the generator on the fluctuation field gives rise to the nonlinear term in the limiting equation, the SBE, provided \(\lim _{n\rightarrow \infty }\sqrt{n}\alpha _n \beta _n = 1\), which will be justified by the second-order Boltzmann–Gibbs principle. Moreover, note that the error term vanishes as \(n\rightarrow \infty \) when we additionally assume

$$\begin{aligned} n\beta _n^4=o_n(1). \end{aligned}$$
(5.2)
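To see where condition (5.2) comes from, substitute \(\theta (n)=n^2\) and \(\alpha _n \simeq (\sqrt{n}\beta _n)^{-1}\) (from \(\lim _{n\rightarrow \infty }\sqrt{n}\alpha _n\beta _n=1\)) into the three terms of (5.1):

$$\begin{aligned} \frac{\theta (n)^2\alpha _n^2\beta _n^6}{n^2} \simeq n\beta _n^4, \qquad \frac{\theta (n)^2\alpha _n^2}{n^6} \simeq \frac{1}{n^3\beta _n^2}, \qquad \frac{\beta _n^2\theta (n)^2\alpha _n^2}{n^4} \simeq \frac{1}{n}. \end{aligned}$$

The third term always vanishes; writing \(\beta _n=n^{-b}\), the second vanishes whenever \(b<3/2\), while the first vanishes precisely when \(n\beta _n^4=o_n(1)\), i.e. when \(b>1/4\).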

In summary, we have a martingale decomposition

$$\begin{aligned} \mathcal {X}^n_t(\varphi ) = \mathcal {X}^n_0(\varphi ) + \mathcal {S}^n_t(\varphi ) + \mathcal {B}^n_t(\varphi ) + \mathcal {M}^n_t(\varphi ) + \mathcal {R}^n_t(\varphi ), \end{aligned}$$
(5.3)

where \(\mathcal {M}^n_t\) is the Dynkin’s martingale whose quadratic variation is given by (4.1) with \(\mathfrak {u}_n=\mathfrak {u}^1_n=c_3 \beta _n\), \(v_n=v^1_n\) and \(\theta (n)=n^2\), and

$$\begin{aligned} \mathcal {S}^n_t (\varphi )= & {} \frac{1}{2\sqrt{n}} \int _0^t \sum _{j \in \mathbb {Z}} \big ( \overline{\eta }_j+ c_3\beta _n \overline{\zeta }_j \big )(s) \Delta ^n T^-_{v_n^1s} \varphi ^n_j ds,\\ \mathcal {B}^n_t (\varphi )= & {} c_3 \int _0^t \sum _{j \in \mathbb {Z}} \overline{\xi }_j(s) \overline{\xi }_{j+1}(s) \nabla ^{1,n} T^-_{v_n^1s} \varphi ^n_j ds, \end{aligned}$$

and \(\mathcal {R}^n_\cdot \) is a remainder term that vanishes in the following sense.

$$\begin{aligned} \begin{aligned} \lim _{n\rightarrow \infty } \mathbb {E}_n \bigg [ \sup _{0\le t\le T} \big | \mathcal {R}^n_t (\varphi ) \big |^2 \bigg ] =0. \end{aligned} \end{aligned}$$

By the arguments below, the remainder term \(\mathcal {R}^n\) does not affect the limit. With the decomposition (5.3) at hand, we give a proof of Theorem 2.13. We will show that each term in the martingale decomposition is tight and then identify the limit points.

Remark 5.1

We observe that from [22, Theorem 4] (see in fact equation (5.30) in [1] for the precise bound) we know that the second moment of the second term in (4.2) is of order \(\beta _n^2\theta (n)^{3/2} \alpha _n^2n^{-2}\). Since \(\theta (n)=n^a\), \(\beta _n=n^{-b}\) and \(\alpha _n=\alpha n^{-\kappa }\), we see that the quadratic term has no contribution to the limit if \(a<\frac{4}{3}(\kappa +b+1)\). When \(a=\frac{4}{3}(\kappa +b+1)\) (see the line in gray color in the figure below) the same result should be true, and in analogy to Fig. 2 we believe that the same should hold up to the line \(a=3/2+\kappa +b\), i.e. the line in magenta color in Fig. 4. Moreover, if we require that the nonlinear term survives in the limit, then we need to impose that the factor in front of the second term in (4.2) is of order O(1). This means that the relationship \(a-{\kappa }=3/2+b\) has to hold. Since we also need the error terms in (5.1) to vanish as \(n\rightarrow +\infty \), the following conditions must hold:

$$\begin{aligned} a- \kappa < \min \{(1+3b), 3, (2 + b)\}. \end{aligned}$$

In fact, these conditions hold simultaneously when \(1/4<b<3/2\). Moreover, when \(b<1/2\), the symmetric and martingale parts also survive. Therefore, for \(1/4<b<1/2\) we will derive the SB equation for \(\kappa =1/2-b\) and \(a=3/2+\kappa +b=2\).
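As a quick check of this claim, substituting \(a-\kappa =3/2+b\) into the three constraints gives

$$\begin{aligned} \tfrac{3}{2}+b<1+3b \iff b>\tfrac{1}{4}, \qquad \tfrac{3}{2}+b<3 \iff b<\tfrac{3}{2}, \qquad \tfrac{3}{2}+b<2+b \iff \tfrac{3}{2}<2, \end{aligned}$$

and the last inequality always holds, so the three conditions indeed reduce to \(1/4<b<3/2\).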

Fig. 4

\(V'(\eta )\) fluctuations

5.2 Tightness

In this part, we recall some basic notions of the Skorohod space and the tightness of a sequence in the càdlàg space for the readers’ convenience. To begin with a general setting, let E be a complete, separable metric space, endowed with a distance \(d_E\). Let D([0, T], E) be the space of all right-continuous functions with left limits taking values in E. Let \(\Lambda \) be the set of all strictly increasing continuous functions \(\lambda \) from [0, T] onto itself. Then, we define

$$\begin{aligned} \Vert \lambda \Vert = \sup _{s\ne t} \bigg | \log \frac{\lambda (t)-\lambda (s)}{t-s} \bigg | \end{aligned}$$

and define for each \(X,Y\in D([0,T],E)\)

$$\begin{aligned} d(X,Y) = \inf _{\lambda \in \Lambda } \max \big \{ \Vert \lambda \Vert , \sup _{0\le t\le T} d_E(X_t, Y_{\lambda (t)}) \big \}. \end{aligned}$$

Then it is known that the Skorohod space D([0, T], E) endowed with the metric d is a complete separable metric space, see [11, Chapter 3]. Next, in order to characterize the convergence of a sequence of paths in the Skorohod space, we make use of the following modified modulus of continuity: for each \(X=\{X_t:t\in [0,T]\} \in D([0,T],E)\), set

$$\begin{aligned} w^\prime _X(\gamma ) = \inf _{\{t_i\}_{0\le i \le N}} \max _{0\le i<N} \sup _{t_i\le s<t <t_{i+1}} d_E(X_s, X_t), \end{aligned}$$

where the first infimum is taken over all partitions \(\{ t_i\}_{0\le i\le N}\) of the interval [0, T] such that \(0=t_0<t_1<\cdots <t_N=T\) and \(t_i-t_{i-1}>\gamma \) for each \(i=1,\ldots , N\). Then, the relative compactness of a sequence in the Skorohod space is characterized by the following Prohorov’s theorem [32, Theorem 4.1.3].

Proposition 5.2

Let \(\{ \mathbb {P}_n\}_n\) be a sequence of probability measures on D([0, T], E). The sequence is relatively compact if, and only if,

  1. (1)

    For each \(t\in [0,T]\) and each \(\varepsilon >0\) there exists a compact set \(K(t,\varepsilon )\) in E such that \(\mathbb {P}_n(X_t \notin K(t,\varepsilon ))\le \varepsilon \).

  2. (2)

    For each \(\varepsilon >0\), we have \( \lim _{\gamma \rightarrow 0} \limsup _{n\rightarrow \infty } \mathbb {P}_n (w^\prime _X(\gamma ) > \varepsilon ) = 0\).

Here note that the modulus of continuity has the bound \(w^\prime _X(\gamma ) \le w_X(2\gamma )\) where

$$\begin{aligned} w_X(\gamma ) = \sup _{|t-s|\le \gamma } d_E(X_s,X_t) \end{aligned}$$

for each \(X\in D([0,T],E)\). Therefore, to show that a sequence in the Skorohod space is relatively compact, which is equivalent to the sequence being tight since the space is complete and separable, it suffices to show the following condition (2’) instead of the condition (2) in Proposition 5.2.

(2’):

For each \(\varepsilon >0\), we have \( \lim _{\gamma \rightarrow 0} \limsup _{n\rightarrow \infty } \mathbb {P}_n (w_X(\gamma ) > \varepsilon ) = 0\).

Note that once condition (2’) is verified, in combination with condition (1) of Proposition 5.2, all limit points of the sequence \(\{\mathbb {P}_n\}_n\) are concentrated on continuous paths.
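As an illustration of condition (2’) (a sketch on a hypothetical sampled path; the discretization and function name are ours), the uniform modulus \(w_X(\gamma )\) can be computed directly, and a single jump keeps it bounded away from zero no matter how small \(\gamma \) is:

```python
import numpy as np

def uniform_modulus(times, values, gamma):
    """w_X(gamma) = sup over |t - s| <= gamma of |X_t - X_s|, for a sampled path."""
    w = 0.0
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            if times[j] - times[i] > gamma:
                break
            w = max(w, abs(values[j] - values[i]))
    return w

times = np.linspace(0.0, 1.0, 101)
jump_path = (times >= 0.5).astype(float)  # cadlag path with a single unit jump at t = 0.5
print(uniform_modulus(times, jump_path, 0.1))  # 1.0: the jump survives as gamma -> 0
```

For a continuous path the modulus tends to zero with \(\gamma \), which is why (2’) forces all limit points to concentrate on continuous paths.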

Now we return to our current situation and recall the martingale decomposition (5.3). Our central aim is to show the tightness of each sequence.

Lemma 5.3

The sequences \(\{\mathcal {X}^n_t: t \in [0, T ] \}_{ n \in \mathbb {N} } \), \(\{ \mathcal {M}^n_t: t \in [0, T ] \}_{ n \in \mathbb {N} } \), \(\{ \mathcal {S}^n_t: t \in [0, T ] \}_{ n \in \mathbb {N} } \) and \(\{ \mathcal {B}^n_t: t \in [0, T ] \}_{ n \in \mathbb {N} } \), when the processes start from the invariant measure \(\nu _n\), are tight with respect to the Skorohod topology on \(D([0,T],\mathcal {S}^\prime (\mathbb {R})) \).

Here, note that the space of Schwartz distributions \(\mathcal {S}^\prime (\mathbb {R})\) is metrizable, which turns out to be separable and complete with respect to the strong topology. (See [19, Sect. 2.3] for a precise description of the topology.) To prove the tightness of a sequence of processes, the following Mitoma’s criterion [36, Theorem 4.1] is helpful.

Proposition 5.4

(Mitoma’s criterion). A sequence of \(\mathcal {S}^\prime (\mathbb {R} ) \)-valued processes \(\{ \mathcal {Y}^n_t: t \in [0, T ] \}_{ n \in \mathbb {N} } \) with trajectories in \(D ([0, T ], \mathcal {S}^\prime (\mathbb {R} ) ) \) is tight with respect to the Skorohod topology if, and only if, the sequence \(\{ \mathcal {Y}^n_t (\varphi ): t \in [0, T ] \}_{ n \in \mathbb {N}}\) of real-valued processes is tight with respect to the Skorohod topology on \(D([0,T],\mathbb {R}) \) for any \(\varphi \in \mathcal {S}(\mathbb {R}) \).

In addition, we will make use of the following continuity criterion. (See for example [39, Theorem 1.2.1].)

Proposition 5.5

(The Kolmogorov-Chentsov criterion). Let \(\{ X_t: t\in [0,T]\}\) be a Banach-valued process for which there exist constants \(\kappa ,\gamma _1, \gamma _2>0\) satisfying

$$\begin{aligned} \mathbb {E} \big [ \big \Vert X_t-X_s\big \Vert ^{\gamma _1} \big ] \le \kappa | t-s|^{1+\gamma _2}, \end{aligned}$$

for any \(s,t\in [0,T]\). Here \(\Vert \cdot \Vert \) denotes the norm of the Banach space on which the process takes values. Then, there is a modification \(\tilde{X}\) of X such that

$$\begin{aligned} \mathbb {E} \bigg [ \bigg ( \sup _{s\ne t} \frac{\Vert \tilde{X}_t -\tilde{X}_s\Vert }{|t-s|^{\alpha }} \bigg )^{\gamma _1} \bigg ] < +\infty \end{aligned}$$

for any \(\alpha \in [0, \gamma _2/\gamma _1)\). In particular, the paths of \(\tilde{X}\) are almost-surely \(\alpha \)-Hölder continuous.

In what follows, we prove Lemma 5.3. With the help of Mitoma’s criterion, it suffices to show the tightness of sequences \(\{ \mathcal {X}^n_t (\varphi ): t \in [0, T ] \}_{ n \in \mathbb {N} } \), \(\{ \mathcal {S}^n_t (\varphi ): t \in [0, T ] \}_{ n \in \mathbb {N} } \), \(\{ \mathcal {B}^n_t (\varphi ): t \in [0, T ] \}_{ n \in \mathbb {N} } \) and \(\{ \mathcal {M}^n_t (\varphi ): t \in [0, T ] \}_{ n \in \mathbb {N}}\) in \(D ([0,T],\mathbb {R})\) for any given test function \(\varphi \in \mathcal {S} (\mathbb {R})\). In order to prove the tightness of a real-valued sequence \(\{X^n_t:t\ge 0 \}_n\) in \(D([0,T],\mathbb {R})\), according to Prohorov’s theorem, recall that it suffices to show two conditions, namely the condition (1) in Proposition 5.2 and the condition (2’):

$$\begin{aligned} \lim _{\delta \rightarrow 0} \limsup _{n\rightarrow \infty } \mathbb {P}_n \bigg ( \sup _{\begin{array}{c} |t-s|\le \delta \\ 0\le s,t \le T \end{array}} | X^n_t-X^n_s| > \varepsilon \bigg ) = 0, \end{aligned}$$
(5.4)

for any \(\varepsilon >0\). The condition (1) of Proposition 5.2 on fixed times easily follows for our sequences. Hence, in what follows, our task is to verify the condition (5.4) for the sequences \(\{ \mathcal {X}^n_t (\varphi ): t \in [0, T ] \}_{ n \in \mathbb {N} } \), \(\{ \mathcal {S}^n_t (\varphi ): t \in [0, T ] \}_{ n \in \mathbb {N} } \), \(\{ \mathcal {B}^n_t (\varphi ): t \in [0, T ] \}_{ n \in \mathbb {N} } \) and \(\{ \mathcal {M}^n_t (\varphi ): t \in [0, T ] \}_{ n \in \mathbb {N}}\). For the initial field \(\{ \mathcal {X}^n_0\}_n\), a computation with characteristic functions shows that it converges to a Gaussian field; in particular, it is tight. Thus, in what follows, we focus on the tightness of the martingale, symmetric and anti-symmetric parts, from which the tightness of the fields \(\{ \mathcal {X}^n_\cdot \}_n\) is deduced.

5.2.1 Martingale part

Next, we deal with the martingale part. We will make use of the following fourth-moment estimate.

Lemma 5.6

For each smooth compactly supported function \(\varphi \), and for each \(s,t\in [0,T]\) such that \(s<t\) we have that

$$\begin{aligned} \mathbb {E}_n \big [ \big ( \mathcal {M}^n_t(\varphi ) - \mathcal {M}^n_s(\varphi ) \big )^4 \big ] \lesssim (t-s)^2 + n^{-3} (t-s). \end{aligned}$$

In Lemma 5.6 we assumed that each test function has compact support, in order to ensure the integrability of an exponential martingale in the proof, whereas we need the fourth-moment estimate for functions in the Schwartz space \(\mathcal {S}(\mathbb {R})\). On the other hand, note that we can approximate any element of \(\mathcal {S}(\mathbb {R})\) by smooth compactly supported functions with respect to the Sobolev \(H^1({\mathbb {R}})\)-norm. The tightness of the martingales with test functions taken from this restricted class is then sufficient for the proof, by carrying out the approximation in the martingale decomposition.

Proof of Lemma 5.6

The proof is based on an expansion of an exponential martingale, similarly to [21]. We set \(\Xi _j=\eta _j + c_3\beta _n V_{\beta _n}(\eta _j)\) in the sequel. First, note that the process

$$\begin{aligned} \exp \big ( \rho \mathcal {X}^n_t(\varphi ) \big ) = \prod _{j\in \mathbb {Z}} \exp \bigg ( \frac{\rho }{\sqrt{n}} \overline{\Xi }_j T^-_{v_n^1t}\varphi ^n_j \bigg ), \end{aligned}$$

is contained in the space \(L^2(\nu _n)\) for sufficiently small \(\rho \). Indeed, recalling the static estimate from Sect. 2.2 and noting that \(b=1\), we obtain:

$$\begin{aligned} \begin{aligned} E_{\nu _n}\big [\exp (\rho \Xi _j) \big ] = \frac{1}{Z_n} \int _{\mathbb {R}} \exp \{ (\rho c_3 \beta _n -1) V_{\beta _n}(\eta _j) + (\rho +\lambda ) \eta _j \} d\eta _j \end{aligned} \end{aligned}$$

is finite when \(\rho \) is sufficiently small. In addition, a direct computation gives

$$\begin{aligned} \begin{aligned} \exp \big (-\rho \mathcal {X}^n_t(\varphi )\big ) (\partial _t + L_n ) \exp \big (\rho \mathcal {X}^n_t(\varphi )\big )&= \sum _{j\in \mathbb {Z}} \big (\exp (\rho (\Xi _j-\Xi _{j+1})T^-_{v_n^1t}(\varphi ^n_{j+1} -\varphi ^n_j ) ) - 1\big )\\&\quad + \rho c_3\sqrt{n}\alpha _n\beta _n \sum _{j\in \mathbb {Z}} \overline{\xi }_j \overline{\xi }_{j+1} \nabla ^{1,n} T^-_{v_n^1t} \varphi ^n_j, \end{aligned}\nonumber \\ \end{aligned}$$
(5.5)

which turns out to be in \(L^2(\nu _n)\). Consequently, we have that the process

$$\begin{aligned} \textrm{Exp}(\mathcal {M})^n_{s,t} =\exp \bigg ( \rho \mathcal {X}^n_t(\varphi ) - \rho \mathcal {X}^n_s (\varphi ) - \int _s^t \exp (-\rho \mathcal {X}^n_r(\varphi )) (\partial _r + L_n)\exp (\rho \mathcal {X}^n_r(\varphi )) dr \bigg ), \end{aligned}$$

is a martingale, see [18]. Then, applying Taylor’s theorem to the exponential functions in the integrand, we can expand \(\textrm{Exp}(\mathcal {M})^n_{s,t}\) in terms of \(\rho \) as

$$\begin{aligned} \begin{aligned} \textrm{Exp}(\mathcal {M})^n_{s,t} =\exp \bigg ( \rho \mathcal {M}^n_{s,t}(\varphi ) -\frac{\rho ^2}{2!} \langle \mathcal {M}^n(\varphi )\rangle _{s,t} - \sum _{i=1}^3\frac{\rho ^{2+i}}{(2+i)!} \int _s^t \mathcal {R}_i (r) dr \bigg ), \end{aligned}\nonumber \\ \end{aligned}$$
(5.6)

where

$$\begin{aligned} \begin{aligned} \mathcal {R}_1 (t)&= L_n (\mathcal {X}^n_t(\varphi ))^3 -3\mathcal {X}^n_t(\varphi ) L_n (\mathcal {X}^n_t(\varphi ))^2 + 3(\mathcal {X}^n_t(\varphi ))^2 L_n \mathcal {X}^n_t(\varphi ) \\&= \frac{\theta (n)}{n^{9/2}}\sum _{j\in \mathbb {Z}} (\Xi _j(t) -\Xi _{j+1}(t))^3 (T^-_{v_n^1t}\nabla ^{1,n}\varphi ^n_j)^3, \end{aligned}\\ \begin{aligned} \mathcal {R}_2 (t)&= L_n (\mathcal {X}^n_t(\varphi ))^4 -4\mathcal {X}^n_t(\varphi ) L_n (\mathcal {X}^n_t(\varphi ))^3 + 6(\mathcal {X}^n_t(\varphi ))^2 L_n (\mathcal {X}^n_t(\varphi ))^2 \\ {}&\quad - 4(\mathcal {X}^n_t(\varphi ))^3 L_n \mathcal {X}^n_t(\varphi ) \\&= \frac{\theta (n)}{n^6} \sum _{j\in \mathbb {Z}} (\Xi _j(t)-\Xi _{j+1}(t))^4 (T^-_{v_n^1t}\nabla ^{1,n} \varphi ^n_j)^4, \end{aligned} \end{aligned}$$

and \(\mathcal {R}_3\) is a remainder term. Here we used the short-hand notation \(\mathcal {M}^n_{s,t}(\varphi ) = \mathcal {M}^n_t(\varphi ) - \mathcal {M}^n_s(\varphi )\) and \(\langle \mathcal {M}^n (\varphi )\rangle _{s,t} = \langle \mathcal {M}^n (\varphi )\rangle _{t} -\langle \mathcal {M}^n (\varphi )\rangle _{s}\). Moreover, we have the bounds

$$\begin{aligned} \Vert \mathcal {R}_1\Vert _{L^2(\nu _n)} \lesssim \frac{1}{n^{3/2}} \Vert \Xi _j \Vert _{L^{6}(\nu _n)}^3 \bigg ( \frac{1}{n} \sum _{j\in \mathbb {Z}} (\nabla ^{1,n}\varphi ^n_j)^3 \bigg ), \end{aligned}$$

and

$$\begin{aligned} \Vert \mathcal {R}_2\Vert _{L^1(\nu _n)} \lesssim \frac{1}{n^5} \Vert \Xi _j \Vert _{L^{4}(\nu _n)}^4 \bigg ( \frac{1}{n} \sum _{j\in \mathbb {Z}} (\nabla ^{1,n}\varphi ^n_j)^4 \bigg ), \end{aligned}$$

which will be used below. Since \(E_{\nu _n}[\textrm{Exp}(\mathcal {M})^n_{s,t}]=1\), the strategy is to take the expectation of the identity (5.6), differentiate it four times with respect to \(\rho \), and then set \(\rho =0\); this yields the desired bound. To this end, note that the function \(\rho \mapsto e^{F(\rho )}\) satisfies

$$\begin{aligned} \frac{d^4}{d\rho ^4} e^{F(\rho )} = \big ( (F^\prime (\rho ))^4 + 6 (F^\prime (\rho ))^2 F^{\prime \prime }(\rho ) + 3 (F^{\prime \prime }(\rho ))^2 + 4 F^\prime (\rho ) F^{(3)}(\rho ) + F^{(4)}(\rho ) \big )e^{F(\rho )}. \end{aligned}$$

Now, we take the expectation of the identity (5.6) to obtain

$$\begin{aligned} 1 = E_{\nu _n} [\exp (F(\rho ))], \end{aligned}$$

where \(F(\rho )\) is the quantity inside the parentheses of the identity (5.6). Then, we differentiate the last identity four times and set \(\rho =0\):

$$\begin{aligned} \begin{aligned} 0&= \frac{d^4}{d\rho ^4} E_{\nu _n}[\exp (F(\rho ))] \bigg |_{\rho =0}\\&= E_{\nu _n}[(F^\prime (0))^4 + 6 (F^\prime (0))^2 F^{\prime \prime }(0) + 3 (F^{\prime \prime }(0))^2 + 4 F^\prime (0) F^{(3)}(0) + F^{(4)}(0)]\\&= E_{\nu _n}[\mathcal {M}^n_{s,t}(\varphi )^4] + 6E_{\nu _n}[\mathcal {M}^n_{s,t}(\varphi )^2 \langle \mathcal {M}^n (\varphi ) \rangle _{s,t}] + 3 E_{\nu _n}[\langle \mathcal {M}^n (\varphi ) \rangle _{s,t}^2] \\&\quad + 4 E_{\nu _n}\Big [ \mathcal {M}^n_{s,t}(\varphi ) \int _s^t \mathcal {R}_1(r) dr \Big ] + E_{\nu _n} \Big [\int _s^t \mathcal {R}_2(r) dr \Big ]. \end{aligned} \end{aligned}$$

Then, we have the following estimate.

$$\begin{aligned} \begin{aligned} E_{\nu _n}[\mathcal {M}^n_{s,t}(\varphi )^4]&= \bigg | 6E_{\nu _n}[\mathcal {M}^n_{s,t}(\varphi )^2 \langle \mathcal {M}^n (\varphi ) \rangle _{s,t}] + 3 E_{\nu _n}[\langle \mathcal {M}^n (\varphi ) \rangle _{s,t}^2] \\&\quad + 4 \int _s^t E_{\nu _n} [\mathcal {M}^n_{s,t}(\varphi ) \mathcal {R}_1(r)] dr + \int _s^t E_{\nu _n} [\mathcal {R}_2(r)] dr \bigg | \\&\le 3A E_{\nu _n}[\mathcal {M}^n_{s,t}(\varphi )^4] + (3A^{-1}+3) E_{\nu _n}[\langle \mathcal {M}^n (\varphi ) \rangle _{s,t}^2]\\&\quad + 2 (t-s) E_{\nu _n}[\mathcal {M}^n_{s,t}(\varphi )^2] + (t-s) \big ( 2\Vert \mathcal {R}_1 \Vert _{L^2(\nu _n)}^2 + \Vert \mathcal {R}_2\Vert _{L^1(\nu _n)} \big ), \end{aligned} \end{aligned}$$

for any \(A>0\) where we used Young’s inequality. Set \(A=1/6\) and move \(E[\mathcal {M}^n_{s,t}(\varphi )^4]\) on the right-hand side to the left-hand side. Moreover, recall that the quadratic variation is given by (4.1), which yields the bound

$$\begin{aligned} E_{\nu _n} [\langle \mathcal {M}^n(\varphi )\rangle _{s,t}^k] \lesssim (t-s)^k, \end{aligned}$$

for each \(k=1,2\). Hence we obtain the desired bound according to the moment estimates for \(\mathcal {R}_1\) and \(\mathcal {R}_2\). \(\square \)
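Explicitly, the absorption step in the proof above with \(A=1/6\) reads

$$\begin{aligned} E_{\nu _n}[\mathcal {M}^n_{s,t}(\varphi )^4] \le \tfrac{1}{2} E_{\nu _n}[\mathcal {M}^n_{s,t}(\varphi )^4] + 21\, E_{\nu _n}[\langle \mathcal {M}^n (\varphi ) \rangle _{s,t}^2] + 2(t-s) E_{\nu _n}[\mathcal {M}^n_{s,t}(\varphi )^2] + (t-s) \big ( 2\Vert \mathcal {R}_1 \Vert _{L^2(\nu _n)}^2 + \Vert \mathcal {R}_2\Vert _{L^1(\nu _n)} \big ), \end{aligned}$$

and since \(E_{\nu _n}[\mathcal {M}^n_{s,t}(\varphi )^2] = E_{\nu _n}[\langle \mathcal {M}^n(\varphi )\rangle _{s,t}] \lesssim (t-s)\), rearranging yields \(E_{\nu _n}[\mathcal {M}^n_{s,t}(\varphi )^4] \lesssim (t-s)^2 + (t-s)\big (\Vert \mathcal {R}_1\Vert ^2_{L^2(\nu _n)} + \Vert \mathcal {R}_2\Vert _{L^1(\nu _n)}\big )\), which gives the claimed bound via the moment estimates for \(\mathcal {R}_1\) and \(\mathcal {R}_2\).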

With this fourth moment bound at hand, we show the tightness of the martingale part. This is established as follows.

$$\begin{aligned} \begin{aligned} \mathbb {P}_n \bigg ( \sup _{\begin{array}{c} |s-t|\le \delta \\ 0\le s,t\le T \end{array} } \big | \mathcal {M}^n_t(\varphi )- \mathcal {M}^n_s (\varphi ) \big | > \varepsilon \bigg )&\le \varepsilon ^{-4} \mathbb {E}_n \bigg [ \sup _{\begin{array}{c} |s-t|\le \delta \\ 0\le s,t\le T \end{array}} \big | \mathcal {M}^n_t(\varphi )- \mathcal {M}^n_s (\varphi ) \big |^4 \bigg ] \\&\lesssim \varepsilon ^{-4} \delta ^{-1} \mathbb {E}_n \big [ \big ( \mathcal {M}^n_\delta (\varphi )\big )^4 \big ]. \end{aligned} \end{aligned}$$

Here, in the second inequality, we used Doob’s inequality and stationarity. Now, recall that the fourth moment bound (Lemma 5.6) yields

$$\begin{aligned} \delta ^{-1} \mathbb {E}_n \big [ \big ( \mathcal {M}^n_\delta (\varphi )\big )^4 \big ] \lesssim \delta + n^{-3}, \end{aligned}$$

which vanishes as \(n\rightarrow \infty \) and then \(\delta \rightarrow 0\). Hence the condition (5.4) is verified and we complete the proof of the tightness of the martingale part.

5.2.2 Symmetric part

For the symmetric part, note that a direct \(L^2\)-computation yields

$$\begin{aligned} \begin{aligned} \mathbb {E}_n \big [\big |\mathcal {S}^n_{t_1}(\varphi ) - \mathcal {S}^n_{t_2} (\varphi )\big |^2 \big ]&\lesssim |t_1-t_2| \int _{t_1}^{t_2} \frac{1}{n} \sum _{j\in \mathbb {Z}} E_{\nu _n} [(\overline{\eta }_j+ c_3\beta _n \overline{\zeta }_j )^2] (\Delta ^n T^-_{v_n^1s}\varphi ^n_j)^2 ds \\&\lesssim | t_1-t_2|^2 \Vert \partial _x^2 \varphi \Vert ^2_{L^2(\mathbb {R})}. \end{aligned} \end{aligned}$$

By the Kolmogorov-Chentsov criterion (Proposition 5.5), combined with the continuity of \(\mathcal {S}^n_\cdot \), the condition (5.4) is verified, from which we deduce the tightness of the symmetric part.

5.2.3 Anti-symmetric part

Finally, we are in a position to show tightness of the anti-symmetric part \(\mathcal {B}^n_t(\varphi )\). A key ingredient for the proof is the so-called second-order Boltzmann–Gibbs principle, which enables us to replace a quadratic term by its local average. For each real sequence \((g_j)_{j \in \mathbb {Z}}\), we define its local average as follows.

$$\begin{aligned} \overrightarrow{g}^\ell _j = \frac{1}{\ell } \sum _{i=0}^{\ell -1} g_{j+i}. \end{aligned}$$
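As a purely illustrative aside (the function name below is ours, not from the text), the right-sided local average \(\overrightarrow{g}^\ell _j\) can be sketched in code as:

```python
def local_average(g, j, ell):
    """Right-sided local average (g[j] + ... + g[j + ell - 1]) / ell."""
    return sum(g[j + i] for i in range(ell)) / ell
```

For \(\ell =1\) the average reduces to \(g_j\) itself.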

Proposition 5.7

(The second-order Boltzmann–Gibbs principle). For any \(T>0\) and \(\varphi \in \mathcal {S}(\mathbb {R})\), it holds that

$$\begin{aligned} \begin{aligned}&\mathbb {E}_n \bigg [ \sup _{0 \le t \le T} \bigg | \int _0^t \sum _{j\in \mathbb {Z}} \big ( \overline{\xi }_j(s) \overline{\xi }_{j+1}(s) - (\overrightarrow{\xi }^\ell _{j}(s))^2 \big ) \nabla ^{n,1} T^-_{v_n^1s}\varphi ^n_j ds \bigg |^2 \bigg ] \\&\quad \lesssim \bigg (\frac{T\ell }{n} + \frac{T^2n}{\ell ^2} \bigg ) \Vert \partial _x\varphi \Vert ^2_{L^2(\mathbb {R})}. \end{aligned} \end{aligned}$$

The proof is completely analogous to [22, Theorem 1], so we omit it here. With this result at hand, we can show tightness of the anti-symmetric part. Indeed, we have by Proposition 5.7 and stationarity that

$$\begin{aligned} \begin{aligned}&\mathbb {E}_n \bigg [\bigg | \mathcal {B}^n_{t_2}(\varphi ) - \mathcal {B}^n_{t_1}(\varphi ) - c_3 \int _{t_1}^{t_2} \sum _{j\in \mathbb {Z}} \big (\overrightarrow{\xi }^\ell _{j}(s)\big )^2 \nabla ^{1,n} T^-_{v_n^1s} \varphi ^n_j ds \bigg |^2 \bigg ]\\&\quad \lesssim \bigg ( \frac{(t_2-t_1)\ell }{n} + \frac{(t_2-t_1)^2n}{\ell ^2} \bigg )\Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})}. \end{aligned} \end{aligned}$$

On the other hand, a direct \(L^2\)-computation gives

$$\begin{aligned} \begin{aligned} \mathbb {E}_n \bigg [ \bigg | \int _{t_1}^{t_2} \sum _{j\in \mathbb {Z}} \big (\overrightarrow{\xi }^\ell _{j}(s)\big )^2 \nabla ^{1,n} T^-_{v_n^1s} \varphi ^n_j ds \bigg |^2 \bigg ] \le \frac{(t_2-t_1)^2n}{\ell } \Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})}. \end{aligned} \end{aligned}$$

When \(1/n^2 \le t_2-t_1 \le 1\), we take \(\ell \) with order \((t_2-t_1)^{1/2}n\) to obtain

$$\begin{aligned} \mathbb {E}_n\big [\big |\mathcal {B}^n_{t_2}(\varphi ) - \mathcal {B}^n_{t_1}(\varphi ) \big |^2 \big ] \lesssim (t_2-t_1)^{3/2} \Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})}. \end{aligned}$$
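To see why this choice of \(\ell \) yields the exponent \(3/2\), write \(t=t_2-t_1\) and take \(\ell = t^{1/2}n\) (assumed to be an integer for simplicity); then the three error terms appearing in the two bounds above become

$$\begin{aligned} \frac{t\ell }{n} = t^{3/2}, \qquad \frac{t^2n}{\ell ^2} = \frac{t}{n} \le t^{3/2}, \qquad \frac{t^2n}{\ell } = t^{3/2}, \end{aligned}$$

where the middle estimate uses \(t \ge 1/n^2\), that is, \(n^{-1}\le t^{1/2}\).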

On the other hand, when \(t_2-t_1 \le 1/n^2\), a direct estimate gives

$$\begin{aligned} \mathbb {E}_n\big [\big |\mathcal {B}^n_{t_2}(\varphi ) - \mathcal {B}^n_{t_1}(\varphi ) \big |^2 \big ] \lesssim (t_2-t_1)^2 n \Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})} \lesssim (t_2-t_1)^{3/2} \Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})}. \end{aligned}$$

This ends the proof of tightness for the anti-symmetric part, by the Kolmogorov-Chentsov criterion (Proposition 5.5) and the continuity of the process.

5.3 Identification of limit points

Recall the martingale decomposition (5.3). We have already proved in Lemma 5.3 that the sequences \(\{ \mathcal {X}^n_t: t \in [0, T] \}_{ n \in \mathbb {N} } \), \(\{ \mathcal {M}^n_t: t \in [0, T] \}_{ n \in \mathbb {N} } \), \(\{ \mathcal {S}^n_t: t \in [0, T] \}_{ n \in \mathbb {N} } \) and \(\{ \mathcal {B}^n_t: t \in [0, T] \}_{ n \in \mathbb {N} } \) are tight in \(D([0,T], \mathcal {S}^\prime (\mathbb {R}))\). Let \(\mathscr {Q}^n\) be the distribution of

$$\begin{aligned} \{ (\mathcal {X}^n_t, \mathcal {M}^n_t, \mathcal {S}^n_t, \mathcal {B}^n_t): t\in [0,T] \}. \end{aligned}$$

We proved that there exists a subsequence n, which is denoted by the same letter with an abuse of notation, such that \(\{\mathscr {Q}^n\}_{n}\) converges to a limit point \(\mathscr {Q}\). We let \(\mathcal {X}, \mathcal {M}, \mathcal {S}\) and \(\mathcal {B}\) be the respective limits in distribution of each component. Since the tightness is shown in the uniform topology on \(D([0,T],\mathcal {S}^\prime (\mathbb {R}))\), these limiting processes almost surely have continuous trajectories. In what follows, we identify these limit points as stationary energy solutions of the stochastic Burgers equation in the sense of Definition 2.12. Since this solution is unique in law, the convergence then follows.

5.3.1 Martingale part

Now, we shift to the martingale part. Recall the expression (4.1) when \(\theta (n)=n^2\), \(\mathfrak {u}_n=c_3\beta _n\) and \(v_n=v_n^1\):

$$\begin{aligned} \begin{aligned} \langle \mathcal {M}^n (\varphi ) \rangle _t&= \frac{1}{2n} \int _0^t \sum _{j \in \mathbb {Z}} (\eta _j(s) -\eta _{j+1}(s))^2 \big ( \nabla ^{n,1} T^-_{v_n^1s} \varphi ^n_j \big )^2 ds \\&\quad + \frac{(c_3\beta _n)^2 }{2n} \int _0^t \sum _{j \in \mathbb {Z}} (\zeta _j(s) -\zeta _{j+1}(s))^2 \big ( \nabla ^{n,1} T^-_{v_n^1s} \varphi ^n_j \big )^2 ds \\&\quad + \frac{c_3 \beta _n}{n} \int _0^t \sum _{j\in \mathbb {Z}} (\eta _j(s) - \eta _{j+1}(s)) (\zeta _j(s) -\zeta _{j+1}(s)) (\nabla ^{n,1} T^-_{v_n^1s}\varphi ^n_j)^2 ds. \end{aligned} \end{aligned}$$

By Markov's inequality and Schwarz's inequality,

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb {P}_n \bigg (\sup _{0\le t\le T} \Big | \langle \mathcal {M}^n(\varphi )\rangle _t -t\Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})} \Big | > \varepsilon \bigg ) = 0 \end{aligned}$$

for any \(\varepsilon >0\). In particular, the sequence of processes \(\{\langle \mathcal {M}^n (\varphi ) \rangle _t:t\in [0,T] \}_n \) converges in distribution on \(D([0,T],\mathbb {R})\) to a deterministic path \(\{ t\Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})}:t\in [0,T] \}\) as \(n\rightarrow \infty \).

Then we can show that any limit point \(\mathcal {M}\) is a continuous martingale whose quadratic variation is \( t \Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})}\) in the following way. First, note that the limit point \(\mathcal {M}\) is a martingale since it is obtained as a limit of martingales with respect to the uniform topology. Moreover, note that by the triangle inequality and Doob's inequality, we have the bound

$$\begin{aligned} \begin{aligned} \mathbb {E}_n \bigg [\sup _{0\le s \le t} \big | \mathcal {M}^n_s(\varphi ) - \mathcal {M}^n_{s-}(\varphi ) \big | \bigg ] \le 2 \mathbb {E}_n \bigg [\sup _{0\le s \le t} | \mathcal {M}^n_s(\varphi ) |^2 \bigg ]^{1/2} \le 2 \mathbb {E}_n \big [\langle \mathcal {M}^n(\varphi )\rangle _t \big ]^{1/2}, \end{aligned} \end{aligned}$$

and the rightmost side is bounded by a constant which is independent of n. Therefore, by Corollary VI.6.30 of [27], we obtain the convergence \((\mathcal {M}^n(\varphi ), \langle \mathcal {M}^n(\varphi )\rangle )\rightarrow (\mathcal {M}(\varphi ), \langle \mathcal {M}(\varphi )\rangle )\) in distribution. Combining this with the convergence of the quadratic variation, we conclude that \(\langle \mathcal {M}(\varphi )\rangle _t= t\Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})}\).

5.3.2 Symmetric part

For the symmetric part, we can easily show that

$$\begin{aligned} \begin{aligned} \mathcal {S}_t(\varphi ) = \frac{1}{2} \int _0^t \mathcal {X}_s(\partial _x^2\varphi ) ds. \end{aligned} \end{aligned}$$

5.3.3 Anti-symmetric part

Now, it suffices to characterize the limit point for the anti-symmetric part to complete the proof of Theorem 2.13. Define a modified fluctuation field

$$\begin{aligned} \tilde{\mathcal {X}}^n_t(\varphi ) = \frac{1}{\sqrt{n}} \sum _{j\in \mathbb {Z}} \overline{\xi }_j(t) T^-_{v_n^1t} \varphi ^n_j. \end{aligned}$$

Note that for fixed \(j\in \mathbb {Z}\), we have \(\tilde{\mathcal {X}}^n_t (\iota _\varepsilon (\frac{j-v_n^1t}{n},\cdot )) = \sqrt{n}\overrightarrow{\xi }^{\varepsilon n}_j\), recalling the definition of \(\iota _\varepsilon (x;\cdot )\) given below (2.13). Hence, we have that

$$\begin{aligned} \begin{aligned} \int _0^t \sum _{j\in \mathbb {Z}} \big ( \overrightarrow{\xi }^{\varepsilon n}_j(s)\big )^2 \nabla ^{1,n} T^-_{v_n^1s} \varphi ^n_j ds = \frac{1}{n}\int _0^t \sum _{j\in \mathbb {Z}} \tilde{\mathcal {X}}^n_s \big ( \iota _{\varepsilon }({\textstyle \frac{j-v_n^1s}{n}};\cdot ) \big )^2 \nabla ^{1,n} T^-_{v_n^1s} \varphi ^n_j ds. \end{aligned} \end{aligned}$$

In addition, set \(\Xi _j = \eta _j + c_3 \beta _n \zeta _j\). Recall that \(\mathcal {X}^n\) is the fluctuation field of the conserved quantity \(\Xi _j\), so that

$$\begin{aligned} \begin{aligned} \int _0^t \sum _{j\in \mathbb {Z}} \big (\overrightarrow{\Xi }^{\varepsilon n}_j(s)\big )^2 \nabla ^{1,n} T^-_{v_n^1s} \varphi ^n_j ds = \frac{1}{n}\int _0^t \sum _{j\in \mathbb {Z}} \mathcal {X}^n_s \big ( \iota _{\varepsilon }({\textstyle \frac{j-v_n^1s}{n}};\cdot ) \big )^2 \nabla ^{1,n} T^-_{v_n^1s} \varphi ^n_j ds. \end{aligned} \end{aligned}$$

The difference between these quantities is estimated by the following lemma.

Lemma 5.8

We have that

$$\begin{aligned} \begin{aligned}&\mathbb {E}_n \bigg [\sup _{0\le t\le T} \bigg | \int _0^t \sum _{j\in \mathbb {Z}} \big ( \big (\overrightarrow{\xi }^{\ell }_j(s)\big )^2 - \big (\overrightarrow{\Xi }^{\ell }_j(s)\big )^2 \big ) \nabla ^{1,n}\varphi ^n_j ds \bigg |^2 \bigg ] \\&\quad \lesssim T^2 \bigg ( \frac{n\beta _n^2}{\ell } + \frac{n}{\ell ^2} \bigg ) \Vert \partial _x\varphi \Vert ^2_{L^2(\mathbb {R})}. \end{aligned} \end{aligned}$$

Proof

By Schwarz’s inequality and Young’s inequality, for any \(A>0\) we have the bound

$$\begin{aligned} \begin{aligned}&\mathbb {E}_n \bigg [\sup _{0\le t\le T} \bigg | \int _0^t \sum _{j\in \mathbb {Z}} \big ( \big (\overrightarrow{\xi }^{\ell }_j(s)\big )^2 - \big (\overrightarrow{\Xi }^{\ell }_j(s)\big )^2 \big ) \nabla ^{1,n}\varphi ^n_j ds \bigg |^2 \bigg ]\\&\quad \le T^2 \sum _{j\in \mathbb {Z}} E_{\nu _n} \big [ \big \{ \big (\overrightarrow{\xi }^{\ell }_j\big )^2 - \big (\overrightarrow{\Xi }^{\ell }_j\big )^2 -\Sigma _\ell \big \}^2 \big ] (\nabla ^{1,n}\varphi ^n_j)^2 \\&\quad \le \frac{T^2}{A} \sum _{j\in \mathbb {Z}} E_{\nu _n} \big [\big (\overrightarrow{\xi }^{\ell }_j - \overrightarrow{\Xi }^{\ell }_j \big )^2 \big ] (\nabla ^{1,n}\varphi ^n_j)^2 + AT^2 \sum _{j\in \mathbb {Z}} E_{\nu _n} \big [ \big ( \overrightarrow{\xi }^{\ell }_j + \overrightarrow{\Xi }^{\ell }_j\big )^2 \big ] (\nabla ^{1,n}\varphi ^n_j)^2 \\&\qquad + 2(\Sigma _\ell )^2 \sum _{j\in \mathbb {Z}} (\nabla ^{1,n}\varphi ^n_j)^2 \\&\quad \lesssim T^2 \bigg ( \frac{n\beta _n^4}{A\ell } + \frac{An}{\ell } + \frac{n}{\ell ^2}\bigg ) \Vert \partial _x\varphi \Vert ^2_{L^2(\mathbb {R})}, \end{aligned} \end{aligned}$$

where \(\Sigma _\ell = E_{\nu _n}[(\overrightarrow{\xi }^\ell _j)^2 - (\overrightarrow{\Xi }^\ell _j)^2]\), which satisfies the bound \(\Sigma _\ell ^2\lesssim \ell ^{-2} \). Here, in the first term of the rightmost side of the last display, we used the Taylor expansion \(\xi _j-\Xi _j=\beta _n^2\varepsilon _j\) for some remainder term \(\varepsilon _j\), which is not necessarily centered. Now, choose \(A=\beta _n^{-2}\) in the above estimate to complete the proof. \(\square \)

With this estimate at hand, we now identify the limit points of the anti-symmetric part. Note that the sequence \(\{ \mathcal {X}^n\}_n\) is tight, and hence converges along a subsequence to some \(\mathcal {X}\), which clearly satisfies the condition (S). Therefore, we obtain the limit

$$\begin{aligned} \mathcal {A}^\varepsilon _{s,t} (\varphi ) = \lim _{n\rightarrow \infty } \frac{1}{n} \int _s^t \sum _{j\in \mathbb {Z}} \mathcal {X}^n_r \big ( \iota _{\varepsilon }({\textstyle \frac{j-v_n^1r}{n}};\cdot ) \big )^2 \nabla ^{n,1} T^-_{v_n^1r} \varphi ^n_j dr, \end{aligned}$$

where \(\mathcal {A}^\varepsilon _{s,t}(\varphi )\) in the left-hand side of the last display coincides with the process we defined in (2.13) with \(u=\mathcal {X}\). Here note that the convergence does not follow immediately since the function \(\iota _\varepsilon \) does not belong to \(\mathcal {S}(\mathbb {R})\). This function, however, can be approximated by elements in \(\mathcal {S}(\mathbb {R})\) so that the convergence is justified. (See [19, Sect. 5.3] for details.) Now, by Proposition 5.7 and Lemma 5.8, we have that

$$\begin{aligned} \begin{aligned}&\mathbb {E}_n \bigg [\bigg | \mathcal {B}^n_{t}(\varphi ) - \mathcal {B}^n_{s}(\varphi ) - \int _s^t \sum _{j\in \mathbb {Z}} \big ( \overrightarrow{\Xi }^{\varepsilon n}_j(r)\big )^2 \nabla ^{1,n} T^-_{v_n^1r} \varphi ^n_j dr\bigg |^2 \bigg ]\\&\quad \lesssim \bigg (\frac{(t-s)\varepsilon n}{n} + \frac{(t-s)^2n}{(\varepsilon n)^2} + \frac{n\beta _n^2}{\varepsilon n} \bigg ) \Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})}. \end{aligned} \end{aligned}$$

Then, let \(n\rightarrow +\infty \) to obtain

$$\begin{aligned} \mathbb {E} \big [\big | \mathcal {B}_{t}(\varphi ) - \mathcal {B}_{s} (\varphi ) - c_3 \mathcal {A}^\varepsilon _{s,t}(\varphi ) \big |^2 \big ] \lesssim \varepsilon (t-s) \Vert \partial _x \varphi \Vert ^2_{L^2(\mathbb {R})}. \end{aligned}$$
(5.7)

The triangle inequality then yields the energy condition (EC). As a result, by Proposition 2.11 we get the existence of the limit

$$\begin{aligned} \mathcal {A}_{t} (\varphi ) = \lim _{\varepsilon \rightarrow 0} \mathcal {A}^\varepsilon _{0,t} (\varphi ). \end{aligned}$$

Moreover, the estimate (5.7) with \(s=0\) shows that \(\mathcal {B}= c_3\mathcal {A}\).

Finally, we note that all the above estimates also hold for the reversed process \(\{ \mathcal {X}^n_{T-t}: t \in [0,T]\}\), by repeating the argument for the dynamics generated by the adjoint operator \(L^*_n\), and thus the third condition of Definition 2.12 is satisfied. It follows that the limiting process \(\mathcal {X} \) is a stationary energy solution of the SB equation (2.14), which completes the proof of Theorem 2.13.

6 Proof of Theorem 2.16: The 3/2-Lévy Regime

6.1 The quadratic martingale

Hereafter we consider the case (ii) in (2.10). Since multiplying by a constant does not affect the result, we work with the fluctuation field (2.9) multiplied by \(-\lambda ^{-1}\), namely

$$\begin{aligned} \mathcal {X}^n_t(\varphi ) = \frac{1}{\sqrt{n}}\sum _{j\in \mathbb {Z}} \big (\overline{\zeta }_j(t) - \lambda \overline{\eta }_j(t)\big ) \varphi ^n_j, \end{aligned}$$

for each \(\varphi \in \mathcal {S}(\mathbb {R})\), where we recall that \(\varphi ^n_j=\varphi (j/n)\); by an abuse of notation, we use the same symbol for the field as before. In what follows, we study the correlation function for simplicity, but note that one can also study the fluctuation fields themselves; see [7] for this setting. Recall the correlation function (2.15). Fix any compactly supported function \(g:\mathbb {R}\rightarrow \mathbb {R}\), and let \(\mathscr {S}^n_t(\cdot )\in D([0,T],\mathcal {S}^\prime (\mathbb {R}))\) be the field defined by

$$\begin{aligned} \begin{aligned} \mathscr {S}^n_t(f) = \frac{1}{n}\sum _{j,j^\prime \in \mathbb {Z}} S_{t}(j^\prime -j) g\bigg (\frac{j}{n}\bigg ) f\bigg (\frac{j^\prime }{n}\bigg ) = \frac{1}{2}\mathbb {E}_n\big [\mathcal {X}^n_0(g) \mathcal {X}^n_t(f) \big ], \end{aligned} \end{aligned}$$

for each \(f\in C^\infty _c(\mathbb {R})\). From (4.2) taking \(\mathfrak {u}_n=\mathfrak {u}^2_n=-1/\lambda \) and \(v_n=v^2_n=0\), the action of the generator \(L_n\) on the field \(\mathcal {X}^n_\cdot \) is given by

$$\begin{aligned} \begin{aligned} L_n \mathcal {X}^n_s(\varphi ) = \frac{\theta (n)}{2n^2} \mathcal {X}^n_s(\Delta ^n \varphi ) + \frac{\theta (n)\alpha _n}{n^{3/2}} \sum _{j \in \mathbb {Z}} \overline{\xi }_j(s) \overline{\xi }_{j+1}(s) \nabla ^{1,n} \varphi ^n_j + E^n_s. \end{aligned} \end{aligned}$$
(6.1)

Here the error term \(E^n_\cdot \) satisfies the bound

$$\begin{aligned} \mathbb {E}_n\bigg [ \sup _{0\le t\le T} \bigg | \int _0^t E^n_s ds\bigg |^2 \bigg ] \lesssim \frac{\theta (n)^2\alpha _n^2}{n^4}, \end{aligned}$$
(6.2)

using the fact that \(2+2\lambda \mathfrak {u}_n=0\). In the sequel, we introduce a quadratic field \(\mathcal {Q}^n_\cdot \), which belongs to \(D([0,T],\mathcal {S}^\prime (\mathbb {R}^2))\) and is defined on smooth symmetric functions \(h:\mathbb {R}^2 \rightarrow \mathbb {R}\) by

$$\begin{aligned} \mathcal {Q}^n_t(h) = \frac{1}{n} \sum _{\begin{array}{c} j,j^\prime \in \mathbb {Z},\\ j\ne j^\prime \end{array}} \overline{\xi }_j(t) \overline{\xi }_{j^\prime }(t) h \bigg ( \frac{j}{n}, \frac{j^\prime }{n} \bigg ). \end{aligned}$$
(6.3)
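As an illustrative finite-window sketch of the definition (6.3) (the actual field sums over all of \(\mathbb {Z}\), and the names below are ours, not from the text), the quadratic field evaluates as:

```python
def quadratic_field(xi_bar, n, h):
    """Q^n(h) = (1/n) * sum over j != j' of xi_bar[j] * xi_bar[j'] * h(j/n, j'/n),
    restricted to a finite window of sites."""
    total = 0.0
    for j in range(len(xi_bar)):
        for jp in range(len(xi_bar)):
            if j != jp:
                total += xi_bar[j] * xi_bar[jp] * h(j / n, jp / n)
    return total / n
```

For \(h\equiv 1\) this reduces to \(\big ( (\sum _j \overline{\xi }_j)^2 - \sum _j \overline{\xi }_j^2 \big )/n\), which gives a quick consistency check.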

Moreover, set \(\mathscr {Q}^n_t(h) = (1/2) \mathbb {E}_n \big [ \mathcal {X}^n_0(g) \mathcal {Q}^n_t(h) \big ] \). From (6.1) we have that

$$\begin{aligned} \begin{aligned} \frac{d}{dt} \mathscr {S}^n_t(f)&= \frac{1}{2} \mathbb {E}_n \big [ \mathcal {X}^n_0(g) L_n \mathcal {X}^n_t(f) \big ] \\&= \frac{\theta (n)}{2n^2} \mathscr {S}^n_t(\Delta ^nf) + \frac{2\theta (n)\alpha _n}{n^{3/2}} \mathscr {Q}^n_t(\nabla ^{n,1} f\otimes \delta ) + \tilde{E}^n_t. \end{aligned} \end{aligned}$$
(6.4)

Above,

$$\begin{aligned} (\nabla ^{n,1} f \otimes \delta ) \bigg (\frac{j}{n},\frac{j^\prime }{n} \bigg ) = \frac{n}{2} \nabla ^{1,n}f \bigg (\frac{j}{n}\bigg ) \textbf{1}_{j^\prime =j+1} + \frac{n}{2} \nabla ^{1,n}f \bigg ( \frac{j-1}{n}\bigg ) \textbf{1}_{j^\prime =j-1}, \end{aligned}$$

and \(\tilde{E}^n_t\) is an error term which satisfies the bound (6.2). In the sequel, let \(\Delta ^n\) be the two-dimensional discrete Laplacian defined by

$$\begin{aligned} \Delta ^n h \bigg (\frac{j}{n},\frac{j^\prime }{n}\bigg )= & {} n^2 \bigg [ h\bigg (\frac{j+1}{n},\frac{j^\prime }{n}\bigg ) + h\bigg (\frac{j-1}{n},\frac{j^\prime }{n}\bigg ) + h\bigg (\frac{j}{n},\frac{j^\prime +1}{n}\bigg ) + h\bigg (\frac{j}{n},\frac{j^\prime -1}{n}\bigg )\\{} & {} - 4 h\bigg (\frac{j}{n},\frac{j^\prime }{n}\bigg ) \bigg ]. \end{aligned}$$

Above, we have used the same symbol as for the one-dimensional discrete Laplacian, with an abuse of notation. Moreover, we define discrete derivative operators by

$$\begin{aligned} \mathscr {A}_n h \bigg (\frac{j}{n},\frac{j^\prime }{n}\bigg ) = n \bigg [ h \bigg (\frac{j}{n},\frac{j^\prime -1}{n}\bigg ) + h \bigg (\frac{j-1}{n},\frac{j^\prime }{n}\bigg ) - h \bigg (\frac{j}{n},\frac{j^\prime +1}{n}\bigg ) - h \bigg (\frac{j+1}{n},\frac{j^\prime }{n}\bigg )\bigg ], \end{aligned}$$

and

$$\begin{aligned} \mathscr {D}_n h \bigg (\frac{j}{n}\bigg ) =\frac{n}{2} \bigg [ h\bigg (\frac{j+1}{n},\frac{j}{n}\bigg ) + h\bigg (\frac{j}{n},\frac{j+1}{n}\bigg ) - h\bigg (\frac{j-1}{n},\frac{j}{n}\bigg ) - h\bigg (\frac{j}{n},\frac{j-1}{n}\bigg ) \bigg ]. \end{aligned}$$

Here note that \(\mathscr {A}_nh(j/n,j/n)=-2\mathscr {D}_nh(j/n)\). Additionally, define

$$\begin{aligned} \begin{aligned} \tilde{\mathscr {D}}_nh \bigg (\frac{j}{n},\frac{j^\prime }{n}\bigg )&= n^2 \bigg [ \frac{1}{2}\tilde{\mathscr {E}}_nh \bigg ( \frac{j}{n}\bigg ) - \frac{1+2\alpha _n}{2} \tilde{\mathscr {F}}_nh \bigg ( \frac{j}{n} \bigg ) \bigg ]\textbf{1}_{j^\prime =j+1} \\&\quad + n^2 \bigg [ \frac{1}{2}\tilde{\mathscr {E}}_nh \bigg ( \frac{j^\prime }{n}\bigg ) - \frac{1+2\alpha _n}{2} \tilde{\mathscr {F}}_nh \bigg ( \frac{j^\prime }{n} \bigg ) \bigg ]\textbf{1}_{j^\prime =j-1}, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \tilde{\mathscr {E}}_nh \bigg (\frac{j}{n}\bigg ) = \frac{1}{2}\bigg [ h\bigg ( \frac{j}{n},\frac{j+1}{n}\bigg ) + h\bigg ( \frac{j+1}{n},\frac{j}{n}\bigg ) - 2h\bigg ( \frac{j}{n},\frac{j}{n}\bigg ) \bigg ], \end{aligned}$$

and

$$\begin{aligned} \tilde{\mathscr {F}}_nh \bigg (\frac{j}{n}\bigg ) = h\bigg ( \frac{j+1}{n},\frac{j+1}{n}\bigg ) - h\bigg ( \frac{j}{n},\frac{j}{n}\bigg ). \end{aligned}$$

Define

$$\begin{aligned} {\mathscr {E}}_nh\bigg (\frac{j}{n},\frac{j^\prime }{n}\bigg ) = h\bigg (\frac{j+1}{n},\frac{j^\prime }{n}\bigg )-h\bigg (\frac{j}{n},\frac{j^\prime }{n}\bigg ), \end{aligned}$$

and note that

$$\begin{aligned} \mathscr {E}_nh\bigg (\frac{j}{n},\frac{j}{n}\bigg ) =\tilde{\mathscr {E}}_nh \bigg (\frac{j}{n}\bigg ). \end{aligned}$$
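The discrete operators above can be checked numerically: \(\Delta ^n\) reproduces the exact Laplacian on quadratic functions, and the diagonal identities \(\mathscr {A}_nh(j/n,j/n)=-2\mathscr {D}_nh(j/n)\) and \(\mathscr {E}_nh(j/n,j/n)=\tilde{\mathscr {E}}_nh(j/n)\) (the latter for symmetric h) hold by direct substitution. An illustrative sketch (function and variable names are ours, not from the text):

```python
import math

def discrete_laplacian(h, x, y, n):
    """Two-dimensional discrete Laplacian: n^2 * (sum of 4 neighbor values - 4 * center)."""
    e = 1.0 / n
    return n * n * (h(x + e, y) + h(x - e, y) + h(x, y + e) + h(x, y - e) - 4.0 * h(x, y))

def A_n(h, x, y, n):
    """Discrete derivative operator A_n."""
    e = 1.0 / n
    return n * (h(x, y - e) + h(x - e, y) - h(x, y + e) - h(x + e, y))

def D_n(h, x, n):
    """Diagonal discrete derivative operator D_n."""
    e = 1.0 / n
    return (n / 2.0) * (h(x + e, x) + h(x, x + e) - h(x - e, x) - h(x, x - e))

def E_n(h, x, y, n):
    """Forward difference operator E_n in the first variable."""
    return h(x + 1.0 / n, y) - h(x, y)

def E_tilde_n(h, x, n):
    """Symmetrized diagonal difference operator E~_n."""
    e = 1.0 / n
    return 0.5 * (h(x, x + e) + h(x + e, x) - 2.0 * h(x, x))

# An arbitrary symmetric test function (symmetry is needed only for the E_n identity).
h = lambda x, y: x * y + (x + y) ** 2 + math.sin(x * y)

x, n = 0.3, 40
# Delta^n of x^2 + y^2 is exactly 4 (the second differences of quadratics are exact).
assert abs(discrete_laplacian(lambda u, v: u**2 + v**2, 0.3, 0.7, 10) - 4.0) < 1e-6
# A_n on the diagonal equals -2 * D_n, for any h.
assert abs(A_n(h, x, x, n) + 2.0 * D_n(h, x, n)) < 1e-9
# E_n on the diagonal equals E~_n, for symmetric h.
assert abs(E_n(h, x, x, n) - E_tilde_n(h, x, n)) < 1e-9
```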

For a function \(h:\mathbb {R}^2\rightarrow {\mathbb {R}}\) we use the notation \(\Vert h\Vert _{2,n}\) for the discrete \(L^2\)-norm of h, i.e.,

$$\begin{aligned} \Vert h\Vert _{2,n}:=\sqrt{\frac{1}{n^2}\sum _{j,j'\in \mathbb Z}h\bigg (\frac{j}{n},\frac{j'}{n}\bigg )^2}. \end{aligned}$$
(6.5)
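The norm (6.5) is a Riemann-sum approximation of \(\Vert h\Vert _{L^2(\mathbb {R}^2)}\). As an illustration only (names ours), for the Gaussian \(h(x,y)=e^{-x^2-y^2}\) one has \(\Vert h\Vert ^2_{L^2(\mathbb {R}^2)}=\pi /2\), and a truncated version of the discrete norm matches closely:

```python
import math

def discrete_l2_norm_sq(h, n, radius):
    """Squared discrete norm: (1/n^2) * sum of h(j/n, j'/n)^2 over |j|, |j'| <= radius * n."""
    total = 0.0
    for j in range(-radius * n, radius * n + 1):
        for jp in range(-radius * n, radius * n + 1):
            total += h(j / n, jp / n) ** 2
    return total / (n * n)

# Integral of exp(-2x^2 - 2y^2) over R^2 equals pi/2.
h = lambda x, y: math.exp(-x * x - y * y)
```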

Then the action of the generator on the field \(\mathcal {Q}^n\) defined in (6.3) is computed as follows.

Lemma 6.1

We have that

$$\begin{aligned} \begin{aligned} L_n \mathcal {Q}^n_t(h)&= \frac{\theta (n)}{2n^2} \mathcal {Q}^n_t(\Delta ^nh - 2n\alpha _n \mathscr {A}_nh) + \frac{2\theta (n)\alpha _n}{n^{3/2}} \mathcal {X}^n_t(\mathscr {D}_nh)\\&\quad + \frac{2\theta (n)}{n^2} \mathcal {Q}^n_t(\tilde{\mathscr {D}_n} h) + E^n_t(h), \end{aligned} \end{aligned}$$

where the error term satisfies the bound

$$\begin{aligned} \begin{aligned}&\mathbb {E}_n \bigg [\sup _{0\le t \le T} \bigg | \int _0^t E^n_s(h)ds \bigg |^2 \bigg ] \\&\quad \lesssim \theta (n)^2 \alpha _n^2 c_{k_*}^2\beta _n^{2(k_*-2)} \bigg ( \frac{T^2}{n^2} \Vert \mathscr {A}_nh\Vert ^2_{2,n} + \frac{T^2}{n^2} \Vert \mathscr {E}_nh\Vert ^2_{2,n} + \frac{T}{\theta (n)} \Vert h\Vert ^2_{2,n} \bigg ). \end{aligned} \end{aligned}$$
(6.6)

Here recall that \(k_*\in \{3,4,\ldots \}\) denotes the smallest integer such that \(c_{k_*}=V^{(k_*)}(0)\ne 0\). Moreover, when \(k_*=3\), the above error bound is improved as follows: for any \(\ell \in \mathbb {N}\) we have that

$$\begin{aligned} \begin{aligned} \mathbb {E}_n \bigg [\sup _{0\le t \le T} \bigg | \int _0^t E^n_s(h)ds \bigg |^2 \bigg ]&\lesssim \theta (n)^2 \alpha _n^2 c_{3}^2\beta _n^{2} \bigg [ \frac{T^2}{n^2} \Vert \mathscr {A}_nh \Vert ^2_{2,n} + \bigg (\frac{T^2\ell }{\theta (n)n^2} + \frac{T}{n^2\ell }\bigg ) \Vert \mathscr {E}_nh \Vert ^2_{2,n} \\&\quad + \frac{T}{\theta (n)n^2} \sum _{j\in \mathbb {Z}} h\bigg ( \frac{j}{n},\frac{j+1}{n}\bigg )^2 \bigg ]. \end{aligned}\nonumber \\ \end{aligned}$$
(6.7)

Proof

Here we derive the principal part of the action of the generator on the quadratic field \(\mathcal {Q}^n_\cdot \). The error bounds (6.6) and (6.7) will be given in Lemmas 6.2 and 6.3, respectively. Let us begin with a computation of the symmetric part. Note that

$$\begin{aligned} \begin{aligned} 2S(\overline{\xi }_j \overline{\xi }_{j^\prime })&= \overline{\xi }_{j^\prime } \Delta \xi _j + \overline{\xi }_{j} \Delta \xi _{j^\prime } - (\xi _{j+1}-\xi _{j})^2 \textbf{1}_{j^\prime =j+1} - (\xi _{j}-\xi _{j-1})^2 \textbf{1}_{j^\prime =j-1}\\&\quad + [(\xi _{j+1}-\xi _{j})^2 + (\xi _j-\xi _{j-1})^2] \textbf{1}_{j^\prime =j}, \end{aligned} \end{aligned}$$

where \(\Delta \) denotes the discrete Laplacian defined by \(\Delta g_j= g_{j+1} + g_{j-1}-2g_j\) for each \((g_j)_{j}\). Moreover, we use the short-hand notation \(h_{j,j^\prime }=h(j/n,j^\prime /n)\) in the proof. Then we compute

$$\begin{aligned} \begin{aligned} S\mathcal {Q}^n_t(h)&= \frac{1}{2n} \sum _{j\ne j^\prime } (\overline{\xi }_{j^\prime } \Delta \xi _j + \overline{\xi }_{j} \Delta \xi _{j^\prime }) h_{j,j^\prime } - \frac{1}{2n} \sum _{j\in \mathbb {Z}} (\xi _{j+1}-\xi _{j})^2 (h_{j,j+1}+h_{j+1,j})\\&= \frac{1}{2n^3} \sum _{j,j^\prime } \overline{\xi }_{j} \overline{\xi }_{j^\prime } (\Delta ^n h)_{j,j^\prime } - \frac{1}{n} \sum _{j\in \mathbb {Z}} \overline{\xi }_j \Delta \xi _j h_{j,j} - \frac{1}{2n} \sum _{j\in \mathbb {Z}} (\xi _{j+1}-\xi _{j})^2 (h_{j,j+1}+h_{j+1,j})\\&= \frac{1}{2n^2} \mathcal {Q}^n_t (\Delta ^n h) + \frac{1}{2n^3} \sum _{j\in \mathbb {Z}} (\overline{\xi }_j)^2 (\Delta ^n h)_{j,j} - \frac{1}{n} \sum _{j\in \mathbb {Z}} \overline{\xi }_j \Delta \xi _j h_{j,j}\\&\quad - \frac{1}{2n} \sum _{j\in \mathbb {Z}} (\xi _{j+1}-\xi _{j})^2 (h_{j,j+1}+h_{j+1,j})\\&= \frac{1}{2n^2} \mathcal {Q}^n_t(\Delta ^nh) + \frac{1}{n} \sum _{j\in \mathbb {Z}} \overline{\xi }_j \overline{\xi }_{j+1} (h_{j,j+1}+h_{j+1,j} -h_{j,j}-h_{j+1,j+1}). \end{aligned} \end{aligned}$$

On the other hand, we have for the anti-symmetric part that

$$\begin{aligned} A(\overline{\xi }_j \overline{\xi }_{j^\prime }) = (\xi _{j-1} - \xi _{j+1}) \xi ^\prime _j \overline{\xi }_{j^\prime } + (\xi _{j^\prime -1} - \xi _{j^\prime +1}) \xi ^\prime _{j^\prime } \overline{\xi }_{j}. \end{aligned}$$

Here, note that \(\xi ^\prime _j = V_{\beta _n}^{\prime \prime }(\eta _j) = 1+ \frac{c_{k_*}}{(k_*-2)!}(\beta _n\xi _j)^{k_*-2} + O(\beta _n^{k_*-1})\). Accordingly, we have that

$$\begin{aligned} A\mathcal {Q}^n_t(h) = I_n + \frac{c_{k_*}}{(k_*-2)!}\beta _n^{k_*-2} J_n + O(\beta _n^{k_*-1}), \end{aligned}$$

where

$$\begin{aligned} I_n = \frac{1}{n} \sum _{j\ne j^\prime } [(\xi _{j-1}-\xi _{j+1})\overline{\xi }_{j^\prime } + (\xi _{j^\prime -1}-\xi _{j^\prime +1}) \overline{\xi }_j] h_{j,j^\prime }, \end{aligned}$$

and

$$\begin{aligned} J_n = \frac{1}{n} \sum _{j\ne j^\prime } [(\xi _{j-1}-\xi _{j+1}){g_*}(\xi _j) \overline{\xi }_{j^\prime } + (\xi _{j^\prime -1}-\xi _{j^\prime +1}) {g_*}(\xi _{j^\prime }) \overline{\xi }_j] h_{j,j^\prime }. \end{aligned}$$

Above, we defined the function \(g_*\) by

$$\begin{aligned} g_*(x)=x^{k_*-2}. \end{aligned}$$
(6.8)

Then, we have

$$\begin{aligned} \begin{aligned} I_n&= \frac{1}{n} \sum _{j,j^\prime } [(\xi _{j-1}-\xi _{j+1}) \overline{\xi }_{j^\prime } + (\xi _{j^\prime -1} - \xi _{j^\prime +1}) \overline{\xi }_{j}]h_{j,j^\prime } - \frac{2}{n} \sum _{j\in \mathbb {Z}} (\xi _{j-1}-\xi _{j+1})\overline{\xi }_j h_{j,j} \\&= \frac{1}{n^2}\sum _{j,j^\prime } \overline{\xi }_{j} \overline{\xi }_{j^\prime } (\mathscr {A}_n h)_{j,j^\prime } - \frac{2}{n^2} \sum _{j\in \mathbb {Z}} \overline{\xi }_j \overline{\xi }_{j+1} (\tilde{\mathscr {F}}_n h)_j \\&= -\frac{1}{n}\mathcal {Q}^n_t(\mathscr {A}_nh) - \frac{1}{n^2} \sum _{j\in \mathbb {Z}} (\overline{\xi }_{j})^2 (\mathscr {A}_nh)_{j,j} - \frac{2}{n^2} \sum _{j\in \mathbb {Z}} \overline{\xi }_j \overline{\xi }_{j+1} (\tilde{\mathscr {F}}_n h)_j. \end{aligned} \end{aligned}$$
(6.9)

Additionally, note that

$$\begin{aligned} \begin{aligned} (\overline{\xi }_j)^2&= \bigg ( \eta _j -\lambda + \frac{c_{k_*}}{(k_*-1)!} \beta _n^{k_*-2} \eta _j^{k_*-1} + O(\beta _n^{k_*-1}) \bigg )^2 \\&= \lambda ^2 + \eta _j^2 - 2\lambda \eta _j + \frac{2c_{k_*}}{(k_*-1)!} \beta _n^{k_*-2} \eta _j^{k_*-1} (\eta _j-\lambda ) + O(\beta _n^{k_*-1}) \\&=\lambda ^2 + 2(\zeta _j - \lambda \eta _j) + \frac{2c_{k_*}}{k_*!} \beta _n^{k_*-2} \eta _j^{k_*-1} \big [(k_*-1)\eta _j-k_*\lambda \big ] + O(\beta _n^{k_*-1}). \end{aligned} \end{aligned}$$

In particular, the second term on the rightmost side of (6.9) gives \(2\theta (n)\alpha _nn^{-3/2}\mathcal {X}^n_t(\mathscr {D}_nh)\), up to an error term which comes from the third term of the last display and is substituted back into the second term in the last line of (6.9). This error term is treated with the Schwarz inequality in the following way.

$$\begin{aligned} \begin{aligned}&\mathbb {E}_n \bigg [ \sup _{0\le t \le T} \bigg | \frac{1}{n^2} \int _0^t \sum _{j\in \mathbb {Z}} F(\eta _j(s)) (\mathscr {A}_n h)_{j,j}(s) ds\bigg |^2 \bigg ]\\&\quad \le \frac{T}{n^4} \int _0^T \mathbb {E}_n \bigg [ \bigg ( \sum _{j\in \mathbb {Z}} F(\eta _j(s)) (\mathscr {A}_nh)_{j,j}(s) \bigg )^2 \bigg ] ds \lesssim \frac{T^2}{n^2} \Vert \mathscr {A}_nh\Vert ^2_{2,n}, \end{aligned} \end{aligned}$$

where \(F(\eta )=\eta ^{k_*-1}(\eta -\lambda )\), and in the second estimate we used the fact that for any symmetric function h it holds that \(\sum _{j} (\mathscr {A}_nh)_{j,j}=0\). This bound gives the error term proportional to \(\Vert \mathscr {A}_nh\Vert ^2_{2,n}\) in (6.6). Finally, the estimates (6.6) and (6.7) for \(J_n\) follow from Lemma 6.2 with \(g=g_*\) and Lemma 6.3 below, respectively. \(\square \)

In the sequel, we give a quantitative estimate for the error term \(J_n\) which appears in the proof of Lemma 6.1. For that purpose, we make use of the following Kipnis–Varadhan estimate [15, 33]: for each smooth mean-zero function \(F:[0,T]\times \Omega \rightarrow {\mathbb {R}}\),

$$\begin{aligned} \begin{aligned} \mathbb {E}_n \bigg [\sup _{0\le t \le T} \bigg | \int _0^t F(s,\eta (s)) ds \bigg |^2 \bigg ] \lesssim \int _0^T \Vert F(s,\cdot )\Vert ^2_{-1,n} ds, \end{aligned} \end{aligned}$$

where the \(\Vert \cdot \Vert _{-1,n}\)-norm is defined through the variational formula

$$\begin{aligned} \Vert F\Vert ^2_{-1,n} =\sup _{f} \bigg \{ 2\langle F,f\rangle _{L^2(\nu _n)} - \langle f, -\theta (n) Sf\rangle _{L^2(\nu _n)} \bigg \}, \end{aligned}$$

where the supremum is taken over all local functions \(f\in L^2(\nu _n)\).
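When \(-\theta (n)S\) is invertible on mean-zero functions, the supremum is attained at \(f\) solving \(-\theta (n)Sf=F\), which gives the familiar closed form (stated here only as a heuristic identity):

$$\begin{aligned} \Vert F\Vert ^2_{-1,n} = \big \langle F, (-\theta (n)S)^{-1}F \big \rangle _{L^2(\nu _n)}. \end{aligned}$$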

Lemma 6.2

Let

$$\begin{aligned} J_n^g = \frac{1}{n} \sum _{j\ne j^\prime } [(\xi _{j-1}-\xi _{j+1}) g(\xi _j) \overline{\xi }_{j^\prime } + (\xi _{j^\prime -1}-\xi _{j^\prime +1}) g(\xi _{j^\prime }) \overline{\xi }_j ] h\bigg ( \frac{j}{n},\frac{j^\prime }{n}\bigg ), \end{aligned}$$
(6.10)

where \(g:\mathbb {R}\rightarrow \mathbb {R}\) is any continuous function such that \(E_{\nu _n}[g(\xi _j)^2]<+\infty \). Then,

$$\begin{aligned} \mathbb {E}_n \bigg [ \sup _{0\le t \le T} \bigg | \int _0^t J_n^g(s) ds \bigg |^2 \bigg ] \lesssim \frac{T^2}{n^2} \Vert \mathscr {E}_nh\Vert ^2_{2,n} + \frac{T}{\theta (n)} \Vert h\Vert ^2_{2,n}. \end{aligned}$$
(6.11)

Proof

In this proof, we use the short-hand notation \(h_{j,j^\prime }=h(j/n,j^\prime /n)\). In the sequel, we may assume that \(g(\xi _j)\) is centered, namely that \(E_{\nu _n}[g(\xi _j)]=0\). Let \(\Psi _j = \xi _j g(\xi _{j+1}) - \xi _{j+1} g(\xi _j)\). Since h is symmetric, we have

$$\begin{aligned} \begin{aligned} J_n^g&= \frac{2}{n} \sum _{j\ne j^\prime } (\xi _{j-1}-\xi _{j+1}) g(\xi _j) \overline{\xi }_{j^\prime } h_{j,j^\prime } \\&= \frac{2}{n} \sum _{j\ne j^\prime } \big ( \xi _{j-1}g(\xi _j) - \xi _{j} g(\xi _{j+1}) \big ) \overline{\xi }_{j^\prime } h_{j,j^\prime } + \frac{2}{n} \sum _{j\ne j^\prime } \big ( \xi _j g(\xi _{j+1}) - \xi _{j+1} g(\xi _j) \big ) \overline{\xi }_{j^\prime } h_{j,j^\prime } \\&= \frac{2}{n} \sum _{j,j^\prime } \big ( \xi _{j-1}g(\xi _j) - \xi _{j} g(\xi _{j+1}) \big ) \overline{\xi }_{j^\prime } h_{j,j^\prime } + \frac{2}{n} \sum _{j^\prime \ne j,j+1} \big ( \xi _j g(\xi _{j+1}) - \xi _{j+1} g(\xi _j) \big ) \overline{\xi }_{j^\prime } h_{j,j^\prime } \\&\quad - \frac{2}{n} \sum _{j\in \mathbb {Z}} \big ( \xi _{j-1}g(\xi _j)-\xi _jg(\xi _{j+1}) \big ) \overline{\xi }_j h_{j,j} + \frac{2}{n} \sum _{j\in \mathbb {Z}} \big ( \xi _j g(\xi _{j+1}) - \xi _{j+1} g(\xi _j) \big ) \overline{\xi }_{j+1} h_{j,j+1} \\&= \frac{2}{n^2} \sum _{j,j^\prime } \overline{\xi }_j g(\xi _{j+1}) \overline{\xi }_{j^\prime } (\mathscr {E}_n h)_{j,j^\prime } + \frac{2}{n} \sum _{j^\prime \ne j,j+1} \Psi _j \overline{\xi }_{j^\prime } h_{j,j^\prime } \\&\quad - \frac{2}{n} \sum _{j\in \mathbb {Z}} \big ( \xi _{j-1}g(\xi _j)-\xi _jg(\xi _{j+1}) \big ) \overline{\xi }_j h_{j,j} + \frac{2}{n} \sum _{j\in \mathbb {Z}} \big ( \xi _j g(\xi _{j+1}) - \xi _{j+1} g(\xi _j) \big ) \overline{\xi }_{j+1} h_{j,j+1} \\&=:H_1 + H_2 + H_3, \end{aligned} \end{aligned}$$

where we set

$$\begin{aligned} H_1 = \frac{2}{n^2} \sum _{j,j^\prime } \overline{\xi }_j g(\xi _{j+1}) \overline{\xi }_{j^\prime } (\mathscr {E}_n h)_{j,j^\prime },\quad H_2= \frac{2}{n} \sum _{j^\prime \ne j,j+1} \Psi _j \overline{\xi }_{j^\prime } h_{j,j^\prime }, \end{aligned}$$

and

$$\begin{aligned} H_3 = - \frac{2}{n} \sum _{j\in \mathbb {Z}} \big ( \xi _{j-1}g(\xi _j)-\xi _jg(\xi _{j+1}) \big ) \overline{\xi }_j h_{j,j} + \frac{2}{n} \sum _{j\in \mathbb {Z}} \big ( \xi _j g(\xi _{j+1}) - \xi _{j+1} g(\xi _j) \big ) \overline{\xi }_{j+1} h_{j,j+1}. \end{aligned}$$

First, \(H_1\) is estimated by a direct \(L^2\)-computation as follows.

$$\begin{aligned} \begin{aligned}&\mathbb {E}_n \bigg [ \sup _{0\le t \le T} \bigg | \int _0^t \frac{2}{n^2} \sum _{j\ne j^\prime } \overline{\xi }_j g(\xi _{j+1}) \overline{\xi }_{j^\prime } (\mathscr {E}_nh)_{j,j^\prime } ds\bigg |^2 \bigg ] \\&\quad \lesssim \frac{T}{n^4} \int _0^T \mathbb {E}_n \bigg [ \bigg ( \sum _{j\ne j^\prime } {\bar{\xi }}_j g(\xi _{j+1}) \overline{\xi }_{j^\prime } (\mathscr {E}_nh)_{j,j^\prime } \bigg )^2 \bigg ] ds \lesssim \frac{T^2}{n^2} \Vert \mathscr {E}_nh \Vert ^2_{2,n}. \end{aligned} \end{aligned}$$

To estimate \(H_2\), we apply the Kipnis–Varadhan inequality. Note that \(\Psi _j\) satisfies \(\Psi _j(\eta ^{j,j+1})=-\Psi _j(\eta )\). Then, by Young’s inequality, we have for any local function \(f\in L^2(\nu _n)\) that

$$\begin{aligned} \begin{aligned}&\bigg \langle \frac{4}{n}\sum _{j\ne j^\prime } \Psi _j(\eta ) \overline{\xi }_{j^\prime } h_{j,j^\prime }, f(\eta ) \bigg \rangle _{L^2(\nu _n)}\\&\quad = \frac{2}{n} E_{\nu _n} \bigg [\sum _{j\in \mathbb {Z}} \Psi _j(\eta ) \nabla _{j,j+1}f(\eta ) \sum _{j^\prime \in \mathbb {Z}} \overline{\xi }_{j^\prime } h_{j,j^\prime } \bigg ] \\&\quad \le \frac{A}{n} \sum _{j\in \mathbb {Z}} E_{\nu _n} [(\nabla _{j,j+1}f(\eta ))^2] + \frac{1}{An} \sum _{j\in \mathbb {Z}} E_{\nu _n} \bigg [ \Psi _j(\eta )^2 \bigg (\sum _{j^\prime \in \mathbb {Z}} \overline{\xi }_{j^\prime } h_{j,j^\prime }\bigg )^2 \bigg ], \end{aligned} \end{aligned}$$

for any \(A>0\). In particular, when \(A=n\theta (n)/4\), the first term on the rightmost side is nothing but the Dirichlet form associated with the generator \(\theta (n)S\). The remaining term is estimated by

$$\begin{aligned} \begin{aligned} \frac{4}{n^2\theta (n)} \sum _{j,j^\prime } E_{\nu _n} [\Psi _j(\eta )^2 (\overline{\xi }_{j^\prime })^2 h^2_{j,j^\prime } ] \lesssim \frac{1}{n^2\theta (n)} \sum _{j,j^\prime } h^2_{j,j^\prime }. \end{aligned} \end{aligned}$$

We obtain the desired bound by the Kipnis–Varadhan inequality, i.e.,

$$\begin{aligned} \begin{aligned} \mathbb {E}_n \bigg [\sup _{0\le t \le T} \bigg | \int _0^t \frac{2}{n} \sum _{j^\prime \ne j,j+1} \Psi _j(s) \overline{\xi }_{j^\prime }(s) h_{j,j^\prime } ds \bigg |^2 \bigg ] \lesssim \frac{T^2}{\theta (n)} \Vert h\Vert ^2_{2,n}. \end{aligned} \end{aligned}$$

Finally, for \(H_3\), we compute

$$\begin{aligned} \begin{aligned} H_3&= - \frac{2}{n} \sum _{j\in \mathbb {Z}} \overline{\xi }_{j}\overline{\xi }_{j+1} g(\xi _{j+1}) (h_{j+1,j+1}-h_{j,j+1}) + \frac{2}{n} \sum _{j\in \mathbb {Z}} \big ((\overline{\xi }_j)^2 g(\xi _{j+1}) h_{j,j}\\&\quad - (\overline{\xi }_{j+1})^2 g(\xi _j) h_{j,j+1}\big ) \\&= - \frac{2}{n^2} \sum _{j\in \mathbb {Z}} \overline{\xi }_{j-1}\overline{\xi }_j g(\xi _j) \big ( (\tilde{\mathscr {F}}_nh)_{j} -(\tilde{\mathscr {E}}_n h)_j \big ) - \frac{2}{n^2} \sum _{j\in \mathbb {Z}} (\overline{\xi }_j)^2 g(\xi _{j+1}) (\tilde{\mathscr {E}}_n h)_j \\&\quad + \frac{2}{n} \sum _{j\in \mathbb {Z}} \big ( (\overline{\xi }_j)^2 g(\xi _{j+1}) - (\overline{\xi }_{j+1})^2 g(\xi _j) \big ) h_{j,j+1}. \end{aligned} \end{aligned}$$

This is bounded as follows:

$$\begin{aligned} \begin{aligned}&\mathbb {E}_n \bigg [ \sup _{0\le t \le T} \bigg |\int _0^t H_3(s) ds \bigg |^2 \bigg ]\\&\quad \lesssim \frac{T^2}{n^4} \bigg (\sum _{j\in \mathbb {Z}} (\tilde{\mathscr {F}}_n h)_j^2 + \sum _{j\in \mathbb {Z}} (\tilde{\mathscr {E}}_n h)_j^2 \bigg ) + \frac{T}{\theta (n)n^2} \bigg ( \sum _{j\in \mathbb {Z}} (\tilde{\mathscr {E}}_n h)_j^2 + \sum _{j\in \mathbb {Z}} (h_{j,j})^2 \bigg ) \\&\quad \le \frac{T^2}{n^4} \bigg (\sum _{j,j^\prime \in \mathbb {Z}} (\mathscr {F}_n h)_{j,j^\prime }^2 + \sum _{j,j^\prime \in \mathbb {Z}} (\mathscr {E}_n h)_{j,j^\prime }^2 \bigg ) + \frac{T}{\theta (n)n^2} \bigg ( \sum _{j,j^\prime \in \mathbb {Z}} (\mathscr {E}_n h)_{j,j^\prime }^2 + \sum _{j,j^\prime \in \mathbb {Z}} h_{j,j^\prime }^2 \bigg ) \\&\quad \lesssim \frac{T^2}{n^2} \bigg ( \Vert \mathscr {F}_nh \Vert ^2_{L^2(\mathbb {R}^2)} + \Vert \mathscr {E}_n h\Vert ^2_{L^2(\mathbb {R}^2)}\bigg ) + \frac{T}{\theta (n)} \bigg ( \Vert \mathscr {E}_n h\Vert ^2_{L^2(\mathbb {R}^2)} + \Vert h \Vert ^2_{L^2(\mathbb {R}^2)} \bigg ). \end{aligned} \end{aligned}$$

These bounds can be absorbed into those of \(H_1\) and \(H_2\), noting that \((\mathscr {F}_nh)_{j,j^\prime } = (\mathscr {E}_nh)_{j,j^\prime +1} - (\mathscr {E}_nh)_{j,j^\prime }\). Hence the proof is complete. \(\square \)

On the other hand, for the case \(k_*=3\), we have the following improved estimate, which follows from the fact that \(J_n^{g_*}\) can be written in gradient form when \(k_*=3\), where \(g_*\) is the function defined in (6.8).

Lemma 6.3

(The case \(k_*=3\)). Let

$$\begin{aligned} {\tilde{J}_n} = \frac{1}{n} \sum _{j\ne j^\prime } [({\bar{\xi }}_{j-1}{\bar{\xi }}_j-{\bar{\xi }}_j{\bar{\xi }}_{j+1}) \overline{\xi }_{j^\prime } + ({\bar{\xi }}_{j^\prime -1}{\bar{\xi }}_{j^\prime }-{\bar{\xi }}_{j^\prime }{\bar{\xi }}_{j^\prime +1})\overline{\xi }_j] h\bigg ( \frac{j}{n},\frac{j^\prime }{n}\bigg ). \end{aligned}$$

Then, for any \(\ell \in \mathbb {N}\), we have that

$$\begin{aligned} \begin{aligned} \mathbb {E}_n \bigg [\sup _{0\le t\le T}\bigg |\int _0^t {\tilde{J}_n}(s)ds \bigg |^2\bigg ] \lesssim \bigg ( \frac{T^2\ell ^2}{n^2\theta (n)} + \frac{T}{n^2\ell } \bigg ) \Vert \mathscr {E}_nh\Vert ^2_{2,n} + \frac{T}{\theta (n)n^2} \sum _{j\in \mathbb {Z}} h\bigg ( \frac{j}{n},\frac{j+1}{n}\bigg )^2. \end{aligned} \end{aligned}$$

Proof

Here we simply write \(h_{j,j^\prime }=h(j/n,j^\prime /n)\), as in the proof of Lemma 6.1. Since h is symmetric we have that

$$\begin{aligned} \begin{aligned} {{\tilde{J}}_n}&= \frac{2}{n}\sum _{j\ne j^\prime } (\overline{\xi }_{j-1}\overline{\xi }_{j} - \overline{\xi }_{j}\overline{\xi }_{j+1}) \overline{\xi }_{j^\prime }h_{j,j^\prime } \\&= \frac{2}{n}\sum _{j,j^\prime } (\overline{\xi }_{j-1}\overline{\xi }_{j} - \overline{\xi }_{j}\overline{\xi }_{j+1}) \overline{\xi }_{j^\prime }h_{j,j^\prime } -\frac{2}{n}\sum _{j} (\overline{\xi }_j)^2 (\overline{\xi }_{j-1}-\overline{\xi }_{j+1}) h_{j,j}\\&= \frac{2}{n^2}\sum _{j,j^\prime } \overline{\xi }_{j} \overline{\xi }_{j+1} \overline{\xi }_{j^\prime } (\mathscr {E}_nh)_{j,j^\prime } - \frac{2}{n}\sum _j (\overline{\xi }_j)^2 (\overline{\xi }_{j-1}-\overline{\xi }_{j+1}) h_{j,j}\\&= \frac{2}{n^2}\sum _{j^\prime \ne j,j+1} \overline{\xi }_{j} \overline{\xi }_{j+1} \overline{\xi }_{j^\prime } (\mathscr {E}_nh)_{j,j^\prime } - \frac{2}{n}\sum _j (\overline{\xi }_j)^2 (\overline{\xi }_{j-1}-\overline{\xi }_{j+1}) h_{j,j}\\&\quad + \frac{2}{n}\sum _{j\in \mathbb {Z}} (\overline{\xi }_j)^2 \overline{\xi }_{j+1} (h_{j+1,j}-h_{j,j}) + \frac{2}{n}\sum _{j\in \mathbb {Z}} (\overline{\xi }_{j+1})^2 \overline{\xi }_j (h_{j+1,j+1}-h_{j,j+1}) \\&= \frac{2}{n^2}\sum _{j^\prime \ne j,j+1} \overline{\xi }_{j} \overline{\xi }_{j+1} \overline{\xi }_{j^\prime } (\mathscr {E}_nh)_{j,j^\prime } + \frac{2}{n} \sum _{j\in \mathbb {Z}} \big ((\overline{\xi }_j)^2 \overline{\xi }_{j+1} - (\overline{\xi }_{j+1})^2 \overline{\xi }_j \big ) h_{j,j+1} =:K_1 + K_2. \end{aligned} \end{aligned}$$
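The decomposition \({\tilde{J}}_n = K_1 + K_2\) is purely algebraic and can be verified numerically on finitely supported data. The following is a minimal sketch, assuming \(h\) is symmetric and \((\mathscr {E}_nh)_{j,j^\prime }=n(h_{j+1,j^\prime }-h_{j,j^\prime })\), the discrete gradient in the first variable consistent with the Fourier multiplier used in Sect. 6.3 (these conventions are not restated in this section).

```python
import random

# Numerical check of tilde{J}_n = K_1 + K_2 on finitely supported data.
# Assumptions (hypothetical conventions, see lead-in): h symmetric,
# (E_n h)_{j,j'} = n * (h_{j+1,j'} - h_{j,j'}).
random.seed(3)
n, M = 10, 6                                          # scaling parameter, support radius
xi = {j: random.random() for j in range(-M, M + 1)}   # stands in for xi-bar
h = {}
for j in range(-M, M + 1):
    for jp in range(-M, j + 1):
        h[j, jp] = h[jp, j] = random.random()         # symmetric h

def xb(j): return xi.get(j, 0.0)
def hh(j, jp): return h.get((j, jp), 0.0)
def grad(j, jp): return n * (hh(j + 1, jp) - hh(j, jp))   # (E_n h)_{j,j'}

R = range(-M - 2, M + 3)   # covers every nonzero term, including shifts
lhs = (2 / n) * sum((xb(j - 1) * xb(j) - xb(j) * xb(j + 1)) * xb(jp) * hh(j, jp)
                    for j in R for jp in R if jp != j)
K1 = (2 / n ** 2) * sum(xb(j) * xb(j + 1) * xb(jp) * grad(j, jp)
                        for j in R for jp in R if jp not in (j, j + 1))
K2 = (2 / n) * sum((xb(j) ** 2 * xb(j + 1) - xb(j + 1) ** 2 * xb(j)) * hh(j, j + 1)
                   for j in R)
assert abs(lhs - (K1 + K2)) < 1e-9
```

The check passes for arbitrary real data precisely because the diagonal terms collected in the derivation cancel identically once \(h_{j+1,j}=h_{j,j+1}\) is used.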

To estimate \(K_1\) we proceed as follows. Fix \(\ell \in \mathbb {N}\). We split the sum into three cases: either \(j^\prime \in R_{j+1}^\ell :=\{j+1,\cdots , j+1+\ell \} \), or \(j^\prime \in L_{j}^\ell :=\{j-\ell ,\cdots , j-1\}\), or \(j^\prime \notin {R_{j+1}^\ell \cup L_{j}^\ell }\). All cases are treated analogously. In the first case we replace \(\overline{\xi }_j\) by \(\overleftarrow{\xi }_j^\ell \), in the second we replace \(\overline{\xi }_{j+1}\) by \(\overrightarrow{\xi }_{j+1}^\ell \), and in the third case we perform either one of these replacements (it does not matter which). Note that by the one-block estimate, see [19] for example, we have that

$$\begin{aligned} \mathbb {E}_n \bigg [ \sup _{0\le t \le T} \bigg | \int _0^t \frac{1}{n^2} \sum _{j\in {\mathbb {Z}}}\sum _{j^\prime \notin {L_j^\ell }} \big (\overline{\xi }_j(s)-\overleftarrow{\xi }^\ell _j(s)\big ) \overline{\xi }_{j+1}(s) \overline{\xi }_{j^\prime }(s) (\mathscr {E}_nh)_{j,j^\prime } ds\bigg |^2 \bigg ] \le \frac{T\ell ^2}{n^2\theta (n)}\Vert \mathscr {E}_nh\Vert ^2_{2,n}. \end{aligned}$$
(6.12)

Indeed, to obtain the last estimate, we proceed as follows. Set

$$\begin{aligned} \sum _{j\in {\mathbb {Z}}} \sum _{j^\prime \notin {L_j^\ell }} (\overline{\xi }_j - \overleftarrow{\xi }^\ell _j) \overline{\xi }_{j+1} \overline{\xi }_{j^\prime } (\mathscr {E}_nh)_{j,j^\prime } = \sum _{j\in {\mathbb {Z}}} (\overline{\xi }_j - \overleftarrow{\xi }^\ell _j) \overline{\xi }_{j+1} \Phi _j, \end{aligned}$$

where \(\Phi _j = \sum _{j^\prime \notin {L_j^\ell }} \overline{\xi }_{j^\prime } (\mathscr {E}_nh)_{j,j^\prime }\). We have that

$$\begin{aligned} \begin{aligned} \sum _{j\in \mathbb {Z}} (\overline{\xi }_j - \overleftarrow{\xi }^\ell _j) \overline{\xi }_{j+1} \Phi _j = \sum _{j\in \mathbb {Z}} \sum _{i=0}^{\ell -2} \overline{\xi }_{j+1} (\xi _{j-i}-\xi _{j-i-1}) \psi _i \Phi _j = \sum _{k\in \mathbb {Z}} F_k (\xi _k -\xi _{k-1}), \end{aligned} \end{aligned}$$

where \(F_k=\sum _{i=0}^{\ell -2} \overline{\xi }_{k+i+1} \Phi _{k+i} \psi _{i}\) and \(\psi _i = (\ell -i-1)/\ell \). Note that the function \(F_k\) is invariant under the map \(\eta \mapsto \eta ^{k,k-1}\). Therefore, for any \(f\in L^2(\nu _n)\), it holds that

$$\begin{aligned} \begin{aligned} \bigg \langle 2 \sum _{j\in \mathbb {Z}} (\xi _j - \overleftarrow{\xi }^\ell _j) \overline{\xi }_{j+1} \Phi _j, f(\eta ) \bigg \rangle _{L^2(\nu _n)}&= 2 \sum _{j\in \mathbb {Z}} E_{\nu _n} [F_j (\xi _j-\xi _{j-1})f] \\&= 2 \sum _{j\in \mathbb {Z}} E_{\nu _n}[F_{j+1}\overline{\xi }_j (\nabla _{j,j+1}f(\eta ))]. \end{aligned} \end{aligned}$$

Then by Young’s inequality for any \(A>0\) the above display is bounded by

$$\begin{aligned} \begin{aligned} A \sum _{j\in \mathbb {Z}} E_{\nu _n} [(\nabla _{j,j+1}f)^2] + \frac{1}{A} \sum _{j\in \mathbb {Z}} E_{\nu _n} [F_{j+1}^2 (\overline{\xi }_j)^2]. \end{aligned} \end{aligned}$$

In particular, when we take \(A=\theta (n)/4\), the first term is nothing but the Dirichlet form. For the remainder term, note that

$$\begin{aligned} E_{\nu _n}[F_j^2] = E_{\nu _n} \bigg [ \bigg ( \sum _{i=0}^{\ell -2} \overline{\xi }_{j+i+1} \psi _{i} \sum _{j^\prime \notin {L_{j+i}^\ell }} \overline{\xi }_{j^\prime } (\mathscr {E}_nh)_{j+i,j^\prime } \bigg )^2 \bigg ] \lesssim \ell ^2 \sum _{j,j^\prime } (\mathscr {E}_nh)_{j,j^\prime }^2, \end{aligned}$$

which follows from a crude \(L^2\)-estimate and the fact that \(\psi _i\) is bounded by a constant. Hence the desired estimate follows from the Kipnis–Varadhan inequality.
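The telescoping step that produces \(\psi _i=(\ell -i-1)/\ell \) can also be checked numerically. The following is a minimal sketch, assuming the convention \(\overleftarrow{\xi }_j^\ell =\ell ^{-1}\sum _{i=0}^{\ell -1}\overline{\xi }_{j-i}\) (not restated in this section); the identity is insensitive to centering, so plain values are used below.

```python
import random

# Check: xi_j - (left average over ell sites)
#      = sum_{i=0}^{ell-2} (xi_{j-i} - xi_{j-i-1}) * psi_i,
# with psi_i = (ell - i - 1)/ell, for arbitrary real data.
random.seed(0)
ell = 7
j = 0
xi = {i: random.random() for i in range(j - ell, j + 1)}

left_avg = sum(xi[j - i] for i in range(ell)) / ell
telescoped = sum((xi[j - i] - xi[j - i - 1]) * (ell - i - 1) / ell
                 for i in range(ell - 1))   # psi_i = (ell - i - 1)/ell

assert abs((xi[j] - left_avg) - telescoped) < 1e-12
```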

Now observe that from the Schwarz inequality we have that

$$\begin{aligned} \begin{aligned}&\mathbb {E}_n \bigg [ \sup _{0\le t \le T} \bigg | \int _0^t \frac{1}{n^2} \sum _{j\in \mathbb {Z}}\sum _{j^\prime \notin {L_j^\ell }} \overleftarrow{\xi }^\ell _j(s) \overline{\xi }_{j+1}(s) \overline{\xi }_{j^\prime }(s) (\mathscr {E}_nh)_{j,j^\prime } ds\bigg |^2 \bigg ] \le \frac{T^2}{n^2\ell } \Vert \mathscr {E}_nh\Vert ^2_{2,n}. \end{aligned} \end{aligned}$$
(6.13)

The proof is a consequence of the fact that

$$\begin{aligned} \begin{aligned}&E_{\nu _n} \bigg [ \bigg ( \frac{1}{n^2} \sum _{j\in \mathbb {Z}}\sum _{j^\prime \notin {L_j^\ell }} \overleftarrow{\xi }^\ell _j \overline{\xi }_{j+1} \overline{\xi }_{j^\prime } (\mathscr {E}_nh)_{j,j^\prime } \bigg )^2 \bigg ] =\frac{1}{n^4}\sum _{j\in \mathbb {Z}}\sum _{j^\prime \notin {L_j^\ell }} E_{\nu _n}[(\overleftarrow{\xi }^\ell _j)^2 (\overline{\xi }_{j+1})^2 (\overline{\xi }_{j^\prime })^2] (\mathscr {E}_nh)_{j,j^\prime }^2, \end{aligned} \end{aligned}$$

which follows from \(E_{\nu _n}[\overline{\xi }_j \overline{\xi }_{j^\prime }]=0\) for \(j\ne j^\prime \), and the fact that \(E_{\nu _n} [(\overleftarrow{\xi }^\ell _j)^2 ] \le \ell ^{-1}\). Putting the last two estimates (6.12) and (6.13) together, we get that

$$\begin{aligned} \begin{aligned} \mathbb {E}_n \bigg [ \sup _{0\le t \le T} \bigg | \int _0^t \frac{1}{n^2} \sum _{j\in \mathbb {Z}}\sum _{j^\prime \ne j,j+1} \overline{\xi }_j(s) \overline{\xi }_{j+1}(s) \overline{\xi }_{j^\prime }(s) (\mathscr {E}_nh)_{j,j^\prime } ds\bigg |^2 \bigg ] \lesssim \bigg ( \frac{T^2\ell ^2}{n^2\theta (n)} + \frac{T}{n^2\ell } \bigg ) \Vert \mathscr {E}_nh \Vert ^2_{2,n}. \end{aligned} \end{aligned}$$

Finally, the remainder term \(K_2\) can be estimated by the Kipnis–Varadhan inequality, which yields

$$\begin{aligned} \mathbb {E}_n \bigg [ \sup _{0\le t\le T} \bigg | \int _0^t \frac{2}{n} \sum _{j\in \mathbb {Z}} \big ((\overline{\xi }_j)^2 \overline{\xi }_{j+1} - (\overline{\xi }_{j+1})^2 \overline{\xi }_j \big )(s) h_{j,j+1} \, ds \bigg |^2 \bigg ] \lesssim \frac{T}{\theta (n)n^2} \sum _{j\in \mathbb {Z}} h_{j,j+1}^2. \end{aligned}$$

This is verified in exactly the same way as the estimates of \(H_2\) and \(H_3\) in the proof of Lemma 6.2.

To finish the proof of Lemma 6.3, we optimize over \(\ell \) by choosing \(\ell =\sqrt{n}\), and we are done.

\(\square \)
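Let us record the balance behind the choice \(\ell =\sqrt{n}\); this is a side computation, assuming \(\theta (n)=n^a\) with \(a\ge 3/2\), as in Sect. 6.2. With \(\ell =\sqrt{n}\) the two \(\ell \)-dependent terms in Lemma 6.3 become

$$\begin{aligned} \frac{T^2\ell ^2}{n^2\theta (n)} \bigg |_{\ell =\sqrt{n}} = \frac{T^2}{n^{1+a}}, \qquad \frac{T}{n^2\ell } \bigg |_{\ell =\sqrt{n}} = \frac{T}{n^{5/2}}, \end{aligned}$$

and the two exponents coincide in the borderline case \(a=3/2\), while for \(a>3/2\) the first term is even smaller.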

6.2 Proof of Theorem 2.16

As a consequence of Lemma 6.1, we have that

$$\begin{aligned} \frac{d}{dt} \mathscr {Q}^n_t(h) &= \frac{1}{2}\mathbb {E}_n \big [ \mathcal {X}^n_0(g) L_n \mathcal {Q}^n_t (h) \big ] \\ &= \frac{\theta (n)}{2n^2} \mathscr {Q}^n_t(\Delta ^nh - 2n\alpha _n \mathscr {A}_nh) + \frac{2\theta (n)\alpha _n}{n^{3/2}} \mathscr {S}^n_t(\mathscr {D}_nh) + \frac{2\theta (n)}{n^2} \mathscr {Q}^n_t(\tilde{\mathscr {D}}_n h) + E^n_t(h), \end{aligned}$$
(6.14)

where \(E^n_t(h)\) satisfies the bound (6.6). Recall the martingale decomposition (6.4). To relate (6.4) with (6.14), we solve the following Poisson equation:

$$\begin{aligned} \frac{1}{2} \Delta ^n h \bigg (\frac{j}{n},\frac{j^\prime }{n} \bigg ) - n\alpha _n \mathscr {A}_n h \bigg (\frac{j}{n},\frac{j^\prime }{n} \bigg ) = 2n^{1/2} \alpha _n (\nabla ^{n,1} \varphi \otimes \delta ) \bigg (\frac{j}{n},\frac{j^\prime }{n} \bigg ). \end{aligned}$$
(6.15)

Hereafter we denote the solution of (6.15) by \(h_n\). Then, we have the following quantitative estimate for \(h_n\) and its derivatives.

Lemma 6.4

Let \(h_n\) be the solution of the discrete Poisson equation (6.15). Then,

$$\begin{aligned} \Vert h_n \Vert ^2_{2,n} \lesssim n^{-1/2},\quad \Vert \mathscr {E}_n h_n \Vert ^2_{2,n} \lesssim 1, \quad \Vert \mathscr {A}_n h_n \Vert ^2_{2,n} \lesssim n^{-1/2}. \end{aligned}$$

As a consequence, the right-hand side of the bound (6.6) vanishes provided

$$\begin{aligned} \lim _{n\rightarrow \infty } \theta (n)^2 \alpha _n^2 \beta _n^{2(k_*-2)} \bigg ( \frac{1}{n^2} \Vert \mathscr {A}_nh_n\Vert ^2_{2,n} + \frac{1}{n^2} \Vert \mathscr {E}_nh_n\Vert ^2_{2,n} + \frac{1}{\theta (n)} \Vert h_n\Vert ^2_{2,n} \bigg ) =0. \end{aligned}$$

According to Lemma 6.4, this is satisfied when \(\beta _n=O(n^{-b})\) with \(b\ge 1/(2k_*-4)\). Moreover, note that the error term in the decomposition (6.4) vanishes under the conditions of Theorem 2.16. Hereafter we thus write the error term simply as \(o_n(1)\), which is negligible as \(n\rightarrow \infty \). On the other hand, when \(k_*=3\), the critical exponent for \(\beta _n\) improves, by Lemma 6.3 and the following estimate.

Lemma 6.5

Let \(h_n\) be the solution of the discrete Poisson equation (6.15). Then

$$\begin{aligned} \frac{1}{n}\sum _{j\in \mathbb {Z}} h_n\bigg ( \frac{j}{n},\frac{j+1}{n}\bigg )^2 \lesssim 1. \end{aligned}$$

The proofs of Lemmas 6.4 and 6.5 are given in Sect. 6.3. Now we combine the two identities (6.4) and (6.14) to obtain

$$\begin{aligned} \begin{aligned} \frac{d}{dt} \mathscr {S}^n_t(f)&= -\frac{d}{dt} \mathscr {Q}^n_t(h_n) + \mathscr {S}^n_t \bigg ( \frac{\theta (n)}{2n^2}\Delta ^nh_n +\frac{2\theta (n)\alpha _n}{n^{3/2}}\mathscr {D}_nh_n \bigg ) + \frac{2\theta (n)}{n^2}\mathscr {Q}^n_t(\tilde{\mathscr {D}}_nh_n) + o_n(1). \end{aligned} \end{aligned}$$

By integrating the last display in time we obtain

$$\begin{aligned} \mathscr {S}^n_T(f)-\mathscr {S}^n_0(f) &= \int _0^T \mathscr {S}^n_t \bigg ( \frac{\theta (n)}{2n^2}\Delta ^nh_n +\frac{2\theta (n)\alpha _n}{n^{3/2}}\mathscr {D}_nh_n \bigg ) dt \\ &\quad + \mathscr {Q}^n_0(h_n) -\mathscr {Q}^n_T(h_n) + \frac{2\theta (n)}{n^2}\int _0^T \mathscr {Q}^n_t(\tilde{\mathscr {D}}_nh_n)dt + o_n(1). \end{aligned}$$
(6.16)

Since \(\Vert h_n\Vert ^2_{2,n}\lesssim n^{-1/2}\) by Lemma 6.4, the second and third terms on the right-hand side of (6.16) vanish as \(n\rightarrow +\infty \). Moreover, the contribution of the first term on the right-hand side of (6.16) is captured by the following result [5, Lemma 3.2].

Proposition 6.6

If \(a=\min \{ 3/2+3\kappa /2, 2 \}\) and \(\kappa \in (0,\infty )\), then

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n} \sum _{j\in \mathbb {Z}} \bigg | \big ( n^{a-2} \Delta ^n f - 2\gamma n^{a-\kappa -3/2} \mathscr {D}_n h_n\big ) \bigg (\frac{j}{n}\bigg ) - \mathbb {L}_{\gamma , \kappa } f\bigg (\frac{j}{n}\bigg ) \bigg |^2 = 0, \end{aligned}$$

where \(\mathbb {L}_{\gamma ,\kappa }\) is the operator defined by (2.16).

We apply Proposition 6.6 with \(\theta (n) = n^a\) and \(\alpha _n = \gamma n^{-\kappa }\) to see that the first term on the right-hand side of (6.16) gives rise to the Lévy operator \(\mathbb {L}_{\gamma ,\kappa }\). Finally, the fourth term on the right-hand side of (6.16) is controlled with the following result.

Lemma 6.7

Let \(h_n\) be the solution of the Poisson equation given in (6.15), \(\theta (n)=n^a\) and \(\alpha _n= \gamma n^{-\kappa }\) with \(a=\min \{ \frac{3}{2}(1+\kappa ), 2\}\) and \(\kappa \in (0,1)\). For any \(T>0\), we have that

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {E}_{n} \bigg [\bigg ( \int _0^T \mathcal {Q}^n_s(n^{a-2} \tilde{\mathscr {D}}_n h_n) ds \bigg )^2 \bigg ] = 0. \end{aligned}$$

Proof

The proof is similar to those of previous works in the case of the harmonic potential, and it is exactly the same as that of [5, Lemma 3.3]. The only difference is the definition of the quadratic field \(\mathcal {Q}^n\), in which the variables \(\eta _j\) of the harmonic case are replaced by \(\xi _j\). The proof given in [5], however, remains valid under this change, so we omit a detailed description here. \(\square \)

Finally, we need to show tightness of the fluctuation fields, but this is done in [4, Sect. 5.2], so we omit the details here. This completes the proof of Theorem 2.16.

6.3 Quantitative estimate for the solution of the Poisson equation

Here we give a proof of the estimates in Lemmas 6.4 and 6.5 with Fourier analysis. For \(f:\frac{1}{n}\mathbb {Z}^d \rightarrow \mathbb {R}\), let \(\widehat{f}_n:\mathbb {R}^d \rightarrow \mathbb {R}\) be the Fourier transform of f defined by

$$\begin{aligned} \widehat{f}_n(k) = \frac{1}{n^d} \sum _{j\in \mathbb {Z}^d} f\bigg (\frac{j}{n} \bigg ) e^{\frac{2\pi \textsf{i} k\cdot j}{n}}. \end{aligned}$$

Here \(\textsf{i}=\sqrt{-1}\) denotes the imaginary unit. Then, the inverse Fourier transform is given as

$$\begin{aligned} f \bigg (\frac{j}{n} \bigg ) = \int _{[-\frac{n}{2}, \frac{n}{2}]^d} \widehat{f}_n (k) e^{- \frac{2\pi \textsf{i} j\cdot k}{n}} dk. \end{aligned}$$

Now, we give the proof of Lemma 6.4.

Proof of Lemma 6.4

The first assertion is shown in [4, Appendix D], so we focus on the other two estimates. Applying the Fourier transform to the Poisson equation (6.15), we have that

$$\begin{aligned} \widehat{h}_n (k,k^\prime ) = \frac{1}{\sqrt{n}} \frac{\textsf{i} \Omega (\frac{k}{n}, \frac{k^\prime }{n}) }{-\alpha _n^{-1} \Lambda (\frac{k}{n},\frac{k^\prime }{n}) - \textsf{i} \Omega (\frac{k}{n},\frac{k^\prime }{n})} \widehat{\varphi }_n (k+k^\prime ), \end{aligned}$$
(6.17)

where

$$\begin{aligned} \Lambda \bigg ( \frac{k}{n}, \frac{k^\prime }{n} \bigg ) = 4 \bigg [ \sin ^2\bigg (\frac{\pi k}{n}\bigg ) + \sin ^2\bigg (\frac{\pi k^\prime }{n}\bigg ) \bigg ],\quad \Omega \bigg ( \frac{k}{n}, \frac{k^\prime }{n} \bigg ) = 2 \bigg [ \sin \bigg (\frac{2\pi k}{n}\bigg ) + \sin \bigg (\frac{2\pi k^\prime }{n}\bigg ) \bigg ]. \end{aligned}$$

The computation is exactly the same as [5, Appendix C], therefore we omit details. Note that

$$\begin{aligned} \widehat{(\mathscr {E}_nh)}_n (k,k^\prime ) = n \big ( e^{-\frac{2\pi \textsf{i}k}{n}}-1 \big ) \widehat{h}_n (k,k^\prime ),\quad \widehat{(\mathscr {A}_nh)}_n (k,k^\prime ) =\textsf{i}n \Omega \bigg ( \frac{k}{n}, \frac{k^\prime }{n} \bigg ) \widehat{h}_n (k,k^\prime ). \end{aligned}$$
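These multiplier identities can be sanity-checked in one dimension; the following is a minimal sketch, assuming the standard stencils \(f_{j+1}-2f_j+f_{j-1}\) (the building block of \(\Lambda \)) and \(f_{j-1}-f_{j+1}\) (the building block of \(\Omega \)), which are not restated in this section.

```python
import cmath, math

# Act with the 1D stencils on the Fourier mode e^{-2*pi*i*k*site/n}
# (the mode appearing in the inverse transform above) and compare with
# the claimed multipliers -4 sin^2(pi k/n) and 2 i sin(2 pi k/n).
n, k, m = 64, 5, 11            # lattice size, frequency, test site
I = 1j

def mode(site):
    return cmath.exp(-2 * math.pi * I * k * site / n)

# f_{m+1} - 2 f_m + f_{m-1}  <->  -4 sin^2(pi k / n)
second_diff = mode(m + 1) - 2 * mode(m) + mode(m - 1)
assert abs(second_diff + 4 * math.sin(math.pi * k / n) ** 2 * mode(m)) < 1e-12

# f_{m-1} - f_{m+1}  <->  2 i sin(2 pi k / n)
centered_diff = mode(m - 1) - mode(m + 1)
assert abs(centered_diff - 2 * I * math.sin(2 * math.pi * k / n) * mode(m)) < 1e-12
```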

Hence, according to the Parseval-Plancherel identity, we have that

$$\begin{aligned} \begin{aligned} \Vert \mathscr {E}_n h \Vert ^2_{2,n}&= \iint _{[-\frac{n}{2},\frac{n}{2}]^2} | \widehat{(\mathscr {E}_nh)}_n (k,k^\prime ) |^2 dk dk^\prime \\&= n \iint _{[-\frac{n}{2},\frac{n}{2}]^2} \big | e^{-\frac{2\pi \textsf{i}k}{n}} - 1 \big |^2 \frac{ \Omega (\frac{k}{n}, \frac{k^\prime }{n})^2 |\widehat{\varphi }_n (k+k^\prime )|^2 }{\alpha _n^{-2} \Lambda (\frac{k}{n},\frac{k^\prime }{n})^2 + \Omega (\frac{k}{n},\frac{k^\prime }{n})^2} dk dk^\prime \\&= 4n \iint _{[-\frac{n}{2},\frac{n}{2}]^2} \frac{ \sin ^2 (\frac{\pi (\xi -k^\prime )}{n}) \Omega (\frac{\xi -k^\prime }{n}, \frac{k^\prime }{n})^2 |\widehat{\varphi }_n (\xi )|^2 }{\alpha _n^{-2} \Lambda (\frac{\xi -k^\prime }{n},\frac{k^\prime }{n})^2 + \Omega (\frac{\xi -k^\prime }{n},\frac{k^\prime }{n})^2} d\xi dk^\prime . \end{aligned} \end{aligned}$$

Here, in the last equality, we used \(|e^{-\frac{2\pi \textsf{i}k}{n}}-1|^2 = 4\sin ^2 (\pi k/n)\) and then applied the change of variables \(\xi =k+k^\prime \). Similarly, we have that

$$\begin{aligned} \Vert \mathscr {A}_nh\Vert ^2_{2,n} = n \iint _{[-\frac{n}{2},\frac{n}{2}]^2} \frac{ \Omega (\frac{\xi -k^\prime }{n}, \frac{k^\prime }{n})^4 |\widehat{\varphi }_n (\xi )|^2 }{\alpha _n^{-2} \Lambda (\frac{\xi -k^\prime }{n},\frac{k^\prime }{n})^2 + \Omega (\frac{\xi -k^\prime }{n},\frac{k^\prime }{n})^2} d\xi dk^\prime . \end{aligned}$$

Moreover, note that

$$\begin{aligned} \Omega \bigg ( \frac{\xi -k^\prime }{n}, \frac{k^\prime }{n} \bigg )^2 \le 4 \big | 1-e^{\frac{2\pi \textsf{i}\xi }{n}} \big |^2 = 16 \sin ^2 \bigg ( \frac{\pi \xi }{n}\bigg ). \end{aligned}$$
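This bound comes from the sum-to-product identity \(\sin A+\sin B = 2\sin (\tfrac{A+B}{2})\cos (\tfrac{A-B}{2})\), which can be checked numerically; a minimal sketch:

```python
import math, random

# Check: 2[ sin(2 pi (xi - k')/n) + sin(2 pi k'/n) ]
#      = 4 sin(pi xi / n) cos(pi (xi - 2 k')/n),
# so that Omega((xi-k')/n, k'/n)^2 <= 16 sin^2(pi xi / n) uniformly in k'.
random.seed(0)
n = 100
for _ in range(1000):
    xi = random.uniform(-n / 2, n / 2)
    kp = random.uniform(-n / 2, n / 2)
    omega = 2 * (math.sin(2 * math.pi * (xi - kp) / n) + math.sin(2 * math.pi * kp / n))
    product = 4 * math.sin(math.pi * xi / n) * math.cos(math.pi * (xi - 2 * kp) / n)
    assert abs(omega - product) < 1e-9
    assert omega ** 2 <= 16 * math.sin(math.pi * xi / n) ** 2 + 1e-9
```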

Then, we have that

$$\begin{aligned} \Vert \mathscr {E}_nh \Vert ^2_{2,n} \lesssim n^3 \int _{-1/2}^{1/2} \sin ^2 (\pi y) | \widehat{\varphi }_n (ny) |^2 \tilde{V}_n (y) dy, \end{aligned}$$

and

$$\begin{aligned} \Vert \mathscr {A}_nh \Vert ^2_{2,n} \lesssim n^3 \int _{-1/2}^{1/2} \sin ^4 (\pi y) | \widehat{\varphi }_n (ny) |^2 \tilde{W}_n (y) dy, \end{aligned}$$

where

$$\begin{aligned} \tilde{V}_n(y)&= \int _{-1/2}^{1/2} \frac{\sin ^2(\pi (y-x))}{\alpha _n^{-2} \Lambda (y-x,x)^2 + \Omega (y-x,x)^2 } dx \nonumber \\&\le \int _{-1/2}^{1/2} \frac{\sin ^2(\pi (y-x))}{ \Lambda (y-x,x)^2 + \Omega (y-x,x)^2 } dx, \end{aligned}$$
(6.18)

and

$$\begin{aligned} \begin{aligned} \tilde{W}_n(y) = \int _{-1/2}^{1/2} \frac{dx}{\alpha _n^{-2} \Lambda (y-x,x)^2 + \Omega (y-x,x)^2 } \le \int _{-1/2}^{1/2} \frac{dx}{ \Lambda (y-x,x)^2 + \Omega (y-x,x)^2 }. \end{aligned} \end{aligned}$$

Similarly to the estimate of \(h_n\) given in [5, Appendix C], it suffices to bound \(\tilde{V}_n\) by a polynomial. Indeed, suppose that \(\tilde{V}_n(y)=O(|y|^{q})\) for some \(q\ge 0\). Moreover, note that for \(\varphi \in \mathcal {S}(\mathbb {R})\) we have

$$\begin{aligned} | \widehat{\varphi }_n(yn) | \lesssim \frac{1}{1 + (n|y|)^p}, \end{aligned}$$

for any \(p\ge 1\). (See [4, Lemma B.1].) Thus, with these estimates at hand we have

$$\begin{aligned} \begin{aligned} \Vert \mathscr {E}_nh \Vert ^2_{2,n}&\lesssim n^3 \int _{-1/2}^{1/2} \frac{|y|^{2+q}}{(1+(n|y|)^p)^2} dy \lesssim n^3 \int _{-1/2}^{1/2} \frac{|y|^{2+q}}{1+(n|y|)^{2p}} dy \\&\lesssim n^{-q} \int _\mathbb {R} \frac{|y|^{2+q}}{1 + |y|^{2p}} dy = O(n^{-q}), \end{aligned} \end{aligned}$$

by taking \(p\ge 1\) sufficiently large. In addition, Lemma 6.8 below enables us to take \(q=0\), which completes the proof of the bound for \(\mathscr {E}_nh\). On the other hand, we have \(\tilde{W}_n(y)=O(|y|^{-3/2})\) by [4, Lemma F.5], which yields

$$\begin{aligned} \begin{aligned} \Vert \mathscr {A}_nh \Vert ^2_{2,n} \lesssim n^3 \int _{-1/2}^{1/2} \frac{|y|^{4-3/2}}{1+(n|y|)^{2p}} dy \lesssim n^{-1/2} \int _\mathbb {R} \frac{|y|^{5/2}}{1 + |y|^{2p}} dy = O(n^{-1/2}). \end{aligned} \end{aligned}$$

Hence we complete the proof of Lemma 6.4. \(\square \)

Lemma 6.8

There exists some \(C>0\) such that \(\tilde{V}_n(y) \le C\) for any \(y\in \mathbb {R}\).

Proof

Recall the estimate for \(\tilde{V}_n\) given in (6.18). This is further bounded as follows.

$$\begin{aligned} \tilde{V}_n(y) \lesssim \int _{-1/2}^{1/2} \frac{\sin (2\pi (y-x))}{\sin (2\pi (y-x)) + 2 \cos (2\pi (y-x))} dx =: V_n(y). \end{aligned}$$

In what follows we give an estimate for \(V_n\). Set \(z=e^{2\pi \textsf{i}x}\) and \(w=e^{2\pi \textsf{i}y}\) for \(x,y\in [-1/2,1/2] \). Note that

$$\begin{aligned} \begin{aligned} \sin (2\pi (y-x)) = \frac{1}{2\textsf{i}}(wz^{-1}-w^{-1}z), \quad \cos (2\pi (y-x)) = \frac{1}{2}(wz^{-1}+w^{-1}z). \end{aligned} \end{aligned}$$

Then, we can write

$$\begin{aligned} V_n(y) = \frac{1}{2\pi \textsf{i}}\oint _{\mathscr {C}} g_w(z) dz, \end{aligned}$$

where \(\mathscr {C}\) denotes the unit circle which is positively oriented and the meromorphic function \(g_w\) is defined by

$$\begin{aligned} g_w(z) = \frac{1}{z} \frac{wz^{-1}-w^{-1}z}{(wz^{-1}-w^{-1}z) + 2\textsf{i}(wz^{-1}+w^{-1}z)} = \frac{1}{z} \frac{z^2-w^2}{(1-2\textsf{i})z^2- (1+2\textsf{i})w^2 }. \end{aligned}$$

Note that the meromorphic function \(g_w\) has three poles contributing to the integral: \(z=0\) and \(z=\pm a_0w\), where \(a_0^2=(1+2\textsf{i})/(1-2\textsf{i})\). Then, by the residue theorem, we have

$$\begin{aligned} V_n(y) = \textrm{Res}(g_w,0) + \textrm{Res}(g_w,a_0w) + \textrm{Res}(g_w,-a_0w), \end{aligned}$$

where \(\textrm{Res}(g_w,\cdot )\) denotes the residue of \(g_w\) at the corresponding pole. Noting

$$\begin{aligned} \textrm{Res}(g_w,0) = \frac{1}{1+2\textsf{i}}, \end{aligned}$$

and

$$\begin{aligned} \textrm{Res}(g_w,\pm a_0w) = \frac{(a_0^2-1)w^2}{(1-2\textsf{i}) (a_0w)(2a_0w)} = \frac{a_0^2-1}{2a_0^2(1-2\textsf{i})}, \end{aligned}$$

we see that the sum of the residues is independent of \(w\). This means \(V_n(y)=O(1)\), hence \(\tilde{V}_n(y)=O(1)\), and we complete the proof. \(\square \)
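The residue algebra can be checked numerically. Note that \(|a_0|=1\), so the poles \(\pm a_0w\) lie on the unit circle and the contour computation needs a limiting argument which we do not reproduce; the sketch below only verifies, under the stated formula for \(a_0\), that the residue sum is independent of \(w\) and equals \((1+2\textsf{i})/5\) (this value is a byproduct of the check, not stated in the text).

```python
import cmath, math, random

# Sanity check (not part of the proof): with a0^2 = (1+2i)/(1-2i), the sum
# of the residues of g_w at z = 0 and z = +-a0*w is independent of w.
I = 1j
a0 = cmath.sqrt((1 + 2 * I) / (1 - 2 * I))

def g(z, w):
    # g_w(z) = (1/z) * (z^2 - w^2) / ((1-2i) z^2 - (1+2i) w^2)
    return (z ** 2 - w ** 2) / (z * ((1 - 2 * I) * z ** 2 - (1 + 2 * I) * w ** 2))

def residue(pole, w, eps=1e-7):
    # crude numerical residue of a simple pole via (z - pole) * g(z)
    z = pole + eps * (1 + I)
    return (z - pole) * g(z, w)

random.seed(0)
for _ in range(5):
    w = cmath.exp(2 * math.pi * I * random.uniform(-0.5, 0.5))
    total = residue(0, w) + residue(a0 * w, w) + residue(-a0 * w, w)
    assert abs(total - (1 + 2 * I) / 5) < 1e-4
```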

Finally, we give a proof of Lemma 6.5.

Proof of Lemma 6.5

In the sequel, we simply write \(h_n =h\) and \(h_{j,j^\prime }=h(j/n,j^\prime /n)\), with a slight abuse of notation. Set \(u_{j,j^\prime }=h_{j+j^\prime ,j-j^\prime +1}\). Then, the Fourier transform of \(u\) is computed as

$$\begin{aligned} \begin{aligned} \widehat{u}(\xi ,\xi ^\prime )&= \frac{1}{n^2}\sum _{j,j^\prime \in \mathbb {Z}} h_{j+j^\prime ,j-j^\prime +1} e^{\frac{2\pi \textsf{i}}{n}(j\xi + j^\prime \xi ^\prime )} = \frac{1}{n^2}\sum _{k,k^\prime \in \mathbb {Z}} h_{k,k^\prime +1} e^{\frac{\pi \textsf{i}}{n}( k(\xi +\xi ^\prime ) + k^\prime (\xi -\xi ^\prime ))} \\&= e^{-\frac{\pi \textsf{i}}{n}(\xi -\xi ^\prime ) } \widehat{h}\bigg ( \frac{\xi +\xi ^\prime }{2},\frac{\xi -\xi ^\prime }{2}\bigg ). \end{aligned} \end{aligned}$$

Noting that \(\int _{[-n/2,n/2]} e^{2\pi \textsf{i}j\xi /n}d\xi = n\delta _{j,0}\), we integrate the first identity of the last display over the \(\xi ^\prime \)-variable. Then we have that

$$\begin{aligned} \int _{[-\frac{n}{2},\frac{n}{2}]} \widehat{u}(\xi ,\xi ^\prime )d\xi ^\prime = \frac{1}{n} \sum _{j,j^\prime \in \mathbb {Z}} h_{j+j^\prime ,j-j^\prime +1} e^{\frac{2\pi \textsf{i}}{n}j\xi } \delta _{j^\prime ,0} = \frac{1}{n} \sum _{j\in \mathbb {Z}} {h_{j,j+1}} e^{\frac{2\pi \textsf{i}}{n}j\xi } = \widehat{h\textbf{1}_{j^\prime =j+1}}(\xi ). \end{aligned}$$

Hence by the Parseval-Plancherel identity, we have that

$$\begin{aligned} \frac{1}{n} \sum _{j\in \mathbb {Z}} h_{j,j+1}^2 = \int _{[-\frac{n}{2},\frac{n}{2}]} \big |\widehat{h\textbf{1}_{j^\prime =j+1}}(\xi )\big |^2 d\xi = \int _{[-\frac{n}{2},\frac{n}{2}]} \bigg | \int _{[-\frac{n}{2},\frac{n}{2}]} \widehat{u}(\xi ,\xi ^\prime )d\xi ^\prime \bigg |^2 d\xi . \end{aligned}$$

Moreover, recall that the Fourier transform of \(h_n\) is given in (6.17), which yields

$$\begin{aligned} \widehat{u}(\xi ,\xi ^\prime ) = e^{-\frac{\pi \textsf{i}}{n}(\xi -\xi ^\prime ) } \widehat{h}\bigg (\frac{\xi +\xi ^\prime }{2},\frac{\xi -\xi ^\prime }{2} \bigg ) = \frac{1}{\sqrt{n}} \frac{\textsf{i} e^{-\frac{\pi \textsf{i}}{n}(\xi -\xi ^\prime ) } \Omega (\frac{\xi +\xi ^\prime }{2n}, \frac{\xi -\xi ^\prime }{2n}) }{-\alpha _n^{-1} \Lambda (\frac{\xi +\xi ^\prime }{2n},\frac{\xi -\xi ^\prime }{2n}) - \textsf{i} \Omega (\frac{\xi +\xi ^\prime }{2n},\frac{\xi -\xi ^\prime }{2n})} \widehat{\varphi }_n (\xi ). \end{aligned}$$

Recalling the definitions of \(\Omega \) and \(\Lambda \), we have the bound \(|\widehat{u}(\xi ,\xi ^\prime )|^2 \le n^{-1} | \widehat{\varphi }_n(\xi )|^2\). Therefore,

$$\begin{aligned} \frac{1}{n} \sum _{j\in \mathbb {Z}} h_{j,j+1}^2 \le \int _{[-\frac{n}{2},\frac{n}{2}]} |\widehat{\varphi }_n(\xi )|^2 d\xi \lesssim \int _{\mathbb {R}} \frac{d\xi }{1+|\xi |^2}, \end{aligned}$$

which is bounded by a universal constant; here we used the bound \(| \widehat{\varphi }_n(\xi )|\lesssim (1+|\xi |^p)^{-1}\), valid for any \(\varphi \in \mathcal {S}(\mathbb {R})\) and \(p \ge 1\). This completes the proof. \(\square \)