Journal of Statistical Physics

, Volume 148, Issue 1, pp 1–37

Long Time, Large Scale Limit of the Wigner Transform for a System of Linear Oscillators in One Dimension

Authors

  • Tomasz Komorowski
    • Institute of Mathematics
  • Łukasz Stȩpień
    • Institute of Mathematics
Open Access Article

DOI: 10.1007/s10955-012-0528-4

Cite this article as:
Komorowski, T. & Stȩpień, Ł. J Stat Phys (2012) 148: 1. doi:10.1007/s10955-012-0528-4

Abstract

We consider the long time, large scale behavior of the Wigner transform Wϵ(t,x,k) of the wave function corresponding to a discrete wave equation on a 1-d integer lattice, with a weak multiplicative noise. This model was introduced by Basile et al. in Phys. Rev. Lett. 96 (2006) to describe a system of interacting linear oscillators with a weak noise that conserves locally the kinetic energy and the momentum. The kinetic limit for the Wigner transform was established by Basile et al. in Arch. Rat. Mech. Anal. 195(1):171–203 (2009). In the present paper we prove that in the unpinned case there exists γ0>0 such that for any γ∈(0,γ0] the weak limit of Wϵ(t/ϵ^{3γ/2},x/ϵ^{γ},k), as ϵ≪1, satisfies a one dimensional fractional heat equation \(\partial_{t} W(t,x)=-\hat{c}(-\partial_{x}^{2})^{3/4}W(t,x)\) with \(\hat{c}>0\). In the pinned case an analogous result holds for Wϵ(t/ϵ^{2γ},x/ϵ^{γ},k), but the limit then satisfies the usual heat equation.

Keywords

Wigner transform · Interacting harmonic oscillators · Fractional diffusion

1 Introduction

In the present paper we are concerned with the asymptotic behavior of the Wigner transform of the wave function corresponding to a discrete wave equation on a one dimensional integer lattice with a weak multiplicative noise, see (2.1) below. This kind of equation arises naturally when considering a stochastically perturbed chain of oscillators with harmonic interactions, see [1] and also [15]. It has been argued in [1] that, due to the presence of the noise conserving both the energy and the momentum (in fact the latter property is crucial), in low dimensions (d=1 or 2) the conductivity of this explicitly solvable model diverges: it grows as N^{1/2} in dimension d=1 and as log N when d=2, where N is the length of the chain. This complies with numerical results concerning some anharmonic chains with no noise, see e.g. [14] and [13]. We refer the interested reader to the review papers [5, 13] and the references therein for more background information on the subject of heat transport in anharmonic crystals.

It has been shown in [3] that in the weakly coupled case, i.e. when the coupling parameter ϵ is small, the asymptotics of the Wigner function Wϵ(t,x,k) (defined below by (2.8), with γ=0), which describes the resolution of the energy in the spatial and momentum coordinates (x,k) at time tϵ^{−1}, is given by a linear Boltzmann equation, see (2.9) below. Furthermore, since in dimension d=1 the scattering rate of a phonon is of order k² for small wavenumbers k, in the unpinned case (when the dispersion relation satisfies \(\omega'(k)\approx \operatorname{sign} k\) for |k|≪1) the long time, large space asymptotics of the solution of the transport equation can be described by a fractional (in space) heat equation
$$ \partial_tW(t,x)=-\hat{c}\bigl(-\partial^2_x \bigr)^{3/4}W(t,x) $$
for some \(\hat{c}>0\). The initial condition W(0,x) is the limit of the average of the initial Wigner transform over the wavenumbers, see [10] and also [2, 16]. Note that the above equation is invariant under the time-space scaling t↦t/ϵ^{3γ/2}, x↦x/ϵ^{γ} for an arbitrary γ>0. This suggests that the fractional heat equation is the limit of the Wigner transform in the above time-space scaling. In our first main result, see part (1) of Theorem 2.1 below, we prove that this is indeed the case when γ∈(0,γ0] for some γ0>0.
On the other hand, in the pinned case, i.e. when the dispersion relation satisfies ω′(k)≈0 for |k|≪1, one can show that the long time, large scale asymptotics of the solution of the Boltzmann equation is described by the regular heat equation
$$ \partial_tW(t,x)=\hat{c}\partial^2_xW(t,x). $$
(1.1)
In part (2) of Theorem 2.1 below we assert that if the Wigner transform is considered under the scaling (t,x)∼(t′/ϵ^{2γ},x′/ϵ^{γ}), for γ∈(0,γ0] and some γ0>0, then it converges to W(t,x), as ϵ≪1. The coefficient \(\hat{c}\) in the heat equation (1.1) is related to the thermal conductivity coefficient κ. Recall that the latter is calculated explicitly in the pinned case, with the help of the Green-Kubo formula, in [1, see (16)] for the model without the weak noise assumption. It corresponds to the system of harmonic oscillators described by (2.5) below with ϵ=1. The Green-Kubo formula then reads
$$ \kappa=3+\frac{1}{2e^2}\int_0^{+\infty}C(t)\,dt, $$
(1.2)
where C(t) is the current-current correlation function. In fact, after a detailed computation, see formulas (3.13) and (6.12)–(6.14) below, one obtains that the integral of the current correlation function in (1.2) equals \([e^{2}/(48\pi^{2})]\hat{c}\), cf. [1, see (12)]. Therefore, we can write
$$ \kappa=3+\frac{\hat{c}}{96\pi^2}. $$
(1.3)
Finally, we also mention the results concerning diffusive limits for the Wigner transform of a solution of the wave equation on a lattice with a random local velocity in the weak coupling regime (see [15]), for the geometric optics regime for the wave equation in continuum (see [12]), and for the Schrödinger equation in the radiative transport regime (see [9]).

The proof of Theorem 2.1 consists of two principal ingredients: the estimates of the convergence rate for the Wigner transform towards the solution of the kinetic equation, see Theorem 3.2 below in the unpinned case (resp. Theorem 3.5 for the pinned case), and the respective rate of convergence estimates for the solutions of the scaled kinetic equation, see Theorem 3.3 (resp. Theorem 3.6). To prove the latter we show two probabilistic results, Theorems 5.5 and 5.8, that are of interest in their own right. They provide estimates of the rate of convergence of the characteristic functions corresponding to a scaled additive functional of a stationary Markov chain towards the characteristic function of an appropriate stable limit, and the respective result in the continuous time case.

2 Description of the Model and Preliminaries

2.1 Discrete Wave Equation with a Noise

We consider a discrete wave equation with the multiplicative noise on a one dimensional integer lattice, see [1],
$$ \begin{aligned}[c] d\mathfrak{ q}_x(t)&=\mathfrak{ p}_x(t)\,dt, \\ d\mathfrak{ p}_x(t)&=-\bigl(\alpha\star\mathfrak{ q}(t)\bigr)_x\,dt+d\xi^{(\epsilon)}_x(t),\quad x\in\mathbb{ Z}. \end{aligned} $$
(2.1)
Here \(( \mathfrak{ p},\mathfrak{ q} )=\{( \mathfrak{ p}_{x},\mathfrak{ q}_{x}), x\in \mathbb{Z}\} \), where the component labelled by x corresponds to the one dimensional momentum \(\mathfrak{ p}_{x}\) and position \(\mathfrak{ q}_{x}\). The Hamiltonian corresponds to an infinite chain of harmonic oscillators and is given by
$${\mathcal{H}}( \mathfrak{ p}, \mathfrak{ q}):=\frac{1}{2}\sum_{y\in\mathbb{ Z}} \mathfrak{ p}_y^2+\frac{1}{2}\sum_{y,y'\in\mathbb{ Z}} \alpha\bigl(y-y'\bigr)\mathfrak{ q}_y\mathfrak{ q}_{y'}. $$
The interaction potential {αx,x∈ℤ} will be further specified later on. The noises \(\{\dot{\xi}_{x}^{(\epsilon)}(t),x\in\mathbb{ Z}\}\) are defined by the following stochastic differentials
$$ d\xi^{(\epsilon)}_x(t)=\sqrt{\epsilon}\sum_{z=-1,0,1}\bigl(Y_{x+z}\mathfrak{ p}_x\bigr)(t)\circ dw_{x+z}(t),\quad x\in\mathbb{ Z}, $$
(2.2)
understood in the Stratonovich sense. Here ϵ>0 and
$$Y_x:=(\mathfrak{ p}_x-\mathfrak{ p}_{x+1}) \partial_{\mathfrak{ p}_{x-1}}+(\mathfrak{ p}_{x+1}-\mathfrak{ p}_{x-1}) \partial_{\mathfrak{ p}_{x}}+(\mathfrak{ p}_{x-1}-\mathfrak{ p}_{x}) \partial_{\mathfrak{ p}_{x+1}} $$
and {wx(t),t≥0}, x∈ℤ are i.i.d. standard, one dimensional Brownian motions over a certain probability space \((\varOmega ,{\mathcal{F}},{\mathbb{P}})\). Note that the vector field Yx is tangent to the surfaces
$$ \mathfrak{ p}_{x-1}^2+\mathfrak{ p}_x^2+\mathfrak{ p}_{x+1}^2\equiv \mbox{const} $$
(2.3)
and
$$ \mathfrak{ p}_{x-1}+\mathfrak{ p}_x+\mathfrak{ p}_{x+1}\equiv \mbox{const} $$
(2.4)
therefore the system (2.1) conserves the total energy and momentum.
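The two conservation properties can be checked directly from the formula for Y_x. A purely illustrative numerical sketch (the momentum configuration is randomly chosen; none of the data come from the paper):

```python
import numpy as np

# Sketch: check numerically that the vector field Y_x is tangent to the
# level sets of the local kinetic energy p_{x-1}^2 + p_x^2 + p_{x+1}^2
# and of the local momentum p_{x-1} + p_x + p_{x+1}.

def Y_field(p, x):
    """Components of Y_x, read off from its definition as a derivation."""
    v = np.zeros_like(p)
    v[x - 1] = p[x] - p[x + 1]
    v[x] = p[x + 1] - p[x - 1]
    v[x + 1] = p[x - 1] - p[x]
    return v

rng = np.random.default_rng(0)
p = rng.normal(size=7)
x = 3
v = Y_field(p, x)

# directional derivatives along Y_x of the two conserved local quantities
dE = (2 * p) @ v          # gradient of sum_y p_y^2 dotted with Y_x
dP = np.ones_like(p) @ v  # gradient of sum_y p_y dotted with Y_x
```

Both directional derivatives vanish (up to floating-point error), which is exactly the tangency to the surfaces (2.3) and (2.4).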
System (2.1) can be rewritten formally in the Itô form:
$$ \begin{aligned}[c] d\mathfrak{ q}_x(t)&=\mathfrak{ p}_x(t)\,dt, \\ d\mathfrak{ p}_x(t)&= \biggl[-\bigl(\alpha\star\mathfrak{ q}(t)\bigr)_x-\frac{\epsilon}{2}\bigl(\beta\star\mathfrak{ p}(t)\bigr)_x \biggr]\,dt+\sqrt{\epsilon}\sum_{z=-1,0,1}\bigl(Y_{x+z}\mathfrak{ p}_x\bigr)(t)\,dw_{x+z}(t). \end{aligned} $$
(2.5)
Here \(\beta_{y}:=\Delta\beta^{(0)}_{y}\), with
$$\beta^{(0)}_y=\left\{ \begin{array}{r@{\quad}l} -4,&y=0,\\ -1,&y=\pm1,\\ 0, &\mbox{if otherwise}. \end{array} \right . $$
The lattice Laplacian of a given g:ℤ→ℂ is defined as Δg_y := g_{y+1} + g_{y−1} − 2g_y.
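As a consistency check, the kernel β = Δβ^{(0)} can be computed explicitly and its Fourier transform compared with the closed form β̂(k) = 8 sin²(πk)[1 + 2 cos²(πk)] quoted in (2.20) below; a short numerical sketch:

```python
import numpy as np

# Sketch: compute beta = Delta beta^{(0)} for beta^{(0)}_0 = -4,
# beta^{(0)}_{±1} = -1 and compare its Fourier transform with the closed
# form beta_hat(k) = 8 sin^2(pi k) [1 + 2 cos^2(pi k)].

def lattice_laplacian(g):
    # Delta g_y := g_{y+1} + g_{y-1} - 2 g_y (on a circle wide enough that
    # the wrap-around never touches the support of beta^{(0)})
    return np.roll(g, -1) + np.roll(g, 1) - 2.0 * g

N = 11
beta0 = np.zeros(N)
beta0[0] = -4.0
beta0[1] = beta0[-1] = -1.0
beta = lattice_laplacian(beta0)     # beta_0 = 6, beta_{±1} = -2, beta_{±2} = -1

y = np.arange(N)
y = np.where(y > N // 2, y - N, y)  # symmetric lattice offsets

ks = np.linspace(-0.5, 0.5, 101)
numeric = np.array([np.sum(beta * np.exp(-2j * np.pi * y * k)) for k in ks])
closed = 8.0 * np.sin(np.pi * ks) ** 2 * (1.0 + 2.0 * np.cos(np.pi * ks) ** 2)
err = float(np.max(np.abs(numeric - closed)))
```

The two expressions agree to machine precision on the whole torus.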

2.2 Formulation of the Main Results

To describe the distribution of the energy of the chain over the position and momentum coordinates it is convenient to consider the Wigner transform of the wave function corresponding to the chain. Adjusting the time variable to the macroscopic scale it is defined as
$$ \psi^{(\epsilon)}(t):=\tilde{\omega} * \mathfrak{ q} \biggl( \frac{t}{\epsilon} \biggr)+i\mathfrak{ p} \biggl(\frac{t}{\epsilon} \biggr). $$
(2.6)
Here \(\tilde{\omega}\) is the inverse Fourier transform, see (2.17), of the dispersion relation function given by \(\omega(k)=\sqrt{\hat{\alpha}(k)}\), with \(\hat{\alpha}(k)\) the direct Fourier transform of the potential, defined on \(\mathbb{ T}\)—the one dimensional torus, see (2.16). Suppose that the initial condition in (2.1) is random, independent of the realizations of the noise and such that for some γ>0
$$ \limsup_{\epsilon\to0+}\sum_{y\in\mathbb{ Z}} \epsilon^{1+\gamma}\bigl\langle\bigl|\psi^{(\epsilon )}_y(0)\bigr|^2 \bigr\rangle_{\epsilon}<+\infty. $$
(2.7)
Here 〈⋅〉ϵ denotes the average with respect to the probability measure μϵ corresponding to the randomness in the initial data. In fact, since the total energy of the system \(\sum_{y\in\mathbb{ Z}}|\psi^{(\epsilon)}_{y}(t)|^{2}\) is conserved in time, see Sect. 2 of [3], an analogue of condition (2.7) holds for any t>0.
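The identification of \(\sum_{y}|\psi^{(\epsilon)}_{y}(t)|^{2}\) with twice the total energy follows from Parseval's identity. The sketch below checks this on a finite periodic chain; the pinned nearest-neighbour potential α_0=3, α_{±1}=−1 used here is a hypothetical illustration, not the general potential of the paper:

```python
import numpy as np

# Sketch: on a finite periodic chain, Parseval's identity gives
# sum_y |psi_y|^2 = 2 H(p, q) for psi = tilde(omega) * q + i p.
# Illustrative pinned potential: alpha_hat(k) = 3 - 2 cos(2 pi k) > 0.

N = 31
rng = np.random.default_rng(1)
q = rng.normal(size=N)
p = rng.normal(size=N)

j = np.arange(N)
alpha_hat = 3.0 - 2.0 * np.cos(2.0 * np.pi * j / N)  # Fourier transform of alpha
omega = np.sqrt(alpha_hat)                           # dispersion relation

q_hat = np.fft.fft(q)  # same sign convention as g_hat(k) = sum_y g_y e^{-2 pi i y k}
p_hat = np.fft.fft(p)

# Hamiltonian: kinetic part plus the quadratic potential, the latter in Fourier
H = 0.5 * np.sum(p ** 2) + 0.5 / N * np.sum(alpha_hat * np.abs(q_hat) ** 2)

# wave function in Fourier variables and its squared l^2 norm (Parseval)
psi_hat = omega * q_hat + 1j * p_hat
psi_sq = np.sum(np.abs(psi_hat) ** 2) / N
```

The cross terms between the position and momentum parts cancel after summation over the dual lattice, leaving exactly 2H.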
The (averaged) Wigner transform of the wave function, see [3], is a distribution defined as follows
$$ \bigl\langle W_{\epsilon,\gamma}(t),\tilde{J}\bigr\rangle:=\frac{\epsilon^{1+\gamma}}{2}\int_{\mathbb{ R}\times\mathbb{ T}}{\mathbb{E}}_{\epsilon} \biggl[\bigl(\hat{\psi}^{(\epsilon)}\bigr)^* \biggl(t,k-\frac{\epsilon^{1+\gamma}p}{2} \biggr)\hat{\psi}^{(\epsilon)} \biggl(t,k+\frac{\epsilon^{1+\gamma}p}{2} \biggr) \biggr]J^*(p,k)\,dp\,dk, $$
(2.8)
for any \(\tilde{J}\) belonging to \({\mathcal{S}}\)—the Schwartz class of functions on \(\mathbb{ R}\times\mathbb{ T}\), see Sect. 2.4. Here \({\mathbb{E}}_{\epsilon}\) is the average with respect to the product measure μϵ⊗ℙ. It has been shown in [3], see Theorem 5, that, under appropriate assumptions on the potential α(⋅), see conditions (a1) and (a2) below, the respective Wigner transforms Wϵ,0(t) converge in the sense of distributions, as ϵ→0+, to the solution of the linear kinetic equation
$$ \partial_t U(t,x,k)+\frac{\omega'(k)}{2\pi} \partial_x U(t,x,k)={\mathcal{L}} U(t,x,k), $$
(2.9)
where \({\mathcal{L}}\) is the scattering operator defined in (2.35). Our principal result concerning this model deals with the limit of the Wigner transform in the longer time scales, i.e. when γ>0. It is a direct consequence of Theorems 3.1 and 3.4 formulated below that contain also the information on the convergence rates. Before its statement let us recall the notion of a solution of a fractional heat equation. Assume that W0 is a function from the Schwartz class on ℝ. The solution of the Cauchy problem for the fractional heat equation
$$ \partial_tW(t,x)=-\frac{\hat{c}}{(2\pi)^b}\bigl(- \partial_x^2\bigr)^{b/2}W(t,x) $$
(2.10)
with \(b,\hat{c}>0\) and W(0,x)=W0(x) is given by
$$W(t,x):=\int_{\mathbb{ R}}e^{i2\pi xp-\hat{c}|p|^{b}t}\hat{W}_0 (p )\,dp, $$
where
$$\hat{W}_0(p):=\int_{\mathbb{ R}}e^{-i2\pi xp}W_0 (x )\,dx $$
is the Fourier transform of W0(x).
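As a sanity check of the solution formula, for b=2, ĉ=1 and the Gaussian initial datum W_0(x)=e^{−πx²} (self-dual under the Fourier convention above) the integral can be evaluated in closed form, W(t,x)=√(π/(π+t)) e^{−π²x²/(π+t)}. The sketch below (with illustrative parameter values) compares that with a direct numerical evaluation of the Fourier integral:

```python
import numpy as np

# Sketch: compare the Fourier representation of the solution of (2.10)
# with the closed-form Gaussian solution, for b = 2, c_hat = 1 and
# W_0(x) = exp(-pi x^2), whose Fourier transform is W_0_hat(p) = exp(-pi p^2).

c_hat, b = 1.0, 2.0
p = np.linspace(-10.0, 10.0, 20001)
dp = p[1] - p[0]

def W(t, x):
    # W(t,x) = int e^{2 pi i x p - c_hat |p|^b t} W_0_hat(p) dp, by quadrature
    integrand = np.exp(2j * np.pi * x * p - c_hat * np.abs(p) ** b * t - np.pi * p ** 2)
    return (np.sum(integrand) * dp).real

t = 0.7
xs = np.array([-1.0, 0.0, 0.5, 2.0])
numeric = np.array([W(t, x) for x in xs])
exact = np.sqrt(np.pi / (np.pi + t)) * np.exp(-np.pi ** 2 * xs ** 2 / (np.pi + t))
err = float(np.max(np.abs(numeric - exact)))
```

For b=3/2 no elementary closed form is available, but the same quadrature evaluates the fractional heat semigroup.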

Theorem 2.1

Suppose that the potential {αy,y∈ℤ} satisfies assumptions (a1)–(a2) formulated in Sect. 2.3. Then, the following are true.
  1. (1)
    Assume that \(\hat{\alpha}(0)=0\) (no pinning case), γ∈(0,2a/3) for some a∈(0,1] and
    $$ {\mathcal{K}}_{a,\gamma}:=\limsup_{\epsilon\to0+} \epsilon^{1+\gamma }\int_{\mathbb{ T} }\bigl\langle\bigl|\hat{\psi}^{(\epsilon)}_0(k)\bigr|^2\bigr\rangle_{\epsilon} \frac {dk}{|k|^{2a}}<+\infty, $$
    (2.11)
    where \(\hat{\psi}^{(\epsilon)}_{0}(k)\) is the Fourier transform of the initial condition \(\psi^{(\epsilon)}_{x}(0)\). Suppose also that for some W0 with the norm (2.24) finite we have
    $$ \lim_{\epsilon\to0+} \bigl\langle W_{\epsilon,\gamma}(0),\tilde{J} \bigr\rangle =\int_{\mathbb{ R}\times\mathbb{ T}} W_0(x,k)\tilde{J}^*(x,k)\,dx\,dk, $$
    (2.12)
    for all \(\tilde{J}\in{\mathcal{S}}\). Then, for any fixed t>0 we have
    $$ \lim_{\epsilon\to0+} \biggl\langle W_{\epsilon,\gamma} \biggl( \frac{t}{\epsilon^{3\gamma /2}} \biggr),\tilde{J} \biggr\rangle =\int_{\mathbb{ R}\times\mathbb{ T}} W(t,x)\tilde{J}^*(x,k)\,dx\,dk, $$
    (2.13)
    where W(t,x) satisfies (2.10) with b=3/2 and the initial condition given by
    $$ W(0,x):=\int_{\mathbb{ T}}W_0(x,k)\,dk. $$
    (2.14)
    The coefficient \(\hat{c}\) is given by (3.5).
     
  2. (2)
    Suppose that \(\hat{\alpha}(0)>0\) (pinned case), γ∈(0,1/2) and conditions (2.11) and (2.12) for W0 with the norm (2.24) finite are satisfied. Then, for any \(\tilde{J}\in{\mathcal{S}}\) and fixed t>0 we have
    $$ \lim_{\epsilon\to0+} \biggl\langle W_{\epsilon,\gamma} \biggl( \frac {t}{\epsilon^{2\gamma }} \biggr),\tilde{J} \biggr\rangle=\int_{\mathbb{ R}\times\mathbb{ T}} W(t,x)\tilde{J}^*(x,k)\,dx\,dk, $$
    (2.15)
    where W(t,x) is the solution of the ordinary heat equation, i.e. (2.10) with b=2, with the initial condition given by (2.14) and the coefficient \(\hat{c}\) as in (3.13).
     

2.3 Fourier Transform of the Wave Function

The one dimensional torus \(\mathbb{ T}\) is understood here as the interval [−1/2,1/2] with its endpoints identified. Let er(k):=exp{−i2πrk}, r∈ℤ. These functions form a standard orthonormal basis in \(L^{2}_{\mathbb{C}}(\mathbb{ T})\)—the space of complex valued, square integrable functions. The Fourier transform of a given square integrable sequence of complex numbers {gy,y∈ℤ} is defined as
$$ \hat{g}(k)=\sum_{y\in\mathbb{ Z}}g_ye_y(k), \quad k\in\mathbb{ T} $$
(2.16)
and the inverse transform is given by
$$ \tilde{f}_y=\int_{\mathbb{ T}} e_y^*(k)f(k)\,dk, \quad y\in\mathbb{ Z} $$
(2.17)
for any f belonging to \(L^{2}_{\mathbb{C}}(\mathbb{ T})\).
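The pair (2.16)–(2.17) can be verified numerically for a finitely supported sequence: applying the direct transform and then the inverse (by quadrature on the torus) recovers the sequence. A short illustrative sketch with random data:

```python
import numpy as np

# Sketch: verify the transform pair (2.16)-(2.17) for a finitely supported
# sequence; the inverse transform is computed by midpoint quadrature on
# the torus T = [-1/2, 1/2].

rng = np.random.default_rng(2)
support = np.arange(-4, 5)          # y in {-4, ..., 4}
g = rng.normal(size=support.size)

M = 4096
k = (np.arange(M) + 0.5) / M - 0.5  # midpoint grid on [-1/2, 1/2]

# direct transform (2.16): g_hat(k) = sum_y g_y e^{-2 pi i y k}
g_hat = np.sum(g[:, None] * np.exp(-2j * np.pi * support[:, None] * k[None, :]), axis=0)

# inverse transform (2.17): g_y = int_T e^{2 pi i y k} g_hat(k) dk
recovered = np.array([np.mean(g_hat * np.exp(2j * np.pi * y * k)) for y in support]).real
err = float(np.max(np.abs(recovered - g)))
```

The midpoint rule is exact here because ĝ is a trigonometric polynomial of degree smaller than the number of quadrature nodes.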
A straightforward calculation shows that \(\hat{\psi}^{(\epsilon )}(t,k)\)—the Fourier transform of the (complex valued) wave function given by (2.6)—satisfies the following Itô stochastic differential equation, cf. formula (7.0.7) of [4],
$$ \begin{aligned}[c] & d\hat{\psi}^{(\epsilon)}(t)=A_{\epsilon}\bigl[\hat{\psi}^{(\epsilon)}(t)\bigr]\,dt +\sum_{r\in\mathbb{ Z} }Q\bigl[\hat{\psi}^{(\epsilon)}(t)\bigr](e_r)\,dw_r(t), \\ & \hat{\psi}^{(\epsilon)}(0)= \hat{\psi}_0, \end{aligned} $$
(2.18)
where \(\hat{\psi}_{0}\) is the Fourier transform of ψ(ϵ)(0). Here, a (nonlinear) mapping \(A:L^{2}_{\mathbb{C}}(\mathbb{ T})\to L^{2}_{\mathbb{C}}(\mathbb{ T} )\) is given by
$$ A_{\epsilon}[f](k):=-\frac{i}{\epsilon} \omega(k)f(k)- \frac{\hat{\beta}(k)}{4}\sum_{\sigma=\pm}\sigma f_{\sigma}(k),\quad\forall f\in L^2_{\mathbb{C}}(\mathbb{ T}), $$
(2.19)
where
$$ \begin{array}{c} f_+(k):=f(k)\quad\mbox{and}\quad f_-(k):=f^*(-k), \\ [3pt] \hat{\beta}(k)=8\sin^2(\pi k) \bigl[1+2\cos^2(\pi k) \bigr]. \end{array} $$
(2.20)
For any \(g\in L^{2}_{\mathbb{C}}(\mathbb{ T})\) a linear mapping \(Q[g]:L^{2}_{\mathbb{C}}(\mathbb{ T})\to L^{2}_{\mathbb{C}}(\mathbb{ T})\) is given by
$$Q[g](f) (k):=i\sum_{\sigma=\pm}\sigma\int _{\mathbb{ T}}r\bigl(k,k'\bigr)g_{\sigma } \bigl(k-k'\bigr)f\bigl(k'\bigr)\,dk',\quad \forall f\in L^2_{\mathbb{C}}(\mathbb{ T}), $$
where
[display formula not reproduced: definition of the kernel r(k,k′)]
Finally, {wr(t),t≥0}, r∈ℤ are i.i.d. one dimensional, standard Brownian motions, on a probability space \((\varOmega,{\mathcal{F}},{\mathbb{P}} )\), that are non-anticipative w.r.t. the given filtration \(\{{\mathcal{F}}_{t},t\ge0\}\).
We assume, as in [3], that
  1. (a1)

    {αy,y∈ℤ} is real valued and there exists C>0 such that |αy|≤Ce−|y|/C for all y∈ℤ,

     
  2. (a2)

    \(\hat{\alpha}(k)\) is also real valued, \(\hat{\alpha}(k)>0\) for \(k\not=0\) and in case \(\hat{\alpha}(0)=0\) we have \(\hat{\alpha}''(0)>0\).

     
The above assumptions imply that both y↦α_y and \(k\mapsto \hat{\alpha}(k)\) are real valued, even functions. In addition \(\hat{\alpha}\in C^{\infty}(\mathbb{ T})\) and if \(\hat{\alpha}(0)=0\) then \(\hat{\alpha}(k)=k^{2}\phi(k^{2})\) for some strictly positive \(\phi\in C^{\infty}( \mathbb{T})\). This in particular implies that, in the latter case, the dispersion relation \(\omega(k)=\sqrt{\hat{\alpha}(k)}\) belongs to \(C^{\infty}(\mathbb{ T}\setminus\{0\})\).
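A standard illustration (not the general potential of the paper) is the unpinned nearest-neighbour choice α_0=2, α_{±1}=−1, for which α̂(k)=2−2cos(2πk)=4sin²(πk); it satisfies (a1)–(a2) with α̂(0)=0, α̂″(0)=8π²>0 and α̂(k)=k²φ(k²), φ(0)=4π². A numerical sketch:

```python
import numpy as np

# Illustrative unpinned example: alpha_0 = 2, alpha_{±1} = -1, so that
# alpha_hat(k) = 4 sin^2(pi k).  We check alpha_hat(0) = 0,
# alpha_hat''(0) = 8 pi^2 > 0, the factorization alpha_hat(k) = k^2 phi(k^2)
# with phi > 0, and the finite slope of omega at 0+.

def alpha_hat(k):
    return 4.0 * np.sin(np.pi * k) ** 2

ks = np.array([1e-3, 1e-2, 1e-1])
phi = alpha_hat(ks) / ks ** 2        # stays close to 4 pi^2 near k = 0

h = 1e-4
second_deriv = (alpha_hat(h) - 2.0 * alpha_hat(0.0) + alpha_hat(-h)) / h ** 2

omega = lambda k: np.sqrt(alpha_hat(k))    # dispersion relation, = 2|sin(pi k)|
slope = (omega(1e-6) - omega(0.0)) / 1e-6  # omega'(0+) = 2 pi
```

The finite, nonzero one-sided slope of ω at k=0 is the 1-d unpinned behavior ω′(k) ≈ sign k (up to a constant) invoked in the Introduction.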
It can be easily checked that under the hypotheses made about the potential αy the mapping given by (2.19) is Lipschitz from \(L^{2}_{\mathbb{C}}(\mathbb{ T})\) to itself and \(\sum_{r\in\mathbb{ Z}}\|Q[g](e_{r})\|_{L^{2}}^{2}\le C\|g\|_{L^{2}}^{2}\) for some C>0 and all \(g\in L^{2}_{\mathbb{C}}(\mathbb{ T})\), so Q[g] is Hilbert-Schmidt. Using Theorem 7.4, p. 186, of [6] one can show that for any \(L^{2}_{\mathbb{C}}(\mathbb{ T})\)-valued, \({\mathcal{F}}_{0}\)-measurable, initial data \(\hat{\psi}_{0}^{(\epsilon)}\) there exists a unique solution to (2.18), understood as an \(L^{2}_{\mathbb{C}}(\mathbb{ T})\)-valued, continuous, adapted process \(\{\hat{\psi}^{(\epsilon)}(t), t\ge0\}\) satisfying (2.18) a.s. In addition, see Sect. 2 of [3], for every initial data \(\hat{\psi}_{0}^{(\epsilon)}\in L^{2}_{\mathbb{C}}(\mathbb{ T})\) we have
$$ \bigl\|\hat{\psi}^{(\epsilon)}(t)\bigr\|_{L^2}= \mbox{const}, \quad\forall t\ge 0,\ {\mathbb{P}}\mbox{ a.s.} $$
(2.21)

2.4 Wigner Transform

2.4.1 Some Function Spaces

Denote by \({\mathcal{S}}\) the set of functions \(J:\mathbb{ R}\times\mathbb{ T}\to\mathbb{C}\) that are of \(C^{\infty}\) class and such that for any integers l,m,n we have \(\sup_{p,k}(1+p^{2})^{n/2}|\partial_{p}^{l}\partial_{k}^{m}J(p,k)|<+\infty\). For any a∈ℝ we introduce the norm
$$ \|J\|_{{\mathcal{A}}_a'}:=\int_{\mathbb{ R}}\bigl(1+p^2\bigr)^{a/2}\sup_{k\in\mathbb{ T}}\bigl|J(p,k)\bigr|\,dp $$
(2.22)
for a given \(J\in{\mathcal{S}}\). By \({\mathcal{A}}'_{a}\) we denote the completion of \({\mathcal{S}}\) in the norm \(\|\cdot\|_{{\mathcal{A}}_{a}'}\). Note that \({\mathcal{A}}_{a}'\) is dual to \({\mathcal{A}}_{a}\) defined as the completion of \({\mathcal{S}}\) in the norm
$$ \|J\|_{{\mathcal{A}}_a}:=\sup_p\bigl(1+p^2 \bigr)^{-a/2}\int_{\mathbb{ T}}\bigl|J(p,k)\bigr|\,dk. $$
(2.23)
We use a shorthand notation \({\mathcal{A}}:={\mathcal{A}}_{0}\) and \({\mathcal{A}}':={\mathcal{A}}_{0}'\). With some abuse of notation by 〈⋅,⋅〉 we denote the scalar product in \(L^{2}_{\mathbb{C}}(\mathbb{ T})\) and the extension of
$$\langle J_1,J_2\rangle:=\int_{\mathbb{ R}\times\mathbb{ T}}J_1(p,k)J_2^*(p,k)\,dp\,dk $$
from \({\mathcal{S}}\times{\mathcal{S}}\) to \({\mathcal{A}}\times{\mathcal{A}}'\).
We shall also use the space \({\mathcal{B}}_{a,b}\) obtained by the completion of \({\mathcal{S}}\) in the norm
[display formula not reproduced: the norm \(\|J\|_{{\mathcal{B}}_{a,b}}\)]
(2.24)
When b=0 we shall write \({\mathcal{B}}_{a}\) instead of \({\mathcal{B}}_{a,0}\).

2.4.2 Random and Average Wigner Transform

For a given ϵ>0 let \(\hat{\psi}^{(\epsilon)}(t)\) be a solution of (2.18) with a random initial condition \(\hat{\psi}^{(\epsilon)}(0)\) distributed according to a probability measure μϵ on \(L^{2}_{\mathbb{C}}(\mathbb{ T})\). Define
$$ \widehat{W}_{\epsilon}(t,p,k):= \biggl\langle \bigl(\hat{\psi}^{(\epsilon)} \bigr)^* \biggl(t,k-\frac{\epsilon p}{2} \biggr)\hat{\psi}^{(\epsilon )} \biggl(t,k+\frac {\epsilon p}{2} \biggr) \biggr \rangle_{\epsilon} $$
(2.25)
and
$$ \widehat{Y}_{\epsilon}(t,p,k):= \biggl\langle\hat{\psi}^{(\epsilon )} \biggl(t,- k+\frac{\epsilon p}{2} \biggr)\hat{\psi}^{(\epsilon)} \biggl(t, k+\frac{\epsilon p}{2} \biggr) \biggr\rangle_{\epsilon}, $$
(2.26)
where, as we recall, 〈⋅〉ϵ is the average with respect to the initial condition. Using (2.7) and (2.21) we conclude that both \(\widehat{W}_{\epsilon}(t)\) and \(\widehat{Y}_{\epsilon}(t)\) belong, for any t≥0, to \(L^{1}({{\mathbb{P}}} ;{\mathcal{A}})\)—the space of \({\mathcal{A}}\)-valued random elements with a finite absolute moment. We also introduce the averaged objects \(\overline{W}_{\epsilon}(t,p,k)\) and \(\overline{Y}_{\epsilon}(t,p,k)\) using formulas analogous to (2.25) and (2.26), with 〈⋅〉ϵ replaced by \({\mathbb{E}}_{\epsilon}\), corresponding to the average over both the initial data and the realization of the Brownian motions.
The (averaged) Wigner transform Wϵ,γ(t) is defined as
$$ \bigl\langle W_{\epsilon,\gamma}(t),\tilde{J}\bigr\rangle:=\frac{\epsilon^{1+\gamma}}{2}\int_{\mathbb{ R}\times\mathbb{ T}}\overline{W}_{\epsilon}\bigl(t,\epsilon^{\gamma}p,k\bigr)J^*(p,k)\,dp\,dk, $$
(2.27)
where \(J\in{\mathcal{A}}'\) and
$$\tilde{J}(x,k):=\int_{\mathbb{ R}}\exp \{i2\pi px \}J(p,k)\,dp. $$
The anti-transform Yϵ,γ(t) is defined by an analogous formula, with \(\overline{W}_{\epsilon}\) replaced by \(\overline {Y}_{\epsilon}\).

2.5 Evolution of the Wigner Transform

Using the Itô formula for the solution of (2.18), see Theorem 4.17 of [6], we conclude that
[display formula not reproduced: evolution equation for \(\widehat{W}_{\epsilon}(t,p,k)\)]
(2.28)
where \(\{{\mathcal{M}}^{(\epsilon)}_{t},t\ge0\}\) is an \(\{ {\mathcal{F}}_{t},t\ge 0\}\)-adapted local martingale, given by
[display formula not reproduced: the stochastic integral defining \({\mathcal{M}}^{(\epsilon)}_{t}\)]
(2.29)
In order to guarantee that the stochastic integrals defined above are martingales, and not merely local ones, we need to make the additional assumption that μϵ possesses a finite fourth absolute moment. Taking the expectation of both sides of (2.28) with respect to the realizations of the Brownian motion we conclude that \(\overline{W}_{\epsilon}(t,p,k)={\mathbb{E}}\widehat{W}_{\epsilon}(t,p,k)\) satisfies
[display formula not reproduced: evolution equation for \(\overline{W}_{\epsilon}(t,p,k)\)]
(2.30)
where
$$ \begin{aligned}[c] \hat{\mathfrak{ p}}^{(\epsilon)}(t,k)&:=\frac{1}{2i} \bigl[\hat{\psi}^{(\epsilon )}(t,k)-\bigl(\hat{\psi}^{(\epsilon)}\bigr)^*(t,-k) \bigr], \\ \rho_{\epsilon}\bigl(k,k',p\bigr)&:=r \biggl(k- \frac{\epsilon p}{2},k' \biggr)r \biggl(k+\frac {\epsilon p}{2},k' \biggr), \\ \delta_{\epsilon}\omega(p,k)&:=\frac{1}{\epsilon} \biggl[\omega \biggl(k+ \frac{\epsilon p}{2} \biggr)-\omega \biggl(k-\frac{\epsilon p}{2} \biggr) \biggr], \\ \bar{\beta}_{\epsilon}(k,p)&:=\frac{1}{2} \biggl[\hat{\beta}\biggl(k+\frac{\epsilon p}{2} \biggr)+\hat{\beta}\biggl(k-\frac{\epsilon p}{2} \biggr) \biggr], \end{aligned} $$
(2.31)
and
[display formula not reproduced]
with \(\overline{Y}_{\epsilon}(t,p,k)={\mathbb{E}}\widehat{Y}_{\epsilon}(t,p,k)\).

Formula (2.30) remains valid also when only the second absolute moment is finite. This can be easily argued by approximating the initial condition by random elements that are deterministically bounded.

Since the momentum, that is the inverse Fourier transform of \(\hat{\mathfrak{ p}}^{(\epsilon)}(t,k)\), is real valued, the expression under the expectation appearing on the right hand side of (2.30) is an even function of k′, thus the last term appearing on the right hand side of the equation can be replaced by
$$ -4\int _{\mathbb{ T}}R_{\epsilon}\bigl(p,k,k'\bigr){ \mathbb{E}}_{\epsilon} \biggl[\bigl(\hat{\mathfrak{ p}}^{(\epsilon )}\bigr)^* \biggl(t,k'-\frac{\epsilon p}{2} \biggr)\hat{\mathfrak{ p}}^{(\epsilon )} \biggl(t,k'+\frac{\epsilon p}{2} \biggr) \biggr]\,dk', $$
(2.32)
where
$$R_{\epsilon}\bigl(p,k,k'\bigr):=\frac{1}{2}\sum _{\iota=\pm1}\rho_{\epsilon} \bigl(k,k+\iota k', p \bigr). $$
Note that
[display formula not reproduced]
The following relation holds
$$ 4\int_{\mathbb{ T}}R\bigl(k,k' \bigr)\,dk'=\hat{\beta}(k),\quad\forall k\in\mathbb{ T}. $$
(2.33)
We conclude therefore that \(\overline{W}_{\epsilon}(t,p,k)\) satisfies the following
$$ \bigl\langle\partial_t \overline{W}_{\epsilon}(t),J\bigr\rangle= \bigl\langle\,\overline{W}_{\epsilon}(t), (iB +{\mathcal{L}} )J\bigr\rangle +\bigl\langle{\mathcal{R}}_{\epsilon}(t),J \bigr\rangle,\quad\forall J\in {\mathcal{S}}, $$
(2.34)
where Bf(p,k):=pω′(k)f(p,k), for any \(f\in{\mathcal{S}}\). We let ω′(0):=0 in case ω is not differentiable at 0. In addition,
$$ {\mathcal{L}}:={\mathcal{L}}^{(0)}, $$
(2.35)
where for each ϵ∈[0,1] operator \({\mathcal{L}}^{(\epsilon )}\) acts on \({\mathcal{S}}\) according to the formula
[display formula not reproduced: definition of \({\mathcal{L}}^{(\epsilon)}\)]
(2.36)
and extends to a bounded operator on either \({\mathcal{A}}\), or \({\mathcal{A}}'\). Finally,
$$ {\mathcal{R}}_{\epsilon}(t,p,k):={\mathcal{R}}_{\epsilon}^{(1)}(t,p,k)+{\mathcal{R}}_{\epsilon}^{(2)}(t,p,k), $$
(2.37)
with
$$ \begin{aligned}[c] & {\mathcal{R}}_{\epsilon}^{(1)}(t,p,k):= \bigl[i \bigl(p\omega'(k)-\delta_{\epsilon}\omega (p,k) \bigr)+\bigl({ \mathcal{L}}^{(\epsilon)} -{\mathcal{L}}\bigr) \bigr]\overline{W}_{\epsilon} (t,p,k), \\ &{\mathcal{R}}_{\epsilon}^{(2)}(t,p,k):=\overline{\mathcal{R}}_{\epsilon}^{(2)}(t,p,k)+ \bigl[\,\overline{\mathcal{R}}_{\epsilon}^{(2)}(t,-p,k) \bigr]^*, \end{aligned} $$
(2.38)
where
$$ \overline{\mathcal{R}}_{\epsilon}^{(2)}(t,p,k):= \frac{1}{4}\hat{\beta}\biggl(k-\frac{\epsilon p}{2} \biggr)\overline{Y}_{\epsilon}(t, p,k)-\int_{\mathbb{ T}}R_{\epsilon} \bigl(p,k,k' \bigr)\overline{Y}_{\epsilon}\bigl(t,p,k'\bigr)\,dk'. $$
Similar calculations show that
$$ \frac{d}{dt}\overline{Y}_{\epsilon}(t,p,k)=- \frac{2i}{\epsilon} \bar{\omega}_{\epsilon} (p,k)\overline{Y}_{\epsilon}(t,p,k) +\overline{\mathcal{U}}_{\epsilon}(t,p,k), $$
(2.39)
where
$$ \bar{\omega}_{\epsilon}(p,k):=\frac{1}{2} \biggl[\omega \biggl(k+\frac{\epsilon p}{2} \biggr)+\omega \biggl(k-\frac{\epsilon p}{2} \biggr) \biggr] $$
and
[display formula not reproduced: definition of \(\overline{\mathcal{U}}_{\epsilon}(t,p,k)\)]

2.6 Probabilistic Interpretation of the Kinetic Linear Equation

Denote by \(\overline{U}(t,p,k)\) the Fourier transform of the solution of (2.9) in the x variable. Let Kt(k) be a \(\mathbb{ T}\)-valued, Markov jump process, defined over \((\varOmega ,{\mathcal{F}},{\mathbb{P}})\), starting at k, with the generator \({\mathcal{L}}\). Suppose also that \(\overline{U}_{0}\in{\mathcal{A}}\). Then
$$ \partial_t \overline{U}(t)-iB\overline{U}(t)={\mathcal{L}}\overline{U}(t),\quad \overline{U}(0)=\overline{U}_0 $$
(2.40)
is understood as a continuous \({\mathcal{A}}\)-valued function \(\overline{U}(t)\) such that
$$ \bigl\langle\,\overline{U}(t),J\bigr\rangle-\langle\overline{U}_0,J\rangle=\int_0^t\bigl \langle\,\overline{U}(s),(iB+{\mathcal{L}})J\bigr\rangle\, ds $$
(2.41)
for all \(J\in{\mathcal{A}}'\). It is well known that this solution admits the following probabilistic representation
$$ \overline{U}(t,p,k):={\mathbb{E}} \biggl[\exp \biggl\{- i p\int_0^t\omega' \bigl(K_s(k) \bigr)\,ds \biggr\}\overline{U}_0 \bigl(p,K_t(k) \bigr) \biggr]. $$
(2.42)
Here \({\mathbb{E}}\) is the expectation over ℙ. For given \(J\in{\mathcal{A}}'\) we let
$$ J(t,p,k):={\mathbb{E}} \biggl[\exp \biggl\{ i p\int _0^t\omega' \bigl(K_s(k) \bigr)\,ds \biggr\}J \bigl(p,K_t(k) \bigr) \biggr]. $$
(2.43)
Therefore \(J(t)\in{\mathcal{A}}'\). Using the reversibility of the Lebesgue measure under the dynamics of Kt, we conclude that the laws of \(\{K_{t-s},s\in[0,t]\}\) and \(\{K_{s},s\in[0,t]\}\) coincide. Hence,
$$ \bigl\langle\,\overline{U}(t),J\bigr\rangle=\bigl\langle\, \overline{U}_0,J(t)\bigr\rangle. $$
(2.44)
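The duality (2.44) can be illustrated on a finite state space, where matrix exponentials replace the Feynman-Kac formulas (2.42)-(2.43). Everything below (the symmetric generator Q, the vector φ standing in for ω′, the parameters p, t) is an illustrative choice, not an object from the paper:

```python
import numpy as np

# Finite-state sketch of the duality (2.44): the torus is replaced by the
# states {0,...,n-1}, the scattering generator L by a symmetric Markov
# generator Q (reversible w.r.t. the uniform measure, the discrete analogue
# of the Lebesgue measure), and omega' by a vector phi.  Then
#   U(t) = exp(t(Q - i p Phi)) U_0   (Feynman-Kac form of (2.42)),
#   J(t) = exp(t(Q + i p Phi)) J     (form of (2.43)),
# and <U(t), J> = <U_0, J(t)> for the pairing <f, g> = sum f g*.

def expm(A, squarings=12, terms=25):
    """Matrix exponential via scaling-and-squaring of a truncated series."""
    B = A / 2.0 ** squarings
    E = np.eye(A.shape[0], dtype=complex)
    T = np.eye(A.shape[0], dtype=complex)
    for j in range(1, terms):
        T = T @ B / j
        E = E + T
    for _ in range(squarings):
        E = E @ E
    return E

rng = np.random.default_rng(3)
n, p, t = 5, 1.7, 0.8

S = rng.uniform(0.1, 1.0, size=(n, n))
Q = 0.5 * (S + S.T)                  # symmetric (hence reversible) jump rates
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))  # rows sum to zero: a Markov generator

Phi = np.diag(rng.normal(size=n))    # multiplication by "omega'"
U0 = rng.normal(size=n) + 1j * rng.normal(size=n)
J = rng.normal(size=n) + 1j * rng.normal(size=n)

Ut = expm(t * (Q - 1j * p * Phi)) @ U0
Jt = expm(t * (Q + 1j * p * Phi)) @ J

lhs = np.sum(Ut * np.conj(J))        # <U(t), J>
rhs = np.sum(U0 * np.conj(Jt))       # <U_0, J(t)>
```

Since Q is real symmetric and Phi real diagonal, (Q − ipΦ)* = Q + ipΦ with respect to the conjugate-linear pairing, which is exactly the mechanism behind (2.44).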
Likewise, using the definition of \({\mathcal{R}}_{\epsilon}(t)\) (see (2.37)), from (2.34) and the Duhamel formula we get
$$ \bigl\langle\,\overline{W}_{\epsilon}(t),J\bigr \rangle=\bigl\langle\,\overline{W}_{\epsilon} (0),J(t)\bigr\rangle +\int _0^t\bigl\langle{\mathcal{R}}_{\epsilon}(s),J(t-s) \bigr\rangle \,ds,\quad \forall J\in {\mathcal{S}}. $$
(2.45)

3 Convergence of the Wigner Transform

For a given a∈ℝ define the norm
$$\|f\|_{H^{-a}}:= \biggl(\int_{\mathbb{ T}} \frac{|f(k)|^2}{|k|^{2a}}\,dk \biggr)^{1/2}. $$
Recall also that μϵ is the distribution of the initial data for Eq. (2.18). We assume that:
(Aa)
for a given a>0
$$ {\mathcal{K}}_{a,\gamma}:=\limsup_{\epsilon\to0+} \epsilon^{1+\gamma }\int\|f\|_{H^{-a}}^2 \mu_{\epsilon}(df)<+\infty. $$
(3.1)
Let \({\mathcal{K}}_{\gamma}:={\mathcal{K}}_{0,\gamma}\).

3.1 No Pinning

Since
$$\biggl\langle W_{\epsilon,\gamma} \biggl(\frac{t}{\epsilon^{3\gamma /2}} \biggr),\tilde{J} \biggr\rangle=\frac{\epsilon^{1+\gamma}}{2}\int_{\mathbb{ R}\times \mathbb{ T}}\overline{W}_{\epsilon} \biggl(\frac{t}{\epsilon^{3\gamma/2}},p\epsilon^{\gamma },k \biggr)J^*(p,k) \,dp\,dk, $$
part (1) of Theorem 2.1 is a consequence of the following result.

Theorem 3.1

Assume that t0>0, \(\hat{\alpha}(0)=0\) and a∈(0,1] is such that (3.1) holds. Then, for any γ∈(0,2a/3),
$$ 0<\gamma'< \gamma\min \biggl[\frac{3}{13}, \frac{1}{a+1} \biggr] $$
(3.2)
and b>1 one can find C>0 such that
[display formula not reproduced: the convergence rate estimate]
(3.3)
for all ϵ∈(0,1], μϵ, t≤t0, \(J\in{\mathcal{A}}_{5}'\cap{\mathcal{B}}_{a,b}\) and \(W_{0}\in{\mathcal{B}}_{a}\). Here
$$ \overline{W}_0(p)=\int_{\mathbb{ T}} W_0 (p,k )\,dk\quad\mbox{\textit{and}} \quad \overline{J}(p)=\int _{\mathbb{ T}} J (p,k )\,dk $$
(3.4)
and
$$ \hat{c}:= \biggl(\frac{\pi^2\hat{\alpha}''(0)}{2} \biggr)^{3/4}. $$
(3.5)

Proof

Denote by \(\overline{U}_{\epsilon} (t,p,k )\) the solution of (2.41) with the initial condition \(\overline{U}_{\epsilon} (0,p,k )=W_{0} (p\epsilon^{-\gamma},k )\). The left hand side of (3.3) can be estimated by
[display formula not reproduced: the decomposition of the left hand side of (3.3)]
(3.6)
Denote the terms appearing above by \({\mathcal{J}}_{1}\) and \({\mathcal{J}}_{2}\) respectively. The proof consists of two principal ingredients: the estimates of the rate of convergence of the averaged Wigner transform of the wave function given by (2.18) to the solution of the linear equation (2.40) (these correspond to the estimates of \({\mathcal{J}}_{1}\)), and further estimates for the long time, large scale asymptotics of these solutions (these will allow us to estimate \({\mathcal{J}}_{2}\)). To deal with the first issue we formulate the following.

Theorem 3.2

Suppose that \(\{\hat{\psi}^{(\epsilon)}(t),\,t\ge0\}\) is the solution of (2.18) with coefficients satisfying (a1)–(a2) with \(\hat{\alpha}(0)=0\) and (Aa) for some a∈(0,1]. Assume also that \(\overline{U}(t)\) and J(t) are given by (2.42) and (2.43) respectively. Then, there exists a constant C>0 such that
[display formula not reproduced: estimate (3.7)]
(3.7)
for all \(J\in{\mathcal{S}}\), ϵ∈(0,1] and t≥0.
We postpone the proof of this theorem until Sect. 4.1, proceeding instead with estimates of \({\mathcal{J}}_{1}\). Using (3.7) we can write that
[display formula not reproduced: estimate (3.8)]
(3.8)
where Jϵ(p,k):=ϵ^{−γ}J(ϵ^{−γ}p,k) and
$$(\delta W)_{\epsilon} (p,k ):=\frac{\epsilon^{1+\gamma }}{2}\overline{W}_{\epsilon} \bigl(0,p\epsilon^{\gamma},k \bigr)-\overline{W}_0 (p,k ). $$
Since \(\|J_{\epsilon}\|_{{\mathcal{A}}_{b}'}\le\|J\|_{{\mathcal{A}}_{b}'}\) for every ϵ∈(0,1] and b≥0, we conclude that the last two terms on the right hand side of (3.8) account for the last two terms on the right hand side of (3.3).
Denote the first term on the right hand side of (3.8) by \({\mathcal{I}}\). We can write that \({\mathcal{I}}\le{\mathcal{I}}_{1}+{\mathcal{I}}_{2}\), where
[display formula not reproduced: definitions of \({\mathcal{I}}_{1}\) and \({\mathcal{I}}_{2}\)]
Here for any function \(f:\mathbb{ R}\times\mathbb{ T}\to\mathbb{C}\) we denote
$$\overline{f}(p):=\int_{\mathbb{ T}} f(p,k)\,dk. $$
Term \({\mathcal{I}}_{1}\) accounts for the first term on the right hand side of (3.3). Using the reversibility of the Lebesgue measure under the dynamics of Kt (see (2.44)), we conclude that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equv_HTML.gif
To estimate \({\mathcal{I}}_{2}\) (and then further to estimate \({\mathcal{J}}_{2}\)) we need a bound on the convergence rate of the scaled functionals of the form (2.42). Let
$$ \overline{W}(t,p):=\overline{W}_0(p)\exp\bigl\{-\hat{c} |p|^{3/2}t\bigr\}. $$
(3.9)
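As a quick numerical sanity check (an illustration, not part of the proof), formula (3.9) satisfies \(\partial_t\overline{W}(t,p)=-\hat{c}|p|^{3/2}\overline{W}(t,p)\), i.e. the Fourier transform of the fractional heat equation from the introduction. The values of \(\hat{c}\) and the initial profile below are arbitrary placeholders.

```python
import math

# Hypothetical values: c_hat and the initial datum W0 are arbitrary choices.
c_hat = 0.7
W0 = lambda p: math.exp(-p * p)          # smooth initial profile in Fourier space

def W_bar(t, p):
    # Formula (3.9): solution of d/dt W = -c_hat |p|^{3/2} W in the Fourier variable p.
    return W0(p) * math.exp(-c_hat * abs(p) ** 1.5 * t)

# A central finite difference in t should match -c_hat |p|^{3/2} W_bar(t, p).
t, p, h = 0.8, 1.3, 1e-6
lhs = (W_bar(t + h, p) - W_bar(t - h, p)) / (2 * h)
rhs = -c_hat * abs(p) ** 1.5 * W_bar(t, p)
assert abs(lhs - rhs) < 1e-6
```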

Theorem 3.3

For anyt0>0, a∈(0,1], b>1 andγas in (3.2) there exists a constantC>0 such that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ58_HTML.gif
(3.10)
for allϵ∈(0,1], tt0, \(W_{0}\in{\mathcal{B}}_{a}\)and\(J\in{\mathcal{A}}_{5}'\cap{\mathcal{B}}_{a,b}\).
The proof of this result shall be presented in Sect. 6.1. Using the above theorem we can estimate
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equw_HTML.gif
Invoking Theorem 3.3 again, this time to estimate \({\mathcal{J}}_{2}\), we obtain that
$${\mathcal{J}}_2\le Ct \| W_0\|_{{\mathcal{B}}_a}\bigl(\|J \|_{{\mathcal{A}}_{5}'}+\|J\|_{{\mathcal{B}}_{a,b}}\bigr)\epsilon^{\gamma'}. $$
The above estimates account for the second term on the right hand side of (3.3), thus concluding the proof of the estimate in (3.3).

3.2 Pinned Case

Part (2) of Theorem 2.1 is a direct consequence of the following result.

Theorem 3.4

Assume that \(\hat{\alpha}(0)>0\) and t0>0. Then, for any γ∈(0,1/2), a∈(0,1] and
$$ 0<\gamma'<\frac{a\gamma}{a+1} $$
(3.11)
one can find C>0 such that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ60_HTML.gif
(3.12)
for allϵ∈(0,1], tt0, \(J\in{\mathcal{A}}_{4}'\cap {\mathcal{B}}_{a,b}\)and\(W_{0}\in{\mathcal{B}}_{a}\). Here\(\overline{W}_{0}(p)\), \(\overline{J}(p)\)are given by (3.4) and
$$ \hat{c}:= 6\int_{\mathbb{ T}}\frac{[\omega'(k)]^2}{R(k)}\,dk. $$
(3.13)

Proof

We proceed in the same fashion as in the proof of Theorem 3.1 so we only outline the main points of the argument. First, we estimate the left hand side of (3.12) by an expression corresponding to (3.6). The first term is estimated by an analogue of Theorem 3.2 that in this case can be formulated as follows.

Theorem 3.5

Assume that conditions (a1)–(a2) and (A0) hold. In addition, we let \(\hat{\alpha}(0)>0\). Then, the averaged Wigner transform \(\overline{W}_{\epsilon} (t)\) satisfies the following: there exists a constant C>0 such that
$$ \biggl \vert \biggl\langle\frac{\epsilon^{1+\gamma}}{2}\overline{W}_{\epsilon}(t)-\overline{U}(t),J \biggr\rangle\biggr \vert \le \biggl \vert \biggl\langle\frac{\epsilon^{1+\gamma}}{2}\overline{W}_{\epsilon}(0)-\overline{U}_0,J(t) \biggr\rangle\biggr \vert + C\epsilon\|J \|_{{\mathcal{A}}_{1}'}{\mathcal{K}}_{\gamma}t $$
(3.14)
for all ϵ∈(0,1] and t≥0.
The proof of this result is presented in Sect. 4.2. In the next step we estimate the rate of convergence of the functional appearing in the formula for the probabilistic solution of the linear Boltzmann equation towards the solution of the heat equation, as stated in the following theorem. Let
$$ \overline{W}(t,p):=\overline{W}_0(p)\exp\bigl\{-\hat{c} p^{2}t\bigr\}. $$
(3.15)

Theorem 3.6

For any t0>0, a∈(0,1], b>1 and γ as in (3.11) there exists a constant C>0 such that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ64_HTML.gif
(3.16)
for allϵ∈(0,1], tt0, \(W\in{\mathcal{B}}_{a}\)and\(J\in{\mathcal{A}}_{4}'\cap{\mathcal{B}}_{a,b}\).

The proof of the above theorem is contained in Sect. 6.2. The remaining part of the proof follows the argument of Sect. 3.1.

4 Proofs of Theorems 3.2 and 3.5

4.1 Proof of Theorem 3.2

From (2.45) and (2.44) we conclude that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equy_HTML.gif
with \({\mathcal{R}}_{\epsilon}(t)\) given by (2.37). The estimates of the last term on the right hand side above shall be carried out separately for each term appearing on the right hand side of (2.37).

4.1.1 Terms Corresponding to \({\mathcal{R}}_{\epsilon}^{(1)}\)

Denote
$${\mathcal{E}}^{(\epsilon)}(t,k):=\frac{\epsilon^{1+\gamma }}{2}\overline{W}_{\epsilon} (t,0,k)\quad\mbox{and}\quad{\mathcal{G}}^{(\epsilon)}(t,k):=\frac {\epsilon^{1+\gamma }}{2} \overline{Y}_{\epsilon}(t,0,k). $$

Lemma 4.1

For a givena∈(0,1] there existsC>0 such that
$$ \int_{\mathbb{ T}}\frac{{\mathcal{E}}^{(\epsilon)}(t,k)\,dk}{|k|^{2a}}\le Ct{\mathcal{K}}_{\gamma }+{\mathcal{K}}_{a,\gamma}, $$
(4.1)
for all ϵ∈(0,1], t≥0.

Proof

Denote the expression on the left hand side of (4.1) by \({\mathcal{E}}_{a}(t)\). From (2.34) and (2.37) we conclude that
$$ \biggl \vert \frac{d{\mathcal{E}}_a(t)}{dt}\biggr \vert \le\int _{\mathbb{ T}}\frac{dk}{|k|^{2a}} \bigl[\bigl|{\mathcal{L}}\bigl({\mathcal{E}}^{(\epsilon )}(t)\bigr) (k)\bigr|+\bigl|{\mathcal{L}}\bigl(\operatorname{Re} {\mathcal{G}}^{(\epsilon )}(t)\bigr) (k)\bigr| \bigr]. $$
(4.2)
Since \(|k|^{-2a}R(k,k')\) is bounded when a∈(0,1], we can bound the right hand side of (4.2), with the help of (2.21), by \(C{\mathcal{K}}_{\gamma}\), and (4.1) follows. □
Let \(\Delta\omega_{\epsilon}(p,k):=p\omega'(k)-\delta_{\epsilon}\omega(p,k)\). For a given q∈ℝ define by mod(q,1/2) the unique r∈[−1/2,1/2) such that q=m+r, where m∈ℤ. Divide the cylinder \(\mathbb{ R} \times\mathbb{ T}\) into two domains: \({\mathcal{C}}\)—described below—and its complement \({\mathcal{C}}^{c}\). The first domain consists of those (p,k) for which either |mod(k±ϵp/2,1/2)|≥1/4, or both points mod(k±ϵp/2,1/2) belong to the interval [−1/2,0], or both belong to [0,1/2]. We can write then
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equaa_HTML.gif
where the terms on the right hand side correspond to the integration over the aforementioned domains. One can easily verify that there exists C>0 such that |Δωϵ(p,k)|≤Cϵ|p| for \((p,k)\in{\mathcal{C}}\). We can therefore estimate
$$ |I_1|\le C\epsilon\|J\|_{{\mathcal{A}}_{1}'}{\mathcal{K}}_{\gamma}. $$
(4.3)
On the other hand, when \((p,k)\in{\mathcal{C}}^{c}\), we may assume with no loss of generality that 1/2>k+ϵp/2>0>kϵp/2>−1/2 (the other cases can be handled analogously); then p is positive and k∈(−ϵp/2,ϵp/2). Since |Δωϵ(p,k)|≤C|p|, for a given a∈(0,1] we can write
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equab_HTML.gif
for some constant C>0. Using the above estimate we obtain
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ68_HTML.gif
(4.4)
By virtue of (4.1) we get that
$$ |I_2|\le C\epsilon^{2a}(s{\mathcal{K}}_{\gamma}+{\mathcal{K}}_{a,\gamma })\|J\|_{{\mathcal{A}}_{2a+1}'}. $$
(4.5)
Taking the Taylor expansion of Rϵ(p,k,k′), up to terms of order ϵ, it is also straightforward to conclude that
$$ \biggl \vert \frac{\epsilon^{1+\gamma}}{2}\bigl\langle\bigl({\mathcal{L}}^{(\epsilon )} -{\mathcal{L}}\bigr) \overline{W}_{\epsilon}(s),J(t-s)\bigr \rangle\biggr \vert \le\epsilon{\mathcal{K}}_{\gamma}\|J\|_{{\mathcal{A}}'}. $$
(4.6)
Summarizing, (4.3), (4.5) and (4.6) together imply
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ71_HTML.gif
(4.7)

4.1.2 Terms Corresponding to \({\mathcal{R}}_{\epsilon}^{(2)}\)

Straightforward computations, taking the Taylor expansions of \(\hat{\beta}(k-\epsilon p/2)\) and Rϵ(p,k,k′) up to ϵ, show that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equac_HTML.gif
From (2.39) we conclude the following estimate.

Lemma 4.2

Suppose that\(\phi_{\epsilon}:\mathbb{ R}\times\mathbb{ T}\to\mathbb{ R}\)is such that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ72_HTML.gif
(4.8)
Then, there existsC>0 such that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ73_HTML.gif
(4.9)
for all\(J\in{\mathcal{S}}\), ϵ∈(0,1], t>0.

Proof

The left hand side of (4.9) can be rewritten as
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ74_HTML.gif
(4.10)
where \(\varGamma_{\epsilon}(p,k):= \phi_{\epsilon}(p,k)\bar{\omega}_{\epsilon}^{-1}(p,k)\). Using (2.39) we can estimate this expression by
$$ \epsilon\biggl \vert \frac{\epsilon^{1+\gamma}}{2}\int _0^t \bigl\langle \partial_s \overline{Y}_{\epsilon}(s)\varGamma_{\epsilon},J(t-s) \bigr\rangle\, ds \biggr \vert +\epsilon\biggl \vert \frac{\epsilon^{1+\gamma}}{2}\int_0^t \bigl\langle{\mathcal{U}}_{\epsilon}(s)\varGamma_{\epsilon},J(t-s)\bigr \rangle \,ds\biggr \vert . $$
(4.11)
Thanks to (2.21) and (4.8) the second term is bounded by \(\epsilon\varGamma_{*}\|J\|_{\mathcal{A}'}{\mathcal{K}}_{\gamma}t\). On the other hand, integration by parts allows us to estimate the first one by
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ76_HTML.gif
(4.12)
by virtue of (2.21) and (4.8). □
Since
$$ \sup_{\epsilon,k,k',p}\frac{R_{\epsilon}(p,k,k')}{\bar{\omega}_{\epsilon}(p,k)}<+\infty $$
(4.13)
holds, from the above lemma we conclude that
$$ \biggl \vert \frac{\epsilon^{1+\gamma}}{2}\int_0^t \bigl\langle{\mathcal{L}}\bigl( \operatorname{Re} \overline{Y}_{\epsilon}(s) \bigr),J(t-s)\bigr\rangle \,ds\biggr \vert \le C{\mathcal{K}}_{\gamma} \epsilon\bigl(t\|J\|_{{\mathcal{A}}_1'}+\|J\|_{{\mathcal{A}}'}\bigr). $$
(4.14)
This ends the proof of (3.7). □

4.2 Proof of Theorem 3.5

We maintain the notation from the argument made in the previous section. Estimate (4.4) can be improved, since in this case there exists C>0 such that |Δωϵ(p,k)|≤Cϵ|p| for all (p,k) and ϵ∈(0,1]. Thus, we can write
$$ |I_2|\le C{\mathcal{K}}_{\gamma}\epsilon\int _{\mathbb{ R}\times\mathbb{ T}}|p|{\mathbb{E}} \bigl|J\bigl(p,K(t-s,k)\bigr)\bigr|\,dp\,dk\le C{ \mathcal{K}}_{\gamma}\epsilon\|J\|_{{\mathcal{A}}_1'} $$
(4.15)
for all t≥0. With this improvement in mind we conclude that there is a constant C>0 such that
$$ \biggl \vert \frac{\epsilon^{1+\gamma}}{2}\int_0^t \bigl\langle{\mathcal{R}}_{\epsilon}^{(1)}(s),J(t-s)\bigr\rangle\, ds \biggr \vert \le C\epsilon\|J\|_{{\mathcal{A}}_{1}'}{\mathcal{K}}_{\gamma}t $$
(4.16)
for all t≥0, ϵ∈(0,1]. Repeating the argument from the proof of Theorem 3.2 for the term corresponding to \({\mathcal{R}}_{\epsilon}^{(2)}\) we conclude (3.14).  □

5 Convergence Rate for the Characteristic Function of an Additive Functional of a Markov Process

5.1 Markov Chains

Suppose that {ξn,n≥0} is a Markov chain taking values in a Polish metric space (E,d). Assume that π—the law of ξ0—is an invariant and ergodic probability measure for the chain. The transition operator satisfies:

Condition 5.1

(Spectral gap condition)

https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equad_HTML.gif
Since P is also a contraction in \(L^1(\pi)\) and \(L^\infty(\pi)\) we conclude, via the Riesz–Thorin interpolation theorem, that for any β∈[1,+∞):
$$ \|Pf\|_{L^\beta(\pi)}\le a^{\kappa(\beta)}\|f\|_{L^\beta(\pi)}, $$
(5.1)
for all \(f\in L^{\beta}_{0}(\pi)\)—the subspace of \(L^{\beta}(\pi)\) consisting of functions satisfying ∫fdπ=0—with κ(β):=1−|2/β−1|>0. Thus, \(Q_{N}:=\sum_{n=0}^{N}P^{n}\) satisfies
$$ \|Q_Nf\|_{L^{\beta}(\pi)}\le\bigl(1-a^{\kappa(\beta)} \bigr)^{-1} \|f\|_{L^{\beta }(\pi)},\quad\forall f\in L^\beta_0(\pi). $$
(5.2)
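For intuition, the consequences (5.1)–(5.2) of the spectral gap can be checked on a toy example. The 3-state chain below is an arbitrary illustration (not from the paper); the contraction factor a is measured empirically along the orbit of a single mean-zero observable, so the check covers only that orbit, in the case β=2 where κ(β)=1.

```python
import math

# Toy 3-state chain (an arbitrary example, not from the paper).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

# Invariant measure pi: solve pi P = pi by power iteration.
pi = [1 / 3.0] * 3
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

def apply_P(f):          # (Pf)(i) = sum_j P(i, j) f(j)
    return [sum(P[i][j] * f[j] for j in range(3)) for i in range(3)]

def norm2(f):            # L^2(pi) norm
    return math.sqrt(sum(pi[i] * f[i] ** 2 for i in range(3)))

# A mean-zero observable: int f dpi = 0.
f = [1.0, -0.5, 0.0]
m = sum(pi[i] * f[i] for i in range(3))
f = [x - m for x in f]

# Empirical contraction factor a on the orbit of f, and the consequence (5.2)
# for Q_N = sum_{n<=N} P^n with beta = 2, kappa(beta) = 1.
g, norms = f, [norm2(f)]
for _ in range(20):
    g = apply_P(g)
    norms.append(norm2(g))
a = max(norms[n + 1] / norms[n] for n in range(20))
assert a < 1.0                       # spectral gap, on this orbit
QN_norm = sum(norms)                 # >= ||Q_20 f||_{L^2(pi)} by the triangle inequality
assert QN_norm <= norm2(f) / (1 - a)
```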
Furthermore, we assume the following regularity property of the transition probabilities:

Condition 5.2

(Existence of bounded probability densities w.r.t. π)

The transition probability is of the form P(w,dv)=p(w,v)π(dv), where the kernel p(⋅,⋅) belongs to \(L^\infty(\pi\otimes\pi)\).

5.2 Convergence of Additive Functionals

Suppose that Ψ:E→ℝ satisfies the tail estimate
$$ \pi\bigl(|\varPsi|>\lambda\bigr) \le\frac{ C}{ \lambda^\alpha} $$
(5.3)
for some C>0, α>1 and all λ≥1 and
$$ \int\varPsi \,d\pi=0. $$
(5.4)
We wish to describe the behavior of tail probabilities \({\mathbb{P}} [\vert Z_{t}^{(N)}\vert \geq N^{\kappa}]\) when κ>0 for the scaled partial sum process
$$ Z^{(N)}_t:=\frac{1}{N^{1/\alpha}}\sum _{n=0}^{[Nt]}\varPsi(\xi_n),\quad t\ge0. $$
(5.5)
To that end we represent \(Z_{t}^{(N)}\) as a sum of an Lβ integrable martingale for β∈[1,α) and a boundary term vanishing with N. Let χ be the unique solution, belonging to \(L^{\beta}_{0}(\pi)\) for β∈[1,α), of the Poisson equation
$$ \chi-P\chi=\varPsi. $$
(5.6)
In fact, using Condition 5.2 we conclude that \(P\chi\in L^\infty(\pi)\). Therefore the tails of χ and Ψ under π are identical. We introduce an Lβ-integrable martingale letting M0:=0,
$$ M_N:=\sum_{n=1}^{N}Z_n, \quad\mbox{where }Z_n:=\chi(\xi_{n})-P\chi ( \xi_{n-1}),\ N\geq1 $$
(5.7)
and the respective partial sum process \(M_{t}^{(N)}:= N^{-1/\alpha }M_{[Nt]}\), t≥0.
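On a finite state space the construction (5.6)–(5.7) can be made completely explicit. The sketch below (an arbitrary 3-state chain, not taken from the paper) solves the Poisson equation by the Neumann series \(\chi=\sum_{n\ge0}P^n\varPsi\), which converges thanks to the spectral gap, and verifies the pathwise martingale decomposition; the sum is taken up to N−1, the indexing convention under which the identity is exact.

```python
import random

random.seed(7)

# Arbitrary 3-state example chain (not from the paper).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
pi = [1 / 3.0] * 3
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

Psi = [1.0, -0.5, 0.0]
m = sum(pi[i] * Psi[i] for i in range(3))
Psi = [x - m for x in Psi]            # centering: int Psi dpi = 0, cf. (5.4)

# Solve the Poisson equation (5.6) via the Neumann series chi = sum_n P^n Psi.
chi, g = list(Psi), list(Psi)
for _ in range(200):
    g = [sum(P[i][j] * g[j] for j in range(3)) for i in range(3)]
    chi = [chi[i] + g[i] for i in range(3)]
Pchi = [sum(P[i][j] * chi[j] for j in range(3)) for i in range(3)]
assert all(abs(chi[i] - Pchi[i] - Psi[i]) < 1e-9 for i in range(3))

# Simulate a path and check the decomposition pathwise:
# sum_{n<N} Psi(xi_n) = M_N + chi(xi_0) - chi(xi_N), with M_N as in (5.7).
def step(i):
    u, c = random.random(), 0.0
    for j in range(3):
        c += P[i][j]
        if u < c:
            return j
    return 2

N, xi = 1000, [0]
for _ in range(N):
    xi.append(step(xi[-1]))
M_N = sum(chi[xi[n]] - Pchi[xi[n - 1]] for n in range(1, N + 1))
S = sum(Psi[xi[n]] for n in range(N))
assert abs(S - (M_N + chi[xi[0]] - chi[xi[N]])) < 1e-9
```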
Using the dual version of the Burkholder inequality for Lβ-integrable martingales with β∈(1,2) (see Corollary 4.22, p. 101 of [17], and also [11]) we conclude that there exists C>0 such that
$$ \bigl({\mathbb{E}}\vert M_N\vert ^{\beta} \bigr)^{1/\beta}\leq CN^{1/\beta},\quad\forall N\ge1. $$
(5.8)

Lemma 5.3

Under the assumptions (5.3) and (5.4) for anyκ>0 andδ∈(0,ακ) there existsC>0 such that
$$ {\mathbb{P}} \bigl[\bigl \vert Z_t^{(N)} \bigr \vert \geq N^\kappa \bigr]\leq \frac{C( t+1)}{N^\delta},\quad \forall N \ge1,\ t\ge0. $$
(5.9)

Proof

Choose β∈[1,α). We can write
$$ \sum_{n=0}^{[Nt]}\varPsi( \xi_n)=M_{[Nt]}+\chi(\xi_0)-\chi( \xi_{[Nt]}). $$
(5.10)
From (5.10) and Chebyshev’s inequality we can estimate the left hand side of (5.9) by
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equae_HTML.gif
for some C>0. On the other hand
$${\mathbb{P}} \biggl[\vert M_{[Nt]}\vert \geq\frac{N^{1/\alpha +\kappa}}{3} \biggr]\leq\frac {1}{N^{\beta(1/\alpha+\kappa)}}{\mathbb{E}}\vert M_{[Nt]} \vert ^{\beta}\le\frac {Ct}{N^{\beta(1/\alpha+\kappa)-1}}. $$
The last inequality follows from (5.8). Choosing β sufficiently close to α we conclude the assertion of the lemma. □

Suppose furthermore that an observable Ψ:E→ℝ is such that

Condition 5.4

∫Ψ dπ=0 and there existα∈(1,2), α1∈(0,2−α), nonnegative constants\(c_{*}^{+},c_{*}^{-}\)and a constant\(C^{*}>0\)such that\(c_{*}^{+}+c_{*}^{-}>0\)and
$$ \biggl \vert \pi(\varPsi>\lambda) -\frac{ c_*^+}{ \lambda^\alpha}\biggr \vert +\biggl \vert \pi (\varPsi<- \lambda) -\frac{ c_*^-}{ \lambda^\alpha}\biggr \vert \le \frac {C^*}{\lambda^{\alpha+\alpha_1}} $$
(5.11)
for allλ≥1.
Let {Zt,t≥0} be an α-stable process with the Lévy exponent
$$ \psi(p):=\alpha\int_{\mathbb{ R}}\bigl(1+i \lambda p-e^{i\lambda p}\bigr)\frac{c_*(\lambda)\,d\lambda}{|\lambda|^{1+\alpha}}, $$
(5.12)
where
$$ c_*(\lambda):= \left\{ \begin{array}{l@{\quad}l} c_*^-,&\mbox{when }\lambda<0,\\ c_*^+,&\mbox{when }\lambda>0. \end{array} \right. $$
(5.13)
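As an illustration (arbitrary parameter values, not from the paper): in the symmetric case \(c_*^+=c_*^-=c\) the odd part of the integrand in (5.12) cancels, so ψ(p) is real and equals a positive constant times |p|^α. The quadrature sketch below checks this scaling numerically.

```python
import math

# Numerical check: for symmetric c_*^+ = c_*^- = c the exponent (5.12)
# scales like |p|^alpha. Parameter values are arbitrary test choices.
alpha, c = 1.5, 1.0

def psi(p):
    # Re psi(p) = 2 alpha c * int_0^infty (1 - cos(lambda p)) lambda^{-1-alpha} dlambda,
    # computed by the trapezoid rule on a geometric grid; the odd imaginary
    # part of the integrand cancels by symmetry.
    lo, hi, n = 1e-8, 200.0, 4000
    r = (hi / lo) ** (1.0 / n)
    grid = [lo * r ** k for k in range(n + 1)]
    f = [(1.0 - math.cos(l * p)) / l ** (1.0 + alpha) for l in grid]
    integral = sum(0.5 * (f[k] + f[k + 1]) * (grid[k + 1] - grid[k])
                   for k in range(n))
    return 2.0 * alpha * c * integral

ratio = psi(2.0) / psi(1.0)
assert abs(ratio - 2.0 ** alpha) < 0.05   # |p|^alpha scaling, up to quadrature error
```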

In what follows we shall prove the following.

Theorem 5.5

Under the assumptions made above, for anyδ∈(0,α/(α+1)) there existsC>0 such that
$$ \bigl \vert {\mathbb{E}}e^{ip Z^{(N)}_t}-e^{-t\psi(p)}\bigr \vert \le C\bigl(1+|p|\bigr)^{5}(t+1) \biggl(\frac{1}{N^{\alpha_1/\alpha}}+ \frac{1}{N^{\delta}} \biggr) $$
(5.14)
for allp∈ℝ, t≥0, N≥1.

Proof

In the first step we replace an additive functional of the chain by a martingale partial sum process. Using (5.10) we conclude that there exists C>0 such that
$$\bigl \vert {\mathbb{E}}e^{ip Z^{(N)}_t}-{\mathbb{E}}e^{ip M^{(N)}_t}\bigr \vert \le\frac{C|p|}{N^{1/\alpha }},\quad\forall t\ge0,\ N\ge1,\ p\in\mathbb{ R} $$
and (5.14) shall follow as soon as we show that for any δ∈(0,α/(α+1)) there exists C>0 such that
$$ \bigl \vert {\mathbb{E}}e^{ip M^{(N)}_t}-e^{-t\psi(p)}\bigr \vert \le C\bigl(1+|p|\bigr)^{5}(t+1) \biggl(\frac{1}{N^{\alpha_1/\alpha}}+ \frac{1}{N^{\delta}} \biggr) $$
(5.15)
for all p∈ℝ, t≥0, N≥1. The remaining part of the argument is therefore devoted to the proof of (5.15). Denote \(Z_{N,n}:=N^{-1/\alpha}Z_n\), with Zn defined in (5.7), and let χ be the solution of the Poisson equation (5.6) with the right hand side equal to Ψ. Introduce also
$$ \begin{aligned}[c] h_p(x)&:=e^{ip x}-1-ip x, \\ \bar{h}_p^{(N)}&:={\mathbb{E}}h_p (Z_{N,1} ) \end{aligned} $$
(5.16)
and \(\psi^{(N)}(p):=-N \bar{h}_{p}^{(N)}\). Observe that \(\operatorname{Re} \bar{h}_{p}^{(N)}\le0\).

Lemma 5.6

There exists a constantC>0 such that
$$ \bigl|\psi^{(N)}(p)-\psi(p)\bigr|\le\frac{C}{N^{\alpha_1/\alpha}}|p| \bigl(1+|p|\bigr). $$
(5.17)
In addition, for any bounded set Δ⊂ℝ andβ∈[1,α) there existsC>0 such that
$$ N\int\sup_{\lambda\in\Delta}\biggl \vert h_p \biggl( \frac{\varPsi (v)+\lambda }{N^{1/\alpha}} \biggr)\biggr \vert ^{\beta}\pi(dv)\le C|p|^{\beta }\bigl(1+|p|\bigr),\quad \forall p\in\mathbb{ R},\ N\ge1. $$
(5.18)

Proof

Denote \(\tilde{\varPsi}(w,v):=\varPsi(v)+P\chi(v)-P\chi(w)\). With this notation the expression on the left hand side of (5.17) can be rewritten as
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ99_HTML.gif
(5.19)
Here FN(λ):=π(ΨN1/αλ). The first term on the right hand side can be estimated by
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equah_HTML.gif
where Δ is a bounded interval containing all possible values of \(P\chi(v)-P\chi(w)\). This expression can be further estimated by
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equai_HTML.gif
As for the second term on the right hand side of (5.19), using integration by parts, we conclude that it can be bounded by
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ100_HTML.gif
(5.20)
The first estimate follows from (5.11), while the last one follows upon the change of variables λ′:=λp and the fact that 1<α+α1<2.
Concerning (5.18), the expression appearing there can be estimated by
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ101_HTML.gif
(5.21)
The first term is estimated by
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equaj_HTML.gif
for some C>0 and all p∈ℝ, N≥1. Since Δ is bounded the second term on the other hand is smaller than
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ102_HTML.gif
(5.22)
Hence, (5.18) follows. □
In fact, in light of the above lemma, to prove the theorem it suffices to show that for any δ∈(0,α/(α+1)) one can choose C>0 so that
$$ \bigl \vert {\mathbb{E}}e^{ip M^{(N)}_t}-e^{-t\psi^{(N)}(p)}\bigr \vert \le \frac{C}{N^{\delta }}\bigl(1+|p|\bigr)^{5}(t+1) $$
(5.23)
for all p∈ℝ, t≥0, N≥1. To shorten the notation we let \(M_{j,N}:=M_j/N^{1/\alpha}\). Since {Mn,n≥0} is \(({\mathcal{F}}_{n})\)-adapted and \({\mathbb{E}}[ Z_{N,n+1} \mid {\mathcal{F}}_{n}]=0\), we can write
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equak_HTML.gif
To derive a recursive formula for
$$W_{j}:=\exp\bigl\{\psi^{(N)}(p)j/N\bigr\} {\mathbb{E}}\exp \{ipM_{j,N}\}, $$
write
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equam_HTML.gif
Since M0=0, adding up from j=0 to [Nt]−1 and then dividing both sides of the resulting equality by \(\exp\{\psi^{(N)}(p)[Nt]/N\}\) we get
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ104_HTML.gif
(5.24)
where
$$e_{N,j}:=\exp\bigl\{\psi^{(N)}(p) \bigl(j+1-[Nt]\bigr)/N\bigr \}e^{ipM_{j,N}}. $$
We denote the terms appearing on the right hand side of (5.24) by I and II and examine each of them separately. As far as II is concerned we bound its absolute value by
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ105_HTML.gif
(5.25)
and since \(\operatorname{Re}\bar{h}_{p}^{(N)}\le0\) we obtain the following
$$ |\mathit{II}|\le Nt \bigl|\bar{h}_p^{(N)} \bigr|^2\le\frac{Ct|p|^{2}(1+|p|)^2}{N} $$
(5.26)
for some C>0.
Fix K≥1, to be adjusted later on, and divide the set ΛN={0,…,[Nt]−1} into ℓ=[[Nt]/K]+1 contiguous subintervals, the first ℓ−1 of size K and the last one of size K′≤K, i.e.
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equao_HTML.gif
and
$${\mathcal{I}}_{\ell} = \bigl\{j_m, \dots, j_m + K' \bigr\} $$
with K′≤K. Here [a] stands for the integer part of a∈ℝ. To simplify the notation we shall assume that K′=K. This assumption does not influence the asymptotics.
We need to estimate the absolute value of
$$ I = \sum_{k=1}^{\ell} \sum _{j\in I_k} {\mathbb{E}} \bigl\{e_{N,j} \bigl[h_p(Z_{N,j+1})- \bar{h}_p^{(N)} \bigr] \bigr\}=I_1+I_2, $$
(5.27)
where
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equaq_HTML.gif
The conditional expectation in the formula for I1 equals
$$\int p(\xi_j,v) \biggl[h_p \biggl(\frac{\varPsi(v)+P\chi(v)-P\chi(\xi_j)}{N^{1/\alpha}} \biggr)- \bar{h}_p^{(N)} \biggr]\pi(dv). $$
The supremum of its absolute value can be estimated by
$$2\bigl\|p(\cdot{,}\cdot)\bigr\|_{\infty}\int\sup_{\lambda\in\Delta}\biggl \vert h_p \biggl(\frac{\varPsi(v)+\lambda}{N^{1/\alpha}} \biggr)\biggr \vert \pi(dv)\le \frac {C|p|(1+|p|)}{N}, $$
for some constant C>0. Here Δ is a bounded set containing 0 and all possible values of \(P\chi(w)-P\chi(z)\) for z,wE. The last inequality follows from Lemma 5.6. On the other hand, since the real part of \(\log e_{N,j}\) is non-positive, we have
$${\mathbb{E}}|e_{N,j}-e_{N,j_k}|\le|p| \Bigl(K{\mathbb{E}}\bigl|\bar{h}_p^{(N)}\bigr|+{\mathbb{E}} \Bigl[\sup_{l\in I_k}|M_{l,N}-M_{j_k,N}| \Bigr] \Bigr),\quad\forall j\in {\mathcal{I}}_k. $$
According to (5.18) the first term in parentheses can be estimated by C|p|(1+|p|)K/N. Choose β∈(1,α). From (5.8) and Doob’s inequality we can estimate the second term by CK1/β/N1/α. Summarizing we have shown that
$$ |I_1|\le C(t+1) \bigl(1+|p| \bigr)^5 \biggl( \frac{K^{1/\beta }}{N^{1/\alpha }}+\frac{K}{N} \biggr). $$
(5.28)
On the other hand,
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equau_HTML.gif
where
$$g_N(w):=\int h_p \biggl(\frac{\chi(v)-P\chi(w)}{N^{1/\alpha}} \biggr)p(w,v)\pi (dv)- \bar{h}_p^{(N)}. $$
Fix β∈(1,α). Since ∫gN dπ=0 and ℓ=[[Nt]/K]+1, from (5.2) we conclude that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equaw_HTML.gif
Hence, we have shown that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equax_HTML.gif
Choose K=N^δ with δ∈(0,1). It is easy to see that
$$1-\delta>\frac{1}{\alpha}-\frac{\delta}{\beta},\quad\forall \beta\in(1,\alpha) $$
thus the middle term on the right hand side is smaller than the first one. The optimal rate is therefore obtained when
$$\frac{1}{\alpha}-\frac{\delta}{\beta}=\delta+\frac{1}{\beta}-1, $$
or equivalently
$$\delta= \frac{1/\alpha+2}{1/\beta+1}-1. $$
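The exponent algebra above can be double-checked numerically. The sketch below verifies that δ(β)=(1/α+2)/(1/β+1)−1 solves the balance equation for each β∈(1,α) and increases to α/(α+1) as β↑α; the value α=1.5 is an arbitrary test choice.

```python
# Check of the exponent algebra (illustration only): delta(beta) from the
# balance 1/alpha - delta/beta = delta + 1/beta - 1, and its limit beta -> alpha.
def delta(alpha, beta):
    return (1.0 / alpha + 2.0) / (1.0 / beta + 1.0) - 1.0

alpha = 1.5
# delta(beta) solves the balance equation for every beta in (1, alpha) ...
for beta in (1.1, 1.3, 1.49):
    d = delta(alpha, beta)
    assert abs((1 / alpha - d / beta) - (d + 1 / beta - 1)) < 1e-12
# ... and tends to alpha/(alpha + 1) as beta -> alpha.
assert abs(delta(alpha, alpha) - alpha / (alpha + 1.0)) < 1e-12
```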
From the above we conclude that the optimal rate of convergence is obtained when β is as close to α as possible, and for each δ1<α/(α+1) we can then choose C>0 so that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equbb_HTML.gif
Taking into account the above and (5.26) we conclude (5.23). □

5.3 Convergence Rates in the Central Limit Theorem Regime

We maintain the assumptions made about the chain in the previous section. This time, however, instead of Condition 5.4 we assume the following.

Condition 5.7

Ψ=0 andΨL2(π).

We define the martingale approximation using (5.7) from the previous section. Decomposition (5.10) remains in force. Instead of (5.8) we then have the inequality
$$ \bigl({\mathbb{E}}\vert M_N\vert ^{2} \bigr)^{1/2}\leq CN^{1/2},\quad\forall N\ge1, $$
(5.29)
where C>0 is independent of N≥1. Let
$$\sigma^2:= {\mathbb{E}}M_1^2\quad\mbox{and} \quad \psi(p):=\sigma^2 p^2/2 . $$
Define the partial sum process \(Z^{(N)}_{t}\) by (5.5) with α replaced by 2. Our main result concerning the convergence of characteristic functions reads as follows.
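Before stating it, note that on a finite state space σ² can be computed exactly. The sketch below (an arbitrary 3-state chain, not from the paper) checks the direct expression for E M_1² in stationarity against the standard formula ∫χ²dπ−∫(Pχ)²dπ, which follows from \({\mathbb{E}}[\chi(\xi_1)\mid\xi_0]=P\chi(\xi_0)\).

```python
# Illustration: for a finite chain the limiting variance sigma^2 = E M_1^2
# of Theorem 5.8 can be computed exactly from the Poisson-equation solution chi
# and agrees with the formula int chi^2 dpi - int (P chi)^2 dpi.
# (Toy 3-state chain, not from the paper.)
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
pi = [1 / 3.0] * 3
for _ in range(300):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

Psi = [1.0, -0.5, 0.0]
m = sum(pi[i] * Psi[i] for i in range(3))
Psi = [x - m for x in Psi]            # centering, cf. Condition 5.7

# chi = sum_n P^n Psi solves the Poisson equation (5.6).
chi, g = list(Psi), list(Psi)
for _ in range(300):
    g = [sum(P[i][j] * g[j] for j in range(3)) for i in range(3)]
    chi = [chi[i] + g[i] for i in range(3)]
Pchi = [sum(P[i][j] * chi[j] for j in range(3)) for i in range(3)]

# sigma^2 = E (chi(xi_1) - P chi(xi_0))^2 in stationarity:
sigma2_direct = sum(pi[i] * sum(P[i][j] * (chi[j] - Pchi[i]) ** 2 for j in range(3))
                    for i in range(3))
sigma2_formula = sum(pi[i] * chi[i] ** 2 for i in range(3)) \
               - sum(pi[i] * Pchi[i] ** 2 for i in range(3))
assert abs(sigma2_direct - sigma2_formula) < 1e-9
```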

Theorem 5.8

There existsC>0 such that
$$ \bigl \vert {\mathbb{E}}e^{ip Z^{(N)}_t}-e^{-t\psi(p)}\bigr \vert \le C \bigl(1+|p|\bigr)^4\frac{t+1}{N^{1/3}} $$
(5.30)
for allp∈ℝ, t≥0, N≥1.

Proof

Denote \(Z_{N,n}:=N^{-1/2}Z_n\), with Zn defined in (5.7), and \(M_{j,N}:=M_j/N^{1/2}\). Using this notation, as well as that of the previous section, we can write
$$\bigl \vert {\mathbb{E}}e^{ip Z^{(N)}_t}-{\mathbb{E}}e^{ip M^{(N)}_t}\bigr \vert \le\frac {C|p|}{N^{1/2}},\quad\forall t\ge0,\ N\ge1,\ p\in\mathbb{ R}. $$
It suffices therefore to prove
$$ \bigl \vert {\mathbb{E}}e^{ip M^{(N)}_t}-e^{-t\psi(p)}\bigr \vert \le C\bigl(1+|p|\bigr)^4\frac{t+1}{N^{1/3}}. $$
(5.31)
Let hp(x) be given by (5.16). We denote \(\bar{h}_{p}^{(N)}:=-(\sigma p)^{2}/(2N)\) and
$$W_{j}:=\exp\bigl\{(\sigma p)^2j/(2N)\bigr\} {\mathbb{E}} \bigl[ \exp\{ipM_{j,N}\}\bigr]. $$
Analogously to what has been done in the previous section we obtain
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equbf_HTML.gif
Adding up from j=0 up to [Nt]−1 and then dividing both sides of the obtained equality by exp{(σp)2[Nt]/(2N)} we conclude that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ112_HTML.gif
(5.32)
where
$$e_{N,j}:=\exp\bigl\{(\sigma p)^2\bigl(j+1-[Nt]\bigr)/(2N) \bigr\}e^{ipM_{j}/N^{1/2} }. $$
Denote the absolute values of the terms appearing on the right hand side by I and II, respectively. We can easily estimate
$$\mathit{II}\le C\frac{tp^4}{N} $$
for some constant C>0 and all t≥0, N≥1 and p∈ℝ.
To estimate I we invoke the block argument from the previous section. Fix K≥1 and divide the set ΛN={0,…,[Nt]−1} into ℓ=[[Nt]/K]+1 contiguous subintervals, of size K except the last one, of size K′≤K. For simplicity's sake we assume that all intervals have the same size and maintain the notation introduced in the previous section. Estimate (5.27) remains in force with the obvious adjustments needed for α=2. Repeating the argument leading to (5.28), with (5.29) used in place of (5.8), we conclude that
$$ |I_1|\le C p^2(t+1) \biggl( \frac{pK^{1/2}}{N^{1/2}}+\frac {p^2K}{N} \biggr). $$
(5.33)
On the other hand,
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equbi_HTML.gif
Choosing K=N^{1/3} in the above estimates we conclude (5.31). □

5.4 Convergence Rates for Additive Functionals of Jump Processes

Assume that {τn,n≥0} are i.i.d. exponentially distributed random variables with \({\mathbb{E}}\tau_{0}=1\), independent of the Markov chain {ξn,n≥0} considered in the previous section. Suppose furthermore that V, θ are Borel measurable functions on E such that:

Condition 5.9

For some \(t_*>0\)
$$ \theta(w)\ge t_*, \quad \forall w\in E, $$
(5.34)
there exist \(C^*>0\) and α2>1 such that
$$ \pi(\theta>\lambda) \le\frac{C^*}{\lambda^{\alpha_2}},\quad \forall \lambda \ge1. $$
(5.35)

We shall also assume that either Ψ(w):=V(w)θ(w) satisfies Condition 5.4, or it satisfies Condition 5.7.

Let \(t_{n}:=\sum_{k=0}^{n-1}\theta(\xi_{k})\tau_{k}\) and let {Xt,t≥0} be the jump process given by \(X_{t}:=\xi_{n_{t}}\), where nt:=n for \(t_n\le t<t_{n+1}\). The process conditioned on ξ0=w shall be denoted by Xt(w). The corresponding chain shall be called the skeleton, while \(\theta^{-1}(w)\) is the jump rate at w. Note that the measure \(\tilde{\pi}(dw):=\bar{\theta}^{-1}\theta(w)\pi(dw)\) is invariant under the process. Here \(\bar{\theta}:={\mathbb{E}}\theta(\xi_{0})\).

We also define processes
$$ Y^{(N)}_t:=N^{-1/\beta}\int _0^{Nt}V(X_s)\,ds, $$
(5.36)
with β=α when Condition 5.4 holds, and β=2 in case Condition 5.7 is in place.
Observe that the integral above can be written as a random length sum formed over a Markov chain. More precisely, when (5.11) holds
$$ Y^{(N)}_t=\frac{1}{N^{1/\alpha}}\sum _{k=0}^{n_{Nt}-1}\tilde{\varPsi}(\tilde{\xi}_k)+R^{(N)}_t, $$
(5.37)
where
$$ R^{(N)}_t:=\frac{1}{N^{1/\alpha}}V( \xi_{n_{Nt}}) (Nt-t_{n_{Nt}}), $$
(5.38)
\(\{\tilde{\xi}_{n},n\ge0\}\) is an E×(0,+∞)-valued Markov chain given by \(\tilde{\xi}_{n}:=(\xi_{n},\tau_{n})\) and \(\tilde{\varPsi}(w,\tau ):=\varPsi(w)\tau\). Its invariant measure \(\tilde{\pi}\) is given by \(\tilde{\pi}(dw,d\tau):=e^{-\tau}\pi(dw)\,d\tau\) and the transition probability kernel equals \(\tilde{P}(w,\tau,dv,d\tau'):=p(w,v)e^{-\tau'}\pi(dv)\,d\tau'\). It is elementary to verify that this chain and the observable \(\tilde{\varPsi}\) satisfy Conditions 5.1–5.4 with the constants appearing in Condition 5.4 given by \(\tilde{c}_{*}^{\pm}:=\varGamma (\alpha +1) c_{*}^{\pm}\), where Γ(⋅) is the Euler gamma function. In case Condition 5.7 holds we can still write (5.37) and (5.38) with α=2.
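The time change can be illustrated by a short simulation (arbitrary toy data, not from the paper): the long-time occupation fractions of Xt approach \(\tilde{\pi}(w)\propto\theta(w)\pi(w)\) rather than π itself.

```python
import random

random.seed(11)

# Toy skeleton chain and mean holding times theta (arbitrary, not from the paper).
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
theta = [1.0, 2.0, 0.5]

pi = [1 / 3.0] * 3
for _ in range(300):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

def step(i):
    u, c = random.random(), 0.0
    for j in range(3):
        c += P[i][j]
        if u < c:
            return j
    return 2

# Simulate X_t = xi_{n_t}: state xi_n is held for time theta(xi_n) * tau_n.
occupation, state = [0.0, 0.0, 0.0], 0
for _ in range(200000):
    occupation[state] += theta[state] * random.expovariate(1.0)
    state = step(state)

total = sum(occupation)
bar_theta = sum(theta[i] * pi[i] for i in range(3))
for i in range(3):
    tilde_pi = theta[i] * pi[i] / bar_theta      # the claimed invariant law
    assert abs(occupation[i] / total - tilde_pi) < 0.02
```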
Consider the stable process {Zt,t≥0} whose Levy exponent is given by (5.12) with c(λ) replaced by
$$ \tilde{c}_*(\lambda):= \left\{ \begin{array}{l@{\quad}l} \bar{\theta}^{-\alpha}\varGamma(\alpha+1)c_*^-,&\mbox{when }\lambda <0,\\ [3pt] \bar{\theta}^{-\alpha}\varGamma(\alpha+1)c_*^+,&\mbox{when }\lambda>0. \end{array} \right. $$
(5.39)
Let {Bt,t≥0} be a zero mean Brownian motion whose variance equals
$$ \hat{c}^2:=2\bar{\theta}^{-2} \sigma^2. $$
(5.40)
The aim of this section is to prove the following.

Theorem 5.10

In addition to the assumptions made in Sect. 5.1, suppose that Condition 5.9 holds. Then, for any
$$ 0<\delta<\delta_*:=\min\bigl[\alpha/(\alpha+1),( \alpha_2-1)/(\alpha \alpha_2+1)\bigr] $$
(5.41)
there existsC>0 such that
$$ \bigl \vert {\mathbb{E}}\exp \bigl\{i p Y^{(N)}_t \bigr\}-{\mathbb{E}}\exp \{ip Z_t \} \bigr \vert \le C\bigl(|p|+1\bigr)^5(t+1) \biggl(\frac{1}{N^{\alpha_1/\alpha }}+\frac {1}{N^{\delta}} \biggr), $$
(5.42)
for allp∈ℝ, t≥0, N≥1.
If, on the other hand, the assumptions made in Sect. 5.3 and Condition 5.9 hold, then for any
$$ 0<\delta<\delta_*:=(\alpha_2-1)/(1+2 \alpha_2) $$
(5.43)
there existsC>0 such that
$$ \bigl \vert {\mathbb{E}}\exp \bigl\{i p Y^{(N)}_t \bigr\}-{\mathbb{E}}\exp \{ip B_t \} \bigr \vert \le C\bigl(|p|+1\bigr)^4(t+1) \biggl(\frac{1}{N^{1/3}}+\frac {1}{N^{\delta }} \biggr) $$
(5.44)
for allp∈ℝ, t≥0, N≥1.

The proof of this result is carried out below. It relies on Theorem 5.5. The principal difficulty is that the definition of \(Y^{(N)}_{t}\) involves a random sum instead of a deterministic one, as in the previous section. This is resolved by replacing nNt, appearing in the upper limit of the sum in (5.37), by its deterministic counterpart \([\bar{N}t]\), where \(\bar{N}:=N/\bar{\theta}\); this can be done with large probability, according to the lemma formulated below.

5.4.1 From a Random to Deterministic Sum

Lemma 5.11

Suppose that κ>0. Then, for any δ∈(0,α2κ) there exists C>0 such that
$$ {\mathbb{P}} \bigl[\bigl \vert n_{ N t}-[\bar{N} t]\bigr \vert \geq N^{\kappa+1/\alpha_2} \bigr]\leq\frac{C(t+1)}{N^\delta},\quad \forall N\ge1, \ t\ge0. $$
(5.45)

Proof

Denote
$$ \begin{aligned}[c] & A_N^+:= \bigl[n_{[ N t]}-[\bar{N} t]\geq N^{1/\alpha_2+\kappa} \bigr], \\ & A_N^-:= \bigl[n_{[ N t]}-[\bar{N} t]\leq-N^{1/\alpha_2+\kappa} \bigr],\quad A_N:=A_N^+\cup A_N^-. \end{aligned} $$
(5.46)
Let κ′∈(0,κ) be arbitrary and
$$C_N:= \Biggl[\sum_{n=[\bar{N} t]}^{[\bar{N} t]+N^{1/\alpha_2+\kappa }-1} \tau_n\geq N^{1/\alpha_2+\kappa'} \Biggr]. $$
We adopt the convention of summing up to the largest integer smaller than or equal to the upper limit of summation. Note that on \(A_{N}^{+}\) we have
$$[ N t]-t_{[\bar{N} t]}\geq t_{n_{[N t]}}-t_{[\bar{N} t]}\geq t_*\sum _{n=[\bar{N} t]}^{[\bar{N} t]+N^{1/\alpha_2+\kappa}-1}\tau_n. $$
Furthermore
$$r_N:=\frac{1}{N^{1/\alpha_2}}\biggl \vert \frac{[Nt]}{\bar{\theta}}-[\bar{N} t]\biggr \vert \le\frac{1}{N^{1/\alpha_2}} \biggl(\frac{1}{\bar{\theta}}+1 \biggr). $$
Hence,
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equbm_HTML.gif
To estimate the probability appearing on the rightmost side we apply Lemma 5.3 to the Markov chain \(\{\tilde{\xi}_{n},n\ge0\}\) and \(\varPsi(w,\tau):=\theta(w)\tau-\bar{\theta}\). We conclude therefore that for any \(\delta\in(0,\alpha_{2}\kappa')\) there exists C>0 such that
$$ {\mathbb{P}} \bigl[A_N^+\cap C_N \bigr] \leq\frac{C(t+1)}{N^{\delta }},\quad\forall t\ge 0,\ N\ge1. $$
(5.47)
Since \({\mathbb{E}}\tau_{0}=1\) and κ′∈(0,κ), for any x∈(0,1) we can find C>0 such that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ128_HTML.gif
(5.48)
where I(x):=−(1−x−ln x). The last inequality follows from Cramér's large deviations estimate, see e.g. Theorem 2.2.3 of [7]. Using this and (5.47) we get
$${\mathbb{P}} \bigl[A_N^+ \bigr]\leq{\mathbb{P}} \bigl[A_N^+ \cap C_N \bigr]+{\mathbb{P}} \bigl[C_N^c \bigr]\leq\frac {C(t+1)}{N^\delta}. $$
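The bound on \({\mathbb{P}}[C_{N}^{c}]\) used here can be sketched as follows, writing \(n_{*}:=N^{1/\alpha_{2}+\kappa}\) for the number of terms in the sum defining CN (a heuristic outline, with constants suppressed):

```latex
% Since \kappa'<\kappa, the threshold satisfies
% N^{1/\alpha_2+\kappa'} = n_* N^{\kappa'-\kappa} = o(n_*),
% so for any fixed x\in(0,1) and all N large enough
{\mathbb{P}}\bigl[C_N^c\bigr]
  \le {\mathbb{P}}\Biggl[\frac{1}{n_*}
      \sum_{n=[\bar N t]}^{[\bar N t]+n_*-1}\tau_n \le x\Biggr],
% which, since {\mathbb{E}}\tau_0=1>x, decays exponentially in n_*
% by the Cram\'er estimate, hence faster than any power of N.
```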
The probability \({\mathbb{P}}[A_{N}^{-}]\) can be estimated in a similar way. Instead of CN we consider the event
$$\tilde{C}_N:= \Biggl[\,\sum_{n=[\bar{N} t]-N^{1/\alpha_2+\kappa }+1}^{[\bar{N} t]} \tau_n\geq N^{1/\alpha_2+\kappa'} \Biggr] $$
and carry out similar estimates to the ones done before. □

5.4.2 Proof of (5.42)

Choose any κ>0. We can write
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ129_HTML.gif
(5.49)
with AN defined in (5.46). The last term on the right hand side can be estimated by the expression appearing on the right hand side of (5.42), by virtue of Theorem 5.5.
The first term on the right hand side can be estimated by \(C(t+1)N^{-\delta}\) for some \(\delta\in(0,\alpha_{2}\kappa)\) and C>0. The second term is less than or equal to
$$|p|({\mathbb{E}}I_N+{\mathbb{E}}J_N), $$
where
$$ I_N:=N^{-1/\alpha}\max_{m\in[[\bar{N} t]-N^{1/\alpha_2+\kappa },[\bar{N} t]+N^{1/\alpha _2+\kappa}]\cap{\mathbb{Z}}}\Biggl \vert \sum_{k=m}^{[\bar{N} t]}\tilde{\varPsi}( \tilde{\xi}_k)\Biggr \vert , $$
(5.50)
and
$$ J_N:=N^{-1/\alpha}\max_{m\in[[\bar{N} t]-N^{1/\alpha_2+\kappa },[\bar{N} t]+N^{1/\alpha _2+\kappa}]\cap{\mathbb{Z}}}\bigl \vert \tilde{\varPsi}(\tilde{\xi}_m)\bigr \vert . $$
(5.51)

Lemma 5.12

Suppose that \(\kappa\in(0,1-1/\alpha_{2})\). Then, for any \(\delta\in (0,\alpha^{-1}(1- \kappa-\alpha^{-1}_{2}))\) there exists C>0 such that
$$ {\mathbb{E}}I_N\le\frac{C}{N^{\delta}} $$
(5.52)
and
$$ {\mathbb{E}}J_N\le\frac{C}{N^{\delta}},\quad\forall N\ge1. $$
(5.53)

Proof

First we prove (5.53). Since \(\tilde{\varPsi}(\tilde{\xi}_{0})\) is \(L^{\beta}\)-integrable, we can write, for any β∈(1,α),
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ134_HTML.gif
(5.54)
Choosing β sufficiently close to α we conclude (5.53).
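The exponent can be tracked explicitly: the maximum in (5.51) runs over at most \(2N^{1/\alpha_{2}+\kappa}+1\) sites, so a crude union-type bound, consistent with (5.54) and using stationarity, gives (a sketch):

```latex
{\mathbb{E}}J_N
  \le N^{-1/\alpha}\bigl\{{\mathbb{E}}\max_m
        \bigl|\tilde\varPsi(\tilde\xi_m)\bigr|^\beta\bigr\}^{1/\beta}
  \le N^{-1/\alpha}\bigl\{\bigl(2N^{1/\alpha_2+\kappa}+1\bigr)\,
        {\mathbb{E}}\bigl|\tilde\varPsi(\tilde\xi_0)\bigr|^\beta\bigr\}^{1/\beta}
  \le C\,N^{(1/\alpha_2+\kappa)/\beta-1/\alpha}.
```

As \(\beta\uparrow\alpha\) the exponent tends to \(-\alpha^{-1}(1-\kappa-\alpha_{2}^{-1})<0\), which explains the admissible range of δ in the statement of the lemma.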
Now we prove (5.52). Again we can use the martingale decomposition
$$\sum_{n=0}^{m-1}\tilde{\varPsi}(\tilde{\xi}_n)=\tilde{\chi}(\tilde{\xi}_0)-\tilde{P}\tilde{\chi}( \tilde{\xi}_{m-1})+M_m, $$
where
$$M_m:=\sum_{n=1}^{m-1} \bigl[ \tilde{\chi}(\tilde{\xi}_{n})-\tilde{P}\tilde{\chi}(\tilde{\xi}_{n-1}) \bigr], $$
and \(\tilde{\chi}(\cdot)\) is the unique, zero-mean solution of \(\tilde{\chi}-\tilde{P}\tilde{\chi}=\tilde{\varPsi}\), with \(\tilde{P}\) the transition operator of the chain \(\{\tilde{\xi}_{n}, n\ge0\}\). Using stationarity we can bound
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equbs_HTML.gif
Denote the terms on the right hand side by \(I_{N}^{(i)}\), i=1,2,3, respectively. One can easily estimate \(I_{N}^{(2)}\le CN^{-1/\alpha}\). Also, to bound \(I_{N}^{(3)}\) we can repeat the estimates made in (5.54), since \(\tilde{\chi}\) is also \(L^{\beta}\)-integrable, and obtain
$$I_N^{(3)}\le CN^{(1/\alpha_2+\kappa)/\beta-1/\alpha} $$
for some C>0. Finally, to deal with \(I_{N}^{(1)}\) observe that, by Doob's inequality, for β∈(1,α)
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equbu_HTML.gif
We use again (5.8) and conclude that
$$\bigl\{{\mathbb{E}}\vert M_{2N^{1/\alpha_2+\kappa}} \vert ^\beta \bigr \}^{1/\beta}\le CN^{(1/\alpha_2+\kappa)/\beta}. $$
Summarizing, from the above estimates we get
$$I_N^{(1)}\le CN^{-\delta}, $$
where δ is as in the statement of the lemma. □
Gathering the above results, we have shown that for any \(\kappa\in (0,1-\alpha^{-1}_{2})\), \(\delta_{1}\in(0,\alpha_{2}\kappa)\) and \(\delta_{2}\in(0,\alpha^{-1}(1- \kappa-\alpha^{-1}_{2}))\) we have
$$\bigl \vert {\mathbb{E}}\exp \bigl\{i p Y^{(N)}_t \bigr\}-{ \mathbb{E}}\exp \bigl\{ip Z_t^{(\bar{N})} \bigr\}\bigr \vert \le C \biggl(\frac{t+1}{N^{\delta_1}}+\frac {|p|}{N^{\delta_2}} \biggr). $$
Choosing κ sufficiently close to \((1-\alpha^{-1}_{2})/(\alpha \alpha_{2}+1)\) we obtain that for any \(\delta\in(0,(\alpha_{2}-1)(\alpha\alpha_{2}+1)^{-1})\) we can find a constant C>0 so that
$$ \bigl \vert {\mathbb{E}}\exp \bigl\{i p Y^{(N)}_t \bigr\}-{\mathbb{E}}\exp \bigl\{ip Z_t^{(\bar{N})} \bigr\}\bigr \vert \le\frac{C}{N^{\delta}}(t+1) \bigl(|p|+1\bigr). $$
(5.55)
Thus, we conclude the proof of (5.42).
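The threshold value of κ used above comes from balancing the two exponents in the estimate preceding (5.55); equating δ1 and δ2 at their upper limits:

```latex
\alpha_2\kappa \;=\; \alpha^{-1}\bigl(1-\kappa-\alpha_2^{-1}\bigr)
\;\Longleftrightarrow\;
\kappa(\alpha\alpha_2+1) \;=\; 1-\alpha_2^{-1}
\;\Longleftrightarrow\;
\kappa_* \;=\; \frac{1-\alpha_2^{-1}}{\alpha\alpha_2+1},
\qquad
\alpha_2\kappa_* \;=\; \frac{\alpha_2-1}{\alpha\alpha_2+1},
```

which is precisely the upper limit of the range of δ appearing in (5.55).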

5.4.3 Proof of (5.44)

In this case we can still write inequality (5.49). With the help of Lemma 5.11, for any κ>0 and δ∈(0,α2κ) we can find C>0 such that
$$ \bigl \vert {\mathbb{E}}\exp \bigl\{i p Y^{(N)}_t \bigr\}-{\mathbb{E}}\exp \bigl\{ip Z_t^{(\bar{N})} \bigr\}\bigr \vert \le\frac{C(t+1)}{N^{\delta}}+ |p|({\mathbb{E}}I_N+{\mathbb{E}}J_N), $$
(5.56)
where IN, JN are defined by (5.50) and (5.51) respectively, with α=2. Repeating the estimates made in the previous section we obtain that
$${\mathbb{E}}I_N+{\mathbb{E}}J_N\le CN^{(\kappa-1+\alpha^{-1}_2)/2} $$
for some C>0. Using the above estimates and (5.30) we conclude (5.44).

6 Proofs of Theorems 3.3 and 3.6

6.1 Proof of Theorem 3.3

Let \(N:=\epsilon^{-3\gamma/2}\) and \(J\in{\mathcal{A}}\) be a real-valued function. Define
$$ W_N(t,p,k):={\mathbb{E}} \biggl[W \bigl(p,K_{Nt}(k) \bigr)\exp \biggl\{ -i pN^{-2/3} \int _0^{Nt}\omega' \bigl(K_s(k)\bigr)\,ds \biggr\} \biggr], $$
(6.1)
where {Kt(k),t≥0} is the Markov jump process starting at k, introduced in Sect. 2.6. It can be easily verified that the Lebesgue measure on the torus is invariant and reversible for the process; we denote the respective stationary process by {Kt,t≥0}. Its generator \({\mathcal{L}}\) is a symmetric operator on \(L^{2}(\mathbb{ T})\) given by
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equbz_HTML.gif
Here
$$\mathfrak{e}_1(k):=\frac{8}{3}\sin^4(\pi k),\qquad \mathfrak {e}_{-1}(k):=2\sin^2(2\pi k). $$
We also let
$$\mathfrak{r}(k):=\mathfrak{e}_{-1}(k)+\mathfrak{e}_1(k). $$
Note that
$$\int_{\mathbb{ T}}\mathfrak{e}_1(k)\,dk=\int _{\mathbb{ T}}\mathfrak{e}_{-1}(k)\,dk=1 $$
and
$$R(k)=\frac{3}{4}\mathfrak{r}(k)=2\sin^2(\pi k) \bigl[1+2 \cos^2(\pi k) \bigr]. $$
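The normalization of the rates and the identity for R(k) displayed above are elementary trigonometric facts; they can also be checked numerically. The sketch below is ours (plain NumPy; the function names are not from the paper) and verifies \(\int_{\mathbb{T}}\mathfrak{e}_{\pm1}\,dk=1\) and \(R=(3/4)\mathfrak{r}\) on a midpoint grid.

```python
import numpy as np

# Scattering rates of the model, as given in the text.
def e1(k):
    return (8.0 / 3.0) * np.sin(np.pi * k) ** 4

def em1(k):
    return 2.0 * np.sin(2.0 * np.pi * k) ** 2

def R(k):
    # Total jump rate R(k) = 2 sin^2(pi k) [1 + 2 cos^2(pi k)].
    return 2.0 * np.sin(np.pi * k) ** 2 * (1.0 + 2.0 * np.cos(np.pi * k) ** 2)

n = 200000
k = (np.arange(n) + 0.5) / n - 0.5  # midpoint grid on the torus T = [-1/2, 1/2)

# Midpoint rule: the grid mean approximates the integral over T (dk = 1/n).
int_e1 = e1(k).mean()
int_em1 = em1(k).mean()

# Pointwise identity R(k) = (3/4) (e_1(k) + e_{-1}(k)).
max_err = np.max(np.abs(R(k) - 0.75 * (e1(k) + em1(k))))
```

Both integrals come out equal to 1 up to discretization error, and the pointwise identity holds to machine precision.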
The process Kt(k) is a jump Markov process of the type considered in Sect. 2.6. The mean jump time and the transition probability operator of the skeleton Markov chain {ξn,n≥0} are given by \(\theta(k)=R^{-1}(k)\) and
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equce_HTML.gif
respectively. The probability measure
$$\pi(dk)=(1/2)\mathfrak{r}(k)\,dk=(2/3)R(k)\,dk $$
is reversible under the dynamics of the chain. It is clear that Condition 5.2 holds. It has been shown in Sect. 3 of [10] that Condition 5.1 is satisfied.

Let {Qt,t≥0} be the semigroup corresponding to the generator \({\mathcal{L}}\). It can easily be argued that Qt is a contraction on \(L^{p}(\mathbb{ T})\) for any p∈[1,+∞]. We shall need the following estimate.

Theorem 6.1

For a given a∈(0,1] there exists C>0 such that
$$ \|Q_{t}f\|_{L^1(\mathbb{ T})}\le\frac{C}{(1+t)^a}\|f \|_{{\mathcal{B}}_a},\quad \forall t\ge0, $$
(6.2)
for all \(f\in{\mathcal{B}}_{a}\) such that \(\int_{\mathbb{ T}} f\,dk=0\).
The proof of this result shall be presented in Sect. 6.3. We proceed first with its application in the proof of Theorem 3.3. The additive functional appearing in (6.1) shall be denoted by
$$Y^{(N)}_t(k):=\frac{1}{N^{2/3}} \int_0^{Nt} \omega'\bigl(K_s(k)\bigr)\,ds, $$
or by \(Y^{(N)}_{t}\) in case it corresponds to the stationary process Kt. It is of the type considered in Sect. 5.4 with α=3/2 and
$$ \varPsi(k):=\omega'(k)\theta(k). $$
(6.3)
Since the dispersion relation satisfies
$$ \omega(k)=|k| \biggl[\frac{\hat{\alpha}''(0)}{2}+O\bigl(k^2 \bigr) \biggr]^{1/2}\quad\mbox{for }k\ll1, $$
(6.4)
we have
$$\omega'(k)=\operatorname{sgn} k \biggl[\frac{\hat{\alpha}''(0)}{2}+O \bigl(k^2\bigr) \biggr]^{1/2}\quad\mbox{for }k\ll1 $$
and
$$\biggl \vert \pi(\varPsi>\lambda)-\frac{c_*^+}{\lambda^{3/2}}\biggr \vert \le \frac{C^*}{\lambda^2},\quad\forall\lambda\ge1, $$
with
$$c_*^+:=2^{-1/4}3^{-5/2}\pi^{1/2}\bigl[\hat{\alpha}''(0)\bigr]^{3/4} $$
and some \(C^{*}>0\). Condition 5.4 is therefore satisfied with α=3/2 and an arbitrary α1<1/2. Since Ψ(k) is odd we have \(c_{*}^{-}=c_{*}^{+}\). On the other hand, the mean jump time θ(k) satisfies (5.35) with α2=3/2.
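The exponent 3/2 in the tail can be read off from the small-k behavior of Ψ; schematically (constants suppressed, only the scaling is claimed here):

```latex
% Near k=0 one has \omega'(k)\to\mathrm{sgn}\,k\,[\hat\alpha''(0)/2]^{1/2}
% and R(k)=2\sin^2(\pi k)[1+2\cos^2(\pi k)]\approx 6\pi^2 k^2, hence
\varPsi(k)=\omega'(k)R^{-1}(k)\sim \frac{c\,\mathrm{sgn}\,k}{k^{2}},
\qquad
\pi(\varPsi>\lambda)\approx\int_0^{c'\lambda^{-1/2}}\tfrac{2}{3}R(k)\,dk
\sim \lambda^{-3/2},
```

since the reversible measure π(dk)=(2/3)R(k)dk assigns mass of order \(k_{\lambda}^{3}\) to the interval \((0,k_{\lambda})\) with \(k_{\lambda}\sim\lambda^{-1/2}\).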
We can apply to \(Y^{(N)}_{t}\) the conclusion of part (1) of Theorem 5.10. In this case \(\tilde{c}_{*}(\lambda)\equiv\hat{c}\), where
$$\hat{c}=\frac{3}{2^{1/2}}\bar{\theta}^{-3/2}\varGamma \biggl(\frac{5}{2} \biggr)c_*^+\int_0^{+\infty}\frac{\sin^2 x}{x^{5/2}}\,dx. $$
Since the integral on the right hand side equals \(4\sqrt{\pi}/3\) we obtain (3.5). Let Zt be the corresponding symmetric 3/2-stable process. From (3.9) we obtain \(\overline{W}(t,p)=\overline{W}(p){\mathbb{E}}e^{-ip Z_{t}}\). Therefore, for any \(J(\cdot)\in{\mathcal{S}}\)
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ141_HTML.gif
(6.5)
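The value \(4\sqrt{\pi}/3\) of the integral appearing in the formula for \(\hat{c}\) can be checked by two integrations by parts, together with the classical integral \(\int_0^\infty \cos u\,u^{-1/2}\,du=\sqrt{\pi/2}\):

```latex
\int_0^{+\infty}\frac{\sin^2x}{x^{5/2}}\,dx
=\frac{2}{3}\int_0^{+\infty}\frac{\sin 2x}{x^{3/2}}\,dx
=\frac{2\sqrt{2}}{3}\int_0^{+\infty}\frac{\sin u}{u^{3/2}}\,du
=\frac{4\sqrt{2}}{3}\int_0^{+\infty}\frac{\cos u}{u^{1/2}}\,du
=\frac{4\sqrt{\pi}}{3}.
```

The boundary terms vanish in each step since sin²x∼x² and sin u∼u near the origin.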
Let 1>β>1/3. The left hand side of (6.5) is estimated by E1+E2+E3, where
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equcl_HTML.gif
and \(\widetilde{W}_{p}(k):= W(p,k)-\overline{W}(p) \). The first term can be estimated as follows
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ142_HTML.gif
(6.6)
To estimate the second term note that by the Markov property we can write
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ143_HTML.gif
(6.7)
Term E2 can therefore be estimated by
$$\sup_{p\in\mathbb{ R}}\|Q_{N^{1-\beta}t} \widetilde{W}_{p} \|_{L^1( \mathbb{T})}\|J\|_{{\mathcal{A}}'}. $$
Invoking Theorem 6.1 we obtain
$$\sup_{p\in\mathbb{ R}}\|Q_{N^{1-\beta}t} \widetilde{W}_{p} \|_{L^1(\mathbb{T})}\le \frac{C}{N^{a(1-\beta)}}\|W\|_{{\mathcal{B}}_a}, \quad\forall t\ge 1,\ N\ge1. $$
As a result we conclude immediately that
$$ E_2\le\frac{C}{N^{a(1-\beta)}}\|W\|_{{\mathcal{B}}_a}\|J \|_{{\mathcal{A}}'}. $$
(6.8)
To deal with E3 note that by reversibility of the process Kt it equals
$$E_3=\biggl \vert \int_{\mathbb{ R}} \overline{W}(p){ \mathbb{E}} \bigl[J^*(p,K_{t(1-N^{-\beta })})e^{- i p Y^{(N)}_{t(1-N^{-\beta})}}-\bar{J}^*(p) e^{- i p Z_t} \bigr]\,dp\biggr \vert $$
where \(\bar{J}(p):=\int_{\mathbb{ T}}J(p,k)\,dk\). We obtain that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equcp_HTML.gif
From this point on we handle this term similarly to what has been done before and obtain that
$$ E_{31}\le\frac{Ct}{N^{\beta-1/3}} \|J\|_{{\mathcal{A}}_1'}\| W\|_{\mathcal{A}} $$
(6.9)
and
$$ E_{32}\le\frac{C}{N^{(1-\beta)a}}\|W\|_{{\mathcal{A}}}\|J \|_{{\mathcal{B}}_{a,b}}, $$
(6.10)
provided b>1.
Term E33 can be handled with the help of Theorem 5.10 with α=α2=3/2 and an arbitrary α1<1/2. Therefore, for any δ<2/13 we can find a constant C>0 such that
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ147_HTML.gif
(6.11)
We have therefore reached the conclusion of Theorem 3.3, with the exponent γ′ as indicated in (3.2). □

6.2 Proof of Theorem 3.6

The respective additive functional in this case equals
$$Y^{(N)}_t(k):=\frac{1}{N^{1/2}} \int_0^{Nt} \omega'\bigl(K_s(k)\bigr)\,ds, $$
where \(N:=\epsilon^{-2\gamma}\). The observable Ψ is given by (6.3). From (6.4) we get
$$\omega'(k)=\frac{\hat{\alpha}''(0)k}{2} \biggl[\hat{\alpha}(0)+ \frac {\hat{\alpha}''(0)}{2}k^2 \bigl(1+O\bigl(k^2\bigr) \bigr) \biggr]^{-1/2}\quad\mbox{for }k\ll1 $$
and the asymptotics of the tails of Ψ(k) is given by
$$\pi\bigl(|\varPsi|>\lambda\bigr)\le\frac{C^*}{\lambda^{3}} $$
for some \(C^{*}>0\) and all λ≥1. The observable therefore belongs to L2(π) and, since it is odd, its mean is 0. Conditions 5.7 and 5.9 are therefore fulfilled (the latter with α2=3). We also have
$$ \hat{c}:= 9\sigma^2, $$
(6.12)
where
$$ \sigma^2=\int_{\mathbb{ T}}\bigl[ \chi^2(k)-(P\chi)^2(k) \bigr]\pi(dk) $$
(6.13)
and χ is the unique zero-mean solution of the equation
$$ \chi-P\chi=\varPsi. $$
(6.14)
Since k′↦R(k,k′) is an even function for any fixed k and Ψ is odd, we conclude easily that PΨ=0, thus
$$ \chi=\varPsi. $$
(6.15)
The result is then a consequence of the argument made in Sect. 6.1 and Theorems 5.10 and 6.1. From (6.12), (6.13) and (6.15) we conclude (3.13).

6.3 Proof of Theorem 6.1

Denoting ft:=Qtf, we can write, using Duhamel's formula,
$$ f_t=S_tf+\frac{3}{4}\sum _{\iota\in\{-1,1\}} \int_0^tS_{t-s} \mathfrak{e}_{-\iota}(k)\langle\mathfrak{e}_\iota,f_s\rangle\, ds, $$
(6.16)
where
$$S_tf(k):=e^{-R(k)t}f(k). $$
Let \(\mathbb{H}:=[\lambda\in\mathbb{C}: \operatorname{Re}\lambda>0]\),
$$ \begin{aligned}[c] \hat{f}(\lambda)&:=(\lambda-{\mathcal{L}})^{-1}f=\int _0^{+\infty }e^{-\lambda s}f_s\, ds, \quad\lambda\in\mathbb{H}, \\ \hat{\mathfrak{ f}}_{0}(\lambda)&:=\frac{f}{\mathfrak{r}+\lambda} \end{aligned} $$
(6.17)
and \(\hat{\mathfrak{ f}}_{\iota}(\lambda):=\langle\hat{f}(\lambda),\mathfrak {e}_{\iota}\rangle\), ι=±1. Formula (6.17) extends to the resolvent set of the generator \({\mathcal{L}}\) in \(L^{2}(\mathbb{ T})\), which contains in particular ℂ∖[−M,0], with \(M:=(4/3)\|R\|_{\infty}+1\).
From (6.16) we obtain that
$$ \hat{f}(\lambda)=\frac{4}{3} \hat{\mathfrak{ f}}_{0} \biggl( \frac{4\lambda }{3} \biggr)+\sum_{\iota\in\{-1,1\}}\hat{\mathfrak{ f}}_{\iota} (\lambda )\frac {\mathfrak{ e}_{-\iota}}{\mathfrak{r}+4\lambda/3}. $$
(6.18)
The vector \(\hat{ \mathfrak{f}}^{T}(\lambda):=[\hat{ \mathfrak{f}}_{-1}(\lambda ),\hat{ \mathfrak{f}}_{1}(\lambda)]\) therefore satisfies
$$ \hat{ \mathfrak{f}}(\lambda)=\mathfrak{ A}^{-1} \biggl( \frac{4\lambda }{3} \biggr)\hat{ \mathfrak{g}} \biggl(\frac{4\lambda}{3} \biggr), $$
(6.19)
where
$$\mathfrak{ A}(\lambda)=\left[ \begin{array}{l@{\quad}l} a(\lambda)&a_{-1}(\lambda) \\ a_{1}(\lambda)& a(\lambda) \end{array} \right], $$
$$a(\lambda):=1-\int_{\mathbb{ T}}\frac{\mathfrak{ e}_{-1}(k) \mathfrak {e}_{1}(k)}{\lambda+\mathfrak{r}(k)}\,dk, $$
$$a_\iota(\lambda):=-\int_{\mathbb{ T}}\frac{\mathfrak{ e}_{\iota}^2(k)\, dk}{\lambda+\mathfrak{r}(k)}, $$
\(\hat{ \mathfrak{ g}}^{T}(\lambda):=[\hat{ \mathfrak{ g}}_{-1}(\lambda ),\hat{ \mathfrak{ g}}_{1}(\lambda)]\) and
$$\hat{ \mathfrak{g}}_{\iota}(\lambda)=\frac{4}{3}\int_{\mathbb{ T}} \frac{f(k) \mathfrak {e}_\iota(k)\,dk}{\lambda+\mathfrak{r}(k)}, \quad\iota=\pm1,\ \lambda\in \mathbb{C}\setminus[-M,0]. $$
Let \(\Delta(\lambda):=\det \mathfrak{ A}(\lambda) \) and
$$b_\iota(\lambda):=-\int_{\mathbb{ T}}\frac{\mathfrak{ e}_{\iota}(k)\, dk}{\lambda+\mathfrak{r}(k)}. $$
Observe that
$$ a(0)=-a_{-1}(0)=-a_{1}(0) $$
(6.20)
and
$$ \Delta(\lambda)=\lambda\bigl[\lambda b_{-1}( \lambda)b_1(\lambda )+b_{-1}(\lambda)a_1( \lambda)+a_{-1}(\lambda )b_1(\lambda)\bigr]. $$
(6.21)
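Both (6.20) and (6.21) follow from a single identity among the entries of \(\mathfrak{A}\). Since \(\mathfrak{e}_{\iota}^{2}=\mathfrak{e}_{\iota}\mathfrak{r}-\mathfrak{e}_{1}\mathfrak{e}_{-1}\) and \(\int_{\mathbb{T}}\mathfrak{e}_{\iota}\,dk=1\), we have (a short verification):

```latex
a_\iota(\lambda)
=-\int_{\mathbb{T}}
   \frac{\mathfrak{e}_\iota\mathfrak{r}-\mathfrak{e}_1\mathfrak{e}_{-1}}
        {\lambda+\mathfrak{r}}\,dk
=\bigl(-1-\lambda b_\iota(\lambda)\bigr)+\bigl(1-a(\lambda)\bigr)
=-a(\lambda)-\lambda b_\iota(\lambda),\qquad \iota=\pm1.
```

Setting λ=0 gives (6.20), while substituting into \(\Delta=a^{2}-a_{1}a_{-1}\) yields \(\Delta(\lambda)=-\lambda[a(\lambda)(b_{1}(\lambda)+b_{-1}(\lambda))+\lambda b_{1}(\lambda)b_{-1}(\lambda)]\), which coincides with (6.21) after substituting back \(a_{\iota}=-a-\lambda b_{\iota}\).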
From (6.20) we get that Δ(0)=0. Moreover, from (6.21) we can see that \(D(\lambda):=\Delta(\lambda)\lambda^{-1}\) is analytic in ℂ∖[−M,0]. In addition,
$$\lim_{\lambda\to0,\,\lambda\in\overline{\mathbb{H}}}D(\lambda ) =b_{-1}(0)a_1(0)+a_{-1}(0)b_1(0)>0. $$
Hence, there exist ϱ>0 and \(c_{*}>0\) such that
$$ \bigl|\Delta(\lambda)\bigr|\ge c_*|\lambda|, \quad\forall\lambda\in \overline{\mathbb{H}},\ |\lambda|\le\varrho. $$
(6.22)
It can be straightforwardly argued that ϱ can be further adjusted in such a way that (6.22) holds on the boundary \({\mathcal{C}}\) of the rectangle (−M,0)×(−ϱ,ϱ).
From (6.19) we obtain that
$$ \hat{ \mathfrak{f}}_{\iota}(\lambda)=\Delta^{-1} \biggl(\frac{4\lambda }{3} \biggr)\mathfrak{n}_{\iota} \biggl( \frac{4\lambda}{3} \biggr),\quad\iota=\pm1, $$
(6.23)
where
$$\mathfrak{n}_{-1}(\lambda) :=a(\lambda)\hat{ \mathfrak{g}}_{-1}( \lambda)- a_{-1}(\lambda)\hat{ \mathfrak{g}}_{1}(\lambda) $$
and
$$\mathfrak{n}_1(\lambda):= -a_{1}(\lambda)\hat{ \mathfrak{g}}_{-1}(\lambda )+a(\lambda)\hat{ \mathfrak{g}}_{1}(\lambda). $$
Using the fact that \(\int_{\mathbb{ T}}f dk=0\), after a straightforward calculation, we obtain that
$$ \mathfrak{n}_{\iota}(\lambda)=\lambda \bigl[a_{\iota}( \lambda)\hat{\mathfrak{{g}}}_0(\lambda )-b_{\iota}(\lambda)\hat{ \mathfrak{g}}_{\iota}(\lambda) \bigr],\quad \iota=\pm1, $$
(6.24)
where
$$\hat{\mathfrak{{g}}}_0(\lambda):=\int_{\mathbb{ T}}\hat{ \mathfrak {f}}_0(\lambda,k)\,dk. $$
We use the well-known formula, see e.g. Chap. VII.3.6 of [8],
$$Q_t f=\frac{1}{2\pi i}\int_{{\mathcal{K}}}e^{\lambda t}( \lambda -{\mathcal{L}})^{-1}f\,d\lambda, $$
where \({\mathcal{K}}\) is a contour enclosing the L2 spectrum of \({\mathcal{L}}\), which, as we recall, is contained in [−M,0]. It is easy to see that
$$ \bigl|\hat{ \mathfrak{f}}_{\iota}(\lambda)\bigr|\le C\|f \|_{{\mathcal{B}}_a}|\lambda |^{a-1},\quad\lambda\in\overline{\mathbb{H}}, \ |\lambda|\le\varrho,\ \iota =0,\pm1 $$
(6.25)
for an appropriate ϱ>0. This is clear for ι=0. From this and (6.22), (6.24) we conclude that (6.25) holds also for ι=±1. Therefore, we can use as \({\mathcal{K}}\) the boundary \({\mathcal{C}}\) of the rectangle mentioned after (6.22), oriented counter-clockwise. From (6.18) we get
$$ f_t=S_tf+\frac{1}{2\pi i}\sum _{\iota\in\{-1,1\}}\mathfrak{e}_{-\iota }\int _{{\mathcal{C}}}\frac{e^{\lambda t}\mathfrak{n}_{\iota} (4\lambda /3 )\,d\lambda }{(\mathfrak{r}+4\lambda/3)\Delta (4\lambda/3 )}. $$
(6.26)
Note that
$$\|S_tf\|_{L^1(\mathbb{ T})}=\int_{\mathbb{ T}}e^{-R(k)t}\bigl|f(k)\bigr|\,dk \le\frac {C}{t^a}\|f\|_{{\mathcal{B}}_a}, $$
where \(C:=\sup_{x\ge0}x^{a}e^{-x}\). Consider the term I(t) corresponding to the integral on the right hand side of (6.26). We can write \(I(t)=\sum_{i=1}^{4}I_{i}(t)\), where the Ii(t) correspond to the sides of the rectangle {−M}×(−ϱ,ϱ), [−M,0]×{−ϱ}, [−M,0]×{ϱ}, {0}×(−ϱ,ϱ), appropriately oriented. The estimates of Ii(t), i=1,2,3, are quite straightforward and lead to the bounds
$$ \bigl\|I_i(t)\bigr\|_{L^1(\mathbb{ T})}\le\frac{C}{t+1}\|f \|_{L^1(\mathbb{ T})},\quad i=1,2,3. $$
(6.27)
To deal with I4(t) observe that, thanks to (6.24), it equals \(I_{4,1}(t)+I_{4,2}(t)\), where
$$I_{4,j}(t):=\frac{1}{2\pi i}\sum_{\iota\in\{-1,1\}} \mathfrak {e}_{-\iota}\int_{-\varrho}^{\varrho}e^{i\nu t} \bigl(g_{\iota,j}D^{-1}\bigr) \biggl(\frac {4i\nu }{3} \biggr)\,d \nu,\quad j=1,2 $$
and
$$g_{\iota,1}(\nu)=\frac{a_{\iota} (i\nu )\hat{\mathfrak{ {g}}}_{0} (i\nu )}{\mathfrak{r}+i\nu}, \qquad g_{\iota,2}(\nu)= \frac {-b_{\iota } (i\nu )\hat{\mathfrak{ g}}_{\iota} (i\nu )}{\mathfrak{ r}+i\nu}. $$
The asymptotics of I4,1 for t≫1 is, up to a term of order \(\| f\|_{L^{1}(\mathbb{ T})}/t\), the same as
$$\tilde{I}_{4,1}(t):=\frac{1}{2\pi i}\sum _{\iota\in\{-1,1\}}\mathfrak {e}_{-\iota}\int_{\mathbb{ R}}e^{i\nu t}(Fg_{\iota,1}) (4i\nu /3 )\,d\nu $$
for some \(C^{\infty}\) function \(F(\cdot)\) supported in (−ϱ,ϱ) and equal to \(D^{-1}(i\nu)\) on (−ϱ/2,ϱ/2). Denoting
$$\mathfrak{ h}(\lambda):=\frac{\hat{\mathfrak{ g}}_0 (\lambda )}{\mathfrak{ r}+\lambda} $$
we can write
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equdj_HTML.gif
for sufficiently large t (the support of F is contained in (−ϱ,ϱ)). The first term on the rightmost side can be estimated by
$$\frac{C}{t}\int_{-2\varrho}^{2\varrho}\bigl \vert \hat{\mathfrak{ g}}_0 \bigl(4i(\nu +\pi/t)/3 \bigr)\bigr \vert \,d\nu\le \frac{C}{t}\int_{-2\varrho}^{2\varrho}\int _{\mathbb{ T}}\frac {|f|\,d\nu\, dk}{\mathfrak{ r}^a|\nu+\pi/t|^{1-a}}\,dk\le\frac{C}{t}\|f \|_{{\mathcal{B}}_a}. $$
As for the second term, it can be estimated by
https://static-content.springer.com/image/art%3A10.1007%2Fs10955-012-0528-4/MediaObjects/10955_2012_528_Equ164_HTML.gif
(6.28)
The first term is less than
$$ \frac{C}{t}\sum_{\iota\in\{-1,1\}}\int _{\mathbb{ T}}\frac{\mathfrak{ e}_{-\iota }\,dk}{\mathfrak{r}^{1+b}}\int_{\mathbb{ T}} \frac{|f|\,dk}{\mathfrak{r}^a} \int_{-2\rho}^{2\rho} \frac{ d\nu}{|\nu|^{1-b}|\nu+\pi/t|^{1-a}} $$
(6.29)
for some b∈(0,1/2). Since for any a,b>0 there exists C>0 such that
$$ \int_{-2\rho}^{2\rho} \frac{ d\nu}{|\nu|^{1-b}|\nu+x|^{1-a}}\le \frac {C}{x^{1-a-b}},\quad\forall x>0 $$
(6.30)
the expression in (6.29) can be estimated by \(C\|f\|_{{\mathcal{B}}_{a}}/t^{a+b}\). Finally, the second term in (6.28) can be estimated by
$$\frac{C}{t}\int_{\mathbb{ T}}\int_{-2\rho}^{2\rho} \frac{d\nu}{|\nu |^{1-a/2}|\nu +\pi/t|^{1-a/2}}\int_{\mathbb{ T}}\frac{|f|\,dk}{\mathfrak{r}^a}\le \frac {C}{t^a}\| f\|_{{\mathcal{B}}_a}, $$
by virtue of (6.30). Summarizing, we have shown that
$$\bigl\|\tilde{I}_{4,1}(t)\bigr\|_{L^1(\mathbb{ T})}\le\frac{C}{(t+1)^a}\|f \|_{{\mathcal{B}}_a} $$
for some C>0 and all t>0. The estimates for \(\|\tilde{I}_{4,2}(t)\|_{L^{1}(\mathbb{ T})}\) are quite analogous.
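The elementary bound (6.30), used twice above, can be obtained when a+b<1 by splitting the domain of integration according to which factor is small; a sketch:

```latex
% On \{|\nu|\le x/2\} one has |\nu+x|\ge x/2, so this part is at most
(x/2)^{a-1}\int_{|\nu|\le x/2}\frac{d\nu}{|\nu|^{1-b}} \le C\,x^{a+b-1};
% on \{|\nu+x|\le x/2\} one has |\nu|\ge x/2, giving symmetrically
(x/2)^{b-1}\int_{|\nu+x|\le x/2}\frac{d\nu}{|\nu+x|^{1-a}} \le C\,x^{a+b-1};
% on the remaining region both |\nu|,|\nu+x|\ge x/2, and |\nu+x|\ge|\nu|/2
% for |\nu|\ge 2x, so its contribution is at most
\int_{x/2\le|\nu|\le 2x}\Bigl(\frac{x}{2}\Bigr)^{a+b-2}d\nu
+C\int_{|\nu|\ge 2x}\frac{d\nu}{|\nu|^{2-a-b}} \le C\,x^{a+b-1}.
```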

Acknowledgements

Both authors wish to thank the anonymous referee for many valuable remarks that contributed to the improvement of the manuscript. In particular, we are grateful for pointing out relationship (6.15), which leads to (1.3). The support of the grant of the Polish Ministry of Science and Higher Education NN 201 419 139 is gratefully acknowledged by both authors. Ł.S. would also like to acknowledge the support of the European Social Fund.

Open Access

This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Copyright information

© The Author(s) 2012