1 Introduction

The Rosenblatt process was introduced and studied by Taqqu [33], motivated by a counterexample of Rosenblatt regarding a strong mixing condition [26]. See Taqqu [36] for the history of the process, an overview, and related work. The Rosenblatt process arises from convergence in distribution of normalized partial sums of strongly dependent random variables. It lives in the second Wiener chaos, in contrast to Gaussian processes (which belong to the first chaos), which are obtained from sums of independent or weakly dependent random variables. The Rosenblatt process possesses some of the main properties of the (Gaussian) fractional Brownian motion with Hurst parameter greater than \(1/2\): mean zero, Hölder continuity, non-differentiability, self-similarity and stationarity of increments (hence it has the same form of covariance as fractional Brownian motion), infinite divisibility, and long-range dependence in the sense that the sum over \(k\) of covariances of increments on the intervals \([0,1]\) and \([k,k+1]\) diverges; like fractional Brownian motion, it is not a semimartingale. The Rosenblatt process is the simplest non-Gaussian Hermite process (see Taqqu [34]). It is a counterpart of fractional Brownian motion, which is the most prominent long-range dependent Gaussian process. The Rosenblatt process has attracted significant attention due to its mathematical interest and to possible applications where the Gaussian property may not be assumed. Recent papers on the subject include [39], which develops a related stochastic calculus and mentions areas of application (see also references therein), [23] and [40], where new properties of the process have been found, and [17], which provides a strong approximation for the process. The information on the Rosenblatt process and fractional Brownian motion relevant for the present paper is given in the next section.

Our main objective in this paper is to present a different way of obtaining the Rosenblatt process. The method of Taqqu, which was developed for Hermite processes generally [34], is based on limits of sums of strongly dependent random variables. The Rosenblatt process can also be defined as a double stochastic integral [36, 39]. Our approach consists in deriving the Rosenblatt process from a specific random particle system, which hopefully provides an intuitive physical interpretation of this process. A useful tool here is the theory of random variables in the space of tempered distributions \({\mathcal {S}}'\equiv {\mathcal {S}}'({\mathbb {R}}^d)\) (\(d=1\) in our case), since it permits us to employ some convenient properties of this space, and the Rosenblatt process can be expressed with the help of an \({\mathcal {S}}'\)-random variable. This was noted by Dobrushin [13]. Relations between random particle systems and random elements of \({\mathcal {S}}'\) have been studied by many authors, beginning with Martin-Löf [24]. Our approach is in the spirit of Adler and coauthors, e.g., [1], where a general scheme (though not general enough to cover our case) was developed for representing a given random element of \({\mathcal {S}}'\) as the limit of appropriate functionals of some particle system (with high density in [1]). That approach was later applied in [2, 11, 32] to give particle picture interpretations of the self-intersection local times of density processes in \({\mathcal {S}}'\). We stress that our principal aim is to construct the Rosenblatt process by means of a particle system, and to this end, an important step is to study a suitable random element of \({\mathcal {S}}'\). Particle picture approaches have been used to obtain fractional Brownian motion and sub-fractional Brownian motion with Hurst parameter \(H\), in different ways for \(H<1/2\) and \(H>1/2\) [8, 12].

Our results can be summarized as follows.

The Rosenblatt process with parameter \(H\), defined for \(H\in (1/2,1)\), is represented in the form \(\xi _t=\langle Y,1\!\!1_{[0,t]}\rangle \), \(t\ge 0\), where \(Y\) is an \({\mathcal {S}}'\)-random variable obtained from the Wick product \(:X\otimes X:\), and \(X\) is a centered Gaussian \({\mathcal {S}}'\)-random variable (see (3.1)). \(X\) is, in a sense, a distributional derivative of a suitable fractional Brownian motion, and the relation between \(Y\) and \(X\) corresponds to the informal formula (43) in [36]. Note that \(:X\otimes X:\) is a counterpart of the Hermite polynomial of order 2, \(H_2(x)=x^2-1\), used in [33, 36]. The possibility of taking “test functions” of the form \(\varphi =1\!\!1_{[0,t]}\) to bring in a time parameter has been noted, for example, in [34]. This formulation is in the spirit of white noise analysis.
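For orientation, note the following immediate consequence of (3.1) (a routine remark, recorded here for convenience): if \(\varphi \in {\mathcal {S}}\) is normalized so that \(E\langle X,\varphi \rangle ^2=1\), then

$$\begin{aligned} \langle :X\otimes X:,\varphi \otimes \varphi \rangle =\langle X,\varphi \rangle ^2-1=H_2(\langle X,\varphi \rangle ), \end{aligned}$$

so the Wick square plays exactly the role that \(H_2\) plays in the discrete approximations of [33, 36].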

We define a particle system on \({\mathbb {R}}\) with initial distribution given by a Poisson random field with Lebesgue intensity measure, and particles evolving independently according to the standard symmetric \(\alpha \)-stable Lévy process. The particles are independently assigned charges \(+1\) and \(-1\) with probabilities \(1/2\). A crucial element of the construction is the intersection local time of two independent \(\alpha \)-stable processes, which is known to exist for \(\alpha >1/2\) (Proposition 5.1 of [11]). Consider the process \(\xi ^T\) defined by

$$\begin{aligned} \xi ^T_t=\frac{1}{T}\mathop {\sum _{j,k}}\limits _{j\ne k}\sigma _j\sigma _k\langle \varLambda (x_j+\rho ^j,x_k+\rho ^k;T), 1\!\!1_{[0,t]}\rangle , \,\, t\ge 0, \end{aligned}$$
(1.1)

where the \(x_j\) are the points of the initial Poisson configuration, the \(\rho ^j\) are the \(\alpha \)-stable processes corresponding to those points (\(\rho ^j(0)=0\)), the \(\sigma _j\) are the respective charges, and \(\varLambda (x_j+\rho ^j,x_k+\rho ^k;T)\) is the intersection local time of the processes \(x_j+\rho ^j\) and \(x_k+\rho ^k\) on the interval \([0,T]\). This local time is defined as a process in \({\mathcal {S}}'\) (Definition 2.2 and Proposition 2.3). \(\xi ^T\) is given a precise meaning and shown to have a continuous modification (Lemma 3.4). The result is that for \(\alpha \in (1/2,1)\), \(\xi ^T\) converges as \(T\rightarrow \infty \), in the sense of weak functional convergence, to the Rosenblatt process with parameter \(H=\alpha \), up to a multiplicative constant (Theorem 3.5).
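To make the construction concrete, the following is a minimal numerical sketch of (1.1) (our illustration, not taken from the paper): Poisson points on a finite window, i.i.d. charges, symmetric \(\alpha \)-stable paths sampled by the Chambers–Mallows–Stuck method, and the intersection local time approximated by the mollified double time integral of Sect. 2.3 on a finite grid, with a Gaussian kernel standing in for a compactly supported \(f\in {\mathcal {F}}\). All numerical parameters (window, grid, kernel width) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, T, L = 0.75, 10.0, 10.0      # stability index (1/2 < alpha < 1), horizon, spatial window
M, eps, t = 400, 0.3, 5.0           # time steps on [0, T], mollifier width, evaluation time

def stable_increments(alpha, size, scale, rng):
    """Symmetric alpha-stable increments (Chambers-Mallows-Stuck method)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    X = (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
         * (np.cos(U - alpha * U) / W) ** ((1 - alpha) / alpha))
    return scale * X

dt = T / M
x = rng.uniform(-L, L, rng.poisson(2 * L))      # Poisson field with Lebesgue intensity on [-L, L]
charge = rng.choice([-1.0, 1.0], size=x.size)   # i.i.d. charges sigma_j
paths = x[:, None] + np.concatenate(            # x_j + rho^j on the time grid, rho^j(0) = 0
    [np.zeros((x.size, 1)),
     np.cumsum(stable_increments(alpha, (x.size, M), dt ** (1 / alpha), rng), axis=1)], axis=1)

def ilt(p1, p2):
    """<Lambda^f_eps(p1, p2; T), 1_[0,t]> as a double Riemann sum, cf. (2.17)."""
    f_eps = lambda z: np.exp(-z ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))
    ind = (p1 >= 0) & (p1 <= t)                 # spatial indicator 1_[0,t](eta^1_u)
    return dt ** 2 * np.sum(f_eps(p2[None, :] - p1[:, None]) * ind[:, None])

xi_T_t = sum(charge[j] * charge[k] * ilt(paths[j], paths[k])
             for j in range(x.size) for k in range(x.size) if j != k) / T
print(xi_T_t)
```

As the horizon, the window and the grid are refined (and \(\varepsilon \rightarrow 0\)), Theorem 3.5 suggests that the law of such samples should approach that of \(K\xi _t\); the sketch is only meant to exhibit the objects entering (1.1).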

Our second result concerns the analysis of long-range dependence of the Rosenblatt process. We give a precise measure of long-range dependence by means of a number called dependence exponent (Theorem 4.1). We show that the increments are asymptotically independent (not just uncorrelated) as the distance between the intervals tends to infinity (Corollary 4.2).

In Sect. 2, we give background on the Rosenblatt process, on the \({\mathcal {S}}'\)-random variable \(X\) from which fractional Brownian motion is derived, on how the above-mentioned particle system produces \(X\), and on intersection local time. In Sect. 3, we construct the Rosenblatt process by means of the \({\mathcal {S}}'\)-random variable \(Y\) obtained from \(:X\otimes X:\), we show that the process \(\xi ^T\) is well defined, and we prove convergence to the Rosenblatt process. In Sect. 4, we discuss the long-range dependence of the process. Section 5 contains some comments of related interest. The proofs are given in Sect. 6.

We use the following notation:

  • \(\mathcal {S}\): the Schwartz space of smooth rapidly decreasing functions on \(\mathbb {R}\),

  • \({\mathcal {S}}'\): the space of tempered distributions (dual of \({\mathcal {S}}\)),

  • \(\overline{\,\cdot \,}\): complex conjugate,

  • \(\widehat{\varphi }(z)=\int _{\mathbb {R}} e^{ixz}\varphi (x)\hbox {d}x\): Fourier transform of a function \(\varphi \),

  • \(\Rightarrow \): convergence in law in an appropriate space,

  • \(\Rightarrow _f\): convergence of finite-dimensional distributions,

  • \(*\): convolution,

  • \(C([0,\tau ])\): the space of real continuous functions on \([0,\tau ]\),

  • \(C, C_i\): generic positive constants with possible dependencies in parentheses.

2 Background

2.1 The Rosenblatt process

We recall some facts on the Rosenblatt process \(\xi =(\xi _t)_{t\ge 0}\) with parameter \(H\in (1/2,1)\), which can be found in [33, 36]. The characteristic function of the finite-dimensional distributions of the process has, for \((\theta _1,\ldots ,\theta _p)\) in a small neighborhood of \(0\), the form

$$\begin{aligned} E\mathrm{exp}\left\{ i\sum ^p_{j=1}\theta _j\xi _{t_j}\right\} =\mathrm{exp}\left\{ \frac{1}{2}\sum ^\infty _{k=2} \frac{(2i\sigma )^k}{k}R_{H,k}(\theta _1,\ldots ,\theta _p; t_1,\ldots ,t_p)\right\} \end{aligned}$$
(2.1)

where

$$\begin{aligned}&{R_{H,k}(\theta _1,\ldots ,\theta _p;t_1,\ldots ,t_p) =\int \limits _{{\mathbb {R}}^k}\psi (x_1)\psi (x_2)\ldots \psi (x_k)}\nonumber \\&\quad \cdot |x_1-x_2|^{H-1}|x_2-x_3|^{H-1}\ldots |x_{k-1}-x_k|^{H-1} |x_k-x_1|^{H-1}dx_1\ldots dx_k,\end{aligned}$$
(2.2)
$$\begin{aligned}&\psi (x)=\sum ^p_{j=1}\theta _j1\!\!1_{[0,t_j]}(x), \end{aligned}$$
(2.3)

and

$$\begin{aligned} \sigma =\left[ \frac{1}{2}H(2H-1)\right] ^{1/2}. \end{aligned}$$
(2.4)

The value of \(\sigma \) is chosen so that \(E\xi ^2_1=1\). The series in the exponent converges for \(\theta _1,\ldots ,\theta _p\) in a (small) neighborhood of \(0\) depending on \(t_1,\ldots ,t_p\), and (2.1), defined in this neighborhood, determines the distribution of the process. The process is also characterized by the cumulants of the random variable \(\sum ^p_{j=1}\theta _j\xi _{t_j}\), which are \(\kappa _1=0\),

$$\begin{aligned} \kappa _k=2^{k-1}(k-1)!\sigma ^kR_{H,k}(\theta _1,\ldots ,\theta _p;t_1,\ldots ,t_p),\,\, k\ge 2. \end{aligned}$$
(2.5)
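As a quick sanity check on (2.2)–(2.5) (a routine computation, recorded here for convenience): for \(p=1\) and \(\psi =\theta 1\!\!1_{[0,t]}\),

$$\begin{aligned} R_{H,2}(\theta ;t)=\theta ^2\int \limits ^t_0\int \limits ^t_0|x_1-x_2|^{2H-2}\hbox {d}x_1\,\hbox {d}x_2 =\frac{\theta ^2t^{2H}}{H(2H-1)}, \end{aligned}$$

so that, by (2.4) and (2.5), \(\kappa _2=2\sigma ^2R_{H,2}(\theta ;t)=\theta ^2t^{2H}\), in accordance with the normalization \(E\xi ^2_1=1\).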

The process \(\xi \) arises from a (Donsker-type) limit in distribution as \(n\rightarrow \infty \) of the processes

$$\begin{aligned} \xi _n(t)=\frac{\sigma }{n^H}\sum ^{\lfloor nt\rfloor }_{j=1}X_j,\,\,t\in [0,T], \end{aligned}$$

(\(\lfloor \cdot \rfloor \) denotes integer part), where the random variables \(X_j\) are defined by \(X_j=Y^2_j-1\) (i.e., \(X_j=H_2(Y_j)\), where \(H_2\) is the Hermite polynomial of order 2), and \((Y_j)_j\) is a Gaussian stationary sequence of random variables with mean \(0\), variance \(1\), and covariances

$$\begin{aligned} r_j=EY_0Y_j=(1+j^2)^{(H-1)/2}\sim j^{H-1}\,\,\mathrm{as}\,\, j\rightarrow \infty . \end{aligned}$$
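This Donsker-type limit is easy to visualize numerically. The following sketch (our illustration; the size \(n\) and the Cholesky factorization are pragmatic choices, not part of the construction) generates one approximate sample path of \(\xi _n\):

```python
import numpy as np

rng = np.random.default_rng(0)
H, n = 0.7, 2000                     # Hurst parameter, number of summands
sigma = np.sqrt(0.5 * H * (2 * H - 1))

# Stationary Gaussian sequence (Y_j) with r_j = (1 + j^2)^((H-1)/2).
j = np.arange(n)
r = (1.0 + j.astype(float) ** 2) ** ((H - 1) / 2)
cov = r[np.abs(j[:, None] - j[None, :])]          # Toeplitz covariance matrix
Y = np.linalg.cholesky(cov + 1e-10 * np.eye(n)) @ rng.standard_normal(n)

X = Y ** 2 - 1                                    # X_j = H_2(Y_j)
S = np.concatenate([[0.0], np.cumsum(X)])         # partial sums, S[m] = X_1 + ... + X_m
t = np.linspace(0.0, 1.0, 201)
xi_n = sigma * S[(n * t).astype(int)] / n ** H    # xi_n(t) on [0, 1]
```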

The spectral representation of the process is

$$\begin{aligned} \xi _t\mathop {=}\limits ^{d}A(H)\int \limits _{{\mathbb {R}}^2}^{''} \frac{e^{i(\lambda _1+\lambda _2)t}-1}{i(\lambda _1+\lambda _2)} \frac{1}{|\lambda _1|^{H/2}|\lambda _2|^{H/2}}d \widetilde{B}(\lambda _1)d\widetilde{B}(\lambda _2) \end{aligned}$$
(2.6)

(\(\mathop {=}\limits ^{d}\) means equality in distribution), where

$$\begin{aligned} A(H)=\frac{[H(2H-1)/2]^{1/2}}{2\Gamma (1-H)\sin (H\pi /2)}, \end{aligned}$$
(2.7)

\(\widetilde{B}\) is a complex Gaussian measure on \(\mathbb {R}\) of the form \(\widetilde{B}=B^{(1)}+iB^{(2)}\), where \(B^{(1)}\) and \(B^{(2)}\) are independent real Gaussian measures such that \(B^{(1)}(A)=B^{(1)}(-A)\), \(B^{(2)}(A)=-B^{(2)}(-A)\) and \(E(B^{(1)}(A))^2=E(B^{(2)}(A))^2=\frac{1}{2}|A|\) for every Borel set \(A\subset \mathbb {R}\) with finite Lebesgue measure \(|A|\). (\(\widetilde{B}\) can be viewed as a complex-valued Fourier transform of white noise.) The double prime on the integral means that the diagonals \(\lambda _1=\pm \lambda _2\) are excluded from the integration. The process also has a time representation as a double integral on \({\mathbb {R}}^2\) with respect to Brownian motion, and a finite interval integral representation obtained in [39].

We have mentioned in the Introduction some of the main properties of the Rosenblatt process. We recall the self-similarity with parameter \(H\): for any \(c>0\),

$$\begin{aligned} (\xi _{ct})_{t\ge 0}\mathop {=}\limits ^{d} c^H(\xi _t)_{t\ge 0} \end{aligned}$$
(2.8)

This and stationarity of increments imply

$$\begin{aligned} E(\xi _t-\xi _s)^2=\sigma ^2(t-s)^{2H}, \end{aligned}$$
(2.9)

and hence, the covariance function of \(\xi \) has the same form as that of the fractional Brownian motion, i.e.,

$$\begin{aligned} E\xi _s\xi _t=\frac{\sigma ^2}{2}(s^{2H}+t^{2H}-|t-s|^{2H}). \end{aligned}$$
(2.10)

In particular, the increments are positively correlated (since \(H>1/2\)), and

$$\begin{aligned} \sum ^\infty _{k=1}E\xi _1(\xi _k-\xi _{k-1})=\infty . \end{aligned}$$
(2.11)

We have not found a published proof of the non-semimartingale property of \(\xi \), but it is easy to establish. By (2.9) with \(H>1/2\), it is obvious that the quadratic variation is \(0\). A deeper result is that self-similarity and increment stationarity imply that the paths have infinite variation [41]. Since a semimartingale with zero quadratic variation must have paths of finite variation, it follows that \(\xi \) is not a semimartingale. The non-semimartingale property of fractional Brownian motion \((H\ne 1/2)\) follows, for example, from a general criterion for Gaussian processes [7] (Lemma 2.1, Corollary 2.1).

Infinite divisibility was recently proved in [23, 40].

There does not seem to be information in the literature on whether or not the Rosenblatt process has the Markov property (it seems plausible that it does not, by analogy with fractional Brownian motion).

The fractional Brownian motion is the only Gaussian process that has the properties (2.8) and (2.9) (with \(\sigma =1\)). On the other hand, there are many processes with stationary increments satisfying (2.8) which belong to the second chaos [22]. The Rosenblatt process is the simplest one of them.

2.2 An \({\mathcal {S}}'\)-random variable related to fractional Brownian motion

Recall that fractional Brownian motion (fBm) with Hurst parameter \(H\in (0,1)\) is a centered Gaussian process \((B^H_t)_{t\ge 0}\) with covariance given by the right-hand side of (2.10) with \(\sigma =1\) (see [29] for background on fBm). This process can be represented with the help of the centered Gaussian \({\mathcal {S}}'\)-valued random variable \(X\) with covariance functional

$$\begin{aligned} E\langle X,\varphi _1\rangle \langle X,\varphi _2\rangle =\frac{1}{\pi } \int \limits _{\mathbb {R}}\widehat{\varphi }_1(x) \overline{\widehat{\varphi }_2(x)}|x|^{-\alpha }dx,\quad \varphi _1,\varphi _2\in {\mathcal {S}}, \end{aligned}$$
(2.12)

where \(-1<\alpha <1\). Namely,

$$\begin{aligned} \left( \langle X,1\!\!1_{[0,t]}\rangle \right) _{t\ge 0}=(KB^H_t)_{t\ge 0}, \end{aligned}$$
(2.13)

with \(H=\frac{1+\alpha }{2}\), where \(K\) is a positive constant. (\(\langle X,1\!\!1_{[0,t]}\rangle \) is defined by an \(L^2\)-extension.) For \(0<\alpha <1\) \((\frac{1}{2}<H<1)\), this random variable \(X\) can be obtained from the particle system described in the Introduction, i.e., the system of independent standard \(\alpha \)-stable processes (particle motions) starting from a Poisson random field with Lebesgue intensity. Each particle has a charge \(\pm 1\) with equal probabilities, and the charges are mutually independent and independent of the initial configuration and of the particle motions. The motions have the form \(x_j+\rho ^j\), where the \(x_j\) are the points of the initial configuration, and the \(\rho ^j\) are independent standard \(\alpha \)-stable processes, independent of \(\{x_j\}_j\), with \(\rho ^j_0=0\). The charges are denoted by \(\sigma _j\).
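To see where the fBm scaling in (2.13) comes from, note that \(\widehat{1\!\!1_{[0,t]}}(x)=(e^{ixt}-1)/(ix)\), so (2.12) gives (a standard computation, recorded for convenience)

$$\begin{aligned} E\langle X,1\!\!1_{[0,t]}\rangle ^2=\frac{1}{\pi }\int \limits _{\mathbb {R}} \frac{|e^{ixt}-1|^2}{x^2}|x|^{-\alpha }\hbox {d}x =t^{1+\alpha }\frac{1}{\pi }\int \limits _{\mathbb {R}}\frac{|e^{iu}-1|^2}{|u|^{2+\alpha }}\hbox {d}u =K^2t^{2H} \end{aligned}$$

by the substitution \(u=xt\); this is the variance of \(KB^H_t\) with \(H=\frac{1+\alpha }{2}\).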

The normalized total charge occupation on the interval \([0,T]\) is defined by

$$\begin{aligned} \langle X_T,\varphi \rangle =\frac{1}{\sqrt{T}}\sum \limits _j\sigma _j\int \limits _0^T \varphi (x_j+\rho ^j_s)\hbox {d}s,\quad \varphi \in {\mathcal {S}}. \end{aligned}$$
(2.14)

We have the following proposition.

Proposition 2.1

If \(0<\alpha <1\) and \(T\rightarrow \infty \), then

  1. (a)

    \(X_T\Rightarrow X\) in \({\mathcal {S}}'\), where \(X\) is as in (2.12).

  2. (b)

    \((\langle X_T,1\!\!1_{[0,t]}\rangle )_{t\ge 0}\;{\Rightarrow }_f (KB^H_t)_{t\ge 0}\) with \(H=\frac{1+\alpha }{2}\).

This fact is an easy consequence of Theorem 2.1(a) in [9], where the occupation time fluctuations around the mean for the system without charges were considered. It suffices to take two independent copies of such systems and to write the difference of their occupation time fluctuations.

A similar procedure, with a different functional of a particle system without charges, also yields fBm with \(H<\frac{1}{2}\), as well as the corresponding random variable \(X\) (see Theorem 2.9 in [12]).

2.3 Intersection local time

There are several ways to define the intersection local time (ILT) of two processes (see, e.g., [1, 14, 25]). We will take the definition from [11], which is close to that of [1]. Intuitively, the ILT of real càdlàg processes \((\eta ^1_t)_{t\ge 0},(\eta ^2_t)_{t\ge 0}\) up to time \(T\) is given by

$$\begin{aligned} \langle \varLambda (\eta ^1,\eta ^2;T),\varphi \rangle = \int \limits ^T_0\int \limits ^T_0\delta (\eta ^2_v-\eta ^1_u) \varphi (\eta ^1_u)du\, dv,\quad \varphi \in {\mathcal {S}}, \end{aligned}$$

where \(\delta \) is the Dirac distribution. We want to regard \(\varLambda \) as a process in \({\mathcal {S}}'\).

To make this definition rigorous, one has to apply a limiting procedure.

Let \({\mathcal {F}}\) denote the class of non-negative symmetric infinitely differentiable functions \(f\) on \(\mathbb {R}\) with compact support and such that \(\int _{\mathbb {R}} f(x)\hbox {d}x=1\). For \(f\in {\mathcal {F}}, \varepsilon >0\), let

$$\begin{aligned} f_\varepsilon (x)=\varepsilon ^{-1}f \left( \frac{x}{\varepsilon }\right) , \quad x\in {\mathbb {R}}. \end{aligned}$$
(2.15)

We will frequently use

$$\begin{aligned} |\widehat{f}_\varepsilon (x)|\le 1\quad \mathrm{and}\quad \lim _{\varepsilon \rightarrow 0}\widehat{f}_\varepsilon (x)=\lim _{\varepsilon \rightarrow 0}\widehat{f}(\varepsilon x)=1. \end{aligned}$$
(2.16)

We define

$$\begin{aligned} \langle \varLambda ^f_\varepsilon (\eta ^1,\eta ^2;T),\varphi \rangle = \int \limits ^T_0 \int \limits ^T_0f_\varepsilon (\eta ^2_v-\eta ^1_u)\varphi (\eta ^1_u) du\,dv,\quad T\ge 0,\varphi \in {\mathcal {S}}. \end{aligned}$$
(2.17)
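For a concrete feel for (2.17), here is a minimal numerical sketch (our illustration, not from the paper): two independent symmetric \(\alpha \)-stable paths, the double time integral discretized on a grid, and a Gaussian kernel standing in for a compactly supported \(f\in {\mathcal {F}}\); shrinking \(\varepsilon \) illustrates the stabilization asserted in Definition 2.2 below.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, T, M = 0.75, 1.0, 2000
dt = T / M

def stable_path(alpha, M, dt, rng):
    """Standard symmetric alpha-stable path on a grid (Chambers-Mallows-Stuck increments)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, M)
    W = rng.exponential(1.0, M)
    inc = (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
           * (np.cos(U - alpha * U) / W) ** ((1 - alpha) / alpha))
    return np.concatenate([[0.0], np.cumsum(dt ** (1 / alpha) * inc)])

eta1, eta2 = stable_path(alpha, M, dt, rng), stable_path(alpha, M, dt, rng)
phi = lambda x: np.exp(-x ** 2)                   # a smooth, rapidly decreasing test function

for eps in (0.2, 0.1, 0.05):
    f_eps = lambda z: np.exp(-z ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))
    approx = dt ** 2 * np.sum(f_eps(eta2[None, :] - eta1[:, None]) * phi(eta1)[:, None])
    print(eps, approx)                            # <Lambda^f_eps(eta1, eta2; T), phi>
```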

Definition 2.2.

If there exists an \({\mathcal {S}}'\)-process \(\varLambda (\eta ^1,\eta ^2)=(\varLambda (\eta ^1,\eta ^2;T))_{T\ge 0}\) such that for each \(T\ge 0, \varphi \in {\mathcal {S}}\) and any \(f\in {\mathcal {F}},\langle \varLambda (\eta ^1,\eta ^2;T),\varphi \rangle \) is the mean square limit of \(\langle \varLambda ^f_\varepsilon (\eta ^1,\eta ^2;T),\varphi \rangle \) as \(\varepsilon \rightarrow 0\), then the process \(\varLambda (\eta ^1,\eta ^2)\) is called the intersection local time (ILT) of the processes \(\eta ^1,\eta ^2\).

In [11], the following result was proved (Theorem 4.2 and Proposition 5.1 therein).

Proposition 2.3.

Let \(\eta ^1,\eta ^2\) be independent standard \(\alpha \)-stable processes in \(\mathbb {R}\). If \(\alpha >\frac{1}{2}\), then for any \(x,y\in \mathbb {R}\) the ILT \(\varLambda (x+\eta ^1,y+\eta ^2)\) exists. Moreover, for all \(T\ge 0\), \(f\in {\mathcal {F}}\), \(\varphi \in {\mathcal {S}}\), \(\langle \varLambda ^f_\varepsilon (\cdot +\eta ^1,\cdot +\eta ^2;T),\varphi \rangle \) converges as \(\varepsilon \rightarrow 0\) in \(L^2(\mathbb {R}^2\times \varOmega ,\lambda \otimes \lambda \otimes P)\), where \(\lambda \) is the Lebesgue measure on \(\mathbb {R}\), and \(P\) is the probability measure on the underlying sample space \(\varOmega \).

3 Particle picture for the Rosenblatt process

We begin with another representation of the Rosenblatt process, which is more suitable for our purpose. From [13], it can be deduced that this construction was known to Dobrushin, but we have not been able to find it written explicitly in the literature. Therefore, we describe it in detail, but only sketch the proof.

Let \(X\) be the centered Gaussian \({\mathcal {S}}'\)-random variable with covariance (2.12). Recall that the Wick product \(:X\otimes X:\) is defined as a random variable in \({\mathcal {S}}'(\mathbb {R}^2)\) such that

$$\begin{aligned} \langle :X\otimes X:,\varphi _1\otimes \varphi _2\rangle =\langle X,\varphi _1\rangle \langle X,\varphi _2\rangle -E\langle X,\varphi _1\rangle \langle X, \varphi _2\rangle ,\quad \varphi _1,\varphi _2\in {\mathcal {S}}. \end{aligned}$$
(3.1)

(see, e.g., Chapter 6 of [18] or [3, 5]). The Wick square of \(X\) is an \({\mathcal {S}}'\)-random variable \(Y\) that can be written informally as \(\langle Y,\varphi \rangle =\langle :X\otimes X:, \varphi (x)\delta _{y-x}\rangle \). To make this rigorous, we use approximation. Fix \(f\in {\mathcal {F}}\) and let \(f_\varepsilon \) be as in (2.15). For \(\varphi \in {\mathcal {S}}\) we denote

$$\begin{aligned} \varPhi ^f_{\varepsilon ,\varphi }(x,y)=\varphi (x)f_\varepsilon (y-x), \end{aligned}$$
(3.2)

and we define an \({\mathcal {S}}'\)-random variable \(Y^f_\varepsilon \) by

$$\begin{aligned} \langle Y^f_\varepsilon ,\varphi \rangle =\langle :X\otimes X:,\varPhi ^f_{\varepsilon ,\varphi }\rangle ,\quad \varphi \in {\mathcal {S}}. \end{aligned}$$
(3.3)
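For later use, let us record an elementary computation (with the Fourier convention of Sect. 1; substitute \(u=y-x\)):

$$\begin{aligned} \widehat{\varPhi ^f_{\varepsilon ,\varphi }}(z,w)=\int \limits _{{\mathbb {R}}^2}e^{i(xz+yw)}\varphi (x)f_\varepsilon (y-x)\hbox {d}x\,\hbox {d}y =\widehat{\varphi }(z+w)\widehat{f}(\varepsilon w), \end{aligned}$$

which explains why factors of the form \(\widehat{\psi }(x+y)\widehat{f}(\varepsilon y)\) appear throughout Sect. 6.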

The following lemma is an easy consequence of the regularization theorem [20], and the fact that, by Gaussianity,

$$\begin{aligned}&E\langle :X\otimes X:,\varPhi \rangle \langle :X\otimes X:,\varPsi \rangle \nonumber \\&\quad =\frac{1}{\pi ^2} \int \limits _{{\mathbb {R}}^2}\widehat{\varPhi }(x,y)(\overline{\widehat{\varPsi }(x,y)}+ \overline{\widehat{\varPsi }(y,x)})|x|^{-\alpha }|y|^{-\alpha }\,\hbox {d}x\,\hbox {d}y,\quad \varPhi ,\varPsi \in {\mathcal {S}}(\mathbb {R}^2). \end{aligned}$$
(3.4)

Lemma 3.1

If \(\frac{1}{2}<\alpha <1\), then there exists an \({\mathcal {S}}'\)-random variable \(Y\) such that for any \(f\in {\mathcal {F}}\),

$$\begin{aligned} \langle Y,\varphi \rangle =L^2 \mathrm{-} \lim _{\varepsilon \rightarrow 0}\langle Y^f_\varepsilon ,\varphi \rangle ,\quad \varphi \in {\mathcal {S}}. \end{aligned}$$

\(\langle Y,\varphi \rangle \) can be further extended in \(L^2(\varOmega )\) to test functions of the form \(1\!\!1_{[0,t]}\).

The next theorem is an analogue of (2.13) for the Rosenblatt process.

Theorem 3.2.

Let \(Y\) be as in Lemma 3.1. Then the real process

$$\begin{aligned} (\langle Y,1\!\!1_{[0,t]}\rangle )_{t\ge 0} \end{aligned}$$
(3.5)

is, up to a constant, the Rosenblatt process with parameter \(H=\alpha \).

We remark that this theorem gives a rigorous meaning to the informal expression (43) in [36] relating the Rosenblatt process \(\xi \) with parameter \(H\) and a fBm with parameter \(H_1 =\frac{H+1}{2}\in (\frac{3}{4},1)\), which is given by

$$\begin{aligned} \xi _t =C(H_1)\int \limits ^t_0 \left( (B^{H_1}_s)'\right) ^{2}\hbox {d}s. \end{aligned}$$

\(X\) corresponds to \((B^{H_1})'\) and \(:X\otimes X:\) corresponds to \(((B^{H_1})')^2\). The relationship between the parameters follows from Proposition 2.1(b) and Theorem 3.2.

In [13], \({\mathcal {S}}'\)-random variables such as \(Y\) are represented in terms of complex multiple stochastic integrals related to (2.6).

After the first version of this paper had been submitted, the referee drew our attention to preprint [4] which appeared in the meantime. That paper uses Hida–Kuo type calculus [21] to construct the stochastic integral with respect to the Rosenblatt process, but the representation (3.5) does not seem to be present there.

Representation (3.5) and Proposition 2.1 suggest a way to construct the Rosenblatt process by means of a particle system. We consider the particle system as before, with \(\frac{1}{2}<\alpha <1\). By Proposition 2.3, for each pair \(\rho ^j,\rho ^k\), \(j\ne k\), the intersection local time \(\varLambda (x_j+\rho ^j,x_k+\rho ^k;T)\) exists; moreover, it extends in a natural way to the test function \(1\!\!1_{[0,t]}\). Namely, we have the following lemma.

Lemma 3.3.

Let

$$\begin{aligned} \psi =\sum ^m_{j=1}a_j1\!\!1_{I_j},\quad a_j\in \mathbb {R},\ I_j\ \hbox {a bounded interval}. \end{aligned}$$
(3.6)

Fix independent standard \(\alpha \)-stable processes \(\rho ^1,\rho ^2\), and \(x,y\in \mathbb {R}\), \(T>0\), \(f\in {\mathcal {F}}\).

  1. (a)

    There exists an \(L^2\)-limit of \(\langle \varLambda ^f_\varepsilon (x+\rho ^1,y+\rho ^2;T),\psi \rangle \) as \(\varepsilon \rightarrow 0\), where \(\varLambda ^f_\varepsilon \) is given by (2.17), and this limit does not depend on \(f\). We denote it by \(\langle \varLambda (x+\rho ^1,y+\rho ^2;T),\psi \rangle \).

  2. (b)

    \(\langle \varLambda (x+\rho ^1,y+\rho ^2;T),\psi \rangle =L^2-\lim _{\varepsilon \rightarrow 0}\langle \varLambda (x+\rho ^1,y+\rho ^2;T),\psi *f_\varepsilon \rangle .\)

  3. (c)

    Moreover, \(\langle \varLambda ^f_\varepsilon (\cdot +\rho ^1,\cdot +\rho ^2;T),\psi \rangle \) converges in \(L^2(\mathbb {R}^2\times \varOmega ,\lambda \otimes \lambda \otimes P)\) as \(\varepsilon \rightarrow 0\).

For the convenience of the reader, let us recall (1.1),

$$\begin{aligned} \xi ^T_t=\frac{1}{T}\mathop {\sum _{j,k}}\limits _{j\ne k}\sigma _j\sigma _k\langle \varLambda (x_j+\rho ^j,x_k+\rho ^k;T), 1\!\!1_{[0,t]}\rangle , \,\, t\ge 0. \end{aligned}$$

Lemma 3.4.

The process \(\xi ^T\) is well defined (the series converges in \(L^2\)), and it has a continuous modification.

The main result of the paper is stated in the next theorem, which is a counterpart of Proposition 2.1.

Theorem 3.5.

Let \(\frac{1}{2}<\alpha <1\). Then \(\xi ^T\Rightarrow K\xi \) in \(C([0,\tau ])\) as \(T\rightarrow \infty \) for each \(\tau >0\), where \(\xi \) is the Rosenblatt process with \(H=\alpha \) and \(K\) is a positive constant.

4 Dependence exponent

Long-range dependence is a general notion without a single accepted definition, and it can be viewed in different ways [19, 28, 35]. For a Gaussian process \(\eta \), long-range dependence is usually described as slow (power) decay of the covariance of increments on intervals \([u,v]\), \([s+\tau ,t+\tau ]\) as \(\tau \rightarrow \infty \), i.e.,

$$\begin{aligned} \mathrm{Cov}(\eta _v-\eta _u,\eta _{t+\tau }-\eta _{s+\tau })\sim C_{u,v,s,t}\tau ^{-K}, \end{aligned}$$

where \(K\) is a positive constant, and convergence and divergence of the series

$$\begin{aligned} \sum ^\infty _{k=1} \mathrm{Cov}(\eta _1-\eta _0, \eta _{k+1}-\eta _k) \end{aligned}$$

are sometimes referred to as “short-range” dependence and “long-range” dependence, respectively. This criterion is also applied to non-Gaussian processes with finite second moments, such as the Rosenblatt process [36]. The underlying idea is that the increments become uncorrelated (but not necessarily independent) at some rate as the distance \(\tau \) between the intervals tends to infinity. However, it can happen that \(K\le 0\), which should also be regarded as long-range dependence ([16] contains examples).

In order to characterize long-range dependence in some more precise way for infinitely divisible processes (not necessarily Gaussian), the codifference (see [27, 29]) can be useful. In [10], we defined the dependence exponent of a (real) infinitely divisible process \(\eta \) as the number

$$\begin{aligned} \kappa =\inf _{z_1,z_2\in {\mathbb {R}}}\inf _{0\le u<v<s<t} \sup \{\gamma >0:D^\eta _\tau (z_1,z_2;u,v,s,t)=o(\tau ^{-\gamma })\,\,\mathrm{as}\,\, \tau \rightarrow \infty \}, \end{aligned}$$
(4.1)

where

$$\begin{aligned} D^\eta _\tau (z_1,z_2;u,v,s,t)&= |\log E\hbox {e}^{i(z_1(\eta _v-\eta _u)+z_2(\eta _{t+\tau }-\eta _{s+\tau }))}\nonumber \\&-\log E\hbox {e}^{iz_1(\eta _v-\eta _u)}-\log E\hbox {e}^{iz_2(\eta _{t+\tau }-\eta _{s+\tau })}| \end{aligned}$$
(4.2)

is the absolute value of the codifference of the random variables \(z_1(\eta _v-\eta _u)\) and \(-z_2(\eta _{t+\tau }-\eta _{s+\tau })\). Note that if \(\eta \) is Gaussian, then

$$\begin{aligned} D^\eta _\tau (z_1,z_2;u,v,s,t)=|z_1z_2\mathrm{Cov}(\eta _v-\eta _u,\eta _{t+\tau }-\eta _{s+\tau })|. \end{aligned}$$
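Indeed, for centered jointly Gaussian \(Z_1=z_1(\eta _v-\eta _u)\) and \(Z_2=z_2(\eta _{t+\tau }-\eta _{s+\tau })\) we have \(\log E\hbox {e}^{iZ}=-\frac{1}{2}\mathrm{Var}\,Z\), hence (a two-line check)

$$\begin{aligned} D^\eta _\tau =\left| -\tfrac{1}{2}\mathrm{Var}(Z_1+Z_2)+\tfrac{1}{2}\mathrm{Var}\,Z_1 +\tfrac{1}{2}\mathrm{Var}\,Z_2\right| =|\mathrm{Cov}(Z_1,Z_2)|. \end{aligned}$$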

For fractional Brownian motion, \(\kappa =K=2-2H\), and for sub-fractional Brownian motion, \(\kappa =K=3-2H\) [8].

It turns out that the same idea can be used to measure long-range dependence for the Rosenblatt process, and this can be done without recourse to infinite divisibility. As recalled in Sect. 2.1, the characteristic functions of the finite-dimensional distributions of the process are given by an explicit formula only for small values of the parameters, which are \(z_1\) and \(z_2\) in our case (see (4.2)). We show next that it is enough to take \(z_1\) and \(z_2\) in an appropriate neighborhood of \(0\) to measure long-range dependence and prove asymptotic independence of increments.

For simplicity we take \(u=s,v=t\).

Theorem 4.1

Let \(\xi \) be the Rosenblatt process with parameter \(H\). For any \(0\le s<t\) there exists a neighborhood \(U(s,t)\) of \(0\) in \(\mathbb {R}^2\) such that

$$\begin{aligned} D^\xi _\tau (z_1,z_2,s,t):={D}^\xi _\tau (z_1,z_2;s,t,s,t) \end{aligned}$$
(4.3)

is well defined for \((z_1,z_2)\in U(s,t)\) and all \(\tau >0\), and if we modify (4.1) by putting

$$\begin{aligned} \kappa =\inf _{0\le s<t}\inf _{(z_1,z_2)\in U(s,t)}\sup \{\gamma >0:{D}^\xi _\tau (z_1,z_2,s,t)=o(\tau ^{-\gamma })\quad \mathrm{as}\quad \tau \rightarrow \infty \}, \end{aligned}$$
(4.4)

then \(\kappa =2-2H\).

So we see that the dependence exponent of the Rosenblatt process with parameter \(H\) is the same as that of the fBm \(B^H\).

From this theorem, by a standard tightness argument, stationarity of increments of \(\xi \), and the fact that the law of \(\xi _t\) is determined by its characteristic function in an arbitrarily small neighborhood of \(0\), we obtain the following corollary.

Corollary 4.2.

For any \(0<s<t\), the increments of the Rosenblatt process \(\xi _t-\xi _s\) and \(\xi _{t+\tau }-\xi _{s+\tau }\) are asymptotically independent as \(\tau \rightarrow \infty \), i.e., if \(\mu _{(s,t)}\) is the law of \(\xi _t-\xi _s\) and \(\mu _{(s,t),(s+\tau ,t+\tau )}\) is the law of \((\xi _t-\xi _s,\xi _{t+\tau }-\xi _{s+\tau })\), then

$$\begin{aligned} \mu _{(s,t),(s+\tau ,t+\tau )}\Rightarrow \mu _{(s,t)}\otimes \mu _{(s,t)} \left( =\mu _{(s,t)}\otimes \mu _{(s+\tau ,t+\tau )}\right) . \end{aligned}$$

5 Additional comments

5.1 Sub-Rosenblatt process

It is known that if in the formula (2.13) we put \(1\!\!1_{[0,t]}-1\!\!1_{[-t,0]}\) instead of \(1\!\!1_{[0,t]}\), we obtain a sub-fractional Brownian motion (sub-fBm), i.e., a centered Gaussian process with covariance

$$\begin{aligned} t^{2H}+s^{2H}-\frac{1}{2}\left( |t-s|^{2H}+(t+s)^{2H}\right) , \end{aligned}$$

again with \(H=\frac{1+\alpha }{2}\). This process has been studied by several authors, e.g., [8, 15, 38, 42]. In particular, in [12], an analogue of Proposition 2.1(b) was proved for sub-fBm.

We can now extend formula (3.5) and define a new process \((\langle Y,1\!\!1_{[0,t]}-1\!\!1_{[-t,0]}\rangle )_{t\ge 0}\). It is natural to call it a sub-Rosenblatt process, as it has the same covariance as sub-fBm.

Analogues of Theorems 3.5 and 4.1 also hold.

5.2 Rosenblatt process with two parameters

Maejima and Tudor [22] define a class of self-similar processes with stationary increments that live in the second Wiener chaos. These processes depend on two parameters \(H_1,H_2\), and the Rosenblatt process corresponds to the case \(H_1=H_2\). One can ask about the possibility of extending our construction to those two-parameter processes.

5.3 General Hermite processes

Taqqu [34] studies extensions of the Rosenblatt process living in Wiener chaos of order \(k, k\ge 2\), which he calls Hermite processes. One can attempt to find a particle picture interpretation for those processes. It seems that one should employ \(k\)-th Wick powers and work with intersection local times of \(k\)-tuples of stable processes.

6 Proofs

Proof of Theorem 3.2

(outline). Let \(\xi \) be the Rosenblatt process with parameter \(H\). It is known that its distributions are determined by its moments, and therefore, it is enough to prove that for all \(n,p\in \mathbb {N}\), \(t_1,t_2,\ldots ,t_p\ge 0\) and \(\theta _1,\ldots ,\theta _p\in \mathbb {R}\),

$$\begin{aligned} E\left<Y,\psi \right>^n=C^nE \left( \sum _{j=1}^p\theta _j\xi _{t_j}\right) ^n, \end{aligned}$$
(6.1)

where \(\psi \) has the form (2.3). It is known (see, e.g., [30], Thm II.12.6) that

$$\begin{aligned} E \left( \sum _{j=1}^p\theta _j\xi _{t_j}\right) ^n=\sum _{\pi \in \mathcal {P}(n)}\prod _{B\in \pi }\kappa _{\# B}, \end{aligned}$$
(6.2)

where \(\kappa \)’s are the corresponding cumulants given by (2.5) and \(\mathcal {P}(n)\) is the set of all partitions of \(\{1,\ldots , n\}\), and \(\#\) denotes cardinality of a set.

To compute \(E\left<Y,\psi \right>^n\), we will need the formulas for moments of the Wick product \(:X\otimes X:\). These moments are expressed with the help of Feynman graphs (see, e.g., [3, p. 1085] or [31, p. 422]). For fixed \(n\), we consider graphs as follows. Suppose that we have \(n\) numbered vertices. Each vertex has two legs, numbered \(1\) and \(2\). Legs are paired, forming links between vertices, in such a way that each link connects two different vertices and there are no unpaired legs left. The graph is a set of links. Each link is described by an unordered pair \(\{(i,j),(l,m)\}\), \(i,l\in \{1,\ldots ,n\}\), \(j,m\in \{1,2\}\), which means that leg \(j\) growing from vertex \(i\) is paired with leg \(m\) growing from vertex \(l\). Here \(i\ne l\), since each link connects two different vertices, and each \((i,j)\), \(i=1,\ldots ,n\), \(j=1,2\), is a part of one and only one link. The set of all distinct graphs of the above form will be denoted by \({\mathcal {G}}_n^2\). Let \(\tilde{\mathcal {G}}_n^2\) denote the set of all connected graphs in \({\mathcal {G}}_n^2\).
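This graph formalism is easy to check by brute force for small \(n\) (our illustration, not from the paper): the enumeration below generates \({\mathcal {G}}_n^2\) and counts the connected graphs, matching the count \(2^{n-1}(n-1)!\) of elements of \(\tilde{\mathcal {G}}_n^2\) that is used before (6.9).

```python
import math

def graphs(n):
    """All elements of G_n^2: pairings of the 2n legs (vertex i has legs
    (i, 1) and (i, 2)) in which every link joins two different vertices."""
    def rec(avail):
        if not avail:
            yield []
            return
        a = avail[0]
        for b in avail[1:]:
            if b[0] != a[0]:                       # no link inside a single vertex
                rest = [leg for leg in avail[1:] if leg != b]
                for g in rec(rest):
                    yield [(a, b)] + g
    yield from rec([(i, j) for i in range(n) for j in (1, 2)])

def is_connected(g, n):
    """Connectivity of the vertex set under the links of g (union-find)."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for (a, b) in g:
        parent[find(a[0])] = find(b[0])
    return len({find(v) for v in range(n)}) == 1

for n in (2, 3, 4, 5):
    connected = sum(is_connected(g, n) for g in graphs(n))
    print(n, connected, 2 ** (n - 1) * math.factorial(n - 1))   # counts agree
```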

By formulas (2.7) and (6.10) in [31], we have

$$\begin{aligned} E \langle :X\otimes X{:}, \varPhi \rangle ^n=\sum _{G\in {\mathcal {G}}_n^2}I^G(\varPhi ), \qquad \varPhi \in {\mathcal {S}}(\mathbb {R}^{2d}), \end{aligned}$$
(6.3)

where

$$\begin{aligned} I^G(\varPhi )=\int \limits _{\mathbb {R}^{2n}}\hat{\varPhi }(x_{1,1},x_{1,2})\ldots \hat{\varPhi }(x_{n,1},x_{n,2})\prod _{\{(l,m),(p,q)\}\in G}\delta _{-x_{p,q}}(\hbox {d}x_{l,m})\left| x_{p,q}\right| ^{-H}\hbox {d}x_{p,q}. \end{aligned}$$
(6.4)

Using Lemma 3.1 and arguments similar to those in the proof of Lemma 3.3 below, it is not difficult to see that

$$\begin{aligned} E \left<Y, \psi \right>^n=\lim _{\varepsilon \rightarrow 0}\sum _{G\in {\mathcal {G}}_n^2}I^G(\varPhi ^f_{\varepsilon ,\psi }), \end{aligned}$$
(6.5)

where \(\varPhi ^f_{\varepsilon ,\psi }\) is given by (3.2).

For \(G\in \tilde{\mathcal {G}}_n^2\) we have

$$\begin{aligned} I^G(\varPhi ^f_{\varepsilon ,\psi })&= \int \limits _{\mathbb {R}^{n}}\hat{\psi }(x_1-x_2)\hat{\psi }(x_2-x_3)\ldots \hat{\psi }(x_n-x_1)\\&\times F^G_{\varepsilon }(x_1,\ldots ,x_n)\left| x_1\right| ^{-H}\ldots \left| x_n\right| ^{-H}\,\hbox {d}x_1\ldots \hbox {d}x_n, \end{aligned}$$

where \(F^G_\varepsilon \) is a product of functions of the form \({\hat{f}}_\varepsilon (x_i)\) or \(\overline{{\hat{f}}_\varepsilon (x_i)}\). By (2.16) and the dominated convergence theorem,

$$\begin{aligned}&\lim _{\varepsilon \rightarrow 0}I^G(\varPhi ^f_{\varepsilon ,\psi })=J_n\nonumber \\&\quad :=\int \limits _{\mathbb {R}^n}\hat{\psi }(x_1-x_2)\hat{\psi }(x_2-x_3)\ldots \hat{\psi }(x_n-x_1) \left| x_1\right| ^{-H}\ldots \left| x_n\right| ^{-H}\,\hbox {d}x_1\ldots \hbox {d}x_n. \end{aligned}$$
(6.6)

In the proof of the integrability of the function under the integral in (6.6), we use \(|\hat{\psi }(x_1-x_2)\hat{\psi }(x_n-x_1)|\le |\hat{\psi }(x_1-x_2)|^2+|\hat{\psi }(x_n-x_1)|^2\),

$$\begin{aligned} |\widehat{\psi }(x)|\le \frac{C}{1+|x|}, \end{aligned}$$
(6.7)

and

$$\begin{aligned} \int \limits _{\mathbb {R}}\frac{1}{1+\left| x-y\right| }\left| y\right| ^{-H}\hbox {d}y\le C_1. \end{aligned}$$

In the general case, if \(G\in {\mathcal {G}}^2_n\), then it has a decomposition of the form \(G=G_1\cup \ldots \cup G_k\), where the \(G_j\) are the connected components of \(G\). Then by (6.4) and (6.6), we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}I^G(\varPhi ^f_{\varepsilon ,\psi })=J_{\# G_1}\ldots J_{\# G_k}, \end{aligned}$$
(6.8)

where \(\# G_j\) is the number of vertices of \(G_j\).

Note that each \(G\in {\mathcal {G}}_n^2\) determines a partition \(\pi _G\in \mathcal {P}(n)\), whose elements are the sets of vertices of connected components of \(G\). Hence, by (6.5) and (6.8),

$$\begin{aligned} E\left<Y,\psi \right>^n=\sum _{G\in {\mathcal {G}}_n^2}\prod _{B\in \pi _G}J_{\# B}. \end{aligned}$$

It is not difficult to see that if \(\pi \in \mathcal {P}(n)\) is of the form \(\pi =\{B_1,\ldots ,B_k\}\) with \(\#B_j\ge 2\), \(j=1,\ldots ,k\), then the number of different \(G\in {\mathcal {G}}_n^2\) such that \(\pi _G=\pi \) is equal to

$$\begin{aligned} \prod _{j=1}^k 2^{\# B_j-1}(\# B_j-1)! \end{aligned}$$

Therefore, setting \(J_1=0\), we obtain

$$\begin{aligned} E\left<Y,\psi \right>^n= \sum _{\pi \in \mathcal {P}(n)}\prod _{B\in \pi } 2^{\# B-1}(\# B-1)! J_{\# B}. \end{aligned}$$
(6.9)

Using the Fourier pair \(\int _{\mathbb {R}}e^{iux}|x|^{-H}\hbox {d}x=2\Gamma (1-H)\sin (\pi H/2)|u|^{H-1}\), \(0<H<1\) (understood in the distributional sense; this accounts for the constant \(C\) below), \(J_k\) given by (6.6) can also be written as

$$\begin{aligned} J_k=C^k\int \limits _{\mathbb {R}^k}\psi (x_1)\ldots \psi (x_k)\left| x_1-x_2\right| ^{H-1}\left| x_2-x_3\right| ^{H-1}\ldots \left| x_k-x_1\right| ^{H-1}\,\hbox {d}x_1\ldots \hbox {d}x_k, \end{aligned}$$

hence combining (6.9) with (2.2), (2.5), and (6.2), we obtain (6.1). \(\square \)

Proof of Lemma 3.3

To prove part (a), it suffices to show that for any \(f, g\in {\mathcal {F}}\),

$$\begin{aligned} E\langle {\varLambda }^f_\varepsilon (x+\rho ^1,y+\rho ^2;T), {\psi }\rangle \langle {\varLambda }^g_\delta (x+\rho ^1,y+\rho ^2;T),{\psi }\rangle \end{aligned}$$
(6.10)

has a finite limit as \(\varepsilon ,\delta \rightarrow 0\). Analogously to (7.1)–(7.4) of [11], (6.10) is equal to

$$\begin{aligned}&\frac{1}{(2\pi )^4}\int \limits _{[0,T]^4}\int \limits _{\mathbb {R}^4}\hbox {e}^{-ix(z+z')} \hbox {e}^{-iy(w+w')}\widehat{\psi }(z+w)\widehat{\psi }(z'+w')\widehat{f}(\varepsilon w)\widehat{g}(\delta w')\\&\quad \times \,\,\overline{\widehat{\nu }_{s,u}(z,z')} \overline{\widehat{\nu }_{r,v}(w,w')}\,\hbox {d}z\,\hbox {d}z'\,\hbox {d}w\,\hbox {d}w'\,\hbox {d}s\,\hbox {d}u\,\hbox {d}r\,\hbox {d}v, \end{aligned}$$

where \(\nu _{s,u}\) is the law of \((\rho ^1_s,\rho ^1_u)\). To complete the proof of part (a), it suffices to show that

$$\begin{aligned} I&:= \int \limits _{\mathbb {R}^4}|\widehat{\psi }(z+w)||\widehat{\psi }(z'+w')|\int \limits _{[0,T]^2}| \widehat{\nu }_{s,u}(z,z')|\,\hbox {d}s\,\hbox {d}u\nonumber \\&\times \int \limits _{[0,T]^2}|\widehat{\nu }_{r,v}(w,w')|\,\hbox {d}r\,\hbox {d}v\,\hbox {d}z\,\hbox {d}z'\,\hbox {d}w\,\hbox {d}w'<\infty .\nonumber \\ \end{aligned}$$
(6.11)

To derive this, we cannot repeat the argument of [11], because \(\widehat{\psi }\notin L^1\) for \(\psi \) of the form (3.6); we only have (6.7).

Fix \(\gamma >0\) such that \(\frac{1}{2}+4\gamma <\alpha \). It is easy to see that

$$\begin{aligned} \int \limits _{[0,T]^2}|\widehat{\nu }_{s,u}(z,z')|\hbox {d}s\hbox {d}u&\le \frac{C(T)}{1+|z+z'|^\alpha } \left( \frac{1}{1+|z|^\alpha }+\frac{1}{1+|z'|^\alpha }\right) \nonumber \\&\le C_1(T)h_\gamma (z,z')\frac{1}{1+|z|^\gamma }\frac{1}{1+|z'|^\gamma }, \end{aligned}$$
(6.12)

where

$$\begin{aligned} h_\gamma (z,z')=\frac{1}{1+|z+z'|^{\frac{1}{2}+\gamma }} \left( \frac{1}{1+|z|^{\frac{1}{2}+\gamma }} +\frac{1}{1+|z'|^{\frac{1}{2}+\gamma }}\right) . \end{aligned}$$

We have used

$$\begin{aligned} \frac{1}{1+|z+z'|^\gamma }\le C\frac{1+|z|^\gamma }{1+|z'|^\gamma }. \end{aligned}$$
(6.13)

Using \(h_\gamma (z,z')h_\gamma (w,w')\le h^2_\gamma (z,z')+h^2_\gamma (w,w')\), (6.12), and symmetry, we obtain

$$\begin{aligned} I\le C_2(T)\int \limits _{\mathbb {R}^4}\frac{|\widehat{\psi }(z+w)|\,|\widehat{\psi }(z'+w')|}{(1+|z|^\gamma )(1+|w|^\gamma )\, (1+|z'|^\gamma )(1+|w'|^\gamma )}h^2_\gamma (z,z')\,\hbox {d}z\,\hbox {d}z'\,\hbox {d}w\,\hbox {d}w', \end{aligned}$$

and (6.13) permits us to replace the denominator by \((1+|z+w|^\gamma )(1+|z'+w'|^\gamma )\); hence, (6.11) follows by (6.7).

Note that we have also shown that

$$\begin{aligned}&E\langle \varLambda (x+\rho ^1,y+\rho ^2;T),\psi \rangle ^2 =\frac{1}{(2\pi )^4}\int \limits _{[0,T]^4}\int \limits _{\mathbb {R}^4}\hbox {e}^{-ix(z+z')} \hbox {e}^{-iy(w+w')}\nonumber \\&\quad \times \widehat{\psi }(z+w)\widehat{\psi }(z'+w') \overline{\widehat{\nu }_{s,u}(z,z')}\overline{\widehat{\nu }_{r,v}(w,w')} \,\hbox {d}z\,\hbox {d}z'\,\hbox {d}w\,\hbox {d}w'\,\hbox {d}s\,\hbox {d}u\,\hbox {d}r\,\hbox {d}v. \end{aligned}$$
(6.14)

To prove part (b), we observe that the argument above can be carried out with linear combinations of functions of the form (3.6) and functions from \({\mathcal {S}}\) in place of \(\psi \). Hence, we see that (6.14) holds for \(\psi -\psi * f_\varepsilon \), since

$$\begin{aligned} (\psi -\psi * f_\varepsilon )^{\widehat{}}(x)=\widehat{\psi }(x)(1-\widehat{f}(\varepsilon x)) \end{aligned}$$
(6.15)

Then (b) follows from (6.11).

The proof of part (c) is the same as that of Proposition 4.4 in [11]. Only the fact that \(\psi \in L^2\) is needed here. \(\square \)

Remark 6.1.

From the proof of part (c), it follows that

$$\begin{aligned}&E\int \limits _{\mathbb {R}^2}(\langle \varLambda _\varepsilon ^f(x+\rho ^1,y+\rho ^2;T), \psi \rangle -\langle \varLambda (x+\rho ^1,y+\rho ^2;T),\psi \rangle )^2\,\hbox {d}x\,\hbox {d}y\nonumber \\&\quad =\frac{1}{(2\pi )^2}\int \limits _{[0,T]^4}\int \limits _{\mathbb {R}^2}|\widehat{\psi }(x+y)|^2| \widehat{f}(\varepsilon y)-1|^2\hbox {e}^{-|s-u||x|^\alpha }\hbox {e}^{-|r-v||y|^\alpha }\,\hbox {d}x\,\hbox {d}y\,\hbox {d}s\,\hbox {d}r\,\hbox {d}u\,\hbox {d}v,\nonumber \\ \end{aligned}$$
(6.16)

and

$$\begin{aligned}&E\int \limits _{\mathbb {R}^2}\langle \varLambda (x+\rho ^1,y+\rho ^2;T),\psi \rangle ^2\,\hbox {d}x\,\hbox {d}y \nonumber \\&\quad =\frac{1}{(2\pi )^2}\int \limits _{[0,T]^4}\int \limits _{\mathbb {R}^2}|\widehat{\psi }(x+y)|^2 \hbox {e}^{-|s-u||x|^\alpha }\hbox {e}^{-|r-v||y|^\alpha }\,\hbox {d}x\,\hbox {d}y\,\hbox {d}s\,\hbox {d}r\,\hbox {d}u\,\hbox {d}v. \end{aligned}$$
(6.17)

We need the following lemma, which can be proved by repeating the argument of the proof of Lemma 4.1 in [11].

Lemma 6.2.

For any \(F(\cdot +\rho ^1,\cdot +\rho ^2)\in L^2(\mathbb {R}^2\times \varOmega ,\lambda \otimes \lambda \otimes P)\) the series

$$\begin{aligned} \mathop {\sum _{j,k}}\limits _{j\ne k}\sigma _j\sigma _k F(x_j+\rho ^j,x_k+\rho ^k) \end{aligned}$$

converges in \(L^2(\varOmega )\), and

$$\begin{aligned}&E\left( {\mathop {\mathop {\sum }_{j,k}}\limits _{j\ne k}}\sigma _j\sigma _k F(x_j+\rho ^j,x_k+\rho ^k)\right) ^2=\int \limits _{{\mathbb {R}}^2}E(F^2(x+\rho ^1,y+\rho ^2)\nonumber \\&\quad +\,\,F(x+\rho ^1,y+\rho ^2) F(y+\rho ^1,x+\rho ^2))\,\hbox {d}x\,\hbox {d}y. \end{aligned}$$
(6.18)

Proof of Lemma 3.4

From Lemma 3.3, it follows that the \(\langle \varLambda (x_j+\rho ^j,x_k+\rho ^k;T), 1\!\!1_{[0,t]}\rangle \) are well defined, and \(\langle \varLambda (\cdot +\rho ^j,\cdot +\rho ^k;T), 1\!\!1_{[0,t]}\rangle \) belongs to \(L^2(\mathbb {R}^2\times \varOmega ,\lambda \otimes \lambda \otimes P)\); hence, by Lemma 6.2, the process \(\xi ^T\) is well defined and the series in (1.1) converges in \(L^2(\varOmega )\). Moreover, using the fact that \(\varLambda (x+\rho ^j,y+\rho ^k;T)=\varLambda (y+\rho ^k,x+\rho ^j;T)\) (see Corollary 3.4 in [11]), we have

$$\begin{aligned} E\left( {\mathop {\mathop {\sum }_{j,k}}\limits _{j\ne k}}\sigma _j\sigma _k\langle \varLambda (x_j+\rho ^j,x_k+\rho ^k;T),\psi \rangle \right) ^2 =2\int \limits _{\mathbb {R}^2}E\langle \varLambda (x+\rho ^1,y+\rho ^2;T),\psi \rangle ^2\,\hbox {d}x\,\hbox {d}y, \end{aligned}$$
(6.19)

for \(\psi \) either of the form (3.6) or \(\psi =\psi _1+\varphi \) with \(\psi _1\) of the form (3.6) and \(\varphi \in {\mathcal {S}}\). Hence, for \(t_2>t_1\ge 0\), by (1.1), (6.19) and (6.17),

$$\begin{aligned}&E\left( \xi ^T_{t_2}-\xi ^T_{t_1}\right) ^2\\&\quad =\frac{1}{2\pi ^2T^2}\int \limits _{[0,T]^4} \int \limits _{\mathbb {R}^2}\left| \widehat{1\!\!1_{(t_1,t_2]}}(x+y)\right| ^2\hbox {e}^{-|s-u||x|^\alpha } \hbox {e}^{-|r-v||y|^\alpha }\,\hbox {d}x\,\hbox {d}y\,\hbox {d}s\,\hbox {d}r\,\hbox {d}u\,\hbox {d}v. \end{aligned}$$

Using \(1>\alpha >\frac{1}{2}\) and

$$\begin{aligned} \frac{1}{T}\int \limits ^T_0\int \limits ^T_0\hbox {e}^{-|s-r||x|^\alpha }\,\hbox {d}s\,\hbox {d}r\le \frac{2}{|x|^\alpha } \end{aligned}$$
(6.20)

we obtain

$$\begin{aligned} E(\xi _{t_2}^T-\xi ^T_{t_1})^2&\le C\int \limits _{\mathbb {R}^2} \frac{|\hbox {e}^{i(t_2-t_1)(x+y)}-1|^2}{|x+y|^2}|x|^{-\alpha }|y|^{-\alpha }\,\hbox {d}x\,\hbox {d}y\nonumber \\&\le C_1|t_2-t_1|^{2\alpha }. \end{aligned}$$
(6.21)

Hence, \(\xi ^T\) has a continuous modification. \(\square \)

Before we pass to the proof of Theorem 3.5, we observe that for any \({\mathcal {S}}'\)-random variable \(Z\) (not necessarily Gaussian) such that \({\mathcal {S}}\ni \varphi \mapsto E\langle Z,\varphi \rangle ^2\) is continuous, the Wick product \(:Z\otimes Z:\) is well defined by an extension of (3.1). Moreover, we have the following lemma.

Lemma 6.3.

Let \((Z_T)_{T\ge 1}\) be a family of \({\mathcal {S}}'\)-random variables such that

$$\begin{aligned} \sup _{T\ge 1}E\langle Z_T,\varphi \rangle ^2\le p^2(\varphi ),\quad \varphi \in {\mathcal {S}}, \end{aligned}$$

for some continuous Hilbertian seminorm \(p\) on \({\mathcal {S}}\). Assume that \(Z_T\Rightarrow Z\) and \(E\langle Z_T,\varphi \rangle ^2\rightarrow E\langle Z,\varphi \rangle ^2\), \(\varphi \in {\mathcal {S}}\), as \(T\rightarrow \infty \). Then \(:Z_T\otimes Z_T:\), \(:Z\otimes Z:\) are well defined and \(:Z_T\otimes Z_T:\Rightarrow \,:Z\otimes Z:\) in \({\mathcal {S}}'(\mathbb {R}^2)\) as \(T\rightarrow \infty \).

This lemma follows by a standard argument using properties of \({\mathcal {S}}\) [37], so we skip the proof.

Lemma 6.3 together with Proposition 2.1(a) implies the following corollary.

Corollary 6.4

Let \(X_T,X\) be as in Proposition 2.1. Then

$$\begin{aligned} :X_T\otimes X_T:\,\Rightarrow \,:X\otimes X:\quad \hbox {in } {\mathcal {S}}'(\mathbb {R}^2)\ \hbox {as }T\rightarrow \infty . \end{aligned}$$
(6.22)

Indeed, it suffices to observe that by (2.14) and the Poisson initial condition, we have

$$\begin{aligned} E\langle X_T,\varphi \rangle ^2&= \frac{1}{T}\int \limits _{\mathbb {R}}\int \limits ^T_0 \int \limits ^T_0E\varphi (x+\rho _s)\varphi (x+\rho _u)\,\hbox {d}u\,\hbox {d}s\,\hbox {d}x\\&= \frac{1}{2\pi }\frac{1}{T}\int \limits _{\mathbb {R}}\int \limits ^T_0\int \limits ^T_0| \widehat{\varphi }(x)|^2\hbox {e}^{-|s-u||x|^\alpha }\,\hbox {d}u\,\hbox {d}s\,\hbox {d}x, \end{aligned}$$

so the assumptions of Lemma 6.3 are satisfied (we use (6.20)).

Proof of Theorem 3.5

For \(\psi \) of the form (3.6), we denote by \(\xi ^T_\psi \) the random variable defined by (1.1) with \(1\!\!1_{[0,t]}\) replaced by \(\psi \).

To prove the theorem it suffices to show that

$$\begin{aligned} \lim _{T\rightarrow \infty }|E \hbox {e}^{i\xi ^T_\psi }-E\hbox {e}^{i\langle Y,\psi \rangle }|=0 \end{aligned}$$
(6.23)

for any \(\psi \) of the form (3.6). Indeed, from (6.23) and Theorem 3.2, we infer convergence of finite-dimensional distributions, and from (6.21), we obtain tightness in \(C([0,\tau ])\) for each \(\tau >0\) (see [6], Thm. 12.3; note that the constant \(C_1\) in (6.21) does not depend on \(T\)).

Fix any \(f\in {\mathcal {F}}\) and denote \(\psi _\delta =\psi * f_\delta ,\delta >0\), where \(f_\delta \) is given by (2.15). Let

$$\begin{aligned} \xi ^{T,\varepsilon ,f}_\varphi =\frac{1}{T}{\mathop {\mathop {\sum }_{j,k}}\limits _{j\ne k}}\sigma _j\sigma _k\langle \varLambda ^f_\varepsilon (x_j+\rho ^j,x_k+\rho ^k;T), \varphi \rangle ,\quad \varepsilon >0,\varphi \in {\mathcal {S}}, \end{aligned}$$
(6.24)

which is well defined by Lemma 6.2.

Using the estimate \(|E\hbox {e}^{i\eta _1}-E\hbox {e}^{i\eta _2}|\le \frac{1}{2} E|\eta _1-\eta _2|^2,\) valid for centered random variables \(\eta _1,\eta _2\), it is easy to see that (6.23) will be proved if we show

$$\begin{aligned}&\lim _{\delta \rightarrow 0}\sup _{T\ge 1}E|\xi ^T_\psi -\xi ^T_{\psi _\delta }|^2=0,\end{aligned}$$
(6.25)
$$\begin{aligned}&\lim _{\varepsilon \rightarrow 0}\sup _{T\ge 1}\sup _{0<\delta \le 1}E|\xi ^T_{\psi _\delta }- \xi ^{T,\varepsilon ,f}_{\psi _\delta }|^2=0,\end{aligned}$$
(6.26)
$$\begin{aligned}&\lim _{T\rightarrow \infty }\sup _{0<\delta \le 1}E|\langle : X_T\otimes X_T:, \varPhi ^f_{\varepsilon ,\psi _\delta }\rangle - \xi ^{T,\varepsilon ,f}_{\psi _\delta }|=0,\quad \varepsilon >0,\end{aligned}$$
(6.27)
$$\begin{aligned}&\langle :X_T\otimes X_T:,\varPhi ^f_{\varepsilon ,\psi _\delta } \rangle \Rightarrow \langle :X\otimes X:,\varPhi ^f_{\varepsilon ,\psi _\delta } \rangle \quad \mathrm{as}\quad T\rightarrow \infty ,\quad \varepsilon >0,\delta >0,\end{aligned}$$
(6.28)
$$\begin{aligned}&\lim _{\varepsilon \rightarrow 0}\sup _{0<\delta \le 1}E| \langle :X\otimes X:,\varPhi ^f_{\varepsilon ,\psi _\delta }\rangle -\langle Y,\psi _ \delta \rangle |^2=0,\end{aligned}$$
(6.29)
$$\begin{aligned}&\lim _{\delta \rightarrow 0} E|\langle Y,\psi _\delta \rangle -\langle Y,\psi \rangle |^2=0. \end{aligned}$$
(6.30)

Using (6.18), (6.17), (6.15) and then (6.20), we have

$$\begin{aligned} E|\xi ^T_\psi -\xi ^T_{\psi _\delta }|^2&= \frac{2}{T^2}\int \limits _{[0,T]^4} \int \limits _{\mathbb {R}^2}|\widehat{\psi }(x+y)|^2|1-\widehat{f}(\delta (x+y))|^2\nonumber \\&\times \,\hbox {e}^{-|s-u||x|^\alpha }\hbox {e}^{-|r-v||y|^\alpha }\,\hbox {d}x\,\hbox {d}y\,\hbox {d}s\,\hbox {d}u\,\hbox {d}r\,\hbox {d}v\nonumber \\&\le 8\int \limits _{\mathbb {R}^2}|\widehat{\psi }(x+y)|^2|1- \widehat{f}(\delta (x+y))|^2|x|^{-\alpha }|y|^{-\alpha }\,\hbox {d}x\,\hbox {d}y. \end{aligned}$$
(6.31)

Hence, (6.25) follows by (2.16) and the fact that \(\frac{1}{2}<\alpha <1\).

Next, we apply Lemma 6.2 to

$$\begin{aligned} F(x+\rho ^1,y+\rho ^2)=\langle \varLambda (x+\rho ^1,y+\rho ^2;T)- \varLambda ^f_\varepsilon (x+\rho ^1,y+\rho ^2;T),\psi _\delta \rangle , \end{aligned}$$

and by (6.18) we obtain

$$\begin{aligned}&E|\xi ^T_{\psi _\delta }-\xi ^{T,\varepsilon ,f}_{\psi _\delta }|^2\le \frac{2}{T^2}\int \limits _{\mathbb {R}^2}E(\langle (\varLambda -\varLambda ^f_ \varepsilon )(x+\rho ^1,y+\rho ^2;T),\psi _\delta \rangle )^2\,\hbox {d}x\,\hbox {d}y\\&\quad \le \frac{C}{T^2}\int \limits _{[0,T]^4}\int \limits _{\mathbb {R}^2}| \widehat{\psi }(x+y)|^2|\widehat{f}(\varepsilon y)-1|^2\hbox {e}^{-|s-u||x|^\alpha }\hbox {e}^{-|r-v||y|^\alpha }\,\hbox {d}x\,\hbox {d}y\,\hbox {d}s\,\hbox {d}r\,\hbox {d}u\,\hbox {d}v, \end{aligned}$$

by (6.16) and (2.16). By (6.20), (6.7) and (2.16) we obtain (6.26).

To prove (6.27), observe that random variables almost identical to \(\xi ^{T,\varepsilon ,f}_{\psi _\delta }\) and \(\langle :X_T\otimes X_T:,\varPhi ^f_{\varepsilon ,\psi _\delta }\rangle \) have already appeared in [11] with different notation (and different scaling). Hence, by (4.1), (8.8), (8.14) and the two subsequent formulas in [11], we have

$$\begin{aligned}&A(T,\varepsilon ,\delta ):=E|\langle :X_T\otimes X_T:,\varPhi ^f_{\varepsilon ,\psi _\delta } \rangle -\xi ^{T,\varepsilon ,f}_{\psi _\delta }|^2\nonumber \\&\quad =\frac{1}{T^2}\int \limits _{[0,T]^4}\int \limits _{\mathbb {R}}E \varPhi ^f_{\varepsilon ,\psi _\delta }(x+\rho _s,x+\rho _u) \varPhi ^f_{\varepsilon ,\psi _\delta }(x+\rho _r,x+\rho _v)\,\hbox {d}x\,\hbox {d}v\,\hbox {d}r\,\hbox {d}u\,\hbox {d}s.\nonumber \\ \end{aligned}$$
(6.32)

It is easy to see that since the support of \(\psi _\delta \) is contained in a compact set which is independent of \(\delta \), we have (see (3.2) and (3.6))

$$\begin{aligned} |\varPhi ^f_{\varepsilon ,\psi _\delta }(x,y)|\le C(\varepsilon ,f)\phi (x)\phi (y), \end{aligned}$$
(6.33)

where

$$\begin{aligned} \phi (x)=\frac{1}{1+|x|^2}. \end{aligned}$$

Let \(({\mathcal {T}}_t)_t\) denote the \(\alpha \)-stable semigroup and \(G\) its potential, \(G\varphi =\int ^\infty _0{\mathcal {T}}_t\varphi \hbox {d}t\).

From (6.32), (6.33), and the Markov property, we get

$$\begin{aligned}&A(T,\varepsilon ,\delta )\\&\quad \le 4! C^2(\varepsilon ,f)\frac{1}{T^2}\int \limits _{\mathbb {R}}\int \limits ^T_0 \int \limits ^T_s\int \limits ^T_u\int \limits ^T_r{\mathcal {T}}_ s(\phi ({\mathcal {T}}_{u-s}\phi ({\mathcal {T}}_{r-u}(\phi {\mathcal {T}}_{v-r}\phi ))))(x) \,\hbox {d}x\,\hbox {d}v\,\hbox {d}r\,\hbox {d}u\,\hbox {d}s\\&\quad \le C_1(\varepsilon ,f)\frac{1}{T^2}\int \limits ^T_0 \int \limits _{\mathbb {R}}\phi (x)G(\phi G(\phi G\phi ))(x)\,\hbox {d}x\,\hbox {d}s\\&\quad \le C_2(\varepsilon ,f)\frac{1}{T}. \end{aligned}$$

In the last estimate, we used the fact that \(G\phi \) is bounded. Hence, (6.27) follows.

(6.28) follows from (6.22), since \(\varPhi ^f_{\varepsilon ,\psi _\delta }\in {\mathcal {S}}(\mathbb {R}^2)\).

In the proof of (6.29), we use (3.3) and Lemma 3.1, which yield

$$\begin{aligned}&E|\langle :X\otimes X:,\varPhi ^f_{\varepsilon ,\psi _\delta }\rangle -\langle Y,\psi _\delta \rangle |^2\\&= \frac{1}{\pi ^2}\int \limits _{\mathbb {R}^2}|\widehat{\psi }_\delta (x+y)|^2| \widehat{f}(\varepsilon y)-1|^2|x|^{-\alpha }|y|^{-\alpha }\,\hbox {d}x\,\hbox {d}y\\&+\frac{1}{\pi ^2}\int \limits _{\mathbb {R}^2}|\widehat{\psi }_\delta (x+y)|^2 (\widehat{f}(\varepsilon y)-1)(\overline{\widehat{f}(\varepsilon x)-1}) |x|^{-\alpha }|y|^{-\alpha }\,\hbox {d}x\,\hbox {d}y\\&\le \frac{2}{\pi ^2}\int \limits _{\mathbb {R}^2}|\widehat{\psi }(x+y)|^2| \widehat{f}(\varepsilon y)-1|^2|x|^{-\alpha }|y|^{-\alpha }\,\hbox {d}x\,\hbox {d}y. \end{aligned}$$

Hence, (6.29) follows by (6.7) and (2.16).

Finally, from (3.3) and Lemma 3.1, it is easy to see that

$$\begin{aligned} E|\langle Y,\psi _\delta \rangle -\langle Y,\psi \rangle |^2=\frac{2}{\pi ^2}\int \limits _{\mathbb {R}^2}|\widehat{\psi }(x+y)|^2| \widehat{f}(\delta (x+y))-1|^2|x|^{-\alpha }|y|^{-\alpha }\,\hbox {d}x\,\hbox {d}y, \end{aligned}$$

which tends to \(0\) as \(\delta \rightarrow 0\) (cf. (6.31)). \(\square \)

Proof of Theorem 4.1

Fix \(0\le s<t\). By (2.1) and (2.2), we know that

$$\begin{aligned} E\hbox {e}^{i[z_1(\xi _t-\xi _s)+z_2(\xi _{t+\tau }-\xi _{s+\tau })]} =\mathrm{exp} \left\{ \frac{1}{2}\sum ^\infty _{k=2}\frac{(2i\sigma )^k}{k} \widetilde{R}_k(z_1,z_2,s,t,\tau )\right\} , \end{aligned}$$
(6.34)

where \(\widetilde{R}_k(z_1,z_2,s,t,\tau )\) is the right-hand side of (2.2) with

$$\begin{aligned} \psi (x)=z_11\!\!1_{(s,t]}+z_21\!\!1_{(s+\tau ,t+\tau ]}, \end{aligned}$$
(6.35)

and formula (6.34) holds for \(z_1,z_2\in {\mathbb {R}}\) and \(\tau >0\) such that the series in (6.34) converges.

To continue with the proof, we need the following lemma and corollary.

Lemma 6.5

There exists \(C(s,t)>0\) independent of \(\tau \) such that

$$\begin{aligned} |\widetilde{R}_k(z_1,z_2,s,t,\tau )|\le (|z_1|+|z_2|)^k(C(s,t))^k,\quad k=2,3,\ldots . \end{aligned}$$
(6.36)

An immediate consequence of this lemma is

Corollary 6.6.

For all \(0<s<t\) there exists a neighborhood \(U(s,t)\) of \(0\) in \(\mathbb {R}^2\) such that (6.34) holds for \((z_1,z_2)\in U(s,t)\) and all \(\tau >0\).

Proof of Lemma 6.5

Consider the integral

$$\begin{aligned} I_k(\varepsilon _1,\ldots ,\varepsilon _k) =\int \limits ^{t+\varepsilon _1\tau }_{s+\varepsilon _1\tau }\ldots \int \limits ^{t+\varepsilon _k\tau } _{s+\varepsilon _k\tau }|x_1-x_2|^{H-1}\ldots |x_{k-1}-x_k|^{H-1}|x_k-x_1|^{H-1}\,\hbox {d}x_1 \ldots \hbox {d}x_k, \end{aligned}$$
(6.37)

where \((\varepsilon _1,\ldots ,\varepsilon _k)\in \{0,1\}^k\). For each \(j\), we substitute \(x'_j=x_j-\varepsilon _j\tau \) and use

$$\begin{aligned} |y-x+\tau |^{H-1}\le |y-x|^{H-1}\quad \mathrm{for}\quad x,y\in [s,t],\tau >\tau _0(s,t). \end{aligned}$$

Hence, similarly to formula (13) in [36],

$$\begin{aligned} I_k(\varepsilon _1,\ldots ,\varepsilon _k)\le (C(s,t))^k. \end{aligned}$$
(6.38)

The same inequality holds for \(\tau \le \tau _0(s,t)\).

By (6.35) and (2.2),

$$\begin{aligned} \widetilde{R}_k(z_1,z_2,s,t,\tau )=z^k_1I_k(0,\ldots ,0)+ \widetilde{\widetilde{R}}_k(z_1,z_2,s,t,\tau )+z^k_2I_k(0,\ldots ,0), \end{aligned}$$
(6.39)

where

$$\begin{aligned} \widetilde{\widetilde{R}}_k(z_1,z_2,s,t,\tau )=\sum ^{k-1}_{j=1}z^{k-j}_1z^j_2 {\mathop {\mathop {\sum }_{(\varepsilon _1,\ldots ,\varepsilon _k)\in \{0,1\}^k}}\limits _{ {\varepsilon _1+\ldots +\varepsilon _k=j}}}I_k(\varepsilon _1,\ldots ,\varepsilon _k). \end{aligned}$$
(6.40)

Hence, (6.38) implies

$$\begin{aligned} |\widetilde{R}_k(z_1,z_2,s,t,\tau )|\le \sum ^k_{j=0} \left( \begin{array}{l}k\\ j\end{array}\right) |z_1|^{k-j}|z_2|^j(C(s,t))^k =(|z_1|+|z_2|)^k(C(s,t))^k, \end{aligned}$$
(6.41)

which proves the lemma. \(\square \)

We return to the proof of the theorem.

Let \(U(s,t)\) be as in Corollary 6.6. It is now clear that for \((z_1,z_2)\) in \(U(s,t)\), the function \(D^\xi _\tau (z_1,z_2,s,t)\) given by (4.3) is well defined for all \(\tau >0\).

Moreover, by (4.3), (4.2), (6.34) and (6.39),

$$\begin{aligned} D^\xi _\tau (z_1,z_2,s,t)=\left| \frac{1}{2}\sum ^\infty _{k=2}\frac{(2i\sigma )^k}{k} \widetilde{\widetilde{R}}_k(z_1,z_2,s,t,\tau )\right| \end{aligned}$$
(6.42)

It is not difficult to see that, due to the circular form of the integrand in each \(I_k\) in (6.40), after the substitution \(x'_j=x_j-\varepsilon _j\tau \) (see (6.37)) there appear at least two factors of the form \(|y-x+\tau |^{H-1}\), \(y,x\in [s,t]\), and for some \((\varepsilon _1,\ldots ,\varepsilon _k)\) there are exactly two such factors. Each such factor is of order \(\tau ^{H-1}\) as \(\tau \rightarrow \infty \), so two of them produce the rate \(\tau ^{2H-2}\). Hence, for each \(\gamma <2-2H\),

$$\begin{aligned} \lim _{\tau \rightarrow \infty }\tau ^\gamma \widetilde{\widetilde{R}}_k(z_1,z_2,s,t,\tau )=0, \end{aligned}$$

and, for some \(z_1,z_2\) and some \(k\),

$$\begin{aligned} \lim _{\tau \rightarrow \infty }\tau ^{2-2H}\widetilde{\widetilde{R}}_k(z_1,z_2,s,t,\tau )\ne 0. \end{aligned}$$

Hence, the theorem follows by Lemma 6.5. \(\square \)