1 Introduction

In this article we introduce a class of stationary processes, called oscillating Gaussian processes, defined by

$$\begin{aligned} X_t = \alpha _+ Y_t\mathbf{1}_{Y_t > 0} + \alpha _- Y_t\mathbf{1}_{Y_t < 0}, t \in T, \end{aligned}$$
(1.1)

where \(\alpha _+\) and \(\alpha _-\) are constants and Y is a stationary Gaussian process.

Our motivation stems partially from the connection to stochastic differential equations driven by the fractional Brownian motion with discontinuous diffusion coefficient, studied in Garzón et al. (2017).

During the past two decades, interest in the existence and uniqueness of solutions to stochastic differential equations driven by a fractional Brownian motion has been very intense, and there have been many advances in their theory and applications. In particular, strong solutions of the following stochastic differential equation (SDE for short)

$$\begin{aligned} X_t = X_0 + \int _0^t b(s,X_s) ds + \int _0^t \sigma (s,X_s) dB^H_s, \end{aligned}$$
(1.2)

under the usual conditions on the coefficients, such as Lipschitz continuity and linear growth, were obtained by Nualart and Rǎşcanu (2002), and have since been considered by many authors; see Mishura (2008) and the references therein.

Nevertheless, the case of SDEs with discontinuous coefficients has been less explored. Most of the studies of stochastic differential equations driven by a fractional Brownian motion with discontinuous coefficients concern a discontinuous drift coefficient (for \(H>1/2\)). In this direction, Mishura and Nualart (2004) studied a drift that is Hölder continuous except at a finite number of points. Another class of discontinuity in SDEs driven by a fractional Brownian motion arises from adding a Poisson process to the equation. In Bai and Ma (2015), extending the results given in Mishura and Nualart (2004), the authors proved the existence of a strong solution of this kind of SDE driven by a fractional Brownian motion and a Poisson point process. To the best of our knowledge, in the fractional Brownian motion framework, the only preliminary work studying equations with a discontinuous diffusion coefficient is Garzón et al. (2017). There the authors proved the existence and uniqueness of solutions to the SDE driven by the fractional Brownian motion \(B^H\) with \(H>\frac{1}{2}\) given by

$$\begin{aligned} X_t = X_0 + \int _0^t \sigma (X_s) d B^H_s \quad , \quad t \ge 0, \end{aligned}$$
(1.3)

where the function \(\sigma \) is given by

$$\begin{aligned} \sigma (x) = \frac{1}{\alpha } \mathbf{1}_{x \ge 0} + \frac{1}{1-\alpha } \mathbf{1}_{x < 0}, \ \alpha \in \left( 0,\frac{1}{2}\right) . \end{aligned}$$
(1.4)

The authors showed that the explicit solution to the equation (1.3) is

$$\begin{aligned} X_t = \alpha B_t^H \mathbf{1}_{B^H_t > 0} + (1- \alpha ) B^H_t\mathbf{1}_{B^H_t < 0} , \quad \quad t \ge 0. \end{aligned}$$
(1.5)

It is straightforward to see that the existence and uniqueness of the explicit solution to equation (1.3) also hold if \(\alpha \) and \(1-\alpha \) are replaced with \(\alpha _+\) and \(\alpha _-\) satisfying \( 0< \alpha _- < \alpha _+\) (or \(0< \alpha _+ < \alpha _-\), respectively). Thus, motivated by Eq. (1.5), we define the Oscillating Gaussian process directly by (1.1). Within the related literature, we also mention the preprint Torres and Viitasaari (2019), where the existence and uniqueness of the solution to Eq. (1.3) with a more general discontinuous \(\sigma \) and a more general driving force was proved.

One of the reasons why SDEs with discontinuous diffusion coefficient are interesting is their relation to the Skew Brownian motion. In the Brownian motion framework, the Skew Brownian motion appeared as a natural generalization of the Brownian motion. The Skew Brownian motion is a process that behaves like a Brownian motion except that the sign of each excursion is chosen using an independent Bernoulli random variable with parameter \(\alpha \in (0 , 1)\). For \(\alpha = 1/2,\) the process corresponds to a Brownian motion. This process is a Markov process and a semi-martingale. Moreover, it is the strong solution to the following SDE involving local time (see Lejay 2006 for a survey):

$$\begin{aligned} X_t = x + B_t + (2\alpha - 1) L^0_t(X), \end{aligned}$$
(1.6)

where \(L_t^0(X)\) is the symmetric local time of X at 0. In the case of the Brownian motion, it follows from the Itô-Tanaka formula that the Eqs. (1.6) and (1.3) with \(\sigma (x) = \frac{1}{\alpha } \mathbf{1}_{\{x \ge 0\}} + \frac{1}{1-\alpha } \mathbf{1}_{ \{x < 0\}}\) are equivalent. For a comprehensive survey on Skew Brownian motion, see the work by Lejay (2006).

In the case of the fractional Brownian motion, the Tanaka type formulas are more complicated and no relations between the two types of equations are known to exist. The motivation for the authors in Garzón et al. (2017) to study Eq. (1.3) stemmed from this fact.

To the best of our knowledge, Lejay and Pigato (2018) is the only study that considers the inference of parameters related to SDEs with a discontinuous diffusion coefficient. The study considers the case of a discontinuous diffusion coefficient that can only attain two different values. More precisely, the authors of Lejay and Pigato (2018) studied the so-called oscillating Brownian motion that is a solution to the SDE

$$\begin{aligned} X_t = x + \int _0^t \sigma (X_s) dW_s, \end{aligned}$$
(1.7)

where W is a standard Brownian motion and \(\sigma (x) = \alpha _+\mathbf{1}_{x \ge 0} + \alpha _- \mathbf{1}_{x < 0}, \quad x \in \mathbb {R}\). It is worth noting that while in the fractional Brownian motion case (1.5) solves (1.3) and we have adopted the name oscillating from Lejay and Pigato (2018) into our definition (1.1), in the Brownian motion case the solution of (1.7) is not given by (1.1) with \(Y=W\).

The authors in Lejay and Pigato (2018) proposed two natural consistent estimators, which are variants of the integrated volatility estimator. Moreover, the stable convergence of the renormalised estimators towards a certain Gaussian mixture was proven. The estimators are given by

$$\begin{aligned} \hat{\alpha }_+ = \sqrt{\frac{\sum _{k=1}^n \left( X_{{t_k}} - X_{{t_{k-1}}} \right) ^2 }{\sum _{k=1}^{n} \mathbf{1}_{X_{{t_k}} \ge 0} }}, \quad \hat{\alpha }_- = \sqrt{\frac{\sum _{k=1}^n \left( X_{{t_k}} - X_{{t_{k-1}}} \right) ^2 }{\sum _{k=1}^{n} \mathbf{1}_{X_{{t_k}} \le 0} }}. \end{aligned}$$
(1.8)

Note that when the paths are strictly positive or strictly negative, only one of the estimators can be computed.

In addition to the above mentioned links to SDEs and the skew Brownian motion, we note that (1.1) could be applied in various other modelling scenarios as well, making the oscillating Gaussian process an interesting object of study. For example, (1.1) can be viewed as a model for situations where the variance changes by region.

One of the main interests of this paper is the estimation of the model parameters \(\alpha _+\) and \(\alpha _-\). One natural idea is to study estimators defined by (1.8), where X is given by (1.1). However, in the general case one cannot apply the independence of the Brownian increments. Thus, while the behaviour of the quadratic variation terms in the numerators of (1.8) is rather well-understood in the case of Gaussian processes (see Viitasaari 2019 for a survey on the topic), it becomes much more complicated in the case of non-Gaussian X defined by (1.1). Instead of applying (1.8), we define estimators based on moments and study their asymptotic properties. Moreover, we show that our moment based approach can be applied for a large class of driving Gaussian processes Y in (1.1).

Estimation of moments in a stationary model \(f(Y_t)\), with some general function f and a stationary Gaussian process Y, has been extensively studied in the literature. Results on the central limit theorems in this context date back to Breuer and Major (1983), where the authors provided sufficient conditions for the function f and for the covariance function \(r(t) = \mathbb {E}(Y_tY_0)\) of Y that ensure the limiting normality of the properly normalised average of \(f(Y_t)\). While in Breuer and Major (1983) a discrete time \(t\in \mathbb {Z}\) was considered, a similar statement holds also in continuous time corresponding to our case, see e.g. Campese et al. (2018). Similarly, in Dobrushin and Major (1979) and Taqqu (1975, 1979) the authors studied the limiting behaviour of the moment estimator in the case where the assumptions of Breuer and Major (1983) are violated. In this case, one usually needs a different normalisation instead of the standard \(\sqrt{T}\), and the limiting distribution is not necessarily Gaussian.

After the development of the powerful Stein-Malliavin approach for central limit theorems in Nourdin and Peccati (2009), interest in studying the behaviour of the moment estimator in the model \(f(Y_t)\) increased again, and it is currently a topic of active research. For articles on the topic, we refer to Bai and Taqqu (2013), Campese et al. (2018), Nourdin and Nualart (2018), Nourdin and Peccati (2010), Nourdin et al. (2019) and the references therein, to name just a few recent ones.

The rest of the paper is organised as follows. In Sect. 2, we introduce the oscillating Gaussian processes and study their basic properties such as moments, covariance structures, and continuity properties. Section 3 is devoted to model calibration. We begin by showing that the moment estimators are consistent and satisfy central limit theorems under suitable assumptions on the driving Gaussian process. On top of that, we also consider the special case \(|\alpha _+|=|\alpha _-|\) in detail, and we study the corresponding estimators based on discrete observations. In Sect. 3.4, we briefly discuss how the Lamperti transform can be used to study oscillating Gaussian processes driven by self-similar Gaussian noise, and as a particular example, we apply the method to the case of the bifractional Brownian motion. We end the paper with a short summary and a discussion about future prospects.

2 Oscillating Gaussian processes

Throughout this section we consider Gaussian oscillating processes \(X=(X_t)_{t\ge 0}\) defined by

$$\begin{aligned} X_t = \alpha _+ Y_t\mathbf{1}_{Y_t > 0} + \alpha _- Y_t\mathbf{1}_{Y_t < 0}, \end{aligned}$$
(2.1)

where \(Y = (Y_t)_{t\ge 0}\) is a stationary centered Gaussian process and \(\alpha _+\) and \(\alpha _-\) are positive parameters such that \(\alpha _+\ne \alpha _-\). Note that \(\alpha _+\) and \(\alpha _-\) describe the magnitude of the variations of X on different regions. Our goal is to estimate the unknown parameters \(\alpha _+\) and \(\alpha _-\). In order to do this, we assume that \(\mathbb {E}(Y_t^2)=1\). Note that the general case \(\mathbb {E}(Y_t^2) = \sigma ^2\) can be written as

$$\begin{aligned} X_t = \alpha _+ \sigma \tilde{Y}_t\mathbf{1}_{\tilde{Y}_t > 0} + \alpha _-\sigma \tilde{Y}_t\mathbf{1}_{\tilde{Y}_t < 0}, \end{aligned}$$

where now \(\mathbb {E}(\tilde{Y}^2_t) = 1\). We also assume that the parameters \(\alpha _+\) and \(\alpha _-\) are both strictly positive. Finally, without any loss of generality we assume that Y is continuous (recall that, by Belyaev (1960), a stationary Gaussian process is either continuous or unbounded on every interval) implying that the covariance function r is continuous as well.

Remark 1

We stress that our assumption \(\alpha _+,\alpha _->0\) is not very restrictive, and it is straightforward to extend our analysis to other cases. Indeed, the case \(\alpha _+,\alpha _-<0\) follows directly by symmetry. In the case \(\alpha _-<0<\alpha _+\) (or \(\alpha _->0>\alpha _+\), by a symmetry argument), we simply choose the negative solution for \(\alpha _-\) in Lemma 2.3 (cf. Remark 2). We note, however, that such extensions are straightforward to obtain precisely because we defined X directly by (2.1) instead of restricting ourselves to the situation where X is a solution to the SDE (1.3), in which case the solution is known to exist and to be of the form (2.1) only for \(\alpha _-,\alpha _+>0\).

Definition 2.1

(Oscillating Gaussian process (OGP)). Let Y be a centered stationary Gaussian process with variance \(\sigma ^2=1\) and covariance function \(r(t){= \mathbb {E}(Y_tY_0)}\), and let \(\alpha _+,\alpha _->0, \alpha _+\ne \alpha _-\) be constants. We define the oscillating version X of Y by

$$\begin{aligned} X_t = \alpha _+ Y_t\mathbf{1}_{Y_t > 0} + \alpha _- Y_t\mathbf{1}_{Y_t < 0}. \end{aligned}$$
(2.2)
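For concreteness, a path of (2.2) can be simulated once a driving stationary Gaussian process is fixed. The following minimal sketch uses a stationary Ornstein-Uhlenbeck process for Y purely as an illustration; the function names, parameter values and choice of driver are our own assumptions and not part of the definition.

```python
import numpy as np

def simulate_ou(n_steps, dt, theta=1.0, seed=None):
    """Exact simulation of a stationary Ornstein-Uhlenbeck process with unit
    stationary variance, used here only as one possible driver Y."""
    rng = np.random.default_rng(seed)
    y = np.empty(n_steps + 1)
    y[0] = rng.standard_normal()           # start from the stationary law N(0, 1)
    phi = np.exp(-theta * dt)
    sd = np.sqrt(1.0 - phi**2)
    for k in range(n_steps):
        y[k + 1] = phi * y[k] + sd * rng.standard_normal()
    return y

def oscillate(y, a_plus, a_minus):
    """Apply the map (2.2): X = a_+ Y 1_{Y>0} + a_- Y 1_{Y<0}."""
    return np.where(y > 0, a_plus * y, a_minus * y)

# Illustrative path on [0, 100] with alpha_+ = 2 and alpha_- = 1.
dt = 1e-3
y = simulate_ou(n_steps=100_000, dt=dt, seed=1)
x = oscillate(y, a_plus=2.0, a_minus=1.0)
```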

In the following lemmas we compute the moments and covariances of the OGP X defined in (2.2).

Lemma 2.2

Let \(n\ge 1\) be an integer and \(t\ge 0\) arbitrary. Then

$$\begin{aligned} \mu _n := \mathbb {E}(X_0^n) = \mathbb {E}(X_t^n) = \frac{2^{\frac{n}{2}}\Gamma \left( \frac{n+1}{2}\right) }{2\sqrt{\pi }}(\alpha _+^n+(-1)^n\alpha _-^n). \end{aligned}$$

Proof

By the definition of OGP, we have

$$\begin{aligned} X_t^n = \alpha _+^nY_t^n\mathbf{1}_{Y_t > 0} + \alpha _-^n Y_t^n\mathbf{1}_{Y_t < 0}. \end{aligned}$$

Since Y is a centered stationary Gaussian process we have

$$\begin{aligned} \mathbb {E}(Y_t^n\mathbf{1}_{Y_t >0}) = \int _0^\infty \frac{x^n}{\sqrt{2\pi }}e^{-\frac{x^2}{2}}dx = \frac{1}{2}\int _{-\infty }^\infty \frac{|x|^n}{\sqrt{2\pi }}e^{-\frac{x^2}{2}}dx = \frac{1}{2}\mathbb {E}|N|^n, \end{aligned}$$
(2.3)

where \(N\sim \mathcal {N}(0,1)\). Similarly,

$$\begin{aligned} \mathbb {E}(Y_t^n\mathbf{1}_{Y_t <0}) = (-1)^n\frac{1}{2}\mathbb {E}|N|^n. \end{aligned}$$
(2.4)

Now the well-known formula \(\mathbb {E}|N|^n = \frac{2^{\frac{n}{2}}\Gamma \left( \frac{n+1}{2}\right) }{\sqrt{\pi }}\) for a standard normal variable implies the claim. \(\square \)
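As a quick numerical sanity check of Lemma 2.2 (not part of the proof), one can compare Monte Carlo averages of \(X_t^n\), simulated from the exact stationary law of \(X_t\), against the closed-form moments; the parameter values below are illustrative.

```python
import numpy as np
from math import gamma, pi, sqrt

def mu_n(n, a_plus, a_minus):
    """Closed-form n-th moment of X_t from Lemma 2.2."""
    return 2**(n / 2) * gamma((n + 1) / 2) / (2 * sqrt(pi)) \
        * (a_plus**n + (-1)**n * a_minus**n)

rng = np.random.default_rng(0)
a_plus, a_minus = 2.0, 1.0
y = rng.standard_normal(10**6)                  # Y_t ~ N(0, 1)
x = np.where(y > 0, a_plus * y, a_minus * y)    # X_t as in (2.2)

for n in (1, 2, 3, 4):
    print(n, np.mean(x**n), mu_n(n, a_plus, a_minus))
```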

The following lemma allows us to compute the parameters \(\alpha _+\) and \(\alpha _-\) in terms of the moments.

Lemma 2.3

Let \(t>0\) be arbitrary. Then

$$\begin{aligned} \alpha _+ = \sqrt{\frac{\pi }{2}}\mu _1 + \frac{1}{2}\sqrt{4\mu _2-2\pi (\mu _1)^2} \end{aligned}$$
(2.5)

and

$$\begin{aligned} \alpha _- = -\sqrt{\frac{\pi }{2}}\mu _1 + \frac{1}{2}\sqrt{4\mu _2-2\pi (\mu _1)^2}. \end{aligned}$$
(2.6)

Proof

Since \(\Gamma (1) =1\) and \(\Gamma \left( \frac{3}{2}\right) = \frac{\sqrt{\pi }}{2}\), Lemma 2.2 yields

$$\begin{aligned} \mu _1 = \frac{1}{\sqrt{2\pi }}\left( \alpha _+-\alpha _-\right) \end{aligned}$$

and

$$\begin{aligned} \mu _2 = \frac{1}{2}\left( \alpha _+^2+\alpha _-^2\right) . \end{aligned}$$

From the first equality we get

$$\begin{aligned} \alpha _+ = \alpha _- + \sqrt{2\pi }\mu _1. \end{aligned}$$

Plugging this into the second equality and performing some simple manipulations gives

$$\begin{aligned} 2\alpha _-^2 + 2\sqrt{2\pi }\mu _1 \alpha _- + 2\pi \mu _1^2 -2\mu _2 = 0. \end{aligned}$$
(2.7)

Now

$$\begin{aligned} 4\mu _2 -2\pi \mu _1^2 = 2\alpha _+^2+2\alpha _-^2 - (\alpha _+ - \alpha _-)^2 = (\alpha _+ + \alpha _-)^2 > 0, \end{aligned}$$

and since \(\alpha _->0\), we obtain the result. \(\square \)
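The inversion of Lemma 2.3 is easy to verify numerically; the following round-trip check (with illustrative parameter values) maps \((\alpha _+,\alpha _-)\) to \((\mu _1,\mu _2)\) via Lemma 2.2 and back via (2.5)-(2.6).

```python
import numpy as np

def alphas_from_moments(mu1, mu2):
    """Recover (alpha_+, alpha_-) from (mu_1, mu_2) as in (2.5)-(2.6),
    assuming alpha_+, alpha_- > 0."""
    root = np.sqrt(4 * mu2 - 2 * np.pi * mu1**2)
    a_plus = np.sqrt(np.pi / 2) * mu1 + root / 2
    a_minus = -np.sqrt(np.pi / 2) * mu1 + root / 2
    return a_plus, a_minus

a_plus, a_minus = 2.0, 1.0
mu1 = (a_plus - a_minus) / np.sqrt(2 * np.pi)   # Lemma 2.2 with n = 1
mu2 = (a_plus**2 + a_minus**2) / 2              # Lemma 2.2 with n = 2
print(alphas_from_moments(mu1, mu2))            # ~ (2.0, 1.0)
```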

Remark 2

Note that in the proof of Lemma 2.3 we applied the assumption \(\alpha _->0\). In the case \(\alpha _-<0<\alpha _+\), one has to choose the other solution to Eq. (2.7) yielding

$$\begin{aligned} \alpha _- = -\sqrt{\frac{\pi }{2}}\mu _1 - \frac{1}{2}\sqrt{4\mu _2-2\pi (\mu _1)^2}. \end{aligned}$$

In the next lemma, we derive the covariance function of the process X. That allows us to obtain consistency for our estimators.

Lemma 2.4

Let \(N_1\sim \mathcal {N}(0,1)\) and \(N_2\sim \mathcal {N}(0,1)\) be such that \(Cov(N_1,N_2) = a\). Then

$$\begin{aligned} \mathbb {E}(N_1^mN_2^n\mathbf{1}_{N_1,N_2>0})=2^{\frac{n+m-4}{2}}\pi ^{-1}(1-a^2)^{\frac{n+m-1}{2}}\sum _{r=0}^\infty \frac{(4a)^r}{r!}\Gamma \left( \frac{n+r+1}{2}\right) \Gamma \left( \frac{m+r+1}{2}\right) \end{aligned}$$

Proof

We have

$$\begin{aligned} \mathbb {E}(N_1^mN_2^n\mathbf{1}_{N_1,N_2>0}) = \frac{1}{2\pi \sqrt{1-a^2}}\int _0^\infty \int _0^\infty x^my^n e^{-\frac{x^2+y^2-2axy}{2(1-a^2)}}dx dy. \end{aligned}$$

Change of variables \(u = \frac{x}{\sqrt{2(1-a^2)}}\) and \(v=\frac{y}{\sqrt{2(1-a^2)}}\) gives

$$\begin{aligned} \mathbb {E}(N_1^mN_2^n\mathbf{1}_{N_1,N_2>0}) = 2^{\frac{n+m}{2}}\pi ^{-1}(1-a^2)^{\frac{n+m-1}{2}}\int _0^\infty \int _0^\infty u^mv^n e^{-u^2-v^2+2auv}du dv, \end{aligned}$$

and using formula 3.5-5 in Rice (1944) we obtain

$$\begin{aligned} \int _0^\infty \int _0^\infty u^mv^n e^{-u^2-v^2+2auv}du dv = \frac{1}{4}\sum _{r=0}^\infty \frac{(4a)^r}{r!}\Gamma \left( \frac{n+r+1}{2}\right) \Gamma \left( \frac{m+r+1}{2}\right) . \end{aligned}$$

This proves the claim. \(\square \)

We are interested in the asymptotic behaviour of covariance functions which in turn translates into asymptotic properties of our estimators. For this purpose we adopt the standard Landau notation \(f(x) = O(g(x))\) as \(x\rightarrow L \in [-\infty ,\infty ]\) meaning that, asymptotically as x is close to L, we have \(|f(x)| \le C|g(x)|\) for some constant C.

Corollary 2.5

Let \(N_1\sim \mathcal {N}(0,1)\) and \(N_2\sim \mathcal {N}(0,1)\) be such that \(Cov(N_1,N_2) = a{\in [-1,1]}\), and let \(n\ge 1\) be an integer. Then, as \(a\rightarrow 0\), we have

$$\begin{aligned} \mathbb {E}(N_1^nN_2^n\mathbf{1}_{N_1,N_2>0})=2^{n-2}\pi ^{-1}\Gamma \left( \frac{n+1}{2}\right) ^2 + O(|a|) \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}(N_1^nN_2^n\mathbf{1}_{N_1>0,N_2<0})=(-1)^n 2^{n-2}\pi ^{-1}\Gamma \left( \frac{n+1}{2}\right) ^2 + O(|a|). \end{aligned}$$

Proof

It follows from Lemma 2.4 that

$$\begin{aligned} \mathbb {E}(N_1^nN_2^n\mathbf{1}_{N_1,N_2>0}) = 2^{n-2}\pi ^{-1}\Gamma \left( \frac{n+1}{2}\right) ^2 (1-a^2)^{\frac{2n-1}{2}}+ O(|a|). \end{aligned}$$

Now, the first claim follows from the fact that

$$\begin{aligned} (1-a^2)^{\frac{2n-1}{2}} = 1 + O(|a|). \end{aligned}$$

The second claim follows similarly since

$$\begin{aligned} \mathbb {E}(N_1^nN_2^n\mathbf{1}_{N_1>0,N_2<0}) = (-1)^n\mathbb {E}(N_1^n(-N_2)^n\mathbf{1}_{N_1>0,-N_2>0}). \end{aligned}$$

\(\square \)

Corollary 2.6

Let X be the oscillating Gaussian process defined in (2.2) and let \(r(t) = \mathbb {E}(Y_tY_0)\) be the covariance function of the associated Gaussian process Y. Suppose further that \(r(t) \rightarrow 0\) as \(|t|\rightarrow \infty \). Then, as \(|t-s|\rightarrow \infty \), we have

$$\begin{aligned} Cov(X_t^n,X_s^n) =O(|r(t-s)|). \end{aligned}$$

Proof

We have

$$\begin{aligned} X_t^n X_s^n&= \alpha _+^{2n}Y_t^nY_s^n\mathbf{1}_{Y_t,Y_s> 0} + \alpha _-^{2n} Y_t^nY_s^n\mathbf{1}_{Y_t,Y_s< 0} \\&\quad + \alpha ^n_+\alpha ^n_-(Y_t^nY_s^n\mathbf{1}_{Y_t>0,Y_s<0}+Y_t^nY_s^n\mathbf{1}_{Y_t<0,{Y}_s>0}). \end{aligned}$$

Taking expectation and using Corollary 2.5 we get

$$\begin{aligned} \mathbb {E}(X_t^n X_s^n) = 2^{n-2}\pi ^{-1}\Gamma \left( \frac{n+1}{2}\right) ^2\left( \alpha _+^{2n}+\alpha _-^{2n} + 2(-1)^n\alpha ^n_+\alpha ^n_-\right) + O(|r(t-s)|). \end{aligned}$$

Lemma 2.2 now implies the claim. \(\square \)

We also give the following result providing the multidimensional density of our process.

Proposition 2.7

Let X be the oscillating Gaussian process defined by (2.2) and let \(n\in \mathbb {N}\) and \(0\le t_1<t_2<\ldots <t_n\) be fixed. Suppose that the Gaussian random vector \((Y_{t_1},Y_{t_2},\ldots ,Y_{t_n})\) is non-degenerate and denote its density function by \(\phi _Y\). Then the law of the random vector \((X_{t_1},X_{t_2},\ldots ,X_{t_n})\) is absolutely continuous with respect to the Lebesgue measure with a density given by

$$\begin{aligned} f_{t_1,t_2,\ldots ,t_n}(z_1,z_2,\ldots ,z_n) = \phi _Y\left( \frac{z_1}{s(z_1)},\frac{z_2}{s(z_2)},\ldots ,\frac{z_n}{s(z_n)}\right) \prod _{k=1}^n \frac{1}{s(z_k)}, \end{aligned}$$
(2.8)

where

$$\begin{aligned} s(z) = \alpha _-\mathbf{1}_{z\le 0} + \alpha _+\mathbf{1}_{z>0}. \end{aligned}$$

In particular, for a fixed \(t\in \mathbb {R}\) the density function of \(X_t\) is given by

$$\begin{aligned} f_t(z) = \frac{1}{\alpha _-}\phi \left( \frac{z}{\alpha _-}\right) \mathbf{1}_{z\le 0} + \frac{1}{\alpha _+}\phi \left( \frac{z}{\alpha _+}\right) \mathbf{1}_{z>0}, \end{aligned}$$

where \(\phi (z)\) is the density of a standard normal distribution.

Proof

From the very definition of (2.2) we obtain immediately that, for any \(t\in \mathbb {R}\) and any \(z\in \mathbb {R}\),

$$\begin{aligned} \left\{ X_t \le z\right\} = \left\{ Y_t \le \frac{z}{s(z)}\right\} . \end{aligned}$$

Thus the distribution function of the vector \((X_{t_1},X_{t_2},\ldots ,X_{t_n})\) is given by

$$\begin{aligned} \mathbb {P}\left( X_{t_1}\le z_1,\ldots , X_{t_n}\le z_n\right) = \mathbb {P}\left( Y_{t_1}\le \frac{z_1}{s(z_1)},\ldots , Y_{t_n}\le \frac{z_n}{s(z_n)}\right) . \end{aligned}$$

Clearly this function is continuously differentiable, with bounded derivative given by (2.8), on every subset \(B\subset \mathbb {R}^n\) on which each of the variables \(z_i\) is non-zero and keeps a constant sign. Thus, the law of \((X_{t_1},X_{t_2},\ldots ,X_{t_n})\) restricted to any such B is absolutely continuous with density given by (2.8). Next we observe that we may split \(\mathbb {R}^n\) into

$$\begin{aligned} B_0 = \{z\in \mathbb {R}^n: z_k = 0\text { for some } k\} \end{aligned}$$

and \(2^n\) subsets \(B_j,j=1,2,\ldots ,2^n\), on each of which every variable is non-zero and of constant sign. The set \(B_0\) is clearly of zero measure, and since the law of \((X_{t_1},X_{t_2},\ldots ,X_{t_n})\) is absolutely continuous with density (2.8) on each \(B_j\), the result follows by a covering argument. \(\square \)

Remark 3

We remark that the distribution function of \((X_{t_1},X_{t_2},\ldots ,X_{t_n})\) is not differentiable in the set \(B_0\). However, since \(B_0\) is a (Lebesgue) zero set, we may define the density in \(B_0\) arbitrarily, and the density is understood in the usual sense as equivalence classes of \(L^1\) functions. In particular, we have defined it through (2.8) in the whole \(\mathbb {R}^n\).
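As a small illustration of the one-dimensional density in Proposition 2.7 (with illustrative parameter values), one can check numerically that it integrates to one and reproduces the first moment of Lemma 2.2.

```python
import numpy as np

def ogp_density_1d(z, a_plus, a_minus):
    """One-dimensional density f_t(z) of X_t from Proposition 2.7."""
    z = np.asarray(z, dtype=float)
    s = np.where(z > 0, a_plus, a_minus)        # s(z) from the proposition
    return np.exp(-(z / s)**2 / 2) / (s * np.sqrt(2 * np.pi))

a_plus, a_minus = 2.0, 1.0
z = np.linspace(-10.0, 15.0, 500_001)
dz = z[1] - z[0]
f = ogp_density_1d(z, a_plus, a_minus)
print(np.sum(f) * dz)        # ~ 1
print(np.sum(z * f) * dz)    # ~ (a_plus - a_minus) / sqrt(2 pi) = mu_1
```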

We end this section with the following result that ensures the path continuity of the OGP X.

Proposition 2.8

Let X be the oscillating Gaussian process defined by (2.2). If Y has Hölder continuous paths of order \(\gamma \in (0,1]\) almost surely, then so does X.

Proof

Since functions \(x\mathbf{1}_{x>0}\) and \(x\mathbf{1}_{x<0}\) are Lipschitz continuous, it follows that

$$\begin{aligned} f_{\alpha _+,\alpha _-}(x) = \alpha _+ x\mathbf{1}_{x>0} + \alpha _-x\mathbf{1}_{x<0} \end{aligned}$$

is Lipschitz continuous as well. The result follows at once. \(\square \)

3 Model calibration

This section is devoted to the estimation of the unknown parameters \(\alpha _+,\alpha _-\) by the method of moments. Throughout, r denotes the covariance function

$$\begin{aligned} r(t) = \mathbb {E}(Y_t Y_0) \end{aligned}$$
(3.1)

of the stationary Gaussian process Y in (2.2). Following the ideas of Lemma 2.3, we define

$$\begin{aligned} \hat{\alpha }_+(T) = \sqrt{\frac{\pi }{2}}\hat{\mu }_1(T) + \frac{1}{2}\sqrt{\left| 4\hat{\mu }_2(T)-2\pi \hat{\mu }^2_1(T)\right| } \end{aligned}$$
(3.2)

and

$$\begin{aligned} \hat{\alpha }_-(T) = \hat{\alpha }_+(T) - {\sqrt{2\pi }}\hat{\mu }_1(T), \end{aligned}$$
(3.3)

where \(\hat{\mu }_i(T), i=1,2\) are the classical moment estimators defined by

$$\begin{aligned} \hat{\mu }_{i}(T) = \frac{1}{T} \int _0^{T} X_u^i du. \end{aligned}$$
(3.4)

Remark 4

Note that here we have taken absolute values inside the square roots in order to obtain real valued estimates for real valued quantities. Since

$$\begin{aligned} 4\mu _2 - 2\pi \mu _1^2 > 0, \end{aligned}$$

this does not affect the asymptotic properties of the estimators.
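In practice the integrals in (3.4) are computed from a sampled path (the discretisation is treated rigorously in Sect. 3.3). The following sketch approximates (3.2)-(3.4) by left-point Riemann sums over an equidistant grid; the function name and the discretisation scheme are our own choices.

```python
import numpy as np

def alpha_estimates(x, dt):
    """Moment estimators (3.2)-(3.3), with the integrals in (3.4) replaced by
    left-point Riemann sums over a path x sampled with constant step dt."""
    T = dt * len(x)
    mu1_hat = dt * np.sum(x) / T               # \hat{mu}_1(T)
    mu2_hat = dt * np.sum(x**2) / T            # \hat{mu}_2(T)
    root = 0.5 * np.sqrt(abs(4 * mu2_hat - 2 * np.pi * mu1_hat**2))
    a_plus_hat = np.sqrt(np.pi / 2) * mu1_hat + root
    a_minus_hat = a_plus_hat - np.sqrt(2 * np.pi) * mu1_hat
    return a_plus_hat, a_minus_hat

# Example: applied to the OU-driven path simulated in Sect. 2,
# alpha_estimates(x, dt=1e-3) should return values close to (2.0, 1.0).
```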

The following result gives us the consistency and can be viewed as one of our main theorems. The proof is postponed to Sect. 3.1.

Theorem 3.1

Assume that r given by (3.1) satisfies \(|r(T)|\rightarrow 0\) as \(T\rightarrow \infty \). Then, for any \(p\ge 1\), we have

$$\begin{aligned} \hat{\alpha }_+(T) \rightarrow \alpha _+ \end{aligned}$$

and

$$\begin{aligned} \hat{\alpha }_-(T) \rightarrow \alpha _- \end{aligned}$$

in \(L^p\), as \(T \rightarrow \infty \).

In order to study the limiting distribution, we need some additional assumptions on the covariance function r.

Assumption 3.1

For the covariance function r of Y, given by (3.1), we assume that one of the following conditions holds:

  (1)

    The covariance function r satisfies \(r\in L^1(\mathbb {R})\).

  (2)

    We have that

    $$\begin{aligned} \lim _{t\rightarrow \infty }{r(t)t = C_2 \in (0, \infty )}. \end{aligned}$$
  (3)

    There exists \(H\in \left( \frac{1}{2},1\right) \) such that

    $$\begin{aligned} \lim _{t\rightarrow \infty }{t^{2-2H}r(t) = C_3 \in (0,\infty )}. \end{aligned}$$

Remark 5

The first condition in Assumption 3.1 corresponds to short-range dependence and the last condition corresponds to long-range dependence. The second condition corresponds to the border case, resulting in a logarithmic factor in our normalising sequence (see Theorem 3.3).

The following theorem gives the central limit theorem for the moment estimators. For the asymptotic covariances let \(C_2,C_3\) be given as in Assumption 3.1 and denote

$$\begin{aligned} \varsigma _1 = \frac{\alpha _++\alpha _-}{2} \end{aligned}$$
(3.5)

and

$$\begin{aligned} \varsigma _2 = \sqrt{\frac{2}{\pi }}\left( \alpha _+^2-\alpha _-^2\right) . \end{aligned}$$
(3.6)

We define matrices \(\Sigma _1,\Sigma _2,\Sigma _3 \in \mathbb {R}^{2\times 2}\) by setting, for \(i,j=1,2\),

$$\begin{aligned} \left( \Sigma _1\right) _{ij} = 2\int _0^\infty \mathbb {E}\left[ (X_0^i-\mu _i)(X_t^j-\mu _j)\right] dt, \end{aligned}$$
(3.7)
$$\begin{aligned} \left( \Sigma _2\right) _{ij} = 2C_2\varsigma _i\varsigma _j, \end{aligned}$$
(3.8)

and

$$\begin{aligned} \left( \Sigma _3\right) _{ij} = \frac{C_3\varsigma _i\varsigma _j}{H(2H-1)}. \end{aligned}$$
(3.9)

Theorem 3.2

Let \(\hat{\mu }_1(T)\) and \(\hat{\mu }_2(T)\) be defined by (3.4), and let \(\hat{\mu }(T) = (\hat{\mu }_1(T),\hat{\mu }_2(T))\) and \(\mu = (\mu _1,\mu _2)\). Furthermore, let r be given by (3.1) and let \(\Sigma _i,i=1,2,3\) be given by (3.7)–(3.9). Then,

  (1)

    if r satisfies the condition (1) of Assumption 3.1,

    $$\begin{aligned} \sqrt{T}\left( \hat{\mu }(T) - \mu \right) \rightarrow \mathcal {N}(0,\Sigma _1) \end{aligned}$$

    in law as \(T \rightarrow \infty \),

  (2)

    if r satisfies the condition (2) of Assumption 3.1,

    $$\begin{aligned} \sqrt{\frac{T}{\log T}}\left( \hat{\mu }(T)-\mu \right) \rightarrow \mathcal {N}(0,\Sigma _2) \end{aligned}$$

    in law as \(T \rightarrow \infty \), and

  (3)

    if r satisfies the condition (3) of Assumption 3.1,

    $$\begin{aligned} T^{1-H}\left( \hat{\mu }(T)-\mu \right) \rightarrow \mathcal {N}(0,\Sigma _3) \end{aligned}$$

    in law as \(T \rightarrow \infty \).

Remark 6

By replacing \(\hat{\mu }_n(T)\) with

$$\begin{aligned} \hat{\mu }_n(t,T) = \frac{1}{T}\int _0^{tT} X_u^n du \end{aligned}$$

and normalising accordingly, one can obtain functional versions of the above limit theorems. That is, in cases (1) and (2) of Theorem 3.2, we obtain convergence in law in the space of continuous functions towards \(\sigma W_t\), where \(W_t\) is a Brownian motion. In the case (3), the limiting process is \(\sigma B^H_t\), where \(B^H\) is the fractional Brownian motion. Indeed, the last case follows from a classical result by Taqqu (1975) and the first case from Campese et al. (2018) and from the fact that all moments of X are finite. However, from a practical point of view, translating these results to functional versions of the estimators \(\hat{\alpha }_+(T)\) and \(\hat{\alpha }_-(T)\) is not feasible. Indeed, this follows from the fact that in the functional central limit theorem for \(\hat{\mu }(t,T)\) the normalisation (subtracting the true value) is done inside the integral, while for \(\hat{\alpha }_+(T)\) and \(\hat{\alpha }_-(T)\) this is done after integration.

Theorems 3.1 and 3.2 now give us the following limiting distributions for the estimators \(\hat{\alpha }_+(T)\) and \(\hat{\alpha }_-(T)\). For the asymptotic covariance matrix, we also introduce the matrix

$$\begin{aligned} J_{\alpha } = \begin{pmatrix} \sqrt{\frac{\pi }{2}}-\frac{\pi \mu _1}{\sqrt{4\mu _2-2\pi (\mu _1)^2}} &{} \frac{1}{\sqrt{4\mu _2-2\pi (\mu _1)^2}} \\ -\sqrt{\frac{\pi }{2}}-\frac{\pi \mu _1}{\sqrt{4\mu _2-2\pi (\mu _1)^2}} &{} \frac{1}{\sqrt{4\mu _2-2\pi (\mu _1)^2}} \end{pmatrix} \end{aligned}$$
(3.10)

that is the Jacobian with respect to the variables \(\mu _1\) and \(\mu _2\) of the transformations (2.5) and (2.6).

Theorem 3.3

Let \(\hat{\alpha }_+(T)\) and \(\hat{\alpha }_-(T)\) be defined by (3.2) and (3.3), respectively, and let \(\hat{\alpha }(T) = (\hat{\alpha }_+(T),\hat{\alpha }_-(T))\) and \(\alpha = (\alpha _+,\alpha _-)\). Furthermore, let r be given by (3.1), \(\Sigma _i,i=1,2,3\) be given by (3.7)–(3.9), and \(J_\alpha \) by (3.10). Then,

  (1)

    if r satisfies the condition (1) of Assumption 3.1,

    $$\begin{aligned} \sqrt{T}\left( \hat{\alpha }(T)-\alpha \right) \rightarrow \mathcal {N}(0,{J_\alpha \Sigma _1 J_\alpha ^T}) \end{aligned}$$

    in law as \(T \rightarrow \infty \),

  (2)

    if r satisfies the condition (2) of Assumption 3.1, then

    $$\begin{aligned} \sqrt{\frac{T}{\log T}}\left( \hat{\alpha }(T)-\alpha \right) \rightarrow \mathcal {N}(0,{J_\alpha \Sigma _2 J_\alpha ^T}) \end{aligned}$$

    in law as \(T \rightarrow \infty \), and

  (3)

    if r satisfies the condition (3) of Assumption 3.1, then

    $$\begin{aligned} T^{1-H}\left( \hat{\alpha }(T)-\alpha \right) \rightarrow \mathcal {N}(0,{J_\alpha \Sigma _3 J_\alpha ^T}) \end{aligned}$$

    in law as \(T \rightarrow \infty \).

Proof

The result follows from Theorems 3.1 and 3.2 together with a simple application of a multidimensional delta method. We leave the details to the reader. \(\square \)

3.1 Proofs of Theorems 3.1 and 3.2

We begin with the following versions of the weak law of large numbers.

Proposition 3.4

(Laws of large numbers). Let \(n\ge 1\) and suppose that r given by (3.1) satisfies \(|r(T)| \rightarrow 0\) as \(|T| \rightarrow \infty \). Then, for any \(p\ge 1\), as \(T\rightarrow \infty \),

$$\begin{aligned} \frac{1}{T}\int _0^{T} X_u^n du \rightarrow \frac{2^{\frac{n}{2}}\Gamma \left( \frac{n+1}{2}\right) }{2\sqrt{\pi }}(\alpha _+^n+(-1)^n\alpha _-^n) \end{aligned}$$
(3.11)

in \(L^p\).

Proof

By Lemma 2.2, the right-hand side of (3.11) is exactly the moment \(\mu _n\). Thus in order to prove the claim, we have to show that

$$\begin{aligned} \left\| \frac{1}{T}\int _0^T X_u^ndu - \mu _n \right\| _p \rightarrow 0, \end{aligned}$$

where \(\Vert \cdot \Vert _p\) denotes the \(L^p\)-norm. We first observe that it suffices to prove convergence in probability. Indeed, by applying Minkowski's inequality and the stationarity of X, we have, for every \(p\ge 1\) and \(\epsilon >0\), that

$$\begin{aligned} \sup _{T\ge 1}\left\| \frac{1}{T}\int _0^T X_u^n du- \mu _n \right\| _{p+\epsilon } \le \sup _{T\ge 1}\frac{1}{T}\int _0^T \left\| X_u^n - \mu _n\right\| _{p+\epsilon } du \le C. \end{aligned}$$

Thus, for every \(p{\ge 1}\), the family

$$\begin{aligned} \left\{ \left| \frac{1}{T}\int _0^T X_u^n du - \mu _n \right| ^{p} : T\ge 1\right\} \end{aligned}$$

of random variables is uniformly integrable. Now the result follows from the fact that uniform integrability and convergence in probability imply convergence in \(L^1\), i.e.

$$\begin{aligned} \left| \frac{1}{T}\int _0^T X_u^n du - \mu _n\right| ^p \rightarrow 0 \end{aligned}$$

in \(L^1\) as \(T \rightarrow \infty \). Let us now prove the convergence in \(L^2\), which then implies the convergence in probability. By Corollary 2.6, we have that

$$\begin{aligned} \mathbb {E}\left| \frac{1}{T}\int _0^T X_u^n du - \frac{2^{\frac{n}{2}}\Gamma \left( \frac{n+1}{2}\right) }{2\sqrt{\pi }}(\alpha _+^n+(-1)^n\alpha _-^n)\right| ^2 = T^{-2}\int _0^T\int _0^T a(u,s)du ds, \end{aligned}$$

where \(a(u,s) = O(|r(s-u)|)\). Writing

$$\begin{aligned}&\int _{(u,s)\in [0,T]^2}r(u-s)du ds \\&\quad = \int _{(u,s)\in [0,T]^2,|u-s|\ge T_0}r(u-s)du ds + \int _{(u,s)\in [0,T]^2,|u-s|<T_0}r(u-s)du ds \end{aligned}$$

and choosing \(T_0\) such that \(|r(u-s)|<\epsilon \) on \(\{(u,s)\in [0,T]^2,|u-s|\ge T_0\}\) yields the result. \(\square \)

Proof of Theorem 3.1

By Proposition 3.4, we have that \((\hat{\mu }_1(T),\hat{\mu }_2(T)) \rightarrow (\mu _1,\mu _2)\) in \(L^p\) as \(T\rightarrow \infty \). As \( \displaystyle \sup _{T\ge 1}\Vert \hat{\mu }_1(T)\Vert _p < \infty \) for all \(p\ge 1\), it follows from the Hölder inequality that, for any \(q>0\), we have

$$\begin{aligned} \Vert \hat{\mu }^2_1(T) - \mu _1^2\Vert _p = \Vert (\hat{\mu }_1(T) + \mu _1)(\hat{\mu }_1(T)-\mu _1)\Vert _p \le C \Vert \hat{\mu }_1(T)-\mu _1\Vert _{p+q}, \end{aligned}$$

where C is a constant. Thus

$$\begin{aligned} \Vert \hat{\mu }^2_1(T) - \mu _1^2\Vert _p \rightarrow 0 \quad \text{ as } \quad T \rightarrow \infty . \end{aligned}$$

Now, using \(|\sqrt{a}-\sqrt{b}| \le \sqrt{|a-b|}\) and the triangle inequality, we get

$$\begin{aligned}&\sqrt{\left| 4\hat{\mu }_2(T)-2\pi \hat{\mu }^2_1(T)\right| } - \sqrt{\left| 4\mu _2-2\pi \mu ^2_1\right| } \\&\quad \le C\sqrt{|\hat{\mu }_2(T)-\mu _2|} + C\sqrt{|\hat{\mu }_1^2(T) - \mu _1^2|}. \end{aligned}$$

The claim now follows from the fact that, for any random variable Z and for any \(p\ge 2\),

$$\begin{aligned} \Vert \sqrt{|Z|}\Vert _p = \sqrt{\Vert Z\Vert _{p/2}}. \end{aligned}$$

\(\square \)

We proceed now to the proof of Theorem 3.2. Before that we recall some preliminaries. In particular, we recall the concept of Hermite decomposition. For details, see e.g. the monographs Janson (1997) and Nualart (2006).

Let \(N\sim \mathcal {N}(0,1)\) and let f be a function such that \(\mathbb {E}\left( f(N)^2\right) < \infty \). Then f admits the Hermite decomposition

$$\begin{aligned} f(x) = \sum _{k=0}^\infty \beta _k H_k(x), \end{aligned}$$
(3.12)

where \(H_k,k=0,1,\ldots \) are the Hermite polynomials. The coefficients \(\beta _k\) are given by \(\beta _0 = \mathbb {E}f(N)\) and

$$\begin{aligned} \beta _k = \mathbb {E}\left[ H_k(N)f(N)\right] \quad k\ge 1, \end{aligned}$$
(3.13)

where \(N\sim \mathcal {N}(0,1)\). The index \(d=\min \{k\ge 1: \beta _k \ne 0\}\) is called the Hermite rank of f. For our purposes we need to consider the functions

$$\begin{aligned} f_i(x) = \alpha ^i_+ x^i \mathbf{1}_{x>0} + \alpha _-^{{i}} x^i \mathbf{1}_{x<0}, \quad i=1,2. \end{aligned}$$

The Hermite decompositions of \(f_1\) and \(f_2\) are denoted by

$$\begin{aligned} f_1(x) = \sum _{k=0}^{{\infty }} \beta _{1,k}H_k(x) \end{aligned}$$
(3.14)

and

$$\begin{aligned} f_2(x) = \sum _{k=0}^{{\infty }}\beta _{2,k}H_k(x), \end{aligned}$$
(3.15)

respectively. Moreover, we also recall the fact that if \(f_1\) and \(f_2\) admit the representations (3.14) and (3.15), then for jointly Gaussian random variables \(N_1,N_2\sim \mathcal {N}(0,1)\) we have

$$\begin{aligned} \mathbb {E}\left( f_1(N_1)f_2(N_2)\right) = \sum _{k=0}^\infty \beta _{1,k}\beta _{2,k}\left[ \mathbb {E}(N_1N_2)\right] ^k. \end{aligned}$$
(3.16)

Proof of Theorem 3.2

By the Cramér-Wold device, it suffices to prove that each linear combination

$$\begin{aligned} Z(y_1,y_2,T):= y_1(\hat{\mu }_1(T) - \mu _1) + y_2(\hat{\mu }_2(T)-\mu _2), \end{aligned}$$

when properly normalised, converges towards a normal distribution. By using the representations (3.14) and (3.15), it follows that \(Z(y_1,y_2,T)\) has the representation

$$\begin{aligned} Z(y_1,y_2,T) = \frac{1}{T}\int _0^T \sum _{k=0}^\infty \gamma _k H_k(Y_t)dt, \end{aligned}$$
(3.17)

where \( \gamma _k = y_1\beta _{1,k} + y_2\beta _{2,k}. \) Note also that we have \( \mathbb {E}\hat{\mu }_i(T) = \mu _i,\quad i=1,2, \) and thus \(\gamma _0 = 0\), i.e. \(Z(y_1,y_2,T)\) is centered. We begin with the first case, which is relatively easy. Indeed, suppose that condition (1) of Assumption 3.1 holds. Then, as r is integrable, the continuous-time version of the Breuer-Major theorem (see e.g. Campese et al. 2018) implies the claim directly.

Under the other two conditions, we first note that the only contributing factor to the limiting distribution in (3.17) is

$$\begin{aligned} \frac{1}{T}\int _0^T \gamma _1H_1(Y_t)dt. \end{aligned}$$

This follows from the fact that

$$\begin{aligned} \mathbb {E}\left[ \sum _{k=2}^\infty \gamma _k \int _0^T H_k(Y_t)dt\right] ^2 \le CT\int _0^T r^2(u)du \end{aligned}$$

and clearly

$$\begin{aligned} \frac{1}{\log T}\int _0^T r^2(u)du \rightarrow 0 \end{aligned}$$

under the condition (2) and

$$\begin{aligned} T^{1-2H}\int _0^T r^2(u)du \rightarrow 0 \end{aligned}$$

under the condition (3). Thus it suffices to prove that

$$\begin{aligned}{}[y_1\beta _{1,1}+y_2\beta _{2,1}]\frac{l(T)}{T}\int _0^T Y_t dt \end{aligned}$$

converges towards a normal distribution, where \(l(T) = \sqrt{\frac{T}{\log T}}\) under the condition (2) and \(l(T) = T^{1-H}\) under the condition (3). Convergence of \(\frac{l(T)}{T}\int _0^T Y_t dt\) follows from the fact that Y is Gaussian and the variance converges. Indeed, we have that

$$\begin{aligned} \mathbb {E}\left( \int _0^T Y_t dt\right) ^2&= \int _0^T \int _0^T \mathbb {E}(Y_uY_s)du ds \\&= \int _0^T \int _0^T r(u-s)du ds \\&= 2\int _0^T r(u)(T-u)du. \end{aligned}$$

Under conditions (2) and (3) of Assumption 3.1 we obtain that, in both cases,

$$\begin{aligned} \frac{l^2(T)}{T^2}\int _0^T r(u)(T-u)du \rightarrow C>0. \end{aligned}$$
(3.18)

This can be seen by observing first that

$$\begin{aligned} \int _0^T r(u)(T-u) du \end{aligned}$$

diverges as \(T\rightarrow \infty \). Hence we may apply L’Hopital’s rule which, under condition (3), gives us

$$\begin{aligned} \lim _{T\rightarrow \infty } T^{-2H}\int _0^T r(u)(T-u) du&= \lim _{T\rightarrow \infty } \frac{1}{2HT^{2H-1}}\int _0^T r(u)du \\&= \lim _{T\rightarrow \infty } \frac{r(T)}{2H(2H-1)T^{2H-2}} \\&= \frac{C_3}{2H(2H-1)}. \end{aligned}$$

Similarly, under condition (2) of Assumption 3.1 we obtain by L’Hopital’s rule that

$$\begin{aligned} \lim _{T\rightarrow \infty } \frac{1}{T\log T}\int _0^T r(u)(T-u) du = C_2. \end{aligned}$$

Hence, in order to obtain the limiting normality, it suffices to show that either \(\beta _{1,1}\ne 0\) or \(\beta _{2,1}\ne 0\). However, in view of Lemma 2.2 and simple computations, we get immediately that

$$\begin{aligned} \beta _{1,1} = \mathbb {E}(f_1(N)N) = \frac{\alpha _++\alpha _-}{2} = \varsigma _1 \end{aligned}$$
(3.19)

and

$$\begin{aligned} \beta _{2,1} = \mathbb {E}(f_2(N)N) = \sqrt{\frac{2}{\pi }}\left( \alpha _+^2-\alpha _-^2\right) = \varsigma _2. \end{aligned}$$
(3.20)

This proves the claimed convergence in law, and thus it remains to compute the asymptotic covariances. For this it suffices to study the asymptotic behaviour of the quantities

$$\begin{aligned} \mathbb {E}\left[ (\hat{\mu }_i(T) - \mu _i)(\hat{\mu }_j(T)-\mu _j)\right] , \quad i,j=1,2. \end{aligned}$$

Using Fubini’s Theorem and representations (3.14)–(3.15) together with (3.16) we get

$$\begin{aligned} \mathbb {E}\left[ (\hat{\mu }_i(T) - \mu _i)(\hat{\mu }_j(T)-\mu _j)\right]&= \frac{1}{T^2}\int _0^T \int _0^T \mathbb {E}\left[ (X_u^i - \mu _i)(X_v^j-\mu _j)\right] dv du \\&= \frac{1}{T^2}\int _0^T \int _0^T \sum _{k=1}^\infty \beta _{i,k}\beta _{j,k} [r(u-v)]^kdv du \\&= \frac{2}{T^2}\int _0^T \int _0^u \sum _{k=1}^\infty \beta _{i,k}\beta _{j,k} [r(u-v)]^kdv du \\&= \frac{2}{T^2}\int _0^T \int _0^u \sum _{k=1}^\infty \beta _{i,k}\beta _{j,k} [r(x)]^kdx du \\&= \frac{2}{T^2}\int _0^T \sum _{k=1}^\infty \beta _{i,k}\beta _{j,k} [r(x)]^k(T-x) dx. \end{aligned}$$

Thus it suffices to study the asymptotic behaviour of properly normalised quantity

$$\begin{aligned} \frac{2}{T^2}\int _0^T \sum _{k=1}^\infty \beta _{i,k}\beta _{j,k} [r(x)]^k(T-x) dx. \end{aligned}$$
(3.21)

Let us first consider condition (1) of Assumption 3.1. By comparing to the asymptotic variance provided by the Breuer-Major Theorem, we obtain immediately that

$$\begin{aligned} T\mathbb {E}\left[ (\hat{\mu }_i(T) - \mu _i)(\hat{\mu }_j(T)-\mu _j)\right]&= \frac{2}{T}\int _0^T \sum _{k=1}^\infty \beta _{i,k}\beta _{j,k} [r(x)]^k(T-x) dx \\&\rightarrow 2\int _0^\infty \sum _{k=1}^\infty \beta _{i,k}\beta _{j,k} [r(x)]^k dx \end{aligned}$$

as \(T\rightarrow \infty \). Together with

$$\begin{aligned} \mathbb {E}\left[ (X_0^i-\mu _i)(X_t^j-\mu _j)\right] = \sum _{k=1}^\infty \beta _{i,k}\beta _{j,k} [r(t)]^k \end{aligned}$$

the representation (3.7) follows. Under conditions (2) and (3), we again observe that only the first term \(k=1\) in (3.21) contributes to the limit. Now the claim follows directly from Eqs. (3.18), (3.19), and (3.20) together with the above computations. This concludes the proof. \(\square \)
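As a numerical cross-check of (3.19) and (3.20) (with illustrative parameter values), the first Hermite coefficients of \(f_1\) and \(f_2\) can be evaluated by integrating against the standard normal density.

```python
import numpy as np

a_plus, a_minus = 2.0, 1.0
x = np.linspace(-10, 10, 400_001)
dx = x[1] - x[0]
w = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)                  # N(0, 1) density

f1 = np.where(x > 0, a_plus * x, a_minus * x)               # f_1
f2 = np.where(x > 0, a_plus**2 * x**2, a_minus**2 * x**2)   # f_2

beta_11 = np.sum(x * f1 * w) * dx    # E[N f_1(N)]
beta_21 = np.sum(x * f2 * w) * dx    # E[N f_2(N)]

print(beta_11, (a_plus + a_minus) / 2)                          # varsigma_1
print(beta_21, np.sqrt(2 / np.pi) * (a_plus**2 - a_minus**2))   # varsigma_2
```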

3.2 Note on the case \(|\alpha _+| = |\alpha _-|\)

The proof of Theorem 3.3 relied on the fact that \(\alpha _+ \ne \alpha _-\). In that case \(\varsigma _1\ne 0\) and \(\varsigma _2\ne 0\), implying the same rate of convergence and limiting normality for both moment estimators \(\hat{\mu }_1\) and \(\hat{\mu }_2\). Taking into account Remark 1, Theorem 3.3 can easily be extended to cover all the cases \(|\alpha _+|\ne |\alpha _-|\) by using symmetry and choosing \(\alpha _-\) in Lemma 2.3 appropriately. For the sake of completeness of the presentation, we treat the case \(|\alpha _+| = |\alpha _-|\). We begin with the case \(\alpha _+ = \alpha _- = \alpha > 0\) (the case \(\alpha <0\) following from symmetry). In this case, for \(\varsigma _1,\varsigma _2\) given by (3.5)–(3.6), we have \(\varsigma _1 \ne 0\) and \(\varsigma _2 = 0\). It follows that Theorems 3.2 and 3.3 apply, and we obtain the limiting normality. However, in this case the matrices \(\Sigma _2\) and \(\Sigma _3\) are of the form

$$\begin{aligned} \Sigma _2 =\begin{pmatrix} 2C_2\alpha ^2 &{} 0 \\ 0 &{} 0 \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} \Sigma _3 = \begin{pmatrix} \frac{C_3\alpha ^2}{H(2H-1)} &{} 0 \\ 0 &{} 0 \end{pmatrix}. \end{aligned}$$

This means that the rates given in Theorem 3.2 are not correct for the estimator \(\hat{\mu }_2(T)\). We also note that in this case, we actually have

$$\begin{aligned} X_t = \alpha Y_t. \end{aligned}$$

Thus \(\alpha \) cannot be recovered based on the first moment \(\mathbb {E}X_t = 0\). On the other hand, we have \(\mathbb {E}X_t^2 = \alpha ^2\) from which we get an estimator \(\hat{\alpha }(T) = \sqrt{\hat{\mu }_2}\). From this we get a variant of Theorem 3.3 for \(\hat{\alpha }(T)\) once the asymptotic behaviour of \(\hat{\mu }_2\) is established. For this purpose, we consider the Rosenblatt distribution. For a given \(H\in \left( \frac{1}{2},1\right) \), a random variable \(Z^H\) follows the Rosenblatt distribution if it has the following representation

$$\begin{aligned} Z^H = \sum _{k=1}^\infty \lambda _k\left( \xi _k^2-1\right) , \end{aligned}$$

where \(\xi _k \sim N(0,1)\) are independent and \(\left( \lambda _k\right) _{k\ge 1}\) is the eigenvalue sequence of an integral operator \(A: L^2(|y|^{-H}dy) \mapsto L^2(|y|^{-H}dy)\) given by

$$\begin{aligned} (Af)(x) = \int _{\mathbb {R}} \frac{e^{i(x-y)}-1}{i(x-y)}|y|^{-H}f(y)dy. \end{aligned}$$

For details on the Rosenblatt distribution, we refer to Embrechts and Maejima (2002), Taqqu (2011), Tudor (2013).

In our case the Rosenblatt distribution arises from the following result that is nothing more than a simplified continuous time version of the original example given by Rosenblatt (1961).

Lemma 3.5

Let Y be a centered stationary Gaussian process with unit variance and a covariance function r satisfying condition (3) of Assumption 3.1 with some \(H\in \left( \frac{3}{4},1\right) \). Then, as \(T\rightarrow \infty \), we have

$$\begin{aligned} T^{1-2H} \int _0^T Y_t^2 - \mathbb {E}Y_t^2 dt \rightarrow \frac{2C_3}{\sqrt{(4H-2)(4H-3)}}Z^H \end{aligned}$$

in law, where \(Z^H\) follows the Rosenblatt distribution.

Remark 7

For any \(H\in \left( \frac{1}{2},1\right) \), the variance of the Rosenblatt distribution equals one. Thus the constant stems from the normalising constant that can be computed by L’Hopital’s rule and using the arguments of the proof of Theorem 3.2.

The following gives precise asymptotic behaviour of the estimator \(\hat{\mu }_2(T)\).

Proposition 3.6

Let \(\hat{\mu }_2(T)\) be defined by (3.4) and let r be given by (3.1). Suppose further that \(\alpha _+ = \alpha _- = \alpha \). Then

  (1)

    If r satisfies the condition (1), condition (2), or condition (3) with \(H\in \left( \frac{1}{2},\frac{3}{4}\right) \) of Assumption 3.1,

    $$\begin{aligned} \sqrt{T}\left( \hat{\mu }_2(T) - \mu _2\right) \rightarrow \mathcal {N}\left( 0,4\alpha ^4\int _0^\infty r^2(v)dv\right) \end{aligned}$$

    in law as \(T \rightarrow \infty \).

  (2)

    If r satisfies the condition (3) of Assumption 3.1 with \(H=\frac{3}{4}\),

    $$\begin{aligned} \sqrt{\frac{T}{\log T}}\left( \hat{\mu }_2(T)-\mu _2\right) \rightarrow \mathcal {N}\left( 0,4\alpha ^4C_3^2\right) \end{aligned}$$

    in law as \(T \rightarrow \infty \).

  (3)

    If r satisfies the condition (3) of Assumption 3.1 with \(H\in \left( \frac{3}{4},1\right) \),

    $$\begin{aligned} T^{2-2H}\left( \hat{\mu }_2(T)-\mu _2\right) \rightarrow \frac{2C_3\alpha ^2}{\sqrt{(4H-2)(4H-3)}} Z^H \end{aligned}$$

    in law as \(T \rightarrow \infty \), where \(Z^H\) follows the Rosenblatt distribution.

Proof

Recall

$$\begin{aligned} \hat{\mu }_2(T) - \mu _2 = \frac{1}{T} \int _0^T X_t^2 - \mathbb {E}X_t^2 dt. \end{aligned}$$

Since now \(X_t = \alpha Y_t\) is Gaussian, the limiting normality in items (1)–(2) follows directly by an application of (Sottinen and Viitasaari 2018, Lemma 3.8.) (see also the computations in the proof of Proposition 4.1.(iii) in Sottinen and Viitasaari (2018) on the border case with logarithmic factor in the rate). Moreover, now

$$\begin{aligned} \mathbb {E}[\hat{\mu }_2(T)-\mu _2]^2 = \frac{2\alpha ^4}{T^2} \int _0^T \int _0^T r^2(u-v)dudv = \frac{4\alpha ^4}{T^2} \int _0^T r^2(v)(T-v)dv \end{aligned}$$

from which we obtain the claimed asymptotic variances by using the same arguments as in the proof of Theorem 3.2. Finally, item (3) follows directly from Lemma 3.5 by using \(X_t^2 = \alpha ^2 Y_t^2\). \(\square \)

Consider next the case \(\alpha _+ = - \alpha _- = \alpha > 0\). Then \(\varsigma _1 = \varsigma _2 = 0\) and \(X_t = \alpha |Y_t|\). In this case \(\alpha \) can be identified from the estimator \(\hat{\mu }_1\).

Proposition 3.7

Let \(\hat{\mu }_1(T)\) be defined by (3.4) and let r be given by (3.1). Suppose further that \(\alpha _+ = -\alpha _- = \alpha \). Then

  (1)

    If r satisfies the condition (1), condition (2), or condition (3) with \(H\in \left( \frac{1}{2},\frac{3}{4}\right) \) of Assumption 3.1,

    $$\begin{aligned} \sqrt{T}\left( \hat{\mu }_1(T) - \mu _1\right) \rightarrow \mathcal {N}\left( 0,2\alpha ^2\int _0^\infty Cov\left( |Y_t|,|Y_0|\right) dt\right) \end{aligned}$$

    in law as \(T \rightarrow \infty \).

  (2)

    If r satisfies the condition (3) of Assumption 3.1 with \(H=\frac{3}{4}\),

    $$\begin{aligned} \sqrt{\frac{T}{\log T}}\left( \hat{\mu }_1(T)-\mu _1\right) \rightarrow \mathcal {N}\left( 0,\frac{8C^2_3\alpha ^2}{\pi }\right) \end{aligned}$$

    in law as \(T \rightarrow \infty \).

  (3)

    If r satisfies the condition (3) of Assumption 3.1 with \(H\in \left( \frac{3}{4},1\right) \),

    $$\begin{aligned} T^{2-2H}\left( \hat{\mu }_1(T)-\mu _1\right) \rightarrow \frac{2C_3\alpha }{\sqrt{\pi (2H-1)(4H-3)}} Z^H \end{aligned}$$

    in law as \(T \rightarrow \infty \), where \(Z^H\) follows the Rosenblatt distribution.

Proof

Recall

$$\begin{aligned} \hat{\mu }_1(T) - \mu _1 = \frac{1}{T} \int _0^T X_t - \mathbb {E}X_t dt. \end{aligned}$$

Now \(X_t = \alpha |Y_t|\). It is easy to check from (3.13) that the function \(x\mapsto |x|\) has Hermite rank 2, and hence the limiting normality and the claimed asymptotic variance for item (1) follow again by the Breuer-Major theorem and the square-integrability of the function r. For items (2) and (3) we get, by using the same arguments as in the proof of Theorem 3.2, that the higher order terms in the Hermite decomposition (3.12) do not contribute to the limit. Thus the only contributing factor is actually

$$\begin{aligned} \frac{\alpha \beta _2}{T} \int _0^T Y_t^2 - 1 dt, \end{aligned}$$
(3.22)

where

$$\begin{aligned} \beta _2 = \mathbb {E}\left[ H_2(N)|N|\right] = \mathbb {E}|N|^3 - \mathbb {E}|N| = \sqrt{\frac{2}{\pi }}. \end{aligned}$$

The asymptotic behaviour for (3.22) was already discussed in the proof of Proposition 3.6, from which the result follows by computing the asymptotic variances. This concludes the proof. \(\square \)

3.3 Estimation based on discrete observations

In practice, one does not observe the continuous path of X. Instead, one observes X at some discrete time points \(0\le t_0<t_1< \ldots<t_N = T_N<\infty \). That is why, in practical applications, the integrals in (3.4) are approximated by discrete sums. Thus the natural moment estimators \(\tilde{\mu }_n(N)\) are defined by

$$\begin{aligned} \tilde{\mu }_n(N) = \frac{1}{T_N}\sum _{k=1}^N X^n_{t_{k-1}}\Delta t_k, \end{aligned}$$
(3.23)

where \(\Delta t_k = t_k - t_{k-1}\). The corresponding estimators \(\tilde{\alpha }_+(N)\) and \(\tilde{\alpha }_-(N)\) for parameters \(\alpha _+\) and \(\alpha _-\) are

$$\begin{aligned} \tilde{\alpha }_+(N) = \sqrt{\frac{\pi }{2}}\tilde{\mu }_1(N) + \frac{1}{2}\sqrt{\left| 4\tilde{\mu }_2(N)-2\pi \tilde{\mu }^2_1(N)\right| } \end{aligned}$$
(3.24)

and

$$\begin{aligned} \tilde{\alpha }_-(N) = \tilde{\alpha }_+(N) - {\sqrt{2\pi }}\tilde{\mu }_1(N). \end{aligned}$$
(3.25)

Let \(\Delta _N = \max _k \Delta t_k\). In order to obtain consistency and asymptotic normality for the discretised versions, we have to assume that \(T_N \rightarrow \infty \) and, at the same time, that \(\Delta _N \rightarrow 0\) in a suitable way. The following proposition studies the difference between \(\hat{\mu }_n(T_N)\) and \(\tilde{\mu }_n(N)\).

Proposition 3.8

Let r(t) be given by (3.1) and denote the variogram of the stationary process Y by c(t), i.e.

$$\begin{aligned} c(t) = 2\left[ r(0)-r(t)\right] . \end{aligned}$$

Then, for any \(n\ge 1\) and for any \(p\ge 1\), there exists a constant \(C=C(n,p,\alpha _+,\alpha _-)\) such that

$$\begin{aligned} \left\| \hat{\mu }_n(T_N)-\tilde{\mu }_n(N)\right\| _p \le C\sup _{0\le t \le \Delta _N}\sqrt{c(t)}. \end{aligned}$$

Proof

We have, by the Minkowski inequality, that

$$\begin{aligned}&\left\| \hat{\mu }_n(T_N)-\tilde{\mu }_n(N)\right\| _p\\&\quad = \left\| \frac{1}{T_N}\int _0^{T_N}X_u^n du - \frac{1}{T_N}\sum _{k=1}^N X^n_{t_{k-1}}\Delta t_k\right\| _p \\&\quad \le \frac{1}{T_N} \sum _{k=1}^N \int _{t_{k-1}}^{t_k}\left\| X_u^n - X_{t_{k-1}}^n\right\| _p du. \end{aligned}$$

Using the identity

$$\begin{aligned} x^n-y^n = (x-y)\sum _{j=0}^{n-1}x^jy^{n-1-j}, \end{aligned}$$

we get, for any \(s,u\ge 0\), that

$$\begin{aligned} |X_s^n-X_u^n| \le |X_s-X_u|\sum _{j=0}^{n-1}|X_s|^j|X_u|^{n-1-j}. \end{aligned}$$

Thus, a repeated application of the Hölder inequality together with the fact that \(\sup _{s\ge 0}\Vert X_s\Vert _p < \infty \) implies that, for every \(q>p\), we have

$$\begin{aligned} \left\| X_u^n - X_{s}^n\right\| _p \le C\Vert X_u - X_s\Vert _{q}, \end{aligned}$$

where C is a constant. Moreover, by the proof of Proposition 2.8, we have,

$$\begin{aligned} |X_u-X_s| \le C|Y_u-Y_s|. \end{aligned}$$

Since Y is Gaussian, hypercontractivity implies that

$$\begin{aligned} \Vert X_u - X_s\Vert _q \le C\Vert Y_u - Y_s\Vert _2. \end{aligned}$$

Now stationarity of Y gives

$$\begin{aligned} \Vert Y_u -Y_s\Vert _2 = \sqrt{c(u-s)}. \end{aligned}$$

Thus we observe

$$\begin{aligned}&\frac{1}{T_N} \sum _{k=1}^N \int _{t_{k-1}}^{t_k}\left\| X_u^n - X_{t_{k-1}}^n\right\| _p du \\&\quad \le \frac{C}{T_N} \sum _{k=1}^N \int _{t_{k-1}}^{t_k}\sqrt{c(u-t_{k-1})} du\\&\quad \le C \sup _{0\le t \le \Delta _N}\sqrt{c(t)} \end{aligned}$$

proving the claim. \(\square \)

We can now easily deduce the following results on the asymptotic properties of the estimators \(\tilde{\alpha }_+\) and \(\tilde{\alpha }_-\).

Theorem 3.9

Let \(\tilde{\alpha }_+(N)\) and \(\tilde{\alpha }_-(N)\) be defined by (3.24) and (3.25), respectively. Suppose that r given by (3.1) satisfies \(r(T) \rightarrow 0\) as \(T\rightarrow \infty \) and that \(\sup _{0\le s\le t}c(s)\rightarrow 0\) as \(t\rightarrow 0\). If \(T_N \rightarrow \infty \) and \(\Delta _N \rightarrow 0\) as \(N \rightarrow \infty \), then for any \(p\ge 1\),

$$\begin{aligned} \tilde{\alpha }_+(N) \rightarrow \alpha _+ \end{aligned}$$

and

$$\begin{aligned} \tilde{\alpha }_-(N) \rightarrow \alpha _- \end{aligned}$$

in \(L^p\).

Proof

Using the arguments of the proof of Theorem 3.1 together with Proposition 3.8 we deduce that

$$\begin{aligned} \Vert \tilde{\alpha }_+(N) - \hat{\alpha }_+(T_N)\Vert _p \rightarrow 0 \end{aligned}$$

and

$$\begin{aligned} \Vert \tilde{\alpha }_-(N) - \hat{\alpha }_-(T_N)\Vert _p \rightarrow 0. \end{aligned}$$

Thus the claim follows from Theorem 3.1. \(\square \)

Theorem 3.10

Let \(\tilde{\alpha }_+(N)\) and \(\tilde{\alpha }_-(N)\) be defined by (3.24) and (3.25), respectively, and let \(\tilde{\alpha }(N) = (\tilde{\alpha }_+(N),\tilde{\alpha }_-(N))\) and \(\alpha = (\alpha _+,\alpha _-)\). Furthermore, let r be given by (3.1), \(\Sigma _i,i=1,2,3\) be given by (3.7)–(3.9), and \(J_\alpha \) by (3.10). Suppose that \(\displaystyle \sup _{0\le s\le t}c(s)\rightarrow 0\) as \(t\rightarrow 0\), \(T_N \rightarrow \infty \), and \(\Delta _N \rightarrow 0\) as \(N \rightarrow \infty \). Denote

$$\begin{aligned} h(N) = \sup _{0\le s \le \Delta _N}\sqrt{c(s)}. \end{aligned}$$

Then,

  (1)

    if r satisfies the condition (1) of Assumption 3.1,

    $$\begin{aligned} \sqrt{T_N}\left( \tilde{\alpha }(N)-\alpha \right) \rightarrow \mathcal {N}(0,{J_\alpha \Sigma _1J_\alpha ^T}) \end{aligned}$$

    in law for every sequence of partitions \(0\le t_0< \ldots< t_N = T_N\) satisfying \(\sqrt{T_N}h(N)\rightarrow 0\),

  (2)

    if r satisfies the condition (2) of Assumption 3.1,

    $$\begin{aligned} \sqrt{\frac{T_N}{\log T_N}}\left( \tilde{\alpha }(N)-\alpha \right) \rightarrow \mathcal {N}(0,{J_\alpha \Sigma _2J_\alpha ^T}) \end{aligned}$$

    in law for every sequence of partitions \(0\le t_0< \ldots< t_N = T_N\) satisfying \(\sqrt{\frac{T_N}{\log T_N}}h(N)\rightarrow 0\), and

  (3)

    if r satisfies the condition (3) of Assumption 3.1,

    $$\begin{aligned} T_N^{1-H}\left( \tilde{\alpha }(N)-\alpha \right) \rightarrow \mathcal {N}(0,{J_\alpha \Sigma _3J_\alpha ^T}) \end{aligned}$$

    in law for every sequence of partitions \(0\le t_0< \ldots< t_N = T_N\) satisfying \(T_N^{1-H}h(N)\rightarrow 0\).

Proof

The additional conditions on the mesh together with Proposition 3.8 guarantee that

$$\begin{aligned} l(T_N)\Vert \tilde{\alpha }(N) - \hat{\alpha }(T_N)\Vert _p \rightarrow 0, \end{aligned}$$

where \(l(T_N)\) is the corresponding normalisation for each case. Thus the result follows directly from Theorem 3.3. \(\square \)

One natural way of choosing the observation points such that the above mentioned conditions are fulfilled is to choose N equidistant points with \(\Delta _N = \frac{\log N}{N}\). Then \(\Delta _N \rightarrow 0\) and \(T_N = N\Delta _N = \log N \rightarrow \infty \). If, in addition, Y is Hölder continuous of some order \(\theta >0\), then also the rest of the requirements are satisfied. Indeed, it follows from (Azmoodeh et al. 2014, Theorem 1) that if Y is Hölder continuous of order \(\theta >0\), then for any \(\epsilon >0\), we have

$$\begin{aligned} c(t) \le Ct^{\theta -\epsilon } \end{aligned}$$

for some constant C. Thus \(h(N)\le \sqrt{C}\Delta _N^{\frac{1}{2}(\theta -\epsilon )}\), from which it is easy to see that, for \(\epsilon <\theta \),

$$\begin{aligned} T_N^{1-H}h(N) \le \sqrt{\frac{T_N}{\log T_N}}h(N) \le \sqrt{T_N}h(N) \le \sqrt{C}\frac{(\log N)^{\frac{1}{2}(1+\theta -\epsilon )}}{N^{\frac{1}{2}(\theta -\epsilon )}} \rightarrow 0. \end{aligned}$$
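The decay of the mesh condition for this choice of grid is easy to inspect numerically; in the following snippet the Hölder order \(\theta \) and the bound \(c(t)\le t^{\theta }\) (that is, \(C=1\) and \(\epsilon \) dropped) are purely illustrative assumptions.

```python
import numpy as np

theta = 0.5                              # assumed Hölder order of Y (illustrative)
for N in (10**3, 10**5, 10**7, 10**9):
    delta_N = np.log(N) / N              # mesh size
    T_N = N * delta_N                    # observation horizon, = log N
    h_N = delta_N**(theta / 2)           # bound for h(N) = sup sqrt(c(t))
    print(N, np.sqrt(T_N) * h_N)         # dominates the other two rates; -> 0
```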

3.4 Oscillating self-similar Gaussian processes

Self-similar processes form an interesting and applicable class of stochastic processes. In this subsection, we consider oscillating Gaussian processes driven by self-similar Gaussian processes Y. In other words, we consider processes of the type

$$\begin{aligned} X_t = \alpha _+ Y_t\mathbf{1}_{Y_t>0} + \alpha _-Y_t\mathbf{1}_{Y_t<0}, \end{aligned}$$

where Y is H-self-similar for some \(H>0\). That is, for every \(a>0\), the finite dimensional distributions of the processes \((Y_{at})_{t\ge 0}\) and \((a^HY_t)_{t\ge 0}\) are the same. Throughout this section we assume that we have observed \(X_t\) on an interval [0, 1], and our aim is to estimate \(\alpha _+\) and \(\alpha _-\). The key ingredient is the Lamperti transform

$$\begin{aligned} U_t = e^{-Ht}Y_{e^t}. \end{aligned}$$
(3.26)

It is well-known that U is stationary on \((-\infty ,0]\). Moreover, for \(t\ge 0\), we define a process

$$\begin{aligned} \tilde{X}_t := e^{Ht}X_{e^{-t}} = \alpha _+ U_{-t} \mathbf{1}_{U_{-t} >0} + \alpha _- U_{-t}\mathbf{1}_{U_{-t}<0}. \end{aligned}$$

Clearly, observing X on [0, 1] is equivalent to observing \(\tilde{X}_t\) for \(t\ge 0\). This leads to the "moment estimators" \(\hat{\mu }_i(T)\) defined by

$$\begin{aligned} \hat{\mu }_i(T) = \frac{1}{T}\int _{e^{-T}}^1 u^{-iH-1}X_u^idu. \end{aligned}$$
(3.27)

The corresponding parameter estimators \(\hat{\alpha }_+(T)\) and \(\hat{\alpha }_-(T)\) are defined by plugging \(\hat{\mu }_1(T)\) and \(\hat{\mu }_2(T)\) into (3.2) and (3.3), respectively. Indeed, the change of variable \(u=e^{-t}\) gives

$$\begin{aligned} \hat{\mu }_i(T)=\frac{1}{T}\int _0^T \tilde{X}^i_t dt. \end{aligned}$$

Thus studying the covariance function r of the stationary Gaussian process U given by (3.26) enables us to apply Theorems 3.1 and 3.3.
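A minimal sketch of how these estimators can be evaluated in practice is given below; it assumes that X can be observed (or simulated) at the geometric grid points \(u_j = e^{-j\Delta }\), and the function name and the left-point discretisation of the time integral are our own choices.

```python
import numpy as np

def lamperti_alpha_estimates(x_on_grid, H, dt):
    """Estimators of Sect. 3.4: x_on_grid[j] = X_{u_j} with u_j = exp(-j*dt),
    j = 0, ..., M, where X is driven by an H-self-similar Gaussian process.
    The time integral behind (3.27) is approximated by a Riemann sum."""
    M = len(x_on_grid) - 1
    T = M * dt
    t = dt * np.arange(M + 1)
    x_tilde = np.exp(H * t) * x_on_grid          # \tilde{X}_t = e^{Ht} X_{e^{-t}}
    mu1_hat = dt * np.sum(x_tilde[:-1]) / T
    mu2_hat = dt * np.sum(x_tilde[:-1]**2) / T
    # Same inversion as in (3.2)-(3.3).
    root = 0.5 * np.sqrt(abs(4 * mu2_hat - 2 * np.pi * mu1_hat**2))
    a_plus_hat = np.sqrt(np.pi / 2) * mu1_hat + root
    a_minus_hat = a_plus_hat - np.sqrt(2 * np.pi) * mu1_hat
    return a_plus_hat, a_minus_hat
```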

3.5 The case of bi-fractional Brownian motion

We end this section with an interesting example. We consider the bifractional Brownian motion which, among others, covers the fractional Brownian motion and the standard Brownian motion. Recall that a bifractional Brownian motion \(B^{H,K}\) with \(H\in (0,1)\) and \(K\in (0,2)\) such that \(HK\in (0,1)\) is a centered Gaussian process with the covariance function

$$\begin{aligned} R(s,t)=\frac{1}{2^K}\left[ (t^{2H}+s^{2H})^K-|t-s|^{2HK}\right] . \end{aligned}$$

It is known that \(B^{H,K}\) is HK-self-similar. Furthermore, one recovers fractional Brownian motion by plugging in \(K=1\), from which standard Brownian motion is recovered by further setting \(H=\frac{1}{2}\). Now the covariance function r of the Lamperti transform \(U_t=e^{-HKt}B^{H,K}_{e^t}\) has exponential decay (see Sottinen and Viitasaari 2018). Thus, we may apply the item (1) of Theorem 3.3 to obtain that \( \sqrt{T}(\hat{\alpha }_+(T)-\alpha _+) \) and \(\sqrt{T}(\hat{\alpha }_-(T)-\alpha _-)\) are asymptotically normal. Similarly, discretising the integral in (3.27) and applying Theorems 3.9 and 3.10 allows us to consider parameter estimators based on discrete observations. We leave the details to the reader.
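Under the same caveats as above, a numerical experiment is straightforward: \(B^{H,K}\) can be sampled on the geometric grid by a Cholesky factorisation of its covariance R, after which the Sect. 3.4 estimator can be applied with the self-similarity index HK. All numerical choices below (grid size, jitter, parameter values) are illustrative.

```python
import numpy as np

def bifbm_on_grid(u, H, K, seed=None):
    """Sample a bifractional Brownian motion B^{H,K} at the points u (all > 0)
    via a Cholesky factorisation of its covariance R(s, t)."""
    rng = np.random.default_rng(seed)
    s, t = np.meshgrid(u, u, indexing="ij")
    R = ((t**(2 * H) + s**(2 * H))**K - np.abs(t - s)**(2 * H * K)) / 2**K
    L = np.linalg.cholesky(R + 1e-12 * np.eye(len(u)))   # small jitter for stability
    return L @ rng.standard_normal(len(u))

H, K = 0.7, 0.5
dt, M = 5e-3, 2_000
u = np.exp(-dt * np.arange(M + 1))                 # geometric grid on (0, 1]
b = bifbm_on_grid(u, H, K, seed=2)
x = np.where(b > 0, 2.0 * b, 1.0 * b)              # OGP with alpha_+ = 2, alpha_- = 1

# Self-similarity index of B^{H,K} is HK; see the sketch in Sect. 3.4 above.
# a_plus_hat, a_minus_hat = lamperti_alpha_estimates(x, H=H * K, dt=dt)
```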

4 Discussion

In this paper we considered oscillating Gaussian processes and introduced moment based estimators for the model parameters. Moreover, we proved consistency and asymptotic normality of the estimators under natural assumptions on the driving Gaussian process. One possible line of future research is to study the estimators based on (1.8) and compare them with our moment based estimators. Another interesting and natural extension of our approach would be to consider oscillating processes with several (more than two) parameters and corresponding regions. This would make the model class more flexible and adaptive. Finally, a topic for future research would be to develop testing procedures for the model parameters.