Abstract
In this article we introduce and study oscillating Gaussian processes defined by \(X_t = \alpha _+ Y_t \mathbf{1}_{Y_t >0} + \alpha _- Y_t\mathbf{1}_{Y_t<0}\), where \(\alpha _+,\alpha _->0\) are free parameters and Y is either a stationary or a self-similar Gaussian process. We study the basic properties of X and consider estimation of the model parameters. In particular, we show that the moment estimators converge in \(L^p\) and are, when suitably normalised, asymptotically normal.
1 Introduction
In this article we introduce a class of stationary processes, called oscillating Gaussian processes, defined by
$$\begin{aligned} X_t = \alpha _+ Y_t \mathbf{1}_{Y_t>0} + \alpha _- Y_t\mathbf{1}_{Y_t<0}, \quad t\ge 0, \end{aligned}$$
(1.1)
where \(\alpha _+\) and \(\alpha _-\) are constants and Y is a stationary Gaussian process.
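To fix ideas, the following minimal sketch (our illustration, not part of the paper) simulates an oscillating Gaussian process with Y a stationary Ornstein-Uhlenbeck process; the choice of Y and all parameter values are assumptions made purely for demonstration.

```python
import numpy as np

def simulate_ou(n, dt, theta=1.0, seed=None):
    """Stationary Ornstein-Uhlenbeck process with E(Y_t) = 0, E(Y_t^2) = 1
    and covariance r(t) = exp(-theta * |t|), sampled exactly on a grid."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-theta * dt)            # exact one-step autocorrelation
    y = np.empty(n)
    y[0] = rng.standard_normal()         # start from the stationary law N(0, 1)
    for i in range(1, n):
        y[i] = rho * y[i - 1] + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
    return y

def oscillate(y, a_plus, a_minus):
    """Oscillating version X_t = a_+ Y_t 1_{Y_t > 0} + a_- Y_t 1_{Y_t < 0}."""
    return np.where(y > 0, a_plus * y, a_minus * y)

y = simulate_ou(100_000, dt=0.01, seed=42)
x = oscillate(y, a_plus=2.0, a_minus=0.5)
```

A sample path of X simply rescales the positive and negative excursions of Y by different factors, which is the "variance changing by regions" behaviour discussed below.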
Our motivation stems partially from the connection to stochastic differential equations driven by the fractional Brownian motion with discontinuous diffusion coefficient, studied in Garzón et al. (2017).
During the past two decades interest in the existence and uniqueness of solutions to stochastic differential equations driven by a fractional Brownian motion has been very intense, and there have been many advances in their theory and applications. In particular, strong solutions of the following stochastic differential equation (SDE in short)
under the usual conditions on the coefficients, such as Lipschitz continuity and linear growth, were established by Nualart and Rǎşcanu (2002), and have been considered by many authors; see Mishura (2008) and the references therein.
Nevertheless, the case of SDEs with discontinuous coefficients has been less explored. Most of the studied cases of stochastic differential equations driven by a fractional Brownian motion with discontinuous coefficients concern a discontinuous drift coefficient (for \(H>1/2\)). In this direction, in Mishura and Nualart (2004), the authors studied a drift that is Hölder continuous except on a finite number of points. Another type of discontinuity in SDEs driven by a fractional Brownian motion is related to adding a Poisson process to the equation. In Bai and Ma (2015), extending the results given in Mishura and Nualart (2004), the authors proved the existence of a strong solution to this kind of SDE driven by a fractional Brownian motion and a Poisson point process. To the best of our knowledge, in the fractional Brownian motion framework, there is only one preliminary work that studies equations with a discontinuous diffusion coefficient, written by Garzón et al. (2017). There the authors proved the existence and uniqueness of solutions to the SDE driven by the fractional Brownian motion \(B^H\) with \(H>\frac{1}{2}\) given by
where the function \(\sigma \) is given by
The authors showed that the explicit solution to the equation (1.3) is
It is straightforward to see that the existence and uniqueness of the explicit solution to Eq. (1.3) hold also if \(\alpha \) and \(1-\alpha \) are replaced with \(\alpha _+\) and \(\alpha _-\) satisfying \( 0< \alpha _- < \alpha _+\) (or \(0< \alpha _+ < \alpha _-\), respectively). Thus, motivated by Eq. (1.5), we define the oscillating Gaussian process directly by (1.1). Regarding related literature, we also mention the preprint Torres and Viitasaari (2019), where the existence and uniqueness of the solution to Eq. (1.3) with a more general discontinuous \(\sigma \) and a more general driving force were proved.
One of the reasons why SDEs with a discontinuous diffusion coefficient are interesting is their relation to the Skew Brownian motion. In the Brownian motion framework, the Skew Brownian motion appeared as a natural generalization of the Brownian motion. The Skew Brownian motion is a process that behaves like a Brownian motion except that the sign of each excursion is chosen using an independent Bernoulli random variable with parameter \(\alpha \in (0 , 1)\). For \(\alpha = 1/2,\) the process corresponds to a Brownian motion. This process is a Markov process and a semi-martingale. Moreover, it is a strong solution to the following SDE with local time (see Lejay 2006 for a survey). Let
where \(L_t^0(X)\) is the symmetric local time of X at 0. In the case of the Brownian motion, it follows from the Itô-Tanaka formula that the Eqs. (1.6) and (1.3) with \(\sigma (x) = \frac{1}{\alpha } \mathbf{1}_{\{x \ge 0\}} + \frac{1}{1-\alpha } \mathbf{1}_{ \{x < 0\}}\) are equivalent. For a comprehensive survey on Skew Brownian motion, see the work by Lejay (2006).
In the case of the fractional Brownian motion, the Tanaka type formulas are more complicated and no relations between the two types of equations are known to exist. The motivation for the authors in Garzón et al. (2017) to study Eq. (1.3) stemmed from this fact.
To the best of our knowledge, Lejay and Pigato (2018) is the only study that considers the inference of parameters related to an SDE with a discontinuous diffusion coefficient. The study considers the case of a discontinuous diffusion coefficient that can only attain two different values. More precisely, the authors of Lejay and Pigato (2018) studied the so-called oscillating Brownian motion, which is a solution to the SDE
where W is a standard Brownian motion and \(\sigma (x) = \alpha _+\mathbf{1}_{x \ge 0} + \alpha _- \mathbf{1}_{x < 0}, \quad x \in \mathbb {R}\). It is worth noting that while in the fractional Brownian motion case (1.5) solves (1.3) and we have adopted the name oscillating from Lejay and Pigato (2018) into our definition (1.1), in the Brownian motion case the solution of (1.7) is not given by (1.1) with \(Y=W\).
The authors in Lejay and Pigato (2018) proposed two natural consistent estimators, which are variants of the integrated volatility estimator. Moreover, the stable convergence of the renormalised estimators towards a certain Gaussian mixture was proven. The estimators are given by
Note that when the paths are strictly positive or strictly negative, only one of the estimators can be computed.
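For intuition, the following sketch implements one plausible discretisation of such occupation-time-normalised volatility estimators for the oscillating Brownian motion. It is a hypothetical reconstruction for illustration only, not the exact formulas (1.8) of Lejay and Pigato (2018), and all numerical values are assumptions.

```python
import numpy as np

def sign_restricted_vol(x, dt):
    """One plausible discrete variant of integrated-volatility estimators
    (hypothetical reconstruction, not the exact formulas of (1.8)):
    realised quadratic variation accumulated while the path is on one side
    of zero, normalised by the occupation time of that side."""
    dx2 = np.diff(x) ** 2
    pos = x[:-1] >= 0
    est_plus = np.sqrt(dx2[pos].sum() / (dt * pos.sum()))
    est_minus = np.sqrt(dx2[~pos].sum() / (dt * (~pos).sum()))
    return est_plus, est_minus

# Euler scheme for dX_t = sigma(X_t) dW_t with sigma = 2.0 above 0, 0.5 below
rng = np.random.default_rng(1)
dt, n, a_plus, a_minus = 1e-3, 200_000, 2.0, 0.5
x = np.zeros(n)
for i in range(1, n):
    sigma = a_plus if x[i - 1] >= 0 else a_minus
    x[i] = x[i - 1] + sigma * np.sqrt(dt) * rng.standard_normal()
est_p, est_m = sign_restricted_vol(x, dt)
```

As noted above, if the discrete path never visits one of the half-lines, the corresponding occupation time vanishes and that estimator cannot be computed.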
In addition to the above mentioned links to SDEs and the Skew Brownian motion, we note that (1.1) could be applied in various other modelling scenarios as well, making the oscillating Gaussian process an interesting object of study. For example, (1.1) can be viewed as a model for situations where the variance changes by regions.
One of the main interests of this paper is the estimation of the model parameters \(\alpha _+\) and \(\alpha _-\). One natural idea is to study estimators defined by (1.8), where X is given by (1.1). However, in the general case one cannot apply the independence of the Brownian increments. Thus, while the behaviour of the quadratic variation terms in the numerators of (1.8) is rather well understood in the case of Gaussian processes (see Viitasaari 2019 for a survey on the topic), it becomes much more complicated in the case of the non-Gaussian X defined by (1.1). Instead of applying (1.8), we define estimators based on moments and study their asymptotic properties. Moreover, we show that our moment based approach can be applied for a large class of driving Gaussian processes Y in (1.1).
Estimation of moments in a stationary model \(f(Y_t)\), with some general function f and a stationary Gaussian process Y, has been studied extensively in the literature. Results on central limit theorems in this context date back to Breuer and Major (1983), where the authors provided sufficient conditions on the function f and on the covariance function \(r(t) = \mathbb {E}(Y_tY_0)\) of Y that ensure the limiting normality of the properly normalised average of \(f(Y_t)\). While in Breuer and Major (1983) discrete time \(t\in \mathbb {Z}\) was considered, a similar statement holds also in continuous time, corresponding to our case; see e.g. Campese et al. (2018). Similarly, in the papers Dobrushin and Major (1979), Taqqu (1975, 1979) the authors studied the limiting behaviour of the moment estimator in the case where the assumptions of Breuer and Major (1983) are violated. In this case, one usually needs a different normalisation instead of the standard \(\sqrt{T}\), and the limiting distribution is not necessarily Gaussian.
After the development of the powerful Stein-Malliavin approach to central limit theorems in Nourdin and Peccati (2009), interest in the behaviour of the moment estimator in the model \(f(Y_t)\) increased again, and it is currently a topic of active research. For articles on the topic, we refer to Bai and Taqqu (2013), Campese et al. (2018), Nourdin and Nualart (2018), Nourdin and Peccati (2010), Nourdin et al. (2019) and the references therein, to name just a few recent ones.
The rest of the paper is organised as follows. In Sect. 2, we introduce the oscillating Gaussian processes and study their basic properties such as moments, covariance structures, and continuity. Section 3 is devoted to model calibration. We begin by showing that the moment estimators are consistent and satisfy central limit theorems under suitable assumptions on the driving Gaussian process. On top of that, we also consider the special case \(|\alpha _+|=|\alpha _-|\) in detail and study the corresponding estimators based on discrete observations. In Sect. 3.4, we briefly discuss how the Lamperti transform can be used to study oscillating Gaussian processes driven by self-similar Gaussian noise, and, as a particular example, we apply the method to the case of the bifractional Brownian motion. We end the paper with a short summary and a discussion of future prospects.
2 Oscillating Gaussian processes
Throughout this section we consider oscillating Gaussian processes \(X=(X_t)_{t\ge 0}\) defined by
$$\begin{aligned} X_t = \alpha _+ Y_t \mathbf{1}_{Y_t>0} + \alpha _- Y_t\mathbf{1}_{Y_t<0}, \end{aligned}$$
(2.1)
where \(Y = (Y_t)_{t\ge 0}\) is a stationary centered Gaussian process and \(\alpha _+\) and \(\alpha _-\) are positive parameters such that \(\alpha _+\ne \alpha _-\). Note that \(\alpha _+\) and \(\alpha _-\) describe the magnitude of the variations of X on different regions. Our goal is to estimate the unknown parameters \(\alpha _+\) and \(\alpha _-\). In order to do this, we assume that \(\mathbb {E}(Y_t^2)=1\). Note that the general case \(\mathbb {E}(Y_t^2) = \sigma ^2\) can be written as
where now \(\mathbb {E}(\tilde{Y}^2_t) = 1\). We also assume that the parameters \(\alpha _+\) and \(\alpha _-\) are both strictly positive. Finally, without any loss of generality, we assume that Y is continuous (recall that, by Belyaev (1960), a stationary Gaussian process is either continuous or unbounded on every interval), implying that the covariance function r is continuous as well.
Remark 1
We stress that our assumption \(\alpha _+,\alpha _->0\) is not very restrictive, and it is straightforward to extend our analysis to other cases. Indeed, the case \(\alpha _+,\alpha _-<0\) follows directly by symmetry. In the case \(\alpha _-<0<\alpha _+\) (or \(\alpha _->0>\alpha _+\), in view of a symmetry argument), we simply choose the negative solution for \(\alpha _-\) in Lemma 2.3 (cf. Remark 2). We note, however, that the reason why such extensions are straightforward to obtain is that we defined X directly by (2.1) instead of restricting ourselves to the situation where X is a solution to the SDE (1.3), in which case the solution is known to exist and to be of the form (2.1) only for \(\alpha _-,\alpha _+>0\).
Definition 2.1
(Oscillating Gaussian process (OGP)). Let Y be a centered stationary Gaussian process with variance \(\sigma ^2=1\) and covariance function \(r(t) = \mathbb {E}(Y_tY_0)\), and let \(\alpha _+,\alpha _->0, \alpha _+\ne \alpha _-\) be constants. We define the oscillating version X of Y by
$$\begin{aligned} X_t = \alpha _+ Y_t \mathbf{1}_{Y_t>0} + \alpha _- Y_t\mathbf{1}_{Y_t<0}, \quad t\ge 0. \end{aligned}$$
(2.2)
In the following lemmas we compute the moments and covariances of the OGP X defined in (2.2).
Lemma 2.2
Let \(n\ge 1\) be an integer and \(t\ge 0\) arbitrary. Then
$$\begin{aligned} \mathbb {E}\left( X_t^n\right) = \frac{\alpha _+^n + (-1)^n\alpha _-^n}{2}\,\mathbb {E}|N|^n = \frac{\alpha _+^n + (-1)^n\alpha _-^n}{2}\cdot \frac{2^{\frac{n}{2}}\Gamma \left( \frac{n+1}{2}\right) }{\sqrt{\pi }}, \end{aligned}$$
where \(N\sim \mathcal {N}(0,1)\).
Proof
By the definition of OGP, we have
Since Y is a centered stationary Gaussian process we have
where \(N\sim \mathcal {N}(0,1)\). Similarly,
Now the well-known formula \(\mathbb {E}|N|^n = \frac{2^{\frac{n}{2}}\Gamma \left( \frac{n+1}{2}\right) }{\sqrt{\pi }}\) for a standard normal variable implies the claim. \(\square \)
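As a numerical sanity check of Lemma 2.2 (our illustration; the values \(\alpha _+=2\) and \(\alpha _-=1/2\) are arbitrary), the closed-form moments can be compared against Monte Carlo averages over the marginal law:

```python
import numpy as np
from math import gamma, sqrt, pi

def abs_normal_moment(n):
    """E|N|^n = 2^(n/2) Gamma((n+1)/2) / sqrt(pi) for N ~ N(0, 1)."""
    return 2 ** (n / 2) * gamma((n + 1) / 2) / sqrt(pi)

def ogp_moment(n, a_plus, a_minus):
    """E(X_t^n) = (a_+^n + (-1)^n a_-^n) / 2 * E|N|^n, as in Lemma 2.2."""
    return (a_plus ** n + (-1) ** n * a_minus ** n) / 2 * abs_normal_moment(n)

rng = np.random.default_rng(0)
y = rng.standard_normal(2_000_000)       # marginal law of Y_t
x = np.where(y > 0, 2.0 * y, 0.5 * y)    # oscillating version, pointwise
```

For instance, `ogp_moment(1, ...)` is \((\alpha _+-\alpha _-)/\sqrt{2\pi }\) and `ogp_moment(2, ...)` is \((\alpha _+^2+\alpha _-^2)/2\), which the empirical averages of `x` and `x**2` reproduce.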
The following lemma allows us to compute the parameters \(\alpha _+\) and \(\alpha _-\) in terms of the moments.
Lemma 2.3
Let \(t>0\) be arbitrary. Then
$$\begin{aligned} \alpha _+ = \sqrt{\frac{\pi }{2}}\,\mathbb {E}(X_t) + \sqrt{\mathbb {E}(X_t^2) - \frac{\pi }{2}\left( \mathbb {E}(X_t)\right) ^2} \end{aligned}$$
(2.5)
and
$$\begin{aligned} \alpha _- = -\sqrt{\frac{\pi }{2}}\,\mathbb {E}(X_t) + \sqrt{\mathbb {E}(X_t^2) - \frac{\pi }{2}\left( \mathbb {E}(X_t)\right) ^2}. \end{aligned}$$
(2.6)
Proof
Since \(\Gamma (1) =1\) and \(\Gamma \left( \frac{3}{2}\right) = \frac{\sqrt{\pi }}{2}\), Lemma 2.2 yields
and
From the first equality we get
Plugging this into the second equality and performing some simple manipulations gives
Now
and since \(\alpha _->0\), we obtain the result. \(\square \)
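The inversion of Lemma 2.3 can be sketched in code; the closed forms below are the solutions of the two moment equations obtained in the proof, with the positive root chosen for \(\alpha _-\) (a sketch for illustration):

```python
from math import pi, sqrt

def moments_from_alphas(a_plus, a_minus):
    """Forward map of Lemma 2.2 for n = 1, 2:
    mu1 = (a_+ - a_-) / sqrt(2 pi), mu2 = (a_+^2 + a_-^2) / 2."""
    mu1 = (a_plus - a_minus) / sqrt(2 * pi)
    mu2 = (a_plus ** 2 + a_minus ** 2) / 2
    return mu1, mu2

def alphas_from_moments(mu1, mu2):
    """Inverse map of Lemma 2.3: solve the two moment equations and choose
    the positive root, which corresponds to alpha_- > 0."""
    root = sqrt(mu2 - (pi / 2) * mu1 ** 2)
    return sqrt(pi / 2) * mu1 + root, -sqrt(pi / 2) * mu1 + root
```

Composing the two maps returns the original parameters, confirming that \((\alpha _+,\alpha _-)\) are identifiable from the first two moments.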
Remark 2
Note that in the proof of Lemma 2.3 we applied the assumption \(\alpha _->0\). In the case \(\alpha _-<0<\alpha _+\), one has to choose the other solution to Eq. (2.7) yielding
In the next lemma, we derive the covariance function of the process X. This in turn allows us to establish consistency of our estimators.
Lemma 2.4
Let \(N_1\sim \mathcal {N}(0,1)\) and \(N_2\sim \mathcal {N}(0,1)\) be jointly Gaussian with \(Cov(N_1,N_2) = a\). Then
Proof
We have
Change of variables \(u = \frac{x}{\sqrt{2(1-a^2)}}\) and \(v=\frac{y}{\sqrt{2(1-a^2)}}\) gives
and using formula 3.5-5 in Rice (1944) we obtain
This proves the claim. \(\square \)
We are interested in the asymptotic behaviour of the covariance functions, which in turn translates into asymptotic properties of our estimators. For this purpose we adopt the standard Landau notation \(f(x) = O(g(x))\) as \(x\rightarrow L \in [-\infty ,\infty ]\), meaning that, as x approaches L, we have \(|f(x)| \le C|g(x)|\) for some constant C.
Corollary 2.5
Let \(N_1\sim \mathcal {N}(0,1)\) and \(N_2\sim \mathcal {N}(0,1)\) be jointly Gaussian with \(Cov(N_1,N_2) = a \in [-1,1]\), and let \(n\ge 1\) be an integer. Then, as \(a\rightarrow 0\), we have
and
Proof
It follows from Lemma 2.4 that
Now, the first claim follows from the fact that
The second claim follows similarly since
\(\square \)
Corollary 2.6
Let X be the oscillating Gaussian process defined in (2.2) and let \(r(t) = \mathbb {E}(Y_tY_0)\) be the covariance function of the associated Gaussian process Y. Suppose further that \(r(t) \rightarrow 0\) as \(|t|\rightarrow \infty \). Then, as \(|t-s|\rightarrow \infty \), we have
Proof
We have
Taking expectation and using Corollary 2.5 we get
Lemma 2.2 now implies the claim. \(\square \)
We also give the following result, providing the finite-dimensional densities of our process.
Proposition 2.7
Let X be the oscillating Gaussian process defined by (2.2) and let \(n\in \mathbb {N}\) and \(0\le t_1<t_2<\ldots <t_n\) be fixed. Suppose that the Gaussian random vector \((Y_{t_1},Y_{t_2},\ldots ,Y_{t_n})\) is non-degenerate and denote its density function by \(\phi _Y\). Then the law of the random vector \((X_{t_1},X_{t_2},\ldots ,X_{t_n})\) is absolutely continuous with respect to the Lebesgue measure with a density given by
where
In particular, for a fixed \(t\in \mathbb {R}\) the density function of \(X_t\) is given by
where \(\phi (z)\) is the density of a standard normal distribution.
Proof
From the very definition of (2.2) we obtain immediately that, for any \(t\in \mathbb {R}\) and any \(z\in \mathbb {R}\),
Thus the distribution function of the vector \((X_{t_1},X_{t_2},\ldots ,X_{t_n})\) is given by
Clearly this function is continuously differentiable with bounded derivative given by (2.8) on every subset \(B\subset \mathbb {R}^n\) on which each of the variables \(z_i\) has a constant, non-zero sign. Thus, the law of \((X_{t_1},X_{t_2},\ldots ,X_{t_n})\) restricted to any such B is absolutely continuous with density given by (2.8). Next we observe that we may split \(\mathbb {R}^n\) into
and \(2^n\) subsets \(B_j,j=1,2,\ldots ,2^n\) on which each of the variables has a constant, non-zero sign. The set \(B_0\) is clearly of zero measure, and since the law of \((X_{t_1},X_{t_2},\ldots ,X_{t_n})\) is absolutely continuous with density (2.8) on each \(B_j\), the result follows by a covering argument. \(\square \)
Remark 3
We remark that the distribution function of \((X_{t_1},X_{t_2},\ldots ,X_{t_n})\) is not differentiable in the set \(B_0\). However, since \(B_0\) is a (Lebesgue) zero set, we may define the density in \(B_0\) arbitrarily, and the density is understood in the usual sense as equivalence classes of \(L^1\) functions. In particular, we have defined it through (2.8) in the whole \(\mathbb {R}^n\).
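The one-dimensional density of Proposition 2.7 can be written down and checked to integrate to one; the parameter values below are illustrative assumptions:

```python
import numpy as np

def ogp_density(z, a_plus, a_minus):
    """One-dimensional density from Proposition 2.7: phi(z/a_+)/a_+ for
    z > 0 and phi(z/a_-)/a_- for z < 0, with phi the standard normal
    density; the value on the null set {z = 0} is immaterial (cf. Remark 3)."""
    z = np.asarray(z, dtype=float)
    scale = np.where(z > 0, a_plus, a_minus)
    return np.exp(-((z / scale) ** 2) / 2) / (np.sqrt(2 * np.pi) * scale)

# the density places mass 1/2 on each half-line, hence total mass one
grid = np.linspace(-10.0, 25.0, 70_001)
f = ogp_density(grid, a_plus=2.0, a_minus=0.5)
mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))   # trapezoidal rule
```

Note that the density is discontinuous at zero whenever \(\alpha _+\ne \alpha _-\), reflecting the different scalings of the two half-lines.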
We end this section with the following result that ensures the path continuity of the OGP X.
Proposition 2.8
Let X be the oscillating Gaussian process defined by (2.2). If Y has Hölder continuous paths of order \(\gamma \in (0,1]\) almost surely, then so does X.
Proof
Since the functions \(x\mathbf{1}_{x>0}\) and \(x\mathbf{1}_{x<0}\) are Lipschitz continuous, it follows that
is Lipschitz continuous as well. The result follows at once. \(\square \)
3 Model calibration
This section is devoted to the estimation of the unknown parameters \(\alpha _+,\alpha _-\) by the method of moments. Throughout, r denotes the covariance function
$$\begin{aligned} r(t) = \mathbb {E}(Y_tY_0) \end{aligned}$$
(3.1)
of the stationary Gaussian process Y in (2.2). Following the ideas of Lemma 2.3, we define
$$\begin{aligned} \hat{\alpha }_+(T) = \sqrt{\frac{\pi }{2}}\,\hat{\mu }_1(T) + \sqrt{\left| \hat{\mu }_2(T) - \frac{\pi }{2}\hat{\mu }_1(T)^2\right| } \end{aligned}$$
(3.2)
and
$$\begin{aligned} \hat{\alpha }_-(T) = -\sqrt{\frac{\pi }{2}}\,\hat{\mu }_1(T) + \sqrt{\left| \hat{\mu }_2(T) - \frac{\pi }{2}\hat{\mu }_1(T)^2\right| }, \end{aligned}$$
(3.3)
where \(\hat{\mu }_i(T), i=1,2\) are the classical moment estimators defined by
$$\begin{aligned} \hat{\mu }_i(T) = \frac{1}{T}\int _0^T X_t^i\,dt, \quad i=1,2. \end{aligned}$$
(3.4)
Remark 4
Note that here we have taken absolute values inside the square roots in order to obtain real-valued estimates for real-valued quantities. Since
this does not affect the asymptotic properties of the estimators.
The following result gives us consistency and can be viewed as one of our main theorems. The proof is postponed to Sect. 3.1.
Theorem 3.1
Assume that r given by (3.1) satisfies \(|r(T)|\rightarrow 0\) as \(T\rightarrow \infty \). Then, for any \(p\ge 1\), we have
$$\begin{aligned} \hat{\alpha }_+(T) \rightarrow \alpha _+ \end{aligned}$$
and
$$\begin{aligned} \hat{\alpha }_-(T) \rightarrow \alpha _- \end{aligned}$$
in \(L^p\), as \(T \rightarrow \infty \).
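To illustrate this numerically, the following sketch runs the whole moment-estimation pipeline under an assumed stationary Ornstein-Uhlenbeck driving process, whose covariance \(r(t)=e^{-|t|}\) vanishes at infinity; all parameter values are chosen only for demonstration.

```python
import numpy as np
from math import pi, sqrt

def estimate_alphas(x):
    """Moment estimators in the spirit of (3.2)-(3.3): plug the empirical
    moments into the closed forms of Lemma 2.3, with an absolute value
    inside the square root as in Remark 4."""
    mu1, mu2 = x.mean(), (x ** 2).mean()   # discretised time averages
    root = sqrt(abs(mu2 - (pi / 2) * mu1 ** 2))
    return sqrt(pi / 2) * mu1 + root, -sqrt(pi / 2) * mu1 + root

# assumed driving process: stationary OU, simulated exactly on a grid
rng = np.random.default_rng(7)
n, dt = 500_000, 0.01
rho = np.exp(-dt)                          # one-step autocorrelation
y = np.empty(n)
y[0] = rng.standard_normal()
for i in range(1, n):
    y[i] = rho * y[i - 1] + sqrt(1.0 - rho ** 2) * rng.standard_normal()
x = np.where(y > 0, 2.0 * y, 0.5 * y)      # true alpha_+ = 2, alpha_- = 1/2
a_plus_hat, a_minus_hat = estimate_alphas(x)
```

Over a long horizon the estimates land close to the true parameters, in line with the consistency statement above.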
In order to study the limiting distribution, we need some additional assumptions on the covariance function r.
Assumption 3.1
For the covariance function r of Y, given by (3.1), we assume that one of the following conditions holds:
-
(1)
The covariance function r satisfies \(r\in L^1(\mathbb {R})\).
-
(2)
We have that
$$\begin{aligned} \lim _{t\rightarrow \infty } r(t)t = C_2 \in (0, \infty ). \end{aligned}$$
-
(3)
There exists \(H\in \left( \frac{1}{2},1\right) \) such that
$$\begin{aligned} \lim _{t\rightarrow \infty } t^{2-2H}r(t) = C_3 \in (0,\infty ). \end{aligned}$$
Remark 5
The first condition in Assumption 3.1 corresponds to short-range dependence and the last condition corresponds to long-range dependence. The second condition corresponds to the border case, resulting in a logarithmic factor in our normalising sequence (see Theorem 3.3).
The following theorem gives the central limit theorem for the moment estimators. For the asymptotic covariances let \(C_2,C_3\) be given as in Assumption 3.1 and denote
and
We define matrices \(\Sigma _1,\Sigma _2,\Sigma _3 \in \mathbb {R}^{2\times 2}\) by setting, for \(i,j=1,2\),
and
Theorem 3.2
Let \(\hat{\mu }_1(T)\) and \(\hat{\mu }_2(T)\) be defined by (3.4), and let \(\hat{\mu }(T) = (\hat{\mu }_1(T),\hat{\mu }_2(T))\) and \(\mu = (\mu _1,\mu _2)\). Furthermore, let r be given by (3.1) and let \(\Sigma _i,i=1,2,3\) be given by (3.7)–(3.9). Then,
-
(1)
if r satisfies the condition (1) of Assumption 3.1,
$$\begin{aligned} \sqrt{T}\left( \hat{\mu }(T) - \mu \right) \rightarrow \mathcal {N}(0,\Sigma _1) \end{aligned}$$ in law as \(T \rightarrow \infty \),
-
(2)
if r satisfies the condition (2) of Assumption 3.1,
$$\begin{aligned} \sqrt{\frac{T}{\log T}}\left( \hat{\mu }(T)-\mu \right) \rightarrow \mathcal {N}(0,\Sigma _2) \end{aligned}$$ in law as \(T \rightarrow \infty \), and
-
(3)
if r satisfies the condition (3) of Assumption 3.1,
$$\begin{aligned} T^{1-H}\left( \hat{\mu }(T)-\mu \right) \rightarrow \mathcal {N}(0,\Sigma _3) \end{aligned}$$ in law as \(T \rightarrow \infty \).
Remark 6
By replacing \(\hat{\mu }_n(T)\) with
and normalising accordingly, one can obtain functional versions of the above limit theorems. That is, in cases (1) and (2) of Theorem 3.2, we obtain convergence in law in the space of continuous functions towards \(\sigma W_t\), where \(W_t\) is a Brownian motion. In case (3), the limiting process is \(\sigma B^H_t\), where \(B^H\) is the fractional Brownian motion. Indeed, the last case follows from a classical result by Taqqu (1975) and the first case from Campese et al. (2018) together with the fact that all moments of X are finite. However, from a practical point of view, translating these results into functional versions of the estimators \(\hat{\alpha }_+(T)\) and \(\hat{\alpha }_-(T)\) is not feasible. Indeed, this follows from the fact that in the functional central limit theorem for \(\hat{\mu }(t,T)\) the normalisation (subtracting the true value) is done inside the integral, while for \(\hat{\alpha }_+(T)\) and \(\hat{\alpha }_-(T)\) it is done after integration.
Theorems 3.1 and 3.2 now give us the following limiting distributions for the estimators \(\hat{\alpha }_+(T)\) and \(\hat{\alpha }_-(T)\). For the asymptotic covariance matrix, we also introduce the matrix
$$\begin{aligned} J_\alpha = \begin{pmatrix} \sqrt{\frac{\pi }{2}} - \frac{\pi \mu _1}{2R} & \frac{1}{2R} \\ -\sqrt{\frac{\pi }{2}} - \frac{\pi \mu _1}{2R} & \frac{1}{2R} \end{pmatrix}, \qquad R = \sqrt{\mu _2 - \frac{\pi }{2}\mu _1^2}, \end{aligned}$$
(3.10)
that is the Jacobian with respect to the variables \(\mu _1\) and \(\mu _2\) of the transformations (2.5) and (2.6).
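The entries of \(J_\alpha \) can be double-checked numerically; the sketch below differentiates the closed forms obtained in the proof of Lemma 2.3 and is our reconstruction for illustration, not a formula quoted from the paper.

```python
from math import pi, sqrt

def g(mu1, mu2):
    """Transformations (2.5)-(2.6): (mu1, mu2) -> (alpha_+, alpha_-),
    using the closed forms derived in the proof of Lemma 2.3."""
    root = sqrt(mu2 - (pi / 2) * mu1 ** 2)
    return sqrt(pi / 2) * mu1 + root, -sqrt(pi / 2) * mu1 + root

def jacobian(mu1, mu2):
    """Candidate closed form for J_alpha, obtained by differentiating g;
    a sketch that can be verified against finite differences of g."""
    root = sqrt(mu2 - (pi / 2) * mu1 ** 2)
    d1 = pi * mu1 / (2.0 * root)    # minus the derivative of root in mu1
    d2 = 1.0 / (2.0 * root)         # derivative of root in mu2
    return [[sqrt(pi / 2) - d1, d2],
            [-sqrt(pi / 2) - d1, d2]]
```

A central finite-difference check of `jacobian` against `g` at any admissible \((\mu _1,\mu _2)\) confirms the four partial derivatives.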
Theorem 3.3
Let \(\hat{\alpha }_+(T)\) and \(\hat{\alpha }_-(T)\) be defined by (3.2) and (3.3), respectively, and let \(\hat{\alpha }(T) = (\hat{\alpha }_+(T),\hat{\alpha }_-(T))\) and \(\alpha = (\alpha _+,\alpha _-)\). Furthermore, let r be given by (3.1), \(\Sigma _i,i=1,2,3\) be given by (3.7)–(3.9), and \(J_\alpha \) by (3.10). Then,
-
(1)
if r satisfies the condition (1) of Assumption 3.1,
$$\begin{aligned} \sqrt{T}\left( \hat{\alpha }(T)-\alpha \right) \rightarrow \mathcal {N}(0,J_\alpha \Sigma _1 J_\alpha ^T) \end{aligned}$$ in law as \(T \rightarrow \infty \),
-
(2)
if r satisfies the condition (2) of Assumption 3.1, then
$$\begin{aligned} \sqrt{\frac{T}{\log T}}\left( \hat{\alpha }(T)-\alpha \right) \rightarrow \mathcal {N}(0,J_\alpha \Sigma _2 J_\alpha ^T) \end{aligned}$$ in law as \(T \rightarrow \infty \), and
-
(3)
if r satisfies the condition (3) of Assumption 3.1, then
$$\begin{aligned} T^{1-H}\left( \hat{\alpha }(T)-\alpha \right) \rightarrow \mathcal {N}(0,J_\alpha \Sigma _3 J_\alpha ^T) \end{aligned}$$ in law as \(T \rightarrow \infty \).
Proof
The result follows from Theorems 3.1 and 3.2 together with a simple application of the multidimensional delta method. We leave the details to the reader. \(\square \)
3.1 Proofs of Theorems 3.1 and 3.2
We begin with the following version of the weak law of large numbers.
Proposition 3.4
(Laws of large numbers). Let \(n\ge 1\) and suppose that r given by (3.1) satisfies \(|r(T)| \rightarrow 0\) as \(T \rightarrow \infty \). Then, for any \(p\ge 1\), as \(T\rightarrow \infty \),
$$\begin{aligned} \frac{1}{T}\int _0^T X_t^n\,dt \rightarrow \frac{\alpha _+^n + (-1)^n\alpha _-^n}{2}\,\mathbb {E}|N|^n \end{aligned}$$
(3.11)
in \(L^p\).
Proof
By Lemma 2.2, the right-hand side of (3.11) is exactly the moment \(\mu _n\). Thus in order to prove the claim, we have to show that
where \(\Vert \cdot \Vert _p\) denotes the \(L^p\) norm. We first observe that it suffices to prove convergence in probability. Indeed, by applying Minkowski's inequality and the stationarity of X, we have, for every \(p\ge 1\) and \(\epsilon >0\), that
Thus, for every \(p\ge 1\), the family
of random variables is uniformly integrable. Now the result follows from the fact that uniform integrability and convergence in probability together imply convergence in \(L^1\), i.e.
in \(L^1\) as \(T \rightarrow \infty \). Let us now prove the convergence in \(L^2\), which then implies the convergence in probability. By Corollary 2.6, we have that
where \(a(u,s) = O(|r(s-u)|)\). Writing
and choosing \(T_0\) such that \(|r(u-s)|<\epsilon \) on \(\{(u,s)\in [0,T]^2,|u-s|\ge T_0\}\) yields the result. \(\square \)
Proof of Theorem 3.1
By Proposition 3.4, we have that \((\hat{\mu }_1(T),\hat{\mu }_2(T)) \rightarrow (\mu _1,\mu _2)\) in \(L^p\) as \(T\rightarrow \infty \). As \( \displaystyle \sup _{T\ge 1}\Vert \hat{\mu }_1(T)\Vert _p < \infty \) for all \(p\ge 1\), it follows from Hölder's inequality that, for any \(r>0\), we have
where C is a constant. Thus
Now, using \(|\sqrt{a}-\sqrt{b}| \le \sqrt{|a-b|}\) and the triangle inequality, we get
The claim now follows from the fact that, for any random variable Z and for any \(p\ge 2\),
\(\square \)
We proceed now to the proof of Theorem 3.2. Before that we recall some preliminaries. In particular, we recall the concept of Hermite decomposition. For details, see e.g. the monographs Janson (1997), Nualart (2006).
Let \(N\sim \mathcal {N}(0,1)\) and let f be a function such that \(\mathbb {E}\left( f(N)^2\right) < \infty \). Then f admits the Hermite decomposition
$$\begin{aligned} f(x) = \sum _{k=0}^\infty \beta _k H_k(x), \end{aligned}$$
(3.12)
where \(H_k,k=0,1,\ldots \) are the Hermite polynomials. The coefficients \(\beta _k\) are given by \(\beta _0 = \mathbb {E}f(N)\) and
$$\begin{aligned} \beta _k = \frac{1}{k!}\,\mathbb {E}\left( f(N)H_k(N)\right) , \quad k\ge 1. \end{aligned}$$
(3.13)
The index \(d=\min \{k\ge 1: \beta _k \ne 0\}\) is called the Hermite rank of f. For our purposes we need to consider the functions
The Hermite decompositions of \(f_1\) and \(f_2\) are denoted by
and
respectively. Moreover, we also recall the fact that if \(f_1\) and \(f_2\) admit the representations (3.14) and (3.15), then for jointly Gaussian random variables \(N_1,N_2\sim \mathcal {N}(0,1)\) we have
$$\begin{aligned} \mathbb {E}\left( f_1(N_1)f_2(N_2)\right) = \sum _{k=0}^\infty k!\,\beta _{1,k}\beta _{2,k}\left( \mathbb {E}(N_1N_2)\right) ^k. \end{aligned}$$
(3.16)
Proof of Theorem 3.2
By the Cramér-Wold device, it suffices to prove that each linear combination
when properly normalised, converges towards a normal distribution. By using the representations (3.14) and (3.15), it follows that \(Z(y_1,y_2,T)\) has the representation
where \( \gamma _k = y_1\beta _{1,k} + y_2\beta _{2,k}. \) Note also that we have \( \mathbb {E}\hat{\mu }_i(T) = \mu _i,\quad i=1,2, \) and thus \(\gamma _0 = 0\), i.e. \(Z(y_1,y_2,T)\) is a normalised sequence. We begin with the first case, which is relatively easy. Indeed, suppose that the condition (1) of Assumption 3.1 holds. Then, as r is integrable, the continuous version of the Breuer-Major theorem (see e.g. Campese et al. 2018) implies the claim directly.
Under the other two conditions, we first note that the only contributing factor to the limiting distribution in (3.17) is
This follows from the fact that
and clearly
under the condition (2) and
under the condition (3). Thus it suffices to prove that
converges towards a normal distribution, where \(l(T) = \sqrt{\frac{T}{\log T}}\) under the condition (2) and \(l(T) = T^{1-H}\) under the condition (3). The convergence of \(\frac{l(T)}{T}\int _0^T Y_t dt\) follows from the fact that Y is Gaussian and the variance converges. Indeed, we have that
Under conditions (2) and (3) of Assumption 3.1 we obtain that, in both cases,
This can be seen by observing first that
diverges as \(T\rightarrow \infty \). Hence we may apply L’Hopital’s rule which, under condition (3), gives us
Similarly, under condition (2) of Assumption 3.1 we obtain by L’Hopital’s rule that
Hence, in order to obtain the limiting normality, it suffices to show that either \(\beta _{1,1}\ne 0\) or \(\beta _{1,2}\ne 0\). However, in view of Lemma 2.2 and simple computations, we get immediately that
and
This proves the claimed convergence in law, and thus it remains to compute the asymptotic covariances. For this it suffices to study the asymptotic behaviour of the quantities
Using Fubini’s Theorem and representations (3.14)–(3.15) together with (3.16) we get
Thus it suffices to study the asymptotic behaviour of the properly normalised quantity
Let us first consider condition (1) of Assumption 3.1. By comparing with the asymptotic variance provided by the Breuer-Major theorem, we obtain immediately that
as \(T\rightarrow \infty \). Together with
the representation (3.7) follows. Under conditions (2) and (3), we again observe that only the first term \(k=1\) in (3.21) contributes to the limit. Now the claim follows directly from Eqs. (3.18), (3.19), and (3.20) together with the above computations. This concludes the proof. \(\square \)
3.2 Note on the case \(|\alpha _+| = |\alpha _-|\)
The proof of Theorem 3.3 relied on the fact that \(\alpha _+ \ne \alpha _-\). In this case \(\varsigma _1\ne 0\) and \(\varsigma _2\ne 0\), implying the same rate of convergence and limiting normality for both moment estimators \(\hat{\mu }_1\) and \(\hat{\mu }_2\). Taking into account Remark 1, Theorem 3.3 can be extended easily to cover all the cases \(|\alpha _+|\ne |\alpha _-|\) by using symmetry and choosing \(\alpha _-\) in Lemma 2.3 appropriately. For the sake of completeness of the presentation, we treat the case \(|\alpha _+| = |\alpha _-|\). We begin with the case \(\alpha _+ = \alpha _- = \alpha > 0\) (the case \(\alpha <0\) following from symmetry). In this case, for \(\varsigma _1,\varsigma _2\) given by (3.5)-(3.6), we have \(\varsigma _1 \ne 0\) and \(\varsigma _2 = 0\). It follows that Theorems 3.2 and 3.3 apply, and we obtain the limiting normality. However, in this case the matrices \(\Sigma _2\) and \(\Sigma _3\) are of the form
and
This means that the rates given in Theorem 3.2 are not correct for the estimator \(\hat{\mu }_2(T)\). We also note that in this case, we actually have
Thus \(\alpha \) cannot be recovered from the first moment \(\mathbb {E}X_t = 0\). On the other hand, we have \(\mathbb {E}X_t^2 = \alpha ^2\), from which we get an estimator \(\hat{\alpha }(T) = \sqrt{\hat{\mu }_2(T)}\). From this we get a variant of Theorem 3.3 for \(\hat{\alpha }(T)\) once the asymptotic behaviour of \(\hat{\mu }_2\) is established. For this purpose, we consider the Rosenblatt distribution. For a given \(H\in \left( \frac{1}{2},1\right) \), a random variable \(Z^H\) follows the Rosenblatt distribution if it has the following representation
where \(\xi _k \sim N(0,1)\) are independent and \(\left( \lambda _k\right) _{k\ge 1}\) is the eigenvalue sequence of an integral operator \(A: L^2(|y|^{-H}dy) \mapsto L^2(|y|^{-H}dy)\) given by
For details on the Rosenblatt distribution, we refer to Embrechts and Maejima (2002), Taqqu (2011), Tudor (2013).
In our case the Rosenblatt distribution arises from the following result, which is nothing more than a simplified continuous-time version of the original example given by Rosenblatt (1961).
Lemma 3.5
Let Y be a centered stationary Gaussian process with unit variance and a covariance function r satisfying condition (3) of Assumption 3.1 with some \(H\in \left( \frac{3}{4},1\right) \). Then, as \(T\rightarrow \infty \), we have
in law, where \(Z^H\) follows the Rosenblatt distribution.
Remark 7
For any \(H\in \left( \frac{1}{2},1\right) \), the variance of the Rosenblatt distribution equals one. Thus the constant in Lemma 3.5 stems from the normalising sequence, which can be computed by L'Hopital's rule and the arguments of the proof of Theorem 3.2.
The following result gives the precise asymptotic behaviour of the estimator \(\hat{\mu }_2(T)\).
Proposition 3.6
Let \(\hat{\mu }_2(T)\) be defined by (3.4) and let r be given by (3.1). Suppose further that \(\alpha _+ = \alpha _- = \alpha \). Then
-
(1)
If r satisfies the condition (1), condition (2), or condition (3) with \(H\in \left( \frac{1}{2},\frac{3}{4}\right) \) of Assumption 3.1,
$$\begin{aligned} \sqrt{T}\left( \hat{\mu }_2(T) - \mu _2\right) \rightarrow \mathcal {N}\left( 0,4\alpha ^4\int _0^\infty r^2(v)dv\right) \end{aligned}$$ in law as \(T \rightarrow \infty \).
-
(2)
If r satisfies the condition (3) of Assumption 3.1 with \(H=\frac{3}{4}\),
$$\begin{aligned} \sqrt{\frac{T}{\log T}}\left( \hat{\mu }_2(T)-\mu _2\right) \rightarrow \mathcal {N}\left( 0,4\alpha ^4C_3^2\right) \end{aligned}$$ in law as \(T \rightarrow \infty \).
-
(3)
If r satisfies the condition (3) of Assumption 3.1 with \(H\in \left( \frac{3}{4},1\right) \),
$$\begin{aligned} T^{2-2H}\left( \hat{\mu }_2(T)-\mu _2\right) \rightarrow \frac{2C_3\alpha ^2}{\sqrt{(4H-2)(4H-3)}} Z^H \end{aligned}$$ in law as \(T \rightarrow \infty \), where \(Z^H\) follows the Rosenblatt distribution.
Proof
Recall
Since now \(X_t = \alpha Y_t\) is Gaussian, the limiting normality in items (1)-(2) follows directly by an application of (Sottinen and Viitasaari 2018, Lemma 3.8) (see also the computations in the proof of Proposition 4.1.(iii) in Sottinen and Viitasaari (2018) on the border case with a logarithmic factor in the rate). Moreover, now
from which we obtain the claimed asymptotic variances by using the same arguments as in the proof of Theorem 3.2. Finally, item (3) follows directly from Lemma 3.5 by using \(X_t^2 = \alpha ^2 Y_t^2\). \(\square \)
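As a numerical sanity check of item (1), the limiting variance \(4\alpha ^4\int _0^\infty r^2(v)dv\) can be illustrated by simulation. The sketch below is our own illustration, not part of the original argument: it takes Y to be a stationary Ornstein–Uhlenbeck process with \(r(v)=e^{-v}\) (square-integrable, so item (1) applies), for which the predicted variance is \(4\alpha ^4\cdot \frac{1}{2}=2\alpha ^4\).

```python
import numpy as np

rng = np.random.default_rng(0)

alpha, T, dt, reps = 1.0, 500.0, 0.1, 400
n = int(T / dt)
phi = np.exp(-dt)                 # one-step autocorrelation, r(dt) = exp(-dt)
sig = np.sqrt(1.0 - phi**2)

# Exact simulation of `reps` independent stationary OU paths with r(t) = exp(-t),
# accumulating the Riemann sum for mu2_hat = (1/T) int_0^T X_t^2 dt on the fly.
y = rng.standard_normal(reps)
sum_sq = np.zeros(reps)
for _ in range(n):
    sum_sq += y**2
    y = phi * y + sig * rng.standard_normal(reps)
mu2_hat = alpha**2 * sum_sq / n   # X = alpha * Y because alpha_+ = alpha_- = alpha
stat = np.sqrt(T) * (mu2_hat - alpha**2)

# Predicted limiting variance: 4 * alpha**4 * int_0^inf exp(-2v) dv = 2 * alpha**4
print(float(np.var(stat)))
```

The empirical variance of \(\sqrt{T}(\hat{\mu }_2(T)-\mu _2)\) should be close to 2 for these parameter values.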
Consider next the case \(\alpha _+ = - \alpha _- = \alpha > 0\). Then \(\varsigma _1 = \varsigma _2 = 0\) and \(X_t = \alpha |Y_t|\). In this case \(\alpha \) can be identified from the estimator \(\hat{\mu }_1\).
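To see the identification concretely: for unit-variance Y one has \(\mathrm {E}|Y_0| = \sqrt{2/\pi }\), so \(\mu _1 = \alpha \sqrt{2/\pi }\) and \(\alpha = \sqrt{\pi /2}\,\mu _1\). The following minimal sketch (our own illustration, using an OU driving process and a plain Riemann approximation of \(\hat{\mu }_1\), not the paper's display (3.2) verbatim) recovers \(\alpha \) this way.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, T, dt = 2.0, 2000.0, 0.1
n = int(T / dt)
phi, sig = np.exp(-dt), np.sqrt(1.0 - np.exp(-2.0 * dt))

# One long stationary OU path (unit variance, r(t) = exp(-t)).
y = np.empty(n)
y[0] = rng.standard_normal()
for k in range(1, n):
    y[k] = phi * y[k - 1] + sig * rng.standard_normal()

x = alpha * np.abs(y)                        # alpha_+ = -alpha_- = alpha gives X = alpha|Y|
mu1_hat = x.mean()                           # Riemann approximation of (1/T) int_0^T X_t dt
alpha_hat = np.sqrt(np.pi / 2.0) * mu1_hat   # inverts mu_1 = alpha * E|Y_0| = alpha*sqrt(2/pi)
print(alpha_hat)
```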
Proposition 3.7
Let \(\hat{\mu }_1(T)\) be defined by (3.4) and let r be given by (3.1). Suppose further that \(\alpha _+ = -\alpha _- = \alpha \). Then
-
(1)
If r satisfies the condition (1), condition (2), or condition (3) with \(H\in \left( \frac{1}{2},\frac{3}{4}\right) \) of Assumption 3.1,
$$\begin{aligned} \sqrt{T}\left( \hat{\mu }_1(T) - \mu _1\right) \rightarrow \mathcal {N}\left( 0,2\alpha ^2\int _0^\infty Cov\left( |Y_t|,|Y_0|\right) dt\right) \end{aligned}$$in law as \(T \rightarrow \infty \).
-
(2)
If r satisfies the condition (3) of Assumption 3.1 with \(H=\frac{3}{4}\),
$$\begin{aligned} \sqrt{\frac{T}{\log T}}\left( \hat{\mu }_1(T)-\mu _1\right) \rightarrow \mathcal {N}\left( 0,\frac{8C^2_3\alpha ^2}{\pi }\right) \end{aligned}$$in law as \(T \rightarrow \infty \).
-
(3)
If r satisfies the condition (3) of Assumption 3.1 with \(H\in \left( \frac{3}{4},1\right) \),
$$\begin{aligned} T^{2-2H}\left( \hat{\mu }_1(T)-\mu _1\right) \rightarrow \frac{2C_3\alpha }{\sqrt{\pi (2H-1)(4H-3)}} Z^H \end{aligned}$$in law as \(T \rightarrow \infty \), where \(Z^H\) follows the Rosenblatt distribution.
Proof
Recall
Now \(X_t = \alpha |Y_t|\). It is easy to check from (3.13) that the function \(x\mapsto |x|\) has Hermite rank 2, and hence the limiting normality and the claimed asymptotic variance in item (1) follow again from the Breuer-Major theorem and the square-integrability of the function r. For items (2) and (3) we get, by using the same arguments as in the proof of Theorem 3.2, that the higher order terms in the Hermite decomposition (3.12) do not contribute to the limit. Thus the only contributing factor is actually
where
The asymptotic behaviour for (3.22) was already discussed in the proof of Proposition 3.6, from which the result follows by computing the asymptotic variances. This concludes the proof. \(\square \)
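The Hermite-rank claim used in the proof is easy to verify numerically: for \(Z\sim \mathcal {N}(0,1)\) one has \(\mathrm {E}[|Z|H_1(Z)] = \mathrm {E}[|Z|Z] = 0\) by symmetry, while \(\mathrm {E}[|Z|H_2(Z)] = \mathrm {E}[|Z|(Z^2-1)] = \mathrm {E}|Z|^3 - \mathrm {E}|Z| = \sqrt{2/\pi } \ne 0\). A quick Monte Carlo confirmation (our own check, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(4)
z = rng.standard_normal(10**7)

# First two Hermite coefficients of x -> |x|: E[|Z| H_1(Z)] should vanish,
# while E[|Z| H_2(Z)] = sqrt(2/pi) > 0, so |x| has Hermite rank 2.
c1 = np.mean(np.abs(z) * z)
c2 = np.mean(np.abs(z) * (z**2 - 1.0))
print(c1, c2, np.sqrt(2.0 / np.pi))
```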
3.3 Estimation based on discrete observations
In practice, one does not observe the continuous path of X. Instead, one observes X at some discrete time points \(0\le t_0<t_1< \ldots<T_N<\infty \), and in practical applications the integrals in (3.4) are therefore approximated by discrete sums. Thus the natural moment estimators \(\tilde{\mu }_n(N)\) are defined by
where \(\Delta t_k = t_k - t_{k-1}\). The corresponding estimators \(\tilde{\alpha }_+(N)\) and \(\tilde{\alpha }_-(N)\) for parameters \(\alpha _+\) and \(\alpha _-\) are
and
Let \(\Delta _N = \max _k \Delta t_k\). In order to obtain consistency and asymptotic normality for the discretised versions, we have to assume that \(T_N \rightarrow \infty \) and, at the same time, that \(\Delta _N \rightarrow 0\) in a suitable way. The following proposition studies the difference between \(\hat{\mu }_n(T_N)\) and \(\tilde{\mu }_n(N)\).
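The discretised procedure can be sketched as follows. This is our own hedged rendering: an oscillating process driven by an OU path is observed on a random (irregular) grid, \(\tilde{\mu }_1\) and \(\tilde{\mu }_2\) are formed as Riemann sums, and \(\alpha _\pm \) are recovered via the unit-variance identities \(\mu _1 = (\alpha _+-\alpha _-)/\sqrt{2\pi }\) and \(\mu _2 = (\alpha _+^2+\alpha _-^2)/2\), which is the natural inversion behind (3.24)–(3.25) rather than the paper's displays verbatim.

```python
import numpy as np

rng = np.random.default_rng(2)

a_plus, a_minus, T = 3.0, 1.0, 3000.0
# Irregular observation times on [0, T].
t = np.sort(rng.uniform(0.0, T, size=60000))
dt = np.diff(t)

# Exact stationary OU values at the times t (unit variance, r(t) = exp(-t)).
phi, sig = np.exp(-dt), np.sqrt(1.0 - np.exp(-2.0 * dt))
y = np.empty(t.size)
y[0] = rng.standard_normal()
for k in range(1, t.size):
    y[k] = phi[k - 1] * y[k - 1] + sig[k - 1] * rng.standard_normal()

x = np.where(y > 0, a_plus * y, a_minus * y)   # oscillating Gaussian process
T_N = t[-1]
mu1 = np.sum(x[:-1] * dt) / T_N                # discrete (1/T) int_0^T X_t dt
mu2 = np.sum(x[:-1] ** 2 * dt) / T_N           # discrete (1/T) int_0^T X_t^2 dt

# Invert mu_1 = (a_+ - a_-)/sqrt(2 pi) and mu_2 = (a_+^2 + a_-^2)/2.
d = np.sqrt(2.0 * np.pi) * mu1                 # estimate of a_+ - a_-
s = np.sqrt(max(4.0 * mu2 - d**2, 0.0))        # estimate of a_+ + a_-
a_plus_hat, a_minus_hat = (s + d) / 2.0, (s - d) / 2.0
print(a_plus_hat, a_minus_hat)
```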
Proposition 3.8
Let r(t) be given by (3.1) and denote the variogram of the stationary process Y by c(t), i.e.
Then, for any \(n\ge 1\) and for any \(p\ge 1\), there exists a constant \(C=C(n,p,\alpha _+,\alpha _-)\) such that
Proof
We have, by the Minkowski inequality, that
Using,
we get, for any \(s,u\ge 0\), that
Thus, a repeated application of the Hölder inequality together with the fact that \(\sup _{s\ge 0}\Vert X_s\Vert _p < \infty \) implies that, for every \(q>p\), we have
where C is a constant. Moreover, by the proof of Proposition 2.8, we have,
Since Y is Gaussian, hypercontractivity implies that
Now stationarity of Y gives
Thus we observe
proving the claim. \(\square \)
We can now easily deduce the following results on the asymptotic properties of the estimators \(\tilde{\alpha }_+\) and \(\tilde{\alpha }_-\).
Theorem 3.9
Let \(\tilde{\alpha }_+(N)\) and \(\tilde{\alpha }_-(N)\) be defined by (3.24) and (3.25), respectively. Suppose that r given by (3.1) satisfies \(r(T) \rightarrow 0\) as \(T\rightarrow \infty \) and that \(\sup _{0\le s\le T}c(s)\rightarrow 0\) as \(T\rightarrow 0\). If \(T_N \rightarrow \infty \) and \(\Delta _N \rightarrow 0\) as \(N \rightarrow \infty \), then for any \(p\ge 1\),
and
in \(L^p\).
Proof
Using the arguments of the proof of Theorem 3.1 together with Proposition 3.8 we deduce that
and
Thus the claim follows from Theorem 3.1. \(\square \)
Theorem 3.10
Let \(\tilde{\alpha }_+(N)\) and \(\tilde{\alpha }_-(N)\) be defined by (3.24) and (3.25), respectively, and let \(\tilde{\alpha }(N) = (\tilde{\alpha }_+(N),\tilde{\alpha }_-(N))\) and \(\alpha = (\alpha _+,\alpha _-)\). Furthermore, let r be given by (3.1), \(\Sigma _i,i=1,2,3\) be given by (3.7)–(3.9), and \(J_\alpha \) by (3.10). Suppose that \(\displaystyle \sup _{0\le s\le t}c(s)\rightarrow 0\) as \(t\rightarrow 0\), \(T_N \rightarrow \infty \), and \(\Delta _N \rightarrow 0\) as \(N \rightarrow \infty \). Denote
Then,
-
(1)
if r satisfies the condition (1) of Assumption 3.1,
$$\begin{aligned} \sqrt{T_N}\left( \tilde{\alpha }(N)-\alpha \right) \rightarrow \mathcal {N}(0,{J_\alpha \Sigma _1J_\alpha ^T}) \end{aligned}$$in law for every partition \(0\le t_0< \ldots < T_N\) satisfying \(\sqrt{T_N}h(N)\rightarrow 0\),
-
(2)
if r satisfies the condition (2) of Assumption 3.1,
$$\begin{aligned} \sqrt{\frac{T_N}{\log T_N}}\left( \tilde{\alpha }(N)-\alpha \right) \rightarrow \mathcal {N}(0,{J_\alpha \Sigma _2J_\alpha ^T}) \end{aligned}$$in law for every partition \(0\le t_0< \ldots < T_N\) satisfying \(\sqrt{\frac{T_N}{\log T_N}}h(N)\rightarrow 0\), and
-
(3)
if r satisfies the condition (3) of Assumption 3.1,
$$\begin{aligned} T_N^{1-H}\left( \tilde{\alpha }(N)-\alpha \right) \rightarrow \mathcal {N}(0,{J_\alpha \Sigma _3J_\alpha ^T}) \end{aligned}$$in law for every partition \(0\le t_0< \ldots < T_N\) satisfying \(T_N^{1-H}h(N)\rightarrow 0\).
Proof
The additional conditions on the mesh together with Proposition 3.8 guarantee that
where \(l(T_N)\) is the corresponding normalisation for each case. Thus the result follows directly from Theorem 3.3. \(\square \)
One natural way to choose the observation points so that the above-mentioned conditions are fulfilled is to take N equidistant points with \(\Delta _N = \frac{\log N}{N}\). Then \(\Delta _N \rightarrow 0\) and \(T_N = N\Delta _N = \log N \rightarrow \infty \). If, in addition, Y is Hölder continuous of some order \(\theta >0\), then the remaining requirements are satisfied as well. Indeed, it follows from (Azmoodeh et al. 2014, Theorem 1) that if Y is Hölder continuous of order \(\theta >0\), then for any \(\epsilon >0\), we have
for some constant C. Thus \(h(N)\le \sqrt{C}\Delta _N^{\frac{1}{2}(\theta -\epsilon )}\), from which it is easy to see that, for \(\epsilon <\theta \),
3.4 Oscillating self-similar Gaussian processes
Self-similar processes form an interesting and applicable class of stochastic processes. In this subsection, we consider oscillating Gaussian processes driven by self-similar Gaussian processes Y. In other words, we consider processes of the type
where Y is H-self-similar for some \(H>0\). That is, for every \(a>0\), the finite dimensional distributions of the processes \((Y_{at})_{t\ge 0}\) and \((a^HY_t)_{t\ge 0}\) are the same. Throughout this section we assume that we have observed \(X_t\) on an interval [0, 1], and our aim is to estimate \(\alpha _+\) and \(\alpha _-\). The key ingredient is the Lamperti transform
It is well-known that U is stationary on \((-\infty ,0]\). Moreover, for \(t\ge 0\), we define a process
Clearly, observing X on [0, 1] is equivalent to observing \(\tilde{X}_t\) on \(t\ge 0\). This leads to the “moment estimators” \(\hat{\mu }_i(T)\) defined by
The corresponding parameter estimators \(\hat{\alpha }_+(T)\) and \(\hat{\alpha }_-(T)\) are defined by plugging \(\hat{\mu }_1(T)\) and \(\hat{\mu }_2(T)\) into (3.2) and (3.3), respectively. Indeed, a change of variable \(u=e^{t}\) gives
Thus studying the covariance function r of a stationary Gaussian process U given by (3.26) enables us to apply Theorems 3.1 and 3.3.
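The Lamperti mechanism is easy to verify in the simplest self-similar case: for a standard Brownian motion (\(H=\frac{1}{2}\)), the transform \(U_t = e^{-Ht}B_{e^t}\) is stationary with the OU-type covariance \(e^{-|t-s|/2}\). A small Monte Carlo check of this covariance identity (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

H, tau, reps = 0.5, 1.0, 200000
# Sample (B_1, B_{e^tau}) jointly: B_{e^tau} = B_1 + independent N(0, e^tau - 1).
b1 = rng.standard_normal(reps)
b2 = b1 + np.sqrt(np.exp(tau) - 1.0) * rng.standard_normal(reps)

u0 = b1                        # U_0   = e^{-H*0}   * B_{e^0}
ut = np.exp(-H * tau) * b2     # U_tau = e^{-H*tau} * B_{e^tau}
# Empirical covariance vs the stationary covariance exp(-tau/2).
print(float(np.mean(u0 * ut)), float(np.exp(-tau / 2.0)))
```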
3.5 The case of the bifractional Brownian motion
We end this section with an interesting example. We consider the bifractional Brownian motion, a class that covers, among others, fractional Brownian motions and standard Brownian motions. Recall that a bifractional Brownian motion \(B^{H,K}\) with \(H\in (0,1)\) and \(K\in (0,2)\) such that \(HK\in (0,1)\) is a centered Gaussian process with a covariance function
It is known that \(B^{H,K}\) is HK-self-similar. Furthermore, one recovers the fractional Brownian motion by setting \(K=1\), and the standard Brownian motion by further setting \(H=\frac{1}{2}\). Now the covariance function r of the Lamperti transform \(U_t=e^{-HKt}B^{H,K}_{e^t}\) has exponential decay (see Sottinen and Viitasaari 2018). Thus, we may apply item (1) of Theorem 3.3 to obtain that \( \sqrt{T}(\hat{\alpha }_+(T)-\alpha _+) \) and \(\sqrt{T}(\hat{\alpha }_-(T)-\alpha _-)\) are asymptotically normal. Similarly, discretising the integral in (3.27) and applying Theorems 3.9 and 3.10 allows us to consider parameter estimators based on discrete observations. We leave the details to the reader.
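To make the example concrete, recall the standard bifractional covariance \(R(s,t) = 2^{-K}\big ((s^{2H}+t^{2H})^K - |t-s|^{2HK}\big )\). The snippet below (our own illustration) checks the reduction to the Brownian covariance at \(K=1\), \(H=\frac{1}{2}\), and evaluates the Lamperti covariance \(r(\tau ) = e^{-HK\tau }R(1,e^{\tau })\) for one arbitrarily chosen parameter pair, illustrating its exponential decay:

```python
import numpy as np

# Bifractional covariance R(s,t) = 2^{-K} [ (s^{2H} + t^{2H})^K - |t-s|^{2HK} ].
def R(s, t, H, K):
    return 0.5**K * ((s ** (2 * H) + t ** (2 * H)) ** K - abs(t - s) ** (2 * H * K))

# Sanity check: K = 1, H = 1/2 recovers the Brownian covariance min(s, t).
assert abs(R(0.3, 0.7, 0.5, 1.0) - 0.3) < 1e-12

# Covariance of the Lamperti transform U_t = e^{-HKt} B^{H,K}_{e^t}:
# r(tau) = e^{-HK tau} R(1, e^tau), which decays exponentially in tau.
H, K = 0.6, 0.8
tau = np.arange(0.0, 11.0)
r = np.exp(-H * K * tau) * np.array([R(1.0, np.exp(v), H, K) for v in tau])
print(np.round(r, 3))
```

Note that \(r(0)=1\), consistent with U having unit variance.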
4 Discussion
In this paper we considered oscillating Gaussian processes and introduced moment-based estimators for the model parameters. Moreover, we proved consistency and asymptotic normality of the estimators under natural assumptions on the driving Gaussian process. One possible line of future research is to study the estimators based on (1.8) and compare them with our moment-based estimators. Another interesting and natural extension of our approach would be to consider oscillating processes with several (more than two) parameters and corresponding regions. This would make the model class more flexible and adaptive. Finally, a topic for future research would be to develop testing procedures for the model parameters.
References
Azmoodeh E, Sottinen T, Viitasaari L, Yazigi A (2014) Necessary and sufficient conditions for Hölder continuity of Gaussian processes. Stat Probab Lett 94:230–235
Bai L, Ma J (2015) Stochastic differential equations driven by fractional Brownian motion and Poisson point process. Bernoulli 21(1):303–334
Bai S, Taqqu M (2013) Multivariate limit theorems in the context of long-range dependence. J Time Ser Anal 34(6):717–743
Belyaev Y (1960) Local properties of the sample functions of stationary Gaussian processes. Teor Veroyatn Primen 5:128–131
Breuer P, Major P (1983) Central limit theorems for nonlinear functionals of Gaussian fields. J Multivar Anal 13(3):425–441
Campese S, Nourdin I, Nualart D (2018) Continuous Breuer-Major theorem: tightness and non-stationarity. Ann Probab (to appear)
Dobrushin RL, Major P (1979) Non-central limit theorems for non-linear functional of Gaussian fields. Z Wahrsch Verw Gebiete 50(1):27–52
Embrechts P, Maejima M (2002) Selfsimilar processes. Princeton series in applied mathematics. Princeton University Press, Princeton
Garzón J, León JA, Torres S (2017) Fractional stochastic differential equation with discontinuous diffusion. Stoch Anal Appl 35(6):1113–1123
Janson S (1997) Gaussian Hilbert spaces. Cambridge University Press, Cambridge
Lejay A (2006) On the constructions of the skew Brownian motion. Probab Surv 3:413–466
Lejay A, Pigato P (2018) Statistical estimation of the Oscillating Brownian Motion. Bernoulli 24(4B):3568–3602
Mishura Y (2008) Stochastic calculus for fractional Brownian motion and related processes. Springer, Berlin
Mishura Y, Nualart D (2004) Weak solutions for stochastic differential equations with additive fractional noise. Stat Probab Lett 70:253–261
Nourdin I, Nualart D (2018) The functional Breuer-Major theorem. Probab Theory Relat Fields (to appear)
Nourdin I, Peccati G (2009) Stein’s method on Wiener chaos. Probab Theory Relat Fields 145(1):75–118
Nourdin I, Peccati G (2010) Stein’s method and exact Berry-Esseen asymptotics for functionals of Gaussian fields. Ann Probab 37(6):2231–2261
Nourdin I, Peccati G, Yang X (2019) Berry-Esseen bounds in the Breuer-Major CLT and Gebelein’s inequality. Electron Commun Probab 24(34):12
Nualart D (2006) The Malliavin calculus and related topics. Springer, Berlin
Nualart D, Rǎşcanu A (2002) Differential equations driven by fractional Brownian motion. Collect Math 53(1):55–81
Rice SO (1944) Mathematical Analysis of Random Noise Part III. Bell Syst Tech J 23(3):282–332
Rosenblatt M (1961) Independence and dependence. In: Proceedings of 4th Berkeley symposium mathematical statistics and probability II. University California Press, Berkeley, pp 431–443
Sottinen T, Viitasaari L (2018) Parameter estimation for the Langevin equation with stationary-increment Gaussian noise. Stat Infer Stoch Process 21(3):569–601
Taqqu M (1975) Weak convergence to fractional Brownian motion and to the Rosenblatt process. Probab Theory Relat Fields 31(4):287–302
Taqqu M (1979) Convergence of integrated processes of arbitrary Hermite rank. Z Wahrsch Verw Gebiete 50:53–83
Taqqu M (2011) The Rosenblatt process. In: Davis R, Lii KS, Politis D (eds) Selected works of Murray Rosenblatt. Springer, New York, pp 159–179
Torres S, Viitasaari L (2019) Stochastic differential equations with discontinuous diffusions. arXiv:1908.03183
Tudor CA (2013) Analysis of variations for self-similar processes: a stochastic calculus approach. Springer International Publishing, Manhattan
Viitasaari L (2019) Sufficient and necessary conditions for limit theorems for quadratic variations of Gaussian sequences. Probab Surv 16:62–98
Acknowledgements
Open access funding provided by Aalto University. S. Torres is partially supported by the Project Fondecyt N. 1171335. P. Ilmonen and L. Viitasaari wish to thank the Vilho, Yrjö and Kalle Väisälä Foundation for financial support.
Ilmonen, P., Torres, S. & Viitasaari, L. Oscillating Gaussian processes. Stat Inference Stoch Process 23, 571–593 (2020). https://doi.org/10.1007/s11203-020-09212-6
Keywords
- Gaussian processes
- Oscillating processes
- Stationarity
- Self-similarity
- Parameter estimation
- Central limit theorem