Abstract
Let \(S^{H}=\{S^{H}_{t},t\geq0\}\) be a sub-fractional Brownian motion with Hurst index \(0< H<1\). In this paper, we give a local law of the iterated logarithm of the form
$$ \limsup_{s\downarrow0}\frac{ \vert S^{H}_{t+s}-S^{H}_{t} \vert }{s^{H}\sqrt{2\log^{+}\log^{+}(1/s)}}=1 $$
almost surely, for all \(t > 0\), where \(\log^{+}x=\max{\{1, \log x\}}\) for \(x>0\). As an application, we introduce the \(\Phi_{H}\)-variation of \(S^{H}\) driven by \(\Phi_{H}(x):= [x/\sqrt{2\log^{+}\log ^{+}(1/x)} ]^{1/H}\) \((x>0)\) with \(\Phi_{H}(0)=0\).
1 Introduction and main results
The quadratic variation and realized quadratic variation have been widely used in stochastic analysis and statistics of stochastic processes. The realized power variation of order \(p>0\) is a generalization of the quadratic variation, which is defined as
$$ \sum_{i=1}^{n} \vert X_{t_{i}}-X_{t_{i-1}} \vert ^{p}, $$(1.1)
where \(\{X_{t}, t>0\}\) is a stochastic process and \(\kappa=\{ 0=t_{0}< t_{1}<\cdots<t_{n}=t\}\) is a partition of \([0, t]\) with \(\max _{1\leq i\leq n}\{t_{i}-t_{i-1}\}\to0\). It was introduced in Barndorff-Nielsen and Shephard [1, 2] to estimate the integrated volatility in some stochastic volatility models used in quantitative finance and also, under an appropriate modification, to estimate the jumps of the processes under analysis. The main interest in these papers is the asymptotic behavior of the statistic (1.1), or some appropriate renormalized version of it, as \(n\to\infty\), when the process \(X_{t}\) is a stochastic integral with respect to a Brownian motion. Refinements of their results have been obtained in Woerner [3]. A further generalization of the realized quadratic variation is the so-called Φ-variation, which is defined by
$$ \sum_{i=1}^{n}\Phi\bigl( \vert X_{t_{i}}-X_{t_{i-1}} \vert \bigr), $$
where Φ is a nonnegative, increasing continuous function on \({\mathbb {R}}_{+}\) with \(\Phi(0)=0\). Let \({\mathscr {P}}([0,t])\) be the class of all partitions \(\kappa=\{0=t_{0}< t_{1}<\cdots<t_{n}=t\}\) of \([0, t]\) with \(|\kappa|:=\max_{1\leq i\leq n}\{t_{i}-t_{i-1}\}\). Then the Φ-variation of a stochastic process \(\{X_{t}, t>0\}\) is defined as
$$ S_{\Phi}(X,t):=\lim_{\delta\downarrow0}\sup \Biggl\{ \sum_{i=1}^{n}\Phi\bigl( \vert X_{t_{i}}-X_{t_{i-1}} \vert \bigr): \kappa\in{\mathscr {P}}\bigl([0,t]\bigr), \vert \kappa \vert \leq\delta \Biggr\} . $$
Consider the function
$$ \Phi_{H}(x):= \biggl[\frac{x}{\sqrt{2\log^{+}\log^{+}(1/x)}} \biggr]^{1/H},\quad x>0, $$
with \(\Phi_{H}(0)=0\) and \(0< H<1\), where \(\log^{+}x=\max\{1,\log x\}\) for \(x>0\). When X is a standard Brownian motion B, Taylor [4] first considered the \(\Phi_{1/2}\)-variation and proved that \(S_{\Phi _{1/2}}(B,t)=t\) for all \(t>0\). Kawada and Kôno [5] extended this to some stationary Gaussian processes W and proved that \(S_{\Phi_{1/2}}(W,t)=t\) for all \(t>0\) by using an estimate given by Kôno [6]. Recently, Dudley and Norvaiša [7] extended this to the fractional Brownian motion \(B^{H}\) with Hurst index \(H\in(0,1)\) and proved that \(S_{\Phi _{H}}(B^{H},t)=t\) for all \(t>0\). More generally, for a bi-fractional Brownian motion \(B^{H,K}\), Norvaiša [8] showed that \(S_{\Phi_{H,K}}(B^{H,K},t)=t\) if
On the other hand, since Chung’s law and Strassen’s functional law of the iterated logarithm appeared, the functional law of the iterated logarithm and its rates for some classes of Gaussian processes have been discussed by many authors (see, for example, Csörgö and Révész [9], Lin et al. [10], Dudley and Norvaiša [7], Malyarenko [11]). However, almost all results considered only some Gaussian processes with stationary increments, and there has been little systematic investigation on other self-similar Gaussian processes (see, for example, Norvaiša [8], Tudor and Xiao [12], and Yan et al. [13]). The main reason for this is the complexity of dependence structures for self-similar Gaussian processes which do not have stationary increments.
Motivated by these results, in this paper, we consider the law of the iterated logarithm and Φ-variation of a sub-fractional Brownian motion. Recall that a mean-zero Gaussian process \(S^{H}=\{S^{H}_{t},t\geq0\}\) is said to be a sub-fractional Brownian motion (in short, sub-fBm) with Hurst index \(H\in(0,1)\) if \(S^{H}_{0}=0\) and
$$ E\bigl[S^{H}_{s}S^{H}_{t}\bigr]=s^{2H}+t^{2H}-\frac{1}{2} \bigl[(s+t)^{2H}+ \vert t-s \vert ^{2H} \bigr] $$(1.2)
for all \(s,t>0\). When \(H=\frac{1}{2}\), this process coincides with the standard Brownian motion B. Sub-fBm was first introduced by Bojdecki et al. [14] as an extension of Brownian motion, and it arises from occupation time fluctuations of branching particle systems with Poisson initial condition. A sub-fBm with Hurst index H is H-self-similar and Hölder continuous, and it is long-range or short-range dependent according to the value of H. Here a process X is long-range dependent if \(\sum_{n\geq\alpha}\rho_{n}(\alpha)=\infty\) for any \(\alpha>0\), and it is short-range dependent if \(\sum_{n\geq\alpha}\rho_{n}(\alpha)<\infty \), where \(\rho_{n}(\alpha)=E[(X_{\alpha+1}-X_{\alpha})(X_{n+1}-X_{n})]\), \(\alpha>0\). However, when \(H\neq\frac{1}{2}\), it has no stationary increments. Moreover, it admits the following (quasi-helix) estimates:
$$ \min\bigl\{ 1,2-2^{2H-1}\bigr\} \vert t-s \vert ^{2H}\leq E\bigl[\bigl(S^{H}_{t}-S^{H}_{s}\bigr)^{2}\bigr]\leq\max\bigl\{ 1,2-2^{2H-1}\bigr\} \vert t-s \vert ^{2H} $$(1.3)
for all \(t,s\geq0\). More works on sub-fractional Brownian motion can be found in Bojdecki et al. [15, 16], Shen and Yan [17], Sun and Yan [18], Tudor [19, 20], Yan et al. [21, 22], and the references therein. From the above discussion, we see that the complexity of sub-fractional Brownian motion is very different from that of fractional Brownian motion or bi-fractional Brownian motion. Therefore, it seems interesting to study the law of the iterated logarithm and the Φ-variation of sub-fractional Brownian motion. In the present paper, our main objectives are to state and prove the following theorems.
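As a quick numerical companion to the definition above (ours, not part of the paper), the following Python sketch evaluates the sub-fBm covariance recalled above, samples a path by Cholesky factorization of the covariance matrix, and checks the quasi-helix bounds with the constants \(2-2^{2H-1}\) and 1, which apply in this order for \(H>\frac{1}{2}\); all function names are our own.

```python
import numpy as np

def subfbm_cov(s, t, h):
    """E[S^H_s S^H_t] = s^{2H} + t^{2H} - ((s+t)^{2H} + |s-t|^{2H})/2."""
    return s**(2*h) + t**(2*h) - 0.5 * ((s + t)**(2*h) + abs(s - t)**(2*h))

def sample_subfbm(times, h, rng):
    """Sample sub-fBm at the given positive times via Cholesky factorization."""
    c = np.array([[subfbm_cov(s, t, h) for t in times] for s in times])
    # A tiny diagonal jitter keeps the factorization numerically stable.
    l = np.linalg.cholesky(c + 1e-12 * np.eye(len(times)))
    return l @ rng.standard_normal(len(times))

h, s, t = 0.7, 1.0, 2.0
# Increment variance E[(S^H_t - S^H_s)^2], computed from the covariance:
inc_var = subfbm_cov(t, t, h) + subfbm_cov(s, s, h) - 2 * subfbm_cov(s, t, h)
# Quasi-helix bounds for H > 1/2:
lower = (2 - 2**(2*h - 1)) * (t - s)**(2*h)
upper = (t - s)**(2*h)
path = sample_subfbm(np.linspace(0.1, 1.0, 20), 0.3, np.random.default_rng(1))
```

Note that the variance \(E[(S^{H}_{t})^{2}]=(2-2^{2H-1})t^{2H}\) follows directly from the covariance, which the test below also checks.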
Theorem 1.1
Let \(0< H<1\). Then we have
$$ \limsup_{s\downarrow0}\frac{ \vert S^{H}_{t+s}-S^{H}_{t} \vert }{\varphi_{H}(s)}=1 $$
almost surely, for all \(t > 0\), where the function \(\varphi_{H}\) is defined by
$$ \varphi_{H}(s):=s^{H}\sqrt{2\log^{+}\log^{+}(1/s)},\quad s>0, $$
with \(\varphi_{H}(0)=0\), where \(\log^{+}x=\max{\{1, \log x\}}\) for \(x>0\).
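To make the normalizing functions concrete, here is a small Python sketch (ours, not from the paper) of \(\log^{+}\), \(\Phi_{H}\), and \(\varphi_{H}(s)=s^{H}\sqrt{2\log^{+}\log^{+}(1/s)}\), together with a numerical check that \(\Phi_{H}(\varphi_{H}(s))/s\to1\) as \(s\downarrow0\), i.e., that \(\Phi_{H}\) is asymptotically inverse to \(\varphi_{H}\) near zero.

```python
import math

def log_plus(x):
    """log^+ x = max{1, log x} for x > 0."""
    return max(1.0, math.log(x))

def Phi_H(x, h):
    """Phi_H(x) = [x / sqrt(2 log^+ log^+ (1/x))]^{1/H}, with Phi_H(0) = 0."""
    if x == 0.0:
        return 0.0
    return (x / math.sqrt(2.0 * log_plus(log_plus(1.0 / x)))) ** (1.0 / h)

def varphi_H(s, h):
    """varphi_H(s) = s^H sqrt(2 log^+ log^+ (1/s)), with varphi_H(0) = 0."""
    if s == 0.0:
        return 0.0
    return s**h * math.sqrt(2.0 * log_plus(log_plus(1.0 / s)))

# Near zero, Phi_H undoes varphi_H up to lower-order terms.
ratio = Phi_H(varphi_H(1e-30, 0.7), 0.7) / 1e-30
```

The composition is only asymptotically the identity (the iterated logarithms differ by lower-order terms), so the ratio is close to, but not exactly, 1.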
Theorem 1.2
Let \(0< H<1\), and let \(\Phi_{H}\) be defined as above. Then we have
$$ S_{\Phi_{H}}\bigl(S^{H},T\bigr)=T $$
for all \(T>0\).
As an immediate question raised by Theorem 1.2, one can consider the following asymptotic behavior:
as δ tends to zero, where \({\mathscr {L}}\) denotes a distribution, \(\phi(\delta)\uparrow\infty\) (\(\delta\to0\)), and \(S_{\Phi _{H}}(S^{H},T,\delta)\) is defined as follows:
Recall that when \(H=\frac{1}{2}\), the sub-fBm \(S^{H}\) coincides with a standard Brownian motion B, so the two results above are natural extensions of the corresponding results for Brownian motion (see, for example, Csörgö and Révész [9], Dudley and Norvaiša [7], Lin et al. [10]). This paper is organized as follows. In Sect. 2, we prove Theorem 1.1. In Sect. 3, we give the proof of Theorem 1.2.
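Before turning to the proofs, a brief numerical illustration (ours, using uniform partitions and a simulated Brownian path, i.e., the case \(H=\frac{1}{2}\)) of the partition sums behind (1.1) and the Φ-variation: the realized power variation with \(p=2\) approximates \(t\), while \(\sum_{i}\Phi_{1/2}(|\Delta_{i}B|)\) along one particular uniform partition is only a single term in the supremum defining \(S_{\Phi_{1/2}}(B,t)\), so no convergence is claimed for it here.

```python
import math
import numpy as np

def log_plus(x):
    return max(1.0, math.log(x))

def Phi_H(x, h):
    """Phi_H(x) = [x / sqrt(2 log^+ log^+ (1/x))]^{1/H}, with Phi_H(0) = 0."""
    if x == 0.0:
        return 0.0
    return (x / math.sqrt(2.0 * log_plus(log_plus(1.0 / x)))) ** (1.0 / h)

rng = np.random.default_rng(0)
n, t = 100_000, 1.0
dt = t / n
# A simulated standard Brownian path on [0, t] over the uniform partition.
b = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, math.sqrt(dt), n))])
incs = np.abs(np.diff(b))

# Realized power variation of order p = 2 (the realized quadratic variation).
qv = float(np.sum(incs**2))
# The Phi_{1/2}-sum along this particular partition (one term of the sup).
phi_sum = float(sum(Phi_H(x, 0.5) for x in incs))
```

Since \(\Phi_{1/2}(x)=x^{2}/(2\log^{+}\log^{+}(1/x))\leq x^{2}/2\), the Φ-sum along any single partition is dominated by the quadratic variation.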
2 Proof of Theorem 1.1
In this section and the next, we prove our main results. When \(H=\frac{1}{2}\), the sub-fBm \(S^{H}\) is a standard Brownian motion, and Theorem 1.1 and Theorem 1.2 are given in Taylor [4]. We therefore assume throughout the rest of the paper that \(H\neq\frac{1}{2}\).
Lemma 2.1
Let μ be a centered Gaussian measure in a linear space \({\mathbb {E}}\), and let \(A\subset{\mathbb {E}}\) be a symmetric convex set. Then we have
for any \(h\in{\mathbb {E}}\).
Inequality (2.1) is called Anderson’s inequality (see, for example, [23]). It admits the following version:
Let \(X_{1},\ldots,X_{n}\) and \(Y_{1},\ldots,Y_{n}\) both be jointly Gaussian with mean zero and such that the matrix \(\{EY_{i}Y_{j}-EX_{i}X_{j},1 \leq i,j\leq n\}\) is nonnegative definite. Then we have
$$ P \Bigl(\max_{1\leq j\leq n} \vert X_{j} \vert \geq x \Bigr)\leq P \Bigl(\max_{1\leq j\leq n} \vert Y_{j} \vert \geq x \Bigr) $$(2.2)
for any \(x>0\).
We also need the following tail probability estimate, which is Lemma 12.18 in Dudley and Norvaiša [7].
Lemma 2.2
(Dudley and Norvaiša [7])
Let \({\mathbb {B}}\) be a Banach space, and let \(S\subset{\mathbb {B}}\) be a compact set such that \(cS\subset S\) for each \(c\in(0,1]\). Assume that \(S(\delta_{0})\subset S\) is closed for some \(0<\delta_{0}\leq1\) and that
for \(0<\delta\leq\delta_{0}\). If \(Y=\{Y(t),{t\in S}\}\) is a mean-zero continuous Gaussian process which is self-similar with index \(\alpha\in(0,1)\), then
for every \(\delta\in(0,\delta_{0}]\). Moreover, for any \(\theta\in (0,1)\), there is a constant \(C_{\theta}\in(0,\infty)\) depending only on θ such that
for all \(\delta\in(0,\delta_{0}]\) and \(x>0\).
Lemma 2.3
(Dudley and Norvaiša [7])
Suppose that \(\{\xi_{k},k\geq1\}\) is a sequence of jointly normal random variables such that \(E\xi_{k}=0\), \({\rm Var}(\xi_{k})=1\) for all \(k\geq1\), and
then we have
almost surely.
The above result is Lemma 12.20 in Dudley and Norvaiša [7].
Lemma 2.4
Let \(H\in(0,\frac{1}{2})\cup(\frac{1}{2},1)\). Then the functions
with \(s,t\geq0\) are nonnegative definite.
By Kolmogorov’s consistency theorem, we find that there is a mean-zero Gaussian process \(\zeta^{H}=\{\zeta^{H}_{t}, t\geq0\} \) such that \(\zeta ^{H}_{0}=0\) and
for all \(t,s\geq0\).
Lemma 2.5
Let \(H\in(0,\frac{1}{2})\cup(\frac{1}{2},1)\) and \(t>0\). Denote \(X_{t}(s):=S^{H}_{t+s}-S^{H}_{t}\) for \(s\geq0\). Then we have
as \(s\downarrow0\), where the notation ∼ denotes asymptotic equivalence as \(s\downarrow0\) for every fixed \(t>0\), and
for all \(u,v\geq0\), and \(t>0\).
Proof
Clearly, we have
for all \(s,t\geq0\), where \(x=\frac{s}{s+t}\). An elementary calculation shows that
as \(x\to0\), which implies that estimate (2.5) holds.
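As a numerical sanity check of the asymptotics in (2.5) (ours, not part of the proof, assuming the sub-fBm covariance recalled in Sect. 1), one can evaluate the ratio \(E[X_{t}(s)^{2}]/s^{2H}\) for a fixed \(t>0\) and decreasing s:

```python
def subfbm_cov(s, t, h):
    """E[S^H_s S^H_t] = s^{2H} + t^{2H} - ((s+t)^{2H} + |s-t|^{2H})/2."""
    return s**(2*h) + t**(2*h) - 0.5 * ((s + t)**(2*h) + abs(s - t)**(2*h))

def increment_var(t, s, h):
    """E[X_t(s)^2] = E[(S^H_{t+s} - S^H_t)^2], from the covariance."""
    return (subfbm_cov(t + s, t + s, h) + subfbm_cov(t, t, h)
            - 2.0 * subfbm_cov(t, t + s, h))

# For fixed t > 0 the ratio E[X_t(s)^2] / s^{2H} tends to 1 as s decreases.
h, t = 0.3, 1.0
ratios = [increment_var(t, s, h) / s**(2 * h) for s in (1e-2, 1e-4, 1e-6)]
```

The deviation from 1 is of order \(s^{2-2H}\), so it vanishes quickly for every \(H\in(0,1)\).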
Fix \(t>0\), and consider the Gaussian process \(\zeta^{H}\) with the covariance \(\rho_{H}\) defined by (2.4). Then we have
for all \(u,v\geq0\). To see that the inequality holds, we define the function on \({\mathbb {R}}^{2}\)
with \((x,y)\in{\mathbb {D}}:=\{(x,y)| x,y\geq0,x+y\leq1\}\). Then, on the boundary of \({\mathbb {D}}\), we have
Moreover, the equations
admit a unique solution \((x,y)=(0,0)\). Thus, we get
and
which imply
for all \(\frac{1}{2}< H<1\) and
for all \(0< H<\frac{1}{2}\). It follows that
and
for all \(u,v\geq0\), which imply
for all \(u,v\geq0\) and \(t>0\). Combining this with (1.3), we give estimate (2.6) and the lemma follows. □
Lemma 2.6
For \(0< H<1\), we have
almost surely, for all \(t > 0\).
Proof
Let \(\varepsilon\in(0,1)\) and \(t>0\). We see that
for every \(r\in(0,1)\), by the fact \(\log (-n\log{r} )\sim\log n\) (\(n\to\infty\)).
Now, we verify that
almost surely, for \(r\in(0, 1)\) small enough. In fact, by Lemma 2.3 we only need to prove
for any \(\varepsilon\in(0,1)\), where \(D_{n}=\{(k,m) | k,m\geq n,k\neq m\}\). Elementary calculations show that the following inequalities hold:
and
for any \(x\in(0,1)\). It follows from Lemma 2.5 that there is a real \(r\in(0, 1)\) small enough such that
for each \(k\neq m\), which implies that (2.11) holds and (2.10) follows with probability one. Combining this with the arbitrariness of \(\varepsilon\in(0, 1)\), (2.9), and (2.10), we get that inequality (2.8) holds for all \(t>0\). □
To prove Theorem 1.1, we now need to establish the reverse of inequality (2.8), i.e.,
almost surely, for all \(t > 0\). The method relies on the decomposition (2.7), i.e.,
for all \(u,v\geq0\) and \(t>0\). Recall that a mean-zero Gaussian process \(B^{H}=\{B^{H}_{t},t\geq0\}\) is said to be a fractional Brownian motion with Hurst index \(H\in(0,1)\) if \(B^{H}_{0}=0\) and
$$ E\bigl[B^{H}_{s}B^{H}_{t}\bigr]=\frac{1}{2}\bigl(t^{2H}+s^{2H}- \vert t-s \vert ^{2H}\bigr) $$
for all \(s,t>0\). When \(H=\frac{1}{2}\), this process coincides with the standard Brownian motion B. Moreover, for all \(t>0\), the process \(\{ B^{H}_{t+s}-B^{H}_{t},s\geq0\}\) is also a fractional Brownian motion with Hurst index \(H\in(0,1)\). It follows that
for all \(u,v\geq0\) and \(t>0\). More works on fractional Brownian motion can be found in Biagini et al. [24], Hu [25] and Mishura [26], Nourdin [27], and the references therein.
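The stationarity of fBm increments used above can be checked directly from the covariance; the following sketch (ours) verifies numerically that \(E[(B^{H}_{t+u}-B^{H}_{t})(B^{H}_{t+v}-B^{H}_{t})]\) does not depend on t and equals the fBm covariance at \((u,v)\).

```python
def fbm_cov(s, t, h):
    """E[B^H_s B^H_t] = (s^{2H} + t^{2H} - |s-t|^{2H}) / 2."""
    return 0.5 * (s**(2*h) + t**(2*h) - abs(s - t)**(2*h))

def increment_cov(t, u, v, h):
    """E[(B^H_{t+u} - B^H_t)(B^H_{t+v} - B^H_t)], expanded by bilinearity."""
    return (fbm_cov(t + u, t + v, h) - fbm_cov(t + u, t, h)
            - fbm_cov(t, t + v, h) + fbm_cov(t, t, h))

h, u, v = 0.7, 0.3, 0.8
c0 = increment_cov(0.0, u, v, h)   # increments started at t = 0
c5 = increment_cov(5.0, u, v, h)   # increments started at t = 5
```

Algebraically, all terms involving t cancel, leaving \(\frac{1}{2}(u^{2H}+v^{2H}-|u-v|^{2H})\); the same computation with the sub-fBm covariance does depend on t, which is exactly the failure of stationary increments noted in Sect. 1.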
Proof of Theorem 1.1
Given \(\varepsilon>0\) and \(\gamma\in(0,1)\) such that
for each \(n\geq1\). Then we have
by the fact
It follows from the Borel–Cantelli lemma that
almost surely. Given \(\varepsilon\in(0,1/2)\), let \(\gamma\in(0,1)\) satisfy
We now need to prove the estimate
almost surely. For all \(n=1,2,\ldots\) and \(s\in[0, 1]\), let
and
Then, for all \(\gamma\in(0,1)\) and \(n\geq1\), \(Y_{t}^{n}=\{ Y_{t}^{n}(s),{s\in[0,1]}\}\) is also a fractional Brownian motion, with the same distribution as \(\{B^{H}_{s\gamma^{n}},s\in [0,1] \}\).
On the other hand, by (2.14) and Anderson’s inequality (2.1), we have
for all \(\gamma,\varepsilon\in(0,1)\) and \(n\geq1\). It follows from Lemma 2.2 with \(Y_{t}=B^{H}_{t+\cdot}-B^{H}_{t}\), \(S=[0, 1]\), \(S(\delta)=[0,\delta]\), \(0<\delta\leq1\), \(\theta=1/2\), and \(C= C_{1/2}\) that
for each \(\gamma^{n+1}< e^{-e}\). Taking \(\varepsilon\in(0,1/2)\) and \(\gamma\in(0,1)\) such that
we see that
by (2.17), which gives
Therefore, by the Borel–Cantelli lemma, we have that
almost surely. Noting that \(\varphi_{H}\) is increasing, we see that
for \(\gamma^{n+1}\leq u\leq\gamma^{n}\). It follows that
by the choice of γ. Combining this with (2.15) and (2.18), we get (2.16). Finally, by (2.15) and (2.16), letting \(\varepsilon\downarrow0\), we get (2.12) for all \(t > 0\), and Theorem 1.1 follows. □
Remark 2.1
From the proof of Theorem 1.1 we see that the idea came from the decomposition
for all \(t\geq0\), where \(\stackrel{d}{=}\) denotes equality in distribution and \(\zeta^{H}\) is a Gaussian process with the covariance \(\rho_{H}\) defined by (2.4). Thus, if a self-similar Gaussian process \(G=\{G_{t},t\geq0\}\) admits a decomposition
for all \(t\geq0\), where \(\xi_{t}\) is a suitable Gaussian process, then a similar limit theorem can be shown to hold. However, for a different self-similar Gaussian process (weighted-fractional Brownian motion, bi-fractional Brownian motion, etc.) one needs to consider some concrete estimates.
3 Proof of Theorem 1.2
In order to prove Theorem 1.2, we first give a lemma which extends the related result for Brownian motion.
Lemma 3.1
Let \(0< H<1\) and \(t>0\). Denote
for \(\delta>0\). Then we have
almost surely.
Proof
Fix \(t>0\), and denote \(\delta_{0}=\min\{t,e^{-e}\}\). Define the function \(\delta\mapsto D(\delta)\) by
for each \(0 <\delta\leq\delta_{0}\).
By Theorem 1.1, we know that \(\lim_{\delta\downarrow 0}D(\delta)\geq1\) almost surely. We now need to give the upper bound of \(D(\delta)\). Let \(\varepsilon\in(0, \frac{1}{2})\) and
for \(u, v\in[0,\delta_{0}]\). Denote \(\delta_{n}:=\exp\{-n^{1-\varepsilon }\}\), \({\mathbb {S}}_{n}:={\mathbb {S}}_{\delta_{0},\delta_{n}}\) and
for all \(n\geq8\). We need to handle \(P(E_{n})\). To this end, we define the process
Then, for any \(u_{1}, v_{1},u_{2}, v_{2}\geq0\), we have
by (2.14), which implies that the matrix
is nonnegative definite for any \(u_{i},v_{i}\in[0,\delta_{0}], i=1,\ldots ,n\). It follows from inequality (2.2) that
for all \(\delta\in(0,\delta_{0}]\) and \(x>0\). By (2.16) and Lemma 2.2 with \(\theta=\frac{1}{1+2\varepsilon}\), we then have
for every \(n\geq1\). It follows from the Borel–Cantelli lemma that there exists \(n_{0}=n_{0}(\omega)\) such that
almost surely, for each \(n\geq n_{0}\), which implies that
for \(\delta\leq\delta_{n}\) and \(\varepsilon\in(0,1)\), since \(\frac {\delta_{m}}{\delta_{m+1}}\to1\), as \(m\rightarrow\infty\). This completes the proof. □
Finally, we give the proof of Theorem 1.2. We will use the local law of the iterated logarithm (Theorem 1.1) for \(S^{H}\) and the Vitali covering lemma to establish the following inequality and its reverse:
for all \(T>0\), where \(\Phi_{H}\) is defined in Sect. 1.
Proof of Theorem 1.2
Let \(H\neq\frac{1}{2}\). We first show that inequality (3.3) holds. Given \(\delta>0\). Let \(0<\varepsilon<1\) and
Clearly, there exists \(\xi>0\) such that
for each \(0< v<\xi\) since \(\Phi_{H}\) is regularly varying of order \(H^{-1}\) and is asymptotic to \(\varphi_{H}^{-1}\) near zero. Therefore, by Theorem 1.1, for all \(t\in(0,T]\) and \(\delta\in(0,\xi)\), we have \(P(\{\omega:(t,\omega)\in E_{\delta}\})=1\). It follows from the Fubini theorem that \(P(\{m(E_{\delta})= T\})=1\) for each \(0 < v<\xi\), where \(m(\cdot)\) denotes the Lebesgue measure on \([0,T]\). Clearly, the set of all intervals \([t,t+s]\) with \(t\in[0,T]\) and arbitrarily small \(s>0\) is a Vitali covering of the set
and \(P(E)=1\). According to the Vitali lemma, we can choose a finite sub-collection \({\mathscr {E}}_{\delta}\) of intervals of length less than δ which are disjoint and have total length at least \(T-\varepsilon\). Then
almost surely, where \(\kappa=\{t_{i},i=0,1,2,\ldots,n\}\in{\mathscr {P}}([0,T])\) with mesh \(|\kappa|\leq\delta\) such that for each of the disjoint intervals \([t'_{j}, t'_{j}+s_{j}]\) from \({\mathscr {E}}_{\delta}\) with total length at least \(T-\varepsilon\), there is some i with \(t_{i-1}=t'_{j}\) and \(t_{i}=t'_{j}+s_{j}\). Therefore, for each \(\delta>0\) small enough, we obtain that
This shows that inequality (3.3) holds by taking δ and ε decreasing to zero.
Now, let us prove the reverse inequality of (3.3). Let \(\varepsilon>0\). For any partition \(\kappa=\{t_{i},i=0,1,2,\ldots,n\}\in {\mathscr {P}}([0,T])\), denote \(\Delta_{i}=t_{i}-t_{i-1}\) and \(\Delta _{i}S^{H}=S^{H}_{t_{i}}-S^{H}_{t_{i-1}}\) for \(i=1,\ldots,n\), and let
with
and hence the sum \(S_{\Phi_{H}}(S^{H},T,\kappa)\) can be divided into three sums of small, medium, and large increments. For \(i\in I_{1}\), we will show that the sum of \(\Delta_{i}\) is close to T if the mesh of κ becomes small enough, while for \(i\in I_{2}\cup I_{3}\), the sum of \(\Delta_{i}\) is negligible.
Step I. We estimate the sum
Since \(\Phi_{H}\) is regularly varying of order \(\frac{1}{H}\) and is asymptotic to \(\varphi_{H}^{-1}\) near zero, there is again a real number \(\eta(c,\varepsilon)>0\) such that
for any \(0< v<\eta(c,\varepsilon)\) and \(c>0\). Take \(\delta_{1}:=\eta (1+\varepsilon)\). For a partition \(\kappa=\{t_{i},i=0,1,2,\ldots,n\}\in {\mathscr {P}}([0,T])\) with mesh \(|\kappa|<\delta_{1}\), we then obtain
Step II. We estimate the sum
Let \(\delta>0\) and denote
for any \(u,v\geq0\), \(u+v\leq\delta\), and \(v\leq t\). By Lemma 3.1, for every \(t\in(0,T)\), we have
with probability one. It follows from Fatou’s lemma and the Fubini theorem that
with probability one. Let \(\Omega_{1}\) be a subset of Ω such that \(P(\Omega_{1})=1\), and for every \(\omega\in\Omega_{1}\), there exists \(\delta _{2}(\omega)>0\) such that
for all \(\delta\leq\delta_{2}(\omega)\). We choose \(\delta_{2}(\omega)\leq \eta(A,\varepsilon)\). If a partition \(\kappa\in{\mathscr {P}}([0,T])\) with \(|\kappa|\leq\delta_{2}(\omega)\) is such that an interval \([t_{i-1},t_{i}]\) contains a point of \({\mathbb {U}}_{\delta_{2}(\omega)}\), then \(i\in I_{1}\). So, the total length of such intervals is at least \(T-\varepsilon\), and in particular,
This shows that
by (3.5) with \(c=A\), provided \(|\kappa|\leq\delta_{2}(\omega)\).
Step III. We estimate the sum
We shall show that this sum is also small. Denote
and
for any integer \(m\geq3\) and \(j=0,1,\ldots,j_{m}=[2Te^{m}]-1\), where \([x]\) denotes the integer part of x and A is defined by (3.4). The intervals \(S_{m,j},j=0,\ldots,j_{m}\), overlap and cover \([0,T]\). Moreover, we have
for each \(\omega\in\Omega\), where \(E^{\sharp}\) denotes the number of elements in a set E. In order to bound \(Z_{m}\), we need to estimate \(P({\mathbb {V}}_{m,j}(A))\) for \(j=0,\ldots,j_{m}\), and we show that one can replace \(S^{H}\) by a fractional Brownian motion \(B^{H}\) with Hurst index H.
Given \(u\geq0\). Recall that \(X_{u}(t)=S^{H}_{u+t}-S^{H}_{u}\) and \(Y_{u}(t)=B^{H}_{u+t}-B^{H}_{u}\) for each \(t\in[0,1]\), and
for all \(s,t\in[0,1]\), where \(\zeta^{H}\) is defined in Sect. 2. Hence, the matrix
with \(t_{1},t_{2},\ldots,t_{n}\in[0,1]\) is nonnegative definite. It follows from inequality (2.1) that
for all \(0<\delta\leq1\) and \(x>0\). Thus, by applying Lemma 2.2 with \(S=S(1)=[0,1]\) and \(\theta=1/2\) to the Gaussian process \(Y_{u}\), and setting \(u:=(j/2)e^{-m}, \delta:=e^{-m}\), we find that there is a constant \(C>0\) such that
for all \(m\geq3\) and \(j=0,1,\ldots,j_{m}\), which implies that
By the Borel–Cantelli lemma, there exists a set \(\Omega_{2}\subset\Omega\) with probability one and some integer \(m_{1}(\omega)\geq3\) depending only on \(\omega\in\Omega_{2}\) such that
Moreover, by Corollary 2.3 in Dudley [28] and (1.3), we see that there is another set \(\Omega_{3}\subset\Omega\) with probability one such that, for each \(\omega\in\Omega_{3}\), there exists a finite constant \(D(\omega)\) such that
for \(0\leq s< t\leq e^{-1}\).
Let now \(\omega\in\Omega_{2}\cap\Omega_{3}\) and \(m_{2}(\omega)\geq m_{1}(\omega)\) satisfy
Denote \(\delta_{3}(\omega):=e^{-m_{2}(\omega)}\) and
Let \(\kappa=\{t_{i},i=0,1,2,\ldots,n\}\in{\mathscr {P}}([0,T])\) with \(|\kappa|\leq\delta_{3}(\omega)\), and let \(m\geq m_{2}(\omega)\). Then, if \(i\in\Lambda_{m}\), we have \([t_{i-1},t_{i}]\subset S_{m,j}\) for some \(j=0,\ldots,j_{m}\). Combining this with (3.8), (3.9), and (3.10), we have
for such a \(\kappa=\{t_{i},i=0,1,2,\ldots,n\}\in{\mathscr {P}}([0,T])\).
Finally, for \(\omega\in\Omega_{1}\cap\Omega_{2}\cap\Omega_{3}\), by taking \(0<\delta\leq\min\{\delta_{1},\delta_{2}(\omega),\delta _{3}(\omega)\}\) and using (3.6), (3.7), and (3.11), we get
for every partition \(\kappa\in{\mathscr {P}}([0,T])\) with \(|\kappa |<\delta(\omega)\). Thus, by the arbitrariness of \(\varepsilon>0\), we obtain the desired reverse inequality of (3.3), and the theorem follows. □
4 Results, discussion, and conclusions
In this paper, we establish a law of the iterated logarithm and compute the Φ-variation of a sub-fBm by using some precise estimates and inequalities. It is worth noting that the method used here is also applicable to many similar Gaussian processes.
References
Barndorff-Nielsen, O.E., Shephard, N.: Realized power variation and stochastic volatility models. Bernoulli 9, 243–265 (2003)
Barndorff-Nielsen, O.E., Shephard, N.: Power and bipower variation with stochastic volatility and jumps (with discussion). J. Financ. Econom. 2, 1–48 (2004)
Woerner, J.H.C.: Variational sums and power variation: a unifying approach to model selection and estimation in semimartingale models. Stat. Decis. 21, 47–68 (2003)
Taylor, S.J.: Exact asymptotic estimates of Brownian path variation. Duke Math. J. 39, 219–241 (1972)
Kawada, T., Kôno, N.: On the variation of Gaussian processes. Lect. Notes Math. 330, 176–192 (1973)
Kôno, N.: Oscillation of sample functions in stationary Gaussian processes. Osaka J. Math. 6, 1–12 (1969)
Dudley, R.M., Norvaiša, R.: Concrete Functional Calculus. Springer, New York (2011)
Norvaiša, R.: Variation of a bifractional Brownian motion. Lith. Math. J. 48, 418–426 (2008)
Csörgö, M., Révész, P.: Strong Approximations in Probability and Statistics. Academic Press, New York (1981)
Lin, Z., Lu, C., Zhang, L.: Path Properties of Gaussian Processes. Zhejiang Univ. Press, Zhejiang (2001)
Malyarenko, A.: Functional limit theorems for multiparameter fractional Brownian motion. J. Theor. Probab. 19, 263–288 (2006)
Tudor, C.A., Xiao, Y.: Some path properties of bi-fractional Brownian motion. Bernoulli 13, 1023–1052 (2007)
Yan, L., Wang, Z., Jing, H.: Some path properties of weighted fractional Brownian motion. Stochastics 86, 721–758 (2014)
Bojdecki, T., Gorostiza, L.G., Talarczyk, A.: Sub-fractional Brownian motion and its relation to occupation times. Stat. Probab. Lett. 69, 405–419 (2004)
Bojdecki, T., Gorostiza, L.G., Talarczyk, A.: Limit theorems for occupation time fluctuations of branching systems (I): long-range dependence. Stoch. Process. Appl. 116, 1–18 (2006)
Bojdecki, T., Gorostiza, L.G., Talarczyk, A.: Some extension of fractional Brownian motion and sub-fractional Brownian motion related to particle systems. Electron. Commun. Probab. 12, 161–172 (2007)
Shen, G., Yan, L.: Estimators for the drift of sub-fractional Brownian motion. Commun. Stat., Theory Methods 43, 1601–1612 (2014)
Sun, X., Yan, L.: A central limit theorem associated with sub-fractional Brownian motion and an application. Sci. China Math. A 47(9), 1055–1076 (2017) (in Chinese)
Tudor, C.: Some properties of the sub-fractional Brownian motion. Stochastics 79, 431–448 (2007)
Tudor, C.: Some aspects of stochastic calculus for the sub-fractional Brownian motion. Ann. Univ. Bucuresti, Mathematica 199–230 (2008)
Yan, L., He, K., Chen, C.: The generalized Bouleau–Yor identity for a sub-fBm. Sci. China Math. 56, 2089–2116 (2013)
Yan, L., Shen, G.: On the collision local time of sub-fractional Brownian motions. Stat. Probab. Lett. 80, 296–308 (2010)
Dudley, R.M.: Uniform Central Limit Theorems. Cambridge University Press, Cambridge (1999)
Biagini, F., Hu, Y., Øksendal, B., Zhang, T.: Stochastic Calculus for fBm and Applications, Probability and Its Application. Springer, Berlin (2008)
Hu, Y.: Integral transformations and anticipative calculus for fractional Brownian motions. Mem. Am. Math. Soc. 175, 825 (2005)
Mishura, Y.S.: Stochastic Calculus for Fractional Brownian Motion and Related Processes. Lect. Notes in Math., vol. 1929 (2008)
Nourdin, I.: Selected Aspects of fBm. Springer, Berlin (2012)
Dudley, R.M.: Sample functions of the Gaussian process. Ann. Probab. 1, 66–103 (1973)
Acknowledgements
This work is supported by the National Natural Science Foundation of China (No. 11571071), Natural Science Foundation of Anhui Province (1808085MA02), Key Natural Science Foundation of Anhui Education Commission (KJ2016A453), and Natural Science Foundation of Bengbu University (2017ZR10zd, 2011ZR09).
Contributions
HQ and LY carried out the mathematical studies, participated in the sequence alignment, and drafted the manuscript. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Qi, H., Yan, L. A law of iterated logarithm for the subfractional Brownian motion and an application. J Inequal Appl 2018, 96 (2018). https://doi.org/10.1186/s13660-018-1675-1