Abstract
We consider equidistant approximations of stochastic integrals driven by Hölder continuous Gaussian processes of order \(H>\frac{1}{2}\) with discontinuous integrands involving bounded variation functions. We give the exact rate of convergence in the \(L^1\)-distance and provide examples with different drivers. It turns out that the exact rate of convergence is proportional to \(n^{1-2H}\), which is twice as good as the best known results in the case of discontinuous integrands and corresponds to the known rate in the case of smooth integrands. The novelty of our approach is that, instead of using multiplicative estimates for the integrals involved, we apply a change of variables formula together with some facts on convex functions, allowing us to compute expectations explicitly.
1 Introduction
We consider the rate of convergence for equidistant approximations of pathwise stochastic integrals
where \(t_k = \frac{k}{n}\), \(k=0,1,\ldots ,n\). Here, \(\Psi \) is a difference of convex functions and X is a centered Gaussian process on [0, 1] with non-decreasing variance function \(V(s) = {\mathbb {E}}X_s^2\) normalized such that \(V(1)=1\). We assume that the variogram function
satisfies, for some \(H\in (\frac{1}{2},1)\), that
where
This means, in particular, that the process X has H as its Hölder index. One way to realize the process X is to take a fractional Brownian motion \(B^H\) with index H and an independent Gaussian process G with variogram g (such a process has Hölder index at least H), and put
where \(X_0\) may be a random initial (Gaussian) value. Since G has variogram g(s, t), it follows from (1.3) that G typically has more regular sample paths than \(B^H\). We also note that we have either \(V(0)=C>0\) (e.g., the stationary case) or \(V(s)\ge cs^{2H}\) (e.g., the case of the fractional Brownian motion). Indeed, if \(X_0=0\), then \(V(s) = \vartheta (s,0)\), from which \(V(s)\ge cs^{2H}\) follows by (1.2) and (1.3). It follows that
Consequently, by [5] the pathwise Riemann–Stieltjes stochastic integral in (1.1) exists and we have the classical chain rule
In the case of the fractional Brownian motion, the problem was studied in [3]. This article extends [3] in two directions: (i) we allow more integrators than just the fractional Brownian motion, and (ii) we give the exact \(L^1\) error of the approximations. Rather surprisingly, it turns out that we obtain the rate \(n^{1-2H}\), which is twice as good as the rate obtained in [3] and corresponds to the known correct rate in the case of smooth functions \(\Psi '\) (see, for instance, [3, 6] and the references therein). In contrast, in the Brownian motion case, introducing jumps reduces the rate to \(n^{-1/4}\), compared with the rate \(n^{-1/2}\) obtained for smooth functions \(\Psi '\) (see, e.g., [3]). For other related articles on stochastic integrals with discontinuous integrands, see also [5, 7, 8, 14, 15].
The rest of the article is organized as follows: the main results are given in Sect. 2. In Sect. 3, we give examples. Finally, the proofs are given in Sect. 4.
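Before turning to the results, it may help to see the objects of study in concrete form. The following sketch is not part of the original analysis; the grid size, the Hurst index, and the integrand \(\Psi '(x)=\operatorname{sgn}(x)\) are illustrative choices. It samples a fractional Brownian motion path via a Cholesky factorization of its covariance and evaluates the left-endpoint Riemann sum appearing in (1.1).

```python
import numpy as np

def fbm_path(n, H, rng):
    """Sample (X_{t_0}, ..., X_{t_n}) of fractional Brownian motion on t_k = k/n."""
    t = np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # small jitter for stability
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))  # X_0 = 0

def riemann_sum(x, psi):
    """Left-endpoint sum: sum_k psi(X_{t_{k-1}}) (X_{t_k} - X_{t_{k-1}})."""
    return float(np.sum(psi(x[:-1]) * np.diff(x)))

rng = np.random.default_rng(0)
x = fbm_path(256, H=0.75, rng=rng)
approx = riemann_sum(x, np.sign)
print(approx)
```

The paper compares this sum, in \(L^1\), to the pathwise Riemann–Stieltjes integral; here only the approximating sum itself is computed.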
2 Statement of the Main Results
We begin by recalling some basic facts on convex functions and on functions of bounded variation. For details on the topic, see for instance [12].
For a convex function \(\Psi \), let \(\Psi '\) denote its one-sided derivative. Then, the second derivative \(\Psi '' = \mu \) exists as a Radon measure. A particular example is the function \(\Psi (x) = |x-a|\), in which case \(\Psi '(x) = \operatorname {sgn}(x-a)\) and \(\Psi ''(x) = 2\delta _a(x)\), where \(\delta _a\) is the Dirac measure at level a. More generally, if \(\Psi '\) is of (locally) bounded variation, then it can be represented as the difference of two non-decreasing functions. As a corollary, \(\Psi '\) can be regarded as the derivative of a function \(\Psi \) that is a difference of two convex functions. That is, we have \(\Psi = \Psi _1-\Psi _2\), and the second derivative \(\Psi ''\) is a signed Radon measure \(\mu = \mu _1-\mu _2\) with total variation measure \(|\mu |=\mu _1+\mu _2\), where \(\mu _i, i=1,2\), are non-negative measures.
Throughout the article, we also use the short notation
where \(Y \sim N(0,1)\).
Our main result is the following.
Theorem 2.1
Let \(\Psi \) be a convex function with the left-sided derivative \(\Psi '\), and let \(\mu \) denote the measure associated with the second derivative of \(\Psi \) such that \(\int _{\mathbb {R}}\varphi (a)\mu ({\text {d}}a) < \infty \). Let X be a Gaussian process as above. Then,
where the remainder satisfies
for some constant C depending solely on the variance function V(s).
Remark 1
Since \(H<1\), we have \(n^{-H} < n^{1-2H}\). Consequently, in view of (1.3), the remainder \(\int _{\mathbb {R}}R_n(a)\mu ({\text {d}}a)\) satisfies
i.e., the remainder is negligible compared to the first term in (2.1).
Remark 2
It follows from the assumption \(\int _{\mathbb {R}}\varphi (a)\mu ({\text {d}}a) < \infty \) that the stochastic integral and its Riemann approximation in (2.1) are integrable (a random variable Z is integrable if \({\mathbb {E}}|Z|<\infty \)), and hence, the bound (2.1) makes sense. Indeed, by the proof of Theorem 2.1, the difference of the stochastic integral and its approximation in (2.1) is integrable. Moreover, in view of (1.4) and Lemma 4.2 below, the stochastic integral itself is integrable. These facts imply that the Riemann approximation in (2.1) is integrable as well.
For functions of locally bounded variation, we immediately obtain the following corollary.
Corollary 2.2
Let \(\Psi '\) be of locally bounded variation with \(|\mu |\) as its total variation measure. Suppose \(\int _{\mathbb {R}}\varphi (a)|\mu |({\text {d}}a) < \infty \), and let X be a Gaussian process as above. Then,
where the remainder satisfies
for some constant C depending solely on the variance function V(s).
Finally, as a by-product of our proof we obtain lower and upper bounds with a weaker condition on the variogram \(\vartheta (t,s)\).
Corollary 2.3
Let \(\Psi \) be a convex function with the left-sided derivative \(\Psi '\), and let \(\mu \) denote the measure associated with the second derivative of \(\Psi \) such that \(\int _{\mathbb {R}}\varphi (a)\mu ({\text {d}}a) < \infty \). Let X be a centered Gaussian process with a non-decreasing variance function V(s) with \(V(1)=1\). Suppose further that the variogram satisfies
for some \(H\in \left( \frac{1}{2},1\right) \). Then, there exist constants \(C_-\) and \(C_+\) such that
Remark 3
Note that here we have incorporated the remainders into the constants \(C_-\) and \(C_+\). If one considers only the leading order terms (with respect to n), then \(C_- =\sigma _-^2\) and \(C_+ = \sigma _+^2\).
3 Examples
Our results cover many interesting Gaussian processes and functions \(\Psi '\). First of all, the assumption \(\int _{\mathbb {R}}\varphi (a)|\mu |({\text {d}}a)<\infty \) is not very restrictive, due to the exponential decay of \(\varphi (a) = \frac{1}{\sqrt{2\pi }}e^{-\frac{a^2}{2}}\). Our assumption (1.2) on the Gaussian process is not very restrictive either, as the following examples show.
Example 1
The normalized multi-mixed fractional Brownian motion (see [1]) is the process
where \(\sum _{k=1}^n \sigma _k^2 =1\) and the \(B^{H_k}\)’s are independent fractional Brownian motions with Hurst indices \(H_k\). Let \(H_{\min } = \min _{k\le n} H_k\) and let \(k_{\min }\) be the index attaining \(H_{\min }\) (here, we assume for the sake of simplicity that \(k_{\min }\) is unique). Assume that \(H_{\min }>\frac{1}{2}\). We have
where
Theorem 2.1 is applicable with \(H=H_{\min }\) and \(V(s) = \sum _{k=1}^n \sigma _k^2 s^{2H_k}\).
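By independence of the components, the variogram of the multi-mixed process is \({\mathbb {E}}(X_{t+h}-X_t)^2 = \sum _{k} \sigma _k^2 h^{2H_k}\), so the smallest Hurst index dominates for small h. A quick numerical sanity check (the parameter values are illustrative, not from the original):

```python
import numpy as np

sigma2 = np.array([0.5, 0.3, 0.2])     # sigma_k^2, summing to 1 (illustrative)
Hs     = np.array([0.6, 0.75, 0.9])    # Hurst indices, all > 1/2; H_min = 0.6

def variogram(h):
    # independence of the components gives E(X_{t+h} - X_t)^2 = sum_k sigma_k^2 h^{2 H_k}
    return float(np.sum(sigma2 * h**(2 * Hs)))

for h in (1e-2, 1e-4, 1e-6):
    # the ratio approaches sigma_{k_min}^2 = 0.5 as h -> 0
    print(h, variogram(h) / h**(2 * Hs.min()))
```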
Example 2
Let X be a centered stationary Gaussian process with covariance function r satisfying, for some \({\alpha }\in \left( \frac{1}{2},1\right) \),
where \(\frac{g(t)}{|t|^{2{\alpha }}} \rightarrow 0\) as \(t\rightarrow 0\). Theorem 2.1 is applicable with \(H=\alpha \) and variance function \(V(s) = V(0)\). This example covers many interesting stationary Gaussian processes, including fractional Ornstein–Uhlenbeck and related processes (see, e.g., [10, 11]).
Example 3
The normalized sub-fractional Brownian motion \(S^{{{\tilde{H}}}}\) with index \({{\tilde{H}}}\in (0,1)\) (see [4]) is a centered Gaussian process with covariance
where \(\sigma ^2 = 1/(2-2^{2{{\tilde{H}}}-1})\) is a normalizing constant. We have
Assume that \({{\tilde{H}}}>\frac{1}{2}\). Now, Corollary 2.3 is applicable with \({H={\tilde{H}}}\) and \(V(s) = s^{2{{\tilde{H}}}}\).
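As a numerical sanity check of the two-sided variogram comparison required by Corollary 2.3, one can verify that the normalized sub-fractional variogram stays between \(|t-s|^{2{\tilde{H}}}\) and \(\sigma ^2|t-s|^{2{\tilde{H}}}\); these explicit bounds go back to [4], and \({\tilde{H}}=0.7\) is an illustrative choice.

```python
import numpy as np

H = 0.7
sig2 = 1.0 / (2.0 - 2.0**(2 * H - 1))   # normalizing constant sigma^2 from the text

def cov(s, t):
    # normalized sub-fractional Brownian motion covariance
    return sig2 * (s**(2 * H) + t**(2 * H)
                   - 0.5 * ((s + t)**(2 * H) + np.abs(t - s)**(2 * H)))

rng = np.random.default_rng(1)
s = rng.uniform(0.0, 1.0, 10_000)
t = rng.uniform(0.0, 1.0, 10_000)
mask = np.abs(t - s) > 1e-6             # avoid 0/0 on the diagonal
vario = cov(t, t) + cov(s, s) - 2 * cov(s, t)
ratio = vario[mask] / np.abs(t - s)[mask]**(2 * H)
print(ratio.min(), ratio.max())         # stays within [1, sigma^2] up to rounding
```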
Example 4
The bifractional Brownian motion (see [9, 13]) \(B^{{{\tilde{H}}},K}\) with indices \({{\tilde{H}}}\in (0,1)\) and \(K\in (0,1]\) is the centered Gaussian process with covariance
Similarly to the case of sub-fractional Brownian motion, we have
Assume \({{\tilde{H}}}K>\frac{1}{2}\). Now, Corollary 2.3 is applicable with \(H={\tilde{H}}K\) and \(V(s) = s^{2{{\tilde{H}}}K}\).
Example 5
The tempered fractional Brownian motion (see [2]) \(X^{{{\tilde{H}}}}\) with index \({{\tilde{H}}}\in (0,1)\) is the centered Gaussian process with covariance
with a certain function \(C_t\) (see [2, Lemma 2.3]). Similarly to the case of sub-fractional and bifractional Brownian motion, we have (see [2, Theorem 2.7])
Assume \({{\tilde{H}}}>\frac{1}{2}\). Now, Corollary 2.3 is applicable with \(H = {\tilde{H}}\) and \(V(s) = C_s^2s^{2{{\tilde{H}}}}\).
4 Proofs
In what follows, C denotes a generic constant that depends only on the variance function V(s), but may vary from line to line.
4.1 Auxiliary Lemmas on Gaussian Process X and Convex Function \(\Psi \)
The following is one of our key lemmas; it allows us to reduce our analysis to the simple case \(\Psi (x) =(x-a)^+\).
Lemma 4.1
Let \(\Psi \) be convex and \(\psi = \Psi '_-\) be its left-sided derivative. Then, for any \(x,y\in \mathbb {R}\) we have
Proof
Let I be an interval such that \(x,y \in I\). Then, it is well-known that we have representations [12]
and
Using these, \(|x-a|= 2(x-a)^+ -(x-a)\), and \(\operatorname {sgn}(y-a) = 2\textbf{1}_{y>a} - 1\), we obtain that the linear terms vanish and we get
It is an easy exercise to check that \((x-a)^+-(y-a)^+-\textbf{1}_{y>a}(x-y)\ge 0\) from which it follows that \(\Psi (x)- \Psi (y) - \psi (y)(x-y) \ge 0\) for any convex function \(\Psi \). It remains to note that
where the latter integral is well-defined since \((x-a)^+-(y-a)^+-\textbf{1}_{y>a}(x-y) = 0\) whenever \(a\notin I\). \(\square \)
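For the concrete choice \(\Psi (x)=|x|\) (so that \(\mu = 2\delta _0\) and the left-sided derivative is \(\psi (y)=\operatorname{sgn}(y)\) with \(\psi (0)=-1\)), the identity of Lemma 4.1 collapses to a single term and can be checked mechanically; the following illustrative snippet does so on random inputs.

```python
import numpy as np

def psi_left(y):
    # left-sided derivative of |x|: -1 for y <= 0, +1 for y > 0
    return np.where(y > 0, 1.0, -1.0)

rng = np.random.default_rng(2)
x, y = rng.standard_normal(1000), rng.standard_normal(1000)

lhs = np.abs(x) - np.abs(y) - psi_left(y) * (x - y)
# mu = 2 * delta_0, so the integral over a reduces to the single term a = 0:
rhs = 2.0 * (np.maximum(x, 0.0) - np.maximum(y, 0.0) - (y > 0) * (x - y))
assert np.allclose(lhs, rhs)
assert (lhs >= -1e-12).all()   # the convexity gap is non-negative
print("identity verified for Psi(x) = |x|")
```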
As a consequence, we obtain the following lemma providing us with integrability.
Lemma 4.2
Let \(\Psi \) be a convex function with the associated measure \(\Psi '' = \mu \) and let \(Y \sim N(0,1)\). If \(\int _{\mathbb {R}}\varphi (a)\mu ({\text {d}}a) < \infty \), then \({\mathbb {E}}|\Psi (Y)| < \infty \).
Proof
By adding a linear function if necessary, we may assume without loss of generality that \(\Psi \ge 0\). Now, from Lemma 4.1 we deduce that, for any deterministic z,
Taking expectation and using Tonelli’s theorem, we get
In particular, for \(z=0\), we get
Hence, it suffices to prove
However, this now follows by observing that
and the well-known asymptotic relation \(a{\mathbb {P}}(Y>a) \sim \varphi (a)\) as \(a\rightarrow \infty \). \(\square \)
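The tail relation used in the last step, \(a{\mathbb {P}}(Y>a)\sim \varphi (a)\), is easy to confirm numerically via the complementary error function, using \({\mathbb {P}}(Y>a) = \tfrac{1}{2}\operatorname{erfc}(a/\sqrt{2})\):

```python
import math

def phi(a):
    # standard normal density
    return math.exp(-a * a / 2.0) / math.sqrt(2.0 * math.pi)

def tail(a):
    # P(Y > a) for Y ~ N(0,1), via the complementary error function
    return 0.5 * math.erfc(a / math.sqrt(2.0))

for a in (5.0, 10.0, 20.0):
    print(a, a * tail(a) / phi(a))   # the ratio tends to 1 as a grows
```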
Next, we establish several lemmas related to the Gaussian process X.
Lemma 4.3
We always have
and
Proof
By Gaussianity, we have \({\mathbb {E}}|X_t| = C\sqrt{V(t)}\) from which reverse triangle inequality gives
leading to the first claim. The second claim now follows from
and the fact that \(V(t_{k-1}) \ge V(t_1) \ge cn^{-2H}\) since \(k\ge 2\) and V is non-decreasing. \(\square \)
Throughout, we use the following short notation
where R(t, s) is the covariance function of X, and we use the convention \(\gamma _k=0\) whenever \(V(t_{k-1})=0\). The following gives us a useful relation.
Lemma 4.4
Let \(V(t_{k-1})>0\). Then,
Proof
We use
and
Using also
leads to
Consequently, we have
completing the proof. \(\square \)
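In the proofs below, the process is decomposed along consecutive grid points via Gaussian conditioning. Assuming the shorthand above stands for \(\gamma _k = R(t_k,t_{k-1})/V(t_{k-1})\) (an assumption about the elided display, consistent with the convention \(\gamma _k=0\) when \(V(t_{k-1})=0\)), the decomposition \(X_{t_k} = \gamma _k X_{t_{k-1}} + bY\) with \(b^2 = V(t_k)-\gamma _k^2V(t_{k-1})\) reproduces both the variance at \(t_k\) and the covariance with \(X_{t_{k-1}}\). A check against the fBm covariance:

```python
H = 0.75

def R(s, t):
    # fBm covariance, an illustrative driver satisfying the assumptions
    return 0.5 * (s**(2 * H) + t**(2 * H) - abs(s - t)**(2 * H))

s, t = 0.4, 0.5                        # consecutive grid points t_{k-1} < t_k
V_s, V_t = R(s, s), R(t, t)
gamma = R(t, s) / V_s                  # assumed: gamma_k = R(t_k, t_{k-1}) / V(t_{k-1})
b2 = V_t - gamma**2 * V_s              # residual variance of the independent part

# X_{t_k} := gamma * X_{t_{k-1}} + sqrt(b2) * Y, with Y ~ N(0,1) independent,
# matches the variance at t_k and the covariance with X_{t_{k-1}}:
assert abs(gamma**2 * V_s + b2 - V_t) < 1e-12
assert abs(gamma * V_s - R(t, s)) < 1e-12
assert b2 > 0
print(gamma, b2)
```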
4.2 Approximation Estimates
We begin with the following elementary lemma on the approximation of Riemann–Stieltjes integrals. For the reader’s convenience, we present the proof.
Lemma 4.5
Let f be a differentiable function on [0, 1], and let g be non-decreasing on [0, 1]. Then,
Proof
Without loss of generality, we can assume \(\int _0^1 |f'(s)|{\text {d}}s < \infty \) since otherwise there is nothing to prove. From this, it follows that f is of bounded variation, since for a differentiable function we have
where TV stands for total variation. Since V is continuous and non-decreasing, this further implies that \(s\rightarrow f(V(s))\) is continuous and of bounded variation as well, with
Indeed, this follows from the fact that
Thus, the Riemann–Stieltjes integral \(\int _0^1 f(V(s)){\text {d}}g(s)\) exists, as \(s\rightarrow f(V(s))\) is continuous and \(s \rightarrow g(s)\) is non-decreasing, and hence, of bounded variation. Let us now prove the claimed upper bounds. We have
where \(s_k^*\in [t_{k-1},t_k]\) and we have also applied the mean value theorem. This verifies the claimed upper bound and thus, completes the proof. \(\square \)
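The displayed bound is not reproduced here, but the mechanism of the proof — on each subinterval the error is controlled by \(\sup |f'|\) times the increment of V times the increment of g — can be illustrated with explicit (hypothetical) choices of f, V, and g:

```python
import numpy as np

f = lambda x: np.exp(-x)     # differentiable; sup |f'| = 1 on [0, 1]
V = lambda s: s**1.5         # continuous, non-decreasing, V(0) = 0, V(1) = 1
g = lambda s: s**2           # non-decreasing integrator

n = 50
t = np.linspace(0.0, 1.0, n + 1)
approx = np.sum(f(V(t[:-1])) * np.diff(g(t)))      # left-endpoint sum

tt = np.linspace(0.0, 1.0, 200_001)                # fine-grid reference value
exact = np.sum(f(V(tt[:-1])) * np.diff(g(tt)))

# crude bound: sup|f'| * max_k (V(t_k) - V(t_{k-1})) * (g(1) - g(0))
bound = 1.0 * np.max(np.diff(V(t))) * (g(1.0) - g(0.0))
print(abs(approx - exact), bound)
assert abs(approx - exact) <= bound
```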
We apply the result to the function \(f(x) = \frac{1}{\sqrt{x}}e^{-\frac{a^2}{2x}}\). The following lemma evaluates the integral for this function in terms of the level a when |a| is large enough.
Lemma 4.6
Let \(|a|>1\). Then, for \(f(x) = \frac{1}{\sqrt{x}}e^{-\frac{a^2}{2x}}\) we have
Proof
By straightforward computations, we get
from which we get
as \(x\in [0,1]\) and \(|a|>1\). Now,
By L’Hôpital’s rule, we obtain that
It follows that
This completes the proof. \(\square \)
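For reference, elementary calculus gives \(f'(x) = e^{-\frac{a^2}{2x}}\left( \frac{a^2}{2x^{5/2}} - \frac{1}{2x^{3/2}}\right) \), which is positive on (0, 1] whenever \(a^2>1\), so f is increasing there. A finite-difference check (the level a = 2 is an arbitrary choice with |a| > 1):

```python
import math

a = 2.0                                # any level with |a| > 1

def f(x):
    return math.exp(-a * a / (2.0 * x)) / math.sqrt(x)

def f_prime(x):
    # f'(x) = e^{-a^2/(2x)} ( a^2 / (2 x^{5/2}) - 1 / (2 x^{3/2}) )
    return math.exp(-a * a / (2.0 * x)) * (a * a / (2.0 * x**2.5) - 1.0 / (2.0 * x**1.5))

for x in (0.3, 0.6, 0.9):
    h = 1e-6
    numeric = (f(x + h) - f(x - h)) / (2.0 * h)
    assert abs(f_prime(x) - numeric) < 1e-5
    assert f_prime(x) > 0              # f increases on (0, 1] since a^2 > 1
print("derivative formula checked")
```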
The following lemma provides boundedness in the region \(|a|\le 1\).
Lemma 4.7
Set \(f_a(x) = \frac{a^4}{x^2}e^{-\frac{a^2}{2x}}\). Then,
Proof
The claim follows directly by noting that \(f_a(x) = h\left( \frac{a^2}{x}\right) \), where
is bounded for \(z\ge 0\). \(\square \)
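Indeed, \(h(z) = z^2e^{-z/2}\), and \(h'(z) = ze^{-z/2}(2-z/2)\) vanishes at \(z=4\), so the maximum over \(z\ge 0\) is \(h(4) = 16e^{-2}\approx 2.17\). Numerically:

```python
import numpy as np

def h(z):
    # f_a(x) = h(a^2 / x) with h(z) = z^2 e^{-z/2}
    return z**2 * np.exp(-z / 2.0)

z = np.linspace(0.0, 200.0, 2_000_001)
print(h(z).max())                      # maximum h(4) = 16 e^{-2}, about 2.165
assert abs(h(z).max() - 16.0 * np.exp(-2.0)) < 1e-6
```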
Lemma 4.8
We have, for \(|a|\le 1\),
Proof
By the mean value theorem and the fact that \(\varphi '(x) = -x\varphi (x)\), we have
Here,
by Lemma 4.3, while
by Lemma 4.7. The claim follows from \(\Delta _k\sqrt{V(\cdot )} \le Cn^{-H}\). \(\square \)
Lemma 4.9
We have
Proof
From monotonicity, we get
Summing over \(k=2,\ldots ,n-1\) yields
from which we get
Here,
Since \(V(s)\ge cs^{2H}\), we get
This completes the proof. \(\square \)
Lemma 4.10
We have
Proof
We separate the cases \(|a|>1\) and \(|a|\le 1\). Suppose first that \(|a|>1\). Then, using the convention \(\frac{1}{x}\varphi \left( \frac{a}{x}\right) =0\) for \(x=0\), we have
Now, Lemmas 4.5 and 4.6 apply, and we get, with \(f(x) = \frac{1}{\sqrt{x}}e^{-\frac{a^2}{2x}}\), that
This proves the claim when \(|a|>1\). For \(|a|\le 1\), we write
The second term can be bounded by Lemma 4.8, and we have
since for \(|a|\le 1\) we have \(\varphi (a)\ge \varphi (1)>0\). For the first term, we have by Lemma 4.9 that
yielding
This proves the case \(|a|\le 1\) and completes the whole proof. \(\square \)
4.3 Proof of Theorem 2.1 and Corollary 2.2
We begin by considering a simple case \(f(x) = (x-a)^+\).
Proposition 4.11
Let \(a\in \mathbb {R}\) be fixed. Then,
where the remainder satisfies
Proof
By (1.4), we have
Writing
we get
where the last inequality follows from Lemma 4.1. From \( (x-a)^+ = x\textbf{1}_{x>a} - a \textbf{1}_{x>a}\), we obtain for one interval increment
If \(V(t_{k-1})>0\), using representation
where \(Y\sim N(0,1)\) is independent of \(X_{t_{k-1}}\), R is the covariance of X, and b is such that \({\mathbb {E}}X_{t_k}^2 = V(t_k)\), we get
After rearranging the terms, this leads to
Note also that this remains valid in the case when \(V(t_{k-1})=0\), provided we use the convention \({\mathbb {P}}(Y>\infty ) = 0\), \({\mathbb {P}}(Y>-\infty )=1\), \(\varphi (\pm \infty )=0\), and
We have obtained
where
and
For \(I_{0,n}\), we have
Here, \(\gamma _1 = 0\) if \(V(0)=0\) leading to \(|I_{0,n}| \le \varphi (a) n^{-H}\), while for \(V(0)>0\) we can use Lemmas 4.4 and 4.3 to obtain
leading to \(|I_{0,n}|\le C\varphi (a)n^{-H}\) as well. Consider next the terms \(I_{2,n}\) and \(I_{3,n}\). Trivially
while for \(I_{2,n}\) we get by Lemma 4.5 for each subinterval that
where the remainder satisfies, since \(\varphi \left( \frac{a}{\sqrt{V(s)}}\right) \) is increasing in s,
Note that here, by using the fact that \(\varphi '(x)=-x\varphi (x)\) and \(\varphi (x)\) is the density of the normal distribution,
Consequently, we have
It remains to bound the term \(I_{1,n}\). Using Lemma 4.4 allows us to write \(I_{1,n} = I_{1,A,n} + I_{1,B,n}\), where
and
For \(I_{1,A,n}\), we estimate
Here, we have used the facts that \(\frac{1}{\sqrt{V(t_{k})}} \le \frac{1}{\sqrt{V(s)}}\) for \(t_{k-1}\le s\le t_k\) as V is non-decreasing, and that \(d\sqrt{V(s)} = \frac{1}{2\sqrt{V(s)}}{\text {d}}V(s)\) giving us
It remains to study the term \(I_{1,B,n}\). For this, we obtain
Here, the first term satisfies, by Lemma 4.10,
where
The second term in turn satisfies, again by Lemma 4.10,
Collecting all the estimates completes the proof. \(\square \)
Remark 4
We note that by the above proof, we actually obtain
whenever we have only the upper bound \({\mathbb {E}}(X_t-X_s)^2 \le C|t-s|^{2\,H}\) instead of (1.2). Indeed, the leading order term arises from \(I_{1,B,n}\) with a constant given by
With the help of Proposition 4.11, we are now ready to prove our main results.
Proof of Theorem 2.1
Using Lemma 4.1 and (1.4), we have
where (see the proof of Proposition 4.11)
Taking expectation and using Proposition 4.11 to compute \({\mathbb {E}}Z_n^+(a)\), we get
Here, the remainder \(R_n(a)\) is the remainder in Proposition 4.11 and hence, satisfies
that is integrable since \(\int _{\mathbb {R}}\varphi (a)\mu ({\text {d}}a) < \infty \) by assumption. Similarly, the leading order term is finite by the fact that
This yields the claim. \(\square \)
Proof of Corollary 2.2
Let \(A_K = \{\omega : \sup _{0\le t\le 1} |X_t| \le K\}\). Since \(\Psi '\) is locally of bounded variation, it follows that on the set \(A_K\) we obtain
It follows that
In view of Remark 4, taking expectation yields the claim. \(\square \)
Proof of Corollary 2.3
The proof follows directly from the proof of Theorem 2.1 by considering the lower and upper bounds separately, and hence, we leave the details to the interested reader. \(\square \)
Data Availability
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
References
1. Almani, H.M., Sottinen, T.: Multi-mixed fractional Brownian motions and Ornstein–Uhlenbeck processes (2021). arXiv:2103.02978
2. Azmoodeh, E., Mishura, Y., Sabzikar, F.: How does tempering affect the local and global properties of fractional Brownian motion? J. Theor. Probab. 35, 484–527 (2022)
3. Azmoodeh, E., Viitasaari, L.: Rate of convergence for discretization of integrals with respect to fractional Brownian motion. J. Theor. Probab. 28, 396–422 (2015)
4. Bojdecki, T., Gorostiza, L.G., Talarczyk, A.: Sub-fractional Brownian motion and its relation to occupation times. Stat. Probab. Lett. 69, 405–419 (2004)
5. Chen, Z., Leskelä, L., Viitasaari, L.: Pathwise Stieltjes integrals of discontinuously evaluated stochastic processes. Stoch. Process. Appl. 129, 2723–2757 (2019)
6. Garino, V., Nourdin, I., Vallois, P.: Asymptotic error distribution for the Riemann approximation of integrals driven by fractional Brownian motion. Electron. J. Probab. (2020), to appear
7. Hinz, M., Tölle, J.M., Viitasaari, L.: Sobolev regularity of occupation measures and paths, variability and compositions. Electron. J. Probab. 27, 1–29 (2022)
8. Hinz, M., Tölle, J.M., Viitasaari, L.: Variability of paths and differential equations with BV-coefficients. Ann. Inst. H. Poincaré Probab. Stat., accepted (2022)
9. Houdré, C., Villa, J.: An example of infinite dimensional quasi-helix. In: Stochastic Models, Contemp. Math. 336, Am. Math. Soc., 195–201 (2002)
10. Istas, J., Lang, G.: Quadratic variations and estimation of the local Hölder index of a Gaussian process. Ann. Inst. H. Poincaré Probab. Stat. 23, 407–436 (1997)
11. Kaarakka, T., Salminen, P.: On fractional Ornstein–Uhlenbeck processes. Commun. Stoch. Anal. 5, 121–133 (2011)
12. Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion. Springer, Berlin (1991)
13. Russo, F., Tudor, C.A.: On bifractional Brownian motion. Stoch. Process. Appl. 116, 830–856 (2006)
14. Sottinen, T., Viitasaari, L.: Pathwise integrals and Itô–Tanaka formula for Gaussian processes. J. Theor. Probab. 29, 590–616 (2016)
15. Yaskov, P.: On pathwise Riemann–Stieltjes integrals. Stat. Probab. Lett. 150, 101–107 (2019)
Acknowledgements
Pauliina Ilmonen gratefully acknowledges support from the Academy of Finland, decision number 346308 (The Centre of Excellence in Randomness and Structures).
Funding
Open access funding provided by Uppsala University.
Contributions
All authors contributed to the content and to the writing process of the manuscript.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
About this article
Cite this article
Azmoodeh, E., Ilmonen, P., Shafik, N. et al. On Sharp Rate of Convergence for Discretization of Integrals Driven by Fractional Brownian Motions and Related Processes with Discontinuous Integrands. J Theor Probab 37, 721–743 (2024). https://doi.org/10.1007/s10959-023-01272-7
Keywords
- Approximation of stochastic integral
- Discontinuous integrands
- Sharp rate of convergence
- Fractional Brownian motions and related processes