On sharp rate of convergence for discretisation of integrals driven by fractional Brownian motions and related processes with discontinuous integrands

We consider equidistant approximations of stochastic integrals driven by H\"older continuous Gaussian processes of order $H>\frac12$ with discontinuous integrands involving bounded variation functions. We give the exact rate of convergence in the $L^1$-distance and provide examples with different drivers. It turns out that the exact rate of convergence is proportional to $n^{1-2H}$, which is twice as good as the best known results in the case of discontinuous integrands, and corresponds to the known rate in the case of smooth integrands. The novelty of our approach is that, instead of using multiplicative estimates for the integrals involved, we apply the change of variables formula together with some facts on convex functions, allowing us to compute the expectations explicitly.


Introduction
We consider the rate of convergence for equidistant approximations of pathwise stochastic integrals

∫_0^1 Ψ′(X_s) dX_s. (1.1)

Here Ψ is a difference of convex functions and X is a centered Gaussian process with non-decreasing variance function V(s) = E[X_s^2], normalized such that V(1) = 1. We assume that the variogram function ϑ(t, s) = E(X_t − X_s)^2 satisfies, for some H ∈ (1/2, 1),

ϑ(t, s) = σ^2 |t − s|^{2H} + g(t, s), (1.2)

where lim_{|t−s|→0} g(t, s)/|t − s|^{2H} = 0. This means, in particular, that the process X has H as its Hölder index. One way to realize the process X is to take a fractional Brownian motion B^H with index H and an independent Gaussian process G with variogram g (such a process has Hölder index at least H), and to put X_t = X_0 + B^H_t + G_t, where X_0 may be a random (Gaussian) initial value. We also note that we have either V(0) = C > 0 (e.g. the stationary case) or V(s) ≥ c s^{2H} (e.g. the case of the fractional Brownian motion). It follows that the sample paths of X are Hölder continuous of any order smaller than H. Consequently, by [5] the pathwise Riemann–Stieltjes stochastic integral in (1.1) exists and we have the classical chain rule

Ψ(X_1) = Ψ(X_0) + ∫_0^1 Ψ′(X_s) dX_s. (1.3)

In the case of the fractional Brownian motion, the problem was studied in [3]. This article extends [3] in two directions: (i) we allow more integrators than just the fractional Brownian motion, and (ii) we give the exact L^1 error of the approximations. Rather surprisingly, it turns out that we obtain the rate n^{1−2H}, which is twice as good as the rate obtained in [3] and corresponds to the known correct rate in the case of smooth functions Ψ′ (see for instance [3, 6] and the references therein). In contrast, in the Brownian motion case introducing jumps reduces the rate to n^{−1/4}, compared to n^{−1/2} obtained for smooth functions Ψ′ (see, e.g., [3]). For other related articles on stochastic integrals with discontinuous integrands, see also [5, 7, 8, 14, 15].
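The claimed rate can be tested numerically. The following is an illustrative sketch (not part of the paper; all parameter choices, such as H = 0.75, the level a = 0.2, and the path counts, are ours): we simulate fractional Brownian motion by a Cholesky factorization of its covariance, take Ψ(x) = |x − a| so that Ψ′(x) = sgn(x − a), compute the exact integral from the chain rule as Ψ(X_1) − Ψ(X_0), and compare it with the left-point Riemann sums.

```python
import numpy as np

def fbm_paths(n, H, n_paths, rng):
    """Sample fBm at t_k = k/n, k = 1..n, via Cholesky of the fBm covariance."""
    t = np.arange(1, n + 1) / n
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov)
    return rng.standard_normal((n_paths, n)) @ L.T

def l1_error(n, H=0.75, a=0.2, n_paths=2000, seed=1):
    """Monte Carlo estimate of E|Psi(X_1) - Psi(X_0) - Riemann sum|, Psi(x) = |x - a|."""
    rng = np.random.default_rng(seed)
    X = np.hstack([np.zeros((n_paths, 1)), fbm_paths(n, H, n_paths, rng)])  # X_0 = 0
    exact = np.abs(X[:, -1] - a) - np.abs(X[:, 0] - a)   # value given by the chain rule
    riemann = np.sum(np.sign(X[:, :-1] - a) * np.diff(X, axis=1), axis=1)
    return float(np.mean(np.abs(exact - riemann)))

# L^1 errors on two grids; with H = 0.75 the expected decay is n^{1-2H} = n^{-1/2}.
errors = {n: l1_error(n) for n in (8, 64)}
```

With the decay n^{−1/2} one expects the error at n = 64 to be roughly a third of that at n = 8; finer grids and more paths sharpen the estimate at the usual Monte Carlo cost.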
The rest of the article is organized as follows: the main results are given in Section 2. In Section 3 we give examples. Finally, the proofs are given in Section 4.

Statement of the main results
We begin by recalling some basic facts on convex functions and on functions of bounded variation.For details on the topic, see for instance [12].
For a convex function Ψ, let Ψ′ denote its one-sided derivative. Then the second derivative Ψ′′ = µ exists as a Radon measure. A particular example is the function Ψ(x) = |x − a|, in which case Ψ′(x) = sgn(x − a) and Ψ′′ = 2δ_a, where δ_a is the Dirac measure at the level a. More generally, if Ψ′ is of (locally) bounded variation, then it can be represented as the difference of two non-decreasing functions. As a corollary, Ψ′ can be regarded as the derivative of a function Ψ that is a difference of two convex functions. That is, we have Ψ = Ψ_1 − Ψ_2, and the second derivative Ψ′′ is a signed Radon measure µ = µ_1 − µ_2 with total variation measure |µ| = µ_1 + µ_2, where µ_i, i = 1, 2, are non-negative measures.
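The classical integral representation behind these facts (see, e.g., [12]) can be written out explicitly; on an interval I = [c, d], and up to the usual conventions at the endpoints, one has

```latex
\Psi(x) \;=\; \Psi(c) + \Psi'_-(c)\,(x-c) + \int_{(c,d]} (x-a)_+\,\mu(da),
\qquad x \in I = [c,d].
```

For Ψ(x) = |x − a_0| this recovers µ = 2δ_{a_0}, consistent with the identity |x − a_0| = 2(x − a_0)_+ − (x − a_0).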
Throughout the article, we also use the short notation ϕ for the standard normal density, that is, ϕ(a) = (2π)^{−1/2} e^{−a^2/2}, where Y ∼ N(0, 1).
Our main result is the following.
Theorem 2.1. Let Ψ be a convex function with the left-sided derivative Ψ′ and let µ denote the measure associated to the second derivative of Ψ such that ∫_ℝ ϕ(a)µ(da) < ∞. Let X be a Gaussian process as above. Then where the remainder satisfies for some constant C depending solely on the variance function V(s).
Remark 1. It follows from the assumption ∫_ℝ ϕ(a)µ(da) < ∞ that the random objects in (2.1) are integrable, and hence the bound (2.1) makes sense. Indeed, from the proof of Theorem 2.1 we obtain that the difference between the stochastic integral and its approximation in (2.1) is integrable. Moreover, in view of (1.3) and Lemma 4.2 below, it follows that the stochastic integral is integrable. These facts imply that the Riemann approximation in (2.1) is integrable as well.
For functions of locally bounded variation we obtain immediately the following corollary.
Corollary 2.2. Let Ψ′ be of locally bounded variation with total variation measure |µ|. Suppose ∫_ℝ ϕ(a)|µ|(da) < ∞ and let X be a Gaussian process as above. Then where the remainder satisfies for some constant C depending solely on the variance function V(s).
Finally, as a by-product of our proof we obtain lower and upper bounds with a weaker condition on the variogram ϑ(t, s).
Corollary 2.3. Let Ψ be a convex function with the left-sided derivative Ψ′ and let µ denote the measure associated to the second derivative of Ψ such that ∫_ℝ ϕ(a)µ(da) < ∞. Let X be a centered Gaussian process with a non-decreasing variance function V(s) such that V(1) = 1. Suppose further that the variogram satisfies

σ_−^2 |t − s|^{2H} ≤ ϑ(t, s) ≤ σ_+^2 |t − s|^{2H}.

Then there exist constants C_− and C_+ such that

Remark 2. Note that here we have incorporated the remainders into the constants C_− and C_+. If one considers only the leading-order terms (with respect to n), then C_− = σ_−^2 and C_+ = σ_+^2.

Examples
Our results cover many interesting Gaussian processes and functions Ψ′. First of all, the assumption ∫_ℝ ϕ(a)|µ|(da) < ∞ is not very restrictive, due to the exponential decay of the Gaussian density ϕ. Our assumption (1.2) on the Gaussian process is not very restrictive either, as the following examples show.
Example 1. The normalized multi-mixed fractional Brownian motion (see [1]) is the process

X_t = Σ_{k=1}^n σ_k B^{H_k}_t,

where Σ_{k=1}^n σ_k^2 = 1 and the B^{H_k} are independent fractional Brownian motions with Hurst indices H_k. Let H_min = min_{k≤n} H_k and let k_min be the index of H_min (here we assume, for the sake of simplicity, that k_min is unique). Assume that H_min > 1/2. Then (1.2) holds with H = H_min and σ^2 = σ_{k_min}^2, and Theorem 2.1 is applicable.

Example 2. Let X be a centered stationary Gaussian process with covariance function r satisfying, for some H ∈ (1/2, 1),

2(r(0) − r(t)) = σ^2 |t|^{2H} + g(t),

where g(t)/|t|^{2H} → 0 as t → 0. Theorem 2.1 is applicable with rate H and variance function V(s) = V(0). This example covers many interesting stationary Gaussian processes, including the fractional Ornstein–Uhlenbeck and related processes (see, e.g., [10, 11]).
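The structure of Example 1 can be checked directly: by independence, the variogram of the mixed process is the weighted sum of the component variograms, and the roughest component dominates at small lags. The following sketch uses hypothetical parameters of our own choosing (two components, σ_k^2 = 0.7, 0.3 and H_k = 0.6, 0.9), not values from the paper.

```python
import numpy as np

# Hypothetical parameters: two independent fBm components.
sigma2 = np.array([0.7, 0.3])   # sigma_k^2, normalized to sum to 1
H = np.array([0.6, 0.9])        # Hurst indices; H_min = 0.6 attained at k_min = 0

def variogram(dt):
    """E(X_t - X_s)^2 of the mixed process at lag dt, by independence of components."""
    return float(np.sum(sigma2 * np.abs(dt) ** (2 * H)))

# The remainder g is the contribution of the smoother components, so
# variogram(dt) / (sigma^2_{k_min} * dt^{2 H_min}) tends to 1 as dt -> 0:
ratios = [variogram(dt) / (sigma2[0] * dt ** (2 * H[0]))
          for dt in (1e-2, 1e-4, 1e-6)]
```

The ratios decrease monotonically towards 1, in line with condition (1.2) with H = H_min and σ^2 = σ^2_{k_min}.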
Example 3. The normalized sub-fractional Brownian motion S^H with index H ∈ (0, 1) (see [4]) is a centered Gaussian process with covariance proportional to

s^{2H} + t^{2H} − (1/2)[(s + t)^{2H} + |t − s|^{2H}],

normalized so that V(s) = s^{2H}. For H > 1/2, Corollary 2.3 is applicable with rate H and V(s) = s^{2H}.

Example 4. The bifractional Brownian motion B^{H,K} (see [9, 13]) with indices H ∈ (0, 1) and K ∈ (0, 1] is the centered Gaussian process with covariance

R(t, s) = 2^{−K}[(t^{2H} + s^{2H})^K − |t − s|^{2HK}].

Similarly to the case of the sub-fractional Brownian motion, we have

2^{−K}|t − s|^{2HK} ≤ E(B^{H,K}_t − B^{H,K}_s)^2 ≤ 2^{1−K}|t − s|^{2HK}.

Assume HK > 1/2. Now Corollary 2.3 is applicable with rate HK and V(s) = s^{2HK}.

Example 5. The tempered fractional Brownian motion X^H (see [2]) with index H ∈ (0, 1) is the centered Gaussian process with covariance given in terms of a certain function C_t (see [2, Lemma 2.3]). Similarly to the cases of the sub-fractional and bifractional Brownian motions, we have (see [2, Theorem 2.7])
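For Example 4 the diagonal of the covariance collapses algebraically: R(t, t) = 2^{−K}(2 t^{2H})^K = t^{2HK}, so V(s) = s^{2HK} without any renormalization. A small sketch (with illustrative indices of our own choosing, H = 0.8 and K = 0.75, so that HK = 0.6 > 1/2) checks this and the two-sided variogram bound at a sample pair of points.

```python
import numpy as np

def bifbm_cov(t, s, H, K):
    """Covariance of the bifractional Brownian motion B^{H,K} (standard form)."""
    return 2.0 ** (-K) * ((t ** (2 * H) + s ** (2 * H)) ** K
                          - abs(t - s) ** (2 * H * K))

def variogram(t, s, H, K):
    """E(B_t - B_s)^2 computed from the covariance."""
    return bifbm_cov(t, t, H, K) + bifbm_cov(s, s, H, K) - 2 * bifbm_cov(t, s, H, K)

H, K = 0.8, 0.75                      # illustrative indices with HK = 0.6
ts = np.linspace(0.1, 1.0, 10)
var = np.array([bifbm_cov(t, t, H, K) for t in ts])   # should equal t^{2HK}
v = variogram(1.0, 0.5, H, K)                          # should lie between the bounds
```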

Proofs
In what follows, C denotes a generic constant that depends only on the variance function V(s) but may vary from line to line.
4.1. Auxiliary lemmas on the Gaussian process X and the convex function Ψ.

The following is one of our key lemmas; it allows us to reduce the analysis to the simple case Ψ(x) = (x − a)_+.

Lemma 4.1. Let Ψ be convex and let ψ = Ψ′_− be its left-sided derivative. Then for any x, y ∈ ℝ we have

Ψ(x) − Ψ(y) − ψ(y)(x − y) = ∫_ℝ [(x − a)_+ − (y − a)_+ − 1_{y>a}(x − y)] µ(da).

Proof. Let I be an interval such that x, y ∈ I. Then it is well known that we have the representations [12] It is an easy exercise to check that (x − a)_+ − (y − a)_+ − 1_{y>a}(x − y) ≥ 0, from which it follows that Ψ(x) − Ψ(y) − ψ(y)(x − y) ≥ 0 for any convex function Ψ. It remains to note that where the latter integral is well-defined since (x − a)_+ − (y − a)_+ − 1_{y>a}(x − y) = 0 whenever a ∉ I.
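The identity Ψ(x) − Ψ(y) − ψ(y)(x − y) = ∫ [(x − a)_+ − (y − a)_+ − 1_{y>a}(x − y)] µ(da) can be sanity-checked numerically. The sketch below (ours, not the paper's) takes Ψ(x) = x^2, for which ψ(y) = 2y and µ has density 2 with respect to Lebesgue measure, and compares both sides by midpoint-rule quadrature.

```python
import numpy as np

def lhs(psi, dpsi, x, y):
    """Left-hand side: Psi(x) - Psi(y) - psi(y)(x - y)."""
    return psi(x) - psi(y) - dpsi(y) * (x - y)

def rhs(x, y, density, lo, hi, m=200000):
    """Midpoint-rule quadrature of the kernel against mu(da) = density(a) da."""
    h = (hi - lo) / m
    a = lo + (np.arange(m) + 0.5) * h
    kernel = np.maximum(x - a, 0.0) - np.maximum(y - a, 0.0) - (a < y) * (x - y)
    return float(np.sum(kernel * density(a)) * h)

# Psi(x) = x^2: psi(y) = 2y, mu(da) = 2 da; both sides should equal (x - y)^2.
pairs = [(1.3, -0.4), (-2.0, 0.5)]
diffs = [abs(lhs(lambda u: u * u, lambda u: 2 * u, x, y)
             - rhs(x, y, lambda a: 2.0 + 0.0 * a, -5.0, 5.0))
         for x, y in pairs]
```

For Ψ(x) = x^2 both sides reduce to (x − y)^2, so the quadrature error is the only discrepancy.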
As a consequence we obtain the following lemma, providing us with integrability.
Lemma 4.2. Let Ψ be a convex function with the associated measure

Proof. By adding a linear function if necessary, we may assume without loss of generality that Ψ ≥ 0. Now from Lemma 4.1 we deduce that, for any deterministic z, Taking expectations and using Tonelli's theorem, we get In particular, for z = 0 we get Hence it suffices to prove However, this now follows by observing that and the well-known asymptotic relation aP(Y > a) ∼ ϕ(a) as a → ∞.
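The Gaussian tail asymptotics aP(Y > a) ∼ ϕ(a) used at the end of the proof (the Mills ratio) are easy to verify numerically; the following sketch uses only the standard library, writing the tail probability via the complementary error function.

```python
import math

def phi(a):
    """Standard normal density phi(a) = (2*pi)^{-1/2} exp(-a^2/2)."""
    return math.exp(-a * a / 2.0) / math.sqrt(2.0 * math.pi)

def tail(a):
    """P(Y > a) for Y ~ N(0,1), via the complementary error function."""
    return 0.5 * math.erfc(a / math.sqrt(2.0))

# Mills-ratio asymptotics: a * P(Y > a) / phi(a) -> 1 as a -> infinity.
ratios = [a * tail(a) / phi(a) for a in (2.0, 5.0, 10.0)]
```

The ratios increase towards 1 from below, consistent with the expansion P(Y > a) = ϕ(a)/a · (1 − a^{−2} + O(a^{−4})).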
Next we establish several lemmas related to the Gaussian process X.
Lemma 4.3. We always have

Proof. By Gaussianity we have E|X_t| = C√V(t), from which the reverse triangle inequality gives leading to the first claim. The second claim now follows from and the fact that

Throughout, we use the short notation , where R(t, s) is the covariance function of X, and we use the convention γ_k = 0 whenever V(t_{k−1}) = 0. The following lemma gives us a useful relation.
Proof. We use and . Using also leads to . Consequently, we have , completing the proof.

Approximation estimates.
We begin with the following elementary lemma on the approximation of Riemann-Stieltjes integrals.For the reader's convenience, we present the proof.
Lemma 4.5. Let f be a differentiable function on [0, 1] and let g be non-decreasing on [0, 1]. Then

Proof. Without loss of generality, we may assume that ∫_0^1 |f′(s)| ds < ∞, since otherwise there is nothing to prove. From this it follows that f is of bounded variation, since for a differentiable function we have where TV stands for the total variation. Since V is continuous and non-decreasing, this further implies that f(V(·)) is continuous and of bounded variation as well, with Indeed, this follows from the fact that Thus the Riemann–Stieltjes integral ∫_0^1 f(V(s)) dg(s) exists, as f(V(s)) is continuous and g(s) is non-decreasing, and hence of bounded variation. Let us now prove the claimed upper bound. We have proving the claim and completing the proof.
We apply the result to the function . The following lemma (Lemma 4.6) evaluates the integral for this function in terms of the level a, when the level a is large enough.

Proof. By straightforward computations we get from which we get By L'Hôpital's rule, we obtain that It follows that This completes the proof.
The following lemma provides boundedness in the region |a| ≤ 1.
Lemma 4.7. Set f_a(x) = (a^4/x^2) e^{−a^2/(2x)}. Then

Proof. The claim follows directly by noting that f_a(x) = h(a^2/x), where h(z) = z^2 e^{−z/2} is bounded for z ≥ 0.
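The boundedness of h(z) = z^2 e^{−z/2} is elementary calculus: h′(z) = (2z − z^2/2) e^{−z/2} vanishes at z = 4, giving sup_{z≥0} h(z) = 16 e^{−2}. A quick numerical check of this fact:

```python
import numpy as np

def h(z):
    """h(z) = z^2 * exp(-z/2), so that f_a(x) = h(a^2 / x)."""
    return z ** 2 * np.exp(-z / 2.0)

# Dense grid on [0, 60]; h decays to 0 at infinity, so the sup is attained inside.
z = np.linspace(0.0, 60.0, 600001)
vals = h(z)
sup_numeric = float(vals.max())
z_star = float(z[vals.argmax()])          # should be close to 4
sup_exact = float(16.0 * np.exp(-2.0))    # stationary point z = 4
```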
Lemma 4.8. We have, for |a| ≤ 1,

Proof. By the mean value theorem and the fact that ϕ′(x) = −xϕ(x), we have Here

Lemma 4.9. We have

Proof. From monotonicity we get Summing over k = 2, ..., n − 1 yields Here

Lemma 4.10. We have

Proof. We separate the cases |a| > 1 and |a| ≤ 1. Let first |a| > 1. Noting that then, by using the convention (1/x)ϕ(a/x) = 0 for x = 0, we have . Now Lemma 4.5 and Lemma 4.6 apply, and we get, with This proves the claim for |a| > 1. For |a| ≤ 1, we write The second term can be bounded by Lemma 4.8, and we have since for |a| ≤ 1 we have ϕ(a) > ε. For the first term, we have by Lemma 4.9 that This proves the case |a| ≤ 1 and completes the whole proof.
Proposition 4.11. where the remainder satisfies

Proof. By (1.3) we have Writing we get where the last inequality follows from Lemma 4.1. From (x − a)_+ = x 1_{x>a} − a 1_{x>a} we obtain, for the increment over one interval, If V(t_{k−1}) > 0, using the representation After rearranging the terms, this leads to Note also that this remains valid in the case V(t_{k−1}) = 0, provided we use the convention We have obtained where For I_{0,n} we have It remains to study the term I_{1,B,n}. For this we obtain Here the first term satisfies, by Lemma 4.10, where The second term in turn satisfies, again by Lemma 4.10, Collecting all the estimates completes the proof.
Remark 3. We note that, by the above proof, we actually obtain whenever we only have the upper bound E(X_t − X_s)^2 ≤ C|t − s|^{2H} instead of (1.2). Indeed, the leading-order term arises from I_{1,B,n} with a constant given by C(a) =

With the help of Proposition 4.11, we are now ready to prove our main results.
Proof of Theorem 2.1. Taking expectations and using Proposition 4.11, we get Here the remainder satisfies This yields the claim.
Proof of Corollary 2.2. Let A_K = {ω : sup_{0≤t≤1} |X_t| ≤ K}. Since Ψ′ is locally of bounded variation, it follows that on the set A_K we obtain It follows that In view of Remark 3, taking expectations yields the claim.