## 1 Introduction

We consider the rate of convergence for equidistant approximations of pathwise stochastic integrals

\begin{aligned} \int _0^1 \Psi '(X_s)\, {\text {d}}X_s \approx \sum _{k=1}^n \Psi '(X_{t_{k-1}})(X_{t_k}-X_{t_{k-1}}), \end{aligned}
(1.1)

where $$t_k = \frac{k}{n}$$, $$k=0,1,\ldots ,n$$. Here, $$\Psi$$ is a difference of convex functions and X is a centered Gaussian process on [0, 1] with non-decreasing variance function $$V(s) = {\mathbb {E}}X_s^2$$, normalized such that $$V(1)=1$$. We assume that the variogram function

\begin{aligned} \vartheta (t,s) = {\mathbb {E}}(X_t-X_s)^2,\quad {s,t\in [0,1]} \end{aligned}

satisfies, for some $$H\in (\frac{1}{2},1)$$, that

\begin{aligned} \vartheta (t,s) = \sigma ^2|t-s|^{2H} + g(t,s), \end{aligned}
(1.2)

where

\begin{aligned} \lim _{|t-s|\rightarrow 0}\frac{g(t,s)}{|t-s|^{2H}} = 0. \end{aligned}
(1.3)

This means, in particular, that the process X has H as its Hölder index. One way to realize the process X is to take a fractional Brownian motion $$B^H$$ with index H and an independent Gaussian process G with variogram g (such a process has Hölder index at least H) and put

\begin{aligned} X_t = X_0 + {\sigma }B_t^H + G_t, \end{aligned}

where $$X_0$$ may be a random (Gaussian) initial value. Since G has variogram g(s, t), it follows from (1.3) that G typically has more regular sample paths than $$B^H$$. We also note that we have either $$V(0)=C>0$$ (e.g., the stationary case) or $$V(s)\ge cs^{2H}$$ (e.g., the case of the fractional Brownian motion). Indeed, if $$X_0=0$$, then $$V(s) = \vartheta (s,0)$$, from which $$V(s)\ge cs^{2H}$$ follows by (1.2) and (1.3). It follows that

\begin{aligned} \int _0^1 \frac{1}{\sqrt{V(s)}} \,{\text {d}}s < \infty . \end{aligned}

Consequently, by [5] the pathwise Riemann–Stieltjes stochastic integral in (1.1) exists and we have the classical chain rule

\begin{aligned} \Psi (X_1)-\Psi (X_0) = \int _0^1 \Psi '(X_s)\,{\text {d}}X_s. \end{aligned}
(1.4)

In the case of the fractional Brownian motion, the problem was studied in [3]. This article extends [3] in two directions: (i) we allow more general integrators than the fractional Brownian motion, and (ii) we give the exact $$L^1$$ error of the approximations. Rather surprisingly, it turns out that we obtain the rate $$n^{1-2H}$$, which is twice as good as the rate obtained in [3] and corresponds to the known correct rate in the case of smooth functions $$\Psi '$$ (see, for instance, [3, 6] and the references therein). In contrast, in the Brownian motion case, introducing jumps reduces the rate to $$n^{-1/4}$$, compared with the rate $$n^{-1/2}$$ obtained for smooth functions $$\Psi '$$ (see, e.g., [3]). For other related articles on stochastic integrals with discontinuous integrands, see also [5, 7, 8, 14, 15].
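The setting can be illustrated numerically. The following sketch is not part of the argument: it takes the pure fractional Brownian motion case ($$g\equiv 0$$) with $$\Psi (x)=(x-a)^+$$, and the parameters H, n, the level a, the number of sample paths, and the Cholesky-based simulation are all illustrative choices. It compares the Riemann sum in (1.1) with the exact value obtained from the chain rule (1.4); by the convexity results below, the pathwise error is non-negative.

```python
import math
import random

H = 0.7   # Hurst index in (1/2, 1); illustrative choice
n = 64    # number of grid points t_k = k/n
a = 0.0   # kink of Psi(x) = (x - a)^+, so Psi'(x) = 1_{x > a}

def fbm_cov(t, s):
    # covariance of fractional Brownian motion with Hurst index H
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

# Cholesky factor of the covariance matrix of (X_{t_1}, ..., X_{t_n})
t = [(k + 1) / n for k in range(n)]
C = [[fbm_cov(t[i], t[j]) for j in range(n)] for i in range(n)]
L = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1):
        s = C[i][j] - sum(L[i][m] * L[j][m] for m in range(j))
        L[i][j] = math.sqrt(s) if i == j else s / L[j][j]

random.seed(1)
errors = []
for _ in range(20):
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    X = [0.0] + [sum(L[i][m] * z[m] for m in range(i + 1)) for i in range(n)]
    riemann = sum(X[k] - X[k - 1] for k in range(1, n + 1) if X[k - 1] > a)
    exact = max(X[n] - a, 0.0) - max(X[0] - a, 0.0)  # chain rule (1.4)
    errors.append(exact - riemann)
```

The sign constraint on the error reflects the convexity of $$\Psi$$; for smooth $$\Psi '$$ no such constraint holds.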

The rest of the article is organized as follows: the main results are given in Sect. 2. In Sect. 3, we give examples. Finally, the proofs are given in Sect. 4.

## 2 Statement of the Main Results

We begin by recalling some basic facts on convex functions and on functions of bounded variation. For details on the topic, see for instance [12].

For a convex function $$\Psi$$, let $$\Psi '$$ denote its one-sided derivative. Then, the second derivative $$\Psi '' = \mu$$ exists as a Radon measure. A particular example is the function $$\Psi (x) = |x-a|$$, in which case $$\Psi '(x) = sgn(x-a)$$ and $$\Psi '' = \delta _a$$, the Dirac measure at the level a. More generally, if $$\Psi '$$ is of (locally) bounded variation, then it can be represented as the difference of two non-decreasing functions. As a consequence, $$\Psi '$$ can be regarded as the derivative of a function $$\Psi$$ that is a difference of two convex functions. That is, we have $$\Psi = \Psi _1-\Psi _2$$, and the second derivative $$\Psi ''$$ is a signed Radon measure $$\mu = \mu _1-\mu _2$$ with total variation measure $$|\mu |=\mu _1+\mu _2$$, where $$\mu _i$$, $$i=1,2$$, are non-negative measures.

Throughout the article, we also use the short notation

\begin{aligned} \varphi (a) = {\mathbb {E}}(Y\textbf{1}_{Y>a}) = \frac{1}{\sqrt{2\pi }}e^{-\frac{a^2}{2}}, \end{aligned}

where $$Y \sim N(0,1)$$.
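The identity defining $$\varphi$$ can be verified by elementary quadrature. A quick numerical sketch (the cutoff at $$y=10$$ and the step count are arbitrary numerical choices) compares $${\mathbb {E}}(Y\textbf{1}_{Y>a})$$ with the Gaussian density:

```python
import math

def phi(a):
    # standard normal density
    return math.exp(-a * a / 2) / math.sqrt(2 * math.pi)

def truncated_mean(a, upper=10.0, steps=100_000):
    # trapezoidal approximation of E(Y 1_{Y>a}) = int_a^upper y phi(y) dy
    h = (upper - a) / steps
    total = 0.0
    for i in range(steps + 1):
        y = a + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * y * phi(y)
    return total * h

check = {a: (truncated_mean(a), phi(a)) for a in (-1.0, 0.0, 0.5, 2.0)}
```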

Our main result is the following.

### Theorem 2.1

Let $$\Psi$$ be a convex function with the left sided derivative $$\Psi '$$, and let $$\mu$$ denote the measure associated with the second derivative of $$\Psi$$ such that $$\int _{\mathbb {R}}\varphi (a)\mu (da) < \infty$$. Let X be a Gaussian process as above. Then,

\begin{aligned} \begin{aligned}&{\mathbb {E}}\left| \int _0^1 \Psi '(X_s)dX_s - \sum _{k=1}^n \Psi '(X_{t_{k-1}})(X_{t_k}-X_{t_{k-1}})\right| \\&= \sigma ^2\int _{\mathbb {R}}\int _0^1 \frac{1}{\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) ds\mu (da) \left( \frac{1}{n}\right) ^{2H-1} + \int _{\mathbb {R}}R_n(a)\mu (da), \end{aligned} \end{aligned}
(2.1)

where the remainder satisfies

\begin{aligned} \int _{\mathbb {R}}R_n(a)\mu (da) \le C\max \{n^{-H},n^{1-2H}\max _{1\le k\le n} [g(t_k,t_{k-1})n^{2H}]\} \end{aligned}

for some constant C depending solely on the variance function V(s).

### Remark 1

Since $$H<1$$, we have $$n^{-H} < n^{1-2H}$$. Consequently, in view of (1.3), the remainder $$\int _{\mathbb {R}}R_n(a)\mu ({\text {d}}a)$$ satisfies

\begin{aligned} \lim _{n\rightarrow \infty } \frac{\int _{\mathbb {R}}R_n(a)\mu ({\text {d}}a)}{n^{1-2H}} = 0, \end{aligned}

i.e., the remainder is negligible compared to the first term in (2.1).

### Remark 2

It follows from the assumption $$\int _{\mathbb {R}}\varphi (a)\mu ({\text {d}}a) < \infty$$ that the stochastic integral and its Riemann approximation in (2.1) are integrable (a random variable Z is integrable if $${\mathbb {E}}|Z|<\infty$$), and hence, the bound (2.1) makes sense. Indeed, by the proof of Theorem 2.1, the difference of the stochastic integral and its approximation in (2.1) is integrable. Moreover, in view of (1.4) and Lemma 4.2 below, the stochastic integral itself is integrable. These facts imply that the Riemann approximation in (2.1) is integrable as well.

For functions of locally bounded variation, we obtain immediately the following corollary.

### Corollary 2.2

Let $$\Psi '$$ be of locally bounded variation with $$|\mu |$$ as its total variation measure. Suppose $$\int _{\mathbb {R}}\varphi (a)|\mu |(da) < \infty$$ and let X be a Gaussian process as above. Then,

\begin{aligned} \begin{aligned}&{\mathbb {E}}\left| \int _0^1 \Psi '(X_s)dX_s - \sum _{k=1}^n \Psi '(X_{t_{k-1}})(X_{t_k}-X_{t_{k-1}})\right| \\&\le \sigma ^2\int _{\mathbb {R}}\int _0^1 \frac{1}{\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) ds|\mu |(da) \left( \frac{1}{n}\right) ^{2H-1} + \int _{\mathbb {R}}R_n(a)|\mu |(da), \end{aligned} \end{aligned}

where the remainder satisfies

\begin{aligned} \int _{\mathbb {R}}R_n(a)|\mu |(da) \le C\max \left\{ n^{-H},n^{1-2H}\max _{1\le k\le n} \left[ g\left( t_k,t_{k-1}\right) n^{2H}\right] \right\} \end{aligned}

for some constant C depending solely on the variance function V(s).

Finally, as a by-product of our proof we obtain lower and upper bounds with a weaker condition on the variogram $$\vartheta (t,s)$$.

### Corollary 2.3

Let $$\Psi$$ be a convex function with the left sided derivative $$\Psi '$$, and let $$\mu$$ denote the measure associated with the second derivative of $$\Psi$$ such that $$\int _{\mathbb {R}}\varphi (a)\mu (da) < \infty$$. Let X be a centered Gaussian process with a non-decreasing variance function V(s) with $$V(1)=1$$. Suppose further that the variogram satisfies

\begin{aligned} \sigma _-^2|t-s|^{2H} \le \vartheta (t,s)\le \sigma _+^2|t-s|^{2H} \end{aligned}

for some $$H\in \left( \frac{1}{2},1\right)$$. Then, there exist constants $$C_-$$ and $$C_+$$ such that

\begin{aligned} \begin{aligned}&C_- \int _{\mathbb {R}}\int _0^1 \frac{1}{\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) ds\mu (da)\left( \frac{1}{n}\right) ^{2H-1} \\&\le {\mathbb {E}}\left| \int _0^1 \Psi '(X_s)dX_s - \sum _{k=1}^n \Psi '(X_{t_{k-1}})(X_{t_k}-X_{t_{k-1}})\right| \\&\le C_+ \int _{\mathbb {R}}\int _0^1 \frac{1}{\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) ds\mu (da) \left( \frac{1}{n}\right) ^{2H-1}. \end{aligned} \end{aligned}

### Remark 3

Note that here we have incorporated the remainders into the constants $$C_-$$ and $$C_+$$. If one considers only the leading order terms (with respect to n), then $$C_- =\sigma _-^2$$ and $$C_+ = \sigma _+^2$$.

## 3 Examples

Our results cover many interesting Gaussian processes and functions $$\Psi '$$. First of all, the assumption $$\int _{\mathbb {R}}\varphi (a)|\mu |({\text {d}}a)<\infty$$ is not very restrictive, due to the exponential decay of $$\varphi (a) = \frac{1}{\sqrt{2\pi }}e^{-\frac{a^2}{2}}$$. Our Assumption (1.2) on the Gaussian process is not very restrictive either as the following examples show.

### Example 1

The normalized multi-mixed fractional Brownian motion (see [1]) is the process

\begin{aligned} X_t = \sum _{k=1}^n \sigma _k B_t^{H_k}, \end{aligned}

where $$\sum _{k=1}^n \sigma _k^2 =1$$ and $$B^{H_k}$$’s are independent fractional Brownian motions with Hurst indices $$H_k$$. Let $$H_{\min } = \min _{k\le n} H_k$$ and let $$k_{\min }$$ be the index of $$H_{\min }$$ (here, we assume for the sake of simplicity that $$k_{\min }$$ is unique). Assume that $$H_{\min }>\frac{1}{2}$$. We have

\begin{aligned} \vartheta (t,s) = \sigma _{k_{\min }}^2|t-s|^{2H_{\min }} + g(t,s), \end{aligned}

where

\begin{aligned} g(t,s) = \sum _{k\ne k_{\min }} \sigma _k^2|t-s|^{2H_k}. \end{aligned}

Theorem 2.1 is applicable with $$H=H_{\min }$$ and $$V(s) = \sum _{k=1}^n \sigma _k^2 s^{2H_k}$$.
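The decomposition of the variogram can be sanity-checked numerically for a concrete mixture; in the sketch below, the weights and Hurst indices are illustrative choices:

```python
sig = [0.6, 0.8]      # sigma_k with sum of squares equal to 1
hurst = [0.6, 0.75]   # Hurst indices H_k, so H_min = 0.6

def cov(t, s):
    # covariance of the multi-mixed fBm: sum of independent fBm covariances
    return sum(sk ** 2 * 0.5 * (t ** (2 * hk) + s ** (2 * hk) - abs(t - s) ** (2 * hk))
               for sk, hk in zip(sig, hurst))

def variogram(t, s):
    return cov(t, t) + cov(s, s) - 2.0 * cov(t, s)

def decomposed(t, s):
    # sigma_{k_min}^2 |t-s|^{2 H_min} plus the higher-order part g(t,s)
    return sum(sk ** 2 * abs(t - s) ** (2 * hk) for sk, hk in zip(sig, hurst))

pts = [(0.1, 0.3), (0.2, 0.9), (0.5, 0.55)]
dev = max(abs(variogram(t, s) - decomposed(t, s)) for t, s in pts)
```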

### Example 2

Let X be a centered stationary Gaussian process with covariance function r satisfying, for some $${\alpha }\in \left( \frac{1}{2},1\right)$$,

\begin{aligned} r(0) -r(t) = \sigma ^2|t|^{2{\alpha }} + g(t), \end{aligned}

where $$\frac{g(t)}{|t|^{2{\alpha }}} \rightarrow 0$$ as $$t\rightarrow 0$$. Theorem 2.1 is applicable with $$H=\alpha$$ and variance function $$V(s) = V(0)$$. This example covers many interesting stationary Gaussian processes, including fractional Ornstein–Uhlenbeck and related processes (see, e.g., [10, 11]).

### Example 3

The normalized sub-fractional Brownian motion $$S^{{{\tilde{H}}}}$$ with index $${{\tilde{H}}}\in (0,1)$$ (see [4]) is a centered Gaussian process with covariance

\begin{aligned} R(t,s) = \sigma ^2\left( s^{2{{\tilde{H}}}} + t^{2{{\tilde{H}}}} - \frac{1}{2}\left( (s+t)^{2{{\tilde{H}}}} + (s-t)^{2{\tilde{H}}}\right) \right) , \end{aligned}

where $$\sigma ^2 = 1/(2-2^{2{{\tilde{H}}}-1})$$ is a normalizing constant. We have

\begin{aligned} c |t-s|^{2{{\tilde{H}}}} \le {\mathbb {E}}(S_t^{{{\tilde{H}}}}-S_s^{{{\tilde{H}}}})^2 \le C|t-s|^{2{{\tilde{H}}}}. \end{aligned}

Assume that $${{\tilde{H}}}>\frac{1}{2}$$. Now, Corollary 2.3 is applicable with $${H={\tilde{H}}}$$ and $$V(s) = s^{2{{\tilde{H}}}}$$.

### Example 4

The bifractional Brownian motion (see [9, 13]) $$B^{{{\tilde{H}}},K}$$ with indices $${{\tilde{H}}}\in (0,1)$$ and $$K\in (0,1]$$ is the centered Gaussian process with covariance

\begin{aligned} R(t,s) = \frac{1}{2^K}\left( \left( t^{2{{\tilde{H}}}}+s^{2{\tilde{H}}}\right) ^K - |t-s|^{2{{\tilde{H}}}K}\right) . \end{aligned}

Similarly to the case of sub-fractional Brownian motion, we have

\begin{aligned} 2^{-K} |t-s|^{2{{\tilde{H}}}K} \le {\mathbb {E}}(B_t^{{{\tilde{H}}},K}-B_s^{{\tilde{H}},K})^2 \le 2^{1-K}|t-s|^{2{{\tilde{H}}}K}. \end{aligned}

Assume $${{\tilde{H}}}K>\frac{1}{2}$$. Now, Corollary 2.3 is applicable with $$H={\tilde{H}}K$$ and $$V(s) = s^{2{{\tilde{H}}}K}$$.
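The two-sided bound for the bifractional case can be checked on a grid for concrete indices; the values of $${{\tilde{H}}}$$ and K below are illustrative, chosen so that $${{\tilde{H}}}K>\frac{1}{2}$$:

```python
Ht, K = 0.6, 0.9  # tilde-H and K with Ht * K = 0.54 > 1/2

def cov(t, s):
    # bifractional Brownian motion covariance
    return ((t ** (2 * Ht) + s ** (2 * Ht)) ** K - abs(t - s) ** (2 * Ht * K)) / 2 ** K

def incr_var(t, s):
    # E(B_t - B_s)^2 computed from the covariance
    return cov(t, t) + cov(s, s) - 2.0 * cov(t, s)

ok = all(
    2 ** (-K) * abs(t - s) ** (2 * Ht * K) - 1e-12
    <= incr_var(t, s)
    <= 2 ** (1 - K) * abs(t - s) ** (2 * Ht * K) + 1e-12
    for i in range(1, 21)
    for j in range(1, 21)
    if i != j
    for t, s in [(i / 20.0, j / 20.0)]
)
```

Note that $$R(t,t) = t^{2{{\tilde{H}}}K}$$, consistent with the choice $$V(s) = s^{2{{\tilde{H}}}K}$$.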

### Example 5

The tempered fractional Brownian motion (see [2]) $$X^{{{\tilde{H}}}}$$ with index $${{\tilde{H}}}\in (0,1)$$ is the centered Gaussian process with covariance

\begin{aligned} R(t,s) = \frac{1}{2}\left( C_t^2t^{2{{\tilde{H}}}}+C_s^2s^{2{\tilde{H}}}-C_{t-s}^2|t-s|^{2{{\tilde{H}}}}\right) \end{aligned}

with a certain function $$C_t$$ (see [2, Lemma 2.3]). Similarly to the case of sub-fractional and bifractional Brownian motion, we have (see [2, Theorem 2.7])

\begin{aligned} \sigma ^2_-|t-s|^{2{{\tilde{H}}}} \le {\mathbb {E}}(X_t^{{{\tilde{H}}}}-X_s^{{\tilde{H}}})^2 \le \sigma ^2_+|t-s|^{2{{\tilde{H}}}}. \end{aligned}

Assume $${{\tilde{H}}}>\frac{1}{2}$$. Now, Corollary 2.3 is applicable with $$H = {\tilde{H}}$$ and $$V(s) = C_s^2s^{2{{\tilde{H}}}}$$.

## 4 Proofs

In what follows, C denotes a generic constant that depends only on the variance function V(s), but may vary from line to line.

### 4.1 Auxiliary Lemmas on Gaussian Process X and Convex Function $$\Psi$$

The following is one of our key lemmas; it allows us to reduce our analysis to the simple case $$\Psi (x) =(x-a)^+$$.

### Lemma 4.1

Let $$\Psi$$ be convex and $$\psi = \Psi '_-$$ be its left-sided derivative. Then, for any $$x,y\in \mathbb {R}$$ we have

\begin{aligned} \begin{aligned} \Psi (x)- \Psi (y) - \psi (y)(x-y)&= \int _{\mathbb {R}}{\left[ |x-a|-|y-a|-sgn(y-a)(x-y)\right] }\mu (da) \\&= 2\int _{\mathbb {R}}{\left[ (x-a)^+-(y-a)^+-\textbf{1}_{y>a}(x-y)\right] }\mu (da)\\&\ge 0. \end{aligned} \end{aligned}

### Proof

Let I be an interval such that $$x,y \in I$$. Then, it is well known that we have the representations [12]

\begin{aligned} \Psi (x) = \alpha _I + \beta _Ix + \int _{I} |x-a|\mu ({\text {d}}a) \end{aligned}

and

\begin{aligned} \Psi '(x) = \beta _I + \int _I sgn(x-a)\mu ({\text {d}}a). \end{aligned}

Using these, $$|x-a|= 2(x-a)^+ -(x-a)$$, and $$sgn(y-a) = 2\textbf{1}_{y>a} - 1$$, we obtain that linear terms vanish and we get

\begin{aligned} \begin{aligned} \Psi (x)- \Psi (y) - \psi (y)(x-y)&= \int _I {\left[ |x-a|-|y-a|-sgn(y-a)(x-y)\right] }\mu ({\text {d}}a)\\&= 2\int _{I}{\left[ (x-a)^+-(y-a)^+-\textbf{1}_{y>a}(x-y)\right] }\mu ({\text {d}}a). \end{aligned} \end{aligned}

It is an easy exercise to check that $$(x-a)^+-(y-a)^+-\textbf{1}_{y>a}(x-y)\ge 0$$ from which it follows that $$\Psi (x)- \Psi (y) - \psi (y)(x-y) \ge 0$$ for any convex function $$\Psi$$. It remains to note that

\begin{aligned}{} & {} \int _{I}{\left[ (x-a)^+-(y-a)^+-\textbf{1}_{y>a}(x-y)\right] }\mu ({\text {d}}a)\\{} & {} \quad = \int _{\mathbb {R}}{\left[ (x-a)^+-(y-a)^+-\textbf{1}_{y>a}(x-y)\right] }\mu ({\text {d}}a), \end{aligned}

where the latter integral is well-defined since $$(x-a)^+-(y-a)^+-\textbf{1}_{y>a}(x-y) = 0$$ whenever $$a\notin I$$. $$\square$$
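The pointwise non-negativity used above can also be checked by brute force; a small numerical sketch over a grid (range and spacing are arbitrary choices):

```python
def bracket(x, y, a):
    # the integrand (x-a)^+ - (y-a)^+ - 1_{y>a}(x-y) of Lemma 4.1
    pos = lambda u: u if u > 0 else 0.0
    jump = (x - y) if y > a else 0.0
    return pos(x - a) - pos(y - a) - jump

grid = [i / 10.0 - 2.0 for i in range(41)]  # -2.0, -1.9, ..., 2.0
min_val = min(bracket(x, y, a) for x in grid for y in grid for a in grid)
```

The two cases $$y>a$$ and $$y\le a$$ reduce to $$(x-a)^+\ge x-a$$ and $$(x-a)^+\ge 0$$, respectively.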

As a consequence, we obtain the following integrability lemma.

### Lemma 4.2

Let $$\Psi$$ be a convex function with the associated measure $$\Psi '' = \mu$$ and let $$Y \sim N(0,1)$$. If $$\int _{\mathbb {R}}\varphi (a)\mu (da) < \infty$$, then $${\mathbb {E}}|\Psi (Y)| < \infty$$.

### Proof

By adding a linear function if necessary, we may assume without loss of generality that $$\Psi \ge 0$$. Now, from Lemma 4.1 we deduce that, for any deterministic z,

\begin{aligned} \Psi (Y) - \Psi (z) - \Psi '_-(z)(Y-z) = 2\int _{\mathbb {R}} \left[ (Y-a)^+ - (z-a)^+ - \textbf{1}_{z>a}(Y-z)\right] \mu ({\text {d}}a). \end{aligned}

Taking expectation and using Tonelli’s theorem, we get

\begin{aligned} {\mathbb {E}}\Psi (Y) - \Psi (z) + \Psi '_-(z)z = 2\int _{\mathbb {R}}\left[ {\mathbb {E}}(Y-a)^+ - (z-a)^+ + \textbf{1}_{z>a}z\right] \mu ({\text {d}}a). \end{aligned}

In particular, for $$z=0$$, we get

\begin{aligned} {\mathbb {E}}\Psi (Y) - \Psi (0) = 2\int _{\mathbb {R}}\left[ {\mathbb {E}}(Y-a)^+ -(-a)^+\right] \mu ({\text {d}}a). \end{aligned}

Hence, it suffices to prove

\begin{aligned} {\mathbb {E}}(Y-a)^+ -(-a)^+ \le C\varphi (a). \end{aligned}

However, this now follows by observing that

\begin{aligned} {\mathbb {E}}(Y-a)^+ -(-a)^+= \varphi (a) - a{\mathbb {P}}(Y>a)-(-a)^+ = \varphi (a)-|a|{\mathbb {P}}(Y>|a|) \end{aligned}

and the well-known asymptotic relation $$a{\mathbb {P}}(Y>a) \sim \varphi (a)$$ as $$a\rightarrow \infty$$. $$\square$$
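The closed-form identity in the last display can be checked numerically; in the sketch below (cutoff and step count are numerical choices), $$\Phi$$ denotes the standard normal distribution function:

```python
import math

def phi(a):
    # standard normal density
    return math.exp(-a * a / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    # standard normal distribution function via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_mean(a, upper=10.0, steps=100_000):
    # trapezoidal approximation of E(Y-a)^+ = int_a^upper (y-a) phi(y) dy
    h = (upper - a) / steps
    total = 0.0
    for i in range(steps + 1):
        y = a + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (y - a) * phi(y)
    return total * h

gaps = []
for a in (-1.3, -0.2, 0.0, 0.4, 1.7):
    lhs = call_mean(a) - max(-a, 0.0)                 # E(Y-a)^+ - (-a)^+
    rhs = phi(a) - abs(a) * (1.0 - Phi(abs(a)))       # phi(a) - |a| P(Y > |a|)
    gaps.append(abs(lhs - rhs))
```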

Next, we establish several lemmas related to the Gaussian process X.

### Lemma 4.3

We always have

\begin{aligned} \sqrt{V(t_k)} - \sqrt{V(t_{k-1})} \le \sqrt{\vartheta (t_k,t_{k-1})} \le Cn^{-H} \end{aligned}

and

\begin{aligned} \sup _{n\ge 1}\sup _{2\le k\le n} \frac{\sqrt{V(t_k)}}{\sqrt{V(t_{k-1})}} < \infty . \end{aligned}
(4.1)

### Proof

By Gaussianity, we have $${\mathbb {E}}|X_t| = \sqrt{\frac{2}{\pi }}\sqrt{V(t)}$$, from which the reverse triangle inequality gives

\begin{aligned} \sqrt{V(t_k)} - \sqrt{V(t_{k-1})} = \sqrt{\frac{\pi }{2}}\left( {\mathbb {E}}|X_{t_k}| - {\mathbb {E}}|X_{t_{k-1}}|\right) \le \sqrt{\frac{\pi }{2}}\,{\mathbb {E}}\left| X_{t_k}-X_{t_{k-1}}\right| = \sqrt{\vartheta (t_k,t_{k-1})} \end{aligned}

leading to the first claim. The second claim now follows from

\begin{aligned} \frac{\sqrt{V(t_k)}}{\sqrt{V(t_{k-1})}} \le 1 + \frac{\sqrt{\vartheta (t_k,t_{k-1})}}{\sqrt{V(t_{k-1})}} \end{aligned}

and the fact that $$V(t_{k-1}) \ge V(t_1) \ge cn^{-2H}$$, since $$k\ge 2$$ and V is non-decreasing. $$\square$$

Throughout, we use the following short notation

\begin{aligned} \gamma _k = \frac{R(t_k,t_{k-1})}{V(t_{k-1})}, \end{aligned}

where R(t, s) is the covariance function of X, and we use the convention $$\gamma _k=0$$ whenever $$V(t_{k-1})=0$$. The following gives us a useful relation.

### Lemma 4.4

Let $$V(t_{k-1})>0$$. Then,

\begin{aligned}\sqrt{V(t_k)} - \gamma _k\sqrt{V(t_{k-1})} = -\frac{\left( \sqrt{V(t_k)} - \sqrt{V(t_{k-1})}\right) ^2}{2\sqrt{V(t_{k-1})}} + \frac{\vartheta (t_k,t_{k-1})}{2\sqrt{V(t_{k-1})}}. \end{aligned}

### Proof

We use

\begin{aligned} \sqrt{V(t_k)} - \gamma _k\sqrt{V(t_{k-1})} = \sqrt{V(t_k)} - \sqrt{V(t_{k-1})} + \left[ 1-\gamma _k\right] \sqrt{V(t_{k-1})} \end{aligned}

and

\begin{aligned} \gamma _k - 1 = \frac{V(t_k)-V(t_{k-1}) - \vartheta (t_k,t_{k-1})}{2V(t_{k-1})}. \end{aligned}

Using also

\begin{aligned} \begin{aligned} V(t_k)-V(t_{k-1})&= \left( \sqrt{V(t_k)} - \sqrt{V(t_{k-1})}\right) \left( \sqrt{V(t_k)} + \sqrt{V(t_{k-1})}\right) \\&= \left( \sqrt{V(t_k)} - \sqrt{V(t_{k-1})}\right) ^2 + 2\left( \sqrt{V(t_k)} - \sqrt{V(t_{k-1})}\right) \sqrt{V(t_{k-1})} \end{aligned} \end{aligned}

we obtain

\begin{aligned} \begin{aligned} \left[ 1-\gamma _k\right] \sqrt{V(t_{k-1})}&= -\frac{\left( \sqrt{V(t_k)} - \sqrt{V(t_{k-1})}\right) ^2}{2\sqrt{V(t_{k-1})}} - \left( \sqrt{V(t_k)} - \sqrt{V(t_{k-1})}\right) \\&\quad + \frac{\vartheta (t_k,t_{k-1})}{2\sqrt{V(t_{k-1})}}. \end{aligned} \end{aligned}

Consequently, we have

\begin{aligned} \sqrt{V(t_k)} - \gamma _k\sqrt{V(t_{k-1})} = -\frac{\left( \sqrt{V(t_k)} - \sqrt{V(t_{k-1})}\right) ^2}{2\sqrt{V(t_{k-1})}} + \frac{\vartheta (t_k,t_{k-1})}{2\sqrt{V(t_{k-1})}}, \end{aligned}

completing the proof. $$\square$$
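The identity is elementary but easy to misread; it can be sanity-checked numerically. In the sketch below, the variance and variogram values are arbitrary admissible numbers, and we use $$R(t_k,t_{k-1}) = \frac{1}{2}\left( V(t_k)+V(t_{k-1})-\vartheta (t_k,t_{k-1})\right)$$:

```python
import math

def identity_gap(v_prev, v_next, theta):
    # gamma_k = R(t_k, t_{k-1}) / V(t_{k-1}) with R = (V(t_k) + V(t_{k-1}) - theta) / 2
    R = 0.5 * (v_next + v_prev - theta)
    gamma = R / v_prev
    lhs = math.sqrt(v_next) - gamma * math.sqrt(v_prev)
    d = math.sqrt(v_next) - math.sqrt(v_prev)
    rhs = (-d * d + theta) / (2.0 * math.sqrt(v_prev))
    return lhs - rhs

cases = [(0.25, 0.3, 0.01), (0.5, 0.9, 0.2), (1.0, 1.0, 0.05)]
worst = max(abs(identity_gap(*c)) for c in cases)
```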

### 4.2 Approximation Estimates

We begin with the following elementary lemma on the approximation of Riemann–Stieltjes integrals. For the reader’s convenience, we present the proof.

### Lemma 4.5

Let f be a differentiable function on [0, 1], and let g be non-decreasing on [0, 1]. Then,

\begin{aligned} \begin{aligned}&\left| \int _0^1 f(V(s))dg(s)-\sum _{k=1}^n f(V(t_{k-1}))(g(t_k)-g(t_{k-1}))\right| \\&\le \max _{1\le k\le n}(g(t_k)-g(t_{k-1}))\int _0^1 |f'(s)|ds. \end{aligned} \end{aligned}

### Proof

Without loss of generality, we can assume $$\int _0^1 |f'(s)|{\text {d}}s < \infty$$ since otherwise there is nothing to prove. From this, it follows that f is of bounded variation, since for a differentiable function we have

\begin{aligned} {\text {TV}}(f) = \int _0^1 |f'(s)|{\text {d}}s, \end{aligned}

where TV stands for total variation. Since V is continuous and non-decreasing, this further implies that $$s\rightarrow f(V(s))$$ is continuous and of bounded variation as well, with

\begin{aligned} {\text {TV}}(f(V)) \le \int _0^1 |f'(s)|{\text {d}}s. \end{aligned}

Indeed, this follows from the fact that

\begin{aligned} \begin{aligned} {\text {TV}}(f(V))&=\sup _{\{s_1,s_2,\ldots ,s_n\}} \sum _{k=1}^n |f(V(s_k))-f(V(s_{k-1}))| \\&\le \sup _{\{x_1,x_2,\ldots ,x_n\}}\sum _{k=1}^n |f(x_k)-f(x_{k-1})| = {\text {TV}}(f). \end{aligned} \end{aligned}

Thus, the Riemann–Stieltjes integral $$\int _0^1 f(V(s)){\text {d}}g(s)$$ exists, as $$s\rightarrow f(V(s))$$ is continuous and $$s \rightarrow g(s)$$ is non-decreasing, and hence, of bounded variation. Let us now prove the claimed upper bounds. We have

\begin{aligned} \begin{aligned}&\left| \int _0^1 f(V(s))\,{\text {d}}g(s)-\sum _{k=1}^n f(V(t_{k-1}))(g(t_k)-g(t_{k-1}))\right| \\&= \left| \sum _{k=1}^n \left[ f(V(s_k^*))-f(V(t_{k-1}))\right] (g(t_k)-g(t_{k-1}))\right| \\&\le \max _{1\le k\le n}(g(t_k)-g(t_{k-1}))\,{\text {TV}}(f(V)) \le \max _{1\le k\le n}(g(t_k)-g(t_{k-1}))\int _0^1 |f'(s)|\,{\text {d}}s, \end{aligned} \end{aligned}

where $$s_k^*\in [t_{k-1},t_k]$$ and we have also applied the mean value theorem for Riemann–Stieltjes integrals on each subinterval. This verifies the claimed upper bound and thus completes the proof. $$\square$$

We apply this result to the function $$f(x) = \frac{1}{\sqrt{x}}e^{-\frac{a^2}{2x}}$$. The following lemma estimates the integral for this function in terms of the level a, when the level a is large enough.

### Lemma 4.6

Let $$|a|>1$$. Then, for $$f(x) = \frac{1}{\sqrt{x}}e^{-\frac{a^2}{2x}}$$ we have

\begin{aligned} \int _0^1 |f'(s)|ds \le C\varphi (a). \end{aligned}

### Proof

By straightforward computations, we get

\begin{aligned} f'(x) = \frac{1}{2}e^{-\frac{a^2}{2x}}x^{-\frac{5}{2}}(a^2-x) \end{aligned}

from which we get

\begin{aligned} |f'(x)| = \frac{1}{2}e^{-\frac{a^2}{2x}}x^{-\frac{5}{2}}(a^2-x) \end{aligned}

as $$x\in [0,1]$$ and $$|a|>1$$. Now,

\begin{aligned} \begin{aligned} \int _0^1 |f'(s)|{\text {d}}s&\le \int _0^1 \frac{1}{2}e^{-\frac{a^2}{2s}}s^{-\frac{5}{2}}a^2{\text {d}}s \\&= \frac{a^2}{2}\int _{\frac{a^2}{2}}^\infty e^{-z}\left( \frac{a^2}{2z}\right) ^{-\frac{5}{2}}\frac{a^2}{2z^2}{\text {d}}z \\&= \frac{\sqrt{2}}{a} \int _{\frac{a^2}{2}}^\infty e^{-z}\sqrt{z}{\text {d}}z. \end{aligned} \end{aligned}

By L’Hôpital’s rule, we obtain that

\begin{aligned} \lim _{a\rightarrow \infty }\frac{\int _{\frac{a^2}{2}}^\infty e^{-z}\sqrt{z}{\text {d}}z}{ae^{-\frac{a^2}{2}}} = \lim _{a\rightarrow \infty } \frac{e^{-\frac{a^2}{2}}\frac{a}{\sqrt{2}}\cdot a}{a^2e^{-\frac{a^2}{2}}-e^{-\frac{a^2}{2}}} = \frac{1}{\sqrt{2}}. \end{aligned}

It follows that

\begin{aligned} \int _0^1 |f'(s)|{\text {d}}s \le \frac{C}{a} \cdot a e^{-\frac{a^2}{2}} = C\varphi (a). \end{aligned}

This completes the proof. $$\square$$
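In fact, since $$f'\ge 0$$ on (0, 1] for $$|a|>1$$ and $$f(0+)=0$$, the integral equals $$f(1) = e^{-\frac{a^2}{2}} = \sqrt{2\pi }\,\varphi (a)$$, consistent with the bound. A numerical sketch (the cutoff near 0 and the step count are arbitrary choices):

```python
import math

A = 2.0  # a fixed level with |a| > 1

def fprime(x, a=A):
    # derivative of f(x) = x^{-1/2} * exp(-a^2 / (2x)); nonnegative for x <= a^2
    return 0.5 * math.exp(-a * a / (2.0 * x)) * x ** (-2.5) * (a * a - x)

# trapezoidal approximation of int_0^1 |f'(s)| ds on [eps, 1]
eps, steps = 1e-8, 200_000
h = (1.0 - eps) / steps
numeric = 0.0
for i in range(steps + 1):
    x = eps + i * h
    w = 0.5 if i in (0, steps) else 1.0
    numeric += w * abs(fprime(x)) * h

exact = math.exp(-A * A / 2.0)  # = f(1), since f increases from f(0+) = 0
```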

The following lemma is used to obtain boundedness in the region $$|a|\le 1$$.

### Lemma 4.7

Set $$f_a(x) = \frac{a^4}{x^2}e^{-\frac{a^2}{2x}}$$. Then,

\begin{aligned} \sup _{|a|\le 1}\sup _{0\le x\le 1} f_a(x) < \infty . \end{aligned}

### Proof

The claim follows directly by noting that $$f_a(x) = h\left( \frac{a^2}{x}\right)$$, where

\begin{aligned} h(z) = z^2e^{-\frac{z}{2}} \end{aligned}

is bounded for $$z\ge 0$$. $$\square$$
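Explicitly, $$h'(z) = ze^{-\frac{z}{2}}\left( 2-\frac{z}{2}\right)$$, so h attains its maximum $$h(4) = 16e^{-2}$$ at $$z=4$$. A quick numerical confirmation (grid range and spacing are arbitrary):

```python
import math

def h(z):
    # the bounding function of Lemma 4.7
    return z * z * math.exp(-z / 2.0)

grid_max = max(h(i / 100.0) for i in range(5001))  # z in [0, 50]
peak = h(4.0)  # = 16 * exp(-2)
```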

### Lemma 4.8

We have, for $$|a|\le 1$$,

\begin{aligned} \sum _{k=2}^n \frac{1}{\sqrt{V(t_{k-1})}}\left[ \varphi \left( \frac{a^2}{\sqrt{V(t_{k-1})}}\right) -\varphi \left( \frac{a^2}{\sqrt{V(t_{k})}}\right) \right] (t_k-t_{k-1}) \le C\varphi (a)n^{-H}. \end{aligned}

### Proof

By the mean value theorem and the fact that $$\varphi '(x) = -x\varphi (x)$$, we have, writing $$\Delta _k \sqrt{V(\cdot )} = \sqrt{V(t_k)}-\sqrt{V(t_{k-1})}$$ and denoting by $$\xi _k$$ an intermediate point between $$V(t_{k-1})$$ and $$V(t_k)$$,

\begin{aligned} \begin{aligned}&\frac{1}{\sqrt{V(t_{k-1})}}\left[ \varphi \left( \frac{a^2}{\sqrt{V(t_{k-1})}}\right) -\varphi \left( \frac{a^2}{\sqrt{V(t_{k})}}\right) \right] \\&\le \frac{1}{\sqrt{V(t_{k-1})}}\left( \frac{a^2}{\sqrt{V(t_k)}}-\frac{a^2}{\sqrt{V(t_{k-1})}}\right) \frac{a^2}{\sqrt{\xi _k}}\varphi \left( \frac{a^2}{\sqrt{\xi _k}}\right) \\&\le \frac{\xi _k^{\frac{3}{2}}}{\sqrt{V(t_{k})}V(t_{k-1})}\Delta _k \sqrt{V(\cdot )}\frac{a^4}{\xi _k^2}\varphi \left( \frac{a^2}{\sqrt{\xi _k}}\right) . \end{aligned} \end{aligned}

Here,

\begin{aligned} \sup _k \frac{\xi _k^{\frac{3}{2}}}{\sqrt{V(t_{k})}V(t_{k-1})} < \infty \end{aligned}

by Lemma 4.3, while

\begin{aligned} \sup _{k}\sup _{|a|\le 1}\frac{a^4}{\xi _k^2}\varphi \left( \frac{a^2}{\sqrt{\xi _k}}\right) < \infty \end{aligned}

by Lemma 4.7. The claim follows from $$\Delta _k\sqrt{V(\cdot )} \le Cn^{-H}$$. $$\square$$

### Lemma 4.9

We have

\begin{aligned} \begin{aligned}&\left| \sum _{k=2}^{n-1}\left[ V(t_{k-1})\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(t_k)}}\right) [t_k-t_{k-1}]-\int _0^1\left[ V(s)\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(s)}}\right) ds\right| \\&\le C\varphi (a)n^{H-1}. \end{aligned} \end{aligned}

### Proof

From monotonicity, we get

\begin{aligned} \begin{aligned} \int _{t_{k-1}}^{t_{k}} \left[ V(s)\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(s)}}\right) ds&\le \left[ V(t_{k-1})\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(t_k)}}\right) [t_k-t_{k-1}]\\&\le \int _{t_{k}}^{t_{k+1}} \left[ V(s)\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(s)}}\right) ds. \end{aligned} \end{aligned}

Summing over $$k=2,\ldots ,n-1$$ yields

\begin{aligned} \begin{aligned} \int _{t_{1}}^{t_{n-1}} \left[ V(s)\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(s)}}\right) {\text {d}}s&\le \sum _{k=2}^{n-1}\left[ V(t_{k-1})\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(t_k)}}\right) [t_k-t_{k-1}]\\&\le \int _{t_{2}}^{1} \left[ V(s)\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(s)}}\right) {\text {d}}s \end{aligned} \end{aligned}

from which we get

\begin{aligned} \begin{aligned}&\left| \sum _{k=2}^{n-1}\left[ V(t_{k-1})\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(t_k)}}\right) [t_k-t_{k-1}]-\int _0^1\left[ V(s)\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(s)}}\right) {\text {d}}s\right| \\&\le \int _0^{t_{1}} \left[ V(s)\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(s)}}\right) {\text {d}}s+ \int _{t_{n-1}}^1\left[ V(s)\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(s)}}\right) {\text {d}}s. \end{aligned} \end{aligned}

Here,

\begin{aligned} \begin{aligned}&\int _0^{t_{1}} \left[ V(s)\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(s)}}\right) {\text {d}}s + \int _{t_{n-1}}^1\left[ V(s)\right] ^{-\frac{1}{2}} \varphi \left( \frac{a^2}{\sqrt{V(s)}}\right) {\text {d}}s\\&\le \varphi (a)\int _0^{t_{1}} \left[ V(s)\right] ^{-\frac{1}{2}}{\text {d}}s + \varphi (a)\int _{t_{n-1}}^1 \left[ V(s)\right] ^{-\frac{1}{2}}{\text {d}}s. \end{aligned} \end{aligned}

Since $$V(s)\ge cs^{2H}$$, we get

\begin{aligned} \begin{aligned}&\int _0^{t_{1}} \left[ V(s)\right] ^{-\frac{1}{2}}{\text {d}}s + \int _{t_{n-1}}^1 \left[ V(s)\right] ^{-\frac{1}{2}}{\text {d}}s\\&\le C\int _0^{t_{1}} s^{-H}{\text {d}}s + \int _{t_{n-1}}^1 s^{-H}{\text {d}}s\\&=Cn^{H-1}+ 1-\left[ \frac{n-1}{n}\right] ^{1-H}\\&\le Cn^{H-1} + \left[ 1-\frac{n-1}{n}\right] ^{1-H}\\&= Cn^{H-1}. \end{aligned} \end{aligned}

This completes the proof. $$\square$$

### Lemma 4.10

We have

\begin{aligned} \begin{aligned}&\left| \sum _{k=2}^n \frac{1}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) (t_k-t_{k-1}) - \int _0^1 \frac{1}{2\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) ds\right| \\&\le C\varphi (a)n^{H-1}. \end{aligned} \end{aligned}

### Proof

We separate the cases $$|a|>1$$ and $$|a|\le 1$$, and consider first $$|a|>1$$. Then, using the convention $$\frac{1}{x}\varphi \left( \frac{a}{x}\right) =0$$ for $$x=0$$, we have

\begin{aligned} \begin{aligned}&\sum _{k=2}^n \frac{1}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) (t_k-t_{k-1})\\&= \sum _{k=1}^n \frac{1}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) (t_k-t_{k-1}). \end{aligned} \end{aligned}

Now, Lemmas 4.5 and 4.6 apply, and we get, with $$f(x) = \frac{1}{\sqrt{x}}e^{-\frac{a^2}{2x}}$$, that

\begin{aligned} \begin{aligned}&\left| \sum _{k=2}^n \frac{1}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) (t_k-t_{k-1}) - \int _0^1 \frac{1}{2\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) {\text {d}}s\right| \\&\le \frac{\int _0^1\left| f'(s)\right| {\text {d}}s}{n} \le \frac{C\varphi (a)}{n} \le C\varphi (a)n^{H-1}. \end{aligned} \end{aligned}

This proves the claim when $$|a|>1$$. For $$|a|\le 1$$, we write

\begin{aligned} \begin{aligned}&\sum _{k=2}^n \frac{1}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) (t_k-t_{k-1}) - \int _0^1 \frac{1}{2\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) {\text {d}}s \\&= \sum _{k=2}^n \frac{1}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k})}}\right) (t_k-t_{k-1})-\int _0^1 \frac{1}{2\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) {\text {d}}s\\&\quad +\sum _{k=2}^n \frac{1}{2\sqrt{V(t_{k-1})}} \left[ \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) -\varphi \left( \frac{a}{\sqrt{V(t_{k})}}\right) \right] (t_k-t_{k-1}). \end{aligned} \end{aligned}

The second term can be bounded by Lemma 4.8, and we have

\begin{aligned} \begin{aligned}&\sum _{k=2}^n \frac{1}{2\sqrt{V(t_{k-1})}} \left[ \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) -\varphi \left( \frac{a}{\sqrt{V(t_{k})}}\right) \right] (t_k-t_{k-1}) \le Cn^{-H}\\&\le C\varphi (a)n^{H-1} \end{aligned} \end{aligned}

since for $$|a|\le 1$$ we have $$\varphi (a)\ge \varphi (1)>0$$. For the first term, we have by Lemma 4.9 that

\begin{aligned} \begin{aligned}&\left| \sum _{k=2}^{n-1} \frac{1}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k})}}\right) (t_k-t_{k-1})-\int _0^1 \frac{1}{2\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) {\text {d}}s\right| \\&\le C\varphi (a)n^{H-1} \end{aligned} \end{aligned}

yielding

\begin{aligned} \begin{aligned}&\left| \sum _{k=2}^{n} \frac{1}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k})}}\right) (t_k-t_{k-1})-\int _0^1 \frac{1}{2\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) {\text {d}}s\right| \\&\le C\varphi (a)n^{H-1} + \frac{1}{2\sqrt{V(t_{n-1})}} \varphi \left( \frac{a}{\sqrt{V(1)}}\right) n^{-1}\\&\le C\varphi (a)n^{H-1}. \end{aligned} \end{aligned}

This proves the case $$|a|\le 1$$ and completes the whole proof. $$\square$$

### 4.3 Proof of Theorem 2.1 and Corollary 2.2

We begin by considering the simple case $$\Psi (x) = (x-a)^+$$, that is, $$\Psi '(x) = \textbf{1}_{x>a}$$.

### Proposition 4.11

Let $$a\in \mathbb {R}$$ be fixed. Then,

\begin{aligned} \begin{aligned}&{\mathbb {E}}\left| \int _0^1 \textbf{1}_{X_s>a}dX_s - \sum _{k=1}^n \textbf{1}_{X_{t_{k-1}}>a}(X_{t_k}-X_{t_{k-1}})\right| \\&= \frac{\sigma ^2}{2} \int _0^1 \frac{1}{\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) ds \left( \frac{1}{n}\right) ^{2H-1} + R_n(a), \end{aligned} \end{aligned}

where the remainder satisfies

\begin{aligned} R_n(a) \le C\varphi (a)\max \{n^{-H},n^{1-2H}\max _{1\le k\le n} [g(t_k,t_{k-1})n^{2H}]\}. \end{aligned}

### Proof

By (1.4), we have

\begin{aligned} \int _0^1 \textbf{1}_{X_s>a}\,dX_s = (X_1-a)^+-(X_0-a)^+. \end{aligned}

Writing

\begin{aligned} (X_1-a)^+-(X_0-a)^+ = \sum _{k=1}^n \left[ (X_{t_k}-a)^+ - (X_{t_{k-1}}-a)^+\right] , \end{aligned}

we get

\begin{aligned} \begin{aligned}&(X_1-a)^+-(X_0-a)^+ - \sum _{k=1}^n \textbf{1}_{X_{t_{k-1}}>a}(X_{t_k}-X_{t_{k-1}}) \\&= \sum _{k=1}^n \left[ (X_{t_k}-a)^+ - (X_{t_{k-1}}-a)^+ - \textbf{1}_{X_{t_{k-1}}>a}(X_{t_k}-X_{t_{k-1}})\right] \\&\ge 0, \end{aligned} \end{aligned}

where the last inequality follows from Lemma 4.1. From $$(x-a)^+ = x\textbf{1}_{x>a} - a \textbf{1}_{x>a}$$, we obtain for one interval increment

\begin{aligned} \begin{aligned}&(X_{t_k}-a)^+ - (X_{t_{k-1}}-a)^+ - \textbf{1}_{X_{t_{k-1}}>a}(X_{t_k}-X_{t_{k-1}})\\&= X_{t_k}\textbf{1}_{X_{t_k}>a} - X_{t_k}\textbf{1}_{X_{t_{k-1}}>a} -a \textbf{1}_{X_{t_k}> a} + a\textbf{1}_{X_{t_{k-1}}>a}. \end{aligned} \end{aligned}

If $$V(t_{k-1})>0$$, using representation

\begin{aligned} X_{t_k} = \frac{R(t_k,t_{k-1})}{V(t_{k-1})}X_{t_{k-1}} + bY, \end{aligned}

where $$Y\sim N(0,1)$$ is independent of $$X_{t_{k-1}}$$, R is the covariance of X, and b is such that $${\mathbb {E}}X_{t_k}^2 = V(t_k)$$, we get

\begin{aligned} {\mathbb {E}}\left( X_{t_k} \textbf{1}_{X_{t_{k-1}}>a}\right) = \frac{R(t_k,t_{k-1})}{V(t_{k-1})}{\mathbb {E}}\left( X_{t_{k-1}}\textbf{1}_{X_{t_{k-1}}>a}\right) = \gamma _k \sqrt{V(t_{k-1})} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) . \end{aligned}

After rearranging the terms, this leads to

\begin{aligned} \begin{aligned}&{\mathbb {E}}\left[ (X_{t_k}-a)^+ - (X_{t_{k-1}}-a)^+ - \textbf{1}_{X_{t_{k-1}}>a}(X_{t_k}-X_{t_{k-1}})\right] \\&= \sqrt{V(t_k)}\varphi \left( \frac{a}{\sqrt{V(t_k)}}\right) - \gamma _k\sqrt{V(t_{k-1})}\varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) \\&\quad + a {\mathbb {P}}\left( Y>\frac{a}{\sqrt{V(t_{k-1})}}\right) - a {\mathbb {P}}\left( Y>\frac{a}{\sqrt{V(t_{k})}}\right) \\&= \left[ \sqrt{V(t_k)} - \gamma _k\sqrt{V(t_{k-1})}\right] \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) \\&\quad + \sqrt{V(t_{k})}\left[ \varphi \left( \frac{a}{\sqrt{V(t_k)}}\right) - \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) \right] \\&\quad + a {\mathbb {P}}\left( Y>\frac{a}{\sqrt{V(t_{k-1})}}\right) - a {\mathbb {P}}\left( Y>\frac{a}{\sqrt{V(t_{k})}}\right) . \end{aligned} \end{aligned}

Note also that this remains valid in the case when $$V(t_{k-1})=0$$, provided we use the convention $${\mathbb {P}}(Y>\infty ) = 0$$, $${\mathbb {P}}(Y>-\infty )=1$$, $$\varphi (\pm \infty )=0$$, and

\begin{aligned} \gamma _k \sqrt{V(t_{k-1})} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) = 0. \end{aligned}
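The conditional-expectation computations above reduce to the one-dimensional Gaussian identity $${\mathbb {E}}[Z\textbf{1}_{Z>a}] = \sqrt{v}\,\varphi (a/\sqrt{v})$$ for $$Z\sim N(0,v)$$. This can be cross-checked numerically; a minimal sketch (function names and parameters are ours):

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def truncated_mean(v, a, grid=200_000, width=12.0):
    """Trapezoidal approximation of E[Z 1_{Z > a}] = int_a^inf x phi_v(x) dx
    for Z ~ N(0, v), truncating the upper tail at a + width * sqrt(v)."""
    s = math.sqrt(v)
    lo, hi = a, a + width * s
    h = (hi - lo) / grid
    total = 0.0
    for i in range(grid + 1):
        x = lo + i * h
        w = 0.5 if i in (0, grid) else 1.0
        total += w * x * phi(x / s) / s   # phi_v(x) = phi(x / sqrt(v)) / sqrt(v)
    return total * h

v, a = 0.4, 0.7
closed_form = math.sqrt(v) * phi(a / math.sqrt(v))
assert abs(truncated_mean(v, a) - closed_form) < 1e-6
```

With $$Z = X_{t_{k-1}}$$ and $$v = V(t_{k-1})$$, this is exactly the factor $$\sqrt{V(t_{k-1})}\varphi (a/\sqrt{V(t_{k-1})})$$ appearing above.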

Summing over $$k$$, taking expectations, and using the fact that each summand is non-negative (so that the expected absolute value equals the sum of the expectations), we have obtained

\begin{aligned} {\mathbb {E}}\left| (X_1-a)^+-(X_0-a)^+ - \sum _{k=1}^n \textbf{1}_{X_{t_{k-1}}>a}(X_{t_k}-X_{t_{k-1}})\right| = I_{0,n} +I_{1,n} + I_{2,n}+I_{3,n}, \end{aligned}

where

\begin{aligned} I_{0,n}&= \left[ \sqrt{V(t_1)} - \gamma _1\sqrt{V(0)}\right] \varphi \left( \frac{a}{\sqrt{V(0)}}\right) , \\ I_{1,n}&= \sum _{k=2}^n\left[ \sqrt{V(t_k)} - \gamma _k\sqrt{V(t_{k-1})}\right] \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) , \\ I_{2,n}&= \sum _{k=1}^n\sqrt{V(t_{k})}\left[ \varphi \left( \frac{a}{\sqrt{V(t_k)}}\right) - \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) \right] , \end{aligned}

and

\begin{aligned} I_{3,n} = \sum _{k=1}^n\left[ a {\mathbb {P}}\left( Y>\frac{a}{\sqrt{V(t_{k-1})}}\right) - a {\mathbb {P}}\left( Y>\frac{a}{\sqrt{V(t_{k})}}\right) \right] . \end{aligned}

For $$I_{0,n}$$, we have

\begin{aligned} |I_{0,n}| \le \varphi (a)\left| \sqrt{V(t_1)} - \gamma _1\sqrt{V(0)}\right| . \end{aligned}

Here, $$\gamma _1 = 0$$ if $$V(0)=0$$, which leads to $$|I_{0,n}| \le C\varphi (a) n^{-H}$$ since $$\sqrt{V(t_1)}\le Cn^{-H}$$, while for $$V(0)>0$$ we can use Lemmas 4.4 and 4.3 to obtain

\begin{aligned} \begin{aligned}&\left| \sqrt{V(t_1)} - \gamma _1\sqrt{V(0)}\right| \\&\le \frac{\left( \sqrt{V(t_1)} - \sqrt{V(0)}\right) ^2}{2\sqrt{V(0)}} + \frac{\vartheta (t_1,t_0)}{2V(0)} \\&\le Cn^{-2H} \end{aligned} \end{aligned}

leading to $$|I_{0,n}|\le C\varphi (a)n^{-H}$$ as well. Consider next the terms $$I_{2,n}$$ and $$I_{3,n}$$. Trivially

\begin{aligned} I_{3,n} = a {\mathbb {P}}\left( Y>\frac{a}{\sqrt{V(0)}}\right) - a {\mathbb {P}}\left( Y>\frac{a}{\sqrt{V(1)}}\right) , \end{aligned}

while for $$I_{2,n}$$, applying Lemma 4.5 on each subinterval gives

\begin{aligned} I_{2,n} = \int _0^1 \sqrt{V(s)}d\varphi \left( \frac{a}{\sqrt{V(s)}}\right) + R_{2,n}, \end{aligned}

where the remainder satisfies, since $$\varphi \left( \frac{a}{\sqrt{V(s)}}\right)$$ is increasing in s,

\begin{aligned} R_{2,n} \le \max _{1\le k\le n} \Delta _k \sqrt{V(\cdot )}\varphi (a) \le C\varphi (a) n^{-H}. \end{aligned}

Note that here, by using the fact that $$\varphi '(x)=-x\varphi (x)$$ and that $$\varphi$$ is the standard normal density,

\begin{aligned}&\int _0^1 \sqrt{V(s)}d\varphi \left( \frac{a}{\sqrt{V(s)}}\right) \\&=-\int _0^1 \sqrt{V(s)}\varphi \left( \frac{a}{\sqrt{V(s)}}\right) \left( \frac{a}{\sqrt{V(s)}}\right) a d\left[ \left( V(s)\right) ^{-\frac{1}{2}}\right] \\&=-a^2\int _0^1 \varphi \left( \frac{a}{\sqrt{V(s)}}\right) d\left[ \left( V(s)\right) ^{-\frac{1}{2}}\right] \\&=a^2 \int _{\frac{1}{\sqrt{V(1)}}}^{\frac{1}{\sqrt{V(0)}}} \varphi (az){\text {d}}z \\&=a \int _{\frac{a}{\sqrt{V(1)}}}^{\frac{a}{\sqrt{V(0)}}} \varphi (v){\text {d}}v\\&=-a {\mathbb {P}}\left( Y>\frac{a}{\sqrt{V(0)}}\right) + a {\mathbb {P}}\left( Y>\frac{a}{\sqrt{V(1)}}\right) . \end{aligned}

Consequently, we have

\begin{aligned} I_{2,n}+I_{3,n}&= R_{2,n} \le C\varphi (a)n^{-H}. \end{aligned}
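The substitution chain above can be cross-checked numerically by comparing a discretized Stieltjes sum with the closed form. Here is a small sketch of ours, using the toy variance function $$V(s) = (1+s)/2$$ (our choice, so that $$V(0)>0$$ and the integrand is well behaved):

```python
import math

phi = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
tail = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))   # P(Y > x) for Y ~ N(0, 1)

V = lambda s: 0.5 * (1.0 + s)   # toy variance function: V(0) = 1/2, V(1) = 1
a, N = 0.8, 100_000

# left-point Stieltjes sum for  int_0^1 sqrt(V(s)) d phi(a / sqrt(V(s)))
stieltjes = 0.0
for k in range(1, N + 1):
    s0, s1 = (k - 1) / N, k / N
    stieltjes += math.sqrt(V(s0)) * (phi(a / math.sqrt(V(s1))) - phi(a / math.sqrt(V(s0))))

# closed form obtained from the substitution chain
closed = -a * tail(a / math.sqrt(V(0))) + a * tail(a / math.sqrt(V(1)))
assert abs(stieltjes - closed) < 1e-4
```

The agreement of the two quantities is exactly the cancellation that makes $$I_{2,n}+I_{3,n}$$ reduce to the remainder $$R_{2,n}$$.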

It remains to bound the term $$I_{1,n}$$. Using Lemma 4.4 allows us to write $$I_{1,n} = I_{1,A,n} + I_{1,B,n}$$, where

\begin{aligned} I_{1,A,n} = -\sum _{k=2}^n \frac{\left( \sqrt{V(t_k)} - \sqrt{V(t_{k-1})}\right) ^2}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) \end{aligned}

and

\begin{aligned} I_{1,B,n} = \sum _{k=2}^n \frac{\vartheta (t_k,t_{k-1})}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) . \end{aligned}

For $$I_{1,A,n}$$, we estimate

\begin{aligned} |I_{1,A,n}|&\le \varphi (a)\max _{1\le k\le n}\Delta _k \sqrt{V(\cdot )}\sum _{k=2}^n \frac{\sqrt{V(t_k)} - \sqrt{V(t_{k-1})}}{2\sqrt{V(t_{k-1})}}\\&\le C\varphi (a) n^{-H} \sum _{k=2}^n \frac{\sqrt{V(t_k)}}{\sqrt{V(t_{k-1})}}\frac{1}{\sqrt{V(t_{k})}}\left( \sqrt{V(t_k)} - \sqrt{V(t_{k-1})}\right) \\&\le C\varphi (a)n^{-H}\sum _{k=1}^n \frac{1}{\sqrt{V(t_{k})}}\left( \sqrt{V(t_k)} - \sqrt{V(t_{k-1})}\right) \\&= C\varphi (a)n^{-H} \sum _{k=1}^n \int _{t_{k-1}}^{t_k}\frac{1}{\sqrt{V(t_{k})}} d\sqrt{V(s)}\\&\le C\varphi (a)n^{-H} \int _0^1 \frac{1}{\sqrt{V(s)}}d\sqrt{V(s)} \\&= C\varphi (a)n^{-H} \int _0^1 {\text {d}}V(s)\\&=C\varphi (a)n^{-H}. \end{aligned}

Here, we have used the facts that $$\frac{1}{\sqrt{V(t_{k})}} \le \frac{1}{\sqrt{V(s)}}$$ for $$t_{k-1}\le s\le t_k$$ as V is non-decreasing, and that $$d\sqrt{V(s)} = \frac{1}{2\sqrt{V(s)}}{\text {d}}V(s)$$ giving us

\begin{aligned} \int _0^1 \frac{1}{\sqrt{V(s)}}d\sqrt{V(s)} = \int _0^1 {\text {d}}V(s) = V(1)-V(0). \end{aligned}

It remains to study the term $$I_{1,B,n}$$. For this, we obtain

\begin{aligned} I_{1,B,n}&= \sum _{k=2}^n \frac{\vartheta (t_k,t_{k-1})}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) \\&= \sigma ^2 n^{1-2H}\sum _{k=2}^n \frac{1}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) (t_k-t_{k-1}) \\&\quad + n^{1-2H}\sum _{k=2}^n \frac{g(t_k,t_{k-1})n^{2H}}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) (t_k-t_{k-1}). \end{aligned}

Here, the first term satisfies, by Lemma 4.10,

\begin{aligned} \begin{aligned}&\sigma ^2 n^{1-2H}\sum _{k=2}^n \frac{1}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) (t_k-t_{k-1}) \\&= \sigma ^2 n^{1-2H}\int _0^1 \frac{1}{2\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) {\text {d}}s + n^{1-2H}R'_{2,B,n}, \end{aligned} \end{aligned}

where

\begin{aligned} n^{1-2H}R'_{2,B,n} \le C\varphi (a)n^{H-1}\cdot n^{1-2H} = C\varphi (a)n^{-H}. \end{aligned}

The second term in turn satisfies, again by Lemma 4.10,

\begin{aligned} \begin{aligned}&n^{1-2H}\sum _{k=2}^n \frac{g(t_k,t_{k-1})n^{2H}}{2\sqrt{V(t_{k-1})}} \varphi \left( \frac{a}{\sqrt{V(t_{k-1})}}\right) (t_k-t_{k-1}) \\&\le n^{1-2H}\max _{1\le k\le n} [g(t_k,t_{k-1})n^{2H}] \left[ \int _0^1 \frac{1}{2\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) ds + R''_{2,B,n}\right] \\&\le Cn^{1-2H}\max _{1\le k\le n} [g(t_k,t_{k-1})n^{2H}] \varphi (a). \end{aligned} \end{aligned}

Collecting all the estimates completes the proof. $$\square$$

### Remark 4

We note that by the above proof, we actually obtain

\begin{aligned} {\mathbb {E}}\left| \int _0^1 \textbf{1}_{X_s>a}{\text {d}}X_s - \sum _{k=1}^n \textbf{1}_{X_{t_{k-1}}>a}(X_{t_k}-X_{t_{k-1}})\right| \le C\varphi (a)n^{1-2H} \end{aligned}

whenever we have only the upper bound $${\mathbb {E}}(X_t-X_s)^2 \le C|t-s|^{2H}$$ instead of (1.2). Indeed, the leading-order term arises from $$I_{1,B,n}$$ with a constant given by

\begin{aligned} C(a) = \int _0^1 \frac{1}{\sqrt{V(s)}}\varphi \left( \frac{a}{\sqrt{V(s)}}\right) {\text {d}}s \le \varphi (a) \int _0^1 \frac{1}{\sqrt{V(s)}}{\text {d}}s. \end{aligned}
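For orientation, the constant C(a) is easy to evaluate numerically; e.g., for the fractional Brownian motion variance $$V(s) = s^{2H}$$ (our choice of example), a midpoint-rule sketch:

```python
import math

def phi(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def C_of_a(a, H, N=100_000):
    """Midpoint approximation of C(a) = int_0^1 V(s)^{-1/2} phi(a / sqrt(V(s))) ds
    with V(s) = s^{2H}; the integrand vanishes rapidly as s -> 0 when a != 0."""
    total = 0.0
    for k in range(N):
        s = (k + 0.5) / N
        r = s ** (-H)            # = 1 / sqrt(V(s))
        total += r * phi(a * r)
    return total / N

a, H = 1.0, 0.75
val = C_of_a(a, H)
# consistency with the bound above: C(a) <= phi(a) * int_0^1 s^{-H} ds = phi(a) / (1 - H)
assert 0.0 < val < phi(a) / (1.0 - H)
```

Note that the integral converges near $$s=0$$ both because $$H<1$$ and because the density factor decays super-polynomially there.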

With the help of Proposition 4.11, we are now ready to prove our main results.

### Proof of Theorem 2.1

Using Lemma 4.1 and (1.4), we have

\begin{aligned} \begin{aligned}&\Psi (X_1)-\Psi (X_0) - \sum _{k=1}^n \Psi '(X_{t_{k-1}})(X_{t_k}-X_{t_{k-1}}) \\&= \sum _{k=1}^n \left[ \Psi (X_{t_k})-\Psi (X_{t_{k-1}})-\Psi '(X_{t_{k-1}})(X_{t_k}-X_{t_{k-1}})\right] \\&=\int _{\mathbb {R}} 2Z_n^+(a)\mu ({\text {d}}a), \end{aligned} \end{aligned}

where (see the proof of Proposition 4.11)

\begin{aligned} \begin{aligned} Z_n^+(a)&= \sum _{k=1}^n \left[ (X_{t_k}-a)^+-(X_{t_{k-1}}-a)^+-\textbf{1}_{X_{t_{k-1}}>a}(X_{t_k}-X_{t_{k-1}})\right] \\&=\int _0^1 \textbf{1}_{X_s>a}{\text {d}}X_s - \sum _{k=1}^n \textbf{1}_{X_{t_{k-1}}>a}(X_{t_k}-X_{t_{k-1}})\\&\ge 0. \end{aligned} \end{aligned}

Taking expectation and using Proposition 4.11 to compute $${\mathbb {E}}Z_n^+(a)$$, we get

\begin{aligned} \begin{aligned}&{\mathbb {E}}\left| \Psi (X_1)-\Psi (X_0) - \sum _{k=1}^n \Psi '(X_{t_{k-1}})(X_{t_k}-X_{t_{k-1}})\right| \\&= 2\int _{\mathbb {R}} {\mathbb {E}}Z_n^+(a)\mu ({\text {d}}a)\\&=\sigma ^2\int _{\mathbb {R}}\int _0^1 \frac{1}{\sqrt{V(s)}} \varphi \left( \frac{a}{\sqrt{V(s)}}\right) {\text {d}}s\mu ({\text {d}}a) \left( \frac{1}{n}\right) ^{2H-1} + \int _{\mathbb {R}}R_n(a)\mu ({\text {d}}a). \end{aligned} \end{aligned}

Here, the remainder $$R_n(a)$$ is the remainder from Proposition 4.11 and hence satisfies

\begin{aligned} R_n(a)\le C\varphi (a)\max \left\{ n^{-H},n^{1-2H}\max _{1\le k\le n} \left[ g(t_k,t_{k-1})n^{2H}\right] \right\} \end{aligned}

which is integrable with respect to $$\mu$$ since $$\int _{\mathbb {R}}\varphi (a)\mu ({\text {d}}a) < \infty$$ by assumption. Similarly, the leading-order term is finite by the fact that

\begin{aligned} \int _0^1 \frac{1}{\sqrt{V(s)}}\varphi \left( \frac{a}{\sqrt{V(s)}}\right) {\text {d}}s \le \varphi (a) \int _0^1 \frac{1}{\sqrt{V(s)}}{\text {d}}s \le C\varphi (a). \end{aligned}

This yields the claim. $$\square$$
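The rate in Theorem 2.1 can be illustrated by a rough Monte Carlo experiment of our own (not from the article): take $$\Psi (x) = (x-a)^+$$, so that $$\Psi '(x) = \textbf{1}_{x>a}$$, and sample fractional Brownian motion via Cholesky factorization of its covariance. The mean error should shrink roughly like $$n^{1-2H}$$:

```python
import numpy as np

def fbm_paths(n, H, n_paths, rng):
    """Sample fBm at t_k = k/n via Cholesky of the covariance
    R(s, t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2; X_0 = 0 is prepended."""
    t = np.arange(1, n + 1) / n
    s, u = np.meshgrid(t, t)
    R = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(R + 1e-12 * np.eye(n))   # tiny jitter for stability
    X = rng.standard_normal((n_paths, n)) @ L.T
    return np.hstack([np.zeros((n_paths, 1)), X])

def mean_error(n, H, a, n_paths, rng):
    """Monte Carlo estimate of E|Psi(X_1) - Psi(X_0) - Riemann sum| for
    Psi(x) = (x - a)^+; the quantity inside |.| is non-negative pathwise."""
    X = fbm_paths(n, H, n_paths, rng)
    inc = np.diff(X, axis=1)
    riemann = np.sum((X[:, :-1] > a) * inc, axis=1)
    Z = np.maximum(X[:, -1] - a, 0.0) - np.maximum(X[:, 0] - a, 0.0) - riemann
    return float(Z.mean())

rng = np.random.default_rng(0)
H, a = 0.75, 0.2
e_coarse = mean_error(8, H, a, 4000, rng)
e_fine = mean_error(64, H, a, 4000, rng)
assert e_coarse > e_fine > 0.0   # error decreases with n, consistent with n^{1-2H}
```

This only checks the qualitative decrease of the error; pinning down the constant numerically would require larger samples and a range of n.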

### Proof of Corollary 2.2

Let $$A_K = \{\omega : \sup _{0\le t\le 1} |X_t| \le K\}$$. Since f is locally of bounded variation, on the set $$A_K$$ we obtain

\begin{aligned} \int _0^1 \Psi '(X_s){\text {d}}X_s - \sum _{k=1}^n \Psi '(X_{t_{k-1}})(X_{t_k}-X_{t_{k-1}}) = \int _{-K}^K Z_n^+(a)\mu ({\text {d}}a). \end{aligned}

It follows that

\begin{aligned} \left| \int _0^1 \Psi '(X_s){\text {d}}X_s - \sum _{k=1}^n \Psi '(X_{t_{k-1}})(X_{t_k}-X_{t_{k-1}})\right| \le \int _{\mathbb {R}}Z_n^+(a)|\mu |({\text {d}}a). \end{aligned}

In view of Remark 4, taking expectation yields the claim.

### Proof of Corollary 2.3

The proof follows directly from the proof of Theorem 2.1 by considering the lower and upper bounds separately, and hence we leave the details to the interested reader.