1 Introduction

Consider the d-dimensional stochastic differential equation (SDE)

$$\begin{aligned} X_{t}^{x}=x+\alpha L_{t}(X^{x})\cdot \mathbf{1}_{d}+B_{t}^{H},\quad 0\le t\le T,x\in \mathbb {R}^{d}, \end{aligned}$$
(1.1)

where the driving noise \(B_{\cdot }^{H}\) of this equation is a d-dimensional fractional Brownian motion, whose components are given by independent one-dimensional fractional Brownian motions with Hurst parameter \(H\in (0,1/2)\), where \(\alpha \in \mathbb {R}\) is a constant and \(\mathbf{1}_{d}\) is the vector in \(\mathbb {R}^{d}\) with all entries equal to 1. Further, \(L_{t}(X^{x})\) is the (existing) local time at zero of \(X_{\cdot }^{x}\), which can be formally written as

$$\begin{aligned} L_{t}(X^{x})=\int _{0}^{t}\delta _{0}(X_{s}^{x})\mathrm{d}s, \end{aligned}$$

where \(\delta _{0}\) denotes the Dirac delta function in 0.

We also assume that \(B_{\cdot }^{H}\) is defined on a complete probability space \((\Omega ,\mathfrak {A},P).\)

We recall here that for \(d=1\) and Hurst parameter \(H\in (0,1)\), \(B_{t}^{H}\), \(0\le t\le T\), is a centered Gaussian process with covariance structure \(R_{H}(t,s)\) given by

$$\begin{aligned} R_{H}(t,s)=E[B_{t}^{H}B_{s}^{H}]=\frac{1}{2}(s^{2H}+t^{2H}-\left| t-s\right| ^{2H}). \end{aligned}$$

For \(H=\frac{1}{2}\), the fractional Brownian motion \(B_{\cdot }^{H}\) coincides with the standard Brownian motion. Moreover, \(B_{\cdot }^{H}\) has a version with \((H-\varepsilon ) \)-Hölder continuous paths for all \(\varepsilon \in (0,H)\) and is, up to a multiplicative constant, the only Gaussian process with stationary increments having the self-similarity property, that is

$$\begin{aligned} \{B_{\gamma t}^{H}\}_{t\ge 0}=\{\gamma ^{H}B_{t}^{H}\}_{t\ge 0} \end{aligned}$$

in law for all \(\gamma >0\). Finally, we mention that for \(H\ne \frac{1}{2}\) the fractional Brownian motion is neither a Markov process nor a (weak) semimartingale. The lack of these properties, however, complicates the study of SDEs driven by \(B_{\cdot }^{H}\) and in fact calls for the development of new construction techniques of solutions of such equations beyond the classical Markovian framework. For further information about the fractional Brownian motion, the reader may consult, e.g., [35] and the references therein.
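The covariance structure above already encodes the properties just listed. The following Python sketch (illustrative only; the choice \(H=0.3\) and the evaluation points are arbitrary) verifies the variance, stationary-increment and self-similarity identities directly from \(R_H\), as well as the negative correlation of disjoint increments for \(H<1/2\):

```python
# Illustrative check of the fBm covariance R_H; H = 0.3 is an arbitrary choice.
H = 0.3

def R(t, s):
    """Covariance E[B_t^H B_s^H] of a one-dimensional fBm."""
    return 0.5 * (s**(2*H) + t**(2*H) - abs(t - s)**(2*H))

# Variance at time t is t^{2H}.
assert abs(R(0.7, 0.7) - 0.7**(2*H)) < 1e-12

# Stationary increments: E[(B_t - B_s)^2] = R(t,t) - 2 R(t,s) + R(s,s) = |t-s|^{2H}.
t, s = 0.9, 0.4
assert abs(R(t, t) - 2*R(t, s) + R(s, s) - (t - s)**(2*H)) < 1e-12

# Self-similarity at the level of covariances: R(gamma*t, gamma*s) = gamma^{2H} R(t, s).
gamma = 2.5
assert abs(R(gamma*t, gamma*s) - gamma**(2*H) * R(t, s)) < 1e-12

# For H < 1/2, increments over disjoint intervals are negatively correlated.
cov_incr = R(1.0, 0.5) - R(0.5, 0.5)  # Cov(B_{1/2} - B_0, B_1 - B_{1/2})
assert cov_incr < 0
```

The last check reflects the failure of independent increments for \(H\ne \frac{1}{2}\) mentioned above.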

In this paper, we want to analyze, for small Hurst parameters \(H\in (0,1/2)\), strong solutions \(X_{\cdot }^{x}\) to the SDE (1.1), that is solutions to (1.1) which are adapted to the P-augmented filtration \(\mathcal {F} =\{\mathcal {F}_{t}\}_{0\le t\le T}\) generated by \(B_{\cdot }^{H}\). Let us mention here that solutions to (1.1) can be considered a generalization of the concept of a skew Brownian motion to the case of a fractional Brownian motion. The skew Brownian motion, which was first studied in the 1970s in [23, 43] and which has applications to, e.g., astrophysics, geophysics or, more recently, to the simulation of diffusion processes with discontinuous coefficients (see, e.g., [18, 26, 48]), is the solution to the SDE

$$\begin{aligned} X_{t}^{x}=x+(2p-1)L_{t}(X^{x})+B_{t},\quad 0\le t\le T,x\in \mathbb {R}, \end{aligned}$$
(1.2)

where \(B_{\cdot }\) is a one-dimensional Brownian motion, \(L_{t}(X^{x})\) the local time at zero of \(X_{\cdot }^{x}\) and p a parameter, which stands for the probability of positive excursions of \(X_{\cdot }^{x}\).

It was shown in [22] that the SDE (1.2) has a unique strong solution if and only if \(p\in [0,1]\). The approach of the latter authors relies on a one-to-one transformation of (1.2) into an SDE without drift and the symmetric Itô–Tanaka formula. Moreover, based on Skorohod’s problem, the authors show for \(2p-1=1\) or \(-1\) that the skew Brownian motion coincides with the reflected Brownian motion, a result which, we believe, does not hold true for solutions to (1.1). An extension of the latter results to SDEs of the type

$$\begin{aligned} \mathrm{d}X_{t}=\sigma (X_{t})\mathrm{d}B_{t}+\int _{\mathbb {R}}\nu (\mathrm{d}x)\mathrm{d}L_{t}^{x}(X) \end{aligned}$$
(1.3)

was given in the work [25] under fairly general conditions on the coefficient \(\sigma \) and the measure \(\nu \), where the author also proves that strong solutions to (1.3) can be obtained as limits of sequences of solutions to classical Itô SDEs by using the comparison theorem.

We remark here that the Walsh Brownian motion [43] also provides a natural extension of the skew Brownian motion, which is a diffusion process on rays in \(\mathbb {R}^{2}\) originating in zero and which exhibits the behavior of a Brownian motion on each of those rays. A further generalization of the latter process is the spider martingale, which has been used in the literature for the study of Brownian filtrations [47].

Other important generalizations of the skew Brownian motion to the multidimensional case in connection with weak solutions were studied in [10, 40]: Using PDE techniques, Portenko in [40] gives a construction of a unique solution process associated with an infinitesimal generator with a singular drift coefficient, which is concentrated on some smooth hypersurface.

On the other hand, Bass and Chen [10] analyze (unique) weak solutions of equations of the form

$$\begin{aligned} {d}X_{t}={d}A_{t}+{d}B_{t}, \end{aligned}$$
(1.4)

where \(B_{\cdot }\) is a d-dimensional Brownian motion and \(A_{t}\) a process, which is obtained from limits of the form

$$\begin{aligned} \lim _{n\longrightarrow \infty }\int _{0}^{t}b_{n}(X_{s})\mathrm{d}s \end{aligned}$$

in probability, uniformly in time t, for functions \(b_{n}: \mathbb {R}^{d}\longrightarrow \mathbb {R}^{d}\). Here, the ith components of \( A_{t}\) are bounded variation processes, which correspond to signed measures in the Kato class \(K_{d-1}\). The authors’ method for the construction of unique weak solutions of such equations is based on the construction of a certain resolvent family on the space \(C_{b}(\mathbb {R}^{d})\) in connection with the properties of the Kato class \(K_{d-1}\).

In this context, we also mention the paper [20] on SDEs with distributional drift coefficients. For a general overview of various construction techniques for the skew Brownian motion and related processes based, e.g., on the theory of Dirichlet forms or martingale problems, the reader is referred to [27]. See also the book [38].

The objective of this paper is the construction of strong solutions to the multidimensional SDE (1.1) with fractional Brownian noise and initial data x for small Hurst parameters \(H<\frac{1}{2}\), where the generalized drift is given by the local time of the unknown process. Note that, in contrast to [22] in the case of the skew Brownian motion, we obtain in this article the existence of strong solutions to (1.1) for all parameters \(\alpha \in \mathbb {R}.\)

Since the fractional Brownian motion is neither a Markov process nor a semimartingale for \(H\ne \frac{1}{2}\), the methods of the above-mentioned authors cannot be (directly) used for the construction of strong solutions in our setting. In fact, our construction technique differs considerably from those in the literature in the Wiener case. More specifically, we approximate the Dirac delta function in zero by means of the functions \(\varphi _{\varepsilon }\), \(\varepsilon \searrow 0\), given by

$$\begin{aligned} \varphi _{\varepsilon }(x)=\varepsilon ^{-\frac{d}{2}}\varphi (\varepsilon ^{-\frac{1}{2}}x),\quad x\in \mathbb {R}^{d} \end{aligned}$$

where \(\varphi \) is, e.g., the d-dimensional standard Gaussian density. Then, we prove that the sequence of strong solutions \(X_{t}^{n}\) to the SDEs

$$\begin{aligned} X_{t}^{n}=x+\int _{0}^{t}\alpha \varphi _{1/n}(X_{s}^{n})\cdot \mathbf{1}_{d}\mathrm{d}s+B_{t}^{H} \end{aligned}$$

converges strongly in \(L^{2}(\Omega )\) to a solution of (1.1) as \( n\longrightarrow \infty \). In showing this, we employ a compactness criterion for sets in \(L^{2}(\Omega )\) based on Malliavin calculus, combined with a “local time variational calculus” argument. See [9] for the existence of strong solutions of SDEs driven by \(B_{\cdot }^{H}\), \(H<\frac{1}{2}\), when, e.g., the drift coefficients b belong to \(L^{1}(\mathbb {R}^{d})\cap L^{\infty }(\mathbb {R}^{d})\), or see [33] in the Wiener case. We also refer to a series of other papers in the Wiener and Lévy process case and in the Hilbert space setting based on that approach: [7, 8, 19, 32, 34].
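The approximation scheme just described is straightforward to simulate. The sketch below is a rough illustration, not the paper's construction: all numerical parameters (dimension \(d=1\), \(H=0.3\), grid size, mollification width) are arbitrary choices. It samples a one-dimensional fBm path from its exact covariance by Cholesky factorization and runs an Euler scheme for the approximating equation with mollified drift \(\alpha \varphi _{\varepsilon }\):

```python
import math, random

# Illustrative parameters (arbitrary choices): d = 1, Hurst H = 0.3,
# horizon T = 1, n_steps grid points, mollification width eps.
H, T, n_steps = 0.3, 1.0, 16
alpha, x0, eps = 1.0, 0.0, 0.1

def R(t, s):
    """fBm covariance E[B_t^H B_s^H]."""
    return 0.5 * (s**(2*H) + t**(2*H) - abs(t - s)**(2*H))

def phi_eps(x):
    """eps^{-1/2} * phi(eps^{-1/2} x) with phi the standard Gaussian density (d = 1)."""
    return math.exp(-x*x / (2*eps)) / math.sqrt(2*math.pi*eps)

# phi_eps integrates to one (midpoint rule on [-2, 2], about 6 standard deviations).
mass = sum(phi_eps(-2 + 4*(k + 0.5)/400) for k in range(400)) * (4/400)
assert abs(mass - 1.0) < 1e-3

def cholesky(A):
    """Plain Cholesky factorization of a symmetric positive definite matrix."""
    n = len(A)
    L = [[0.0]*n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            acc = sum(L[i][k]*L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - acc) if i == j else (A[i][j] - acc)/L[j][j]
    return L

# Sample one fBm path on the grid from its exact covariance matrix.
ts = [T*(i + 1)/n_steps for i in range(n_steps)]
Lc = cholesky([[R(t, s) for s in ts] for t in ts])
random.seed(0)
z = [random.gauss(0.0, 1.0) for _ in range(n_steps)]
B = [sum(Lc[i][k]*z[k] for k in range(i + 1)) for i in range(n_steps)]

# Euler scheme for the approximating SDE with mollified drift alpha * phi_eps.
dt, X, prevB = T/n_steps, [x0], 0.0
for i in range(n_steps):
    X.append(X[-1] + alpha*phi_eps(X[-1])*dt + (B[i] - prevB))
    prevB = B[i]

assert len(X) == n_steps + 1 and all(math.isfinite(v) for v in X)
```

Note that the drift \(\alpha \varphi _{\varepsilon }\) is bounded for each fixed \(\varepsilon \), so the approximating equations fall into the bounded-drift setting of [9]; the delicate point, addressed in Sect. 5, is the limit \(\varepsilon \searrow 0\).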

Although we can show strong uniqueness with respect to (1.1) under some restrictive conditions (see Proposition 5.2), we remark that, in contrast to, e.g., [9], our construction technique as applied in this paper does not allow for establishing this property under more general conditions. Since the fractional Brownian motion is not a semimartingale for \(H\ne \frac{1}{2}\), we cannot pursue a proof strategy similar to that in, e.g., [22] for the verification of strong uniqueness of solutions via the Itô–Tanaka formula. However, it is conceivable that our arguments, combined with those in [4], which are based on results in [42] and a certain type of supremum concentration inequality in [44], will enable the construction of unique strong solutions to (1.1), possibly even in the sense of Davie [15].

Here, we also want to point out a recent work of Catellier and Gubinelli [11], which came to our attention after having finalized our article. In their striking paper, which extends the results of Davie [15] to the case of fractional Brownian noise, the authors study the question of which fractional Brownian paths actually regularize solutions to SDEs of the form

$$\begin{aligned} {d}X_{t}^{x}=b(X_{t}^{x}){d}t+{d}B_{t}^{H},\quad X_{0}^{x}=x\in \mathbb {R}^{d} \end{aligned}$$

for all \(H\in (0,1)\). The (unique) solutions constructed in [11] are path by path with respect to time-dependent vector fields b in the Besov–Hölder space \(B_{\infty ,\infty }^{\alpha }\), \(\alpha \in \mathbb {R}\); in the case of distributional vector fields, the solutions solve SDEs whose drift term is given by a nonlinear Young type integral based on an averaging operator. In proving existence and uniqueness results, the authors use the Leray–Schauder–Tychonoff fixed point theorem and a comparison principle in connection with an average translation operator. Further, Lipschitz regularity of the flow \((x\longmapsto X_{t}^{x})\) is shown under certain conditions.

We remark that our techniques are very different from those developed by Catellier and Gubinelli [11], which seem not to work in the case of vector fields b belonging to, e.g., \(L^{1}(\mathbb {R}^{d})\cap L^{\infty }(\mathbb {R}^{d})\) (private communication with one of the authors of [11]). Further, their methods do not yield Malliavin differentiability of strong solutions.

Another interesting paper in the direction of path-by-path analysis of differential equations, which we wish to comment on, is that of Aida [1] (see also [2]), where the author studies the existence (though not uniqueness) of solutions of reflected differential equations (with a Young integral term) for certain domains by using an Euler approximation scheme and Skorohod’s equation. As in the Wiener case (for \(d=1\) and \(\alpha =1\) or \(-1\)), we believe that the solutions we construct for (1.1) do not coincide with those in [1].

Finally, we mention that the construction technique in this article may also be used to establish the existence of strong solutions of SDEs with generalized drifts in the sense of (1.4) based on Kato classes. The existence of strong solutions of such equations in the Wiener case is, to the best of our knowledge, still an open problem. See the work of Bass and Chen [10].

Our paper is organized as follows: In Sect. 2, we introduce the framework of our paper and recall in this context some basic facts from fractional calculus and Malliavin calculus for (fractional) Brownian noise. Further, in Sect. 3 we discuss an integration by parts formula based on a local time on a simplex, which we want to employ in connection with a compactness criterion from Malliavin calculus in Sect. 5. Section 4 is devoted to the study of the local time of the fractional Brownian motion and its properties. Finally, in Sect. 5 we prove the existence of a strong solution to (1.1) by using the results of the previous sections.

2 Framework

In this section, we review some theory on fractional calculus, basic facts on fractional Brownian noise, occupation measures and some other results which will be used throughout the article in combination with methods from Malliavin calculus. The reader may consult [30, 31] or [17] for a general theory on Malliavin calculus for Brownian motion and [35, Chapter 5] for fractional Brownian motion. As for the theory of occupation measures, we refer to [21] or [24].

2.1 Fractional Calculus

We start up here with some basic definitions and properties of fractional derivatives and integrals. For more information, see [29, 41].

Let \(a,b\in \mathbb {R}\) with \(a<b\). Let \(f\in L^p([a,b])\) with \(p\ge 1\) and \(\alpha >0\). Introduce the left- and right-sided Riemann–Liouville fractional integrals by

$$\begin{aligned} I_{a^+}^\alpha f(x) = \frac{1}{\Gamma (\alpha )} \int _a^x (x-y)^{\alpha -1}f(y)\mathrm{d}y \end{aligned}$$

and

$$\begin{aligned} I_{b^-}^\alpha f(x) = \frac{1}{\Gamma (\alpha )} \int _x^b (y-x)^{\alpha -1}f(y)\mathrm{d}y \end{aligned}$$

for almost all \(x\in [a,b]\) where \(\Gamma \) is the Gamma function.

Further, for a given \(p\ge 1\), let \(I_{a^+}^{\alpha } (L^p)\) (resp. \(I_{b^-}^{\alpha } (L^p)\)) be the image of \(L^p([a,b])\) under the operator \(I_{a^+}^\alpha \) (resp. \(I_{b^-}^\alpha \)). If \(f\in I_{a^+}^{\alpha } (L^p)\) (resp. \(f\in I_{b^-}^{\alpha } (L^p)\)) and \(0<\alpha <1\), then define the left- and right-sided Riemann–Liouville fractional derivatives by

$$\begin{aligned} D_{a^+}^{\alpha } f(x)= \frac{1}{\Gamma (1-\alpha )} \frac{\text{ d }}{\text{ d }x} \int _a^x \frac{f(y)}{(x-y)^{\alpha }}\mathrm{d}y \end{aligned}$$

and

$$\begin{aligned} D_{b^-}^{\alpha } f(x)= \frac{1}{\Gamma (1-\alpha )} \frac{\text{ d }}{\text{ d }x} \int _x^b \frac{f(y)}{(y-x)^{\alpha }}\mathrm{d}y. \end{aligned}$$

The left- and right-sided derivatives of f defined as above admit the following representations:

$$\begin{aligned} D_{a^+}^{\alpha } f(x)= \frac{1}{\Gamma (1-\alpha )} \left( \frac{f(x)}{(x-a)^\alpha }+\alpha \int _a^x \frac{f(x)-f(y)}{(x-y)^{\alpha +1}}\mathrm{d}y\right) \end{aligned}$$

and

$$\begin{aligned} D_{b^-}^{\alpha } f(x)= \frac{1}{\Gamma (1-\alpha )} \left( \frac{f(x)}{(b-x)^\alpha }+\alpha \int _x^b \frac{f(x)-f(y)}{(y-x)^{\alpha +1}}\mathrm{d}y\right) . \end{aligned}$$

Finally, we see by construction that the following relations are valid

$$\begin{aligned} I_{a^+}^\alpha (D_{a^+}^{\alpha } f) = f \end{aligned}$$

for all \(f\in I_{a^+}^{\alpha } (L^p)\) and

$$\begin{aligned} D_{a^+}^{\alpha }(I_{a^+}^\alpha f) = f \end{aligned}$$

for all \(f\in L^p([a,b])\) and similarly for \(I_{b^-}^{\alpha }\) and \(D_{b^-}^{\alpha }\).
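These definitions and inversion relations can be probed numerically. The sketch below (illustrative only; the values of \(\alpha \), x and the quadrature size are arbitrary) evaluates \(I_{0^+}^{\alpha }\) by a midpoint rule after a substitution that removes the integrable singularity \((x-y)^{\alpha -1}\), and compares with the standard closed form \(I_{0^+}^{\alpha }[y](x)=x^{1+\alpha }/\Gamma (2+\alpha )\):

```python
import math

def I_frac(f, a, x, alpha, N=4000):
    """Left-sided Riemann-Liouville integral I_{a+}^alpha f(x).

    The substitution t = s^{1/alpha} for t = x - y removes the integrable
    singularity (x-y)^{alpha-1}, leaving a smooth integrand for the midpoint rule:
    I_{a+}^alpha f(x) = 1/(Gamma(alpha)*alpha) * int_0^{(x-a)^alpha} f(x - s^{1/alpha}) ds.
    """
    upper = (x - a)**alpha
    h = upper / N
    acc = sum(f(x - ((k + 0.5)*h)**(1.0/alpha)) for k in range(N))
    return acc * h / (math.gamma(alpha) * alpha)

alpha, x = 0.4, 1.3
# Closed form: I_{0+}^alpha [y](x) = x^{1+alpha} / Gamma(2+alpha).
exact = x**(1 + alpha) / math.gamma(2 + alpha)
num = I_frac(lambda y: y, 0.0, x, alpha)
assert abs(num - exact) < 1e-4 * exact
```

The same quadrature idea works for the right-sided integral \(I_{b^-}^{\alpha }\) after reflecting the variable.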

2.2 Shuffles

Let m and n be integers. We denote by S(m, n) the set of shuffle permutations, i.e., the set of permutations \(\sigma : \{1, \dots , m+n\} \rightarrow \{1, \dots , m+n\}\) such that \(\sigma (1)< \dots < \sigma (m)\) and \(\sigma (m+1)< \dots < \sigma (m+n)\).

The m-dimensional simplex is defined as

$$\begin{aligned} \Delta _{\theta ,t}^m := \{(s_m,\dots ,s_1)\in [0,T]^m : \, \theta<s_m<\cdots< s_1<t\}. \end{aligned}$$

The product of two simplices is then given by the following union

$$\begin{aligned}&\Delta _{\theta ,t}^m \times \Delta _{\theta ,t}^n \\&\quad = { \bigcup _{\sigma \in S(m,n)} \{(w_{m+n},\dots ,w_1)\in [0,T]^{m+n} : \, \theta< w_{\sigma (m+n)}<\cdots< w_{\sigma (1)} <t\} \cup \mathcal {N} }, \end{aligned}$$

where the set \(\mathcal {N}\) has null Lebesgue measure. Thus, if \(f_i:[0,T] \rightarrow \mathbb {R}\), \(i=1,\dots ,m+n\) are integrable functions, we obtain that

$$\begin{aligned}&\int _{\Delta _{\theta ,t}^m} \prod _{j=1}^m f_j(s_j) \mathrm{d}s_m \dots \mathrm{d}s_1 \int _{\Delta _{\theta ,t}^n} \prod _{j=m+1}^{m+n} f_j(s_j) \mathrm{d}s_{m+n} \dots \mathrm{d}s_{m+1} \nonumber \\&\quad = \sum _{\sigma \in S(m,n)} \int _{\Delta _{\theta ,t}^{m+n}} \prod _{j=1}^{m+n} f_{\sigma (j)} (w_j) \mathrm{d}w_{m+n}\cdots \mathrm{d}w_1. \end{aligned}$$
(2.1)
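Identity (2.1) can be verified numerically for small m and n. The sketch below (illustrative only; nested midpoint rules with arbitrary quadrature size) checks it for \(m=n=1\) with \(f_1(s)=s\), \(f_2(s)=s^2\) on [0, 1], where both sides equal 1/6:

```python
from itertools import combinations

def simplex_int(fs, a, b, N=300):
    """Integral of f_1(s_1)...f_m(s_m) over {a < s_m < ... < s_1 < b}, nested midpoint rules."""
    if not fs:
        return 1.0
    h = (b - a) / N
    total = 0.0
    for k in range(N):
        s1 = a + (k + 0.5)*h
        total += fs[0](s1) * simplex_int(fs[1:], a, s1, N)
    return total * h

f1 = lambda u: u       # the "f"-block, m = 1
g1 = lambda u: u*u     # the "g"-block, n = 1
theta, t, m, n = 0.0, 1.0, 1, 1

lhs = simplex_int([f1], theta, t) * simplex_int([g1], theta, t)   # = 1/2 * 1/3 = 1/6

# Right-hand side of (2.1): sum over all shuffles of the two blocks,
# i.e., over the order-preserving interleavings of (f_1,...,f_m) and (g_1,...,g_n).
rhs = 0.0
for pos in combinations(range(m + n), m):   # slots occupied by the f-block
    fs, gs = iter([f1]), iter([g1])
    order = [next(fs) if j in pos else next(gs) for j in range(m + n)]
    rhs += simplex_int(order, theta, t)

assert abs(lhs - 1.0/6.0) < 1e-3
assert abs(rhs - lhs) < 1e-3
```

Enumerating shuffles as interleavings via `itertools.combinations` makes the count \(\# S(m,n)=\binom{m+n}{m}\) transparent, which is the bound used in the proof of Lemma 2.1 below.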

We next give a slight generalization of identity (2.1), whose proof can also be found in [9]. This lemma will be used in Sect. 5. The reader may skip it at first reading.

Lemma 2.1

Let n, p and k be integers with \(k \le n\). Assume we have integrable functions \(f_j : [0,T] \rightarrow \mathbb {R}\), \(j = 1, \dots , n\) and \(g_i : [0,T] \rightarrow \mathbb {R}\), \(i = 1, \dots , p\). We may then write

$$\begin{aligned}&\int _{\Delta _{\theta ,t}^n} f_1(s_1) \dots f_k(s_k) \int _{\Delta _{\theta , s_k}^p} g_1(r_1) \dots g_p(r_p) \mathrm{d}r_p \dots \mathrm{d}r_1 f_{k+1}(s_{k+1}) \dots f_n(s_n) \mathrm{d}s_n \dots \mathrm{d}s_1 \\&\qquad = \sum _{\sigma \in A_{n,p}} \int _{\Delta _{\theta ,t}^{n+p}} h^{\sigma }_1(w_1) \dots h^{\sigma }_{n+p}(w_{n+p}) \mathrm{d}w_{n+p} \dots \mathrm{d}w_1, \end{aligned}$$

where \(h^{\sigma }_l \in \{ f_j, g_i : 1 \le j \le n, 1 \le i \le p\}\). Here, \(A_{n,p}\) is a subset of permutations of \(\{1, \dots , n+p\}\) such that \(\# A_{n,p} \le C^{n+p}\) for a constant \(C \ge 1\), and we use the definition \(s_0 = \theta \).

Proof

The proof of the result is given by induction on n. For \(n=1\) and \(k=0\), the result is trivial. For \(k=1\), we have

$$\begin{aligned}&\int _{\theta }^t f_1(s_1) \int _{\Delta _{\theta ,s_1}^p} g_1(r_1) \dots g_p(r_p) \mathrm{d}r_p \dots \mathrm{d}r_1 \mathrm{d}s_1 \\&\quad = \int _{\Delta _{\theta ,t}^{p+1}} f_1(w_1) g_1(w_2) \dots g_p(w_{p+1}) \mathrm{d}w_{p+1} \dots \mathrm{d}w_1, \end{aligned}$$

where we have put \(w_1 =s_1, w_2 = r_1, \dots , w_{p+1} = r_p\).

Assume the result holds for n and let us show that this implies that the result is true for \(n+1\). Either \(k=0,1\) or \(2 \le k \le n+1\). For \(k=0\), the result is trivial. For \(k=1\), we have

$$\begin{aligned}&\int _{\Delta _{\theta ,t}^{n+1}} f_1(s_1) \int _{\Delta _{\theta ,s_1}^p} g_1(r_1) \dots g_p(r_p) \mathrm{d}r_p \dots \mathrm{d}r_1 f_2(s_2) \dots f_{n+1}(s_{n+1}) \mathrm{d}s_{n+1} \dots \mathrm{d}s_1 \\&\quad = \int _{\theta }^t f_1(s_1) \left( \int _{\Delta _{\theta ,s_1}^n} \int _{\Delta _{\theta ,s_1}^p} g_1(r_1) \dots g_p(r_p) \mathrm{d}r_p \dots \mathrm{d}r_1 f_2(s_2) \dots \right. \\&\qquad \left. f_{n+1}(s_{n+1}) \mathrm{d}s_{n+1} \dots \mathrm{d}s_2 \right) \mathrm{d}s_1 . \end{aligned}$$

The result follows from (2.1) coupled with \( \# S(n,p) = \frac{(n+p)!}{n! p!} \le C^{n+p} \le C^{(n+1) + p}\). For \(k \ge 2\), we have from the induction hypothesis

$$\begin{aligned}&\int _{\Delta _{\theta ,t}^{n+1}} f_1(s_1) \dots f_k(s_k) \int _{\Delta _{\theta , s_k}^p} g_1(r_1) \dots g_p(r_p) \mathrm{d}r_p \dots \mathrm{d}r_1 f_{k+1}(s_{k+1}) \dots \\&\qquad f_{n+1}(s_{n+1}) \mathrm{d}s_{n+1} \dots \mathrm{d}s_1 \\&\quad = \int _{\theta }^t f_1(s_1) \int _{\Delta _{\theta ,s_1}^n} f_2(s_2) \dots f_k(s_k) \int _{\Delta _{\theta , s_k}^p} g_1(r_1) \dots g_p(r_p) \mathrm{d}r_p \dots \mathrm{d}r_1 \\&\qquad \times f_{k+1}(s_{k+1}) \dots f_{n+1}(s_{n+1}) \mathrm{d}s_{n+1} \dots \mathrm{d}s_2 \mathrm{d}s_1 \\&\quad = \sum _{\sigma \in A_{n,p}} \int _{\theta }^t f_1(s_1) \int _{\Delta _{\theta ,s_1}^{n+p}} h^{\sigma }_1(w_1) \dots h^{\sigma }_{n+p}(w_{n+p}) \mathrm{d}w_{n+p} \dots \mathrm{d}w_1 \mathrm{d}s_1\\&\quad = \sum _{\tilde{\sigma } \in A_{n+1,p}} \int _{\Delta _{\theta ,t}^{n+1+p}} h^{\tilde{\sigma }}_1(w_1) \dots h^{\tilde{\sigma }}_{n+1+p}(w_{n+1+p}) \mathrm{d}w_{n+1+p} \dots \mathrm{d}w_1, \end{aligned}$$

where \(A_{n+1,p}\) is the set of permutations \(\tilde{\sigma }\) of \(\{1, \dots , n+1+p\}\) such that \(\tilde{\sigma }(1) = 1\) and \(\tilde{\sigma }(j+1) = \sigma (j)\), \(j=1, \dots , n+p\) for some \(\sigma \in A_{n,p}\). \(\square \)

Remark 2.2

We remark that the set \(A_{n,p}\) in the above lemma also depends on k but we shall not make use of this fact.

2.3 Fractional Brownian motion

Denote by \(B^H = \{B_t^H, t\in [0,T]\}\) a d-dimensional fractional Brownian motion with Hurst parameter \(H\in (0,1/2)\). So \(B^H\) is a centered Gaussian process with covariance structure

$$\begin{aligned} (R_H(t,s))_{i,j}:= E[B_t^{H,(i)} B_s^{H,(j)}]=\delta _{ij}\frac{1}{2}\left( t^{2H} + s^{2H} - |t-s|^{2H} \right) , \quad i,j=1,\dots ,d, \end{aligned}$$

where \(\delta _{ij}\) equals one if \(i=j\) and zero otherwise. Observe that \(E[|B_t^H - B_s^H|^2]= d|t-s|^{2H}\) and hence \(B^H\) has stationary increments and Hölder continuous trajectories of index \(H-\varepsilon \) for all \(\varepsilon \in (0,H)\). Observe also that the increments of \(B^H\), \(H\in (0,1/2)\), are not independent. As a matter of fact, this process does not satisfy the Markov property either. Another obstacle one is faced with is that \(B^H\) is not a semimartingale, see, e.g., [35, Proposition 5.1.1].

We give an abridged survey on how to construct fractional Brownian motion via an isometry. We do this in one dimension, inasmuch as we will treat the multidimensional case componentwise. See [35] for further details.

Let \(\mathcal {E}\) be the set of step functions on [0, T], and let \(\mathcal {H}\) be the Hilbert space given by the closure of \(\mathcal {E}\) with respect to the inner product

$$\begin{aligned} \langle 1_{[0,t]} , 1_{[0,s]}\rangle _{\mathcal {H}} = R_H(t,s). \end{aligned}$$

The mapping \(1_{[0,t]} \mapsto B_t^H\) has an extension to an isometry between \(\mathcal {H}\) and the Gaussian subspace of \(L^2(\Omega )\) associated with \(B^H\). We denote this isometry by \(\varphi \mapsto B^H(\varphi )\). Let us recall the following result (see [35, Proposition 5.1.3]), which gives an integral representation of \(R_H(t,s)\) when \(H<1/2\):

Proposition 2.3

Let \(H<1/2\). The kernel

$$\begin{aligned} K_H(t,s)= c_H \left[ \left( \frac{t}{s}\right) ^{H- \frac{1}{2}} (t-s)^{H- \frac{1}{2}} + \left( \frac{1}{2}-H\right) s^{\frac{1}{2}-H} \int _s^t u^{H-\frac{3}{2}} (u-s)^{H-\frac{1}{2}} \mathrm{d}u\right] , \end{aligned}$$

where \(c_H = \sqrt{\frac{2H}{(1-2H) \beta (1-2H , H+1/2)}}\), with \(\beta \) denoting the Beta function, satisfies

$$\begin{aligned} R_H(t,s) = \int _0^{t\wedge s} K_H(t,u)K_H(s,u)\mathrm{d}u. \end{aligned}$$
(2.2)

The kernel \(K_H\) also has the following representation by means of fractional derivatives

$$\begin{aligned} K_H(t,s) = c_H \Gamma \left( H+\frac{1}{2}\right) s^{\frac{1}{2}-H} \left( D_{t^-}^{\frac{1}{2}-H} u^{H-\frac{1}{2}}\right) (s). \end{aligned}$$

Consider now the linear operator \(K_H^{*}: \mathcal {E} \rightarrow L^2([0,T])\) defined by

$$\begin{aligned} (K_H^{*} \varphi )(s) = K_H(T,s)\varphi (s) + \int _s^T (\varphi (t)-\varphi (s)) \frac{\partial K_H}{\partial t}(t,s)\mathrm{d}t \end{aligned}$$

for every \(\varphi \in \mathcal {E}\). We see that \((K_H^{*} 1_{[0,t]})(s) = K_H(t,s)1_{[0,t]}(s)\), and then, from this fact and (2.2) one can conclude that \(K_H^{*}\) is an isometry between \(\mathcal {E}\) and \(L^2([0,T])\) which extends to the Hilbert space \(\mathcal {H}\). See, e.g., [16] and [3] and the references therein.

For a given \(\varphi \in \mathcal {H}\), one proves that \(K_H^{*}\) can be represented in terms of fractional derivatives in the following ways

$$\begin{aligned} (K_H^{*} \varphi )(s) = c_H \Gamma \left( H+\frac{1}{2}\right) s^{\frac{1}{2}-H} \left( D_{T^-}^{\frac{1}{2}-H} u^{H-\frac{1}{2}}\varphi (u)\right) (s) \end{aligned}$$

and

$$\begin{aligned} (K_H^{*} \varphi )(s)&= c_H \Gamma \left( H+\frac{1}{2}\right) \left( D_{T^-}^{\frac{1}{2}-H} \varphi (s)\right) (s)\\&\quad + c_H \left( \frac{1}{2}-H\right) \int _s^T \varphi (t) (t-s)^{H-\frac{3}{2}} \left( 1- \left( \frac{t}{s}\right) ^{H-\frac{1}{2}}\right) \mathrm{d}t. \end{aligned}$$

One finds that \(\mathcal {H} = I_{T^-}^{\frac{1}{2}-H}(L^2)\) (see [16] and [3, Proposition 6]).

Using the fact that \(K_H^{*}\) is an isometry from \(\mathcal {H}\) into \(L^2([0,T])\), the d-dimensional process \(W=\{W_t, t\in [0,T]\}\) defined by

$$\begin{aligned} W_t := B^H((K_H^{*})^{-1}(1_{[0,t]})) \end{aligned}$$
(2.3)

is a Wiener process and the process \(B^H\) can be represented as follows

$$\begin{aligned} B_t^H = \int _0^t K_H(t,s) \mathrm{d}W_s, \end{aligned}$$
(2.4)

see [3].
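Relation (2.2) can be verified numerically, although the kernel's singularities at \(s=0\) and \(s=t\) make this delicate. The following rough sketch (illustrative only; \(H=0.3\), the test point \((t,s)=(1,0.5)\) and all quadrature sizes are arbitrary, and the tolerance is deliberately loose) evaluates \(K_H\) by a midpoint rule after a substitution taming the inner singularity, and checks (2.2) at a point where \(R_H(1,0.5)=1/2\) exactly:

```python
import math

# Rough numerical check of (2.2) at (t, s) = (1, 0.5); H = 0.3 is an arbitrary choice.
H = 0.3
beta = math.gamma(1 - 2*H) * math.gamma(H + 0.5) / math.gamma(1.5 - H)  # Beta(1-2H, H+1/2)
c_H = math.sqrt(2*H / ((1 - 2*H) * beta))

def K(t, s, N=250):
    """Kernel K_H(t, s) for H < 1/2; the substitution u - s = v^{1/(H+1/2)} removes the
    singularity (u - s)^{H-1/2} from the inner integral, leaving a smooth integrand."""
    a0 = H + 0.5
    h = (t - s)**a0 / N
    J = sum((s + ((k + 0.5)*h)**(1.0/a0))**(H - 1.5) for k in range(N)) * h / a0
    return c_H * ((t/s)**(H - 0.5) * (t - s)**(H - 0.5) + (0.5 - H) * s**(0.5 - H) * J)

def R(t, s):
    return 0.5 * (s**(2*H) + t**(2*H) - abs(t - s)**(2*H))

t, s, M = 1.0, 0.5, 2000
h = s / M
num = sum(K(t, (k + 0.5)*h) * K(s, (k + 0.5)*h) for k in range(M)) * h
assert abs(num - R(t, s)) < 0.02   # here R_H(1, 0.5) = 0.5 exactly
```

The midpoint rule converges slowly near the integrable singularities \((s-u)^{2H-1}\) and \(u^{2H-1}\) of the product \(K_H(t,u)K_H(s,u)\), which is why the tolerance is generous; a graded mesh would do better.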

We also need to introduce the concept of fractional Brownian motion associated with a filtration.

Definition 2.4

Let \(\mathcal {G}=\left\{ \mathcal {G}_{t}\right\} _{t\in \left[ 0,T\right] }\) be a right-continuous increasing family of \(\sigma \)-algebras on \(\left( \Omega ,\mathcal {F},P\right) \) such that \(\mathcal {G}_{0}\) contains the null sets. A fractional Brownian motion \(B^{H}\) is called a \(\mathcal {G}\)-fractional Brownian motion if the process W defined by (2.3) is a \(\mathcal {G}\)-Brownian motion.

In what follows, we will denote by W a standard Wiener process on a given probability space \((\Omega , \mathfrak {A}, P)\) equipped with the natural filtration \(\mathcal {F}=\{\mathcal {F}_t\}_{t\in [0,T]}\) generated by W and augmented by all P-null sets. Further, we shall denote by \(B:=B^H\) the fractional Brownian motion with Hurst parameter \(H\in (0,1/2)\) given by the representation (2.4).

In this paper, we want to make use of a version of Girsanov’s theorem for fractional Brownian motion which is due to [16, Theorem 4.9]. Here, we recall the version given in [36, Theorem 2]. However, we first need the definition of an isomorphism \(K_H\) from \(L^2([0,T])\) onto \(I_{0+}^{H+\frac{1}{2}}(L^2)\) associated with the kernel \(K_H(t,s)\), given in terms of fractional integrals as follows; see [16, Theorem 2.1]:

$$\begin{aligned} (K_H \varphi )(s) = I_{0^+}^{2H} s^{\frac{1}{2}-H} I_{0^+}^{\frac{1}{2}-H}s^{H-\frac{1}{2}} \varphi , \quad \varphi \in L^2([0,T]). \end{aligned}$$

It follows from this and the properties of the Riemann–Liouville fractional integrals and derivatives that the inverse of \(K_H\) takes the form

$$\begin{aligned} (K_H^{-1} \varphi )(s) = s^{\frac{1}{2}-H} D_{0^+}^{\frac{1}{2}-H} s^{H-\frac{1}{2}} D_{0^+}^{2H} \varphi (s), \quad \varphi \in I_{0+}^{H+\frac{1}{2}}(L^2). \end{aligned}$$

The latter implies that if \(\varphi \) is absolutely continuous, see [36], one has

$$\begin{aligned} (K_H^{-1} \varphi )(s) = s^{H-\frac{1}{2}} I_{0^+}^{\frac{1}{2}-H} s^{\frac{1}{2}-H}\varphi '(s). \end{aligned}$$
(2.5)

Theorem 2.5

(Girsanov’s theorem for fBm) Let \(u=\{u_t, t\in [0,T]\}\) be an \(\mathcal {F}\)-adapted process with integrable trajectories and set \(\widetilde{B}_t^H = B_t^H + \int _0^t u_s \mathrm{d}s, \quad t\in [0,T].\) Assume that

  1. (i)

    \(\int _0^{\cdot } u_s \mathrm{d}s \in I_{0+}^{H+\frac{1}{2}} (L^2 ([0,T]))\), P-a.s.

  2. (ii)

    \(E[\xi _T]=1\) where

    $$\begin{aligned} \xi _T := \exp \left\{ -\int _0^T K_H^{-1}\left( \int _0^{\cdot } u_r \mathrm{d}r\right) (s)\mathrm{d}W_s - \frac{1}{2} \int _0^T K_H^{-1} \left( \int _0^{\cdot } u_r \mathrm{d}r \right) ^2(s)\mathrm{d}s \right\} . \end{aligned}$$

Then, the shifted process \(\widetilde{B}^H\) is an \(\mathcal {F}\)-fractional Brownian motion with Hurst parameter H under the new probability \(\widetilde{P}\) defined by \(\frac{\mathrm{d}\widetilde{P}}{\mathrm{d}P}=\xi _T\).

Remark 2.6

As for the multidimensional case, define

$$\begin{aligned} (K_H \varphi )(s):= ( (K_H \varphi ^{(1)} )(s), \dots , (K_H \varphi ^{(d)})(s))^{*}, \quad \varphi \in L^2([0,T];\mathbb {R}^d), \end{aligned}$$

where \(*\) denotes transposition and similarly for \(K_H^{-1}\) and \(K_H^{*}\).

In this paper, we will also employ a crucial property of the fractional Brownian motion, which was shown in [39] for general Gaussian vector fields. This property will serve as a helpful substitute for the lack of independent increments of the underlying noise.

Let \(m\in \mathbb {N}\) and \(0=:t_0<t_1<\cdots<t_m<T\). Then, for all \(\xi _1,\dots , \xi _m\in \mathbb {R}^d\) there exists a positive finite constant \(C>0\) (depending on m) such that

$$\begin{aligned} \mathrm {Var}\left[ \sum _{j=1}^m \langle \xi _j, B_{t_j}-B_{t_{j-1}}\rangle _{\mathbb {R}^d}\right] \ge C \sum _{j=1}^m |\xi _j|^2 \mathrm {Var}\left[ B_{t_j}-B_{t_{j-1}}\right] . \end{aligned}$$
(2.6)

The above property is referred to in the literature as the local non-determinism property of the fractional Brownian motion. The reader may consult [39] or [46] for more information on this property. A stronger version of local non-determinism is also satisfied by the fractional Brownian motion: there exists a constant \(K>0\), depending only on H and T, such that for any \(t\in \left[ 0,T\right] \), \(0<r<t\) and \(i=1,\ldots ,d,\)

$$\begin{aligned} \mathrm {Var}\left[ B_{t}^{H,i}|\left\{ B_{s}^{H,i}:\left| t-s\right| \ge r\right\} \right] \ge Kr^{2H}. \end{aligned}$$
(2.7)
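Since the variance of any linear combination of increments is an exact quadratic form in the covariances derived from \(R_H\), property (2.6) can be illustrated without simulation. The sketch below (illustrative only; \(H=0.3\), the partition and the coefficient vectors are arbitrary choices, and it probes a few test vectors in \(d=1\) rather than proving the inequality) computes the ratio of the two sides of (2.6):

```python
# Exact-covariance probe of the local non-determinism inequality (2.6) for d = 1.
# H = 0.3 and the partition/coefficient vectors are arbitrary illustrative choices.
H = 0.3

def R(t, s):
    return 0.5 * (s**(2*H) + t**(2*H) - abs(t - s)**(2*H))

ts = [0.0, 0.2, 0.5, 0.7, 1.0]   # 0 = t_0 < t_1 < ... < t_m
m = len(ts) - 1

def cov_incr(i, j):
    """Cov(B_{t_i} - B_{t_{i-1}}, B_{t_j} - B_{t_{j-1}}), computed exactly from R_H."""
    return (R(ts[i], ts[j]) - R(ts[i], ts[j-1])
            - R(ts[i-1], ts[j]) + R(ts[i-1], ts[j-1]))

def var_comb(xi):
    """Var[sum_j xi_j (B_{t_j} - B_{t_{j-1}})]."""
    return sum(xi[i-1]*xi[j-1]*cov_incr(i, j)
               for i in range(1, m+1) for j in range(1, m+1))

# Ratio of the left-hand side of (2.6) to sum_j xi_j^2 (t_j - t_{j-1})^{2H};
# strict positivity on these test vectors reflects the non-degeneracy of the noise.
ratios = []
for xi in [(1, 1, 1, 1), (1, -1, 1, -1), (2, 0, -3, 1), (0, 1, 0, -1)]:
    lower = sum(x*x*(ts[j] - ts[j-1])**(2*H) for j, x in enumerate(xi, start=1))
    ratios.append(var_comb(xi) / lower)

assert min(ratios) > 0.0
assert abs(var_comb((1, 1, 1, 1)) - R(1.0, 1.0)) < 1e-9  # telescoping: Var[B_1]
```

The telescoping check at the end confirms the covariance bookkeeping: summing all increments recovers \(\mathrm {Var}[B_{1}]=1^{2H}\).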

3 An Integration by Parts Formula

In this section, we recall an integration by parts formula, which is essentially based on the local time of the Gaussian process \(B^H\). The whole content as well as the proofs can be found in [9].

Let m be an integer, and let \(f :[0,T]^m \times (\mathbb {R}^d)^m \rightarrow \mathbb {R}\) be a function of the form

$$\begin{aligned} f(s,z)= \prod _{j=1}^m f_j(s_j,z_j),\quad s = (s_1, \dots ,s_m) \in [0,T]^m, \quad z = (z_1, \dots , z_m) \in (\mathbb {R}^d)^m, \end{aligned}$$
(3.1)

where \(f_j:[0,T]\times \mathbb {R}^d \rightarrow \mathbb {R}\), \(j=1,\dots ,m\) are smooth functions with compact support. Further, let \(\varkappa :[0,T]^m\rightarrow \mathbb {R}\) be a function of the form

$$\begin{aligned} \varkappa (s)= \prod _{j=1}^m \varkappa _j(s_j), \quad s\in [0,T]^m, \end{aligned}$$
(3.2)

where \(\varkappa _j : [0,T] \rightarrow \mathbb {R}\), \(j=1,\dots , m\) are integrable functions.

Next, denote by \(\alpha _j\) a multiindex and by \(D^{\alpha _j}\) its corresponding differential operator. For \(\alpha = (\alpha _1, \dots , \alpha _m)\), considered as an element of \(\mathbb {N}_0^{d\times m}\), with \(|\alpha |:= \sum _{j=1}^m \sum _{l=1}^d \alpha _{j}^{(l)}\), we write

$$\begin{aligned} D^{\alpha }f(s,z) = \prod _{j=1}^m D^{\alpha _j} f_j(s_j,z_j). \end{aligned}$$

In this section, we aim at deriving an integration by parts formula of the form

$$\begin{aligned} \int _{\Delta _{\theta ,t}^m} D^{\alpha }f(s,B_s) \mathrm{d}s = \int _{(\mathbb {R}^d)^m} \Lambda ^{f}_{\alpha } (\theta ,t,z)\mathrm{d}z , \end{aligned}$$
(3.3)

for a suitable random field \(\Lambda ^f_{\alpha }\), where \(\Delta _{\theta ,t}^m\) is the m-dimensional simplex as defined in Sect. 2.2 and \(B_{s}=(B_{s_{1}},\ldots ,B_{s_{m}})\) on that simplex. More specifically, we have that

$$\begin{aligned} \Lambda ^f_{\alpha }(\theta ,t ,z) = (2 \pi )^{-dm} \int _{(\mathbb {R}^d)^m} \int _{\Delta _{\theta ,t}^m} \prod _{j=1}^m f_j(s_j,z_j) (-i u_j)^{\alpha _j} \exp \{ -i \langle u_j, B_{s_j} - z_j \rangle \}\mathrm{d}s \mathrm{d}u . \end{aligned}$$
(3.4)

Let us start by defining \(\Lambda ^f_{\alpha }(\theta ,t,z)\) as above and show that it is a well-defined element of \(L^2(\Omega )\).

To this end, we need the following notation: Given \((s,z) = (s_1, \dots , s_m ,z_1 \dots , z_m) \in [0,T]^m \times (\mathbb {R}^d)^m\) and a shuffle \(\sigma \in S(m,m)\), we write

$$\begin{aligned} f_{\sigma }(s,z) := \prod _{j=1}^{2m} f_{[\sigma (j)]}(s_j, z_{[\sigma (j)]}) \end{aligned}$$

and

$$\begin{aligned} \varkappa _{\sigma } (s) := \prod _{j=1}^{2m} \varkappa _{[\sigma (j)]}(s_j), \end{aligned}$$

where [j] is equal to j if \(1 \le j \le m\) and \(j-m\) if \(m+1 \le j \le 2m \).

For integers \(k \ge 0\), let us define the expressions

$$\begin{aligned}&\Psi _{k}^{f}(\theta ,t,z) \\&\quad :=\prod _{l=1}^{d}\sqrt{(2\left| \alpha ^{(l)}\right| )!} \sum _{\sigma \in S(m,m)}\int _{\Delta _{0,t}^{2m}}\left| f_{\sigma }(s,z)\right| \prod _{j=1}^{2m}\frac{1}{\left| s_{j}-s_{j-1}\right| ^{H(d+2\sum _{l=1}^{d}\alpha _{[\sigma (j)]}^{(l)})}}\mathrm{d}s_{1} \ldots \mathrm{d}s_{2m} \end{aligned}$$

and, respectively,

$$\begin{aligned}&\Psi _{k}^{\varkappa }(\theta ,t) \\&\quad :=\prod _{l=1}^{d}\sqrt{(2\left| \alpha ^{(l)}\right| )!} \sum _{\sigma \in S(m,m)}\int _{\Delta _{0,t}^{2m}}\left| \varkappa _{\sigma }(s)\right| \prod _{j=1}^{2m}\frac{1}{\left| s_{j}-s_{j-1}\right| ^{H(d+2\sum _{l=1}^{d}\alpha _{[\sigma (j)]}^{(l)})}}\mathrm{d}s_{1} \ldots \mathrm{d}s_{2m}. \end{aligned}$$

Theorem 3.1

Suppose that \(\Psi _{k}^{f}(\theta ,t,z),\Psi _{k}^{\varkappa }(\theta ,t)<\infty \). Then, defining \(\Lambda _{\alpha }^{f}(\theta ,t,z)\) as in (3.4) gives a random variable in \(L^{2}(\Omega )\) and there exists a universal constant \(C=C(T,H,d)>0\) such that

$$\begin{aligned} E[\vert \Lambda _{\alpha }^{f}(\theta ,t,z)\vert ^{2}]\le C^{m+\left| \alpha \right| }\Psi _{k}^{f}(\theta ,t,z). \end{aligned}$$
(3.5)

Moreover, we have

$$\begin{aligned} \left| E\left[ \int _{(\mathbb {R}^{d})^{m}}\Lambda _{\alpha }^{f}(\theta ,t,z)\mathrm{d}z\right] \right| \le C^{m/2+\left| \alpha \right| /2}\prod _{j=1}^{m}\left\| f_{j}\right\| _{L^{1}(\mathbb {R}^{d};L^{\infty }([0,T]))}(\Psi _{k}^{\varkappa }(\theta ,t))^{1/2}. \end{aligned}$$
(3.6)

Proof

For notational convenience, we consider \(\theta =0\) and set \(\Lambda _{\alpha }^{f}(t,z)=\Lambda _{\alpha }^{f}(0,t,z).\)

For an integrable function \(g:(\mathbb {R}^{d})^{m}\longrightarrow \mathbb {C}\), we can write

$$\begin{aligned}&\left| \int _{(\mathbb {R}^{d})^{m}}g(u_{1}, \ldots ,u_{m})\mathrm{d}u_{1} \ldots \mathrm{d}u_{m} \right| ^{2} \\&\quad =\int _{(\mathbb {R}^{d})^{m}}g(u_{1}, \ldots ,u_{m})\mathrm{d}u_{1} \ldots \mathrm{d}u_{m}\int _{( \mathbb {R}^{d})^{m}}\overline{g(u_{m+1}, \ldots ,u_{2m})}\mathrm{d}u_{m+1} \ldots \mathrm{d}u_{2m} \\&\quad =\int _{(\mathbb {R}^{d})^{m}}g(u_{1}, \ldots ,u_{m})\mathrm{d}u_{1} \ldots \mathrm{d}u_{m}(-1)^{dm}\\&\qquad \int _{(\mathbb {R}^{d})^{m}}\overline{g(-u_{m+1}, \ldots ,-u_{2m})} \mathrm{d}u_{m+1} \ldots \mathrm{d}u_{2m}, \end{aligned}$$

where we used the change of variables \((u_{m+1}, \ldots ,u_{2m})\longmapsto (-u_{m+1}, \ldots ,-u_{2m})\) in the last equality.

This gives

$$\begin{aligned}&\left| \Lambda _{\alpha }^{f}(\theta ,t,z)\right| ^{2} \\&\quad =(2\pi )^{-2dm}(-1)^{dm}\int _{(\mathbb {R}^{d})^{2m}}\int _{\Delta _{0,t}^{m}}\prod _{j=1}^{m}f_{j}(s_{j},z_{j})(-iu_{j})^{\alpha _{j}}e^{-i\left\langle u_{j},B_{s_{j}}-z_{j}\right\rangle }\mathrm{d}s_{1} \ldots \mathrm{d}s_{m} \\&\qquad \times \int _{\Delta _{0,t}^{m}}\prod _{j=m+1}^{2m}f_{[j]}(s_{j},z_{[j]})(-iu_{j})^{\alpha _{[j]}}e^{-i\left\langle u_{j},B_{s_{j}}-z_{[j]}\right\rangle }\mathrm{d}s_{m+1} \ldots \mathrm{d}s_{2m}\mathrm{d}u_{1} \ldots \mathrm{d}u_{2m} \\&\quad =(2\pi )^{-2dm}(-1)^{dm}\sum _{\sigma \in S(m,m)}\int _{(\mathbb {R} ^{d})^{2m}}\left( \prod _{j=1}^{m}e^{-i\left\langle z_{j},u_{j}+u_{j+m}\right\rangle }\right) \\&\qquad \times \int _{\Delta _{0,t}^{2m}}f_{\sigma }(s,z)\prod _{j=1}^{2m}u_{\sigma (j)}^{\alpha _{[\sigma (j)]}}\exp \left\{ -i\sum _{j=1}^{2m}\left\langle u_{\sigma (j)},B_{s_{j}}\right\rangle \right\} \mathrm{d}s_{1} \ldots \mathrm{d}s_{2m}\mathrm{d}u_{1} \ldots \mathrm{d}u_{2m}, \end{aligned}$$

where we used (2.1) in the last step.

Taking the expectation on both sides yields

$$\begin{aligned}&E[\left| \Lambda _{\alpha }^{f}(\theta ,t,z)\right| ^{2}] =(2\pi )^{-2dm}(-1)^{dm}\sum _{\sigma \in S(m,m)}\int _{(\mathbb {R} ^{d})^{2m}}\left( \prod _{j=1}^{m}e^{-i\left\langle z_{j},u_{j}+u_{j+m}\right\rangle }\right) \nonumber \\&\qquad \times \int _{\Delta _{0,t}^{2m}}f_{\sigma }(s,z)\prod _{j=1}^{2m}u_{\sigma (j)}^{\alpha _{[\sigma (j)]}} \nonumber \\&\qquad \exp \left\{ -\frac{1}{2}\mathrm{Var}\left[ \sum _{j=1}^{2m}\left\langle u_{\sigma (j)},B_{s_{j}}\right\rangle \right] \right\} \mathrm{d}s_{1} \ldots \mathrm{d}s_{2m}\mathrm{d}u_{1} \ldots \mathrm{d}u_{2m} \nonumber \\&\quad =(2\pi )^{-2dm}(-1)^{dm}\sum _{\sigma \in S(m,m)}\int _{(\mathbb {R} ^{d})^{2m}}\left( \prod _{j=1}^{m}e^{-i\left\langle z_{j},u_{j}+u_{j+m}\right\rangle }\right) \nonumber \\&\qquad \times \int _{\Delta _{0,t}^{2m}}f_{\sigma }(s,z)\prod _{j=1}^{2m}u_{\sigma (j)}^{\alpha _{[\sigma (j)]}}\exp \left\{ -\frac{1}{2}\sum _{l=1}^{d}\mathrm{Var}\left[ \sum _{j=1}^{2m}u_{\sigma (j)}^{(l)}B_{s_{j}}^{(1)}\right] \right\} \nonumber \\&\qquad \times \mathrm{d}s_{1} \ldots \mathrm{d}s_{2m}\mathrm{d}u_{1}^{(1)} \ldots \mathrm{d}u_{2m}^{(1)} \ldots \mathrm{d}u_{1}^{(d)} \ldots \mathrm{d}u_{2m}^{(d)} \nonumber \\&\quad =(2\pi )^{-2dm}(-1)^{dm}\sum _{\sigma \in S(m,m)}\int _{(\mathbb {R} ^{d})^{2m}}\left( \prod _{j=1}^{m}e^{-i\left\langle z_{j},u_{j}+u_{j+m}\right\rangle }\right) \nonumber \\&\qquad \times \int _{\Delta _{0,t}^{2m}}f_{\sigma }(s,z)\prod _{j=1}^{2m}u_{\sigma (j)}^{\alpha _{[\sigma (j)]}}\prod _{l=1}^{d}\exp \left\{ -\frac{1}{2}((u_{\sigma (j)}^{(l)})_{1\le j\le 2m})^{T}Q((u_{\sigma (j)}^{(l)})_{1\le j\le 2m})\right\} \nonumber \\&\qquad \times \mathrm{d}s_{1} \ldots \mathrm{d}s_{2m} \mathrm{d}u_{\sigma (1)}^{(1)} \ldots \mathrm{d}u_{\sigma (2m)}^{(1)} \ldots \mathrm{d}u_{\sigma (1)}^{(d)} \ldots \mathrm{d}u_{\sigma (2m)}^{(d)}, \end{aligned}$$
(3.7)

where

$$\begin{aligned} Q=Q(s):=(E[B_{s_{i}}^{(1)}B_{s_{j}}^{(1)}])_{1\le i,j\le 2m}. \end{aligned}$$

Further, we see that

$$\begin{aligned}&\int _{\Delta _{0,t}^{2m}}\left| f_{\sigma }(s,z)\right| \int _{( \mathbb {R}^{d})^{2m}}\prod _{j=1}^{2m}\prod _{l=1}^{d}\left| u_{\sigma (j)}^{(l)}\right| ^{\alpha _{[\sigma (j)]}^{(l)}}\prod _{l=1}^{d}\exp \left\{ -\frac{1}{2}((u_{\sigma (j)}^{(l)})_{1\le j\le 2m})^{T}Q((u_{\sigma (j)}^{(l)})_{1\le j\le 2m})\right\} \nonumber \\&\qquad \mathrm{d}u_{\sigma (1)}^{(1)} \ldots \mathrm{d}u_{\sigma (2m)}^{(1)} \ldots \mathrm{d}u_{\sigma (1)}^{(d)} \ldots \mathrm{d}u_{\sigma (2m)}^{(d)}\mathrm{d}s_{1} \ldots \mathrm{d}s_{2m} \nonumber \\&\quad =\int _{\Delta _{0,t}^{2m}}\left| f_{\sigma }(s,z)\right| \int _{( \mathbb {R}^{d})^{2m}}\prod _{j=1}^{2m}\prod _{l=1}^{d}\left| u_{j}^{(l)}\right| ^{\alpha _{[\sigma (j)]}^{(l)}} \prod _{l=1}^{d}\exp \left\{ -\frac{1}{2}\left\langle Qu^{(l)},u^{(l)}\right\rangle \right\} \nonumber \\&\qquad \mathrm{d}u_{1}^{(1)} \ldots \mathrm{d}u_{2m}^{(1)} \ldots \mathrm{d}u_{1}^{(d)} \ldots \mathrm{d}u_{2m}^{(d)}\mathrm{d}s_{1} \ldots \mathrm{d}s_{2m} \nonumber \\&\quad =\int _{\Delta _{0,t}^{2m}}\left| f_{\sigma }(s,z)\right| \prod _{l=1}^{d}\int _{\mathbb {R}^{2m}}\left( \prod _{j=1}^{2m}\left| u_{j}^{(l)}\right| ^{\alpha _{[\sigma (j)]}^{(l)}}\right) \exp \left\{ -\frac{1}{2}\left\langle Qu^{(l)},u^{(l)}\right\rangle \right\} \mathrm{d}u_{1}^{(l)} \ldots \mathrm{d}u_{2m}^{(l)}\mathrm{d}s_{1} \ldots \mathrm{d}s_{2m}, \end{aligned}$$
(3.8)

where

$$\begin{aligned} u^{(l)}:=(u_{j}^{(l)})_{1\le j\le 2m}. \end{aligned}$$

We have that

$$\begin{aligned}&\int _{\mathbb {R}^{2m}}\left( \prod _{j=1}^{2m}\left| u_{j}^{(l)}\right| ^{\alpha _{[\sigma (j)]}^{(l)}}\right) \exp \left\{ - \frac{1}{2}\left\langle Qu^{(l)},u^{(l)}\right\rangle \right\} \mathrm{d}u_{1}^{(l)} \ldots \mathrm{d}u_{2m}^{(l)} \\&\quad =\frac{1}{(\det Q)^{1/2}}\int _{\mathbb {R}^{2m}}\left( \prod _{j=1}^{2m} \left| \left\langle Q^{-1/2}u^{(l)},e_{j}\right\rangle \right| ^{\alpha _{[\sigma (j)]}^{(l)}}\right) \exp \left\{ -\frac{1}{2}\left\langle u^{(l)},u^{(l)}\right\rangle \right\} \mathrm{d}u_{1}^{(l)} \ldots \mathrm{d}u_{2m}^{(l)}, \end{aligned}$$

where \(e_{i},i=1, \ldots ,2m\) is the standard ONB of \(\mathbb {R}^{2m}\).

We also get that

$$\begin{aligned}&\int _{\mathbb {R}^{2m}}\left( \prod _{j=1}^{2m}\left| \left\langle Q^{-1/2}u^{(l)},e_{j}\right\rangle \right| ^{\alpha _{[\sigma (j)]}^{(l)}}\right) \exp \left\{ -\frac{1}{2}\left\langle u^{(l)},u^{(l)}\right\rangle \right\} \mathrm{d}u_{1}^{(l)} \ldots \mathrm{d}u_{2m}^{(l)} \\&\quad =(2\pi )^{m}E\Bigg [\prod _{j=1}^{2m}\left| \left\langle Q^{-1/2}Z,e_{j}\right\rangle \right| ^{\alpha _{[\sigma (j)]}^{(l)}}\Bigg ], \end{aligned}$$

where

$$\begin{aligned} Z\sim \mathcal {N}(0,I_{2m\times 2m}). \end{aligned}$$

We know from a lemma in [28], which is a type of Brascamp–Lieb inequality, that

$$\begin{aligned}&E\left[ \prod _{j=1}^{2m}\left| \left\langle Q^{-1/2}Z,e_{j}\right\rangle \right| ^{\alpha _{[\sigma (j)]}^{(l)}}\right] \\&\quad \le \sqrt{\mathrm{perm}\left( \Sigma \right) }=\sqrt{\sum _{\pi \in S_{2\left| \alpha ^{(l)}\right| }}\prod _{i=1}^{2\left| \alpha ^{(l)}\right| }a_{i\pi (i)}}, \end{aligned}$$

where \(\mathrm{perm}(\Sigma )\) is the permanent of the covariance matrix \(\Sigma =(a_{ij}) \) of the Gaussian random vector

$$\begin{aligned}&\underset{\alpha _{[\sigma (1)]}^{(l)}\text { times}}{\underbrace{ \Big (\left\langle Q^{-1/2}Z,e_{1}\right\rangle , \ldots ,\left\langle Q^{-1/2}Z,e_{1}\right\rangle }},\underset{\alpha _{[\sigma (2)]}^{(l)} \text { times}}{\underbrace{\left\langle Q^{-1/2}Z,e_{2}\right\rangle , \ldots ,\left\langle Q^{-1/2}Z,e_{2}\right\rangle }}, \ldots ,\\&\quad \underset{\alpha _{[\sigma (2m)]}^{(l)}\text { times}}{\underbrace{\left\langle Q^{-1/2}Z,e_{2m}\right\rangle , \ldots ,\left\langle Q^{-1/2}Z,e_{2m}\right\rangle }}\Big ), \end{aligned}$$

\(\left| \alpha ^{(l)}\right| :=\sum _{j=1}^{m}\alpha _{j}^{(l)}\) and where \(S_{n}\) stands for the permutation group of size n.

In addition, using an upper bound for the permanent of positive semidefinite matrices (see [5]) or by direct computation, we get that

$$\begin{aligned} \mathrm{perm}\left( \Sigma \right) =\sum _{\pi \in S_{2\left| \alpha ^{(l)}\right| }}\prod _{i=1}^{2\left| \alpha ^{(l)}\right| }a_{i\pi (i)}\le (2\left| \alpha ^{(l)}\right| )!\prod _{i=1}^{2\left| \alpha ^{(l)}\right| }a_{ii}. \end{aligned}$$
(3.9)
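The bound (3.9) is easy to check numerically: for a positive semidefinite matrix one has \(|a_{ij}|\le \sqrt{a_{ii}a_{jj}}\), so every one of the \(n!\) terms of the permanent is at most \(\prod _i a_{ii}\). The following stdlib-only sketch (illustrative, not taken from [5]) computes the permanent from its definition.

```python
import random
from itertools import permutations

def perm(A):
    """Permanent by its definition: sum over pi of prod_i A[i][pi(i)]."""
    n = len(A)
    total = 0.0
    for pi in permutations(range(n)):
        p = 1.0
        for i in range(n):
            p *= A[i][pi[i]]
        total += p
    return total

def factorial(n):
    out = 1
    for k in range(2, n + 1):
        out *= k
    return out

random.seed(0)
n = 4
# Random positive semidefinite matrix A = B^T B.
B = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
A = [[sum(B[k][i] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

diag_prod = 1.0
for i in range(n):
    diag_prod *= A[i][i]

# Each of the n! permanent terms is bounded by prod_i a_ii, hence:
print(perm(A) <= factorial(n) * diag_prod)   # True
```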

Let now \(i\in [\sum _{r=1}^{j-1}\alpha _{[\sigma (r)]}^{(l)}+1,\sum _{r=1}^{j}\alpha _{[\sigma (r)]}^{(l)}]\) for some arbitrary fixed \(j\in \{1, \ldots ,2m\}\). Then,

$$\begin{aligned} a_{ii}=E[\left\langle Q^{-1/2}Z,e_{j}\right\rangle \left\langle Q^{-1/2}Z,e_{j}\right\rangle ]. \end{aligned}$$

Further using substitution, we also have that

$$\begin{aligned}&E[\left\langle Q^{-1/2}Z,e_{j}\right\rangle \left\langle Q^{-1/2}Z,e_{j}\right\rangle ] \\&\quad =(\det Q)^{1/2}\frac{1}{(2\pi )^{m}}\int _{\mathbb {R}^{2m}}\left\langle u,e_{j}\right\rangle ^{2}\exp \left( -\frac{1}{2}\left\langle Qu,u\right\rangle \right) \mathrm{d}u_{1} \ldots \mathrm{d}u_{2m} \\&\quad =(\det Q)^{1/2}\frac{1}{(2\pi )^{m}}\int _{\mathbb {R}^{2m}}u_{j}^{2}\exp \left( - \frac{1}{2}\left\langle Qu,u\right\rangle \right) \mathrm{d}u_{1} \ldots \mathrm{d}u_{2m}. \end{aligned}$$

Applying Lemma A.7, we get that

$$\begin{aligned}&\int _{\mathbb {R}^{2m}}u_{j}^{2}\exp \left( -\frac{1}{2}\left\langle Qu,u\right\rangle \right) \mathrm{d}u_{1} \ldots \mathrm{d}u_{2m} \\&\quad =\frac{(2\pi )^{(2m-1)/2}}{(\det Q)^{1/2}}\int _{\mathbb {R}}v^{2}\exp \left( - \frac{1}{2}v^{2}\right) \mathrm{d}v\frac{1}{\sigma _{j}^{2}} \\&\quad =\frac{(2\pi )^{m}}{(\det Q)^{1/2}}\frac{1}{\sigma _{j}^{2}}, \end{aligned}$$

where \(\sigma _{j}^{2}:=\mathrm{Var}\big [B_{s_{j}}^{H}\,\big \vert \,B_{s_{i}}^{H},\,i\in \{1, \ldots ,2m\}\setminus \{j\}\big ].\)

We now want to use strong local non-determinism of the form (see (2.7)): For all \(t\in [0,T],\) \(0<r<t:\)

$$\begin{aligned} \mathrm{Var}\big [B_{t}^{H}\vert B_{s}^{H},\vert t-s\vert \ge r\big ] \ge Kr^{2H}. \end{aligned}$$

The latter implies that

$$\begin{aligned} (\det Q(s))^{1/2}\ge K^{(2m-1)/2}\left| s_{1}\right| ^{H}\left| s_{2}-s_{1}\right| ^{H} \ldots \left| s_{2m}-s_{2m-1}\right| ^{H} \end{aligned}$$

as well as

$$\begin{aligned} \sigma _{j}^{2}\ge K\min \{\left| s_{j}-s_{j-1}\right| ^{2H},\left| s_{j+1}-s_{j}\right| ^{2H}\}. \end{aligned}$$

Thus,

$$\begin{aligned} \prod _{j=1}^{2m}\sigma _{j}^{-2\alpha _{[\sigma (j)]}^{(l)}}\le & {} K^{-2\left| \alpha ^{(l)}\right| }\prod _{j=1}^{2m}\frac{1}{\min \big \{\left| s_{j}-s_{j-1}\right| ^{2H\alpha _{[\sigma (j)]}^{(l)}},\left| s_{j+1}-s_{j}\right| ^{2H\alpha _{[\sigma (j)]}^{(l)}}\big \}} \\\le & {} C^{\left| \alpha ^{(l)}\right| }\prod _{j=1}^{2m}\frac{1}{\left| s_{j}-s_{j-1}\right| ^{4H\alpha _{[\sigma (j)]}^{(l)}}} \end{aligned}$$

for a constant C only depending on H and T.
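As a numerical aside (not part of the proof; the helper names are ours), the conditional variance \(\sigma _{j}^{2}\) has a closed form when one conditions on a single time point, via the standard bivariate Gaussian formula \(\mathrm{Var}[B_{t}^{H}\vert B_{s}^{H}]=t^{2H}-R_{H}(t,s)^{2}/s^{2H}\). The sketch below illustrates that conditioning can only reduce the variance and that \(\sigma _{j}^{2}\) is comparable to the gap \(\vert t-s\vert ^{2H}\).

```python
import math

def R(H, t, s):
    """Covariance of one-dimensional fBm."""
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

def cond_var(H, t, s):
    """Var[B_t^H | B_s^H] for a centered Gaussian pair (standard formula)."""
    return R(H, t, t) - R(H, t, s) ** 2 / R(H, s, s)

H, s, t = 0.3, 0.5, 0.8
v = cond_var(H, t, s)
# Conditioning can only reduce the variance ...
print(0.0 < v <= t ** (2 * H))          # True
# ... and v is comparable to |t - s|^{2H}, in line with the
# strong local non-determinism property quoted below.
print(v / abs(t - s) ** (2 * H))
```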

Hence, it follows from (3.9) that

$$\begin{aligned} \mathrm{perm}\left( \Sigma \right)\le & {} (2\vert \alpha ^{(l)}\vert )!\prod _{i=1}^{2\vert \alpha ^{(l)}\vert }a_{ii} \\\le & {} (2\vert \alpha ^{(l)}\vert )!\prod _{j=1}^{2m}\left( (\det Q)^{1/2}\frac{1}{(2\pi )^{m}}\frac{(2\pi )^{m}}{(\det Q)^{1/2}}\frac{1}{\sigma _{j}^{2}}\right) ^{\alpha _{[\sigma (j)]}^{(l)}} \\\le & {} (2\vert \alpha ^{(l)}\vert )!C^{\vert \alpha ^{(l)}\vert }\prod _{j=1}^{2m} \frac{1}{\vert s_{j}-s_{j-1}\vert ^{4H\alpha _{[\sigma (j)]}^{(l)}}}. \end{aligned}$$

So

$$\begin{aligned}&E\Bigg [\prod _{j=1}^{2m}\left| \left\langle Q^{-1/2}Z,e_{j}\right\rangle \right| ^{\alpha _{[\sigma (j)]}^{(l)}}\Bigg ]\le \sqrt{\mathrm{perm}\left( \Sigma \right) } \\&\quad \le \sqrt{(2\left| \alpha ^{(l)}\right| )!}C^{\left| \alpha ^{(l)}\right| }\prod _{j=1}^{2m}\frac{1}{\left| s_{j}-s_{j-1}\right| ^{2H\alpha _{[\sigma (j)]}^{(l)}}}. \end{aligned}$$

Therefore, we obtain from (3.7) and (3.8) that

$$\begin{aligned}&E[\left| \Lambda _{\alpha }^{f}(\theta ,t,z)\right| ^{2}] \\&\quad \le C^{m}\sum _{\sigma \in S(m,m)}\int _{\Delta _{0,t}^{2m}}\left| f_{\sigma }(s,z)\right| \prod _{l=1}^{d}\int _{\mathbb {R}^{2m}}\left( \prod _{j=1}^{2m}\left| u_{j}^{(l)}\right| ^{\alpha _{[\sigma (j)]}^{(l)}}\right) \exp \left\{ -\frac{1}{2}\left\langle Qu^{(l)},u^{(l)}\right\rangle \right\} \\&\qquad \mathrm{d}u_{1}^{(l)} \ldots \mathrm{d}u_{2m}^{(l)}\mathrm{d}s_{1} \ldots \mathrm{d}s_{2m} \\&\quad \le M^{m}\sum _{\sigma \in S(m,m)}\int _{\Delta _{0,t}^{2m}}\left| f_{\sigma }(s,z)\right| \frac{1}{(\det Q(s))^{d/2}}\prod _{l=1}^{d}\sqrt{(2\left| \alpha ^{(l)}\right| )!}C^{\left| \alpha ^{(l)}\right| }\prod _{j=1}^{2m}\frac{1}{\left| s_{j}-s_{j-1}\right| ^{2H\alpha _{[\sigma (j)]}^{(l)}}} \mathrm{d}s_{1} \ldots \mathrm{d}s_{2m} \\&\quad =M^{m}C^{\left| \alpha \right| }\prod _{l=1}^{d}\sqrt{(2\left| \alpha ^{(l)}\right| )!}\sum _{\sigma \in S(m,m)}\int _{\Delta _{0,t}^{2m}}\left| f_{\sigma }(s,z)\right| \prod _{j=1}^{2m}\frac{1}{\left| s_{j}-s_{j-1}\right| ^{H(d+2\sum _{l=1}^{d}\alpha _{[\sigma (j)]}^{(l)})}}\mathrm{d}s_{1} \ldots \mathrm{d}s_{2m} \end{aligned}$$

for a constant M depending on d.

Finally, we show estimate (3.6). Using the inequality (3.5), we find that

$$\begin{aligned}&\left| E\left[ \int _{(\mathbb {R}^{d})^{m}}\Lambda _{\alpha }^{\varkappa f}(\theta ,t,z)\mathrm{d}z\right] \right| \\&\quad \le \int _{(\mathbb {R}^{d})^{m}}(E[\vert \Lambda _{\alpha }^{\varkappa f}(\theta ,t,z)\vert ^{2}])^{1/2}\mathrm{d}z\le C^{m/2+\left| \alpha \right| /2}\int _{( \mathbb {R}^{d})^{m}}(\Psi _{k}^{\varkappa f}(\theta ,t,z))^{1/2}\mathrm{d}z. \end{aligned}$$

Taking the supremum over [0, T] for each function \(f_{j}\), i.e.,

$$\begin{aligned} \left| f_{[\sigma (j)]}(s_{j},z_{[\sigma (j)]})\right| \le \sup _{s_{j}\in [0,T]}\left| f_{[\sigma (j)]}(s_{j},z_{[\sigma (j)]})\right| ,j=1, \ldots ,2m \end{aligned}$$

one obtains that

$$\begin{aligned}&\left| E\left[ \int _{(\mathbb {R}^{d})^{m}}\Lambda _{\alpha }^{\varkappa f}(\theta ,t,z)\mathrm{d}z\right] \right| \\&\quad \le C^{m+\left| \alpha \right| }\max _{\sigma \in S(m,m)}\int _{(\mathbb {R}^{d})^{m}}\left( \prod _{l=1}^{2m}\left\| f_{[\sigma (l)]}(\cdot ,z_{[\sigma (l)]})\right\| _{L^{\infty }([0,T])}\right) ^{1/2}\mathrm{d}z \\&\qquad \times \Bigg (\prod _{l=1}^{d}\sqrt{(2\left| \alpha ^{(l)}\right| )!}\sum _{\sigma \in S(m,m)}\int _{\Delta _{0,t}^{2m}}\left| \varkappa _{\sigma }(s)\right| \prod _{j=1}^{2m}\frac{1}{\left| s_{j}-s_{j-1}\right| ^{H(d+2\sum _{l=1}^{d}\alpha _{[\sigma (j)]}^{(l)})}}\mathrm{d}s_{1} \ldots \mathrm{d}s_{2m}\Bigg )^{1/2} \\&\quad =C^{m+\left| \alpha \right| }\max _{\sigma \in S(m,m)}\int _{(\mathbb {R}^{d})^{m}}\left( \prod _{l=1}^{2m}\left\| f_{[\sigma (l)]}(\cdot ,z_{[\sigma (l)]})\right\| _{L^{\infty }([0,T])}\right) ^{1/2}\mathrm{d}z\cdot (\Psi _{k}^{\varkappa }(\theta ,t))^{1/2} \\&\quad =C^{m+\left| \alpha \right| }\int _{(\mathbb {R}^{d})^{m}}\prod _{j=1}^{m}\left\| f_{j}(\cdot ,z_{j})\right\| _{L^{\infty }([0,T])}\mathrm{d}z\cdot (\Psi _{k}^{\varkappa }(\theta ,t))^{1/2} \\&\quad =C^{m+\left| \alpha \right| }\prod _{j=1}^{m}\left\| f_{j}\right\| _{L^{1}(\mathbb {R}^{d};L^{\infty }([0,T]))}\cdot (\Psi _{k}^{\varkappa }(\theta ,t))^{1/2}. \end{aligned}$$

\(\square \)

The next result is a key estimate, which shows why the fractional Brownian motion regularizes (1.1); it rests on the integration by parts formula derived above. The estimate takes a more explicit form when the functions \(\varkappa _j\) are chosen to be

$$\begin{aligned} \varkappa _j(s) = (K_H(s,\theta )-K_H(s,\theta '))^{\varepsilon _j}, \quad \theta< s < t \end{aligned}$$

and

$$\begin{aligned} \varkappa _j(s) = (K_H(s,\theta ))^{\varepsilon _j}, \quad \theta< s < t \end{aligned}$$

for every \(j=1,\dots ,m\) with \((\varepsilon _1,\dots , \varepsilon _{m})\in \{0,1\}^{m}\). The importance of these choices will become clear in the forthcoming section.

Proposition 3.2

Let \(B^{H},H\in (0,1/2),\) be a standard d-dimensional fractional Brownian motion and let f and \(\varkappa \) be functions as in (3.1) and (3.2), respectively. Let \(\theta ,\theta ',t\in [0,T]\) with \(\theta '<\theta <t\) and

$$\begin{aligned} \varkappa _{j}(s)=(K_{H}(s,\theta )-K_{H}(s,\theta '))^{\varepsilon _{j}},\quad \theta<s<t \end{aligned}$$

for every \(j=1, \ldots ,m\) with \((\varepsilon _{1}, \ldots ,\varepsilon _{m})\in \{0,1\}^{m}\). Let \(\alpha \in (\mathbb {N}_{0}^{d})^{m}\) be a multi-index. If

$$\begin{aligned} H<\frac{\frac{1}{2}-\gamma }{\left( d-1+2\sum _{l=1}^{d}\alpha _{ j}^{(l)}\right) } \end{aligned}$$

for all j, where \(\gamma \in (0,H)\) is sufficiently small, then there exists a universal constant C (depending on H, T and d, but independent of m, \(\{f_{i}\}_{i=1, \ldots ,m}\) and \(\alpha \)) such that for any \(\theta ,t\in [0,T]\) with \(\theta <t\) we have

$$\begin{aligned}&\left| E\int _{\Delta _{\theta ,t}^{m}}\left( \prod _{j=1}^{m}D^{\alpha _{j}}f_{j}(s_{j},B_{s_{j}}^{H})\varkappa _{j}(s_{j})\right) \mathrm{d}s\right| \\&\quad \le C^{m+\left| \alpha \right| }\prod _{j=1}^{m}\left\| f_{j}\right\| _{L^{1}(\mathbb {R}^{d};L^{\infty }([0,T]))}\left( \frac{\theta -\theta '}{\theta \theta '}\right) ^{\gamma \sum _{j=1}^{m}\varepsilon _{j}}\theta ^{(H-\frac{1}{2}-\gamma )\sum _{j=1}^{m}\varepsilon _{j}} \\&\qquad \times \frac{\Bigg (\prod _{l=1}^{d}(2\left| \alpha ^{(l)}\right| )!\Bigg )^{1/4}(t-\theta )^{-H(md+2\left| \alpha \right| )+(H-\frac{1}{2} -\gamma )\sum _{j=1}^{m}\varepsilon _{j}+m}}{\Gamma (-H(2md+4\left| \alpha \right| )+2(H-\frac{1}{2}-\gamma )\sum _{j=1}^{m}\varepsilon _{j}+2m)^{1/2}}. \end{aligned}$$

Proof

By the definition of \(\Lambda _{\alpha }^{\varkappa f}\) in (3.4), it immediately follows that the integral in the proposition can be expressed as

$$\begin{aligned} \int _{\Delta _{\theta ,t}^{m}}\left( \prod _{j=1}^{m}D^{\alpha _{j}}f_{j}(s_{j},B_{s_{j}}^{H})\varkappa _{j}(s_{j})\right) \mathrm{d}s=\int _{\mathbb { R}^{dm}}\Lambda _{\alpha }^{\varkappa f}(\theta ,t,z)\mathrm{d}z. \end{aligned}$$

Taking expectation and using Theorem 3.1, we obtain

$$\begin{aligned}&\left| E\int _{\Delta _{\theta ,t}^{m}}\left( \prod _{j=1}^{m}D^{\alpha _{j}}f_{j}(s_{j},B_{s_{j}}^{H})\varkappa _{j}(s_{j})\right) \mathrm{d}s\right| \\&\quad \le C^{m+\left| \alpha \right| }\prod _{j=1}^{m}\left\| f_{j}\right\| _{L^{1}(\mathbb {R}^{d};L^{\infty }([0,T]))}\cdot (\Psi _{k}^{\varkappa }(\theta ,t))^{1/2}, \end{aligned}$$

where in this situation

$$\begin{aligned}&\Psi _{k}^{\varkappa }(\theta ,t) \\&\quad :=\prod _{l=1}^{d}\sqrt{(2\left| \alpha ^{(l)}\right| )!} \sum _{\sigma \in S(m,m)}\int _{\Delta _{0,t}^{2m}}\prod _{j=1}^{2m}(K_{H}(s_{j},\theta )-K_{H}(s_{j},\theta '))^{\varepsilon _{[\sigma (j)]}} \\&\quad \frac{1}{\left| s_{j}-s_{j-1}\right| ^{H(d+2\sum _{l=1}^{d}\alpha _{[\sigma (j)]}^{(l)})}}\mathrm{d}s_{1} \ldots \mathrm{d}s_{2m}. \end{aligned}$$

We want to apply Lemma A.8. For this, we need that \( -H(d+2\sum _{l=1}^{d}\alpha _{[\sigma (j)]}^{(l)})+(H-\frac{1}{2} -\gamma )\varepsilon _{[\sigma (j)]}>-1\) for all \(j=1, \ldots ,2m.\) The worst case occurs when \(\varepsilon _{[\sigma (j)]}=1\) for all j, which leads to the condition \( H<\frac{\frac{1}{2}-\gamma }{(d-1+2\sum _{l=1}^{d}\alpha _{[\sigma (j)]}^{(l)})}\) for all j. Hence, we have

$$\begin{aligned}&\Psi _{k}^{\varkappa }(\theta ,t) \le \sum _{\sigma \in S(m,m)}\left( \frac{\theta -\theta '}{\theta \theta '}\right) ^{\gamma \sum _{j=1}^{2m}\varepsilon _{[\sigma (j)]}}\theta ^{(H-\frac{1}{2} -\gamma )\sum _{j=1}^{2m}\varepsilon _{[\sigma (j)]}} \\&\quad \times \prod _{l=1}^{d}\sqrt{(2\left| \alpha ^{(l)}\right| )!}\Pi _{\gamma }(2m)(t-\theta )^{-H(2md+4\left| \alpha \right| )+(H- \frac{1}{2}-\gamma )\sum _{j=1}^{2m}\varepsilon _{[\sigma (j)]}+2m}, \end{aligned}$$

where \(\Pi _{\gamma }(m)\) is defined as in Lemma A.8. The latter can be bounded above as follows

$$\begin{aligned} \Pi _{\gamma }(2m)\le \frac{\prod _{j=1}^{2m}\Gamma \left( 1-H\left( d+2\sum _{l=1}^{d}\alpha _{[\sigma (j)]}^{(l)}\right) \right) }{\Gamma \left( -H(2md+4\left| \alpha \right| )+(H-\frac{1}{2}-\gamma )\sum _{j=1}^{2m}\varepsilon _{[\sigma (j)]}+2m\right) }. \end{aligned}$$

Observe that \(\sum _{j=1}^{2m}\varepsilon _{[\sigma (j)]}=2\sum _{j=1}^{m}\varepsilon _{j}.\) Therefore, we have that

$$\begin{aligned}&(\Psi _{k}^{\varkappa }(\theta ,t))^{1/2} \\&\quad \le C^{m}\left( \frac{\theta -\theta '}{\theta \theta '} \right) ^{\gamma \sum _{j=1}^{m}\varepsilon _{j}}\theta ^{(H-\frac{1}{2} -\gamma )\sum _{j=1}^{m}\varepsilon _{j}} \\&\qquad \times \frac{\Bigg (\prod _{l=1}^{d}(2\left| \alpha ^{(l)}\right| )!\Bigg )^{1/4}(t-\theta )^{-H(md+2\left| \alpha \right| )+(H-\frac{1}{2} -\gamma )\sum _{j=1}^{m}\varepsilon _{j}+m}}{\Gamma (-H(2md+4\left| \alpha \right| )+2(H-\frac{1}{2}-\gamma )\sum _{j=1}^{m}\varepsilon _{j}+2m)^{1/2}}, \end{aligned}$$

where we used \(\prod _{j=1}^{2m}\Gamma (1-H(d+2\sum _{l=1}^{d}\alpha _{[\sigma (j)]}^{(l)}))\le C^{m}\) for a large enough constant \(C>0\) and \(\sqrt{a_{1}+\cdots +a_{m}}\le \sqrt{a_{1}}+\cdots +\sqrt{ a_{m}}\) for arbitrary nonnegative numbers \(a_{1}, \ldots ,a_{m}\). \(\square \)

Proposition 3.3

Let \(B^{H},H\in (0,1/2),\) be a standard d-dimensional fractional Brownian motion and let f and \(\varkappa \) be functions as in (3.1) and (3.2), respectively. Let \(\theta ,t\in [0,T]\) with \(\theta <t\) and

$$\begin{aligned} \varkappa _{j}(s)=(K_{H}(s,\theta ))^{\varepsilon _{j}},\quad \theta<s<t \end{aligned}$$

for every \(j=1, \ldots ,m\) with \((\varepsilon _{1}, \ldots ,\varepsilon _{m})\in \{0,1\}^{m}\). Let \(\alpha \in (\mathbb {N}_{0}^{d})^{m}\) be a multi-index. If

$$\begin{aligned} H<\frac{\frac{1}{2}-\gamma }{(d-1+2\sum _{l=1}^{d}\alpha _{ j}^{(l)})} \end{aligned}$$

for all j, where \(\gamma \in (0,H)\) is sufficiently small, then there exists a universal constant C (depending on H, T and d, but independent of m, \(\{f_{i}\}_{i=1, \ldots ,m}\) and \(\alpha \)) such that for any \(\theta ,t\in [0,T]\) with \(\theta <t\) we have

$$\begin{aligned}&\left| E\int _{\Delta _{\theta ,t}^{m}}\left( \prod _{j=1}^{m}D^{\alpha _{j}}f_{j}(s_{j},B_{s_{j}}^{H})\varkappa _{j}(s_{j})\right) \mathrm{d}s\right| \\&\quad \le C^{m+\left| \alpha \right| }\prod _{j=1}^{m}\left\| f_{j}(\cdot ,z_{j})\right\| _{L^{1}(\mathbb {R}^{d};L^{\infty }([0,T]))}\theta ^{(H-\frac{1}{2})\sum _{j=1}^{m}\varepsilon _{j}} \\&\qquad \times \frac{\left( \prod _{l=1}^{d}(2\vert \alpha ^{(l)}\vert )!\right) ^{1/4}(t-\theta )^{-H(md+2\vert \alpha \vert )+(H-\frac{1}{2} -\gamma )\sum _{j=1}^{m}\varepsilon _{j}+m}}{\Gamma \left( -H(2md+4\vert \alpha \vert )+2\left( H-\frac{1}{2}-\gamma \right) \sum _{j=1}^{m}\varepsilon _{j}+2m\right) ^{1/2}}. \end{aligned}$$

Proof

The proof is similar to that of the previous proposition. \(\square \)

Remark 3.4

We mention that

$$\begin{aligned} \prod _{l=1}^{d}(2\left| \alpha ^{(l)}\right| )!\le (2\left| \alpha \right| )!C^{\left| \alpha \right| } \end{aligned}$$

for a constant C depending on d. Later on in the paper, when we deal with the existence of strong solutions, we will consider the case

$$\begin{aligned} \alpha _{j}^{(l)}\in \{0,1\}\quad \text {for all}\quad j,l \end{aligned}$$

with

$$\begin{aligned} \left| \alpha \right| =m. \end{aligned}$$
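As a quick sanity check of the factorial estimate in Remark 3.4, note that \((2\left| \alpha \right| )!/\prod _{l=1}^{d}(2\left| \alpha ^{(l)}\right| )!\) is a multinomial coefficient and hence at least 1, so the estimate even holds with \(C=1\). A stdlib-only verification for a few column sums \(\left| \alpha ^{(l)}\right| \) (an illustrative sketch, not from the paper):

```python
from math import factorial

def lhs(per_component):
    """prod_l (2|alpha^{(l)}|)! for given column sums |alpha^{(l)}|."""
    out = 1
    for a in per_component:
        out *= factorial(2 * a)
    return out

# The multinomial coefficient (2|alpha|)! / prod_l (2|alpha^{(l)}|)! is >= 1,
# so prod_l (2|alpha^{(l)}|)! <= (2|alpha|)! always holds.
for cols in [(2, 3), (1, 1, 4), (0, 5), (3, 3, 3)]:
    total = sum(cols)
    assert lhs(cols) <= factorial(2 * total)
print("ok")
```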

4 Local Times of a Fractional Brownian Motion and Properties

One can define, heuristically, the local time \(L_{t}^{x}\left( B^{H}\right) \) of \(B^{H}\) at \(x\in \mathbb {R}^{d}\) by

$$\begin{aligned} L_{t}^{x}\left( B^{H}\right) =\int _{0}^{t}\delta _{x}(B_{s}^{H})\mathrm{d}s. \end{aligned}$$

It is known that \(L_{t}^{x}\left( B^{H}\right) \) exists and is jointly continuous in \(\left( t,x\right) \) as long as \(Hd<1\). See, e.g., [39] and the references therein. Moreover, by the self-similarity property of the fBm one has that \(L_{t}^{x}\left( B^{H}\right) \overset{law}{=}t^{1-Hd}L_{1}^{x/t^{H}}(B^{H})\) and, in particular

$$\begin{aligned} L_{t}^{0}\left( B^{H}\right) \overset{law}{=}t^{1-Hd}L_{1}^{0}(B^{H}). \end{aligned}$$

The rigorous construction of \(L_{t}^{x}\left( B^{H}\right) \) involves approximating the Dirac delta function by an approximation of the identity. It is convenient to consider the Gaussian approximation

$$\begin{aligned} \varphi _{\varepsilon }(x)=\varepsilon ^{-\frac{d}{2}}\varphi \left( \varepsilon ^{-\frac{1}{2}}x\right) ,\quad \varepsilon >0, \end{aligned}$$

for every \(x\in \mathbb {R}^d\) where \(\varphi \) is the d-dimensional standard Gaussian density. Then, we can define the smoothed local times

$$\begin{aligned} L_{t}^{x}(B^{H},\varepsilon )=\int _{0}^{t}\varphi _{\varepsilon }(B_{s}^{H}-x)\mathrm{d}s \end{aligned}$$

and construct \(L_{t}^{x}(B^{H})\) as the limit when \(\varepsilon \) tends to zero in \(L^{2}(\Omega )\). Note that, using the Fourier transform, one can write \(\varphi _{\varepsilon }(x)\) as follows

$$\begin{aligned} \varphi _{\varepsilon }(x)=\frac{1}{\left( 2\pi \right) ^{d}}\int _{\mathbb {R}^{d}}\exp \left( i\left\langle \xi ,x\right\rangle _{\mathbb {R}^{d}}-\varepsilon \frac{\left| \xi \right| _{\mathbb {R}^{d}}^{2}}{2}\right) \mathrm{d}\xi . \end{aligned}$$
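This Fourier representation can be checked numerically in dimension \(d=1\); the following stdlib-only sketch (the function names are ours) compares a Riemann sum of the \(\xi \)-integral with the scaled Gaussian density:

```python
import math

def phi_eps_fourier(x, eps, L=30.0, n=60000):
    """Midpoint Riemann sum of (2*pi)^{-1} * int exp(i*xi*x - eps*xi^2/2) dxi.
    The imaginary part cancels by symmetry, so we integrate the cosine."""
    h = 2 * L / n
    total = 0.0
    for k in range(n):
        xi = -L + (k + 0.5) * h
        total += math.cos(xi * x) * math.exp(-eps * xi * xi / 2) * h
    return total / (2 * math.pi)

def phi_eps_direct(x, eps):
    """eps^{-1/2} * phi(eps^{-1/2} x) with phi the standard Gaussian density (d = 1)."""
    return math.exp(-x * x / (2 * eps)) / math.sqrt(2 * math.pi * eps)

x, eps = 0.3, 0.5
print(abs(phi_eps_fourier(x, eps) - phi_eps_direct(x, eps)) < 1e-6)   # True
```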

The Fourier representation of \(\varphi _{\varepsilon }\) allows us to write

$$\begin{aligned} L_{t}^{x}\left( B^{H},\varepsilon \right) =\frac{1}{\left( 2\pi \right) ^{d}}\int _{0}^{t}\int _{\mathbb {R}^{d}}\exp \left( i\left\langle \xi ,B_{s}^{H}-x\right\rangle _{\mathbb {R}^{d}}-\varepsilon \frac{\left| \xi \right| _{\mathbb {R}^{d}}^{2}}{2}\right) \mathrm{d}\xi \mathrm{d}s, \end{aligned}$$

and

$$\begin{aligned} \mathbb {E}\left[ L_{t}^{x}\left( B^{H},\varepsilon \right) ^{m}\right]&= \frac{m!}{\left( 2\pi \right) ^{md}}\int _{\mathcal {T}_{m}(0,t)}\int _{\mathbb {R}^{md}}\mathbb {E}\left[ \exp \left( i\sum _{j=1}^{m}\left\langle \xi _{j},B_{s_{j}}^{H}\right\rangle _{\mathbb {R}^{d}}\right) \right] \nonumber \\&\quad \times \exp \left( -\sum _{j=1}^{m}\left( i\left\langle \xi _{j},x\right\rangle _{\mathbb {R}^{d}}+\frac{\varepsilon \left| \xi _{j}\right| _{\mathbb {R}^{d}}^{2}}{2}\right) \right) \mathrm{d}{\bar{\varvec{\xi }}}\mathrm{d}\mathbf {s}, \end{aligned}$$
(4.1)

where \({\bar{\varvec{\xi }}}=(\xi _{1}, \ldots ,\xi _{m})=(\xi _{1}^{1}, \ldots ,\xi _{1}^{d},\ldots ,\xi _{m}^{1}, \ldots ,\xi _{m}^{d})\in \mathbb {R}^{md}\) and \(\mathbf {s}=\left( s_{1}, \ldots ,s_{m}\right) \in \mathcal {T}_{m}(0,t)=\left\{ 0\le s_{1}<s_{2}<\cdots <s_{m}\le t\right\} \). Next, note that

$$\begin{aligned} \mathbb {E}\left[ \exp \left( i\sum _{j=1}^{m}\left\langle \xi _{j},B_{s_{j}}^{H}\right\rangle _{\mathbb {R}^{d}}\right) \right]= & {} \exp \left( -\frac{1}{2}\mathrm {Var}\left[ \sum _{j=1}^{m}\sum _{k=1}^{d}\xi _{j}^{k}B_{s_{j}}^{H,k}\right] \right) \\= & {} \exp \left( -\frac{1}{2}\sum _{k=1}^{d}\mathrm {Var}\left[ \sum _{j=1}^{m}\xi _{j}^{k}B_{s_{j}}^{H,k}\right] \right) \\= & {} \exp \left( -\frac{1}{2}\sum _{k=1}^{d}\left\langle \xi ^{k},Q(\mathbf {s})\xi ^{k}\right\rangle _{\mathbb {R}{}^{m}}\right) , \end{aligned}$$

where \(\xi ^{k}=\left( \xi _{1}^{k}, \ldots ,\xi _{m}^{k}\right) \) and \(Q(\mathbf {s})\) is the covariance matrix of the vector \(\left( B_{s_{1}}^{H,1}, \ldots ,B_{s_{m}}^{H,1}\right) \). Rearranging the terms in the second exponential in Eq. (4.1), we can write

$$\begin{aligned}&\mathbb {E}\left[ L_{t}^{x}\left( B^{H},\varepsilon \right) ^{m}\right] \\&\quad = \frac{m!}{\left( 2\pi \right) ^{md}}\int _{\mathcal {T}_{m}(0,t)}\int _{\mathbb {R}^{md}}\exp \left( -\frac{1}{2}\sum _{k=1}^{d}\left( {\left\langle \xi ^{k},Q(\mathbf {s})\xi ^{k}\right\rangle }_{\mathbb {R}{}^{m}}+\varepsilon \left| \xi ^{k}\right| _{\mathbb {R}{}^{m}}^{2}\right) \right) \\&\qquad \times \exp \left( -i\sum _{j=1}^{m}\left\langle \xi _{j},x\right\rangle _{\mathbb {R}^{d}}\right) \mathrm{d}{\bar{\varvec{\xi }}}\mathrm{d}\mathbf {s}\\&\quad \le \frac{m!}{\left( 2\pi \right) ^{md}}\int _{\mathcal {T}_{m}(0,t)}\left( \int _{\mathbb {R}^{m}}\exp \left( -\frac{1}{2}{\left\langle \xi ^{1},Q(\mathbf {s})\xi ^{1}\right\rangle }_{\mathbb {R}{}^{m}}-\frac{\varepsilon \left| \xi ^{1}\right| _{\mathbb {R}{}^{m}}^{2}}{2}\right) \mathrm{d}\xi ^{1}\right) ^{d}\mathrm{d}\mathbf {s}\\&\quad \le \frac{m!}{\left( 2\pi \right) ^{md}}\int _{\mathcal {T}_{m}(0,t)}\left( \int _{\mathbb {R}^{m}}\exp \left( -\frac{1}{2}{\left\langle \xi ^{1},Q(\mathbf {s})\xi ^{1}\right\rangle }\right) \mathrm{d}\xi ^{1}\right) ^{d}\mathrm{d}\mathbf {s}\\&\quad = \frac{m!}{\left( 2\pi \right) ^{\frac{dm}{2}}}\int _{\mathcal {T}_{m}(0,t)}\left( \det Q(\mathbf {s})\right) ^{-\frac{d}{2}}\mathrm{d}\mathbf {s}\triangleq \alpha _{m}. \end{aligned}$$

Hence, by dominated convergence, we can conclude that \(\mathbb {E}\left[ L_{t}^{x}\left( B^{H},\varepsilon \right) ^{m}\right] \) converges when \(\varepsilon \) tends to zero as long as \(\alpha _{m}<\infty .\) If \(\alpha _{2}<\infty \), then one can similarly show that

$$\begin{aligned} \lim _{\varepsilon _{1},\varepsilon _{2}\rightarrow 0+}\mathbb {E}\left[ L_{t}^{x} \left( B^{H},\varepsilon _{1}\right) L_{t}^{x}\left( B^{H},\varepsilon _{2}\right) \right] \end{aligned}$$

exists, which yields the convergence in \(L^{2}\left( \Omega \right) \) of \(L_{t}^{x}\left( B^{H},\varepsilon \right) .\) If \(\alpha _{m}<\infty \) for all \(m\ge 1\), one can deduce the convergence in \(L^{p}\left( \Omega \right) ,p\ge 2\) of \(L_{t}^{x}\left( B^{H},\varepsilon \right) \).

The following well-known result can be found in Anderson [6, p. 42].

Lemma 4.1

Let \(\left( X_{1},\ldots ,X_{m}\right) \) be a mean-zero Gaussian random vector. Then,

$$\begin{aligned} \det \left( \mathrm {Cov}\left[ X_{1},\ldots ,X_{m}\right] \right) =\mathrm {Var}\left[ X_{1}\right] \mathrm {Var}\left[ X_{2}|X_{1}\right] \cdots \mathrm {Var}\left[ X_{m}|X_{m-1},\ldots ,X_{1}\right] . \end{aligned}$$
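Lemma 4.1 can be verified numerically for a concrete covariance matrix. The following stdlib-only sketch (the matrix and helper names are illustrative, not from [6]) computes the conditional variances by the usual Gaussian conditioning (Schur complement) formulas:

```python
# Covariance matrix C = A A^T of a 3-dimensional centered Gaussian vector.
A = [[2.0, 0.0, 0.0],
     [1.0, 1.5, 0.0],
     [0.5, -0.3, 1.0]]
C = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(3)] for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

v1 = C[0][0]                                   # Var[X1]
v2 = C[1][1] - C[1][0] ** 2 / C[0][0]          # Var[X2 | X1]
# Var[X3 | X1, X2] = c33 - b^T B^{-1} b with B the top-left 2x2 block.
detB = C[0][0] * C[1][1] - C[0][1] ** 2
Binv = [[C[1][1] / detB, -C[0][1] / detB],
        [-C[1][0] / detB, C[0][0] / detB]]
b = [C[2][0], C[2][1]]
v3 = C[2][2] - sum(b[i] * Binv[i][j] * b[j] for i in range(2) for j in range(2))

# Lemma 4.1: det(Cov) = Var[X1] * Var[X2|X1] * Var[X3|X1, X2].
print(abs(det3(C) - v1 * v2 * v3) < 1e-9)      # True
```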

Another useful elementary result is:

Lemma 4.2

Let X be a square integrable random variable and \(\mathcal {G}_{1}\subset \mathcal {G}_{2}\) be two \(\sigma \)-algebras. Then,

$$\begin{aligned} \mathrm {Var}\left[ X|\mathcal {G}_{1}\right] \ge \mathrm {Var}\left[ X|\mathcal {G}_{2}\right] . \end{aligned}$$

Combining Lemmas 4.1, 4.2 and (2.7), we get that

$$\begin{aligned} \det Q(\mathbf {s})= & {} \mathrm {Var}\left[ B_{s_{1}}^{H,1}\right] \mathrm {Var}\left[ B_{s_{2}}^{H,1}|B_{s_{1}}^{H,1}\right] \cdots \mathrm {Var}\left[ B_{s_{m}}^{H,1}|B_{s_{m-1}}^{H,1},\ldots ,B_{s_{1}}^{H,1}\right] \\\ge & {} s_{1}^{2H}\mathrm {Var}\left[ B_{s_{2}}^{H,1}|\mathcal {F}_{s_{1}}\right] \cdots \mathrm {Var}\left[ B_{s_{m}}^{H,1}|\mathcal {F}_{s_{m-1}}\right] \\\ge & {} K^{m-1}s_{1}^{2H}\left( s_{2}-s_{1}\right) ^{2H}\cdots \left( s_{m}-s_{m-1}\right) ^{2H} \end{aligned}$$

and, therefore,

$$\begin{aligned}&\int _{\mathcal {T}_{m}\left( 0,t\right) }\left( \det Q(\mathbf {s})\right) ^{-\frac{d}{2}}\mathrm{d}\mathbf {s} \le K^{\frac{d}{2}(1-m)}\int _{\mathcal {T}_{m}\left( 0,t\right) }s_{1}^{-Hd}\left( s_{2}-s_{1}\right) ^{-Hd}\cdots \left( s_{m}-s_{m-1}\right) ^{-Hd}\mathrm{d}\mathbf {s}\\&\quad \le K^{\frac{d}{2}(1-m)}\left( \prod _{j=1}^{m}\mathcal {B}\left( j\left( 1-Hd\right) ,1-Hd\right) \right) t^{m\left( 1-Hd\right) }<\infty , \end{aligned}$$

if \(Hd < 1\). Hence, we have proved the bound

$$\begin{aligned} \mathbb {E}\left[ L_{t}^{x}\left( B^{H}\right) ^{m}\right] \le \frac{m!}{\left( 2\pi \right) ^{\frac{dm}{2}}}K^{\frac{d}{2}(1-m)}\left( \prod _{j=1}^{m}\mathcal {B}\left( j\left( 1-Hd\right) ,1-Hd\right) \right) t^{m\left( 1-Hd\right) } \end{aligned}$$
(4.2)
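For concreteness, the right-hand side of (4.2) is straightforward to evaluate. The sketch below does so for sample parameter values; the choices of H, d, m, t and the normalization \(K=1\) are assumptions of this sketch, not values from the text.

```python
import math

# Evaluate the right-hand side of the moment bound (4.2);
# H, d, m, t and K below are sample choices for illustration.
def beta(a, b):
    # Beta function via log-gamma for numerical stability
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

def moment_bound(H, d, m, t, K=1.0):
    assert H * d < 1, "the bound (4.2) requires Hd < 1"
    prod = 1.0
    for j in range(1, m + 1):
        prod *= beta(j * (1 - H * d), 1 - H * d)
    return (math.factorial(m) / (2 * math.pi) ** (d * m / 2)
            * K ** (d * (1 - m) / 2)
            * prod * t ** (m * (1 - H * d)))

print(moment_bound(H=0.2, d=2, m=3, t=1.0))
```

The factorial growth of the prefactor against the Beta-function product is what drives the exponential-moment discussion in Remark 4.3.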

Remark 4.3

We have just checked that if \(Hd<1\), then \(L_{t}^{x}\left( B^{H}\right) \) exists and has moments of all orders. By checking that \(\sum _{m\ge 1}\frac{\alpha _{m}}{m!}<\infty \), one can deduce that \(L_{t}^{x}\left( B^{H}\right) \) has exponential moments of all orders. Furthermore, one can also show the existence of exponential moments of \(L_{t}^{x}\left( B^{H}\right) ^{2}\) by similar computations. Alternatively, one may use Theorem 4.4 to show that the exponential moments are finite.

Chen et al. [12] proved the following result on large deviations for local times of fractional Brownian motion, which we will not use in our paper but which is of independent interest:

Theorem 4.4

Let \(B^{H}\) be a standard fractional Brownian motion with Hurst index H such that \(Hd<1\). Then, the limit

$$\begin{aligned} \lim _{a\rightarrow \infty }a^{-\frac{1}{Hd}}\log P\left( L_{1}^{0}(B^{H})\ge a\right) =-\theta (H,d), \end{aligned}$$

exists and \(\theta (H,d)\) satisfies the following bounds

$$\begin{aligned} \left( \frac{\pi c_{H}^{2}}{H}\right) ^{\frac{1}{2H}}\theta _{0}(Hd)\le \theta (H,d)\le \left( 2\pi \right) ^{\frac{1}{2H}}\theta _{0}(Hd), \end{aligned}$$

where \(c_{H}\) is the constant introduced earlier and

$$\begin{aligned} \theta _{0}(\lambda )=\lambda \left( \frac{(1-\lambda )^{1-\lambda }}{\Gamma (1-\lambda )}\right) ^{1/\lambda }. \end{aligned}$$
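The constant \(\theta _{0}\) is fully explicit, so it can be evaluated directly with the Gamma function. As a small numerical illustration (a sketch using only the formula above), note the closed form \(\theta _{0}(1/2)=1/(4\pi )\), since \(\Gamma (1/2)=\sqrt{\pi }\):

```python
import math

# theta_0(lambda) = lambda * ((1-lambda)^(1-lambda) / Gamma(1-lambda))^(1/lambda)
def theta0(lam):
    assert 0 < lam < 1
    return lam * ((1 - lam) ** (1 - lam) / math.gamma(1 - lam)) ** (1 / lam)

# At lambda = 1/2: theta0 = 0.5 * (sqrt(0.5)/sqrt(pi))^2 = 1/(4*pi)
print(theta0(0.5))  # -> 0.07957747154594767
```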

5 Existence of Strong Solutions

As outlined in the introduction, the object of study is a generalized SDE with additive d-dimensional fractional Brownian noise \(B^H\) with Hurst parameter \(H\in (0,1/2)\), i.e.,

$$\begin{aligned} X_{t}^{x}=x+\alpha L_{t}(X^{x})\cdot \varvec{1}_{d}+B_{t}^{H},\quad 0\le t\le T,\;x\in \mathbb {R}^{d}, \end{aligned}$$
(5.1)

where \(L_t(X^x)\), \(t\in [0,T]\) is a stochastic process of bounded variation which arises from taking the limit

$$\begin{aligned} L_t(X^x) := \lim _{\varepsilon \searrow 0} \int _0^t \varphi _{\varepsilon } (X_s^x) \mathrm{d}s, \end{aligned}$$

in probability, where the \(\varphi _{\varepsilon }\) are probability densities approximating the Dirac delta \(\delta _{0}\), that is the generalized function with unit mass concentrated at 0. We will consider

$$\begin{aligned} \varphi _{\varepsilon }(x)=\varepsilon ^{-\frac{d}{2}}\varphi \left( \varepsilon ^{-\frac{1}{2}}x\right) ,\quad \varepsilon >0, \end{aligned}$$
(5.2)

where \(\varphi \) is the d-dimensional standard Gaussian density function.
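In dimension \(d=1\), one can check numerically that these kernels behave like an approximate identity: each \(\varphi _{\varepsilon }\) integrates to 1, and the mass concentrates at 0 as \(\varepsilon \searrow 0\). The sketch below (the grid sizes and the value of \(\varepsilon \) are arbitrary choices) uses a midpoint rule.

```python
import math

# phi_eps(x) = eps^(-1/2) * phi(eps^(-1/2) x) in d = 1, with phi the standard
# normal density; eps and the integration grid below are arbitrary choices.
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def phi_eps(x, eps):
    return eps ** -0.5 * phi(eps ** -0.5 * x)

def midpoint(f, a, b, n=100_000):
    # composite midpoint rule for int_a^b f
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

eps = 0.01
mass = midpoint(lambda x: phi_eps(x, eps), -5.0, 5.0)   # total mass, close to 1
near0 = midpoint(lambda x: phi_eps(x, eps), -0.5, 0.5)  # mass already near 0
```

Here `mass` is approximately 1, while `near0` is already above 0.999, reflecting the concentration at the origin.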

Hereunder, we establish the main result of this section for \(H<\frac{1}{2(2+d)}\) (see [11]).

Theorem 5.1

If \(H< 1/(2(2+d))\), \(d\ge 1\), there exists a continuous strong solution \(X^x =\{X_t^x , t\in [0,T], x\in \mathbb {R}^d\}\) of Eq. (5.1) for all \(\alpha \). Moreover, for every \(t\in [0,T]\), \(X_t\) is Malliavin differentiable in the direction of the Brownian motion W in (2.3).

Proposition 5.2

Retain the conditions of Theorem 5.1. Let \(Y_{\cdot }^{x}\) be another solution to the SDE (5.1). Suppose that the Doléans–Dade exponentials

$$\begin{aligned} \mathcal {E}\left( \int _{0}^{T}-K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{\varepsilon }(Y_{u}^{x})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\mathrm{d}W_{s}\right) ,\quad \varepsilon >0 \end{aligned}$$

converge in \(L^{p}(\Omega )\) as \(\varepsilon \longrightarrow 0\) for all \( p\ge 1\), where \(\varphi _{\varepsilon }\) is the approximation of the Dirac delta \(\delta _{0}\) in (5.2) and \(*\) denotes transposition. Then, strong uniqueness holds for such solutions.

In particular, this is the case if, e.g., uniqueness in law holds.

The proof of Theorem 5.1 essentially consists of four steps:

  1. (1)

    In the first step, we construct a weak solution \(X^{x}\) to (5.1) by using the version of Girsanov’s theorem for the fractional Brownian motion; that is, we consider a probability space \((\Omega ,\mathfrak {A},P)\) on which a fractional Brownian motion \(B^{H}\) and a process \(X^{x}\) are defined such that (5.1) holds. A priori, however, the solution is not a measurable functional of the driving noise; that is, \(X^x\) is not adapted to the filtration \(\mathcal {F}=\{\mathcal {F}_{t}\}_{t\in [0,T]}\) generated by \(B^{H}\).

  2. (2)

    In the next step, we approximate the generalized drift coefficient \(\delta _{0}\) by the Gaussian kernels \(\varphi _{\varepsilon }\). Using classical Picard iteration, we know that for each smooth coefficient \(\varphi _{\varepsilon }\), \(\varepsilon >0\), there exists a unique strong solution \(X_{\cdot }^{\varepsilon }\) of the SDE

    $$\begin{aligned} dX_{t}^{\varepsilon }=\alpha \varphi _{\varepsilon }(X_{t}^{\varepsilon })\cdot \varvec{1}_{d}\mathrm{d}t+\mathrm{d}B_{t}^{H},\quad 0\le t\le T,\quad X_{0}^{\varepsilon }=x\in \mathbb {R}^{d}. \end{aligned}$$
    (5.3)

    Then, we prove that for each \(t\in [0,T]\), the family \(\{X_{t}^{\varepsilon }\}_{\varepsilon >0}\) converges weakly as \(\varepsilon \searrow 0\) to the conditional expectation \(E[X_{t}|\mathcal {F}_{t}]\) in the space \(L^{2}(\Omega ;\mathcal {F}_{t})\) of square integrable, \(\mathcal {F}_{t}\)-measurable random variables.

  3. (3)

    Further, it is well known, see, e.g., [35], that for each \(t\in [0,T]\) the strong solution \(X_{t}^{\varepsilon }\), \(\varepsilon >0\), is Malliavin differentiable, and that the Malliavin derivative \(D_{s}X_{t}^{\varepsilon }\), \(0\le s\le t\), with respect to W in (2.3) solves the equation

    $$\begin{aligned} D_{s}X_{t}^{\varepsilon }=K_{H}(t,s)I_{d}+\int _{s}^{t}\alpha \varphi _{\varepsilon }'(X_{u}^{\varepsilon })\cdot \varvec{1}_{d}D_{s}X_{u}^{\varepsilon }\mathrm{d}u, \end{aligned}$$
    (5.4)

    where \(\varphi _{\varepsilon }'\) denotes the Jacobian of \(\varphi _{\varepsilon }\). Using a compactness criterion based on Malliavin calculus (see “Appendix A”), we then show that for every \(t\in [0,T]\) the set of random variables \(\{X_{t}^{\varepsilon }\}_{\varepsilon > 0}\) is relatively compact in \(L^{2}(\Omega )\), which enables us to conclude that \(X_{t}^{\varepsilon }\) converges strongly as \(\varepsilon \searrow 0\) in \(L^{2}(\Omega ;\mathcal {F}_{t})\) to \(\mathbb {E}\left[ X_{t}|\mathcal {F}_{t}\right] \). As a consequence of the compactness criterion, we also observe that \(E[X_{t}|\mathcal {F}_{t}]\) is Malliavin differentiable.

  4. (4)

    Finally, we prove that \(\mathbb {E}\left[ X_{t}|\mathcal {F}_{t}\right] =X_{t}\), which entails that \(X_{t}\) is \(\mathcal {F}_{t}\)-measurable and thus a strong solution on the specific probability space carrying our weak solution.
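In dimension \(d=1\), the regularized equation (5.3) from step 2 is easy to simulate. The sketch below is purely illustrative and not part of the proof: the step count, Hurst parameter, \(\varepsilon \), \(\alpha \) and the plain Cholesky-based fBm sampler are all choices of this sketch (Cholesky factorization of the covariance is fine for small step numbers).

```python
import math, random

# Illustrative Euler scheme for the regularized SDE (5.3) in d = 1 (a sketch).
random.seed(0)

def fbm_path(H, n, T=1.0):
    # Sample B^H at t_k = kT/n via Cholesky factorization of its covariance.
    t = [T * (k + 1) / n for k in range(n)]
    R = [[0.5 * (t[i] ** (2*H) + t[j] ** (2*H) - abs(t[i] - t[j]) ** (2*H))
          for j in range(n)] for i in range(n)]
    L = [[0.0] * n for _ in range(n)]            # Cholesky: R = L L^T
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(R[i][i] - s) if i == j else (R[i][j] - s) / L[j][j]
    z = [random.gauss(0, 1) for _ in range(n)]
    return [0.0] + [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

def euler(x, alpha, eps, H, n=50, T=1.0):
    # X_{k+1} = X_k + alpha * phi_eps(X_k) dt + (B^H_{t_{k+1}} - B^H_{t_k})
    B = fbm_path(H, n, T)
    dt = T / n
    phi_eps = lambda y: math.exp(-y * y / (2 * eps)) / math.sqrt(2 * math.pi * eps)
    X = [x]
    for k in range(n):
        X.append(X[-1] + alpha * phi_eps(X[-1]) * dt + B[k + 1] - B[k])
    return X

path = euler(x=0.0, alpha=1.0, eps=0.1, H=0.2)
```

As \(\varepsilon \searrow 0\) the drift term mimics the local-time push of (5.1); the scheme above only illustrates the approximating equations, not their limit.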

We assume without loss of generality that \(\alpha =1\). Let us first have a look at step 1 of our program; that is, we want to construct weak solutions of (5.1) by means of Girsanov’s theorem. Let \((\Omega ,\mathfrak {A},\widetilde{P})\) be some given probability space which carries a d-dimensional fractional Brownian motion \(\widetilde{B}^{H}\) with Hurst parameter \(H\in (0,1/2)\) and set \(X_{t}^{x}:=x+\widetilde{B}_{t}^{H}\), \(t\in \left[ 0,T\right] \), \(x\in \mathbb {R}^{d}\). Set \(\theta _{t}:=\left( K_{H}^{-1}\left( \int _{0}^{\cdot }\delta _{0}(X_{r}^{x})\mathrm{d}r\,\varvec{1}_{d}\right) \right) (t)\) and consider the Doléans–Dade exponential

$$\begin{aligned} \xi _{t} := \exp \left\{ \int _{0}^{t}\theta _{s}^{T}\mathrm{d}W_{s}-\frac{1}{2}\int _{0}^{t}\theta _{s}^{T}\theta _{s}\mathrm{d}s\right\} ,\quad t\in [0,T]. \end{aligned}$$

This expression is, of course, purely formal at this stage, since \(\delta _{0}\) is not a function.

If we were allowed to implement Girsanov’s theorem in this setting, we would arrive at the conclusion that the process

$$\begin{aligned} B_{t}^{H}&:= X_{t}^{x}-x-\int _{0}^{t}\delta _{x}(X_{s}^{x})\mathrm{d}s\varvec{1}_{d}\nonumber \\&= \widetilde{B}_{t}^{H}-\int _{0}^{t}\delta _{0}(\widetilde{B}_{s}^{H})\mathrm{d}s\varvec{1}_{d} \end{aligned}$$
(5.5)

is a fractional Brownian motion on \((\Omega ,\mathfrak {A},P)\) with Hurst parameter \(H\in (0,1/2)\), where \(\frac{dP}{d\widetilde{P}}=\xi _{T}\). Hence, because of (5.5), the couple \((X^{x},B^{H})\) would be a weak solution of (5.1) on \((\Omega ,\mathfrak {A},P)\).

Therefore, in what follows we show that the requirements of Theorem 2.5 are fulfilled.

Lemma 5.3

Let \(x\in \mathbb {R}^{d}\). If \(H<\frac{1}{2(1+d)}\), then

$$\begin{aligned} \sup _{\varepsilon >0}E\left[ \exp \left( \mu \int _{0}^{T}\left( K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{x,\varepsilon }(B_{u}^{H})\mathrm{d}u\right) (t)\right) ^{2}\mathrm{d}t\right) \right] <\infty \end{aligned}$$

for all \(\mu \in \mathbb {R},\) where

$$\begin{aligned} \varphi _{x,\varepsilon }(B_{u}^{H})=\frac{1}{(2\pi \varepsilon )^{\frac{d}{2 }}}\exp \left( -\frac{\left| B_{u}^{H}-x\right| _{\mathbb {R}^{d}}^{2}}{ 2\varepsilon }\right) . \end{aligned}$$

Proof

In order to prove Lemma 5.3, we can write

$$\begin{aligned} K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{x,\varepsilon }(B_{r}^{H})\mathrm{d}r\right) (t)= & {} t^{H-\frac{1}{2}}I_{0+}^{\frac{1}{2}-H}t^{\frac{1}{2}-H}\left( \int _{0}^{\cdot }\varphi _{x,\varepsilon }(B_{r}^{H})\mathrm{d}r\right) ^{\prime }\left( t\right) \\= & {} t^{H-\frac{1}{2}}\int _{0}^{t}\gamma _{-\frac{1}{2}-H,\frac{1}{2}-H}(t,u)\varphi _{x,\varepsilon }(B_{u}^{H})\mathrm{d}u, \end{aligned}$$

where

$$\begin{aligned} \gamma _{\alpha ,\beta }(t,u)=\left( t-u\right) ^{\alpha }u^{\beta }. \end{aligned}$$

Using the self-similarity of the fBm, we can write

$$\begin{aligned} K_{H}^{-1}\left( \int _{0}^{.}\varphi _{x,\varepsilon }(B_{r}^{H})\mathrm{d}r\right) (t)\overset{law }{=}t^{\frac{1}{2}-H(1+d)}\int _{0}^{1}\gamma _{-\frac{1}{2}-H,\frac{1}{2} -H}(1,u)\varphi _{xt^{-H},\varepsilon (t)}(B_{u}^{H})\mathrm{d}u, \end{aligned}$$

where \(\varepsilon (t):=\varepsilon t^{-2H}\), and hence

$$\begin{aligned}&K_{H}^{-1}\left( \int _{0}^{.}\varphi _{x,\varepsilon }(B_{r}^{H})\mathrm{d}r\right) ^{2m}(t) \overset{law}{=}t^{2m\left( \frac{1}{2}-H(1+d)\right) }\\&\qquad \left( \int _{0}^{1}\gamma _{-\frac{1}{2}-H,\frac{1}{ 2}-H}(1,u)\varphi _{xt^{-H},\varepsilon (t)}(B_{u}^{H})\mathrm{d}u\right) ^{2m} \\&\quad =t^{2m\left( \frac{1}{2}-H(1+d)\right) }(2m)!\int _{\mathcal {T}_{2m}(0,1)}\prod _{j=1}^{2m}\gamma _{-\frac{1}{2}-H,\frac{1}{2}-H}(1,u_{j})\varphi _{xt^{-H},\varepsilon (t)}(B_{u_{j}}^{H})\mathrm{d}\mathbf {u,} \end{aligned}$$

where \(\mathcal {T}_{n}(0,s)=\{0\le u_{1}<u_{2}<\cdots <u_{n}\le s\}\) and

$$\begin{aligned} \varphi _{xt^{-H},\varepsilon (t)}(B_{u_{j}}^{H})=\frac{1}{(2\pi )^{d}}\int _{ \mathbb {R}^{d}}\exp \left( i\left\langle \xi ,B_{u_{j}}^{H}-xt^{-H}\right\rangle _{ \mathbb {R}^{d}}-\varepsilon (t)\frac{\left| \xi \right| _{\mathbb {R} ^{d}}^{2}}{2}\right) \mathrm{d}\xi . \end{aligned}$$

Then,

$$\begin{aligned}&\mathbb {E}\left[ \left( \int _{0}^{T}\left( K_{H}^{-1}\left( \int _{0}^{.}\varphi _{x,\varepsilon }(B_{u}^{H})\mathrm{d}u\right) (t)\right) ^{2}\mathrm{d}t\right) ^{m}\right] \\&\quad \le T^{m-1}\int _{0}^{T}E\left[ K_{H}^{-1}\left( \int _{0}^{.}\varphi _{x,\varepsilon }(B_{r}^{H})\mathrm{d}r\right) ^{2m}(t)\right] \mathrm{d}t \\&\quad =T^{m-1}\int _{0}^{T}t^{2m(\frac{1}{2}-H(1+d))}(2m)!\int _{\mathcal {T} _{2m}(0,1)}\left( \prod _{j=1}^{2m}\gamma _{-\frac{1}{2}-H,\frac{1}{2} -H}(1,u_{j})\right) E\\&\qquad \left[ \prod _{j=1}^{2m}\varphi _{xt^{-H},\varepsilon (t)}(B_{u_{j}}^{H})\right] \mathrm{d}\mathbf {u}\mathrm{d}t. \end{aligned}$$

Moreover,

$$\begin{aligned}&\mathbb {E}\left[ \prod _{j=1}^{2m}\varphi _{xt^{-H},\varepsilon (t)}(B_{u_{j}}^{H})\right] \\&\quad =\frac{1}{(2\pi )^{2dm}}\mathbb {E}\left[ \prod _{j=1}^{2m}\int _{\mathbb {R} ^{d}}\exp \left( i\left\langle \xi _{j},B_{u_{j}}^{H}-xt^{-H}\right\rangle _{ \mathbb {R}^{d}}-\varepsilon (t)\frac{\left| \xi _{j}\right| _{ \mathbb {R}^{d}}^{2}}{2}\right) \mathrm{d}\xi _{j}\right] \\&\quad =\frac{1}{(2\pi )^{2dm}}\int _{\mathbb {R}^{2dm}}E\left[ \exp \left( i\sum _{j=1}^{2m}\left\langle \xi _{j},B_{u_{j}}^{H}\right\rangle _{\mathbb {R }^{d}}\right) \right] \\&\qquad \times \exp \left( -i\sum _{j=1}^{2m}\left\langle \xi _{j},xt^{-H}\right\rangle _{ \mathbb {R}^{d}}\right) \\&\qquad \times \exp \left( -\frac{\varepsilon (t)}{2}\sum _{j=1}^{2m}\left| \xi _{j}\right| _{\mathbb {R}^{d}}^{2}\right) \mathrm{d}\xi _{1} \ldots \mathrm{d}\xi _{2m}. \end{aligned}$$

Next, note that

$$\begin{aligned}&\mathbb {E}\left[ \exp \left( i\sum _{j=1}^{2m}\left\langle \xi _{j},B_{u_{j}}^{H}\right\rangle _{\mathbb {R}^{d}}\right) \right] =\exp \left( -\frac{1}{2}\mathrm{Var}\left[ \sum _{j=1}^{2m}\sum _{k=1}^{d}\xi _{j}^{k}B_{u_{j}}^{H,k}\right] \right) \\&\quad =\exp \left( -\frac{1}{2}\sum _{k=1}^{d}\mathrm{Var}\left[ \sum _{j=1}^{2m}\xi _{j}^{k}B_{u_{j}}^{H,k}\right] \right) =\exp \left( -\frac{1}{2}\sum _{k=1}^{d}\left\langle \xi ^{k},Q(\mathbf {u})\xi ^{k}\right\rangle _{\mathbb {R}^{2m}}\right) , \end{aligned}$$

where

$$\begin{aligned} Q(\mathbf {u})=\mathrm{Cov}(B_{u_{1}}^{H,1}, \ldots ,B_{u_{2m}}^{H,1}). \end{aligned}$$

Hence,

$$\begin{aligned}&\mathbb {E}\left[ \prod _{j=1}^{2m}\varphi _{xt^{-H},\varepsilon (t)}(B_{u_{j}}^{H})\right] \\&\quad \le \frac{1}{(2\pi )^{2dm}}\int _{\mathbb {R}^{2dm}}\exp \left( -\frac{1}{2}\mathrm{Var} \left[ \sum _{j=1}^{2m}\sum _{k=1}^{d}\xi _{j}^{k}B_{u_{j}}^{H,k}\right] \right) \mathrm{d}\xi _{1} \ldots \mathrm{d}\xi _{2m} \\&\quad =\frac{1}{(2\pi )^{2dm}}\int _{\mathbb {R}^{2dm}}\exp \left( -\frac{1}{2} \sum _{k=1}^{d}\left\langle \xi ^{k},Q(\mathbf {u})\xi ^{k}\right\rangle _{ \mathbb {R}^{2m}}\right) \mathrm{d}\xi _{1} \ldots \mathrm{d}\xi _{2m} \\&\quad =\frac{1}{(2\pi )^{2dm}}\left( \int _{\mathbb {R}^{2m}}\exp \left( -\frac{1}{2} \sum _{k=1}^{d}\left\langle \xi ^{1},Q(\mathbf {u})\xi ^{1}\right\rangle _{ \mathbb {R}^{2m}}\right) \mathrm{d}\xi ^{1}\right) ^{d} \\&\quad \le \frac{1}{(2\pi )^{dm}}(\det Q(\mathbf {u}))^{-\frac{d}{2}}. \end{aligned}$$

Using the last estimate, we get that

$$\begin{aligned}&\mathbb {E}\left[ \left( \int _{0}^{T}\left( K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{x,\varepsilon }(B_{u}^{H})\mathrm{d}u\right) (t)\right) ^{2}\mathrm{d}t\right) ^{m}\right] \\&\quad \le \frac{T^{m-1}}{(2\pi )^{dm}}T^{2m\left( \frac{1}{2}-H(1+d)\right) } \\&\qquad \times (2m)!\int _{\mathcal {T}_{2m}(0,1)}\left( \prod _{j=1}^{2m}\gamma _{-\frac{1}{2}-H,\frac{1}{2}-H}(1,u_{j})\right) (\det Q(\mathbf {u}))^{-\frac{d}{2}}\mathrm{d}\mathbf {u} \\&\quad \le \frac{T^{m-1}}{(2\pi )^{dm}}T^{2m\left( \frac{1}{2}-H(1+d)\right) } \\&\qquad \times C_{H,d}^{m}(m!)^{2H(1+d)}, \end{aligned}$$

where the last bound is due to Lemma A.5 for a constant \(C_{H,d}\) only depending on H and d. So the result follows. \(\square \)
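The ordering identity used repeatedly in the proof above, \(\left( \int _{0}^{1}f(u)\mathrm{d}u\right) ^{n}=n!\int _{\mathcal {T}_{n}(0,1)}\prod _{j=1}^{n}f(u_{j})\mathrm{d}\mathbf {u}\), can be checked numerically. The sketch below (an illustration with an arbitrary smooth integrand) verifies the case \(n=2\) with midpoint sums.

```python
# Check (int_0^1 f)^2 = 2! * int_{0 < u1 < u2 < 1} f(u1) f(u2) du1 du2
# for an arbitrary smooth integrand f (illustrative choice below).
f = lambda u: u * u + 1.0

N = 400
h = 1.0 / N
pts = [(k + 0.5) * h for k in range(N)]                # midpoint grid on [0, 1]
line = sum(f(u) for u in pts) * h                      # int_0^1 f(u) du
simplex = sum(f(pts[i]) * f(pts[j]) * h * h            # sum over the simplex {u1 < u2}
              for j in range(N) for i in range(j))

# the discrepancy is the diagonal strip of width h, of order h * int f^2
assert abs(2.0 * simplex - line ** 2) < 1e-2
```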

Proposition 5.4

Let \(x\in \mathbb {R}^{d}\) and \(H<\frac{1}{2(1+d)}\). Then, there exists a \(\zeta _{T}\in L^{p}(\Omega )\) such that

$$\begin{aligned} \mathcal {E}\left( \int _{0}^{T}K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{x,1/n}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\mathrm{d}W_{s}\right) \underset{ n\longrightarrow \infty }{\longrightarrow }\zeta _{T}\quad \mathrm{in}\quad L^{p}(\Omega ) \end{aligned}$$

for all \(p\ge 1\). Furthermore,

$$\begin{aligned} B_{t}^{H}-L_{t}^{x}(B^{H})\varvec{1}_{d},0\le t\le T \end{aligned}$$

is a fractional Brownian motion with Hurst parameter H under the change of measure with respect to the Radon–Nikodym derivative \(\zeta _{T}\).

Proof

Without loss of generality, let \(p=1\). Then, using \(\left| e^{x}-e^{y}\right| \le \left| x-y\right| e^{x+y}\), Hölder’s inequality and the supermartingale property of Doléans–Dade exponentials, we get in connection with the previous lemma that

$$\begin{aligned}&E\left[ \left| \mathcal {E}\left( \int _{0}^{T}K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{x,1/n}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\mathrm{d}W_{s}\right) \right. \right. \\&\qquad \left. \left. -\mathcal {E}\left( \int _{0}^{T}K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{x,1/r}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\mathrm{d}W_{s}\right) \right| \right] \\&\quad \le C(I_{1}+I_{2})E, \end{aligned}$$

where

$$\begin{aligned}&I_{1}:=E\Bigg [\int _{0}^{T}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\right. \\&\qquad \left. -K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/r}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\right| ^{2}\mathrm{d}s\Bigg ]^{1/2},\\&I_{2}:=E\Bigg [\Bigg (\int _{0}^{T}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\right| ^{2}\mathrm{d}s\\&\qquad -\int _{0}^{T}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/r}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\right| ^{2}\mathrm{d}s\Bigg )^{2}\Bigg ]^{1/2}\\&E :=E[\exp \{\mu _{1}\int _{0}^{t}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\right| ^{2}\mathrm{d}s\}]^{1/4} \\&\qquad \cdot E\left[ \exp \left\{ \mu _{2}\int _{0}^{t}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/r}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\right| ^{2}\mathrm{d}s\right\} \right] ^{1/4} \end{aligned}$$

for constants \(C,\mu _{1},\mu _{2}>0\).

Now, let us have a look at the proof of the previous lemma and adopt the notation therein. In the sequel, we omit \(\varvec{1}_{d}\). Then, we obtain for \(m=1\) by using the self-similarity of the fBm in a similar way (but under expectation) that

$$\begin{aligned}&E\left[ \left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{\varepsilon _{1}}(B_{u}^{H})\mathrm{d}u\right) ^{*}(t)\right| ^{2}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{\varepsilon _{2}}(B_{u}^{H})\mathrm{d}u\right) ^{*}(t)\right| ^{2}\right] \\&\quad =E\left[ \left( t^{2m\left( \frac{1}{2}-H(1+d)\right) }(2m)!\right) ^{2}\int _{\mathcal {T}_{2m}(0,1)}\prod _{j=1}^{2m}\gamma _{-\frac{1}{2}-H,\frac{1}{2}-H}(1,u_{j})\varphi _{xt^{-H},\varepsilon _{1}(t)}(B_{u_{j}}^{H})\mathrm{d}\mathbf {u} \right. \\&\qquad \left. \times \int _{\mathcal {T}_{2m}(0,1)}\prod _{j=1}^{2m}\gamma _{-\frac{ 1}{2}-H,\frac{1}{2}-H}(1,u_{j})\varphi _{xt^{-H},\varepsilon _{2}(t)}(B_{u_{j}}^{H})\mathrm{d}\mathbf {u}\right] , \end{aligned}$$

where \(\varepsilon _{i}(t)=\varepsilon _{i}t^{-2H},\) \(i=1,2.\) Using shuffling (see Sect. 2.2), we get that

$$\begin{aligned}&E\left[ \left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{\varepsilon _{1}}(B_{u}^{H})\mathrm{d}u\right) ^{*}(t)\right| ^{2}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{\varepsilon _{2}}(B_{u}^{H})\mathrm{d}u\right) ^{*}(t)\right| ^{2}\right] \\&\quad =E\left[ \left( t^{2m\left( \frac{1}{2}-H(1+d)\right) }(2m)!\right) ^{2} \right. \\&\qquad \left. \times \sum _{\sigma \in S(2m,2m)}\int _{\mathcal {T}_{4m}(0,1)}\prod _{j=1}^{4m}f_{\sigma (j)}(u_{j})\mathrm{d}\mathbf {u}\right] , \end{aligned}$$

where \(f_{j}(s):=\gamma _{-\frac{1}{2}-H,\frac{1}{2}-H}(1,s)\varphi _{xt^{-H},\varepsilon _{1}(t)}(B_{s}^{H})\) for \(j=1, \ldots ,2m\) and \(f_{j}(s):=\gamma _{-\frac{1}{2}-H,\frac{1}{2}-H}(1,s)\varphi _{xt^{-H},\varepsilon _{2}(t)}(B_{s}^{H})\) for \(j=2m+1, \ldots ,4m\). Without loss of generality, consider the case

$$\begin{aligned} \prod _{j=1}^{4m}f_{\sigma (j)}(u_{j})&=\prod _{j=1}^{2m}\gamma _{-\frac{1}{2}-H,\frac{1}{2} -H}(1,u_{j})\varphi _{xt^{-H},\varepsilon _{1}(t)}(B_{u_{j}}^{H}) \\&\quad \times \prod _{j=2m+1}^{4m}\gamma _{-\frac{1}{2}-H,\frac{1}{2} -H}(1,u_{j})\varphi _{xt^{-H},\varepsilon _{2}(t)}(B_{u_{j}}^{H}). \end{aligned}$$

Then,

$$\begin{aligned}&E\left[ \left( t^{2m\left( \frac{1}{2}-H(1+d)\right) }(2m)!\right) ^{2}\int _{\mathcal {T}_{4m}(0,1)}\prod _{j=1}^{4m}f_{\sigma (j)}(u_{j})\mathrm{d}\mathbf {u}\right] =\left( t^{2m\left( \frac{1}{2}-H(1+d)\right) }(2m)!\right) ^{2} \\&\qquad \times \int _{\mathcal {T}_{4m}(0,1)}\prod _{j=1}^{2m}\gamma _{-\frac{ 1}{2}-H,\frac{1}{2}-H}(1,u_{j})\prod _{j=2m+1}^{4m}\gamma _{-\frac{1}{ 2}-H,\frac{1}{2}-H}(1,u_{j}) \\&\qquad \times E\left[ \prod _{j=1}^{2m}\varphi _{xt^{-H},\varepsilon _{1}(t)}(B_{u_{j}}^{H})\prod _{j=2m+1}^{4m}\varphi _{xt^{-H},\varepsilon _{2}(t)}(B_{u_{j}}^{H})\right] \mathrm{d}\mathbf {u} \\&\quad =\left( t^{2m\left( \frac{1}{2}-H(1+d)\right) }(2m)!\right) ^{2}\int _{\mathcal {T}_{4m}(0,1)}\prod _{j=1}^{4m}\gamma _{-\frac{1}{2}-H,\frac{1}{2}-H}(1,u_{j}) \\&\qquad \times E\Bigg [\prod _{j=1}^{2m}\int _{\mathbb {R}^{d}}\exp \left( i\left\langle \xi _{j},B_{u_{j}}^{H}-xt^{-H}\right\rangle _{\mathbb {R}^{d}}-\varepsilon _{1}(t)\frac{\left| \xi _{j}\right| _{\mathbb {R}^{d}}^{2}}{2}\right) \mathrm{d}\xi _{j} \\&\qquad \times \prod _{j=2m+1}^{4m}\int _{\mathbb {R}^{d}}\exp \left( i\left\langle \xi _{j},B_{u_{j}}^{H}-xt^{-H}\right\rangle _{\mathbb {R}^{d}}-\varepsilon _{2}(t)\frac{\left| \xi _{j}\right| _{\mathbb {R}^{d}}^{2}}{2}\right) \mathrm{d}\xi _{j}\Bigg ]\mathrm{d}\mathbf {u} \\&\quad =\left( t^{2m\left( \frac{1}{2}-H(1+d)\right) }(2m)!\right) ^{2} \times \int _{\mathcal {T}_{4m}(0,1)}\prod _{j=1}^{4m}\gamma _{-\frac{ 1}{2}-H,\frac{1}{2}-H}(1,u_{j}) \\&\qquad \times \frac{1}{(2\pi )^{4dm}}\int _{\mathbb {R}^{4dm}}E\left[ \exp \left( i\sum _{j=1}^{4m}\left\langle \xi _{j},B_{u_{j}}^{H}\right\rangle _{\mathbb {R}^{d}}\right) \right] \\&\qquad \times \exp \left( -i\sum _{j=1}^{4m}\left\langle \xi _{j},xt^{-H}\right\rangle _{\mathbb {R}^{d}}\right) \exp \left( -\frac{\varepsilon _{1}(t)}{2}\sum _{j=1}^{2m}\left| \xi _{j}\right| _{\mathbb {R}^{d}}^{2}-\frac{\varepsilon _{2}(t)}{2}\sum _{j=2m+1}^{4m}\left| \xi _{j}\right| _{\mathbb {R}^{d}}^{2}\right) \\&\qquad \mathrm{d}\xi _{1} \ldots \mathrm{d}\xi _{4m}\mathrm{d}\mathbf {u}. \end{aligned}$$

So

$$\begin{aligned}&E\left[ \left( t^{2m\left( \frac{1}{2}-H(1+d)\right) }(2m)!\right) ^{2}\int _{\mathcal {T}_{4m}(0,1)}\prod _{j=1}^{4m}f_{\sigma (j)}(u_{j})\mathrm{d}\mathbf {u}\right] \\&\quad =\left( t^{2m\left( \frac{1}{2}-H(1+d)\right) }(2m)!\right) ^{2}\int _{\mathcal {T}_{4m}(0,1)}\prod _{j=1}^{4m}\gamma _{-\frac{1}{2}-H,\frac{1}{2}-H}(1,u_{j}) \\&\qquad \times \frac{1}{(2\pi )^{4dm}}\int _{\mathbb {R}^{4dm}}\exp \left( -\frac{1}{2} \sum _{k=1}^{d}\left\langle \xi ^{k},Q(\mathbf {u})\xi ^{k}\right\rangle _{ \mathbb {R}^{4m}}\right) \\&\qquad \times \exp \left( -i\sum _{j=1}^{4m}\left\langle \xi _{j},xt^{-H}\right\rangle _{ \mathbb {R}^{d}}\right) \exp \left( -\frac{\varepsilon _{1}(t)}{2}\sum _{j=1}^{2m}\left| \xi _{j}\right| _{\mathbb {R}^{d}}^{2}-\frac{\varepsilon _{2}(t)}{2} \sum _{j=2m+1}^{4m}\left| \xi _{j}\right| _{\mathbb {R}^{d}}^{2}\right) \\&\qquad \mathrm{d}\xi _{1} \ldots \mathrm{d}\xi _{4m}\mathrm{d}\mathbf {u}. \end{aligned}$$

Hence, using dominated convergence in connection with Lemma A.5, we see that

$$\begin{aligned}&\int _{0}^{T}E\left[ \left( t^{2m\left( \frac{1}{2}-H(1+d)\right) }(2m)!\right) ^{2}\int _{\mathcal {T} _{4m}(0,1)}\prod _{j=1}^{4m}f_{\sigma (j)}(u_{j})\mathrm{d}\mathbf {u}\right] \mathrm{d}t \\&\quad \longrightarrow \int _{0}^{T}\left( t^{2m\left( \frac{1}{2}-H(1+d)\right) }(2m)!\right) ^{2}\int _{\mathcal {T} _{4m}(0,1)}\prod _{j=1}^{4m}\gamma _{-\frac{1}{2}-H,\frac{1}{2} -H}(1,u_{j}) \\&\qquad \times \frac{1}{(2\pi )^{4dm}}\int _{\mathbb {R}^{4dm}}\exp \left( -\frac{1}{2} \sum _{k=1}^{d}\left\langle \xi ^{k},Q(\mathbf {u})\xi ^{k}\right\rangle _{ \mathbb {R}^{4m}}\right) \\&\qquad \exp \left( -i\sum _{j=1}^{4m}\left\langle \xi _{j},xt^{-H}\right\rangle _{ \mathbb {R}^{d}}\right) \mathrm{d}\xi _{1} \ldots \mathrm{d}\xi _{4m}\mathrm{d}\mathbf {u}\mathrm{d}t \end{aligned}$$

for \(\varepsilon _{1},\varepsilon _{2}\searrow 0\). For other \(\sigma \in S(2m,2m)\), we obtain similar limit values. In summary, we find (by also considering the case \(\varepsilon _{1}=\varepsilon _{2}\)) that

$$\begin{aligned} E\left[ \left( \int _{0}^{T}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{\varepsilon _{1}}(B_{u}^{H})\mathrm{d}u\right) (s)-K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{\varepsilon _{2}}(B_{u}^{H})\mathrm{d}u\right) (s)\right| ^{2}\mathrm{d}s\right) ^{2}\right] \longrightarrow 0 \end{aligned}$$

for \(\varepsilon _{1},\varepsilon _{2}\searrow 0\). Thus,

$$\begin{aligned} I_{2}=I_{2}(n,r)\longrightarrow 0\quad \text {for}\quad n,r\longrightarrow \infty . \end{aligned}$$

Similarly, we have that

$$\begin{aligned} I_{1}=I_{1}(n,r)\longrightarrow 0\quad \text {for}\quad n,r\longrightarrow \infty . \end{aligned}$$

Since \(E=E(n,r)\) is uniformly bounded with respect to n, r because of Lemma 5.3, we obtain the convergence of the Radon–Nikodym derivatives to a \(\zeta _{T}\) in \(L^{p}(\Omega )\) for \(p=1\). The second statement of the proposition follows by using characteristic functions combined with dominated convergence. \(\square \)

Henceforth, we confine ourselves to the probability space \((\Omega , \mathfrak {A},P)\), which carries a weak solution \((X^{x},B^{H})\) of (5.1) constructed from a fractional Brownian motion \(\overline{B} _{t}^{H},0\le t\le T\) with respect to a probability measure \(\overline{P}\) by Girsanov’s theorem.

We now turn to the second step of our procedure.

Lemma 5.5

Suppose that \(H<\frac{1}{2(1+d)}\) and let \(\{\varphi _{\varepsilon }\}_{\varepsilon >0}\) be defined as

$$\begin{aligned} \varphi _{\varepsilon }(y)=\varphi _{\varepsilon ,x}(y)=\varepsilon ^{-\frac{ d}{2}}\varphi \left( \varepsilon ^{-\frac{1}{2}}(y-x)\right) ,\;\,\varepsilon >0, \end{aligned}$$

where \(\varphi \) is the d-dimensional standard normal density. Denote by \(X^{x,\varepsilon }=\{X_{t}^{x,\varepsilon },t\in [0,T]\}\) the corresponding solutions of (5.1) when \(\delta _{x}\) is replaced by \(\varphi _{\varepsilon ,x}\), \(\varepsilon >0\). Then, for every \(t\in [0,T]\) and every bounded continuous function \(\eta :\mathbb {R}^{d}\longrightarrow \mathbb {R}\), we have that

$$\begin{aligned} \eta (X_{t}^{x,\varepsilon })\overset{\varepsilon \longrightarrow 0_{+}}{ \longrightarrow }E[\eta (X_{t}^{x})\left| \mathcal {F}_{t}\right] \end{aligned}$$

weakly in \(L^{2}(\Omega ,\mathcal {F}_{t},P)\).

Proof

Without loss of generality, let \(x=0\). We mention that

$$\begin{aligned} \Sigma _{t}:=\left\{ \exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}\right\rangle \right\} :\{\alpha _{j}\}_{j=1}^{k}\subset \mathbb {R}^{d},0=t_{0}<\cdots <t_{k}=t,k\ge 1\right\} \end{aligned}$$

is a total subset of \(L^{2}(\Omega ,\mathcal {F}_{t},P).\) Denote \( X_{t}^{x,\varepsilon }\) by \(X_{t}^{n}\) for \(\varepsilon =1/n\) and define

$$\begin{aligned} u_{s}^{n}=K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(X_{u}^{n})\varvec{1} _{d}\mathrm{d}u\right) ^{*}(s). \end{aligned}$$

By the classical Girsanov theorem, the process

$$\begin{aligned} \widetilde{W}_{t}^{n}:=W_{t}+\int _{0}^{t}u_{s}^{n}\mathrm{d}s,\quad 0\le t\le T \end{aligned}$$

is a Wiener process under \(\widetilde{P}_{n}\) with Radon–Nikodym density

$$\begin{aligned} \mathcal {E}\left( \int _{0}^{T}(-u_{s}^{n})^{*}\mathrm{d}W_{s}\right) . \end{aligned}$$

Therefore, it follows from the definition of \(K_{H}^{-1}\) that

$$\begin{aligned} X_{t}^{n}=x+\int _{0}^{t}K_{H}(t,s)\mathrm{d}\widetilde{W}_{s}^{n},\quad 0\le t\le T \end{aligned}$$
(5.6)

is a fractional Brownian motion with Hurst parameter H under \(\widetilde{P} _{n}\). Then, using Girsanov’s theorem, we find that

$$\begin{aligned}&E\left[ \eta (X_{t}^{n})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}\right\rangle \right\} \right] \\&\quad =E\left[ \eta (X_{t}^{n})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},X_{t_{j}}^{n}-X_{t_{j-1}}^{n}-\int _{t_{j-1}}^{t_{j}}\varphi _{1/n}(X_{s}^{n})\varvec{1}_{d}\mathrm{d}s\right\rangle \right\} \right] \\&\quad =E_{\widetilde{P}_{n}}\left[ \eta (X_{t}^{n})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},X_{t_{j}}^{n}-X_{t_{j-1}}^{n}-\int _{t_{j-1}}^{t_{j}}\varphi _{1/n}(X_{s}^{n})\varvec{1}_{d}\mathrm{d}s\right\rangle \right\} \right. \\&\qquad \left. \exp \left( \int _{0}^{T}K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(X_{u}^{n})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\mathrm{d}W_{s}\right. \right. \\&\qquad \left. \left. +\frac{1}{2}\int _{0}^{T}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(X_{u}^{n})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\right| ^{2}\mathrm{d}s\right) \right] \\&\quad =E_{\widetilde{P}_{n}}\left[ \eta (X_{t}^{n})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},X_{t_{j}}^{n}-X_{t_{j-1}}^{n}-\int _{t_{j-1}}^{t_{j}}\varphi _{1/n}(X_{s}^{n})\varvec{1}_{d}\mathrm{d}s\right\rangle \right\} \right. \\&\qquad \left. \cdot \mathcal {E}\left( \int _{0}^{T}K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(X_{u}^{n})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\mathrm{d}\widetilde{W}_{s}^{n}\right) \right] \\&\quad =E_{P}\left[ \eta (B_{t}^{H})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}-\int _{t_{j-1}}^{t_{j}}\varphi _{1/n}(B_{s}^{H})\varvec{1}_{d}\mathrm{d}s\right\rangle \right\} \right. \\&\qquad \left. \cdot \mathcal {E}\left( \int _{0}^{t}K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\mathrm{d}W_{s}\right) \right] , \end{aligned}$$

where we used in the last equality relation (5.6), conditioning and the fact that \(X_{t}^{n}\), \(0\le t\le T\) under \(\widetilde{P}_{n}\) has the same law as \(B_{t}^{H}\), \(0\le t\le T\) under P in connection with measurable functionals (here, \(E_{\mu }\) denotes expectation with respect to the measure \(\mu \)).

On the other hand, denoting by \(\zeta _{T}\) the Radon–Nikodym density associated with \(B_{t}^{H}\), \(0\le t\le T\) in Proposition 5.4, we obtain by \(\left| e^{x}-e^{y}\right| \le \left| x-y\right| e^{x+y}\), Hölder’s inequality, the supermartingale property of Doléans–Dade exponentials and the proof of Proposition 5.4 that

$$\begin{aligned}&\left| E\left[ \eta (B_{t}^{H})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}-\int _{t_{j-1}}^{t_{j}}\varphi _{1/n}(B_{s}^{H})\varvec{1}_{d}\mathrm{d}s\right\rangle \right\} \right. \right. \\&\quad \left. \left. \cdot \mathcal {E}\left( \int _{0}^{t}K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\mathrm{d}W_{s}\right) \right. \right. \\&\quad \left. \left. -\eta (B_{t}^{H})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}-\int _{t_{j-1}}^{t_{j}}\delta _{0}(B_{s}^{H})\mathrm{d}s\varvec{1}_{d}\right\rangle \right\} \cdot \zeta _{T}\right] \right| \\&\quad \le C(I_{1}+I_{2}+I_{3})E, \end{aligned}$$

where

$$\begin{aligned} I_{1}&:=E\left[ \left( \sum _{j=1}^{k}\left\langle \alpha _{j},\int _{t_{j-1}}^{t_{j}}\delta _{0}(B_{s}^{H})\mathrm{d}s\varvec{1} _{d}-\int _{t_{j-1}}^{t_{j}}\varphi _{1/n}(B_{s}^{H})\varvec{1} _{d}\mathrm{d}s\right\rangle \right) ^{2}\right] ^{1/2},\\ I_{2}&:=\lim _{r\longrightarrow \infty }E\Bigg [\int _{0}^{t}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(B_{u}^{H})\varvec{1} _{d}\mathrm{d}u\right) ^{*}(s)\right. \\&\quad \left. -K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/r}(B_{u}^{H}) \varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\right| ^{2}\mathrm{d}s\Bigg ]^{1/2},\\ I_{3}&:=\lim _{r\longrightarrow \infty }E\left[ \left( \int _{0}^{t}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(B_{u}^{H})\varvec{1} _{d}\mathrm{d}u\right) ^{*}(s)\right| ^{2}\mathrm{d}s\right. \right. \\&\quad \left. \left. -\int _{0}^{t}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/r}(B_{u}^{H})\varvec{1} _{d}\mathrm{d}u\right) ^{*}(s)\right| ^{2}\mathrm{d}s\right) ^{2}\right] ^{1/2} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} E&:=\sup _{r\ge 1}E\left[ \exp \left\{ 8\sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}-\int _{t_{j-1}}^{t_{j}}\varphi _{1/n}(B_{s}^{H})\varvec{1}_{d}\mathrm {d}s\right\rangle \right\} \right] ^{1/8} \\ {}&\quad \cdot E\left[ \exp \left\{ 8\sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}-\int _{t_{j-1}}^{t_{j}}\delta _{0}(B_{s}^{H})\mathrm {d}s\varvec{1}_{d}\right\rangle \right\} \right] ^{1/8} \\ {}&\quad \cdot E\left[ \exp \left\{ \mu _{1}\int _{0}^{t}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(B_{u}^{H})\varvec{1}_{d}\mathrm {d}u\right) ^{*}(s)\right| ^{2}\mathrm {d}s\right\} \right] ^{1/8} \\ {}&\quad \cdot E\left[ \exp \left\{ \mu _{2}\int _{0}^{t}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/r}(B_{u}^{H})\varvec{1}_{d}\mathrm {d}u\right) ^{*}(s)\right| ^{2}\mathrm {d}s\right\} \right] ^{1/16} \end{aligned} \end{aligned}$$

for constants \(C,\mu _{1},\mu _{2}>0\).

By inspecting the proof of Proposition 5.4 once again, we know that

$$\begin{aligned} I_{3}=I_{3}(n)\longrightarrow 0\quad \text {for}\quad n\longrightarrow \infty \end{aligned}$$

and

$$\begin{aligned} I_{2}=I_{2}(n)\longrightarrow 0\quad \text {for}\quad n\longrightarrow \infty . \end{aligned}$$

Since \(L_{t}^{x}(B^{H},\varepsilon )\) converges to \(L_{t}^{x}(B^{H})\) in \( L^{p}(\Omega )\) for all \(p\ge 1\), we also conclude that

$$\begin{aligned} I_{1}=I_{1}(n)\longrightarrow 0\quad \text {for}\quad n\longrightarrow \infty . \end{aligned}$$

On the other hand, we obtain from (4.2), Theorem 4.4 and Lemma 5.3 that

$$\begin{aligned} E=E(n)\le K \end{aligned}$$

for all n, where K is a constant.

Denote by \(\overline{\zeta }_{T}\) the Radon–Nikodym density associated with the \(\overline{P}\)-fractional Brownian motion \(\overline{B}_{t}^{H},0\le t\le T\). By assumption, \(X_{t}=x+\overline{B}_{t}^{H}\) is our weak solution to (5.1) under P with \(\frac{dP}{d\overline{P}}=\overline{\zeta } _{T}\). Let \(\overline{W}_{t},0\le t\le T\) be the \(\overline{P}\)-Wiener process in the stochastic integral representation of \(\overline{B} _{t}^{H},0\le t\le T\). Since measurable functionals of \(\overline{W} _{t},0\le t\le T\) under \(\overline{P}\) coincide in law with those of \( W_{t},0\le t\le T\) under P, we see that

$$\begin{aligned}&E\left[ \eta (B_{t}^{H})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}-\int _{t_{j-1}}^{t_{j}}\delta _{0}(B_{s}^{H})\varvec{1}_{d}\mathrm{d}s\right\rangle \right\} \zeta _{T}\right] \\&\quad =E_{\overline{P}}\left[ \eta (\overline{B}_{t}^{H})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},\overline{B}_{t_{j}}^{H}-\overline{B }_{t_{j-1}}^{H}-\int _{t_{j-1}}^{t_{j}}\delta _{0}(\overline{B}_{s}^{H}) \varvec{1}_{d}\mathrm{d}s\right\rangle \right\} \overline{\zeta }_{T}\right] \\&\quad =E_{P}\left[ \eta (\overline{B}_{t}^{H})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},\overline{B}_{t_{j}}^{H}-\overline{B}_{t_{j-1}}^{H}- \int _{t_{j-1}}^{t_{j}}\delta _{0}(\overline{B}_{s}^{H})\varvec{1} _{d}\mathrm{d}s\right\rangle \right\} \right] \\&\quad =E_{P}\left[ \eta (X_{t})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}\right\rangle \right\} \right] \\&\quad =E_{P}\left[ E_{P}\left[ \eta (X_{t})\vert \mathcal {F}_{t}\right] \exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}\right\rangle \right\} \right] . \end{aligned}$$

So we see that

$$\begin{aligned}&E\left[ \eta (X_{t}^{n})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}\right\rangle \right\} \right] \\&\quad \longrightarrow E\left[ \eta (B_{t}^{H})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}-\int _{t_{j-1}}^{t_{j}}\delta _{0}(B_{s}^{H})\varvec{1}_{d}\mathrm{d}s\right\rangle \right\} \cdot \zeta _{T}\right] \\&\quad =E\left[ E[\eta (X_{t})\left| \mathcal {F}_{t}\right] \exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}\right\rangle \right\} \right] \end{aligned}$$

for \(n\longrightarrow \infty ,\) which completes the proof. \(\square \)

Remark 5.6

In fact, we can also show that Lemma 5.5 holds true for \(\eta =\mathrm {Id}\). To see this, let us adopt the notation of the proof of Lemma 5.5 and let \(\eta _{m}:\mathbb {R}^{d}\longrightarrow \mathbb {R}^{d},m\ge 1\) be a sequence of bounded continuous functions such that

$$\begin{aligned} E[\left| \eta _{m}(B_{t}^{H})-B_{t}^{H}\right| ^{2}]\underset{m\longrightarrow \infty }{ \longrightarrow }0. \end{aligned}$$

Then, using Girsanov’s theorem we find (without loss of generality for \(x=0\)) that

$$\begin{aligned}&E\left[ (\eta _{m}(X_{t}^{n})-X_{t}^{n})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}\right\rangle \right\} \right] \\&\quad =E\left[ (\eta _{m}(X_{t}^{n})-X_{t}^{n})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},X_{t_{j}}^{n}-X_{t_{j-1}}^{n}-\int _{t_{j-1}}^{t_{j}}\varphi _{1/n}(X_{s}^{n})\varvec{1}_{d}\mathrm{d}s\right\rangle \right\} \right] \\&\quad =E\left[ (\eta _{m}(B_{t}^{H})-B_{t}^{H})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}-\int _{t_{j-1}}^{t_{j}}\varphi _{1/n}(B_{s}^{H})\varvec{1}_{d}\mathrm{d}s\right\rangle \right\} \right. \\&\qquad \left. \cdot \mathcal {E}\left( \int _{0}^{t}K_{H}^{-1}(\int _{0}^{\cdot }\varphi _{1/n}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u)^{*}(s)\mathrm{d}W_{s}\right) \right] . \end{aligned}$$

Hence, Hölder’s inequality and the supermartingale property of Doléans–Dade exponentials yield

$$\begin{aligned}&\left| E\left[ (\eta _{m}(X_{t}^{n})-X_{t}^{n})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}\right\rangle \right\} \right] \right| \\&\quad \le E[\left| \eta _{m}(B_{t}^{H})-B_{t}^{H}\right| ^{2}]^{1/2}A_{1}A_{2}, \end{aligned}$$

where

$$\begin{aligned}&A_{1} =A_{1}(n) \\&\quad :=E\left[ \exp \left\{ 4\sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}-\int _{t_{j-1}}^{t_{j}}\varphi _{1/n}(B_{s}^{H})\varvec{1}_{d}\mathrm{d}s\right\rangle \right\} \right] ^{1/4} \end{aligned}$$

and

$$\begin{aligned}&A_{2} =A_{2}(n) \\&\quad :=E\left[ \exp \left( \mu \int _{0}^{t}\left| K_{H}^{-1}\left( \int _{0}^{\cdot }\varphi _{1/n}(B_{u}^{H})\varvec{1}_{d}\mathrm{d}u\right) ^{*}(s)\right| ^{2}\mathrm{d}s\right) \right] ^{1/4} \end{aligned}$$

for a constant \(\mu >0\). Then, as in the proof of Lemma 5.5 (i.e., with respect to the upper bound for \(E=E(n)\)) we can apply (4.2), Theorem 4.4 and Lemma 5.3 and observe that

$$\begin{aligned} \sup _{n\ge 1}A_{i}(n)<\infty ,\quad i=1,2, \end{aligned}$$

and hence

$$\begin{aligned} \sup _{n\ge 1} \left| E\left[ (\eta _{m}(X_{t}^{n})-X_{t}^{n})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}\right\rangle \right\} \right] \right| \underset{m\longrightarrow \infty }{\longrightarrow }0. \end{aligned}$$

Using Proposition 5.4, we can similarly show for a weak solution \((X^{x},B^{H})\) of (5.1) that

$$\begin{aligned} \left| E\left[ (\eta _{m}(X_{t}^{x})-X_{t}^{x})\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}\right\rangle \right\} \right] \right| \underset{ m\longrightarrow \infty }{\longrightarrow }0. \end{aligned}$$

So it follows from Lemma 5.5 that

$$\begin{aligned} X_{t}^{x,\varepsilon }\overset{\varepsilon \longrightarrow 0_{+}}{ \longrightarrow }E[X_{t}^{x}\left| \mathcal {F}_{t}\right] \end{aligned}$$

weakly in \(L^{2}(\Omega ,\mathcal {F}_{t},P)\).

We continue with the third step of our scheme. This is the most challenging part. For notational convenience, let us from now on assume that \(\alpha =1\) in (5.1) and that \(\varphi _{\varepsilon }'\) stands for the Jacobian of \(\varphi _{\varepsilon }\mathbf {1}_{d}\). The following result is based on a compactness criterion for subsets of \(L^2(\Omega )\) which is summarized in the Appendix.

Lemma 5.7

Assume \(H<\frac{1}{2(2+d)}\) and let \(\{\varphi _{\varepsilon }\}_{\varepsilon > 0}\) be the family of Gaussian kernels approximating Dirac’s delta function \(\delta _0\) in the sense of (5.6). Fix \(t\in [0,T]\) and denote by \(X_t^{\varepsilon }\) the corresponding solution of (5.1) obtained by replacing \(L_t(X^x)\) with \(\int _0^t \varphi _{\varepsilon }(X_s^{\varepsilon }) \mathrm{d}s\), \(\varepsilon > 0\). Then, there exists a \(\beta \in (0,1/2)\) such that

$$\begin{aligned} \sup _{\varepsilon > 0} \int _0^t \int _0^t \frac{E[\Vert D_\theta X_t^{\varepsilon } - D_{\theta '} X_t^{\varepsilon }\Vert ^2]}{|\theta ' - \theta |^{1+2\beta }} \mathrm{d}\theta ' \mathrm{d}\theta <\infty \end{aligned}$$

and

$$\begin{aligned} \sup _{\varepsilon > 0} \Vert D_{\cdot } X_t^{\varepsilon }\Vert _{L^2(\Omega \times [0,T],\mathbb {R}^{d\times d})} <\infty . \end{aligned}$$
(5.7)

Proof

Fix \(t\in [0,T]\) and take \(\theta ,\theta '\) such that \(0<\theta '<\theta < t\). Using the chain rule for the Malliavin derivative, see [35, Proposition 1.2.3], we have

$$\begin{aligned} D_{\theta } X_t^{\varepsilon } = K_H(t,\theta ) I_{d} + \int _{\theta }^t \varphi _{\varepsilon }'(X_s^{\varepsilon }) D_{\theta } X_s^{\varepsilon } \mathrm{d}s \end{aligned}$$

P-a.s. for all \(0\le \theta \le t\) where \(\varphi _{\varepsilon }'(z) = \left( \frac{\partial }{\partial z_j} \varphi _{\varepsilon }^{(i)} (z) \right) _{i,j=1,\dots , d}\) denotes the Jacobian matrix of \(\varphi _{\varepsilon }\) and \(I_d\) the identity matrix in \(\mathbb {R}^{d\times d}\). Thus, we have

$$\begin{aligned} D_{\theta '} X_t^{\varepsilon } - D_{\theta } X_t^{\varepsilon }&= K_H(t,\theta ')I_d- K_H(t,\theta )I_d\\&\quad +\int _{\theta '}^t \varphi _{\varepsilon }'(X_s^{\varepsilon }) D_{\theta '} X_s^{\varepsilon } \mathrm{d}s - \int _{\theta }^t \varphi _{\varepsilon }'(X_s^{\varepsilon }) D_{\theta } X_s^{\varepsilon } \mathrm{d}s\\&= K_H(t,\theta ')I_d- K_H(t,\theta ) I_d\\&\quad +\int _{\theta '}^{\theta } \varphi _{\varepsilon }'(X_s^{\varepsilon }) D_{\theta '} X_s^{\varepsilon } \mathrm{d}s+\int _{\theta }^t \varphi _{\varepsilon }'(X_s^{\varepsilon }) (D_{\theta '} X_s^{\varepsilon } - D_{\theta }X_s^{\varepsilon }) \mathrm{d}s\\&= K_H(t,\theta ')I_d - K_H(t,\theta ) I_d + D_{\theta '}X_{\theta }^{\varepsilon } - K_H (\theta ,\theta ')I_d \\&\quad + \int _{\theta }^t \varphi _{\varepsilon }'(X_s^{\varepsilon })(D_{\theta '} X_s^{\varepsilon } - D_{\theta }X_s^{\varepsilon })\mathrm{d}s. \end{aligned}$$

Using Picard iteration applied to the above equation, we may write

$$\begin{aligned}&D_{\theta '} X_t^{\varepsilon } - D_{\theta } X_t^{\varepsilon } = K_H(t,\theta ')I_d - K_H (t,\theta )I_d\\&\qquad + \sum _{m=1}^{\infty } \int _{\Delta _{\theta ,t}^m} \prod _{j=1}^m \varphi _{\varepsilon }'(X_{s_j}^{\varepsilon }) \left( K_H(s_m,\theta ')I_d - K_H (s_m,\theta )I_d\right) \mathrm{d}s_m \cdots \mathrm{d}s_1\\&\qquad + \left( I_d + \sum _{m=1}^{\infty } \int _{\Delta _{\theta ,t}^m} \prod _{j=1}^m \varphi _{\varepsilon }'(X_{s_j}^{\varepsilon })\mathrm{d}s_m \cdots \mathrm{d}s_1 \right) \left( D_{\theta '} X_{\theta }^{\varepsilon } - K_H(\theta ,\theta ') I_d\right) . \end{aligned}$$

On the other hand, observe that one may again write

$$\begin{aligned} D_{\theta '} X_{\theta }^{\varepsilon } - K_H(\theta ,\theta ')I_d = \sum _{m=1}^{\infty } \int _{\Delta _{\theta ',\theta }^m} \prod _{j=1}^m \varphi _{\varepsilon }'(X_{s_j}^{\varepsilon }) (K_H(s_m,\theta ') I_d) \, \mathrm{d}s_m \cdots \mathrm{d}s_1. \end{aligned}$$

Altogether, we can write

$$\begin{aligned} D_{\theta '} X_t^{\varepsilon } - D_{\theta } X_t^{\varepsilon } = I_1(\theta ',\theta ) + I_2^{\varepsilon } (\theta ',\theta )+ I_3^{\varepsilon } (\theta ',\theta ), \end{aligned}$$

where

$$\begin{aligned} I_1(\theta ',\theta )&:= K_H(t,\theta ')I_d - K_H (t,\theta )I_d,\\ I_2^{\varepsilon }(\theta ',\theta )&:= \sum _{m=1}^{\infty } \int _{\Delta _{\theta ,t}^m} \prod _{j=1}^m \varphi _{\varepsilon }'(X_{s_j}^{\varepsilon }) \left( K_H(s_m,\theta ')I_d - K_H (s_m,\theta )I_d \right) \mathrm{d}s_m \cdots \mathrm{d}s_1,\\ I_3^{\varepsilon }(\theta ',\theta )&:=\left( I_d+ \sum _{m=1}^{\infty } \int _{\Delta _{\theta ,t}^m} \prod _{j=1}^m \varphi _{\varepsilon }'(X_{s_j}^{\varepsilon })\mathrm{d}s_m \cdots \mathrm{d}s_1 \right) \\&\quad \times \left( \sum _{m=1}^{\infty } \int _{\Delta _{\theta ',\theta }^m} \prod _{j=1}^m \varphi _{\varepsilon }'(X_{s_j}^{\varepsilon }) (K_H(s_m,\theta ')I_d) \mathrm{d}s_m \cdots \mathrm{d}s_1\right) . \end{aligned}$$

It follows from Lemma A.4 that

$$\begin{aligned} \int _0^t \int _0^t \frac{\Vert I_1(\theta ',\theta )\Vert _{L^2(\Omega )}^2}{|\theta '-\theta |^{1+2\beta }}\mathrm{d}\theta \mathrm{d}\theta ' = \int _0^t \int _0^t \frac{|K_H(t,\theta ')-K_H(t,\theta )|^2}{|\theta '-\theta |^{1+2\beta }}\mathrm{d}\theta \mathrm{d}\theta '<\infty \end{aligned}$$
(5.8)

for a suitably small \(\beta \in (0,1/2)\).

Let us continue with the term \(I_2^{\varepsilon }(\theta ',\theta )\). Then, Girsanov’s theorem, the Cauchy–Schwarz inequality and Lemma 5.3 imply

$$\begin{aligned}&E[\Vert I_2^{\varepsilon }(\theta ',\theta ) \Vert ^2]\\&\quad \le C E\left[ \left\| \sum _{m=1}^{\infty } \int _{\Delta _{\theta ,t}^m} \prod _{j=1}^m \varphi _{\varepsilon }'(x+B_{s_j}^H) \left( K_H(s_m,\theta ')I_d - K_H (s_m,\theta )I_d \right) \mathrm{d}s_m \ldots \mathrm{d}s_1\right\| ^4 \right] ^{1/2}, \end{aligned}$$

where \(C>0\) is an upper bound from Lemma 5.3.

Let \(\Vert \cdot \Vert \) denote the matrix norm in \(\mathbb {R}^{d\times d}\) given by \(\Vert A\Vert = \sum _{i,j=1}^d |a_{ij}|\) for a matrix \(A=\{a_{ij}\}_{i,j=1,\dots ,d}\). Taking this matrix norm and expectation, we have

$$\begin{aligned}&E[\Vert I_2^{\varepsilon }(\theta ',\theta ) \Vert ^2] \\&\quad \le C\left( \sum _{m=1}^{\infty } \sum _{i,j=1}^d \sum _{l_1,\dots , l_{m-1}=1}^d \Bigg \Vert \int _{\Delta _{\theta ,t}^m} \frac{\partial }{\partial x_{l_1}} \varphi _{\varepsilon }^{(i)} (x+B_{s_1}^H) \frac{\partial }{\partial x_{l_2}}\varphi _{\varepsilon }^{(l_1)} (x+B_{s_2}^H) \cdots \right. \\&\qquad \left. \cdots \frac{\partial }{\partial x_j}\varphi _{\varepsilon }^{(l_{m-1})} (x+B_{s_m}^H) \left( K_H(s_m,\theta ') - K_H (s_m,\theta ) \right) \mathrm{d}s_m \cdots \mathrm{d}s_1\Bigg \Vert _{L^4(\Omega , \mathbb {R})}\right) ^{2} . \end{aligned}$$

Now, we concentrate on the expression

$$\begin{aligned} J_2^{\varepsilon }(\theta ',\theta )&:= \int _{\Delta _{\theta ,t}^m} \frac{\partial }{\partial x_{l_1}} \varphi _{\varepsilon }^{(i)} (x+B_{s_1}^H) \cdots \frac{\partial }{\partial x_j}\varphi _{\varepsilon }^{(l_{m-1})} (x+B_{s_m}^H)\nonumber \\&\quad \left( K_H(s_m,\theta ') - K_H (s_m,\theta ) \right) \mathrm{d}s_m \cdots \mathrm{d}s_1. \end{aligned}$$
(5.9)

Then, shuffling \(J_2^{\varepsilon }(\theta ',\theta )\) as shown in (2.1), one can write \((J_2^{\varepsilon }(\theta ',\theta ))^2\) as a sum of at most \(2^{2m}\) summands of length 2m of the form

$$\begin{aligned} \int _{\Delta _{\theta ,t}^{2m}} g_1^{\varepsilon } (B_{s_1}^H) \cdots g_{2m}^{\varepsilon } (B_{s_{2m}}^H) \mathrm{d}s_{2m} \cdots \mathrm{d}s_1, \end{aligned}$$
(5.10)

where for each \(l=1,\dots , 2m\),

$$\begin{aligned} g_l^{\varepsilon }(B_{\cdot }^H) \in \left\{ \frac{\partial }{\partial x_j} \varphi _{\varepsilon }^{(i)} (x+B_{\cdot }^H), \frac{\partial }{\partial x_j} \varphi _{\varepsilon }^{(i)} (x+B_{\cdot }^H)\left( K_H(\cdot ,\theta ') - K_H (\cdot ,\theta ) \right) , \, i,j=1,\dots ,d\right\} . \end{aligned}$$
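To illustrate the shuffle mechanism in the simplest case \(m=1\) (a side remark; the general statement is (2.1)): for an integrable function \(g\),

$$\begin{aligned} \left( \int _{\theta }^{t}g(B_{s}^{H})\,\mathrm{d}s\right) ^{2}=2\int _{\theta }^{t}\int _{s_{1}}^{t}g(B_{s_{1}}^{H})\,g(B_{s_{2}}^{H})\,\mathrm{d}s_{2}\,\mathrm{d}s_{1}, \end{aligned}$$

so the square of a simplex integral of length 1 is a sum of \(2\le 2^{2}\) simplex integrals of length 2, in line with the bound \(2^{2m}\) above.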

Repeating this argument once again, we find that \((J_2^{\varepsilon }(\theta ',\theta ))^4\) can be expressed as a sum of, at most, \(2^{8m}\) summands of length 4m of the form

$$\begin{aligned} \int _{\Delta _{\theta ,t}^{4m}} g_1^{\varepsilon } (B_{s_1}^H) \cdots g_{4m}^{\varepsilon } (B_{s_{4m}}^H) \mathrm{d}s_{4m} \cdots \mathrm{d}s_1, \end{aligned}$$
(5.11)

where for each \(l=1,\dots , 4m\),

$$\begin{aligned} g_l^{\varepsilon }(B_{\cdot }^H)&\in \left\{ \frac{\partial }{\partial x_j} \varphi _{\varepsilon }^{(i)} (x+B_{\cdot }^H), \frac{\partial }{\partial x_j} \varphi _{\varepsilon }^{(i)} (x+B_{\cdot }^H)( K_H(\cdot ,\theta ') \right. \\&\quad \left. - K_H (\cdot ,\theta )), \, i,j=1,\dots ,d\right\} . \end{aligned}$$

It is important to note that the function \(\left( K_H(\cdot ,\theta ') - K_H (\cdot ,\theta ) \right) \) appears only once in term (5.9) and hence only four times in term (5.11). So there are indices \(j_1,\dots , j_4 \in \{1,\dots , 4m\}\) such that we can write (5.11) as

$$\begin{aligned} \int _{\Delta _{\theta ,t}^{4m}} \left( \prod _{j=1}^{4m} g_j^{\varepsilon }(B_{s_j}^H)\right) \prod _{i=1}^4 \left( K_H(s_{j_i},\theta ') - K_H (s_{j_i},\theta )\right) \mathrm{d}s_{4m} \cdots \mathrm{d}s_1, \end{aligned}$$

where

$$\begin{aligned} g_l^{\varepsilon }(B_{\cdot }^H) \in \left\{ \frac{\partial }{\partial x_j} \varphi _{\varepsilon }^{(i)} (x+B_{\cdot }^H), \, i,j=1,\dots ,d\right\} , \quad l=1,\dots ,4m. \end{aligned}$$

The latter enables us to use the estimate from Proposition 3.2 with \(\sum _{j=1}^{4m} \varepsilon _j=4\), \(\sum _{l=1}^{d}\alpha _{[\sigma (j)]}^{(l)}=1\) for all \(j\), \(\left| \alpha \right| =4m\), and Remark 3.4. Thus, we obtain that

$$\begin{aligned} E[(J_2^{\varepsilon }(\theta ',\theta ))^{4}] \le \left( \frac{\theta -\theta '}{\theta \theta '}\right) ^{4\gamma } \theta ^{4\left( H-\frac{1}{2}-\gamma \right) } C^{4m} \Vert \varphi _{\varepsilon }\Vert _{L^{1}(\mathbb {R}^d)}^{4m} A_m^{\gamma }(H,d, |t-\theta |) \end{aligned}$$

whenever \(H<\frac{1}{2(2+d)}\) and \(\gamma \in (0,H)\), where

$$\begin{aligned} A_m^{\gamma }(H,d,\left| t-\theta \right| ):=\frac{((8m)!)^{1/4}(t- \theta )^{-H(4m(d+2))-4(H-\frac{1}{2}-\gamma )+4m}}{\Gamma (-H(d+2)8m+8(H- \frac{1}{2}-\gamma )+8m)^{1/2}}. \end{aligned}$$

Note that \(\left\| \varphi _{\varepsilon }\right\| _{L^{1}(\mathbb {R}^{d})}=1\).
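For concreteness, if (as is standard) \(\varphi _{\varepsilon }\) denotes the Gaussian (heat) kernel

$$\begin{aligned} \varphi _{\varepsilon }(z)=(2\pi \varepsilon )^{-d/2}\exp \left( -\frac{\left| z\right| ^{2}}{2\varepsilon }\right) ,\quad z\in \mathbb {R}^{d}, \end{aligned}$$

then indeed \(\int _{\mathbb {R}^{d}}\varphi _{\varepsilon }(z)\mathrm{d}z=1\) for every \(\varepsilon >0\), so the factor \(\Vert \varphi _{\varepsilon }\Vert _{L^{1}(\mathbb {R}^{d})}^{4m}\) contributes no growth in \(\varepsilon \).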

Altogether, we see that

$$\begin{aligned}&E\left[ \Vert I_2^{\varepsilon } (\theta ',\theta )\Vert ^2 \right] \\&\quad \le \left( \frac{\theta -\theta '}{\theta \theta '}\right) ^{2\gamma } \theta ^{2\left( H-\frac{1}{2}-\gamma \right) } \left( \sum _{m=1}^\infty d^{m+1} C^m \Vert \varphi _{\varepsilon }\Vert _{L^{1}(\mathbb {R}^d)}^m A_m^{\gamma }(H,d,|T|)^{1/4} \right) ^2. \end{aligned}$$

So we can find a constant \(C>0\) such that

$$\begin{aligned} \sup _{\varepsilon > 0}E\left[ \Vert I_2^{\varepsilon } (\theta ',\theta )\Vert ^2 \right] \le C\left( \frac{\theta -\theta '}{\theta \theta '}\right) ^{2\gamma } \theta ^{2\left( H-\frac{1}{2}-\gamma \right) } \end{aligned}$$

for \(\gamma \in (0,H)\) provided that \(H<\frac{1}{2(2+d)}\). It is easy to see that we can choose \(\gamma \in (0,H)\) such that there is a suitably small \(\beta \in (0,1/2)\), \(0<\beta< \gamma<H<1/2\) so that it follows from the proof of Lemma A.4 that

$$\begin{aligned} \int _0^t \int _0^t \left| \frac{\theta -\theta '}{\theta \theta '}\right| ^{2\gamma } |\theta |^{2\left( H-\frac{1}{2}-\gamma \right) } |\theta -\theta '|^{-1-2\beta } \mathrm{d}\theta ' \mathrm{d}\theta <\infty , \end{aligned}$$
(5.12)

for every \(t\in (0,T]\).
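For completeness, the finiteness of (5.12) can also be checked by a direct computation (a sketch; \(B(\cdot ,\cdot )\) denotes the Beta function). On the region \(0<\theta '<\theta \),

$$\begin{aligned} \int _{0}^{\theta }(\theta -\theta ')^{2\gamma -1-2\beta }(\theta ')^{-2\gamma }\mathrm{d}\theta '=B(2\gamma -2\beta ,1-2\gamma )\,\theta ^{-2\beta }, \end{aligned}$$

provided \(0<\beta<\gamma <1/2\), so this part of the double integral reduces to a constant times \(\int _{0}^{t}\theta ^{2H-1-4\gamma -2\beta }\mathrm{d}\theta \), which is finite as soon as \(4\gamma +2\beta <2H\); one admissible choice is \(\gamma \in (0,H/2)\) and \(\beta <\min (\gamma ,H-2\gamma )\). The region \(\theta <\theta '\) is handled analogously.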

We now turn to the term \(I_3^{\varepsilon }(\theta ',\theta )\). Observe that \(I_3^{\varepsilon } (\theta ',\theta )\) is the product of two factors, the first of which can be bounded in expectation uniformly in \(\theta ,t \in [0,T]\). This can be shown by following meticulously the same steps as for \(I_2^{\varepsilon }(\theta ',\theta )\) and observing that, by virtue of Proposition 3.3 with \(\varepsilon _j = 0\) for all j, the singularity in \(\theta \) vanishes.

Once again, Girsanov’s theorem, repeated applications of the Cauchy–Schwarz inequality and Lemma 5.3 lead to

$$\begin{aligned}&E [\Vert I_3^{\varepsilon }(\theta ',\theta )\Vert ^2] \le C \left\| I_d+ \sum _{m=1}^{\infty } \int _{\Delta _{\theta ,t}^m} \prod _{j=1}^m \varphi _{\varepsilon }'(x+B_{s_j}^H)\mathrm{d}s_m \cdots \mathrm{d}s_1 \right\| _{L^8(\Omega , \mathbb {R}^{d\times d}) }^2 \\&\quad \times \left\| \sum _{m=1}^{\infty } \int _{\Delta _{\theta ',\theta }^m} \prod _{j=1}^m \varphi _{\varepsilon }'(x+B_{s_j}^H) K_H(s_m,\theta ') \mathrm{d}s_m \cdots \mathrm{d}s_1\right\| _{L^4(\Omega , \mathbb {R}^{d\times d}) }^2, \end{aligned}$$

where \(C>0\) denotes an upper bound obtained from Lemma 5.3.

Again, we have

$$\begin{aligned}&E [\Vert I_3^{\varepsilon }(\theta ',\theta )\Vert ^2] \le C \Bigg (1+ \sum _{m=1}^{\infty } \sum _{i,j=1}^d \sum _{l_1,\dots , l_{m-1}=1}^d \Bigg \Vert \int _{\Delta _{\theta ,t}^m} \frac{\partial }{\partial x_{l_1}} \varphi _{\varepsilon }^{(i)}(x+B_{s_1}^H) \cdots \\&\quad \cdots \frac{\partial }{\partial x_j} \varphi _{\varepsilon }^{(l_{m-1})} (x+B_{s_m}^H) \mathrm{d}s_m \cdots \mathrm{d}s_1 \Bigg \Vert _{L^8(\Omega , \mathbb {R})} \Bigg )^2\\&\quad \times \Bigg ( \sum _{m=1}^{\infty }\sum _{i,j=1}^d \sum _{l_1,\dots , l_{m-1}=1}^d \Bigg \Vert \int _{\Delta _{\theta ',\theta }^m} \frac{\partial }{\partial x_{l_1}} \varphi _{\varepsilon }^{(i)}(x+B_{s_1}^H) \cdots \\&\quad \cdots \frac{\partial }{\partial x_j} \varphi _{\varepsilon }^{(l_{m-1})} (x+B_{s_m}^H) K_H(s_m,\theta ') \mathrm{d}s_m \cdots \mathrm{d}s_1\Bigg \Vert _{L^4(\Omega , \mathbb {R})} \Bigg )^2. \end{aligned}$$

Using exactly the same reasoning as for \(I_2^{\varepsilon }(\theta ',\theta )\), we see that the first factor can be bounded by some finite constant C depending on H, d, T, i.e.,

$$\begin{aligned}&E [\Vert I_3^{\varepsilon }(\theta ',\theta )\Vert ^2] \le C \Bigg ( \sum _{m=1}^{\infty }\sum _{i,j=1}^d \sum _{l_1,\dots , l_{m-1}=1}^d \Bigg \Vert \int _{\Delta _{\theta ',\theta }^m} \frac{\partial }{\partial x_{l_1}} \varphi _{\varepsilon }^{(i)}(x+B_{s_1}^H) \cdots \\&\quad \cdots \frac{\partial }{\partial x_j} \varphi _{\varepsilon }^{(l_{m-1})} (x+B_{s_m}^H) K_H(s_m,\theta ') \mathrm{d}s_m \cdots \mathrm{d}s_1\Bigg \Vert _{L^4(\Omega , \mathbb {R})} \Bigg )^2. \end{aligned}$$

As before, we pay attention to

$$\begin{aligned} J_3^{\varepsilon }(\theta ',\theta ) := \int _{\Delta _{\theta ',\theta }^m} \frac{\partial }{\partial x_{l_1}} \varphi _{\varepsilon }^{(i)} (x+B_{s_1}^H) \cdots \frac{\partial }{\partial x_j}\varphi _{\varepsilon }^{(l_{m-1})} (x+B_{s_m}^H) K_H(s_m,\theta ') \mathrm{d}s_m \cdots \mathrm{d}s_1. \end{aligned}$$
(5.13)

We can express \((J_3^{\varepsilon }(\theta ',\theta ))^4\) as a sum of, at most, \(2^{8m}\) summands of length 4m of the form

$$\begin{aligned} \int _{\Delta _{\theta ',\theta }^{4m}} g_1^{\varepsilon } (B_{s_1}^H) \cdots g_{4m}^{\varepsilon } (B_{s_{4m}}^H) \mathrm{d}s_{4m} \cdots \mathrm{d}s_1, \end{aligned}$$
(5.14)

where for each \(l=1,\dots , 4m\),

$$\begin{aligned} g_l^{\varepsilon }(B_{\cdot }^H) \in \left\{ \frac{\partial }{\partial x_j} \varphi _{\varepsilon }^{(i)} (x+B_{\cdot }^H), \frac{\partial }{\partial x_j} \varphi _{\varepsilon }^{(i)} (x+B_{\cdot }^H) K_H(\cdot ,\theta '), \, i,j=1,\dots ,d\right\} , \end{aligned}$$

where the factor \(K_H(\cdot ,\theta ')\) is repeated four times in the integrand of (5.14). Now, we can simply apply Proposition 3.3 with \(\sum _{j=1}^{4m}\varepsilon _j=4\), \(\sum _{l=1}^{d}\alpha _{[\sigma (j)]}^{(l)}=1\) for all \(j, \left| \alpha \right| =4m\) and Remark 3.4 in order to get

$$\begin{aligned} E[(J_3^{\varepsilon }(\theta ',\theta ))^4] \le \theta ^{4\left( H-\frac{1}{2}\right) } C^{4m} \Vert \varphi _{\varepsilon }\Vert _{L^{1}(\mathbb {R}^d)}^{4m} A_m^{0}(H,d, |\theta -\theta '|), \end{aligned}$$

whenever \(H<\frac{1}{2(2+d)}\), where \(A_m^{0}(H,d, |\theta -\theta '|)\) is defined as \(A_m^{\gamma }\) above with \(\gamma =0\).

As a result,

$$\begin{aligned} E[\Vert I_3^{\varepsilon }(\theta ',\theta )\Vert ^2] \le \theta ^{2\left( H-\frac{1}{2}\right) }\left( \sum _{m=1}^\infty d^{m+1} C^m \Vert \varphi _{\varepsilon }\Vert _{L^{1}(\mathbb {R}^d)}^{m} A_m^0(H,d,|\theta -\theta '|)^{1/4}\right) ^2. \end{aligned}$$

Since the exponent of \(|\theta -\theta '|\) appearing in \(A_m^0(H,d,|\theta -\theta '|)\) is strictly positive by assumption, we can find a small enough \(\delta >0\) and a constant \(C:=C_{H,d,T}>0\) such that

$$\begin{aligned} \sup _{\varepsilon >0} E[\Vert I_3^{\varepsilon }(\theta ',\theta )\Vert ^2] \le C |\theta |^{2\left( H-\frac{1}{2}\right) }|\theta - \theta '|^{\delta } \end{aligned}$$

provided \(H<\frac{1}{2(2+d)}\). Then again, it is easy to see that we can choose \(\beta \in (0,1/2)\) small enough so that it follows from the proof of Lemma A.4 that

$$\begin{aligned} \int _0^t \int _0^t |\theta |^{2\left( H-\frac{1}{2}\right) }|\theta - \theta '|^{\delta -1 -2\beta } \mathrm{d}\theta ' \mathrm{d}\theta <\infty , \end{aligned}$$
(5.15)

for every \(t\in [0,T]\).

Altogether, taking a suitable \(\beta \) so that (5.8), (5.12) and (5.15) are finite, we have

$$\begin{aligned} \sup _{\varepsilon > 0} \int _0^t \int _0^t \frac{E[\Vert D_{\theta '} X_t^{\varepsilon } - D_{\theta } X_t^{\varepsilon } \Vert ^2]}{|\theta ' - \theta |^{1+2\beta }} \mathrm{d}\theta ' \mathrm{d}\theta <\infty . \end{aligned}$$

Similar computations show that

$$\begin{aligned} \sup _{\varepsilon >0} \Vert D_{\cdot } X_t^{\varepsilon }\Vert _{L^2(\Omega \times [0,T],\mathbb {R}^{d\times d})} < \infty . \end{aligned}$$

\(\square \)

Corollary 5.8

Let \(\{X_t^{\varepsilon }\}_{\varepsilon >0}\) be the family of approximating solutions of (5.1) in the sense of (5.6). Then, for every \(t\in [0,T]\) and every bounded continuous function \(h:\mathbb {R}^d \rightarrow \mathbb {R}\) we have

$$\begin{aligned} h(X_t^{n}) \xrightarrow {n \rightarrow \infty } h(E\left[ X_t |\mathcal {F}_t \right] ) \end{aligned}$$

strongly in \(L^2(\Omega ; \mathcal {F}_t)\). In addition, \(E\left[ X_t|\mathcal {F}_t\right] \) is Malliavin differentiable for every \(t\in [0,T]\).

Proof

This is an immediate consequence of the relative compactness obtained in Lemma 5.7. By Lemma 5.5 in connection with Remark 5.6, we can identify the limit of \(X_{t}^{n}\) as \(E[X_{t}| \mathcal {F}_{t}]\), and the convergence then holds for any bounded continuous function as well. The Malliavin differentiability of \(E[X_{t}|\mathcal {F} _{t}]\) follows by taking \(h=\mathrm {Id}\) and using estimate (5.7) together with [35, Proposition 1.2.3]. \(\square \)

Finally, in the fourth step we prove the main result of this section.

Proof of Theorem 5.1

It remains to prove that \(X_t\) is \(\mathcal {F}_t\)-measurable for every \(t\in [0,T]\); it then follows that there exists a strong solution in the usual sense that is Malliavin differentiable. Indeed, let h be a globally Lipschitz continuous function. Then, by Corollary 5.8 we have that

$$\begin{aligned} h(X_t^{n}) \rightarrow h(E[X_t|\mathcal {F}_t]), \quad P-\mathrm{a.s.} \end{aligned}$$

as \(n\rightarrow \infty \).

On the other hand, by Lemma 5.5 we also have

$$\begin{aligned} h (X_t^{n}) \rightarrow E\left[ h(X_t)|\mathcal {F}_t\right] \end{aligned}$$

weakly in \(L^2(\Omega ;\mathcal {F}_t)\) as \(n\rightarrow \infty \). By the uniqueness of the limit, we immediately have

$$\begin{aligned} h\left( E[X_t|\mathcal {F}_t] \right) = E\left[ h(X_t)|\mathcal {F}_t\right] , \quad P-\mathrm{a.s.} \end{aligned}$$

which implies that \(X_t\) is \(\mathcal {F}_t\)-measurable for every \(t\in [0,T]\).
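A standard way to make the last implication explicit (a sketch; the specific choice of \(h\) is ours) is via the conditional Jensen inequality: take the globally Lipschitz, strictly convex function

$$\begin{aligned} h(x)=\sqrt{1+\left| x\right| ^{2}},\quad x\in \mathbb {R}^{d}. \end{aligned}$$

Then \(E[h(X_t)|\mathcal {F}_t]\ge h(E[X_t|\mathcal {F}_t])\), with equality if and only if \(X_t=E[X_t|\mathcal {F}_t]\) \(P\)-a.s., which is precisely the \(\mathcal {F}_t\)-measurability of \(X_t\).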

Let us finally show that our strong solution has a continuous modification. We observe that

$$\begin{aligned}&E[\left| X_{t}^{x}-X_{s}^{x}\right| ^{m}] \\&\quad \le C_{d,m}\left( E\left[ \left( \int _{s}^{t}\delta _{0}(X_{u}^{x})\mathrm{d}u\right) ^{m}\right] +E\left[ \left| B_{t}^{H}-B_{s}^{H}\right| ^{m}\right] \right) \\&\quad \le C_{d,m}\left( E\left[ \left( \int _{s}^{t}\delta _{0}(X_{u}^{x})\mathrm{d}u\right) ^{m}\right] +\left| t-s\right| ^{mH}\right) . \end{aligned}$$

On the other hand, we have that

$$\begin{aligned} E\left[ \left( \int _{s}^{t}\delta _{0}(X_{u}^{x})\mathrm{d}u\right) ^{m}\right] \le E\left[ \left( \int _{s}^{t}\delta _{0}(B_{u}^{H}+x)\mathrm{d}u\right) ^{2m}\right] ^{1/2}E[X^{2}]^{1/2}, \end{aligned}$$

where X is the Radon–Nikodym derivative as constructed in Proposition 5.4. Further, we know from (4.2) and a similar estimate that

$$\begin{aligned} E\left[ \left( \int _{s}^{t}\delta _{0}(B_{u}^{H}+x)\mathrm{d}u\right) ^{2m}\right] ^{1/2}\le C_{d,m,H}\left| t-s\right| ^{\frac{m}{2}(1-Hd)}. \end{aligned}$$

So

$$\begin{aligned} E[\left| X_{t}^{x}-X_{s}^{x}\right| ^{m}]\le C(\left| t-s\right| ^{\frac{m}{2}(1-Hd)}+\left| t-s\right| ^{mH}),\quad s\le t,\;m\ge 1, \end{aligned}$$

which entails by Kolmogorov’s lemma the existence of a continuous modification of \(X_{\cdot }^{x}\). \(\square \)
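The Hölder regularity implicit in this estimate can be made explicit (a side remark, not needed for the statement). Since \(H<\frac{1}{2(2+d)}\) implies \(H<\frac{1-Hd}{2}\), the bound reads, for \(\left| t-s\right| \le 1\),

$$\begin{aligned} E[\left| X_{t}^{x}-X_{s}^{x}\right| ^{m}]\le 2C\left| t-s\right| ^{mH}, \end{aligned}$$

and Kolmogorov’s lemma then yields, for \(m>1/H\), a modification with \(\gamma \)-Hölder continuous paths for every \(\gamma <H-\frac{1}{m}\); letting \(m\longrightarrow \infty \) gives Hölder continuity of any order \(\gamma <H\), the same regularity as the driving noise \(B_{\cdot }^{H}\).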

Remark 5.9

In Theorem 5.1, we have constructed strong solutions with respect to probability measures P with \(\frac{dP}{d\overline{P}}=\overline{ \zeta }_{T}\), where \(\overline{\zeta }_{T}\) is the Radon–Nikodym derivative associated with a \(\overline{P}\)-fractional Brownian motion \(\overline{B} _{t}^{H},0\le t\le T\) in Proposition 5.4 (see also Lemma 5.5). In order to obtain strong solutions with respect to arbitrary measures \(\widetilde{P}\), we can proceed as follows (without loss of generality for \(\alpha =1\)): Since \(X_{\cdot }^{n},n\ge 1\) (approximating sequence) and \(X_{\cdot }\) are strong solutions with respect to P, there exist progressively measurable functionals \(\Phi _{n}(t,\cdot ),n\ge 1\) and \(\Phi (t,\cdot )\) (on the space of continuous functions) such that

$$\begin{aligned} X_{t}^{n}=\Phi _{n}(t,B_{\cdot }^{H}),n\ge 1,X_{t}=\Phi (t,B_{\cdot }^{H}). \end{aligned}$$

For a \(\widetilde{P}\)-fractional Brownian motion \(\widetilde{B}_{t},0\le t\le T\), define the processes

$$\begin{aligned} \widetilde{X}_{t}^{n}=\Phi _{n}(t,\widetilde{B}_{\cdot }^{H}),n\ge 1, \widetilde{X}_{t}=\Phi (t,\widetilde{B}_{\cdot }^{H}),0\le t\le T. \end{aligned}$$

Then, we see that

$$\begin{aligned}&E_{\widetilde{P}}\left[ \left| \widetilde{X}_{t}^{n}-x-\int _{0}^{t}\varphi _{1/n}(\widetilde{X}_{s}^{n})\mathbf {1}_{d}\mathrm{d}s-\widetilde{B} _{t}^{H}\right| ^{2}\right] \\&\quad =E_{\widetilde{P}}\left[ \left| \Phi _{n}(t,\widetilde{B}_{\cdot }^{H})-x-\int _{0}^{t}\varphi _{1/n}(\Phi _{n}(s,\widetilde{B}_{\cdot }^{H})) \mathbf {1}_{d}\mathrm{d}s-\widetilde{B}_{t}^{H}\right| ^{2}\right] \\&\quad =E\left[ \left| \Phi _{n}(t,B_{\cdot }^{H})-x-\int _{0}^{t}\varphi _{1/n}(\Phi _{n}(s,B_{\cdot }^{H}))\mathbf {1}_{d}\mathrm{d}s-B_{t}^{H}\right| ^{2}\right] \\&\quad =E\left[ \left| X_{t}^{n}-x-\int _{0}^{t}\varphi _{1/n}(X_{s}^{n})\mathbf {1} _{d}\mathrm{d}s-B_{t}^{H}\right| ^{2}\right] =0 \end{aligned}$$

for all t. So

$$\begin{aligned} \widetilde{X}_{t}^{n}=x+\int _{0}^{t}\varphi _{1/n}(\widetilde{X}_{s}^{n}) \mathbf {1}_{d}\mathrm{d}s+\widetilde{B}_{t}^{H},\quad n\ge 1. \end{aligned}$$

We also know from our construction of \(X_{\cdot }\) under P that

$$\begin{aligned} \int _{0}^{t}\varphi _{1/n}(X_{s})\mathbf {1}_{d}\mathrm{d}s\underset{n\longrightarrow \infty }{\longrightarrow }L_{t}(X)\mathbf {1}_{d}=X_{t}-x-B_{t}^{H} \end{aligned}$$

in probability. Moreover,

$$\begin{aligned}&E_{\widetilde{P}}\left[ \min \left( 1,\left| \int _{0}^{t}\varphi _{1/n}(\widetilde{ X}_{s})\mathbf {1}_{d}\mathrm{d}s-(\widetilde{X}_{t}-x-\widetilde{B} _{t}^{H})\right| \right) \right] \\&\quad =E\left[ \min \left( 1,\left| \int _{0}^{t}\varphi _{1/n}(X_{s})\mathbf {1} _{d}\mathrm{d}s-(X_{t}-x-B_{t}^{H})\right| \right) \right] \underset{n\longrightarrow \infty }{ \longrightarrow }0. \end{aligned}$$

So

$$\begin{aligned} \int _{0}^{t}\varphi _{1/n}(\widetilde{X}_{s})\mathbf {1}_{d}\mathrm{d}s\underset{ n\longrightarrow \infty }{\longrightarrow }L_{t}(\widetilde{X})\mathbf {1} _{d}=\widetilde{X}_{t}-x-\widetilde{B}_{t}^{H} \end{aligned}$$

in probability with respect to \(\widetilde{P}\). Therefore, one finds that \( \widetilde{X}_{\cdot }\) is a strong solution to

$$\begin{aligned} \widetilde{X}_{t}=x+L_{t}(\widetilde{X})\mathbf {1}_{d}+\widetilde{B}_{t}^{H} \end{aligned}$$

under \(\widetilde{P}\).

Proof of Proposition 5.2

Denote by Y the \(L^{p}\)-limit of the Doléans–Dade exponentials. Using characteristic functions combined with Novikov’s condition, we see that

$$\begin{aligned} Y_{t}^{x}-x=B_{t}^{H}+L_{t}(Y^{x})\varvec{1}_{d} \end{aligned}$$

is a fractional Brownian motion under the change of measure with density Y. The latter enables us to proceed similarly to the arguments in the proof of Lemma 5.5 and to verify that

$$\begin{aligned} E\left[ Y_{t}^{x}\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}\right\rangle \right\} \right] =E\left[ X_{t}^{x}\exp \left\{ \sum _{j=1}^{k}\left\langle \alpha _{j},B_{t_{j}}^{H}-B_{t_{j-1}}^{H}\right\rangle \right\} \right] \end{aligned}$$

for all \(\{\alpha _{j}\}_{j=1}^{k}\subset \mathbb {R} ^{d},0=t_{0}<\cdots <t_{k}=t,k\ge 1\), where \(X_{\cdot }^{x}\) denotes the constructed strong solution of our main theorem. This allows us to conclude that both solutions must coincide a.e. \(\square \)