1 Introduction

Let \(W:=\{W_t, t\ge 0\}\) be a standard Brownian motion defined on a filtered probability space \((\Omega , \mathcal {F}, {{\mathbb {F}}}, {{\mathbb {P}}}).\) Consider the stochastic differential equation

$$\begin{aligned} X_t=X_0+\int _0^t b (u,X_u)du + \int _0^t a_u X_u \delta W_u,\quad t\ge 0. \end{aligned}$$
(1.1)

Here \(X_0:\Omega \rightarrow {{\mathbb {R}}}\) is an \(\mathcal {F}-\)measurable random variable, \(b:[0,\infty )\times {{\mathbb {R}}}\times \Omega \rightarrow {{\mathbb {R}}}\) is an \({\mathbb {F}}\)-adapted random field and \(a:[0,\infty ) \times \Omega \rightarrow {{\mathbb {R}}}\) is an \({\mathbb {F}}\)-adapted random process. Since the initial condition \(X_0\) is a random variable, the stochastic integral has to be an anticipating one, allowing us to integrate processes that are not necessarily adapted to the underlying filtration \({\mathbb {F}}\). Here we use the well-known Skorohod integral, introduced by Skorohod in [17], which is an extension of the classical Itô integral. The existence, uniqueness and further properties of solutions of anticipating stochastic differential equations like (1.1) have been studied in [4, 5, 12]; see also [14]. This type of equation has proven to be useful in quantitative finance, for instance in insider trading modeling. See, for example, the recent paper [6] and the references therein, and [13].

The purpose of the present paper is to study different types of stability of the solution of Eq. (1.1). Since \(X_0\) is a random variable, we need to extend the concept of stability. Concretely, we introduce three types of stochastic stability: weak stability in probability, exponential \(p-\)stability and exponential stability in probability. We prove that the solution of equation (1.1) satisfies all these types of stability under suitable conditions. The case in which \(X_0\) is a constant, where anticipative calculus is not necessary, is treated by Khasminskii in [9] (Sections 1.5\(-\)1.8 and Chapter 5). See also Arnold [2] (Chapter 11) and Gard [7] (Chapter 5).

Stability means insensitivity of a system to small changes in the initial state. In a stable system, trajectories that are close to each other at a specific instant continue to be close to each other at all subsequent instants. In 1892 Lyapunov developed a method to determine the stability of a system without knowing its explicit solution. For deterministic dynamical systems the theory of stability of solutions is very well developed; see for example [3]. On the other hand, it is clear that stability is a very important property in applications. For example, for stable systems whose explicit solutions are not known, we can try to find approximate solutions using numerical methods.

Stochastic stability has been developed much more recently. Generalizing deterministic stability to the stochastic setting is not straightforward, and different definitions have been considered in the literature. During the last decades many results based on the Lyapunov point of view have been obtained for Itô stochastic differential equations; see Chapters 1 and 5 of Khasminskii [9] as a main reference. As pointed out by Khasminskii, the study of stability is important in many applications of stochastic dynamical systems (see also the references in [9]). In particular, the stability of linear systems has applications in automatic control (for instance, [11, 16]).

In the present paper we extend, to the best of our knowledge for the first time, Khasminskii's notions of weak stability in probability and exponential stability in probability to the case of a random initial condition, and therefore to the case of an anticipating stochastic differential equation. Several stability results for the solution are obtained. Malliavin differentiability hypotheses are naturally required.

In Sect. 2 we recall some preliminary results about Malliavin calculus and anticipative Girsanov transformations, following essentially Buckdahn [5] and Nualart [14]. In Sect. 3 we establish the existence and uniqueness of the solution of Eq. (1.1), extending the result for the linear case (\(b(u,x)=b(u)\cdot x\)) proved in Buckdahn [4]; see also Buckdahn [5] (Theorem 3.2.1). The proof of this existence and uniqueness result is sketched in the Appendix (Section 6.1). The solution of Eq. (1.1) is written in terms of an auxiliary process Z, whose properties are analyzed in Sect. 4. Finally, in Sect. 5, we introduce the three new types of stochastic stability, suitable to our context, and prove different stability results for the solution of Eq. (1.1).

2 Preliminaries

In the present paper we assume that \((\Omega ,\mathcal {F},\mathbb {P})\) is the canonical Wiener space. That is, \(\Omega \) is the family of all continuous functions from \([0,\infty )\) to \({\mathbb {R}}\) null at 0, \(\mathcal {F}\) is the Borel \(\sigma \)-algebra of \(\Omega \), when this is equipped with the topology of uniform convergence on compact sets, and \({{\mathbb {P}}}\) is the probability measure such that the canonical process \(W_t(\omega )=\omega (t)\) is a standard Brownian motion. Moreover, \({{\mathbb {F}}}:=\{{{\mathcal {F}}}_t, t\ge 0\}\) is the completed natural filtration of W. We denote by \(\mathcal {B}(\mathbb {R})\) the Borel \(\sigma \)-algebra on \(\mathbb {R}\) and, for any \(T>0\), by \(\mathcal {P}_T\) the progressive \(\sigma \)-algebra on \(\Omega \times [0,T].\)

2.1 Malliavin calculus and Sobolev spaces

Let \(C_b^{\infty }(\mathbb {R} ^n)\) be the family of all the \(C^{\infty }\)-functions from \({{\mathbb {R}}}^n\) to \({\mathbb {R}}\) that are bounded together with all their partial derivatives. Consider the class \(\mathcal {S}\) of smooth random variables F of the form

$$\begin{aligned} F=f(W_{t_1},\ldots ,W_{t_n}), \end{aligned}$$
(2.1)

with \(f\in C_b^{\infty }(\mathbb {R}^n)\) and \(t_1,\ldots ,t_n\in \mathbb {R}_+\). For the smooth functional F given in (2.1), we define its derivative in the Malliavin calculus sense as the process

$$\begin{aligned} D_s F=\sum _{j=1}^n\partial _{x_j}f (W_{t_1},\ldots ,W_{t_n}){1\!\!1}_{[0,t_j]}(s),\quad s\ge 0. \end{aligned}$$

More generally, we define the k-th derivative of F as \(D^k_{s_1,\ldots ,s_k}F=D_{s_k}\cdots D_{s_1}F.\)
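The derivative of a smooth functional is a finite-dimensional gradient spread over time intervals, so it can be checked numerically: the integral of \(D_sF\) against a Cameron–Martin direction h equals the derivative of F along the shift \(\omega +\varepsilon \int _0^{\cdot } h_s ds\). The following sketch (purely illustrative; the times, path values and the choice \(f(x,y)=xy^2\) are ours, not from the paper) verifies this duality on one fixed path.

```python
import numpy as np

# Illustrative sketch (our example): for the smooth functional
# F = f(W_{t1}, W_{t2}) with f(x, y) = x*y**2, the Malliavin derivative is
# D_s F = (d_x f) 1_[0,t1](s) + (d_y f) 1_[0,t2](s), and its integral against
# a direction h equals the derivative of F along the Cameron-Martin shift
# omega + eps * int_0^. h_s ds.

t1, t2 = 0.4, 1.0            # evaluation times (hypothetical)
w1, w2 = 0.3, -0.5           # values of W_{t1}, W_{t2} on one fixed path

f = lambda x, y: x * y**2
d1 = lambda x, y: y**2       # partial derivative in x
d2 = lambda x, y: 2 * x * y  # partial derivative in y

def DF(s):
    """s -> D_s F evaluated on the chosen path (a step function)."""
    return d1(w1, w2) * (s <= t1) + d2(w1, w2) * (s <= t2)

# Direction h = 1 on [0, t2]: the shift moves each W_{t_j} to W_{t_j} + eps*t_j.
eps = 1e-6
directional = (f(w1 + eps * t1, w2 + eps * t2) - f(w1, w2)) / eps

s_grid = np.linspace(0.0, t2, 100_001)
ds = s_grid[1] - s_grid[0]
duality = float(np.sum(DF(s_grid[:-1])) * ds)   # int_0^{t2} D_s F * h_s ds
```

Both quantities reduce to \(\partial _{x}f\cdot t_1+\partial _{y}f\cdot t_2\), which is what the test below checks.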

Now, we introduce the spaces \({{\mathbb {D}}}^{k,p}_T\), where \(k\in \mathbb {N}\), \(T>0\) and \(p\ge 1\). On \(\mathcal {S}\), consider the semi-norm

$$\begin{aligned} ||F||_{k,p,T}:=||F||_{p}+\sum _{i=1}^k || (\int _{[0,T]^i} |D^i_z F|^2 dz)^{\frac{1}{2}}||_{p}, \end{aligned}$$

where \(||\cdot ||_p\) stands for the norm in \(L^p(\Omega )\). It is well-known that the operator \(D^k\) is closable from \(\mathcal {S}\subset L^p(\Omega )\) into \(L^p(\Omega ;L^2([0,T]^k))\), see Nualart [14] (Section 1.2). Thus, the space \({\mathbb D}^{k,p}_T\) is defined as the completion of the family \(\mathcal {S}\) with respect to the semi-norm \(||\cdot ||_{k,p,T}\). Note that if \(0<{{\tilde{T}}}<T\), we have \({{\mathbb {D}}}^{k,p}_T\subset {\mathbb D}^{k,p}_{{\tilde{T}}}.\)

As in Buckdahn [5], \(\mathbb {D}^{k,\infty }_T\) (resp. \(\tilde{\mathbb {D}}^{k,\infty }_T\)) denotes the family of all random variables \(F\in \mathbb {D}^{k,2}_T\) such that \(F\in {L^{\infty }(\Omega )}\) and \(D^m F\in L^{\infty }(\Omega ;L^2([0,T]^m))\) (resp. \(D^m F\in L^{\infty }(\Omega \times [0,T]^m)\)), for \(m=1,\ldots ,k.\)

For \(T>0\), the Skorohod integral with respect to W, denoted by \(\delta _T,\) is the adjoint of the derivative operator \(D:\tilde{\mathbb {D}}^{1,\infty }_T \subset L^{\infty }\left( \Omega \right) \rightarrow L^{\infty }\left( \Omega \times [0,T]\right) \). That is, u is in \(Dom \ \delta _T\) if and only if \(u\in L^{1}\left( \Omega \times [0,T]\right) \) and there exists a random variable \(\delta _T(u)\in L^1(\Omega )\) satisfying the duality relation

$$\begin{aligned} \mathbb {E} \left[ \int _0^T u_tD_tFdt\right] =\mathbb {E}\left[ \delta _T(u)F\right] ,\quad \hbox { for every}\quad F\in \tilde{\mathbb {D}}^{1,\infty }_T. \end{aligned}$$
(2.2)

Sometimes, when \(u\in L^{2}\left( \Omega \times [0,T]\right) \), we consider the Skorohod integral as the adjoint of \(D:\mathbb {D}^{1,2}_T \subset L^{2}\left( \Omega \right) \rightarrow L^{2}\left( \Omega \times [0,T]\right) \). That is, \(u\in Dom\, \delta _T\) if and only if there exists \(\delta _T(u)\in L^{2}\left( \Omega \right) \) such that (2.2) holds for any \(F\in \mathbb {D}^{1,2}_T\). Note that the first definition of \(\delta _T\) is an extension of the second one.

The operator \(\delta _T\) is an extension of the Itô integral in the sense that the set \(L_{a}^{2}(\Omega \times [0,T])\) of all square-integrable and adapted processes with respect to the filtration generated by W is included in \(Dom\ \delta _T\) and the operator \(\delta _T\) restricted to \(L_{a}^{2}(\Omega \times [0,T] )\) coincides with the Itô stochastic integral with respect to W. For \(u\in Dom \ \delta _T\), we make use of the notation \(\delta _T(u)=\int _{0}^{T}u_{t}\delta W_{t}\) and for \(t\in [0,T]\) and \(u{1\!\!1}_{[0,t]}\) in \(Dom\ \delta _T\), we write \(\delta _T(u{1\!\!1}_{[0,t]})=\int _{0}^{t}u_{s}\delta W_{s}.\) Observe also that for \(0<{{\tilde{T}}}<T,\) if \(u\in Dom \ \delta _{{\tilde{T}}}\), then \(u{1\!\!1}_{[0,{{\tilde{T}}}]}\in Dom \ \delta _{T}\) and in this case, \(\delta _{{\tilde{T}}}(u)=\delta _T(u{1\!\!1}_{[0,{\tilde{T}}]})=\int _{0}^{{\tilde{T}}}u_{s}\delta W_{s}.\)
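To illustrate that \(\delta _T\) restricted to adapted integrands is the Itô integral, note that for \(u=W\) the forward Riemann sums satisfy a pathwise telescoping identity whose limit is \(\int _0^T W_s \delta W_s=(W_T^2-T)/2\). The sketch below (our illustration; the discretization parameters are arbitrary) checks this on a simulated path.

```python
import numpy as np

# Illustrative sketch: on adapted integrands the Skorohod integral coincides
# with the Ito integral.  For u_t = W_t the forward Riemann sums satisfy the
# pathwise algebraic identity
#     sum_i W_{t_i} (W_{t_{i+1}} - W_{t_i}) = (W_T**2 - sum_i dW_i**2) / 2,
# whose right-hand side tends to (W_T**2 - T)/2 as the mesh goes to 0.

rng = np.random.default_rng(0)
T, n = 1.0, 200_000
dW = rng.normal(0.0, np.sqrt(T / n), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))       # discrete Brownian path, W_0 = 0

ito_sum = float(np.sum(W[:-1] * dW))             # forward-point Riemann sum
identity_rhs = (W[-1]**2 - float(np.sum(dW**2))) / 2.0   # exact, pathwise
limit_value = (W[-1]**2 - T) / 2.0               # the Ito/Skorohod integral value
```

The first comparison is an exact algebraic identity; the second reflects the quadratic variation \(\sum _i (\Delta W_i)^2\rightarrow T\).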

Let \(\mathcal {S}_{T}\) be the family of processes of the form \(u(\cdot )=\sum _{j=1}^{n}F_{j}h_{j}(\cdot )\), where for any \(j=1,\ldots ,n,\) \(F_{j}\) is a random variable in \(\mathcal {S}\) and \(h_j:[0,T]\rightarrow \mathbb {R}\) is a bounded measurable function. We denote by \(\mathbb {L}^{1,2,f}_T\) the closure of \(\mathcal {S}_T\) with respect to the semi-norm

$$\begin{aligned} ||u||^2_{1,2,f,T}=\mathbb {E}\left( \int _{[0,T]}u_s^{2}ds+ \int _{\Delta _{1}^{T}}(D_{s}u_t)^2dsdt\right) , \end{aligned}$$

where \(\Delta _{1}^{T}=\left\{ (s,t)\in [0,T]^{2}:s\ge t\right\} ,\) and by \(\mathbb {L}_{T}^F,\) the closure of \(\mathcal {S}_{T}\) with respect to the semi-norm

$$\begin{aligned} ||u||_{F,T}^{2}=||u||_{1,2,f,T}^{2}+\mathbb {E}\left( \int _{\Delta _{2}^{T}} (D_{r}D_{s}u_t)^{2}dr ds dt\right) , \end{aligned}$$

with \(\Delta _{2}^{T}=\{(r,s,t)\in [0,T]^{3}:r\vee s\ge t\}.\) Observe that \(L^2_a (\Omega \times [0,T])\subseteq \mathbb {L}^F_T\) for any \(T>0\), with \(D_su_t=D_r D_s u_t=0\) for \(s>t\) and \((r,s,t)\in \Delta _{2}^{T}\).

Finally, for a process \(X\in {{\mathbb {L}}}^{1,2,f}_T\) and given \(p\ge 1\), we denote by \(D^-X\) the process in \(L^p(\Omega \times [0,T])\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _0^T\sup _{(s-\frac{1}{n})\vee 0<t<s}\mathbb {E}\left( \left| D_s X_t-(D^-X)_s\right| ^p\right) ds=0 \end{aligned}$$
(2.3)

if such a process \(D^-X\) exists. Henceforth, the space \(\mathbb {L}^{1,2,f}_{T,p-}\) represents the family of processes \(X\in {{\mathbb {L}}}^{1,2,f}_T\) such that (2.3) is satisfied.

2.2 Anticipative Girsanov transformations

Following Buckdahn [5], and in order to establish the existence of a unique solution to Eq. (1.1), we introduce two families \(A=\{A_{s,t}, \, 0\le s\le t\}\) and \(\{T_t, \, t\ge 0\}\) of transformations on the Wiener space \(\Omega \) through the equations

$$\begin{aligned} (A_{s,t}\omega )_{\cdot }=\omega _{\cdot }-\int _{s\wedge \cdot }^{t\wedge \cdot } a_r(A_{r,t}\omega )dr \end{aligned}$$
(2.4)

and

$$\begin{aligned} (T_t\omega )_{\cdot }=\omega _{\cdot }+\int _0^{t\wedge \cdot } a_r(T_r \omega )dr, \quad \omega \in \Omega . \end{aligned}$$
(2.5)

Define \(A_t:=A_{0,t}.\) Notice that, from Buckdahn [5] (Section 2.2), if \(a\in L^2([0,T];\mathbb {D}^{1,\infty }_T)\), Eqs. (2.4) and (2.5) have a unique solution for \(0\le s\le t\le T\), and moreover, \(A_{s,t}=T_s A_t\).
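For constant deterministic a (a simplifying assumption under which (2.4) and (2.5) can be solved explicitly), the transformations reduce to deterministic shifts of the path, and the relations \(T_tA_t=\textrm{id}\) and \(A_{s,t}=T_sA_t\) can be checked directly:

```python
import numpy as np

# Sketch for constant deterministic a (our simplifying assumption):
# (T_t w)_u = w_u + a*min(t, u) and (A_t w)_u = w_u - a*min(t, u), so A_t and
# T_t are mutually inverse, and A_{s,t} = T_s A_t shifts the path by
# -a*(min(t, u) - min(s, u)).

a = 0.7
u = np.linspace(0.0, 2.0, 201)   # time grid
w = np.sin(3.0 * u)              # an arbitrary continuous path with w_0 = 0

def T_map(t, path):
    """The transformation T_t of (2.5), specialized to constant a."""
    return path + a * np.minimum(t, u)

def A_map(t, path):
    """The transformation A_t = A_{0,t} of (2.4), specialized to constant a."""
    return path - a * np.minimum(t, u)

t, s = 1.5, 0.5
A_st = w - a * (np.minimum(t, u) - np.minimum(s, u))   # direct formula for A_{s,t}
composed = T_map(s, A_map(t, w))                       # T_s A_t
```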

Additionally, if a is also an adapted process, the Girsanov theorem (see Buckdahn [5], Proposition 2.2.3) implies

$$\begin{aligned} \mathbb {E}(F(A_{s,t})L_{s,t})=\mathbb {E}(F), \end{aligned}$$
(2.6)

for \(F\in L^{\infty }(\Omega ),\) where

$$\begin{aligned} L_{s,t}:=\exp \left\{ \int _s^t a_r dW_r-\frac{1}{2}\int _s^t a_r^2 dr \right\} . \end{aligned}$$
(2.7)

In the following, we frequently use the fact that if F is \(\mathcal{F}_s-\)measurable, \(t\ge s\) and a is an adapted process, then \(F(A_t)=F(A_s)\) and \(F(T_t)=F(T_s).\)
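The identity (2.6) can be illustrated by Monte Carlo simulation. The sketch below (our illustration, with the simplifying choices of a constant a and \(F=\cos (W_t)\)) compares \(\mathbb {E}(F(A_{s,t})L_{s,t})\) with the closed-form value \(\mathbb {E}(\cos (W_t))=e^{-t/2}\):

```python
import numpy as np

# Monte Carlo sketch of the Girsanov identity (2.6), with the illustrative
# choices of a constant a and F = cos(W_t).  For constant a,
# (A_{s,t} w)_u = w_u - a*(min(t,u) - min(s,u)), so F(A_{s,t}) = cos(W_t - a*(t-s)),
# and L_{s,t} = exp(a*(W_t - W_s) - a**2*(t-s)/2).  The identity predicts
# E[F(A_{s,t}) L_{s,t}] = E[F] = E[cos(W_t)] = exp(-t/2).

rng = np.random.default_rng(1)
a, s, t, N = 0.8, 0.3, 1.0, 400_000
Ws = rng.normal(0.0, np.sqrt(s), N)              # samples of W_s
Wt = Ws + rng.normal(0.0, np.sqrt(t - s), N)     # W_t = W_s + independent increment

L_st = np.exp(a * (Wt - Ws) - 0.5 * a**2 * (t - s))
lhs = float(np.mean(np.cos(Wt - a * (t - s)) * L_st))
rhs = float(np.exp(-t / 2.0))                    # closed form for E[cos(W_t)]
```

Note that \(\mathbb {E}(L_{s,t})=1\), which the second assertion below also checks.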

3 Anticipating semi-linear equations

In this section, for \(T>0\) fixed, we consider the anticipating semi-linear stochastic differential equation

$$\begin{aligned} X_t=X_0+\int _0^t b(s,X_s) ds+\int _0^t a_s X_s \delta W_s,\quad 0\le t\le T, \end{aligned}$$
(3.1)

where the random variable \(X_0\) and the coefficients a and b satisfy suitable conditions.

The following are the hypotheses used in the paper. Some of them are stronger than others, but we introduce them separately so as not to require stronger conditions than are needed for each result.

  1. (X1)

    \(X_0\in L^{\infty } (\Omega ).\)

  2. (X2T)

    For any \(T>0\), \(X_0\in {\tilde{{\mathbb {D}}}}^{2,\infty }_T.\) (Note that this implies (X1)).

  3. (X3T)

    \(X_0\) satisfies (X2T) and there exists a constant \(\eta >0\) such that \(X_0>\eta \) for all \(\omega \) or \(X_0<-\eta \) for all \(\omega .\)

  4. (A1T)

    \(a\in L_a^2([0,T];\mathbb {D}^{1,\infty }_T)\), that is, a is an \({\mathbb {F}}-\)adapted process in \(L^2([0,T];\mathbb {D}^{1,\infty }_T).\)

  5. (A2T)

    a satisfies (A1T) and moreover \(a\in L^{\infty }([0,T]\times \Omega )\) and \(Da\in L^{\infty }(\Omega \times [0,T]^2).\)

  6. (B1T)

    \(b:\Omega \times [0,T]\times \mathbb {R}\rightarrow \mathbb {R}\) is a \(\mathcal {P}_T\otimes \mathcal {B}(\mathbb {R})-\)measurable random field such that there exist an adapted non-negative process \(\gamma \in L^{\infty } (\Omega \times [0,T])\) and a constant \(L>0\) satisfying

    $$\begin{aligned} |b(t,x)-b(t,y)|\le \gamma _t |x-y|, \quad \sup _{t\in [0,T]}||b(t,0)||_{\infty }\le L, \end{aligned}$$

    for all \(x, y\in {{\mathbb {R}}}\) and \(t\in [0,T]\) w.p.1. Recall that \(||\cdot ||_{\infty }\) stands for the essential supremum of a random variable. Let us denote

    $$\begin{aligned} c_1:=\int _0^T ||\gamma _s||_{\infty } ds. \end{aligned}$$
  7. (B2T)

    b satisfies (B1T), \(b(t,0)=0\) for all \(t\in [0,T]\) and any fixed \(T>0\), b has almost surely continuous trajectories in t and x, and \(\partial _x b (t,x)\) exists and is continuous in t and x.

  8. (B3T)

    b satisfies (B2T), \(b(\cdot ,x)\in L^p([0,T]; \mathbb {D}_T^{1,p})\) for all \(p\ge 2\) and \(x\in \mathbb {R}\), \(b(t,x)\in {{\mathbb {D}}}^{1,\infty }_T\) for all \(\,t\in [0,T]\) and \(x\in {{\mathbb {R}}}\), \(D_t b(s,\cdot )\) is a measurable random field continuous in x for any s and t, and there exists a non-negative process \(M\in L^1([0,T]^2, L^{\infty }(\Omega ))\) such that \(|D_s b(t,x,\omega )|\le M(s,t) \, |x|\) and

    $$\begin{aligned} c_2:=\sup _{0\le r\le T} \int _r^T ||M(r,s)||_{\infty } ds<\infty . \end{aligned}$$
  9. (B4T)

    Assume that b satisfies (B3T) for any \(T>0\) and has the form

    $$\begin{aligned} b(t,x)={{\bar{b}}}_t x+\phi (t,x), \end{aligned}$$

    where \({{\bar{b}}}\in L^{\infty } (\Omega \times [0,T])\), \(D{\bar{b}}\in L^{\infty }(\Omega \times [0,T]^2)\) and \(\phi \) satisfies (B3T) with a certain process \(\delta \) in the role of the process \(\gamma \) in (B2T). Moreover, the function \(\partial _x^2 \phi (t,x)\) exists, is continuous in t and x, and is bounded uniformly on \(\Omega \times [0,T]\times \mathbb {R}\).

Now we proceed as in Nualart [14] (Theorem 3.3.6). Consider \(L_{0,t}\) defined in (2.7). Recall that Hypothesis (A1T) implies that for \(0\le s\le t\), \(L_{0,s}(T_t)=L_{0,s}(T_s)\). Also notice that Hypotheses (A1T) and (B1T) imply that for all \(x \in \mathbb {R}\) and almost all \(\omega \in \Omega \), the equation

$$\begin{aligned} Z_t(\omega ,x)=x+\int _0^t L_{0,s}^{-1}(T_t\omega )b(s,L_{0,s}(T_t\omega )Z_s(\omega ,x),T_s\omega )ds, \quad t\in [0,T], \end{aligned}$$
(3.2)

has a unique solution. The relation between this equation and Eq. (3.1) is given by the following theorem:

Theorem 3.1

Assume (X1), (A1T) and (B1T) hold. Define

$$\begin{aligned} X_t=L_{0,t} Z_t(A_t, X_0 (A_t)). \end{aligned}$$
(3.3)

Then, the process \(X=\{X_t, \, 0\le t\le T\}\) satisfies \({1\!\!1}_{[0,t]} (\cdot )\, a_{\cdot } X_{\cdot }\in \textrm{Dom}\,\delta _T\) for all \(t\in [0,T]\), belongs to \(L^1(\Omega \times [0,T])\) and is a solution of Eq. (3.1). Conversely, if \(Y\in L^1 (\Omega \times [0,T])\) is a solution of Eq. (3.1) and a satisfies (A2T), then Y agrees with the right-hand side of (3.3).

Remarks 3.2

  1.

    Note that in the linear case (i.e. \(b(s,x)={{\bar{b}}}_s\cdot x\)), Hypothesis (B1T) has to be applied to \(\gamma :=|{{\bar{b}}}|\). This is treated in Buckdahn [5] (Theorem 3.2.1). In this case, (3.3) has the form

    $$\begin{aligned} X_t=L_{0,t}\cdot \exp \left\{ \int _0^t {{\bar{b}}}_s ds\right\} \cdot X_0(A_t), \, t\ge 0. \end{aligned}$$
  2.

    The semi-linear Eq. (3.1), when a is a deterministic function of \(L^2([0,T])\), is considered in Nualart [14] (Theorem 3.3.6). In the present paper, following ideas stated in Buckdahn [5] (Chapter 3), we extend the result in Nualart [14] to the case in which a is a process satisfying Hypothesis (A1T) and \(\gamma \) is random.

  3.

    Assume (X1), (A2T) and (B1T) are satisfied for any \(T>0\). Note that in this case, Theorem 3.1 says that equation (3.1) has a unique solution on \(\Omega \times [0,\infty )\) given by (3.3). For the existence we only need to assume that, for each \(T>0\), \(\gamma \in L^1 ([0,T],L^\infty (\Omega )).\) Condition \(\gamma \in L^\infty (\Omega \times [0,T])\) is needed for the uniqueness.

Proof

Since the proof of this theorem is long and similar to those in Nualart [14] or Buckdahn [5], we only sketch it in the Appendix (Subsection 6.1). For details, we refer the reader to [4, 5, 14]. \(\square \)
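The constant-coefficient linear case of Remark 3.2 can be checked numerically. In the sketch below (our illustration: constant deterministic \(a_s\equiv a\), \({\bar{b}}_s\equiv b\) and a deterministic initial condition \(x_0\), so that no anticipating calculus is needed), formula (3.3) reduces to geometric Brownian motion, which an Euler–Maruyama scheme for the Itô equation \(dX=bX\,dt+aX\,dW\) reproduces on the same path.

```python
import numpy as np

# Sketch of the linear case of Remark 3.2 under simplifying assumptions
# (constant deterministic a_s = a, bar{b}_s = b, deterministic initial
# condition x0): formula (3.3) reduces to the geometric Brownian motion
#     X_t = x0 * exp((b - a**2/2)*t + a*W_t),
# which an Euler-Maruyama scheme on the same Brownian path approaches.

rng = np.random.default_rng(2)
a, b, x0, T, n = 0.3, 0.1, 1.0, 1.0, 100_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W_T = float(np.sum(dW))

X_euler = x0 * float(np.prod(1.0 + b * dt + a * dW))   # Euler-Maruyama at time T
X_exact = x0 * np.exp((b - 0.5 * a**2) * T + a * W_T)  # formula (3.3), linear case

# With a random X_0 = g(W_T) one would instead evaluate
#     X_t = exp(a*W_t - a**2*t/2 + b*t) * g(W_T - a*t),
# since (A_t w)_T = W_T - a*t for this constant a.
```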

4 Some properties of process Z

In this section we establish some properties of the process Z introduced in Eq. (3.2).

Lemma 4.1

Let \(T>0\) and assume (A2T) and (B2T) hold. Then, the solution Z of Eq. (3.2) satisfies

$$\begin{aligned} \left| Z_t(\omega ,x)\right| \le |x|\exp \left( \int _0^t\Vert \gamma _s\Vert _{\infty }ds\right) \le |x|e^{c_1} \end{aligned}$$
(4.1)

and

$$\begin{aligned} \left| \partial _x Z_t(\omega ,x)\right| \le \exp \left( \int _0^t\Vert \gamma _s\Vert _{\infty }ds\right) \le e^{c_1}, \end{aligned}$$
(4.2)

for \((t,x)\in [0,T]\times \mathbb {R}\) and for almost all \(\omega \in \Omega \).

Proof

Let \(\omega \in \Omega \) be such that (B2T) is satisfied. Note that, since \(L_{0,s}\) is adapted to the underlying filtration \({\mathbb {F}}\), Eq. (3.2) can be written as

$$\begin{aligned} Z_t(\omega ,x)=x+\int _0^t L_{0,s}^{-1}(T_s\omega )b(s,L_{0,s}(T_s\omega )Z_s(\omega ,x),T_s\omega )ds, \quad t\in [0,T]. \end{aligned}$$
(4.3)

Inequality (4.1) is an immediate consequence of Gronwall’s lemma, (A1T) and (B2T). Taking partial derivatives with respect to x in Eq. (4.3) and using Hartman [8] (Section 5.3) and (A2T) we obtain that \(\partial _x Z_t(\omega ,x)\) exists and satisfies the equation

$$\begin{aligned} \partial _x Z_t(\omega ,x)=1+\int _0^t (\partial _x b)(s,L_{0,s} (T_s\omega )Z_s(\omega ,x),T_s\omega )\, \partial _xZ_s(\omega , x)\, ds, \quad t\in [0,T], \end{aligned}$$
(4.4)

whose explicit solution is given by

$$\begin{aligned} \partial _x Z_t(\omega ,x)=\exp \left( \int _0^t (\partial _x b) (s, L_{0,s}(T_s\omega ) Z_s(\omega ,x),T_s\omega )ds\right) . \end{aligned}$$
(4.5)

Finally, (B1T) and (A2T) yield inequality (4.2). \(\square \)

Lemma 4.2

Fix \(T>0.\) Assume (A1T) and (B2T) hold. Then, for \(x\in \mathbb {R}\),

$$\begin{aligned} (t,\omega )\mapsto Z_t(A_t \omega ,x) \end{aligned}$$

is \(\mathcal {P}_T\)-measurable and belongs to \({\mathbb {L}}^F_T.\)

Proof

Let \(t_0\in (0,T]\). Then, (3.2) implies

$$\begin{aligned} Z_t(A_{t_0} \omega ,x)=x+\int _0^t L_{0,s}^{-1}\, b(s,L_{0,s}Z_s(A_{t_0} \omega ,x))ds, \quad t\in [0,t_0]. \end{aligned}$$
(4.6)

Note that, thanks to the fact

$$\begin{aligned} \left| L_{0,s}^{-1}\, b(s,L_{0,s}\, y) - L_{0,s}^{-1}\, b(s,L_{0,s}\, {\bar{y}})\right| \le \Vert \gamma \Vert _{L^\infty ([0,T]\times \Omega )}\ |y-{\bar{y}}|, \end{aligned}$$

the previous equation has a unique solution. Moreover, this solution is adapted since b is \(\mathcal {P}_T\otimes \mathcal {B}(\mathbb {R})\)-measurable. So, \(t \mapsto Z_t(A_{t_0},x)\) is \(\mathcal {F}_t\)-measurable for all \(t\in [0,t_0]\). In particular, taking \(t=t_0\), \(Z_{t_0}(A_{t_0},x)\) is \(\mathcal {F}_{t_0}\)-measurable, and since \(t_0\in (0,T]\) is arbitrary, \(Z_t(A_t,x)\) is \(\mathcal {F}_t\)-measurable for all \(t\in [0,T]\).

Finally, for \(s<t\),

$$\begin{aligned} Z_s(A_t,x)=Z_s(A_s T_s A_t,x)=Z_s(A_s T_t A_t,x)=Z_s(A_s,x), \end{aligned}$$

where the second equality is a consequence of the fact that \(Z_t(A_{t},x)\) is \(\mathcal {F}_t\)-measurable. Moreover, thanks to inequality (4.1), the solution belongs to \(L^2(\Omega \times [0,T]).\) So, it belongs to \({\mathbb L}^F_T.\) \(\square \)

Lemma 4.3

Let \(T>0.\) Assume that (A1T) and (B2T) are satisfied. Then, for \(x>0\) (resp. \(x<0\)), we have

$$\begin{aligned} Z_t(A_t,x)\ge x\exp \Big \{-\int _0^t \gamma _s ds\Big \}\quad (\text{ resp. } Z_t(A_t,x)\le x\exp \Big \{-\int _0^t \gamma _s ds\Big \}), \end{aligned}$$

for any \(t\in [0,T]\) and \(\omega \in \Omega \) for which (B2T) is true.

Proof

We know that \(Z_t(A_t,x)\) satisfies Eq. (4.6). Assume \(x>0\); the case \(x<0\) is analogous. Fix \(\omega \in \Omega \) satisfying (B2T). Assume there exists \(t_0\) such that \(Z_u(A_u,x)>0\) for all \(u<t_0\) and \(Z_{t_0}(A_{t_0},x)=0.\) On \([0,t_0]\), using (B2T), we have

$$\begin{aligned} -\gamma _u L_{0,u} Z_u(A_u,x)\le b(u,L_{0,u} Z_u(A_u,x))\le \gamma _u L_{0,u} Z_u(A_u,x). \end{aligned}$$

Therefore, for \(t\in [0,t_0],\)

$$\begin{aligned} \partial _t Z_t(A_t,x)=L^{-1}_{0,t}\, b(t, L_{0,t}Z_t(A_t,x))\ge -\gamma _t Z_t(A_t,x). \end{aligned}$$
(4.7)

Hence, by Hartman [8] (Remark 1 of Theorem 4.1 in Section 3.3), we have

$$\begin{aligned} Z_t(A_t,x)\ge x\exp \Big \{-\int _0^t \gamma _s ds\Big \}, \quad t\le t_0. \end{aligned}$$

In particular, for \(t=t_0\),

$$\begin{aligned} 0\ge x\exp \Big \{-\int _0^{t_0} \gamma _s ds\Big \}. \end{aligned}$$

and this is a contradiction. Therefore, \(Z_t(A_t,x)\) is positive for all \(t\in [0,T]\) and, consequently, (4.7) is satisfied for \(t\in [0,T]\), which gives the result. \(\square \)
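The comparison argument just used can be illustrated numerically. In the Python sketch below (an illustration with our own choice \(b(t,y)=-\gamma \sin y\), which satisfies \(b(t,0)=0\) and \(|b(t,y)|\le \gamma |y|\)), the Euler solution of the ODE stays above the lower bound \(x e^{-\gamma t}\):

```python
import numpy as np

# Numerical sketch of the comparison bound: for an ODE z' = b(t, z) with
# b(t, 0) = 0 and |b(t, y)| <= gamma*|y|, a solution started at x > 0 stays
# above x*exp(-gamma*t).  The choice b(t, y) = -gamma*sin(y) is ours (it
# satisfies these conditions since |sin y| <= |y|).

gamma, x, T, n = 2.0, 1.0, 1.0, 20_000
dt = T / n
z_path = np.empty(n + 1)
z_path[0] = x
for i in range(n):                       # explicit Euler for the ODE
    z_path[i + 1] = z_path[i] - gamma * np.sin(z_path[i]) * dt

t_grid = np.linspace(0.0, T, n + 1)
lower = x * np.exp(-gamma * t_grid)      # the comparison lower bound
```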

Lemma 4.4

Let \(T>0\). Assume that (A2T) and (B3T) hold. Then, for all \(p\ge 2\) and \(x\in {{\mathbb {R}}},\) the process \(Z_t(A_t,x)\) belongs to \(L^p([0,T], {{\mathbb {D}}}_T^{1,p})\), and for \(r,t\in [0,T]\) we have

$$\begin{aligned} D_rZ_t(A_t,x)&=\int _{r\wedge t}^t U(t,s)\Big [ (D_rL^{-1}_{0,s})\, b(s, L_{0,s}Z_s(A_s,x))\\&\quad +L^{-1}_{0,s}\,(\partial _x b)(s,L_{0,s}Z_s(A_s,x))\, (D_rL_{0,s})\, Z_s(A_s,x)\\&\quad +L^{-1}_{0,s}\,D_r b(s,z)\big |_{z=L_{0,s}Z_s(A_s,x)}\Big ]\, ds, \end{aligned}$$
(4.8)

where

$$\begin{aligned} U(t,s):=\exp \left\{ \int _s^t (\partial _x b)(u,L_{0,u}Z_u(A_u,x))du\right\} . \end{aligned}$$

Remark 4.5

Note that (4.8) is the solution of the linear stochastic differential equation

$$\begin{aligned} D_rZ_t(A_t,x)&=\int _{r\wedge t}^t (D_rL^{-1}_{0,s})\, b(s, L_{0,s}Z_s(A_s,x))\, ds\\&\quad +\int _{r\wedge t}^t L^{-1}_{0,s}\,(\partial _x b)(s,L_{0,s}Z_s(A_s,x))\, (D_r L_{0,s})\, Z_s(A_s,x)\, ds\\&\quad +\int _{r\wedge t}^t L^{-1}_{0,s}\, D_r b(s,z)\big |_{z=L_{0,s}Z_s(A_s,x)}\, ds\\&\quad +\int _{r\wedge t}^t (\partial _x b)(s,L_{0,s}Z_s(A_s,x))\, D_rZ_s(A_s,x)\, ds, \quad t\in [0,T]. \end{aligned}$$

See for example [2], Section 8.2.

Proof of Lemma 4.4

The proof is inspired by [14] (Section 2.2). Let \(c:=c_1\vee c_2\vee 1.\) Recall that \(c_1\) and \(c_2\) are finite constants thanks to (B1T) and (B3T), and that \(Z_t(A_t,x)\) satisfies Eq. (4.6).

We consider the Picard approximations of \(Z_t (A_t,x).\) For \(n=0\) we define

$$\begin{aligned} Z_{t,(0)}(A_t,x)=x \end{aligned}$$

and we apply induction on n to define, for \(n\ge 1,\) the adapted and continuous process

$$\begin{aligned} Z_{t,(n)}(A_t,x)=x+\int _0^t L^{-1}_{0,s}\ b(s, L_{0,s} Z_{s,(n-1)}(A_s,x)) ds. \end{aligned}$$
(4.9)

We divide the proof into two steps.

  1.

    In this first step we prove that for any \(n\ge 0,\) \(Z_{\cdot ,(n)} (A_{\cdot },x)\) is a continuous and adapted process bounded in \(L^p ([0,T]\times \Omega )\) for any \(p\ge 1,\) uniformly in n. Moreover, \(Z_{t,(n)} (A_t,x)\) converges to \(Z_t (A_t,x)\) with probability one, uniformly in \(t\in [0,T]\), and in \(L^p ([0,T]\times \Omega )\), for any \(p\ge 1.\)

    Note that from (4.9) and (B2T), we have

    $$\begin{aligned} |Z_{t,(n+1)}(A_t,x)|\le |x|+\int _0^t\gamma _s \, |Z_{s,(n)}(A_s,x)|ds,\quad t\in [0,T]. \end{aligned}$$

    Therefore, iterating this inequality, we have \(|Z_{t,(n)}(A_t,x)|\le |x|e^{c_1}\) for all \(n\in {{\mathbb {N}}}\) and \(t\in [0,T].\)

    On the other hand, we have

    $$\begin{aligned} |Z_{t,(1)}-Z_{t,(0)}|\le \int _0^t L^{-1}_{0,s}\ |b(s, L_{0,s} Z_{s,(0)}(A_s,x))| ds\le |x|\,\int _0^t \gamma _s ds, \end{aligned}$$

    and iterating again, we obtain

    $$\begin{aligned} |Z_{t,(n+1)}-Z_{t,(n)}|&\le \int _0^tL^{-1}_{0,s}\, |b(s, L_{0,s} Z_{s,(n)}(A_s,x)) -b(s, L_{0,s} Z_{s,(n-1)}(A_s,x))|\, ds\\&\le \int _0^t \gamma _s\, |Z_{s,(n)}(A_s,x)-Z_{s,(n-1)}(A_s,x)|\, ds\\&\le \frac{|x|}{(n+1)!} \Big (\int _0^t \gamma _s\, ds\Big )^{n+1}. \end{aligned}$$

    Thus,

    $$\begin{aligned} \sum _{n=0}^{\infty } |Z_{t,(n+1)}-Z_{t,(n)}|\le c_1 |x|\sum _{n=0}^{\infty }\frac{c_1^n}{n!}=c_1 e^{c_1} |x|<\infty \end{aligned}$$

    implies the statement.

  2.

    Now we want to check the differentiability of \(Z_t (A_t,x)\) in the Malliavin calculus sense. Using Lemma 1.5.3 in [14], it is enough to check that, for any \(n\ge 0\) and \(p\ge 2,\)

    $$\begin{aligned} Z_{t,(n)}(A_t,x)\in L^p ([0,T], {{\mathbb {D}}}^{1,p}_T) \end{aligned}$$

    and

    $$\begin{aligned} \sup _{n\ge 0}\sup _{0\le r\le T} {{\mathbb {E}}}\left( \sup _{r\le t\le T} |D_r Z_{t,(n)} (A_t,x)|^p\right) <\infty . \end{aligned}$$

    Note that \(Z_{t,(0)}(A_t,x)=x\in L^{p}([0,T], {{\mathbb {D}}}^{1,p}_T)\) for all \(p\ge 2.\) Now, assume that \(Z_{t,(n)}(A_t,x)\in L^p ([0,T], {{\mathbb {D}}}^{1,p}_T)\) for all \(p\ge 2.\) Then, using (4.9), (A2T), (B3T) and [10] (Lemma 2.2), we have, for \(r\le t,\)

    $$\begin{aligned} D_r Z_{t,(n+1)}(A_t,x)&=\int _{r}^t (D_r L^{-1}_{0,s})\, b(s, L_{0,s} Z_{s,(n)}(A_s,x))\, ds\\&\quad +\int _{r}^t L^{-1}_{0,s}\, (\partial _x b)(s, L_{0,s} Z_{s,(n)}(A_s,x))\, (D_r L_{0,s})\, Z_{s,(n)}(A_s,x)\, ds\\&\quad +\int _{r}^t (\partial _x b)(s, L_{0,s} Z_{s,(n)}(A_s,x))\, D_r Z_{s,(n)}(A_s,x)\, ds\\&\quad +\int _{r}^t L^{-1}_{0,s}\, D_r b(s,z,\omega )\big |_{z=L_{0,s} Z_{s,(n)}(A_s,x)}\, ds. \end{aligned}$$

    On the other hand, since \(Z_{\cdot ,(n)}(A_{\cdot },x)\) is an adapted process, for any \(r>t\) we have

    $$\begin{aligned} D_r Z_{t,(n)}(A_t,x)=0. \end{aligned}$$

    Now, putting together the first two terms on the right-hand side, we have

    $$\begin{aligned} |D_r Z_{t,(n+1)}(A_t,x)|&\le \int _{r}^t \gamma _s \left( |D_r L^{-1}_{0,s}|\,|L_{0,s}| +|L^{-1}_{0,s}|\,|D_r L_{0,s}|\right) |Z_{s,(n)}(A_s,x)|\, ds\\&\quad + \int _{r}^t \gamma _s\, |D_r Z_{s,(n)}(A_s,x)|\, ds\\&\quad +\int _{r}^t |L^{-1}_{0,s}|\, M(r,s)\, |L_{0,s}|\, |Z_{s,(n)}(A_s,x)|\, ds, \quad t\in [r,T]. \end{aligned}$$

    Defining

    $$\begin{aligned} K(r,s):=|D_r L^{-1}_{0,s}|\,|L_{0,s}|+|L^{-1}_{0,s}|\,|D_r L_{0,s}| \end{aligned}$$

    and joining the first and the third terms on the right-hand side we obtain

    $$\begin{aligned} |D_r Z_{t,(n+1)}(A_t,x)|&\le \int _r^t \left[ \gamma _s K(r,s) +M(r,s)\right] |Z_{s,(n)}(A_s,x)|\, ds\\&\quad +\int _r^t \gamma _s\, |D_r Z_{s,(n)}(A_s,x)|\, ds, \quad t\in [r,T]. \end{aligned}$$

    Hence,

    $$\begin{aligned} |D_r Z_{t,(n+1)}(A_t,x)|&\le \left( \Big (\sup _{r\le s\le T} K(r,s)\Big )\int _r^T \gamma _s\, ds+ \int _r^T M(r,s)\, ds\right) \sup _{0\le s\le T} |Z_{s,(n)}(A_s,x)|\\&\quad +\int _r^t \gamma _s\, |D_r Z_{s,(n)}(A_s,x)|\, ds, \quad t\in [r,T]. \end{aligned}$$

    Consequently, using (B3T), Lemma 4.1 and Step 1, we have

    $$\begin{aligned} |D_r Z_{t,(n+1)}(A_t,x)|\le |x|\,c\, e^{c}\Big (1+\sup _{r\le s\le T} K(r,s)\Big ) +\int _r^t \gamma _s\, |D_r Z_{s,(n)}(A_s,x)|\, ds, \quad t\in [r,T]. \end{aligned}$$

    Applying Gronwall’s Lemma with r and \(\omega \) fixed, we obtain

    $$\begin{aligned} |D_r Z_{t,(n+1)}(A_t,x)|\le |x|c e^c g(r,\omega ,T)\exp \Big \{\int _r^t \gamma _s ds\Big \}\le |x|c e^{2c} g(r,\omega ,T),\qquad \end{aligned}$$
    (4.10)

    where \(g(r,\omega ,T):=1+\sup _{r\le s\le T} K(r,s).\) Note that the right-hand side of (4.10) is independent of n and \(t\in [r,T].\)

    We know by Step 1 that \(Z_{t,(n+1)}(A_t,x)\in L^p(\Omega ).\) So, it remains only to check

    $$\begin{aligned} \sup _{0\le r\le T}{{\mathbb {E}}}\left( \sup _{r\le t\le T} |D_r Z_{t,(n+1)}(A_t,x)|^p\right) <\infty , \end{aligned}$$

    uniformly in \(n\ge 0\).

    Note that, by (4.10), we have

    $$\begin{aligned} {{\mathbb {E}}}\left( \sup _{r\le t\le T} |D_r Z_{t,(n+1)}(A_t,x)|^p\right) \le |x|^p c^{p} e^{2cp} {{\mathbb {E}}}\left( |g(r,\omega ,T)|^p\right) , \end{aligned}$$

    for all \(n\in {{\mathbb {N}}}.\) Therefore, the problem reduces to check

    $$\begin{aligned} \sup _{0\le r\le T}{{\mathbb {E}}}\left( \left[ 1+\sup _{r\le s\le T} K(r,s)\right] ^p\right) <\infty . \end{aligned}$$

    Using Hölder's inequality, it is enough to see

    $$\begin{aligned} \sup _{0\le r\le T} {{\mathbb {E}}}\left( \sup _{r\le s\le T} K(r,s)^p\right) <\infty , \end{aligned}$$

    which, by applying Hölder's inequality again, is equivalent to checking

    $$\begin{aligned}&\sup _{0\le r\le T}{{\mathbb {E}}}\left( |a_r|^{2p}\right)<\infty ,\\&\sup _{0\le r\le T}{{\mathbb {E}}}\left( \sup _{r\le s\le T} \left| \int _r^s D_r a_u\, dW_u\right| ^{2p}\right) <\infty \end{aligned}$$

    and

    $$\begin{aligned} \sup _{0\le r\le T}{{\mathbb {E}}}\int _r^T |a_u|^p\cdot |D_r\,a_u|^pdu<\infty . \end{aligned}$$

    The first and third statements are obvious from (A2T). The second one is true thanks to the Burkholder–Davis–Gundy inequality and (A2T). Therefore, \(Z_{t,(n)} (A_t,x)\) is a well-defined element of \({\mathbb D}^{1,p}_T\) and \(Z_{\cdot }(A_{\cdot },x)\in L^p([0,T], {\mathbb D}_T^{1,p}).\)

Finally, (4.8) follows from (4.6) and Remark 4.5. \(\square \)
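The Picard scheme (4.9) and the bounds of Step 1 can be illustrated in a toy deterministic setting (our simplifying choices: \(L_{0,s}\equiv 1\) and \(b(s,y)=-\gamma y\)), where the iterates are the Taylor partial sums of \(x e^{-\gamma t}\):

```python
import numpy as np

# Sketch of the Picard scheme (4.9) with the illustrative choices
# L_{0,s} = 1 and b(s, y) = -gamma*y: the iterates are the Taylor partial
# sums of x*exp(-gamma*t), the increments obey the factorial bound of
# Step 1, and every iterate stays within the uniform bound |x|*exp(c1),
# where c1 = gamma*T.

gamma, x, T, n_grid, n_iter = 1.5, 2.0, 1.0, 2_000, 25
t = np.linspace(0.0, T, n_grid + 1)
dt = T / n_grid
c1 = gamma * T

z = np.full_like(t, x)               # Z_(0) = x
sup_increments = []
max_abs = abs(x)
for _ in range(n_iter):
    integrand = -gamma * z
    # cumulative trapezoidal rule for int_0^t b(s, z_n(s)) ds
    z_new = x + np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)))
    sup_increments.append(float(np.max(np.abs(z_new - z))))
    max_abs = max(max_abs, float(np.max(np.abs(z_new))))
    z = z_new

exact = x * np.exp(-gamma * t)       # the fixed point of the iteration
```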

5 Stability of the solution

Recall that, under Hypotheses (X1), (A2T) and (B1T), Theorem 3.1 implies that there exists a unique solution of Eq. (3.1) in \(L^1(\Omega \times [0,T])\) for any \(T>0.\)

5.1 Auxiliary results

In this section we establish some auxiliary tools that we need to study the stability of the solution of Eq. (3.1).

Lemma 5.1

Let \(T>0.\) Assume (X2T), (A2T) and (B3T) hold. Then, \(Z_t(A_t,X_0(A_t))\) belongs to \(\mathbb {L}^{1,2,f}_T\) and for \(s>t\) we have

$$\begin{aligned} D_s Z_t(A_t,X_0(A_t))=\partial _x Z_t(A_t,X_0(A_t)) (D_s X_0)(A_t). \end{aligned}$$

Proof

By Lemma 4.2 the process \(t\mapsto Z_t(A_t,x)\) is in the space \(\mathbb {L}^{1,2,f}_T.\) Assume first that \(X_0\in {{\mathcal {S}}}.\) Proceeding as in Ocone and Pardoux [15] (proofs of Lemmas 2.3 and 2.4), together with (4.1) and (4.2), we obtain that for \(s>t,\)

$$\begin{aligned} D_sZ_t(A_t,X_0(A_t))=\partial _x Z_t(A_t,X_0(A_t)) D_s (X_0(A_t))=\partial _x Z_t(A_t,X_0(A_t)) (D_s X_0)(A_t), \end{aligned}$$

where the last equality is a consequence of Buckdahn [5] (equality (2.2.26)). Hence, the result follows from Buckdahn [5] (Proposition 2.1.2) and (4.4). \(\square \)

Lemma 5.2

Let \(T>0.\) Assume (X2T), (A2T) and (B3T) hold. Let X be the solution of (3.1). Then, \(X\in L^p ([0,T], {\mathbb D}^{1,p}_T)\) for all \(p\ge 1.\)

Proof

Observe that (2.7), Propositions 1.3.8 and 1.5.5 in [14] and (A2T) establish that \(L_{0,\cdot }\in L^p([0,T], {{\mathbb {D}}}^{1,p}_T)\) for any \(p\ge 1.\) Hence, by Theorem 3.1 it is enough to show that \(Z_{\cdot }(A_{\cdot }, X_0(A_{\cdot }))\in L^p([0,T], {\mathbb D}^{1,p}_T)\) for all \(p\ge 1.\) Toward this end we first assume \(X_0\in {{\mathcal {S}}}.\) In this case, Lemmas 4.1 and 4.4, (4.8) together with the dominated convergence theorem, (A2T) and (B2T) yield that we can proceed as in the proof of Lemma 2.1 in [10] to see that \(Z_{\cdot }(A_{\cdot }, X_0(A_{\cdot }))\in L^p([0,T], {\mathbb D}^{1,p}_T)\) with

$$\begin{aligned} D_s Z_t(A_t, X_0(A_t))=D_s Z_t(A_t,x)|_{x=X_0(A_t)}+(\partial _x Z_t)(A_t,X_0(A_t))D_s (X_0(A_t)). \end{aligned}$$

Hence, Buckdahn [5] (Proposition 2.1.2, Lemma 2.2.13, and (2.2.26)) yields, for any \(s,t\in [0,T],\)

$$\begin{aligned} D_s Z_t(A_t, X_0(A_t))=D_s Z_t(A_t,x)|_{x=X_0(A_t)}+(\partial _x Z_t)(A_t,X_0(A_t))(D_s X_0)(A_t). \end{aligned}$$
(5.1)

Finally, the result follows from (A2T), (B2T), (4.4), (4.8), Lemma 4.1, Buckdahn [5] (Proposition 2.1.2) and the dominated convergence theorem. \(\square \)

For any \(\nu \in (0,1]\), we consider the Lyapunov function

$$\begin{aligned} F(x,y)=|x|^{\nu } e^{-\nu y}, \quad x,y\in {{\mathbb {R}}}. \end{aligned}$$
(5.2)
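For orientation, a direct computation (valid for \(x\ne 0\)) gives the partial derivatives of F that produce the factors \(\nu \) and \(\nu (\nu -1)\) appearing in (5.3) and (5.17) below:

$$\begin{aligned} \partial _x F(x,y)&=\nu \,\mathrm {sgn}(x)\, |x|^{\nu -1}e^{-\nu y},\\ \partial _y F(x,y)&=-\nu \, |x|^{\nu }e^{-\nu y}=-\nu \, F(x,y),\\ \partial ^2_{xx} F(x,y)&=\nu (\nu -1)\, |x|^{\nu -2}e^{-\nu y},\qquad x\ne 0. \end{aligned}$$

The singularity of these derivatives at \(x=0\) is the reason for the smoothed versions \(F_m\) introduced in the proof of Theorem 5.3.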

The following result is the main tool for the study of the stability of the solution to (3.1).

Theorem 5.3

Let \(T>0.\) Assume Hypotheses (X3T), (A2T) and (B4T) hold. Let X be the solution of (3.1) given by (3.3) and

$$\begin{aligned} Y_t:=\int _0^t \left( {{\bar{b}}}_s- \frac{a_s^2}{2}+\varepsilon _s\right) ds,\,\, t\in [0,T], \end{aligned}$$

where \(\varepsilon :=\{\varepsilon _s, s\in [0,T]\}\) is a positive adapted process belonging to \(L^{\infty }(\Omega \times [0,T]).\) Then, for any \(t\in [0,T]\) we get

$$\begin{aligned} \mathbb {E}\left( F(X_t, Y_t)\right) =\,&\mathbb {E}\left( |X_0|^\nu \right) +\nu \mathbb {E}\left( \int _0^{t} F(X_s, Y_s) \left[ \nu \frac{a_s^2}{2} +\frac{\phi (s,X_s)}{X_s}-\varepsilon _s\right] ds\right) \\ &+\nu (\nu -1)\mathbb {E}\left( \int _0^{t} F(X_s, Y_s)\,a_s\, \frac{\partial _xZ_s (A_s, X_0(A_s))}{Z_s(A_s, X_0(A_s))}\cdot (D_sX_0)(A_s)\,ds\right) . \end{aligned}$$
(5.3)

For simplicity we will write

$$\begin{aligned} \mathbb {E}\left( F(X_t, Y_t)\right) =\mathbb {E}\left( |X_0|^\nu \right) +\nu \mathbb {E}\left( \int _0^{t} F(X_s, Y_s) \eta (s)ds\right) \end{aligned}$$

with

$$\begin{aligned} \eta (s):=\nu \frac{a_s^2}{2} +\frac{\phi (s,X_s)}{X_s}-\varepsilon _s+(\nu -1)a_s \frac{\partial _xZ_s (A_s, X_0(A_s))}{Z_s(A_s, X_0(A_s))}(D_sX_0)(A_s). \end{aligned}$$
(5.4)

Remark 5.4

Note that as a consequence of Lemmas 4.1 and 4.3 we have, for either \(X_0>0\) a.s. or \(X_0<0\) a.s.,

$$\begin{aligned} \left| \frac{\partial _xZ_s (A_s, X_0(A_s))}{Z_s(A_s, X_0(A_s))} \right| \le \frac{e^{2c_1}}{|X_0 (A_s)|}, \, a.s. \end{aligned}$$

Proof of Theorem 5.3

Note that by (X2T) the random variable \(X_0\) belongs to \({{\mathbb {D}}}_T^{1,2}\) and hence there exists a sequence \(\{X_0^{(n)}\in \mathcal {S}, n\ge 1\}\) that converges to \(X_0\) in \(\mathbb {D}^{1,2}_T.\) By (X3T) we can assume that \(|X_0^{(n)}|>\frac{\eta }{2}\) for all \(n\in \mathbb {N}\), where \(\eta >0\) is given by Hypothesis (X3T).

Since \(a\), \({{\bar{b}}}\) and \(\varepsilon \) are processes in \(L_{a}^2(\Omega \times [0,T]),\) it is well known that we can consider three sequences \(\{a^{(n)}, n\ge 1\}\), \(\{{\bar{b}}^{(n)}, n\ge 1\}\) and \(\{\varepsilon ^{(n)}, n\ge 1\}\) of adapted processes of the form

$$\begin{aligned} a_t^{(n)}=\sum _{i=0}^{m_n-1} F_{i,n} {1\!\!1}_{(t_{i},t_{i+1}]}(t),\quad {{\bar{b}}}_t^{(n)} =\sum _{i=0}^{m_n-1} G_{i,n} {1\!\!1}_{(t_{i},t_{i+1}]}(t), \quad \varepsilon _t^{(n)}=\sum _{i=0}^{m_n-1} E_{i,n} {1\!\!1}_{(t_{i},t_{i+1}]}(t), \end{aligned}$$

with \(F_{i,n}, G_{i,n}, E_{i,n}\in \mathcal {S}\), such that

$$\begin{aligned} \lim _{n \rightarrow +\infty } \mathbb {E}\int _0^T \left[ a_t^{(n)}- a_t\right] ^2 dt =0,\qquad \qquad \lim _{n \rightarrow +\infty } \mathbb {E}\int _0^T \left[ {{\bar{b}}}_t^{(n)}- {{\bar{b}}}_t\right] ^2 dt =0 \end{aligned}$$

and

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {E}\int _0^T |\varepsilon _t^{(n)}- \varepsilon _t|^2 dt =0. \end{aligned}$$

Moreover, observe that given (X2T), (A2T) and (B4T), it is straightforward to prove that \(||X^{(n)}_0||_{\infty }\), \(||a^{(n)}||_{L^{\infty }(\Omega \times [0,T])}\), \(||{\bar{b}}^{(n)}||_{L^{\infty } (\Omega \times [0,T])}\) and \(||{\varepsilon }^{(n)}||_{L^{\infty } (\Omega \times [0,T])}\) are bounded by \(c||X_0||_{\infty },\) \(c||a||_{L^{\infty }(\Omega \times [0,T])},\) \(c||{\bar{b}}||_{L^{\infty }(\Omega \times [0,T])}\) and \(c||{\varepsilon }||_{L^{\infty }(\Omega \times [0,T])}\), respectively, for a certain generic constant \(c\ge 1.\)
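As a purely illustrative aside (not part of the argument), the left-endpoint simple approximation above can be sketched numerically; the function `simple_approximation`, the test path and the partition sizes below are our own choices:

```python
import numpy as np

def simple_approximation(path, t, m):
    """Piecewise-constant (simple) approximation of a path on [0, T]:
    on each subinterval (t_i, t_{i+1}] the process is frozen at its
    value at the left endpoint t_i, as in the adapted simple processes above."""
    edges = np.linspace(0.0, t[-1], m + 1)
    # index of the left endpoint t_i of the interval (t_i, t_{i+1}] containing each time
    idx = np.clip(np.searchsorted(edges, t, side="left") - 1, 0, m - 1)
    left_values = np.interp(edges[:-1], t, path)
    return left_values[idx]

t = np.linspace(0.0, 1.0, 2001)
path = np.sin(6 * t)  # stand-in for one trajectory of a continuous process

def err(m):
    # root-mean-square error over [0, T]
    return np.sqrt(np.mean((path - simple_approximation(path, t, m)) ** 2))

assert err(64) < err(8) < err(2)  # refining the partition improves the fit
```

The assertion checks that the \(L^2\)-type error decreases as the partition is refined, which is the content of the convergence limits displayed above.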

But we are interested in approximating \(a\), \({{\bar{b}}}\) and \(\varepsilon \) by continuous processes. Towards this end, for each \(n\in \mathbb {N}\) define

$$\begin{aligned} t_i=\frac{i T}{n}, \qquad i=0,\dots ,n \end{aligned}$$

and

$$\begin{aligned} x_j=\frac{j}{n},\qquad j=-n^2,\dots , -1, 0, 1,\dots , n^2. \end{aligned}$$

We can consider functions \(k_i^{(n)}\in \mathcal {C}_c^{\infty }(\mathbb {R})\), with values in [0, 1], that approximate the indicator functions \({1\!\!1}_{[t_i,t_{i+1}]}\) in the following sense:

$$\begin{aligned} k_i^{(n)}(t)=\left\{ \begin{array}{ll} 1, &{} t\in [t_i,t_{i+1}],\\ 0, &{} t \notin \left[ t_i-\frac{1}{n^2},\left( t_{i+1}+ \frac{1}{n^2}\right) \wedge T\right] . \end{array}\right. \end{aligned}$$

Then, we can replace the previous processes \(\{a^{(n)}, n\ge 1\}\), \(\{{\bar{b}}^{(n)}, n\ge 1\}\) and \(\{\varepsilon ^{(n)}, n\ge 1\}\) by continuous and adapted versions of the form

$$\begin{aligned} a_t^{(n)}=\sum _{i=1}^{m_n} F_{i,n} k_i^{(n)}(t),\qquad {{\bar{b}}}_t^{(n)}=\sum _{i=1}^{m_n} G_{i,n} k_i^{(n)}(t), \qquad \varepsilon _t^{(n)}=\sum _{i=1}^{m_n} E_{i,n} k_i^{(n)}(t), \end{aligned}$$

with \(F_{i,n}, G_{i,n}, E_{i,n}\in \mathcal {S}\), such that

$$\begin{aligned} \lim _{n \rightarrow +\infty } \mathbb {E}\int _0^T \left[ a_t^{(n)}- a_t\right] ^2 dt =0,\qquad \qquad \lim _{n \rightarrow +\infty } \mathbb {E}\int _0^T \left[ {\bar{b}}_t^{(n)}- {{\bar{b}}}_t\right] ^2 dt =0 \end{aligned}$$

and

$$\begin{aligned} \lim _{n \rightarrow +\infty } \mathbb {E}\int _0^T |\varepsilon _t^{(n)}- \varepsilon _t|^2 dt =0. \end{aligned}$$

The fact that we can replace \({1\!\!1}_{(t_i,t_{i+1}]}\) with \(k_i^{(n)}(t)\) is proved by the arguments used in Sect. 6.2.
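As an illustration of how such a \(\mathcal {C}_c^{\infty }\) function \(k_i^{(n)}\) can be built (a standard bump-function construction, with names of our own choosing; the truncation at T in the definition above is omitted here):

```python
import math

def smooth_step(x):
    """C^infinity transition function: 0 for x <= 0, 1 for x >= 1."""
    f = lambda u: math.exp(-1.0 / u) if u > 0 else 0.0
    return f(x) / (f(x) + f(1.0 - x))

def bump(t, a, b, delta):
    """C^infinity function equal to 1 on [a, b] and vanishing outside
    [a - delta, b + delta], mimicking k_i^(n) with delta = 1/n^2."""
    return smooth_step((t - (a - delta)) / delta) \
        * smooth_step(((b + delta) - t) / delta)

a, b, delta = 0.25, 0.5, 0.01
assert bump(0.25, a, b, delta) == 1.0 and bump(0.4, a, b, delta) == 1.0
assert bump(0.2, a, b, delta) == 0.0 and bump(0.6, a, b, delta) == 0.0
assert 0.0 < bump(0.245, a, b, delta) < 1.0  # smooth ramp on (a - delta, a)
```

The quotient `smooth_step` is the classical \(e^{-1/x}\) gluing, so the product is genuinely \(\mathcal {C}^{\infty }\) with compact support.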

The approximation of \(\phi \) is slightly more involved; see Sect. 6.2 for details. There, we consider the adapted random field

$$\begin{aligned} \phi ^{(n)}(t,x)= \int _0^x \psi ^{(n)} (t,y) dy,\qquad n\ge 1, \end{aligned}$$

where

$$\begin{aligned} \psi ^{(n)} (t,x)=\partial _x \phi (t,0)+g\left( \int _0^x{\bar{\psi }}^{(n)} (t,y)dy\right) , \end{aligned}$$

with g smooth enough,

$$\begin{aligned} {\bar{\psi }}^{(n)}(t,x)=\sum _{i=0}^{n-1} \sum _{j=-n^2}^{n^2-1} H_{i,j}^{(n)} k_i^{(n)}(t)\, {1\!\!1}_{(x_j,x_{j+1}]}(x),\qquad n\ge 1, \end{aligned}$$

and \(H_{i,j}^{(n)}\in \mathcal {S}.\) Taking into account the construction in the Appendix (Sect. 6.2) we can prove that \({\bar{\psi }}^{(n)}(t,x)\) is bounded and that \({\bar{\psi }}^{(n)}(t,x) \longrightarrow \partial _x^2 \phi (t,x)\) as \(n\) tends to \(+\infty \), for almost all \((\omega ,t,x)\in \Omega \times [0,T]\times \mathbb {R}\). Moreover, we can also check that \(\psi ^{(n)}(t,x) \longrightarrow \partial _x \phi (t,x)\) almost surely as \(n\) tends to \(+\infty \), that the function \({\bar{\psi }}^{(n)}(t,x)\) is uniformly bounded with respect to all the parameters (including \(n\)) and that

$$\begin{aligned} \lim _{n \rightarrow +\infty } \mathbb {E}\int _K \int _0^T \left[ \partial _x \phi (t,x)-\psi ^{(n)}(t,x)\right] ^2 dt dx=0, \end{aligned}$$

for any compact \(K\subset \mathbb {R}.\) As a consequence, we also have

$$\begin{aligned} \lim _{n \rightarrow +\infty } \sup _{x\in K}\mathbb {E}\int _0^T \left[ \phi (t,x)-\phi ^{(n)}(t,x)\right] ^2 dt=0. \end{aligned}$$
(5.5)

Now, we divide the proof into two steps. First, we prove the result using the simple processes defined above and then, for the general case.

  1. 1.

    Here, we fix \(n\in \mathbb {N}\). Let \(Z^{(n)}\) be the solution to (3.2) when we replace a and b with \(a^{(n)}\) and \(b^{(n)}\), respectively. Note that replacing a with \(a^{(n)}\) entails replacing the operators \(A_{s,t}\) and \(T_t\) accordingly. Here, we also replace \(X_0\) with \(X_0^{(n)}\).

    By Lemma 4.4, we have that \(Z_t^{(n)}(A^{(n)}_t,x) \in L^p([0,T]; \mathbb {D}^{1,p}_T)\) for any \(p>1\) and \(x\in \mathbb {R}\). Moreover, from Lemma 5.1, we also have, for \(s>t\),

    $$\begin{aligned} D_s Z_t^{(n)}(A^{(n)}_t,X_0^{(n)}(A_t^{(n)}))&= \partial _x Z_t^{(n)}(A^{(n)}_t, X_0^{(n)}(A_t^{(n)}))\left( D_s X_0^{(n)}\right) (A_t^{(n)})\\&= \exp \left\{ \int _0^t \partial _x b^{(n)}(u,L^{(n)}_{0,u} Z^{(n)}_u(A^{(n)}_t, X_0^{(n)}(A_t^{(n)}))) du\right\} \\&\quad \times \left( D_s X_0^{(n)}\right) (A_t^{(n)}), \end{aligned}$$
    (5.6)

    where the last equality follows from (4.5). Remember that, as a consequence of the definition of \(b^{(n)}\), the derivative \(\partial _x b^{(n)}\) is bounded on \(\Omega \times [0,T]\times \mathbb {R}\). Hence \(Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)}))\) is a bounded process because of Hypothesis (B2T), (3.2) and (4.1). The fact that \(Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)}))\in \mathbb {L}^F\cap L^\beta (\Omega \times [0,T])\), for any \(\beta >2,\) is not obvious, since in Lemma 4.2 the initial condition is deterministic. Membership in \(\mathbb {L}^F\) can be proved by considering the approximation

    $$\begin{aligned} \sum _{j=-m}^{m} \partial _x Z_t^{(n)}(A^{(n)}_t,x_j) \int _0^{X_0^{(n)}(A_t^{(n)})} {1\!\!1}_{(x_j,x_{j+1}]}(x) dx,\end{aligned}$$
    (5.7)

    and taking into account (4.5), Lemmas 4.2 and 4.4 and the assumptions on the coefficients.

    Now it is easy to see that \(X_t^{(n)}=L^{(n)}_{0,t} \, Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)}))\) belongs to \(\mathbb {L}^F\) with

    $$\begin{aligned} D_s X^{(n)}_t&= L^{(n)}_{0,t} \, D_s Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)})),\\ D_r D_s X^{(n)}_t&= \left( D_rL^{(n)}_{0,t}\right) D_s Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)}))\\&\quad + L^{(n)}_{0,t} D_r D_s Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)})), \end{aligned}$$
    (5.8)

    for \(s>t\) and any \(r\in [0,T]\).

    Our goal now is to use Remark 4 of Theorem 3 in Alòs–Nualart [1] in order to apply the Itô formula (3.2) of that paper.

    Note first of all that the hypotheses on \(a^{(n)}\) imply

    $$\begin{aligned} L^{(n)}_{0,t} \in L^p(\Omega \times [0,T]), \quad \text {for any}\ p>1, \end{aligned}$$
    (5.9)

    and (4.1) and the hypotheses on \(X_0^{(n)}\) imply that

    $$\begin{aligned} \mathbb {E}\left( \int _0^T \left| a^{(n)}_s L^{(n)}_{0,s} Z^{(n)}_s(A^{(n)}_s,X_0^{(n)}(A_s^{(n)})) \right| ^2 ds\right) ^2<\infty . \end{aligned}$$
    (5.10)

    From (5.6) and the hypotheses on \({{\bar{b}}}^{(n)}\), \(\phi ^{(n)}\) and \(X_0^{(n)}\) it is clear that

    $$\begin{aligned} \left| D_s Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)})) \right| \le C, \qquad s>t. \end{aligned}$$
    (5.11)

    So, the hypotheses on \(a^{(n)}\) and \(X_0^{(n)}\), (5.9) and (4.2) imply that

    $$\begin{aligned} \int _0^T\left( \int _0^s \left| \mathbb {E} D_s\left( a^{(n)}_r L^{(n)}_{0,r} Z^{(n)}_r(A^{(n)}_r,X_0^{(n)}(A_r^{(n)}))\right) \right| ^2 dr\right) ^2 ds<\infty . \end{aligned}$$
    (5.12)

    We also need to study (5.8). We divide it into three parts. The hypotheses on \(X_0^{(n)}\) and \(a^{(n)}\) (in particular, \(a^{(n)}\) is adapted), (5.9) and (5.11) give

    $$\begin{aligned} \int _0^T\mathbb {E}\left( \int _0^T\int _0^s \left| \left( D_u a_r^{(n)}\right) L^{(n)}_{0,r} D_s Z^{(n)}_r(A^{(n)}_r,X_0^{(n)}(A_r^{(n)})) \right| ^2dr du\right) ^2ds<\infty ,\qquad \end{aligned}$$
    (5.13)

    and

    $$\begin{aligned} \int _0^T\mathbb {E}\left( \int _0^T\int _0^s \left| a_r^{(n)}\left( D_u L^{(n)}_{0,r}\right) D_s Z^{(n)}_r(A^{(n)}_r,X_0^{(n)}(A_r^{(n)})) \right| ^2dr du\right) ^2ds<\infty . \qquad \end{aligned}$$
    (5.14)

    In order to deal with the remaining term we need to take into account the following identity:

    $$\begin{aligned} \begin{array}{l} \displaystyle a_r^{(n)}L^{(n)}_{0,r} D_u D_s Z^{(n)}_r(A^{(n)}_r, X_0^{(n)}(A_r^{(n)}))\\ \displaystyle \qquad = a_r^{(n)}L^{(n)}_{0,r} D_u \left[ \exp \left\{ \int _0^r \partial _x b^{(n)}(v,L^{(n)}_{0,v} Z^{(n)}_v(A^{(n)}_r,X_0^{(n)}(A_r^{(n)}))) dv\right\} \left( D_s X_0^{(n)}\right) (A_r^{(n)})\right] . \end{array} \end{aligned}$$

    The factor with \(D_u\left( D_s X_0^{(n)}\right) (A_r^{(n)})\) is bounded as before thanks to the hypotheses on \(a^{(n)}\) and \(X_0^{(n)}.\) On the other hand, we have

    $$\begin{aligned} \begin{array}{l} \displaystyle a_r^{(n)}L^{(n)}_{0,r} D_u \left[ \exp \left\{ \int _0^r \partial _x b^{(n)}(v,L^{(n)}_{0,v} Z^{(n)}_v(A^{(n)}_r,X_0^{(n)}(A_r^{(n)}))) dv\right\} \right] \left( D_s X_0^{(n)}\right) (A_r^{(n)})\\ \displaystyle \qquad =A(r,u,s)+B(r,u,s), \end{array} \end{aligned}$$

    with

    $$\begin{aligned} A(r,u,s)&= a_r^{(n)}L^{(n)}_{0,r} D_u \left[ \exp \left\{ \int _0^r {\bar{b}}^{(n)}_v dv\right\} \right] \\&\quad \times \exp \left\{ \int _0^r \partial _x \phi ^{(n)}(v, L^{(n)}_{0,v} Z^{(n)}_v(A^{(n)}_r,X_0^{(n)}(A_r^{(n)})))dv\right\} \left( D_s X_0^{(n)}\right) (A_r^{(n)}),\\ B(r,u,s)&= a_r^{(n)}L^{(n)}_{0,r} \exp \left\{ \int _0^r {\bar{b}}^{(n)}_v dv\right\} \left( D_s X_0^{(n)}\right) (A_r^{(n)}) \\&\quad \times D_u \left[ \exp \left\{ \int _0^r \partial _x \phi ^{(n)}(v, L^{(n)}_{0,v}Z^{(n)}_v(A^{(n)}_r,X_0^{(n)}(A_r^{(n)})))dv\right\} \right] . \end{aligned}$$

    Using similar arguments as before we can show

    $$\begin{aligned} \int _0^T\mathbb {E}\left( \int _0^T\int _0^s \left| A(r,u,s)\right| ^2dr du\right) ^2ds<\infty . \end{aligned}$$
    (5.15)

    Using the hypotheses on \(a^{(n)}\), \({\bar{b}}^{(n)}\) and \(X_0^{(n)}\), the construction of \(\phi ^{(n)}\), (5.9), (5.11) and arguing as in (5.12), we obtain

    $$\begin{aligned} \int _0^T\mathbb {E}\left( \int _0^T\int _0^s \left| B(r,u,s)\right| ^2dr du\right) ^2ds<\infty . \end{aligned}$$
    (5.16)

    Notice that using \(a^{(n)}\), \({{\bar{b}}}^{(n)}\) and \(\varepsilon ^{(n)}\) we can also define

    $$\begin{aligned} Y^{(n)}_t:=\int _0^t \left( {{\bar{b}}}^{(n)}_s-\frac{(a^{(n)}_s)^2}{2} +\varepsilon ^{(n)}_s\right) ds, \, \, t\in [0,T]. \end{aligned}$$

    Moreover we can consider \(F_m(x,y):=\alpha _m(x)^{\nu }e^{-\nu y},\) where \(\alpha _m\) is an infinitely differentiable function such that \(\alpha _m(x)=|x|\) on \((-\frac{1}{m}, \frac{1}{m})^c\) and \(\frac{1}{2m}\le \alpha _m(x)\le \frac{1}{m}\) on \((-\frac{1}{m}, \frac{1}{m}).\) The expression (5.8), together with the bounds (5.10) and (5.12)–(5.16) (we can argue in a similar way for points (ii) and (iii) in Remark 4 of [1]), allows us to apply the Itô formula for the Skorohod integral (see [1]) and to obtain, for \(\frac{1}{m}\le \frac{\eta }{2},\)

    $$\begin{aligned} \begin{array}{l} \displaystyle F_m\left( X^{(n)}_t, Y^{(n)}_t\right) =|X^{(n)}_0|^\nu +\int _0^t \partial _x F_m(X^{(n)}_s, Y^{(n)}_s) {{\bar{b}}}^{(n)}_sX^{(n)}_s ds\\ \displaystyle \qquad + \int _0^t \partial _x F_m(X^{(n)}_s, Y^{(n)}_s) \phi ^{(n)}(s,X^{(n)}_s) ds+ \int _0^t \partial _x F_m(X^{(n)}_s, Y^{(n)}_s)a^{(n)}_sX^{(n)}_s \delta W_s\\ \displaystyle \qquad + \int _0^t \partial _y F_m(X^{(n)}_s, Y^{(n)}_s)\left( {{\bar{b}}}^{(n)}_s -\frac{(a^{(n)}_s)^2}{2}+\varepsilon ^{(n)}_s\right) ds\\ \displaystyle \qquad + \frac{1}{2} \int _0^t \partial ^2_{x,x} F_m(X^{(n)}_s, Y^{(n)}_s) \left( a^{(n)}_sX^{(n)}_s\right) ^2 ds\\ \displaystyle \qquad + \int _0^t \partial ^2_{x,x} F_m(X^{(n)}_s, Y^{(n)}_s) a^{(n)}_s X^{(n)}_s D^-_sX^{(n)}_s ds.\end{array} \end{aligned}$$

    Taking into account the definition of \(F_m\) and (B4T) we have

    $$\begin{aligned} \begin{array}{l} \displaystyle F_m\left( X^{(n)}_t, Y^{(n)}_t\right) =|X^{(n)}_0|^\nu +\nu \int _0^t \partial _x \alpha _m(X^{(n)}_s)\ \alpha _m(X^{(n)}_s)^{\nu -1} \ e^{-\nu Y^{(n)}_s}\ {{\bar{b}}}^{(n)}_sX^{(n)}_s ds\\ \displaystyle \qquad + \nu \int _0^t \partial _x \alpha _m(X^{(n)}_s)\ \alpha _m(X^{(n)}_s)^{\nu -1}\ e^{-\nu Y^{(n)}_s}\ \phi ^{(n)}(s,X^{(n)}_s) ds\\ \displaystyle \qquad + \nu \int _0^t\partial _x \alpha _m(X^{(n)}_s)\ \alpha _m(X^{(n)}_s)^{\nu -1}\ e^{-\nu Y^{(n)}_s}\ a^{(n)}_sX^{(n)}_s \delta W_s\\ \displaystyle \qquad -\nu \int _0^t \alpha _m(X^{(n)}_s)^\nu \ e^{-\nu Y^{(n)}_s}\ \left( {{\bar{b}}}^{(n)}_s-\frac{(a^{(n)}_s)^2}{2}+\varepsilon ^{(n)}_s\right) ds\\ \displaystyle \qquad + \frac{\nu }{2} \int _0^t \partial ^2_{x,x} \alpha _m(X^{(n)}_s)\ \alpha _m(X^{(n)}_s)^{\nu -1} \ e^{-\nu Y^{(n)}_s}\ \left( a^{(n)}_sX^{(n)}_s\right) ^2 ds\\ \displaystyle \qquad + \frac{\nu (\nu -1)}{2} \int _0^t \left( \partial _{x} \alpha _m(X^{(n)}_s)\right) ^2\ \alpha _m(X^{(n)}_s)^{\nu -2} \ e^{-\nu Y^{(n)}_s}\ \left( a^{(n)}_sX^{(n)}_s\right) ^2 ds\\ \displaystyle \qquad + \nu \int _0^t \partial ^2_{x,x} \alpha _m(X^{(n)}_s)\ \alpha _m(X^{(n)}_s)^{\nu -1} \ e^{-\nu Y^{(n)}_s}\ a^{(n)}_s X^{(n)}_s D^-_sX^{(n)}_s ds \\ \displaystyle \qquad + \nu (\nu -1) \int _0^t \left( \partial _{x} \alpha _m(X^{(n)}_s)\right) ^2\ \alpha _m(X^{(n)}_s)^{\nu -2} \ e^{-\nu Y^{(n)}_s}\ a^{(n)}_s X^{(n)}_s D^-_sX^{(n)}_s ds. \end{array} \end{aligned}$$
    (5.17)

    Multiplying both sides of (5.17) by \(\ {1\!\!1}_{A_m}\) with

    $$\begin{aligned} A_m=\left\{ \omega \in \Omega ;\ \inf _{r\in [0,T]} \left| X_r^{(n)}\right| >\frac{1}{m}\right\} , \end{aligned}$$

    and thanks to the definition of \(\alpha _m\) and the local property of the Lebesgue and Skorohod integrals (see Lemma 5.2 and Proposition 1.3.15 in [14]), we get

    $$\begin{aligned}\begin{array}{l} \displaystyle {1\!\!1}_{A_m} F\left( X^{(n)}_t, Y^{(n)}_t\right) = {1\!\!1}_{A_m} \Bigg \{|X^{(n)}_0|^\nu +\nu \int _0^t |X^{(n)}_s|^\nu \ e^{-\nu Y^{(n)}_s}\ {{\bar{b}}}^{(n)}_s ds\\ \displaystyle \qquad + \nu \int _0^t |X^{(n)}_s|^\nu \ e^{-\nu Y^{(n)}_s}\ \frac{\phi ^{(n)}(s,X^{(n)}_s)}{X^{(n)}_s} ds+ \nu \int _0^t |X^{(n)}_s|^\nu \ e^{-\nu Y^{(n)}_s}\ a^{(n)}_s \delta W_s\\ \displaystyle \qquad -\nu \int _0^t |X^{(n)}_s|^\nu \ e^{-\nu Y^{(n)}_s}\ \left( {{\bar{b}}}^{(n)}_s-\frac{(a^{(n)}_s)^2}{2}+\varepsilon ^{(n)}_s\right) ds\\ \displaystyle \qquad + \frac{\nu (\nu -1)}{2} \int _0^t |X^{(n)}_s|^\nu \ e^{-\nu Y^{(n)}_s}\ \left( a^{(n)}_s\right) ^2 ds\\ \displaystyle \qquad + \nu (\nu -1) \int _0^t |X^{(n)}_s|^\nu \ e^{-\nu Y^{(n)}_s}\ a^{(n)}_s \frac{D^-_sX^{(n)}_s}{X^{(n)}_s} ds\Bigg \}.\end{array} \end{aligned}$$

    Using that \(a^{(n)}, {{\bar{b}}}^{(n)}\) and \(\phi ^{(n)}\) are adapted to the underlying filtration \({\mathbb {F}}\), (3.3), Lemmas 5.1 and 4.3, Hypothesis (B4T), (4.5), Lemma 2.6 in [12], Proposition 2.1.4 in [5] and the fact that \(D^-_s L^{(n)}_{0,s}=0\), we have

    $$\begin{aligned} D^-_sX^{(n)}_s= X^{(n)}_s \cdot \frac{\partial _xZ^{(n)}_s (A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))}{Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0 (A^{(n)}_s))}(D_sX^{(n)}_0)(A^{(n)}_s). \end{aligned}$$
    (5.18)

    Noting that Lemma 4.3 and (3.3) imply

    $$\begin{aligned}\lim _{m \rightarrow +\infty } {1\!\!1}_{\{\inf _{r\in [0,T]} |X_r^{(n)}|>\frac{1}{m}\}} =\ {1\!\!1}_{\{\inf _{r\in [0,T]} |X_r^{(n)}|>0\}}=1, \end{aligned}$$

    we can let \(m\rightarrow +\infty \) and use (5.18) to write

    $$\begin{aligned} \begin{array}{l} \displaystyle F\left( X^{(n)}_t, Y^{(n)}_t\right) =|X^{(n)}_0|^\nu + \nu \int _0^t F(X^{(n)}_s, Y^{(n)}_s) {{\bar{b}}}^{(n)}_s ds\\ \displaystyle \qquad + \nu \int _0^t F(X^{(n)}_s, Y^{(n)}_s) \frac{\phi ^{(n)}(s,X^{(n)}_s)}{X^{(n)}_s} ds+\nu \int _0^t F(X^{(n)}_s, Y^{(n)}_s)a^{(n)}_s \delta W_s\\ \displaystyle \qquad -\nu \int _0^t F(X^{(n)}_s, Y^{(n)}_s)\left( {{\bar{b}}}^{(n)}_s- \frac{(a^{(n)}_s)^2}{2}+\varepsilon ^{(n)}_s\right) ds\\ \displaystyle \qquad + \frac{\nu (\nu -1)}{2} \int _0^t F(X^{(n)}_s, Y^{(n)}_s) \Bigg [2a^{(n)}_s \frac{\partial _xZ^{(n)}_s (A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))}{Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))} (D_sX^{(n)}_0)(A^{(n)}_s)\\ \displaystyle \qquad \qquad + \left( a^{(n)}_s\right) ^2\Bigg ]ds.\end{array} \end{aligned}$$

    So,

    $$\begin{aligned} \begin{array}{l} \displaystyle F\left( X^{(n)}_t, Y^{(n)}_t\right) =|X^{(n)}_0|^\nu +\nu \int _0^t F(X^{(n)}_s, Y^{(n)}_s)a^{(n)}_s \delta W_s\\ \displaystyle \qquad +\nu \int _0^t F(X^{(n)}_s, Y^{(n)}_s) \left[ \nu \frac{\left( a^{(n)}_s\right) ^2}{2} +\frac{\phi ^{(n)}(s,X^{(n)}_s)}{X^{(n)}_s}-\varepsilon ^{(n)}_s\right] ds\\ \displaystyle \qquad +\nu (\nu -1)\int _0^t F(X^{(n)}_s, Y^{(n)}_s) a^{(n)}_s \frac{\partial _xZ^{(n)}_s (A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))}{Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))} (D_sX^{(n)}_0)(A^{(n)}_s)ds. \end{array} \end{aligned}$$

    Note that

    $$\begin{aligned} F(X^{(n)}_s, Y^{(n)}_s)\ a^{(n)}_s=\left| Z^{(n)}_s(A_s^{(n)}, X_0^{(n)}(A_s^{(n)})) \right| ^{\nu } (L^{(n)}_{0,s})^{\nu }e^{-\nu Y_s^{(n)}} a^{(n)}_s \end{aligned}$$

    is an element of \(\mathbb {L}^{1,2,f}_T\) (by the same argument applied to (5.7), together with Lemmas 4.3, 4.4 and 5.2). But this is not enough to show that the expectation of the Skorohod integral is zero. In order to prove it, note that \(a^{(n)}, L_{0,s}^{(n)}, e^{- Y_s^{(n)}} \in L^p([0,T];\mathbb {D}^{1,p}_T)\) for all \(p>1\). From Lemma 4.3,

    $$\begin{aligned} \left| Z^{(n)}_s(A_s^{(n)}, X_0^{(n)}(A_s^{(n)}))\right| \ge \frac{\eta }{2} \exp \left( -\int _0^T\Vert \gamma _s\Vert _\infty ds\right) . \end{aligned}$$

    As before, with \(\frac{1}{m} \le \frac{\eta }{2} \exp \left( -\int _0^T\Vert \gamma _s\Vert _\infty ds\right) \), we have

    $$\begin{aligned} \left| Z^{(n)}_s(A_s^{(n)}, X_0^{(n)}(A_s^{(n)}))\right| ^{\nu }=\alpha _m \left( Z^{(n)}_s(A_s^{(n)}, X_0^{(n)}(A_s^{(n)}))\right) ^\nu . \end{aligned}$$

    Note that \(\partial _x \left( \alpha _m(x)^\nu \right) =\nu \, \alpha _m(x)^{\nu -1} \partial _x \alpha _m(x)\). Since \(\alpha _m(x)\ge \frac{1}{2m}\) for all \(x\), we obtain \(\alpha _m(x)^{\nu -1}\le (2m)^{1-\nu }\), and we also know that \(\partial _x \alpha _m\) is bounded. So, the proof of Lemma 5.2 implies that \(Z^{(n)}_\cdot (A_\cdot ^{(n)}, X_0^{(n)}(A_\cdot ^{(n)}))\in L^p([0,T];\mathbb {D}^{1,p}_T)\) for all \(p>1\).
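    For illustration only, here is a concrete (merely \(C^1\)) instance of \(\alpha _m\); a genuinely \(\mathcal {C}^{\infty }\) version with the same properties follows by mollification. The quadratic cap below is our own choice, not the paper's construction:

    ```python
    def alpha(x, m):
        """C^1 smoothing of |x| (illustrative sketch): equals |x| for |x| >= 1/m
        and stays within [1/(2m), 1/m] on (-1/m, 1/m)."""
        if abs(x) >= 1.0 / m:
            return abs(x)
        return (m * x * x + 1.0 / m) / 2.0  # matches |x| and its slope at +-1/m

    m = 10
    assert alpha(0.5, m) == 0.5 and alpha(-0.3, m) == 0.3
    assert alpha(0.0, m) == 1.0 / (2 * m)  # floor 1/(2m) at the origin
    assert all(1.0 / (2 * m) <= alpha(x, m) <= 1.0 / m for x in (0.0, 0.03, 0.09))
    ```

    The lower bound \(\alpha _m\ge \frac{1}{2m}\) is exactly what gives the estimate \(\alpha _m(x)^{\nu -1}\le (2m)^{1-\nu }\) used above.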

    Therefore, taking expectations in the penultimate equality, we prove the result for the particular case of simple processes introduced above:

    $$\begin{aligned} \begin{array}{l} \displaystyle \mathbb {E}\left[ F\left( X^{(n)}_t, Y^{(n)}_t\right) \right] =\mathbb {E}|X^{(n)}_0|^\nu +\nu \ \mathbb {E} \int _0^t F(X^{(n)}_s, Y^{(n)}_s)\eta ^{(n)}(s)ds \end{array} \end{aligned}$$

    where

    $$\begin{aligned} \eta ^{(n)}(s)&= \nu \frac{\left( a^{(n)}_s\right) ^2}{2}+\frac{\phi ^{(n)} (s,X^{(n)}_s)}{X^{(n)}_s}-\varepsilon ^{(n)}_s\\&\quad +(\nu -1)a^{(n)}_s \frac{\partial _xZ^{(n)}_s (A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))}{Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))} (D_sX^{(n)}_0)(A^{(n)}_s). \end{aligned}$$
  2. 2.

    In order to prove the general case we take limits in the last equality as \(n\rightarrow \infty \) to show

    $$\begin{aligned} {{\mathbb {E}}} \left[ F\left( X_t, Y_t\right) \right] = {{\mathbb {E}}}\left( |X_0|^\nu \right) +\nu \ \mathbb {E} \int _0^t F(X_s, Y_s) \eta (s)ds, \end{aligned}$$

    where \(\eta \) is introduced in (5.4). This claim is detailed as follows.

    First of all, \(\mathbb {E}(|X^{(n)}_0|^\nu )\) converges to \(\mathbb {E}(|X_0|^\nu )\) because, by construction, \(X_0^{(n)}\) converges to \(X_0\) in \(L^2(\Omega )\).

    For the second term, it is enough to show that

    $$\begin{aligned} \lim _{n\rightarrow \infty }{{\mathbb {E}}}\int _0^T |F(X_s, Y_s) \eta (s)-F(X^{(n)}_s, Y^{(n)}_s) \eta ^{(n)}(s)|ds=0 \end{aligned}$$

    in order to finish the proof. To do so, we use the inequality

    $$\begin{aligned} {{\mathbb {E}}}\int _0^T |F(X_s, Y_s) \eta (s)-F(X^{(n)}_s, Y^{(n)}_s) \eta ^{(n)}(s)|ds\le B_{1,n}+B_{2,n}, \end{aligned}$$

    with

    $$\begin{aligned} B_{1,n}={{\mathbb {E}}}\int _0^T |F(X_s, Y_s)-F(X^{(n)}_s, Y^{(n)}_s)| |\eta (s)|ds \end{aligned}$$

    and

    $$\begin{aligned} B_{2,n}={{\mathbb {E}}}\int _0^T F(X^{(n)}_s, Y^{(n)}_s) |\eta (s)-\eta ^{(n)}(s)|ds. \end{aligned}$$

    We first deal with \(B_{1,n}\). Note that from Lemma 4.3, Remark 5.4 and Hypotheses (X3T), (A2T) and (B4T), the process \(\eta \) is bounded; that is, there exists a constant \(C>0\) such that

    $$\begin{aligned} \sup _{(\omega ,t)\in \Omega \times [0,T]}|\eta (t)| \le C. \end{aligned}$$

    Therefore, (3.3), (4.3), Lemma 4.1, (6.20), the hypotheses on the coefficients \(a, \varepsilon \) and b, the definition of their approximations \(a^{(n)}, \varepsilon ^{(n)}\) and \(b^{(n)}\), and the Cauchy–Schwarz inequality yield

    $$\begin{aligned} B_{1,n}&\le C\ {{\mathbb {E}}}\int _0^T | F(X_s, Y_s)-F(X^{(n)}_s, Y^{(n)}_s)|ds\\&\le C\ {{\mathbb {E}}}\int _0^T \left| L_{0,s}^{\nu }-(L_{0,s}^{(n)})^{\nu }\right| ds + C\ {\mathbb E}\int _0^T \left( L^{(n)}_{0,s}\right) ^{\nu } \left| e^{-\nu Y_s}-e^{-\nu Y_s^{(n)}}\right| ds\\&\quad + C\ {{\mathbb {E}}}\int _0^T \left( L^{(n)}_{0,s}\right) ^{\nu } \left| |Z_s(A_s, X_0(A_s))|^{\nu }-|Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))|^{\nu }\right| ds\\&\le C \ {{\mathbb {E}}}\int _0^T \left| L_{0,s}^{\nu } -(L_{0,s}^{(n)})^{\nu }\right| ds\\&\quad + C\left( {{\mathbb {E}}}\int _0^T \left( L^{(n)}_{0,s}\right) ^{2\nu } ds\right) ^{\frac{1}{2}} \left( {{\mathbb {E}}}\int _0^T \left| e^{-\nu Y_s} -e^{-\nu Y_s^{(n)}}\right| ^2ds\right) ^{\frac{1}{2}}\\&\quad + C \left( {{\mathbb {E}}}\int _0^T \left( L^{(n)}_{0,s}\right) ^{2\nu }ds\right) ^{\frac{1}{2}}\\&\qquad \times \left( {{\mathbb {E}}}\int _0^T \left| |Z_s(A_s, X_0(A_s))|^{\nu }-|Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))|^{\nu }\right| ^2 ds\right) ^{\frac{1}{2}}. \end{aligned}$$
    (5.19)

    We claim that our hypotheses allow us to show that all these terms converge to zero. Indeed, the last summand goes to zero as \(n \rightarrow +\infty \) due to (6.12) in Sect. 6.3, and the integral of \(|L_{0,s}^{\nu }-(L_{0,s}^{(n)})^{\nu }|\) also tends to zero because of the properties of the Itô integral. Moreover, it is easy to see that the second summand converges to zero.

    Concerning \(B_{2,n}\), we can write

    $$\begin{aligned} B_{2,n} ={{\mathbb {E}}}\int _0^T \left( L^{(n)}_{0,s}\right) ^{\nu } \left| Z^{(n)}_s(A_s^{(n)}, X_0^{(n)}(A_s^{(n)}))\right| ^{\nu }\ e^{-\nu Y_s^{(n)}}\ \left| \eta (s)-\eta ^{(n)}(s)\right| ds.\end{aligned}$$

    Note that Lemma 4.1, the hypotheses on the coefficients and the Cauchy–Schwarz inequality give

    $$\begin{aligned} B_{2,n}\le C \left( {{\mathbb {E}}}\int _0^T \left( L^{(n)}_{0,s}\right) ^{2\nu } ds\right) ^{\frac{1}{2}}\left( {{\mathbb {E}}}\int _0^T \left| \eta (s) -\eta ^{(n)}(s)\right| ^2 ds\right) ^{\frac{1}{2}}, \end{aligned}$$

    because \(|Z^{(n)}_s(A_s^{(n)}, X_0^{(n)}(A_s^{(n)}))|^{\nu } \ e^{-\nu Y_s^{(n)}}\le C\) (see (6.20)). The first factor on the right-hand side is also bounded. Finally, in a similar way as in Sect. 6.2, we may take a subsequence of \(\eta ^{(n)}(\cdot )\), denoted with the same index for simplicity, such that

    $$\begin{aligned} \lim _{n\rightarrow \infty }{{\mathbb {E}}}\int _0^T \left| \eta (s)-\eta ^{(n)}(s)\right| ^2 ds=0 \end{aligned}$$
    (5.20)

    thanks to (3.3) and (4.5), Remark 5.4, the assumptions on the coefficient \(a^{(n)}\) and the initial condition \(X_0^{(n)}\), the definition of \(\phi ^{(n)}\), and Sects. 6.2 and 6.3. Namely, we obtain that there is a constant \(C>0\) such that

    $$\begin{aligned} \left| \eta (s)-\eta ^{(n)}(s)\right| ^2\le C\sum _{i=1}^4 \eta _i^{(n)}(s), \end{aligned}$$

    where

    $$\begin{aligned} \eta _1^{(n)}(s)&= \left| a^2_s-\left( a^{(n)}_s\right) ^2\right| ^2,\\ \eta _2^{(n)}(s)&= \left| \frac{\phi ^{(n)}(s,X^{(n)}_s)}{X^{(n)}_s}-\frac{\phi (s,X_s)}{X_s}\right| ^2,\\ \eta _3^{(n)}(s)&= \left| \varepsilon _s-\varepsilon ^{(n)}_s\right| ^2 \end{aligned}$$

    and

    $$\begin{aligned} \eta _4^{(n)}(s)&= \Bigg |a_s\frac{\partial _xZ_s (A_s, X_0(A_s))}{Z_s(A_s, X_0(A_s))} (D_sX_0)(A_s)\\&\quad - a^{(n)}_s\frac{\partial _xZ^{(n)}_s (A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))}{Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))} (D_sX^{(n)}_0)(A^{(n)}_s)\Bigg |^2. \end{aligned}$$

    Using the construction of \(a^{(n)}\) and \(\varepsilon ^{(n)}\), it is clear that

    $$\begin{aligned} \lim _{n\rightarrow \infty }{{\mathbb {E}}}\int _0^T\left[ \eta _1^{(n)}(s) +\eta _3^{(n)}(s)\right] ds=0. \end{aligned}$$

    In order to deal with \(\eta _2^{(n)}\), we divide it into three parts as follows:

    $$\begin{aligned} \eta _2^{(n)}(s)&= \frac{1}{|X^{(n)}_sX_s|^2} \left[ X_s\ \phi ^{(n)}(s,X^{(n)}_s) - X^{(n)}_s\ \phi (s,X_s) \right] ^2\\&\le \frac{C}{|X^{(n)}_sX_s|^2}\Bigg [\left| X_s\right| ^2 \left( \phi ^{(n)}(s,X^{(n)}_s) - \phi (s,X_s)\right) ^2 +\ \phi (s,X_s)^2 \left( X_s-X^{(n)}_s\right) ^2\Bigg ]\\&\le \frac{C}{|X^{(n)}_sX_s|^2}\Bigg [\left| X_s\right| ^2\left\{ \left( \phi ^{(n)}(s,X^{(n)}_s) -\phi (s,X^{(n)}_s)\right) ^2 +\left( \phi (s,X^{(n)}_s)- \phi (s,X_s)\right) ^2\right\} \\&\qquad +\ \phi (s,X_s)^2 \left( X_s-X^{(n)}_s\right) ^2\Bigg ]. \end{aligned}$$

    Now, (3.3), (X3T), the construction of \(X^{(n)}_s\), Lemmas 4.1 and 4.3, and the arguments used in the study of \(B_{1,n}\) in (5.19), (6.11), (6.25) and (6.26) imply that

    $$\begin{aligned} \lim _{n\rightarrow \infty }{{\mathbb {E}}}\int _0^T\eta _2^{(n)}(s) ds=0. \end{aligned}$$

    The convergence of the fourth term is more involved. It is a consequence of the construction of \(a^{(n)}\) and \(X_0^{(n)}\), Remark 5.4, Equality (4.5), the approximation of Z by \(Z^{(n)}\) given in Sect. 6.3, and an argument similar to (6.14), involving the second derivative of \(X_0\), in order to study the difference between \((D_sX_0)(A_s)\) and \((D_sX^{(n)}_0)(A^{(n)}_s)\). The most delicate point is to establish

    $$\begin{aligned} \lim _{n\rightarrow \infty }{{\mathbb {E}}} \left[ \int _0^T \left| \partial _x \phi (s,X_s)- \partial _x\phi ^{(n)}(s,X^{(n)}_s)\right| ^2ds\right] =0. \end{aligned}$$

    This fact holds because of (4.5) and the mean value theorem, as can be seen by applying Sect. 6.2. Indeed,

    $$\begin{aligned} {{\mathbb {E}}} \left[ \int _0^T \left| \partial _x \phi (s,X_s)- \partial _x\phi ^{(n)}(s,X^{(n)}_s)\right| ^2ds\right] \le 2{{\mathbb {E}}}\int _0^T\left( \eta _{4,1}^{(n)}(s)+\eta _{4,2}^{(n)}(s)\right) ds. \end{aligned}$$

    Here

    $$\begin{aligned} {{\mathbb {E}}}\int _0^T \eta _{4,1}^{(n)}(s)ds&= {{\mathbb {E}}} \left[ \int _0^T \left| \partial _x \phi (s,X_s)- \partial _x\phi ^{(n)}(s,X_s)\right| ^2ds\right] ,\\ {{\mathbb {E}}}\int _0^T\eta _{4,2}^{(n)}(s)ds&= {{\mathbb {E}}} \left[ \int _0^T \left| \partial _x\phi ^{(n)}(s,X_s)- \partial _x\phi ^{(n)}(s,X^{(n)}_s)\right| ^2ds\right] . \end{aligned}$$

    The construction of \(\phi ^{(n)}\), together with (B4T), yields

    $$\begin{aligned} {{\mathbb {E}}}\int _0^T\eta _{4,2}^{(n)}(s)ds\le {{\mathbb {E}}} \left[ \int _0^T \left| \int _{X_s^{(n)}}^{X_s} \partial _x^2 \phi ^{(n)}(s,y) dy\right| ^2ds\right] \le C {{\mathbb {E}}} \left[ \int _0^T \left| X_s - X_s^{(n)} \right| ^2ds\right] . \end{aligned}$$

    Section 6.3 implies that this quantity converges to zero as n tends to \(+\infty \). In order to study the other term we observe

    $$\begin{aligned} {{\mathbb {E}}}\int _0^T \eta _{4,1}^{(n)}(s)ds={{\mathbb {E}}} \left[ \int _0^T \left| \int _0^{X_s} \left[ \partial _x^2 \phi (s,y)- \partial _x^2\phi ^{(n)}(s,y)\right] dy\right| ^2ds\right] , \end{aligned}$$
    (5.21)

    since \(\partial _x \phi ^{(n)}(s,0) =\partial _x \phi (s,0)\) by definition. Also by construction, \(\left| \partial _x^2 \phi (s,y)- \partial _x^2\phi ^{(n)}(s,y)\right| \) converges to zero almost surely in \((\omega ,s,y)\in \Omega \times [0,T]\times \mathbb {R}\), and it is bounded by a constant (see (B4T) and Sect. 6.2). Consequently, the dominated convergence theorem leads to

    $$\begin{aligned} \lim _{n\rightarrow \infty }{{\mathbb {E}}}\int _0^T\eta _{4,1}^{(n)}(s) ds=0, \end{aligned}$$

    which, combined with the convergence of the term involving \(\eta _{4,2}^{(n)}\), completes the analysis of the fourth term.

    Thus, the proof of the theorem is finished.\(\square \)

5.2 Main results

In this section \(X(\cdot , X_0)\) stands for the unique solution to Eq. (3.1), under Hypotheses (X3T), (A2T) and (B4T). Now, we introduce three types of stability for this solution.

Definition 5.5

Assume that \(X_0\) satisfies (X3T). We say that \(X(\cdot , 0)\equiv 0\) is stable in probability if, for every \(\rho >0\) and \(\delta >0\), there exists \(r>0\) such that

$$\begin{aligned} \sup _{t\ge 0}\mathbb {P}\left\{ \left| X(t,X_0)\right| >\rho \right\} <\delta , \end{aligned}$$

for any \(X_0\) satisfying

$$\begin{aligned} \sup _{(s,\omega )\in {{\mathbb {R}}}_{+}\times \Omega }\left[ \frac{|D_s X_0|}{|X_0|}+|X_0|\right] \le r. \end{aligned}$$
(5.22)

Remark 5.6

Note that if we only consider deterministic initial conditions in Eq. (3.1), then the last definition agrees with the usual stability in probability for Eq. (3.1); see Section 1.5 in Khasminskii [9]. The same happens if \(X_0\) is a random variable independent of the Brownian filtration that satisfies (X3T). In this case we can assume that \(X_0\) is \(\mathcal{F}_0\)-measurable, so that \(D_s X_0=0\) for all \(s\ge 0\), and (5.22) reduces to \(||X_0||_{\infty }\le r,\) the usual condition in the deterministic case (in addition to the requirement that its absolute value be bounded below by a positive constant).

Definition 5.7

Assume that \(X_0\) satisfies (X3T) and \(p>0.\) We say that \(X(\cdot , 0)\equiv 0\) is exponentially p-stable if there exist positive constants A, r and \(\alpha \) such that

$$\begin{aligned} \mathbb {E}\left( \left| X(t,X_0)\right| ^p\right) \le A\mathbb {E}(\left| X_0\right| ^p)\exp (-\alpha t),\quad t\ge 0, \end{aligned}$$

for any \(X_0\) satisfying

$$\begin{aligned} \sup _{(s,\omega )\in {{\mathbb {R}}}_{+}\times \Omega }\left[ \frac{|D_s X_0|}{|X_0|}\right] \le r. \end{aligned}$$
(5.23)

Remark 5.8

For instance, an initial condition satisfying this definition is \(X_0=\eta \exp \{\phi (F)\}\), with \(\eta \in \mathbb {R}-\{0\}\) and \(F=\int _0^\infty h(s) \delta W_s\), where \(\phi '\) is bounded and \(\Vert h\Vert _\infty \) is assumed to be small enough. Indeed, the chain rule gives \(D_s X_0= X_0\,\phi '(F)\, h(s)\), so \(|D_s X_0|/|X_0|\le \Vert \phi '\Vert _\infty \Vert h\Vert _\infty \le r\) for \(\Vert h\Vert _\infty \) small enough, and (5.23) holds.

Definition 5.9

Suppose that \(X_0\) satisfies (X3T). We say that \(X(\cdot , 0)\equiv 0\) is exponentially stable in probability if, for a given \(\xi >0\), there are constants \(A, r, \alpha >0\) such that

$$\begin{aligned} \mathbb {P}\left( |X(t,X_0)|> \xi \right) \le A \exp (-\alpha t),\quad \forall t\ge 0, \end{aligned}$$

for any \(X_0\) satisfying (5.23).

Remark 5.10

Note that the exponential p-stability implies

$$\begin{aligned} \lim _{t\rightarrow \infty } \mathbb {E}\left( \left| X(t,X_0)\right| ^p\right) =0 \end{aligned}$$

and the exponential stability in probability implies that

$$\begin{aligned} \lim _{t\rightarrow \infty } {{\mathbb {P}}}(|X(t,X_0)|>\xi )=0. \end{aligned}$$
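
The notion of exponential p-stability can be made concrete in the simplest setting of Eq. (3.1): constant coefficients and a deterministic initial condition, where the Skorohod integral reduces to the Itô one and the solution is an explicit geometric Brownian motion. The following Monte Carlo sketch is only illustrative (the constants a, b, p, x0 are arbitrary choices, not taken from the paper): it compares an empirical estimate of \(\mathbb {E}|X_t|^p\) with the closed-form value \(|x_0|^p e^{(p(b-a^2/2)+p^2a^2/2)t}\), which decays exponentially precisely when \(b-a^2/2+pa^2/2<0\).

```python
import numpy as np

# Illustrative Monte Carlo check of exponential p-stability for the
# constant-coefficient case dX_t = b X_t dt + a X_t dW_t with X_0 = x0
# deterministic (classical Ito setting). The exact solution is
#   X_t = x0 * exp((b - a^2/2) t + a W_t),
# hence E|X_t|^p = |x0|^p * exp((p(b - a^2/2) + p^2 a^2 / 2) t).

rng = np.random.default_rng(0)

a, b, p, x0 = 1.0, -0.8, 0.5, 1.0            # toy constants (assumptions)
rate = p * (b - a**2 / 2) + p**2 * a**2 / 2  # decay exponent, here -0.525

results = {}
for t in (1.0, 2.0, 4.0):
    W = rng.normal(0.0, np.sqrt(t), 200_000)       # samples of W_t
    X = x0 * np.exp((b - a**2 / 2) * t + a * W)    # exact solution samples
    results[t] = (np.mean(np.abs(X)**p),           # Monte Carlo moment
                  abs(x0)**p * np.exp(rate * t))   # closed-form moment
```

In the anticipating case treated below, conditions such as (5.27) play the role of the sign condition on this exponent.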

Theorem 5.11

Suppose that a, b and \(X_0\) satisfy (A2T), (B4T) and (X3T) for any \(T>0,\) respectively. Moreover, assume that \(X_0\) satisfies (5.22) and that

$$\begin{aligned} \sup _{t\ge 0} Y_t=\sup _{t\ge 0}\int _0^t \left[ {\bar{b}}_s-\frac{a_s^2}{2}+\varepsilon _s\right] ds\le k, \, \mathrm{for \ all}\ \omega \in \Omega , \end{aligned}$$
(5.24)

for a constant \(k>0\) and some positive adapted process \(\varepsilon \) such that

$$\begin{aligned} \frac{\nu a_t^2}{2}+\delta _t+r (1-\nu )|a_t|e^{2c_1} \le \varepsilon _t, \quad \mathrm{for \ all}\ t\ge 0, \end{aligned}$$
(5.25)

for some \(\nu \in (0,1]\), \(\delta _t\) defined in (B4T), \(c_1\) in (B1T) and r in (5.22). Then, the solution to equation (3.1) is stable in probability.

Proof

Lemmas 4.1 and 4.3, together with (5.3), (5.4) and (5.25), yield

$$\begin{aligned} {{\mathbb {E}}}F(X_t, Y_t)\le {{\mathbb {E}}}(|X_0|^\nu ). \end{aligned}$$
(5.26)

Indeed, note that

$$\begin{aligned} \eta (s)\le & {} \nu \frac{a_s^2}{2}+\left| \frac{\phi (s,X_s)}{X_s}\right| -\varepsilon _s+(1-\nu ) |a_s|\cdot \left| \frac{\partial _xZ_s (A_s, X_0(A_s))}{Z_s(A_s, X_0(A_s))}\right| \cdot |(D_sX_0)(A_s)|\\\le & {} \nu \frac{a_s^2}{2}+\delta _s-\varepsilon _s+(1-\nu ) |a_s| e^{2c_1}\cdot \left| \frac{(D_sX_0)(A_s)}{X_0(A_s)}\right| \end{aligned}$$

and consequently, (5.25) and (5.22) give that (5.26) holds.

Therefore, for \(\rho >0\),

$$\begin{aligned} {{\mathbb {E}}}F(X_t, Y_t)\ge \int _{\{|X_t|>\rho \}}F(X_t, Y_t)d{{\mathbb {P}}} =\int _{\{|X_t|>\rho \}} |X_t|^{\nu } e^{-\nu Y_t}d{{\mathbb {P}}} \ge \rho ^{\nu }\, e^{-k\nu } \,{{\mathbb {P}}}(|X_t|>\rho ), \end{aligned}$$

where we have used (5.24) and the definition of the process Y. Thus, (5.26) gives

$$\begin{aligned} {{\mathbb {P}}}(|X_t|>\rho )\le \frac{{{\mathbb {E}}}(|X_0|^\nu )}{\rho ^{\nu }}\,e^{k\nu }. \end{aligned}$$

Hence, by (5.22), taking r small enough makes \({{\mathbb {E}}}(|X_0|^{\nu })\) as small as needed, and the result holds. \(\square \)
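
The last step of the proof is just Markov's inequality applied to the fractional moment \({{\mathbb {E}}}(|X_t|^{\nu })\). As a quick, self-contained illustration of why fractional moments with \(\nu \in (0,1]\) are convenient, note that \(\mathbb {P}(|X|>\rho )\le \mathbb {E}(|X|^{\nu })/\rho ^{\nu }\) holds for any distribution, including heavy-tailed ones; the lognormal data below are an arbitrary stand-in, not the solution of (3.1).

```python
import numpy as np

# Markov's inequality with a fractional moment nu in (0, 1]:
#   P(|X| > rho) <= E(|X|^nu) / rho^nu.
# The inequality holds exactly for the empirical measure of any sample.

rng = np.random.default_rng(1)
X = np.exp(2.0 * rng.normal(size=100_000))   # heavy-tailed stand-in data

nu, rho = 0.5, 10.0
lhs = np.mean(np.abs(X) > rho)               # empirical tail probability
rhs = np.mean(np.abs(X)**nu) / rho**nu       # fractional-moment bound
```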

Remark 5.12

Note also that if \(\varepsilon _s=\delta _s+\epsilon \) for a certain \(\epsilon >0\) and a is bounded on \((0,+\infty )\times \Omega \), then we can always find positive constants \(\nu \) and r small enough such that (5.25) holds. So, a sufficient condition for the theorem to apply is that (5.24) is satisfied with \(\varepsilon _s=\delta _s+\epsilon \).

We also have the following stability criterion.

Theorem 5.13

Assume (A2T), (B4T) and (X3T) are satisfied for any \(T>0\) and (5.23) and (5.24) hold. Assume also there exists a strictly positive constant \(k_0\) such that

$$\begin{aligned} \frac{\nu a_t^2}{2}+\delta _t+r (1-\nu )|a_t|e^{2c_1} -\varepsilon _t\le -k_0, \end{aligned}$$
(5.27)

for all \(t\ge 0\) and some \(\nu \in (0,1].\) Then, the solution to Eq. (3.1) is exponentially \(\nu \)-stable.

Remark 5.14

Note that in comparison with Theorem 5.11, now the condition is (5.27) instead of (5.25).

Proof of Theorem 5.13

In order to apply Theorem 4.1 in Hartman [8] (page 26) we work with the approximation \(X^{(n)}_\cdot \), because with the original \(X_\cdot \) the derivative of \({{\mathbb {E}}}F(X_\cdot , Y_\cdot )\) need not be continuous. Using the same arguments given in the proof of Theorem 5.3, we have that the sequence \(\{F(X^{(n)}_t, Y^{(n)}_t), n\ge 1\}\) converges to \(F(X_t, Y_t)\) in \(L^1(\Omega \times [0,T])\) (see the study of (5.19)) and satisfies

$$\begin{aligned} {{\mathbb {E}}}[F(X^{(n)}_t, Y^{(n)}_t)]={{\mathbb {E}}}(|X^{(n)}_0|^{\nu })+\nu \int _0^t {{\mathbb {E}}}[F(X^{(n)}_s, Y^{(n)}_s)\eta ^{(n)}(s)]ds. \end{aligned}$$

Now, the goal is to apply Theorem 4.1 in Hartman [8]. Borrowing its notation, we define a function \(U(t,u)=-k_1\nu u\), on \((0,T)\times {{\mathbb {R}}}\). Then, the solution of \(u^{'}(t)=U(t,u)\) in [0, T] is \(u(t)=u(0)e^{-k_1\nu t}.\) Moreover, if we define \(v(t)={{\mathbb {E}}}[F(X^{(n)}_t, Y^{(n)}_t)]\), since \(\eta ^{(n)}(\cdot )\) is continuous thanks to the definitions of all the coefficients and the constructions of \(\phi ^{(n)}\), \(X^{(n)}\) and \(Z^{(n)}\), we have

$$\begin{aligned} v^{'}(t)=\nu {{\mathbb {E}}}\left[ F(X^{(n)}_t, Y^{(n)}_t)\eta ^{(n)}(t)\right] ,\qquad \mathrm{for\ all\ } t\in [0,T]. \end{aligned}$$

Furthermore,

$$\begin{aligned} {{\mathbb {E}}}[F(X^{(n)}_t, Y^{(n)}_t)\eta ^{(n)}(t)]= {{\mathbb {E}}}\left[ F(X^{(n)}_t, Y^{(n)}_t)\left( \eta ^{(n)}(t)-\eta (t)\right) \right] + {{\mathbb {E}}}\left[ F(X^{(n)}_t, Y^{(n)}_t)\eta (t)\right] . \end{aligned}$$

So, using Proposition 2.1.2 in [5], (5.20), (5.27) and proceeding as in the proof of Theorem 5.11, we have that there exists \(0<k_1<k_0\) such that, for any n large enough,

$$\begin{aligned} v^{'}(t)=\nu {{\mathbb {E}}}\left[ F(X^{(n)}_t, Y^{(n)}_t)\eta ^{(n)}(t)\right] \le -k_1\nu v(t). \end{aligned}$$

Then, defining \(u(0)=v(0)={{\mathbb {E}}}(|X_0^{(n)}|^{\nu })\) and applying Theorem 4.1 in [8] we have

$$\begin{aligned} {{\mathbb {E}}}(F(X^{(n)}_t, Y^{(n)}_t))\le {{\mathbb {E}}}(|X^{(n)}_0|^{\nu })e^{-k_1\nu t} \end{aligned}$$
(5.28)

for any \(t\in [0,T].\) Letting \(n\rightarrow \infty \) in (5.28) we have

$$\begin{aligned} {{\mathbb {E}}}(F(X_t, Y_t))\le {{\mathbb {E}}}(|X_0|^{\nu })e^{-k_1\nu t}. \end{aligned}$$

Finally, (5.24) allows us to get

$$\begin{aligned} e^{-k\nu } {{\mathbb {E}}}(|X_t|^{\nu })\le {{\mathbb {E}}}(|X_t|^{\nu } e^{-\nu Y_t})\le {{\mathbb {E}}}(|X_0|^{\nu })e^{-k_1\nu t}, \end{aligned}$$

which implies the desired result. \(\square \)
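
The role of Theorem 4.1 in Hartman [8] above is a standard differential-inequality comparison: if \(v'(t)\le -k_1\nu \,v(t)\) and \(u'=-k_1\nu u\) with \(u(0)=v(0)\), then \(v(t)\le u(t)=v(0)e^{-k_1\nu t}\). A minimal numerical sketch of this comparison principle (the perturbation below is an arbitrary toy choice, unrelated to the coefficients of (3.1)):

```python
import numpy as np

# Comparison principle behind the proof: an Euler solution of
#   v'(t) = -(k1*nu + perturbation(t)) * v(t),  perturbation >= 0,
# stays below the exact solution u(t) = v(0)*exp(-k1*nu*t) of the
# comparison ODE u' = -k1*nu*u with u(0) = v(0).

k1, nu, T, n = 0.5, 0.5, 5.0, 5000
dt = T / n
t = np.linspace(0.0, T, n + 1)

v = np.empty(n + 1)
v[0] = 1.0
for j in range(n):
    pert = 0.1 * abs(np.sin(t[j]))          # arbitrary nonnegative extra decay
    v[j + 1] = v[j] * (1.0 - dt * (k1 * nu + pert))

u = v[0] * np.exp(-k1 * nu * t)             # exact comparison solution
```

Since \(1-x\le e^{-x}\), each Euler step contracts at least as fast as the comparison solution, so the discrete trajectory is dominated by \(u\) at every grid time.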

An immediate consequence of the previous theorem is the following result:

Corollary 5.15

Under the hypotheses of Theorem 5.13, the solution to equation (3.1) is exponentially stable in probability.

Proof

Observe that for any \(\rho >0,\)

$$\begin{aligned} {{\mathbb {P}}}(|X_t|>\rho )\le \frac{{{\mathbb {E}}}(|X_t|^{\nu })}{\rho ^{\nu }}\le \left( \frac{e^k}{\rho }\right) ^{\nu } {{\mathbb {E}}}(|X_0|^{\nu })\ e^{-k_1\nu t}. \square \end{aligned}$$

Moreover, we also have the following result:

Theorem 5.16

Assume that (A2T), (B4T) and (X3T) hold for any \(T>0.\) Also assume that (5.24) is satisfied and that for some \(\nu \in (0,1]\) there exists \(\eta <0\) such that

$$\begin{aligned} \frac{\nu a_t^2}{2}+\delta _t-\varepsilon _t<\eta <0, \quad \forall t\ge 0. \end{aligned}$$

Then, the solution to Eq. (3.1) is exponentially \(\nu -\)stable and exponentially stable in probability, for any initial condition \(X_0\in {{\mathbb {D}}}^{1,2}\) such that \(\sup _{(s,\omega )\in {{\mathbb {R}}}_{+}\times \Omega }\{\frac{|D_s X_0|}{|X_0|}\}\) is small enough.

Proof

The result is an immediate consequence of (5.3). \(\square \)

6 Appendix

6.1 Proof of Theorem 3.1

The purpose of this section is to provide a proof of Theorem 3.1.

Here, to simplify the notation, we assume that \(c_1\le L\) without loss of generality. As in Nualart [14] (Proof of Theorem 3.3.6), we apply Gronwall’s lemma and (B1T) to equation (3.2) and then, we use (3.3) to obtain

$$\begin{aligned} |X_t| \le L_{0,t} e^{L} \left[ |X_0(A_t)| + L\int _0^t L_{0,s}^{-1}\ ds \right] . \end{aligned}$$

So, from (2.6), we have

$$\begin{aligned} \mathbb {E}\left( |X_t|\right)\le & {} e^L\mathbb {E}\left[ |X_0(A_t)| L_{0,t}+L\ L_{0,t} \int _0^t \left( L_{0,s}^{-1} \right) ds\right] \\= & {} e^L\mathbb {E}[|X_0|]+L e^L\mathbb {E}\left[ L_{0,t}\int _0^t \left( L_{0,s}^{-1} \right) ds\right] <\infty \end{aligned}$$

as a consequence of the fact that \(\sup _{0\le t\le T} {{\mathbb {E}}}\left[ L_{0,t}^r\right] <+\infty \) and \(\sup _{0\le t\le T} {{\mathbb {E}}}\left[ L_{0,t}^{-r}\right] <+\infty ,\) for any \(r\ge 1\), which follows from (A1T). Moreover, we have

$$\begin{aligned} \sup _{0\le t\le T} {{\mathbb {E}}}|X_t|<\infty . \end{aligned}$$
(6.1)

The proof that X, introduced in (3.3), is a solution to Eq. (3.1) is similar to that of Theorem 3.3.6 in Nualart [14]. Thus, using (2.6), (3.3), (6.1), Buckdahn [5] (Lemma 2.2.13), the integration by parts formula and Girsanov's theorem, we obtain

$$\begin{aligned} {{\mathbb {E}}}\left[ \int _0^t a_s \, X_s\, D_s G\, ds\right] ={{\mathbb {E}}}\left[ G\left( X_t-X_0-\int _0^t b(s,X_s)ds\right) \right] , \end{aligned}$$
(6.2)

for any \(G\in \mathcal {S}\). Therefore the duality relation (2.2) implies that

$$\begin{aligned} \int _0^ta_s X_s\delta W_s=X_t-X_0-\int _0^t b(s,X_s)ds \end{aligned}$$

because the right-hand side is an integrable process due to (6.1) and Hypothesis (B1T). Consequently, (3.1) holds.

Now, we prove the uniqueness of the solution to equation (3.1). To do so, we make use of the fact that there is a sequence \(\{a^n_s: s\in [0,T]\}\) of the form

$$\begin{aligned} a^n_s=\sum _{i=0}^{n-1} F_{i,n} \, {1\!\!1}_{(t_{i},t_{i+1}]}(s), \end{aligned}$$

where \(F_{i,n}\in \mathcal {S}\), \(i=0,\ldots , n-1\), and \(0=t_0<t_1<\ldots< t_{n-1}<t_n=T\), such that \(a^n\) goes to a in \(L^2([0,T];\mathbb {D}^{1,2}_T)\), \(\Vert a^n\Vert _{L^{\infty }(\Omega \times [0,T])}\le \Vert a\Vert _{L^{\infty }(\Omega \times [0,T])}\), \(\Vert Da^n\Vert _{L^{\infty }(\Omega \times [0,T]^2)}\le \Vert Da\Vert _{L^{\infty }(\Omega \times [0,T]^2)}+1\) and

$$\begin{aligned} G(A_t^n)= G(A_s^n) -\int _s^t a_u^n D_u (G(A^n_u)) du, \end{aligned}$$
(6.3)

where \(G \in \mathcal {S}\) and \(A^n\) is the solution to equation (2.4) with a replaced by \(a^n\) (see Lemmas 3.2.3 and 3.2.4 in Buckdahn [5]).

Let Y be a solution to (3.1) such that Y belongs to \(L^1(\Omega \times [0,T])\) and \({1\!\!1}_{[0,t]}\, a\, Y \in \textrm{Dom}\ \delta \), for all \(t\in [0,T]\). Multiplying both sides of (3.1) by \(G(A_t^n)\) and taking expectations, we have

$$\begin{aligned} {{\mathbb {E}}}\left[ Y_t G(A^n_t)\right] ={{\mathbb {E}}}\left[ Y_0 G(A^n_t)\right] +{{\mathbb {E}}}\left[ \int _0^t b(s,Y_s) G(A^n_t) ds\right] +{{\mathbb {E}}}\left[ \int _0^t a_s Y_s D_s (G(A^n_t)) ds\right] . \end{aligned}$$

Integrating by parts and using (6.3), we get

$$\begin{aligned} {{\mathbb {E}}}\left[ Y_t G(A^n_t)\right]= & {} {{\mathbb {E}}}\left[ Y_0 G\right] - {{\mathbb {E}}}\left[ Y_0 \int _0^t a^n_u D_u (G(A^n_u)) du\right] + {{\mathbb {E}}}\left[ \int _0^t b(s,Y_s) G(A^n_s) ds\right] \nonumber \\{} & {} -{{\mathbb {E}}}\left[ \int _0^t b(s,Y_s) \int _s^t a_u^n D_u( G(A^n_u)) du ds\right] +{{\mathbb {E}}}\left[ \int _0^t a_s Y_s D_s (G(A^n_s)) ds\right] \nonumber \\{} & {} - {{\mathbb {E}}}\left[ \int _0^t a_s Y_s \int _s^t D_s( a_u^n D_u (G(A^n_u)) )du ds\right] . \end{aligned}$$
(6.4)

Consequently, by Fubini’s theorem, and proceeding as in Buckdahn [5] (Proof of Theorem 3.2.1), we obtain

$$\begin{aligned} {{\mathbb {E}}}\left[ Y_t G(A_t)\right]= & {} {{\mathbb {E}}}\left[ Y_0 G\right] +\lim _{n\rightarrow \infty } {{\mathbb {E}}}\left[ \int _0^t b(s,Y_s) G(A^n_s) ds\right] \nonumber \\{} & {} +\lim _{n\rightarrow \infty }{{\mathbb {E}}}\left[ \int _0^t Y_s\left( a_s-a^n_s\right) D_s(G(A^n_s))ds\right] \nonumber \\= & {} {{\mathbb {E}}}\left[ Y_0 G\right] + {{\mathbb {E}}}\left[ \int _0^t b(s,Y_s) G(A_s) ds\right] . \end{aligned}$$

Hence, Girsanov's theorem (see (2.6)) implies

$$\begin{aligned} {{\mathbb {E}}}\left[ Y_t (T_t) L_{0,t}^{-1}(T_t) G \right] ={{\mathbb {E}}}\left[ Y_0 G\right] + {{\mathbb {E}}}\left[ G\int _0^t b(s,Y_s(T_s), T_s) L_{0,s}^{-1}(T_s) ds\right] , \end{aligned}$$

for any \(G\in \mathcal {S}\). So,

$$\begin{aligned} Y_t (T_t) L_{0,t}^{-1}(T_t)=Y_0+ \int _0^t b(s,Y_s(T_s), T_s) L_{0,s}^{-1}(T_s) ds. \end{aligned}$$

Thus, the uniqueness of the solution to Eq. (3.2) allows us to establish

$$\begin{aligned} Y_t (T_t) L_{0,t}^{-1}(T_t)=Z_t(Y_0), \ \mathrm{w.p.1.} \end{aligned}$$

That is, Y is equal to the right-hand side of (3.3). So, the proof of Theorem 3.1 is complete.

6.2 Construction of \(\phi ^{(n)}\)

Let \(n\in \mathbb {N}\). Define the partition

$$\begin{aligned} t_i=\frac{i T}{n}, \qquad i=0,\dots ,n \end{aligned}$$

and

$$\begin{aligned} x_j=\frac{j}{n},\qquad j=-n^2,\dots , -1, 0, 1,\dots , n^2. \end{aligned}$$

Thanks to (B4T), \(\partial _x^2 \phi ((t_i-\frac{1}{n^2})\vee 0,x_j)\in L^2(\Omega )\) for any \((i,j)\) and we can find \(F_{i,j}^{(n,m)} \in \mathcal {S}\) such that \(F_{i,j}^{(n,m)} \longrightarrow \partial _x^2 \phi ((t_i-\frac{1}{n^2})\vee 0,x_j)\), as \(m\rightarrow + \infty \), in \(L^2(\Omega )\) and a.s. So, let

$$\begin{aligned} Q=\sup _{(\omega ,t,x)\in \Omega \times [0,T]\times \mathbb {R}} \left| \partial _x^2 \phi (t,x)\right| , \end{aligned}$$

which is finite due to (B4T). Let \(f\in \mathcal {C}_c^{\infty }(\mathbb {R})\) taking values in [0, 1] be such that

$$\begin{aligned} f(x)=\left\{ \begin{array}{ll} 1, &{} |x|\le 1,\\ 0, &{} |x|\ge 2,\end{array}\right. \end{aligned}$$

and \(f_{Q}(x)=f(\frac{x}{2Q})\). Then, if we define

$$\begin{aligned} {\tilde{F}}_{i,j}^{(n,m)}=f_{Q}\left( F_{i,j}^{(n,m)}\right) F_{i,j}^{(n,m)}, \end{aligned}$$

we have that \({\tilde{F}}_{i,j}^{(n,m)} \in \mathcal {S}\), \(\left| \tilde{F}_{i,j}^{(n,m)}\right| \le 4Q\), \({\tilde{F}}_{i,j}^{(n,m)}= F_{i,j}^{(n,m)}\) if \(\left| F_{i,j}^{(n,m)}\right| \le 2Q,\) and, moreover, \({\tilde{F}}_{i,j}^{(n,m)} \longrightarrow \partial _x^2 \phi ((t_i-\frac{1}{n^2})\vee 0,x_j)\) in \(L^2(\Omega )\) and a.s., as m goes to \(+\infty \). So, now we can take \(H_{i,j}^{(n)}=\tilde{F}_{i,j}^{(n,n_0)}\) with \(n_0\in \mathbb {N}\) such that

$$\begin{aligned} {{\mathbb {E}}}\left[ \left| \partial _x^2 \phi ((t_i-\frac{1}{n^2}) \vee 0,x_j)-H_{i,j}^{(n)}\right| ^2\right] \le \frac{1}{n^2}. \end{aligned}$$
(6.5)
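
The smooth truncation used to produce \({\tilde{F}}_{i,j}^{(n,m)}\) can be sketched directly. The following is only a schematic implementation (the bump construction and the sample values are our own choices): it builds a \(\mathcal {C}^{\infty }\) cutoff f, rescales it to \(f_Q\), and checks the two properties claimed above, namely \(|{\tilde{F}}|\le 4Q\) and \({\tilde{F}}=F\) whenever \(|F|\le 2Q\).

```python
import numpy as np

# Schematic version of the cutoff in Sect. 6.2: f is C^infinity, equals 1 on
# [-1, 1], vanishes outside [-2, 2]; f_Q(x) = f(x / (2Q)); the truncation
# F~ = f_Q(F) * F then satisfies |F~| <= 4Q and F~ = F whenever |F| <= 2Q.

def smooth_step(s):
    """C^infinity step: 0 for s <= 0, 1 for s >= 1 (standard bump trick)."""
    s = np.clip(np.asarray(s, dtype=float), 0.0, 1.0)
    g0 = np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-12)), 0.0)
    g1 = np.where(s < 1, np.exp(-1.0 / np.maximum(1.0 - s, 1e-12)), 0.0)
    return g0 / (g0 + g1)

def f(x):
    # 1 on |x| <= 1, 0 on |x| >= 2, smooth in between
    return 1.0 - smooth_step(np.abs(x) - 1.0)

def truncate(F, Q):
    return f(F / (2.0 * Q)) * F
```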

Using the function \(k_i^{(n)}\) introduced in the proof of Theorem 5.3 we define the following bounded random field

$$\begin{aligned} {\bar{\psi }}^{(n)}(t,x)=\sum _{i=0}^{n-1} \sum _{j=-n^2}^{n^2-1} H_{i,j}^{(n)} k_i^{(n)}(t)\ \ {1\!\!1}_{\ (x_j,x_{j+1}]}(x),\qquad n\ge 1, \end{aligned}$$

    where we take into account that the indicator depends on n because the points \(x_j\) do. The function \((t,x) \mapsto {\bar{\psi }}^{(n)}(t,x)\) is continuous in time with probability one and satisfies \(|\bar{\psi }^{(n)}(t,x)|\le 16 Q\). Our next step is to show that, for any compact \(K\subset \mathbb {R},\)

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbb {E}\int _K \int _0^T \left[ \partial _x^2 \phi (t,x)-{\bar{\psi }}^{(n)} (t,x)\right] ^2 dt dx=0. \end{aligned}$$
(6.6)

To do so, we observe

    $$\begin{aligned} \mathbb {E}\int _K\int _0^T \left[ \partial _x^2 \phi (t,x)-{\bar{\psi }}^{(n)} (t,x)\right] ^2 dt\, dx\le \sum _{i=1}^4 C I^{(n)}_i, \end{aligned}$$
(6.7)

with

    $$\begin{aligned} I^{(n)}_1= & {} \mathbb {E}\int _K \int _0^T \left[ \partial _x^2 \phi (t,x)- \sum _{i=0}^{n-1} \sum _{j=-n^2}^{n^2-1} \partial _x^2 \phi (t_i,x_j)\ {1\!\!1}_{\,(t_i,t_{i+1}]}(t) \ {1\!\!1}_{\,(x_j,x_{j+1}]}(x) \right] ^2 dtdx,\\ I^{(n)}_2= & {} \mathbb {E}\int _K \int _0^T \Bigg [ \sum _{i=0}^{n-1} \sum _{j=-n^2}^{n^2-1}\left( \partial _x^2 \phi (t_i,x_j)- \partial _x^2 \phi ((t_i-\frac{1}{n^2})\vee 0,x_j)\right) \\{} & {} \qquad \qquad \times \ {1\!\!1}_{\,(t_i,t_{i+1}]}(t) \ {1\!\!1}_{\,(x_j,x_{j+1}]}(x) \Bigg ]^2 dtdx,\\ I^{(n)}_3= & {} \mathbb {E}\int _K\int _0^T \left[ \sum _{i=0}^{n-1} \sum _{j=-n^2}^{n^2-1}\left( \partial _x^2 \phi ((t_i-\frac{1}{n^2})\vee 0,x_j)-H_{i,j}^{(n)}\right) \ {1\!\!1}_{\,(t_i,t_{i+1}]}(t) \ {1\!\!1}_{\,(x_j,x_{j+1}]}(x) \right] ^2 dtdx,\\ I^{(n)}_4= & {} \mathbb {E}\int _K\int _0^T \left[ \sum _{i=0}^{n-1} \sum _{j=-n^2}^{n^2-1} H_{i,j}^{(n)} \ {1\!\!1}_{\,(x_j,x_{j+1}]}(x) \left( \ {1\!\!1}_{\,(t_i,t_{i+1}]}(t) -k_i^{(n)}(t)\right) \right] ^2 dtdx. \end{aligned}$$

    We first study \(I_1^{(n)}\) and \(I_2^{(n)}\). Let \(M>0\) be such that \(K\subseteq [-M,M].\) Then, for \(n>M\),

$$\begin{aligned} I^{(n)}_1= & {} \mathbb {E}\int _K\int _0^T \sum _{i=0}^{n-1} \sum _{j=-n^2}^{n^2-1}\left| \partial _x^2 \phi (t,x)-\partial _x^2 \phi (t_i,x_j)\right| ^2\ {1\!\!1}_{\,(t_i,t_{i+1}]}(t) \ {1\!\!1}_{\,(x_j,x_{j+1}]}(x)dtdx\\\le & {} \mathbb {E}\int _K \int _0^T \sum _{i=0}^{n-1} \sum _{j=-n^2}^{n^2-1} \sup _{y\in K}\left[ \left| \partial _x^2 \phi (t,y)-\partial _x^2 \phi (t_i,x_j)\right| ^2\ {1\!\!1}_{\,(t_i,t_{i+1}]}(t) \ {1\!\!1}_{\,(x_j,x_{j+1}]}(y)\right] dtdx. \end{aligned}$$

    Now, due to the uniform continuity of \(\partial _x^2 \phi \) on \([0,T]\times K\), we have

$$\begin{aligned} \lim _{n \rightarrow \infty }\left[ I_1^{(n)}+ I_2^{(n)}\right] =0. \end{aligned}$$
(6.8)

    Secondly, (6.5) gives

$$\begin{aligned} I_3^{(n)}\le \frac{2MT}{n^2}, \end{aligned}$$

obtaining

$$\begin{aligned} \lim _{n \rightarrow +\infty } I_3^{(n)}=0. \end{aligned}$$
(6.9)

We now study the last term. For n large enough, we have

$$\begin{aligned} I^{(n)}_4\le & {} C Q^2\mathbb {E}\int _0^T \int _{-M}^M \left[ \sum _{i=0}^{n-1} \sum _{j=-n^2}^{n^2-1} {1\!\!1}_{\,(x_j,x_{j+1}]}(x) \left| \ {1\!\!1}_{\,(t_i,t_{i+1}]}(t) -k_i^{(n)}(t)\right| \right] ^2 dxdt\\\le & {} C Q^2 \int _0^T \int _{-M}^M \left[ \left( \sum _{j=-nM-1}^{nM}{1\!\!1}_{\,(x_j,x_{j+1}]}(x)\right) \left( \sum _{i=0}^{n-1} \left| \ {1\!\!1}_{\,(t_i,t_{i+1}]}(t)-k_i(t)\right| \right) \right] ^2dxdt. \end{aligned}$$

It is not difficult to see that

$$\begin{aligned} \sum _{j=-nM-1}^{nM}{1\!\!1}_{\,(x_j,x_{j+1}]}(x)\le 1 \end{aligned}$$

and

$$\begin{aligned} \sum _{i=0}^{n-1} \left| \ {1\!\!1}_{\,(t_i,t_{i+1}]}(t)-k_i(t)\right| \le \sum _{i=0}^{n-1}\ {1\!\!1}_{\,(t_i-\frac{1}{n^2},t_i]}(t)+\sum _{i=0}^{n-1}\ {1\!\!1}_{\,(t_{i+1}, t_{i+1}+\frac{1}{n^2}]}(t). \end{aligned}$$

So, using these facts we get

$$\begin{aligned} I_{4}^{(n)}\le & {} C Q^2 \int _0^T \int _{-M}^M \left[ \sum _{i=0}^{n-1} |\ {1\!\!1}_{\,(t_i,t_{i+1}]}(t)-k_i(t)|\right] ^2dxdt\\\le & {} C Q^2 \int _0^T \int _{-M}^M \left[ \sum _{i=0}^{n-1}\ {1\!\!1}_{\,(t_i-\frac{1}{n^2},t_i]}(t)+ \sum _{i=0}^{n-1}\ {1\!\!1}_{\,(t_{i+1}, t_{i+1}+\frac{1}{n^2}]}(t)\right] ^2dxdt\\\le & {} CQ^2 \int _0^T \int _{-M}^M \sum _{i=0}^{n-1}\ {1\!\!1}_{\,(t_i-\frac{1}{n^2},t_i]}(t)\ dxdt +C Q^2 \int _0^T \int _{-M}^M \sum _{i=0}^{n-1}\ {1\!\!1}_{\,(t_{i+1}, t_{i+1}+\frac{1}{n^2}]}(t)\ dxdt\\\le & {} \frac{C MTQ^2}{n},\\ \end{aligned}$$

and, therefore

$$\begin{aligned} \lim _{n \rightarrow \infty } I_4^{(n)} =0. \end{aligned}$$
(6.10)

Now, putting together (6.8), (6.9) and (6.10) in (6.7), we get that (6.6) holds.

Finally, let \(g\in \mathcal {C}_c^{\infty } (\mathbb {R}) \) such that \(|g(x)|\le |x|\) and

$$\begin{aligned} g(x)=\left\{ \begin{array}{ll} x, &{} |x|\le 4 \Vert \delta \Vert _\infty ,\\ 0, &{} |x|\ge 8\Vert \delta \Vert _\infty ,\end{array}\right. \end{aligned}$$

where \(\Vert \delta \Vert _\infty =\sup _{(\omega ,t)\in \Omega \times [0,T]}|\delta _t(\omega )|\). With (6.6) in mind, we define, for any \((t,x)\in [0,T]\times \mathbb {R}\),

$$\begin{aligned} \psi ^{(n)} (t,x)=\partial _x \phi (t,0)+g \left( \int _0^x{\bar{\psi }}^{(n)} (t,y)dy\right) , \end{aligned}$$

and

$$\begin{aligned} \phi ^{(n)}(t,x)=\int _0^x \psi ^{(n)} (t,y) dy. \end{aligned}$$

    Recall that \(|{\bar{\psi }}^{(n)}(t,x)|\le 16 Q\), and observe that (6.6) implies

$$\begin{aligned} \partial _x^2 \phi (t,x)=\lim _{n\rightarrow +\infty } {\bar{\psi }}^{(n)} (t,x), \quad \mathrm{for\ almost \ all}\ (\omega , x, t)\in \Omega \times \mathbb {R} \times [0,T], \end{aligned}$$

    taking a subsequence if necessary. Furthermore, we have that, for any \(x\in \mathbb {R}\),

    $$\begin{aligned} \lim _{n\rightarrow +\infty } {{\mathbb {E}}}\int _0^T \left[ \partial _x \phi (t,x) -\psi ^{(n)}(t,x)\right] ^2 dt =0, \end{aligned}$$

as a consequence of the dominated convergence theorem since \(|\partial _x^2 \phi (t,y) -{\bar{\psi }}^{(n)}(t,y)|\) is bounded. Indeed, \(\partial _x \phi ^{(n)}(t,x)=\psi ^{(n)}(t,x)\), the function \(\psi ^{(n)}(t,x)\) is bounded (\(|\psi ^{(n)}(t,x)|\le 9 \Vert \delta \Vert _\infty \)) and continuous in x, and

$$\begin{aligned} \psi ^{(n)}(t,x) \longrightarrow \partial _x \phi (t,0)+g\left( \int _0^x \partial _x^2 \phi (t,y)dy\right) =\partial _x \phi (t,x),\qquad \mathrm{a.s.} \end{aligned}$$

    Similarly, using that \(\partial _x \phi ^{(n)}=\psi ^{(n)}\), we obtain, for any \(x\in \mathbb {R}\),

$$\begin{aligned} {{\mathbb {E}}} \int _0^T \left[ \phi (t,x)-\phi ^{(n)}(t,x)\right] ^2 dt= & {} {{\mathbb {E}}} \int _0^T \left[ \int _0^x \left( \partial _x \phi (t,y) -\psi ^{(n)}(t,y)\right) dy\right] ^2 dt\\\le & {} |x| \, {{\mathbb {E}}} \int _0^T \int _0^x \left( \partial _x \phi (t,y)-\psi ^{(n)}(t,y)\right) ^2dy dt. \end{aligned}$$

Hence, we can find \(M>0\) such that \(K\subseteq [-M,M]\) and

$$\begin{aligned}{} & {} \sup _{x\in K} {{\mathbb {E}}} \int _0^T \left[ \phi (t,x)-\phi ^{(n)}(t,x)\right] ^2 dt\\{} & {} \qquad \le M {{\mathbb {E}}} \int _0^T \int _{-M}^M \left[ \partial _x \phi (t,y)-\psi ^{(n)}(t,y)\right] ^2dy dt \longrightarrow 0, \end{aligned}$$

as n goes to \(\infty \).
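
The grid approximation driving (6.6) can be sanity-checked numerically. Since the random field itself is not directly simulable, the sketch below uses an arbitrary smooth deterministic function as a stand-in for \(\partial _x^2\phi \), samples it on the grid \(t_i=iT/n\), \(x_j=j/n\), and verifies that the squared \(L^2\) error on \([0,T]\times K\) shrinks as n grows (all numerical choices are our own):

```python
import numpy as np

# Stand-in check of (6.6): replace a smooth function d(t, x) by its value
# at the nearest grid point below, on the grid t_i = i*T/n, x_j = j/n,
# and measure the squared L^2 error on [0, T] x [-M, M].

T, M = 1.0, 2.0                                # K = [-M, M] (toy choices)
d = lambda t, x: np.sin(x) * np.exp(-t)        # arbitrary smooth stand-in

def l2_error_sq(n, pts=400):
    t = np.linspace(0.0, T, pts, endpoint=False)
    x = np.linspace(-M, M, pts, endpoint=False)
    tt, xx = np.meshgrid(t, x, indexing="ij")
    ti = np.floor(tt * n / T) * T / n          # grid point t_i <= t
    xj = np.floor(xx * n) / n                  # grid point x_j <= x
    err = (d(tt, xx) - d(ti, xj))**2
    return err.mean() * T * 2 * M              # Riemann sum for the integral

errors = [l2_error_sq(n) for n in (4, 8, 16, 32)]
```

The squared error behaves like \(O(n^{-2})\) for a smooth integrand, consistent with the convergence of the \(I_1^{(n)}\) term above.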

6.3 Convergence of \(Z^{(n)}\) to Z

In this subsection of the Appendix we show the convergence of \(Z^{(n)}_t (A_t^{(n)}, X_0^{(n)}(A_t^{(n)}))\) to \(Z_t (A_t, X_0(A_t))\) in \(L^1(\Omega \times [0,T]);\) that is,

$$\begin{aligned} \lim _{n \rightarrow +\infty } {{\mathbb {E}}}\int _0^T \left| Z_t(A_t, X_0(A_t))-Z_t^{(n)}(A_t^{(n)},X_0^{(n)}(A_t^{(n)}))\right| dt =0. \end{aligned}$$
(6.11)

Note that if (6.11) is true, then we also have

$$\begin{aligned} \lim _{n \rightarrow +\infty } {{\mathbb {E}}}\int _0^T \left| Z_t(A_t, X_0(A_t)) -Z_t^{(n)}(A_t^{(n)},X_0^{(n)}(A_t^{(n)}))\right| ^2 dt =0, \end{aligned}$$
(6.12)

because \(|Z_t(A_t, X_0(A_t))|\) and \(|Z_t^{(n)}(A_t^{(n)},X_0^{(n)}(A_t^{(n)}))|\) are bounded by a constant independent of n due to Lemma 4.1, Hypothesis (X1) and Sect. 6.2 (see also inequality (6.20) below).

Now we will prove that (6.11) is satisfied. For simplicity we write \(Z_s^{(n)}(x)\) and \(Z_s(x)\) instead of \(Z^{(n)}_s (A_s^{(n)}, x)\) and \(Z_s (A_s, x)\), respectively. Using the triangle inequality, we have

$$\begin{aligned} {{\mathbb {E}}}\int _0^T \left| Z_t(X_0(A_t))-Z_t^{(n)}(X_0^{(n)}(A_t^{(n)}))\right| dt \le \theta _1^n+\theta _2^n, \end{aligned}$$
(6.13)

with

$$\begin{aligned} \theta _1^n= & {} {{\mathbb {E}}}\int _0^T \left| Z_t(X_0(A_t))-Z_t(X^{(n)}_0(A^{(n)}_t))\right| dt,\\ \theta _2^n= & {} {{\mathbb {E}}}\int _0^T \left| Z_t(X^{(n)}_0(A^{(n)}_t))-Z_t^{(n)}(X_0^{(n)}(A_t^{(n)}))\right| dt. \end{aligned}$$

We first study \(\theta _1^n\). For this, we observe that (3.2) allows us to get

$$\begin{aligned} |Z_t(x)-Z_t(y)|\le & {} |x-y|+\int _0^t L_{0,s}^{-1}\ |b(s, L_{0,s} Z_s(x))- b(s, L_{0,s}Z_s(y))|ds\\\le & {} |x-y|+\int _0^t ||\gamma _s||_{\infty }\ |Z_s(x)-Z_s(y)|ds, \end{aligned}$$

and applying Gronwall’s Lemma we have, for \(c_1\) defined in (B1T),

$$\begin{aligned} |Z_t(x)-Z_t(y)|\le e^{c_1} |x-y|. \end{aligned}$$
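
This Gronwall step is the standard Lipschitz-in-the-initial-condition estimate for an equation whose drift is \(\Vert \gamma _s\Vert _{\infty }\)-Lipschitz. A toy numerical sketch (the drift and the Lipschitz bound below are arbitrary choices, not the coefficients of (3.2)):

```python
import numpy as np

# Toy check of |Z_t(x) - Z_t(y)| <= e^{c1}|x - y|, c1 = int_0^T gamma_s ds:
# two Euler trajectories of z' = f(t, z), with f gamma(t)-Lipschitz in z,
# started at x and y. For this scheme the discrete Gronwall bound
#   gap_{j+1} <= (1 + dt*gamma) * gap_j <= e^{dt*gamma} * gap_j
# holds exactly, so the final gap is below e^{c1}|x - y|.

gamma = lambda t: 0.5 + 0.3 * np.sin(t)     # Lipschitz bound gamma_t (toy)
f = lambda t, z: gamma(t) * np.tanh(z)      # tanh is 1-Lipschitz

def euler(z0, T=1.0, n=20_000):
    dt, z, t, c = T / n, z0, 0.0, 0.0
    for _ in range(n):
        z += dt * f(t, z)
        c += dt * gamma(t)                  # accumulates int_0^T gamma_s ds
        t += dt
    return z, c

x, y = 0.7, 0.2
zx, c1 = euler(x)
zy, _ = euler(y)
gap, bound = abs(zx - zy), np.exp(c1) * abs(x - y)
```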

Consequently, using the triangle inequality again, we can establish

$$\begin{aligned} \theta _1^n\le e^{c_1}{{\mathbb {E}}}\int _0^T \left| X_0(A_t)-X^{(n)}_0(A^{(n)}_t)\right| dt =e^{c_1}\left[ \theta _{1,1}^n+\theta _{1,2}^n\right] , \end{aligned}$$
(6.14)

with

$$\begin{aligned} \theta _{1,1}^n= & {} {{\mathbb {E}}}\int _0^T\left| X_0(A_t)-X_0 (A^{(n)}_t)\right| dt,\\ \theta _{1,2}^n= & {} {{\mathbb {E}}}\int _0^T\left| X_0(A^{(n)}_t)-X^{(n)}_0 (A_t^{(n)})\right| dt. \end{aligned}$$

By Propositions 2.1.4 and 2.2.12 in [5], we obtain

$$\begin{aligned} \theta _{1,1}^n \le \left\| \left( \int _0^T |D_sX_0|^2 ds\right) ^{\frac{1}{2}}\right\| _{L^\infty (\Omega )} {{\mathbb {E}}}\int _0^T\left| A_t-A^{(n)}_t\right| _{CM}dt, \end{aligned}$$
(6.15)

where CM denotes the Cameron–Martin norm. Now, we consider the last factor of (6.15). For \(s\le t\) and a certain generic constant \(C\ge 1\), we can apply (2.4) and [5] (Proposition 2.1.4) to conclude

$$\begin{aligned} \left| A_{s,t}-A^{(n)}_{s,t}\right| _{CM}^2:= & {} \int _s^t \left( a_r(A_{r,t})-a_r^{(n)}(A_{r,t}^{(n)})\right) ^2 dr\\\le & {} 2\int _s^t \left( a_r(A_{r,t})-a_r(A_{r,t}^{(n)})\right) ^2 dr + 2\int _s^t \left( a_r(A_{r,t}^{(n)})-a_r^{(n)}(A_{r,t}^{(n)})\right) ^2 dr\\\le & {} 2 \int _s^t\left\| \int _0^T (D_u a_r)^2 du\right\| _{L^\infty (\Omega \times [0,T])} \left| A_{r,t}-A^{(n)}_{r,t}\right| _{CM}^2 dr\\{} & {} +2 C\Vert a\Vert _{L^\infty (\Omega \times [0,T])} \int _s^t\left| a_r(A_{r,t}^{(n)})-a_r^{(n)}(A_{r,t}^{(n)})\right| dr\\\le & {} 2T\Vert Da\Vert _{L^\infty (\Omega \times [0,T]^2)}^2 \int _s^t \left| A_{r,t}-A^{(n)}_{r,t}\right| _{CM}^2 dr\\{} & {} +2C \Vert a\Vert _{L^\infty (\Omega \times [0,T])} \int _s^t\left| a_r(A_{r,t}^{(n)})-a_r^{(n)}(A_{r,t}^{(n)})\right| dr. \end{aligned}$$

Hence, taking expectation and using (2.6),

$$\begin{aligned} {\mathbb {E}}\left( \left| A_{s,t}-A^{(n)}_{s,t}\right| _{CM}^2\right)\le & {} 2T\Vert Da\Vert _{L^\infty (\Omega \times [0,T]^2)}^2 \int _s^t {\mathbb {E}}\left( \left| A_{r,t}-A^{(n)}_{r,t}\right| _{CM}^2\right) dr\\{} & {} +2C \Vert a\Vert _{L^\infty (\Omega \times [0,T])} \left( {\mathbb {E}}\int _s^t\left| a_r-a_r^{(n)}\right| ^2 dr\right) ^\frac{1}{2} \left( {\mathbb {E}}\int _s^t \left( L_{r,t}^{(n)}\right) ^{-1}dr\right) ^\frac{1}{2}\\\le & {} 2T\Vert Da\Vert _{L^\infty (\Omega \times [0,T]^2)}^2 \int _s^t {\mathbb {E}}\left( \left| A_{r,t}-A^{(n)}_{r,t}\right| _{CM}^2\right) dr\\{} & {} +C\Vert a\Vert _{L^\infty (\Omega \times [0,T])} \left\| a-a^{(n)} \right\| _{L^2(\Omega \times [0,T])}. \end{aligned}$$

Thus, Gronwall's lemma implies that, for \(0\le s\le t\le T\),

$$\begin{aligned} {\mathbb {E}}\left( \left| A_{s,t}-A^{(n)}_{s,t}\right| _{CM}^2\right) \le C\Vert a\Vert _{L^\infty (\Omega \times [0,T])} \left\| a-a^{(n)}\right\| _{L^2(\Omega \times [0,T])} \exp \left\{ 2T^2\Vert Da\Vert _{L^\infty (\Omega \times [0,T]^2)}^2\right\} .\nonumber \\ \end{aligned}$$
(6.16)

Similarly, replacing a and \(a^{(n)}\) by \(X_0\) and \(X_0^{(n)}\), respectively, we can state

$$\begin{aligned} \theta _{1,2}^n\le & {} \left( {\mathbb {E}}\int _0^T \left| X_0(A^{(n)}_t)- X^{(n)}_0 (A_t^{(n)})\right| ^2 L_{0,t}^{(n)} dt\right) ^\frac{1}{2} \left( {\mathbb {E}}\int _0^T \left( L_{0,t}^{(n)}\right) ^{-1} dt\right) ^\frac{1}{2}\nonumber \\\le & {} C\sqrt{T} \left\| X_0-X^{(n)}_0 \right\| _{L^2(\Omega )}. \end{aligned}$$
(6.17)

So, putting together (6.14), (6.15), (6.16) and (6.17) and considering the assumptions on \(a, a^{(n)}, X_0\) and \(X_0^{(n)}\), we get

$$\begin{aligned} \lim _{n\rightarrow +\infty } \theta _1^n=\lim _{n\rightarrow +\infty }{{\mathbb {E}}}\int _0^T \left| Z_t(X_0(A_t))-Z_t(X^{(n)}_0(A^{(n)}_t))\right| dt =0. \end{aligned}$$
(6.18)

Now, we analyze \(\theta _2^n\). Because of (4.6) we have, for \(t\in [0,T]\),

$$\begin{aligned} |Z_t(x)-Z_t^{(n)}(x)|\le & {} \int _0^t \left| L_{0,s}^{-1}\ b(s,L_{0,s}Z_s(x))-(L^n_{0,s})^{-1} \ b^{(n)}(s,L^{(n)}_{0,s}Z^{(n)}_s(x))\right| ds\\\le & {} \int _0^t L_{0,s}^{-1}\ \left| b(s, L_{0,s} Z_s(x))- b(s, L_{0,s}Z^{(n)}_s(x))\right| ds\\{} & {} +\int _0^t \left| L_{0,s}^{-1}\ b(s,L_{0,s}Z^{(n)}_s(x))-(L^n_{0,s})^{-1}\ b^{(n)} (s,L^{(n)}_{0,s}Z^{(n)}_s(x))\right| ds\\\le & {} \int _0^t ||\gamma _s||_{\infty }\ |Z_s(x)-Z^{(n)}_s(x)|ds\\{} & {} +\int _0^t \left| L_{0,s}^{-1}\ b(s,L_{0,s}Z^{(n)}_s(x))-(L^n_{0,s})^{-1}\ b^{(n)}(s,L^{(n)}_{0,s}Z^{(n)}_s(x))\right| ds. \end{aligned}$$

Applying Gronwall’s Lemma we obtain

$$\begin{aligned} |Z_t(x)-Z_t^{(n)}(x)|\le e^{c_1}\int _0^t \left| L_{0,s}^{-1}\ b(s,L_{0,s}Z^{(n)}_s(x))-(L^n_{0,s})^{-1}\ b^{(n)}(s,L^{(n)}_{0,s}Z^{(n)}_s(x))\right| ds. \end{aligned}$$

So, from (B4T), we can decompose

$$\begin{aligned} \theta _2^n \le e^{c_1} \sum _{i=1}^4 H_{i,n}, \end{aligned}$$
(6.19)

with

$$\begin{aligned} H_{1,n}= & {} {{\mathbb {E}}}\int _0^T \int _0^t \left| {{\bar{b}}}_s-{{\bar{b}}}_s^{(n)} \right| \left| Z^{(n)}_s(X_0^{(n)}(A_t^{(n)}))\right| ds dt,\\ H_{2,n}= & {} {{\mathbb {E}}}\int _0^T \int _0^t L_{0,s}^{-1} \left| \phi (s,L_{0,s}Z^{(n)}_s(X_0^{(n)}(A_t^{(n)}))) -\phi (s,L^{(n)}_{0,s}Z^{(n)}_s(X_0^{(n)}(A_t^{(n)})))\right| ds dt,\\ H_{3,n}= & {} {{\mathbb {E}}}\int _0^T \int _0^t \left| L_{0,s}^{-1}-(L^{(n)}_{0,s})^{-1}\right| \left| \phi (s,L^{(n)}_{0,s} Z^{(n)}_s(X_0^{(n)}(A_t^{(n)})))\right| ds dt,\\ H_{4,n}= & {} {{\mathbb {E}}}\int _0^T \int _0^t (L^{(n)}_{0,s})^{-1} \left| \phi (s,L^{(n)}_{0,s}Z^{(n)}_s(X_0^{(n)}(A_t^{(n)})))-\phi ^{(n)} (s,L^{(n)}_{0,s}Z^{(n)}_s(X_0^{(n)}(A_t^{(n)})))\right| ds dt. \end{aligned}$$

As in Lemma 4.1, using that \(\Vert {\bar{b}}^{(n)}\Vert _{L^\infty (\Omega \times [0,T])} \le c \Vert {\bar{b}}\Vert _{L^\infty (\Omega \times [0,T])}\) for a certain generic \(c\ge 1\) (due to (B4T) and the definition of \({\bar{b}}^{(n)}\)) and that \(|\phi ^{(n)} (t,x)|\le 9 |x| \Vert \delta \Vert _{L^\infty (\Omega \times [0,T])}\) (thanks to the construction of \(\phi ^{(n)}\) in Sect. 6.2), we have

$$\begin{aligned} \left| Z^{(n)}_s(\omega ,x)\right| \le C |x|, \end{aligned}$$
(6.20)

for all \(\omega \in \Omega \) and all \(n\ge 1\). Then the Cauchy-Schwarz inequality, (6.20) and the fact that \(\Vert X_0^{(n)}\Vert _{L^\infty (\Omega )} \le c \Vert X_0\Vert _{L^\infty (\Omega )}\), for a certain generic constant \(c\ge 1\), give

$$\begin{aligned} H_{1,n} \le C T^{\frac{3}{2}} \ \left( {\mathbb {E}}\int _0^T \left| {\bar{b}}_t-{{\bar{b}}}_t^{(n)}\right| ^2 dt\right) ^\frac{1}{2}. \end{aligned}$$
(6.21)

Proceeding as in the proof of (6.21), we obtain

$$\begin{aligned} H_{2,n}\le & {} \Vert \delta \Vert _{L^{\infty }(\Omega \times [0,T])} {{\mathbb {E}}}\int _0^T \int _0^t L_{0,s}^{-1} \left| L_{0,s}-L^{(n)}_{0,s}\right| \left| Z^{(n)}_s (X_0^{(n)}(A_t^{(n)}))\right| ds dt\nonumber \\\le & {} C T \Vert \delta \Vert _{L^{\infty }(\Omega \times [0,T])}\left( {{\mathbb {E}}}\int _0^T \left| L_{0,s}-L^{(n)}_{0,s}\right| ^2 ds\right) ^\frac{1}{2}. \end{aligned}$$
(6.22)

Moreover, using \(\phi (s,0)=0\) in (B2T), we get

$$\begin{aligned} H_{3,n}\le & {} C\ \Vert \delta \Vert _{L^{\infty }(\Omega \times [0,T])}\ {{\mathbb {E}}}\int _0^T \int _0^t\left| L_{0,s}^{-1}-(L^{(n)}_{0,s})^{-1}\right| L^{(n)}_{0,s} \left| Z^{(n)}_s(X_0^{(n)}(A_t^{(n)}))\right| ds dt\nonumber \\\le & {} CT \ \Vert \delta \Vert _{L^{\infty }(\Omega \times [0,T])}\ {{\mathbb {E}}}\int _0^T\left| L_{0,s}^{-1} L^{(n)}_{0,s}-1\right| ds \nonumber \\\le & {} C T \ \Vert \delta \Vert _{L^{\infty }(\Omega \times [0,T])}\ {{\mathbb {E}}}\int _0^T L_{0,s}^{-1} \left| L^{(n)}_{0,s}- L_{0,s}\right| ds\nonumber \\\le & {} C T \ \Vert \delta \Vert _{L^{\infty }(\Omega \times [0,T])} \left( {{\mathbb {E}}}\int _0^T \left| L^{(n)}_{0,s}- L_{0,s}\right| ^2ds\right) ^\frac{1}{2}. \end{aligned}$$
(6.23)

Now we deal with the last term \(H_{4,n}\). Note that, for any \(M>0\), \(H_{4,n}\) can be written as

$$\begin{aligned} H_{4,n}=H_{4,1,n}^M+H_{4,2,n}^M, \end{aligned}$$
(6.24)

with

$$\begin{aligned} H_{4,1,n}^M= & {} {{\mathbb {E}}}\int _0^T \int _0^t {1\!\!1}_{\{L_{0,s}^{(n)}<M\}} (L^{(n)}_{0,s})^{-1}\\{} & {} \qquad \times \left| \phi (s,L^{(n)}_{0,s}Z^{(n)}_s(X_0^{(n)} (A_t^{(n)})))-\phi ^{(n)}(s,L^{(n)}_{0,s}Z^{(n)}_s(X_0^{(n)}(A_t^{(n)})))\right| ds dt,\\ H_{4,2,n}^M= & {} {{\mathbb {E}}}\int _0^T \int _0^t {1\!\!1}_{\{L_{0,s}^{(n)}\ge M\}} (L^{(n)}_{0,s})^{-1}\\{} & {} \qquad \times \left| \phi (s,L^{(n)}_{0,s}Z^{(n)}_s(X_0^{(n)} (A_t^{(n)})))-\phi ^{(n)}(s,L^{(n)}_{0,s}Z^{(n)}_s(X_0^{(n)}(A_t^{(n)})))\right| ds dt. \end{aligned}$$

On the one hand, on \(\{L_{0,s}^{(n)}<M\}\), the quantity \(L^{(n)}_{0,s}Z^{(n)}_s(X_0^{(n)}(A_t^{(n)}))\) is bounded. Hence, for a compact set \(K\) big enough, we can establish, for certain constants \(C>0\) and \(L>0\) such that \(K\subset [-L,L]\),

$$\begin{aligned} \left[ H_{4,1,n}^M\right] ^2\le & {} T C \ {{\mathbb {E}}}\int _0^T \sup _{x\in K} \left| \phi (s,x)-\phi ^{(n)}(s,x)\right| ^2ds \nonumber \\= & {} T C \ {{\mathbb {E}}}\int _0^T \sup _{x\in K} \left( \int _0^x \left[ \partial _x\phi (s,y)-\psi ^{(n)}(s,y)\right] dy\right) ^2 ds\nonumber \\\le & {} T C \ {{\mathbb {E}}}\int _0^T \sup _{x\in K} \left( |x|\ \int _0^x \left[ \partial _x\phi (s,y)-\psi ^{(n)}(s,y)\right] ^2 dy\right) ds\nonumber \\\le & {} T C L \ {{\mathbb {E}}}\int _0^T \int _{-L}^L \left[ \partial _x\phi (s,y)-\psi ^{(n)}(s,y)\right] ^2 dy ds, \end{aligned}$$
(6.25)

and this converges to zero as a consequence of Sect. 6.2.
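For completeness, the third inequality in (6.25) follows from the Cauchy-Schwarz inequality in the form

$$\begin{aligned} \left( \int _0^x h(y)\, dy\right) ^2 \le |x|\ \left| \int _0^x h(y)^2\, dy\right| , \end{aligned}$$

applied, for each fixed \(s\), to \(h(y)=\partial _x\phi (s,y)-\psi ^{(n)}(s,y)\).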

On the other hand, Lemma 4.1, (6.20) and (B4T) yield

$$\begin{aligned} H_{4,2,n}^M\le & {} {{\mathbb {E}}}\int _0^T \int _0^t {1\!\!1}_{\{L_{0,s}^{(n)}\ge M\}} (L^{(n)}_{0,s})^{-1}\nonumber \\{} & {} \qquad \times \left[ \left| \phi (s,L^{(n)}_{0,s}Z^{(n)}_s(X_0^{(n)} (A_t^{(n)})))\right| +\left| \phi ^{(n)}(s,L^{(n)}_{0,s}Z^{(n)}_s (X_0^{(n)}(A_t^{(n)})))\right| \right] ds dt\nonumber \\\le & {} C T \ \Vert \delta \Vert _{L^{\infty }(\Omega \times [0,T])}\ {{\mathbb {E}}}\int _0^T {1\!\!1}_{\{L_{0,s}^{(n)}\ge M\}} ds, \end{aligned}$$

and Chebyshev's inequality implies

$$\begin{aligned} \lim _{M\rightarrow +\infty } H_{4,2,n}^M \le \lim _{M\rightarrow +\infty } \frac{C T^2 \ \Vert \delta \Vert _{L^{\infty }(\Omega \times [0,T])}}{M}=0. \end{aligned}$$
(6.26)
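Explicitly, the step leading to (6.26) is Markov's inequality: assuming, as the constant \(C\) above suggests, that \(\sup _{n\ge 1}\sup _{s\le T} {{\mathbb {E}}}[L^{(n)}_{0,s}]<\infty \), we have

$$\begin{aligned} {{\mathbb {E}}}\int _0^T {1\!\!1}_{\{L_{0,s}^{(n)}\ge M\}}\, ds =\int _0^T {{\mathbb {P}}}\left( L_{0,s}^{(n)}\ge M\right) ds \le \frac{1}{M}\int _0^T {{\mathbb {E}}}\left[ L^{(n)}_{0,s}\right] ds \le \frac{CT}{M}. \end{aligned}$$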

So, the last part of Sect. 6.2 and the definitions of \(a^{(n)}\) and \({\bar{b}}^{(n)}\), together with (6.19) and (6.21)–(6.26), allow us to obtain

$$\begin{aligned} \lim _{n\rightarrow +\infty } \theta _2^n = \lim _{n\rightarrow \infty } {{\mathbb {E}}}\int _0^T \left| Z_t(X^{(n)}_0(A^{(n)}_t))-Z_t^{(n)}(X_0^{(n)}(A_t^{(n)}))\right| dt=0. \end{aligned}$$
(6.27)

Finally, (6.18), (6.27) and (6.13) yield that (6.11) holds. \(\square \)