Abstract
In the present paper, we study different types of stability of the solution of a semi-linear anticipating stochastic differential equation driven by a Brownian motion, with a random variable as initial condition. The stochastic integral involved is the Skorohod one. Since the initial condition is random, the stability concepts need to be redefined. The new stability criteria depend on the derivative of the initial condition in the Malliavin calculus sense.
1 Introduction
Let \(W:=\{W_t, t\ge 0\}\) be a standard Brownian motion defined on a filtered probability space \((\Omega , \mathcal {F}, {{\mathbb {F}}}, {{\mathbb {P}}}).\) Consider the stochastic differential equation
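In the semi-linear Skorohod form studied in Sect. 3, and with the coefficients described next, the equation can be sketched as

```latex
X_t = X_0 + \int_0^t b(s, X_s)\, ds + \int_0^t a_s\, X_s\, \delta W_s,
\qquad t\ge 0,
\tag{1.1}
```

where \(\delta W\) denotes the Skorohod integral recalled below.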
Here \(X_0:\Omega \rightarrow {{\mathbb {R}}}\) is an \(\mathcal {F}-\)measurable random variable, \(b:[0,\infty )\times {{\mathbb {R}}}\times \Omega \rightarrow {{\mathbb {R}}}\) is an \({\mathbb {F}}\)-adapted random field and \(a:[0,\infty ) \times \Omega \rightarrow {{\mathbb {R}}}\) is an \({\mathbb {F}}\)-adapted random process. Since the initial condition \(X_0\) is a random variable, the stochastic integral has to be an anticipating one, allowing us to integrate processes that are not necessarily adapted to the underlying filtration \({\mathbb {F}}\). Here we use the well-known Skorohod integral, introduced by Skorohod in [17], which is an extension of the classical Itô integral. The existence and uniqueness of the solution of anticipating stochastic differential equations like (1.1), among other properties, have been studied in [4, 5, 12]; see also [14]. This type of equation has proven to be useful in quantitative finance, for instance in insider trading modeling; see, for example, the recent paper [6] and the references therein, and [13].
The purpose of the present paper is to study different types of stability of the solution of Eq. (1.1). Since \(X_0\) is a random variable, we need to extend the concept of stability. Concretely, we introduce three types of stochastic stability: weak stability in probability, exponential \(p-\)stability and exponential stability in probability. We prove that the solution of Eq. (1.1) satisfies all these types of stability under suitable conditions. The case in which \(X_0\) is a constant, where anticipative calculus is not necessary, is treated by Khasminskii in [9] (Sections 1.5\(-\)1.8 and Chapter 5); see also Arnold [2] (Chapter 11) and Gard [7] (Chapter 5).
Stability means insensitivity of a system to small changes in the initial state. In a stable system, trajectories that are close to each other at a specific instant continue to be close to each other at subsequent instants. Lyapunov developed in 1892 a method to determine the stability of a system without knowing its explicit solution. For deterministic dynamical systems, the theory of stability of solutions is very well developed; see for example [3]. On the other hand, it is clear that stability is a very important property in applications. For example, for stable systems whose explicit solutions are not known, we can try to find approximate solutions using numerical methods.
Stochastic stability has been developed much more recently. Generalizing deterministic stability to a stochastic setting is not straightforward, and different definitions have been considered in the literature. During the last decades many results based on the Lyapunov point of view have been obtained for Itô stochastic differential equations; see Chapters 1 and 5 of Khasminskii [9] as a main reference. As pointed out by Khasminskii, the study of stability is important in many applications of stochastic dynamical systems (see also the references in [9]). In particular, the stability of linear systems has applications in automatic control (for instance, [11, 16]).
In the present paper we extend, as far as we know for the first time, Khasminskii's notions of weak stability in probability and exponential stability in probability to the case of a random initial condition, and therefore to the case of an anticipating stochastic differential equation. Several stability results for the solution are obtained. Malliavin differentiability hypotheses are naturally required.
In Sect. 2 we recall some preliminary results about Malliavin calculus and anticipative Girsanov transformations, following essentially Buckdhan [5] and Nualart [14]. In Sect. 3 we establish the existence and uniqueness of the solution of Eq. (1.1), extending the result for the linear case (\(b(u,x)=b(u)\cdot x\)) proved in Buckdhan [4]; see also Buckdhan [5] (Theorem 3.2.1). The proof of this existence and uniqueness result is sketched in the Appendix (Section 6.1). The solution of Eq. (1.1) is written in terms of an auxiliary process Z, whose properties are analyzed in Sect. 4. Finally, in Sect. 5, we introduce the three new types of stochastic stability, suitable to our context, and prove different stability results for the solution of Eq. (1.1).
2 Preliminaries
In the present paper we assume \((\Omega ,\mathcal {F},\mathbb {P})\) is the canonical Wiener space. That is, \(\Omega \) is the family of all continuous functions from \([0,\infty )\) to \({\mathbb {R}}\) vanishing at 0, \(\mathcal {F}\) is the Borel \(\sigma \)-algebra of \(\Omega \), when this space is equipped with the topology of uniform convergence on compact sets, and \({{\mathbb {P}}}\) is the probability measure such that the canonical process \(W_t(\omega )=\omega (t)\) is a standard Brownian motion. Moreover, \({{\mathbb {F}}}:=\{{{\mathcal {F}}}_t, t\ge 0\}\) is the completed natural filtration of W. We denote by \(\mathcal {B}(\mathbb {R})\) the Borel \(\sigma \)-algebra on \(\mathbb {R}\) and, for any \(T>0\), by \(\mathcal {P}_T\) the progressive \(\sigma \)-algebra on \(\Omega \times [0,T].\)
2.1 Malliavin calculus and Sobolev spaces
Let \(C_b^{\infty }(\mathbb {R} ^n)\) be the family of all the \(C^{\infty }\)-functions from \({{\mathbb {R}}}^n\) to \({\mathbb {R}}\) that are bounded together with all their partial derivatives. Consider the class \(\mathcal {S}\) of smooth random variables F of the form
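that is, in the standard notation of Nualart [14] (Section 1.2),

```latex
F = f\big( W_{t_1},\ldots ,W_{t_n}\big),
\tag{2.1}
```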
with \(f\in C_b^{\infty }(\mathbb {R}^n)\) and \(t_1,\ldots ,t_n\in \mathbb {R}_+\). For the smooth functional F given in (2.1), we define its derivative in the Malliavin calculus sense as the process
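namely, in the classical form,

```latex
D_t F = \sum_{i=1}^{n} \partial _i f\big( W_{t_1},\ldots ,W_{t_n}\big)\, 1\!\!1_{[0,t_i]}(t),
\qquad t\ge 0.
```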
More generally, we define the k-th derivative of F as \(D^k_{s_1,\ldots ,s_k}F=D_{s_k}\cdots D_{s_1}F.\)
Now, we introduce the spaces \({{\mathbb {D}}}^{k,p}_T\), where \(k\in \mathbb {N}\), \(T>0\) and \(p\ge 1\). On \(\mathcal {S}\), consider the semi-norm
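in the standard Sobolev–Watanabe form,

```latex
||F||_{k,p,T}^{\,p} := ||F||_p^p + \sum_{j=1}^{k}
\Big|\Big|\, ||D^j F||_{L^2([0,T]^j)} \Big|\Big|_p^p,
```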
where \(||\cdot ||_p\) stands for the norm in \(L^p(\Omega )\). It is well-known that the operator \(D^k\) is closable from \(\mathcal {S}\subset L^p(\Omega )\) into \(L^p(\Omega ;L^2([0,T]^k))\), see Nualart [14] (Section 1.2). Thus, the space \({\mathbb D}^{k,p}_T\) is defined as the completion of the family \(\mathcal {S}\) with respect to the semi-norm \(||\cdot ||_{k,p,T}\). Note that if \(0<{{\tilde{T}}}<T\), we have \({{\mathbb {D}}}^{k,p}_T\subset {\mathbb D}^{k,p}_{{\tilde{T}}}.\)
As in Buckdhan [5], \(\mathbb {D}^{k,\infty }_T\) (resp. \(\tilde{\mathbb {D}}^{k,\infty }_T\)) denotes the family of all random variables \(F\in \mathbb {D}^{k,2}_T\) such that \(F\in {L^{\infty }(\Omega )}\) and \(D^m F\in L^{\infty }(\Omega ;L^2([0,T]^m))\) (resp. \(D^m F\in L^{\infty }(\Omega \times [0,T]^m)\)), for \(m=1,\ldots ,k.\)
For \(T>0\), the Skorohod integral with respect to W, denoted by \(\delta _T,\) is the adjoint of the derivative operator \(D:\tilde{\mathbb {D}}^{1,\infty }_T \subset L^{\infty }\left( \Omega \right) \rightarrow L^{\infty }\left( \Omega \times [0,T]\right) \). That is, u is in \(Dom \ \delta _T\) if and only if \(u\in L^{1}\left( \Omega \times [0,T]\right) \) and there exists a random variable \(\delta _T(u)\in L^1(\Omega )\) satisfying the duality relation
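in the standard form,

```latex
{\mathbb {E}}\big( F\, \delta _T(u)\big)
= {\mathbb {E}}\int_0^T D_t F \; u_t\, dt,
\qquad F\in \tilde{\mathbb {D}}^{1,\infty }_T.
\tag{2.2}
```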
Sometimes, when \(u\in L^{2}\left( \Omega \times [0,T]\right) \), we consider the Skorohod integral as the adjoint of \(D:\mathbb {D}^{1,2}_T \subset L^{2}\left( \Omega \right) \rightarrow L^{2}\left( \Omega \times [0,T]\right) \). That is, \(u\in Dom\, \delta _T\) if and only if there exists \(\delta _T(u)\in L^{2}\left( \Omega \right) \) such that (2.2) holds for any \(F\in \mathbb {D}^{1,2}_T\). Note that the first definition of \(\delta _T\) is an extension of the second one.
The operator \(\delta _T\) is an extension of the Itô integral in the sense that the set \(L_{a}^{2}(\Omega \times [0,T])\) of all square-integrable and adapted processes with respect to the filtration generated by W is included in \(Dom\ \delta _T\) and the operator \(\delta _T\) restricted to \(L_{a}^{2}(\Omega \times [0,T] )\) coincides with the Itô stochastic integral with respect to W. For \(u\in Dom \ \delta _T\), we make use of the notation \(\delta _T(u)=\int _{0}^{T}u_{t}\delta W_{t}\) and for \(t\in [0,T]\) and \(u{1\!\!1}_{[0,t]}\) in \(Dom\ \delta _T\), we write \(\delta _T(u{1\!\!1}_{[0,t]})=\int _{0}^{t}u_{s}\delta W_{s}.\) Observe also that for \(0<{{\tilde{T}}}<T,\) if \(u\in Dom \ \delta _{{\tilde{T}}}\), then \(u{1\!\!1}_{[0,{{\tilde{T}}}]}\in Dom \ \delta _{T}\) and in this case, \(\delta _{{\tilde{T}}}(u)=\delta _T(u{1\!\!1}_{[0,{\tilde{T}}]})=\int _{0}^{{\tilde{T}}}u_{s}\delta W_{s}.\)
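As a concrete illustration of this extension, consider the non-adapted integrand \(u_t=W_T\), for which \(\delta _T(u)=W_T\cdot W_T-\int _0^T D_t W_T\, dt=W_T^2-T\) (a classical example, not taken from the present paper). A minimal Monte Carlo sanity check of this identity and of the duality relation, assuming only NumPy:

```python
import numpy as np

# Classical Skorohod example (standard, not from the paper):
# delta_T(W_T 1_[0,T]) = W_T^2 - T, and the duality relation
# E[F delta_T(u)] = E[ int_0^T D_t F u_t dt ] for F = W_T, u_t = W_T
# (so D_t F = 1 on [0,T]). Both expectations vanish here.
rng = np.random.default_rng(0)
T, n = 1.0, 1_000_000
WT = rng.normal(0.0, np.sqrt(T), size=n)   # W_T ~ N(0, T)

skorohod = WT**2 - T                       # delta_T(W_T 1_[0,T])
lhs = np.mean(WT * skorohod)               # E[F delta_T(u)]
rhs = np.mean(T * WT)                      # E[int_0^T 1 * W_T dt] = T E[W_T]

print(abs(np.mean(skorohod)))  # Skorohod integrals have zero expectation
print(abs(lhs - rhs))          # duality: both sides coincide (here, both are 0)
```

Note that for an adapted integrand the same computation would reproduce the Itô integral, consistently with the restriction property stated above.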
Let \(\mathcal {S}_{T}\) be the family of processes of the form \(u(\cdot )=\sum _{j=1}^{n}F_{j}h_{j}(\cdot )\), where for any \(j=1,\ldots ,n,\) \(F_{j}\) is a random variable in \(\mathcal {S}\) and \(h_j:[0,T]\rightarrow \mathbb {R}\) is a bounded measurable function. We denote by \(\mathbb {L}^{1,2,f}_T\) the closure of \(\mathcal {S}_T\) with respect to the semi-norm
where \(\Delta _{1}^{T}=\left\{ (s,t)\in [0,T]^{2}:s\ge t\right\} ,\) and by \(\mathbb {L}_{T}^F,\) the closure of \(\mathcal {S}_{T}\) with respect to the semi-norm
with \(\Delta _{2}^{T}=\{(r,s,t)\in [0,T]^{3}:r\vee s\ge t\}.\) Observe that \(L^2_a (\Omega \times [0,T])\subseteq \mathbb {L}^F_T\) for any \(T>0\), with \(D_su_t=D_r D_s u_t=0\) for \(s>t\) and \((r,s,t)\in \Delta _{2}^{T}\).
Finally, for a process \(X\in {{\mathbb {L}}}^{1,2,f}_T\) and given \(p\ge 1\), we denote by \(D^-X\) the process in \(L^p(\Omega \times [0,T])\) such that
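presumably in the spirit of the space \(\mathbb {L}^{1,2}_{-}\) of Nualart and Pardoux, that is,

```latex
\lim_{\varepsilon \downarrow 0} \int_0^T {\mathbb {E}}\Big(
\sup_{(t-\varepsilon )\vee 0 \le s < t}
\big| D_s X_t - (D^{-}X)_t \big|^p \Big)\, dt = 0,
\tag{2.3}
```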
if such a process \(D^-X\) exists. Henceforth, the space \(\mathbb {L}^{1,2,f}_{T,p-}\) represents the family of processes \(X\in {{\mathbb {L}}}^{1,2,f}_T\) such that (2.3) is satisfied.
2.2 Anticipative Girsanov transformations
Following Buckdhan [5], and in order to establish the existence of a unique solution to Eq. (1.1), we introduce two families \(A=\{A_{s,t}, \, 0\le s\le t\}\) and \(\{T_t, \, t\ge 0\}\) of transformations on the Wiener space \(\Omega \) through the equations
and
Define \(A_t:=A_{0,t}.\) Notice that, from Buckdhan [5] (Section 2.2), if \(a\in L^2([0,T];\mathbb {D}^{1,\infty }_T)\), Eqs. (2.4) and (2.5) have a unique solution for \(0\le s\le t\le T\), and moreover, \(A_{s,t}=T_s A_t\).
Additionally, if a is also an adapted process, the Girsanov theorem (see Buckdhan [5], Proposition 2.2.3) implies
for \(F\in L^{\infty }(\Omega ),\) where
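presumably, L is the classical Girsanov density (the Doléans exponential of a):

```latex
L_{s,t} := \exp \Big\{ \int_s^t a_u\, dW_u - \frac{1}{2}\int_s^t a_u^2\, du \Big\},
\qquad \text{so that}\quad
{\mathbb {E}}\big( F(A_t)\big) = {\mathbb {E}}\big( F\, L_{0,t}\big).
```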
In the following, we frequently use the fact that if F is \(\mathcal{F}_s-\)measurable, \(t\ge s\) and a is an adapted process, then \(F(A_t)=F(A_s)\) and \(F(T_t)=F(T_s).\)
3 Anticipating semi-linear equations
In this section, for \(T>0\) fixed, we consider the anticipating semi-linear stochastic differential equation
where the random variable \(X_0\) and the coefficients a and b satisfy suitable conditions.
The following are the hypotheses used in the paper. Some of them are stronger than others, but we introduce them in this way in order not to impose, in some results, conditions stronger than needed.
-
(X1)
\(X_0\in L^{\infty } (\Omega ).\)
-
(X2T)
For any \(T>0\), \(X_0\in {\tilde{{\mathbb {D}}}}^{2,\infty }_T.\) (Note that this implies (X1)).
-
(X3T)
\(X_0\) satisfies (X2T) and there exists a constant \(\eta >0\) such that \(X_0>\eta \) for all \(\omega \) or \(X_0<-\eta \) for all \(\omega .\)
-
(A1T)
\(a\in L_a^2([0,T];\mathbb {D}^{1,\infty }_T)\), that is, a is an \({\mathbb {F}}-\)adapted process in \(L^2([0,T];\mathbb {D}^{1,\infty }_T).\)
-
(A2T)
a satisfies (A1T) and moreover \(a\in L^{\infty }([0,T]\times \Omega )\) and \(Da\in L^{\infty }(\Omega \times [0,T]^2).\)
-
(B1T)
\(b:\Omega \times [0,T]\times \mathbb {R}\rightarrow \mathbb {R}\) is a \(\mathcal {P}_T\otimes \mathcal {B}(\mathbb {R})-\)measurable random field such that there exist an adapted non-negative process \(\gamma \in L^{\infty } (\Omega \times [0,T])\) and a constant \(L>0\) satisfying
$$\begin{aligned} |b(t,x)-b(t,y)|\le \gamma _t |x-y|, \quad \sup _{t\in [0,T]}||b(t,0)||_{\infty }\le L, \end{aligned}$$for all \(x, y\in {{\mathbb {R}}}\) and \(t\in [0,T]\) w.p.1. Recall that \(||\cdot ||_{\infty }\) stands for the essential supremum of a random variable. Let us denote
$$\begin{aligned} c_1:=\int _0^T ||\gamma _s||_{\infty } ds. \end{aligned}$$ -
(B2T)
b satisfies (B1T) and \(b(t,0)=0\) for all \(t\in [0,T]\) and any fixed \(T>0\). Moreover, b has almost surely continuous trajectories in t and x, and \(\partial _x b (t,x)\) exists and is continuous in t and x.
-
(B3T)
b satisfies (B2T), \(b(\cdot ,x)\in L^p([0,T]; \mathbb {D}_T^{1,p})\) for all \(p\ge 2\) and \(x\in \mathbb {R}\), \(b(t,x)\in {{\mathbb {D}}}^{1,\infty }_T\) for all \(\,t\in [0,T]\) and \(x\in {{\mathbb {R}}}\), \(D_t b(s,\cdot )\) is a measurable random field continuous in x for any s and t, and there exists a non-negative process \(M\in L^1([0,T]^2, L^{\infty }(\Omega ))\) such that \(|D_s b(t,x,\omega )|\le M(s,t) \, |x|\) and
$$\begin{aligned} c_2:=\sup _{0\le r\le T} \int _r^T ||M(r,s)||_{\infty } ds<\infty . \end{aligned}$$ -
(B4T)
Assume that b satisfies (B3T) for any \(T>0\) and has the form
$$\begin{aligned} b(t,x)={{\bar{b}}}_t x+\phi (t,x), \end{aligned}$$where \({{\bar{b}}}\in L^{\infty } (\Omega \times [0,T])\), \(D{\bar{b}}\in L^{\infty }(\Omega \times [0,T]^2)\) and \(\phi \) satisfies (B3T) with a certain process \(\delta \) playing the role of the process \(\gamma \) in (B2T). Moreover, the function \(\partial _x^2 \phi (t,x)\) exists, is continuous in t and x, and is bounded uniformly on \(\Omega \times [0,T]\times \mathbb {R}\).
Now we proceed as in Nualart [14] (Theorem 3.3.6). Consider \(L_{0,t}\) defined in (2.7). Remember that Hypothesis (A1T) implies that for \(0\le s\le t\), \(L_{0,s}(T_t)=L_{0,s}(T_s)\). Also notice that Hypotheses (A1T) and (B1T) imply that for all \(x \in \mathbb {R}\) and almost all \(\omega \in \Omega \), the equation
has a unique solution. The relation between this equation and Eq. (3.1) is given by the following theorem:
Theorem 3.1
Assume (X1), (A1T) and (B1T) hold. Define
Then, the process \(X=\{X_t, \, 0\le t\le T\}\) satisfies \({1\!\!1}_{[0,t]} (\cdot )\, a_{\cdot } X_{\cdot }\in \textrm{Dom}\,\delta _T\) for all \(t\in [0,T]\), belongs to \(L^1(\Omega \times [0,T])\) and is a solution of Eq. (3.1). Conversely, if \(Y\in L^1 (\Omega \times [0,T])\) is a solution of Eq. (3.1) and a satisfies (A2T), then Y agrees with the right-hand side of (3.3).
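Written out, in the composed form used repeatedly below (cf. (4.6) and (5.6)), the construction can be sketched as

```latex
Z_t(A_t,x) = x + \int_0^t L_{0,s}^{-1}\,
b\big( s,\, L_{0,s}\, Z_s(A_s,x)\big)\, ds,
\qquad
X_t = L_{0,t}\, Z_t\big( A_t,\, X_0(A_t)\big).
```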
Remarks 3.2
-
1.
Note that in the linear case (i.e. \(b(s,x)={{\bar{b}}}_s\cdot x\)), Hypothesis (B1T) has to be applied to \(\gamma :=|{{\bar{b}}}|\). This is treated in Buckdhan [5] (Theorem 3.2.1). In this case, (3.3) has the form
$$\begin{aligned} X_t=L_{0,t}\cdot \exp \left\{ \int _0^t {{\bar{b}}}_s ds\right\} \cdot X_0(A_t), \, t\ge 0. \end{aligned}$$ -
2.
The semi-linear Eq. (3.1), when a is a deterministic function of \(L^2([0,T])\), is considered in Nualart [14] (Theorem 3.3.6). In the present paper, following ideas stated in Buckdhan [5] (Chapter 3), we extend the result in Nualart [14] to the case that a is a process that satisfies Hypothesis (A1T) and \(\gamma \) is random.
-
3.
Assume (X1), (A2T) and (B1T) are satisfied for any \(T>0\). Note that in this case, Theorem 3.1 says that equation (3.1) has a unique solution on \(\Omega \times [0,\infty )\) given by (3.3). For the existence we only need to assume that, for each \(T>0\), \(\gamma \in L^1 ([0,T],L^\infty (\Omega )).\) Condition \(\gamma \in L^\infty (\Omega \times [0,T])\) is needed for the uniqueness.
Proof
Since the proof of this theorem is long and similar to those in Nualart [14] or Buckdhan [5], we only sketch it in the Appendix (Subsection 6.1). For details, the reader can see the references [4, 5, 14]. \(\square \)
4 Some properties of the process Z
In this section we establish some properties of the process Z introduced in Eq. (3.2).
Lemma 4.1
Let \(T>0\) and assume (A2T) and (B2T) hold. Then, the solution Z of Eq. (3.2) satisfies
and
for \((t,x)\in [0,T]\times \mathbb {R}\) and for almost all \(\omega \in \Omega \).
Proof
Let \(\omega \in \Omega \) be such that (B2T) is satisfied. Note that, since \(L_{0,s}\) is adapted to the underlying filtration \({\mathbb {F}}\), Eq. (3.2) can be written as
Inequality (4.1) is an immediate consequence of Gronwall’s lemma, (A1T) and (B2T). Taking partial derivatives with respect to x in Eq. (4.3) and using Hartman [8] (Section 5.3) and (A2T) we obtain that \(\partial _x Z_t(\omega ,x)\) exists and satisfies the equation
whose explicit solution is given by
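consistently with (5.6) below,

```latex
\partial _x Z_t(\omega ,x)
= \exp \Big\{ \int_0^t (\partial _x b)\big( s,\, L_{0,s}\, Z_s(\omega ,x)\big)\, ds \Big\}.
\tag{4.5}
```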
Finally (B1T) and (A2T) give that inequality (4.2) is true. \(\square \)
Lemma 4.2
Fix \(T>0.\) Assume (A1T) and (B2T) hold. Then, for \(x\in \mathbb {R}\),
is \(\mathcal {P}_T\)-measurable and belongs to \({{\mathbb {L}}}^F_T.\)
Proof
Let \(t_0\in (0,T]\). Then, (3.2) implies
Note that, thanks to the Lipschitz-type bound provided by (B1T), the previous equation has a unique solution. Moreover, this solution is adapted since b is \(\mathcal {P}_T\otimes \mathcal {B}(\mathbb {R})\)-measurable. So, \(t \mapsto Z_t(A_{t_0},x)\) is \(\mathcal {F}_t\)-measurable for all \(t\in [0,t_0]\). In particular, \(Z_t(A_{t},x)\) is \(\mathcal {F}_t\)-measurable for all \(t\in [0,t_0]\), and consequently, for all \(t\in [0,T]\).
Finally, for \(s<t\),
where the second equality is a consequence of the fact that \(Z_t(A_{t},x)\) is \(\mathcal {F}_t\)-measurable. Moreover, thanks to inequality (4.1), the solution belongs to \(L^2(\Omega \times [0,T]).\) So, it belongs to \({\mathbb L}^F_T.\) \(\square \)
Lemma 4.3
Let \(T>0.\) Assume that (A1T) and (B2T) are satisfied. Then, for \(x>0\) (resp. \(x<0\)), we have
for any \(t\in [0,T]\) and \(\omega \in \Omega \) for which (B2T) is true.
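The bound (4.7) delivered by the Gronwall–Hartman argument in the proof can be sketched, for \(x>0\), as

```latex
Z_t(A_t,x) \;\ge\; x\, \exp \Big\{ -\int_0^t \gamma _s\, ds \Big\} \;>\; 0,
```

with the reversed inequality (and negativity) for \(x<0\).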
Proof
We know that \(Z_t(A_t,x)\) satisfies Eq. (4.6). Assume \(x>0.\) The negative case is analogous. Fix \(\omega \in \Omega \) satisfying (B2T). Assume there exists \(t_0\) such that \(Z_u(A_u,x)>0\) for all \(u<t_0\) and \(Z_{t_0}(A_{t_0},x)=0.\) On \([0,t_0]\), using (B2T), we have
Therefore, for \(t\in [0,t_0],\)
Hence, by Hartman [8] (Remark 1 of Theorem 4.1 in Section 3.3), we have
In particular, for \(t=t_0\),
and this is a contradiction. Therefore, \(Z_t(A_t,x)\) is positive for all \(t\in [0,T]\) and, consequently, (4.7) is satisfied for \(t\in [0,T]\), which proves the result. \(\square \)
Lemma 4.4
Let \(T>0\). Assume that (A2T) and (B3T) hold. Then, for all \(p\ge 2\) and \(x\in {{\mathbb {R}}},\) the process \(Z_t(A_t,x)\) belongs to \(L^p([0,T], {{\mathbb {D}}}_T^{1,p})\), and for \(r,t\in [0,T]\) we have
where
Remark 4.5
Note that (4.8) is the solution of the linear stochastic differential equation
See for example [2], Section 8.2.
Proof of Lemma 4.4
The proof is inspired by [14] (Section 2.2). Let \(c:=c_1\vee c_2\vee 1.\) Recall that \(c_1\) and \(c_2\) are finite constants thanks to (B1T) and (B3T), and that \(Z_t(A_t,x)\) satisfies Eq. (4.6).
We consider the Picard approximations of \(Z_t (A_t,x).\) For \(n=0\) we define
and we apply induction on n to define, for \(n\ge 1,\) the adapted and continuous process
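Consistently with the estimates that follow, the scheme (4.9) reads

```latex
Z_{t,(n+1)}(A_t,x) = x + \int_0^t L_{0,s}^{-1}\,
b\big( s,\, L_{0,s}\, Z_{s,(n)}(A_s,x)\big)\, ds,
\qquad t\in [0,T].
\tag{4.9}
```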
We divide the proof into two steps.
-
1.
In this first step we prove that for any \(n\ge 0,\) \(Z_{\cdot ,(n)} (A_{\cdot },x)\) is a continuous and adapted process bounded in \(L^p ([0,T]\times \Omega )\) for any \(p\ge 1,\) uniformly in n. Moreover, \(Z_{t,(n)} (A_t,x)\) converges to \(Z_t (A_t,x)\) with probability one, uniformly in \(t\in [0,T]\), and in \(L^p ([0,T]\times \Omega )\), for any \(p\ge 1.\)
Note that from (4.9) and (B2T), we have
$$\begin{aligned} |Z_{t,(n+1)}(A_t,x)|\le |x|+\int _0^t\gamma _s \, |Z_{s,(n)}(A_s,x)|ds,\quad t\in [0,T]. \end{aligned}$$Therefore, iterating this inequality, we have \(|Z_{t,(n)}(A_t,x)|\le |x|e^{c_1}\) for all \(n\in {{\mathbb {N}}}\) and \(t\in [0,T].\)
On the other hand, we have
$$\begin{aligned} |Z_{t,(1)}-Z_{t,(0)}|\le \int _0^t L^{-1}_{0,s}\ |b(s, L_{0,s} Z_{s,(0)}(A_s,x))| ds\le |x|\,\int _0^t \gamma _s ds, \end{aligned}$$and iterating again, we obtain
$$\begin{aligned} |Z_{t,(n+1)}-Z_{t,(n)}|\le & {} \int _0^tL^{-1}_{0,s}\ |b(s, L_{0,s} Z_{s,(n)}(A_s,x)) -b(s, L_{0,s} Z_{s,(n-1)}(A_s,x))| ds\\\le & {} \int _0^t \gamma _s |Z_{s,(n)}(A_s,x)-Z_{s,(n-1)}(A_s,x)| ds\\\le & {} \frac{|x|}{(n+1)!} (\int _0^t \gamma _s ds)^{n+1}. \end{aligned}$$Thus,
$$\begin{aligned} \sum _{n=0}^{\infty } |Z_{t,(n+1)}-Z_{t,(n)}|\le c_1 |x|\sum _{n=0}^{\infty }\frac{c_1^n}{n!}=c_1 e^{c_1} |x|<\infty \end{aligned}$$implies the statement.
-
2.
Now we want to check the differentiability of \(Z_t (A_t,x)\) in the Malliavin calculus sense. Using Lemma 1.5.3 in [14], it is enough to check that, for any \(n\ge 0\) and \(p\ge 2,\)
$$\begin{aligned} Z_{t,(n)}(A_t,x)\in L^p ([0,T], {{\mathbb {D}}}^{1,p}_T) \end{aligned}$$and
$$\begin{aligned} \sup _{n\ge 0}\sup _{0\le r\le T} {{\mathbb {E}}}\left( \sup _{r\le t\le T} |D_r Z_{t,(n)} (A_t,x)|^p\right) <\infty . \end{aligned}$$Note that \(Z_{t,(0)}(A_t,x)=x\in L^{p}([0,T], {{\mathbb {D}}}^{1,p}_T)\) for all \(p\ge 2.\) Now, assume that \(Z_{t,(n)}(A_t,x)\in L^p ([0,T], {{\mathbb {D}}}^{1,p}_T)\) for all \(p\ge 2.\) Then, using (4.9), (A2T), (B3T) and [10] (Lemma 2.2), we have, for \(r\le t,\)
$$\begin{aligned} D_r Z_{t,(n+1)}(A_t,x)= & {} \int _{r}^t (D_r L^{-1}_{0,s})\ b(s, L_{0,s} Z_{s,(n)}(A_s,x)) ds\\{} & {} + \int _{r}^t L^{-1}_{0,s} (\partial _x b)(s, L_{0,s} Z_{s,(n)}(A_s,x)) (D_r L_{0,s}) Z_{s,(n)}(A_s,x) ds\\{} & {} + \int _{r}^t (\partial _x b)(s, L_{0,s} Z_{s,(n)}(A_s,x)) D_r Z_{s,(n)}(A_s,x) ds\\{} & {} +\int _{r}^t L^{-1}_{0,s} D_r b(s,z,\omega )|_{z=L_{0,s} Z_{s,(n)}(A_s,x)} ds. \end{aligned}$$On the other hand, being \(Z_{\cdot ,(n)}(A_{\cdot },x)\) an adapted process, for any \(r>t\) we have
$$\begin{aligned} D_r Z_{t,(n)}(A_t,x)=0. \end{aligned}$$Now putting together the first two terms on the right hand side we have
$$\begin{aligned} |D_r Z_{t,(n+1)}(A_t,x)|\le & {} \int _{r}^t \gamma _s \cdot \left( |D_r L^{-1}_{0,s}|\,|L_{0,s}| +|L^{-1}_{0,s}|\,|D_r L_{0,s}|\right) \cdot |Z_{s,(n)}(A_s,x)| ds\\{} & {} + \int _{r}^t \gamma _s |D_r Z_{s,(n)}(A_s,x)| ds\\{} & {} +\int _{r}^t |L^{-1}_{0,s}|\cdot M(r,s)\cdot |L_{0,s}|\cdot |Z_{s,(n)}(A_s,x)|ds, \quad t\in [r,T]. \end{aligned}$$Defining
$$\begin{aligned} K(r,s):=|D_r L^{-1}_{0,s}|\,|L_{0,s}|+|L^{-1}_{0,s}|\,|D_r L_{0,s}| \end{aligned}$$and joining the first and the third term on the right hand side we obtain
$$\begin{aligned} |D_r Z_{t,(n+1)}(A_t,x)|\le & {} \int _r^t [\gamma _s K(r,s) +M(r,s)] |Z_{s,(n)}(A_s,x)| ds\\+ & {} \int _r^t \gamma _s |D_r Z_{s,(n)}(A_s,x)| ds, \quad t\in [r,T]. \end{aligned}$$Hence,
$$\begin{aligned} |D_r Z_{t,(n+1)}(A_t,x)|\le & {} \left( \left( \sup _{r\le s\le T} K(r,s)\right) \int _r^T \gamma _s ds+ \int _r^T M(r,s)ds\right) \nonumber \\{} & {} \sup _{0\le s\le T} |Z_{s,(n)}(A_s,x)|\nonumber \\+ & {} \int _r^t \gamma _s |D_r Z_{s,(n)}(A_s,x)| ds, \quad t\in [0,T]. \end{aligned}$$Consequently, using (B3T), Lemma 4.1 and Step 1, we have
$$\begin{aligned} |D_r Z_{t,(n+1)}(A_t,x)|\le |x|c e^{c}\Big (1+\sup _{r\le s\le T} K(r,s)\Big )\\ +\int _r^t \gamma _s |D_r Z_{s,(n)}(A_s,x)| ds, \quad t\in [r,T]. \end{aligned}$$Applying Gronwall’s Lemma with r and \(\omega \) fixed, we obtain
$$\begin{aligned} |D_r Z_{t,(n+1)}(A_t,x)|\le |x|c e^c g(r,\omega ,T)\exp \Big \{\int _r^t \gamma _s ds\Big \}\le |x|c e^{2c} g(r,\omega ,T),\qquad \end{aligned}$$(4.10)where \(g(r,\omega ,T):=1+\sup _{r\le s\le T} K(r,s).\) Note that the right-hand side of (4.10) is independent of n and \(t\in [r,T].\)
We know by Step 1 that \(Z_{t,(n+1)}(A_t,x)\in L^p(\Omega ).\) So, it remains only to check
$$\begin{aligned} \sup _{0\le r\le T}{{\mathbb {E}}}\left( \sup _{r\le t\le T} |D_r Z_{t,(n+1)}(A_t,x)|^p\right) <\infty , \end{aligned}$$uniformly in \(n\ge 0\).
Note that, by (4.10), we have
$$\begin{aligned} {{\mathbb {E}}}\left( \sup _{r\le t\le T} |D_r Z_{t,(n+1)}(A_t,x)|^p\right) \le |x|^p c^{p} e^{2cp} {{\mathbb {E}}}\left( |g(r,\omega ,T)|^p\right) , \end{aligned}$$for all \(n\in {{\mathbb {N}}}.\) Therefore, the problem reduces to check
$$\begin{aligned} \sup _{0\le r\le T}{{\mathbb {E}}}\left( \left[ 1+\sup _{r\le s\le T} K(r,s)\right] ^p\right) <\infty . \end{aligned}$$Using Hölder inequality, it is enough to see
$$\begin{aligned} \sup _{0\le r\le T} {{\mathbb {E}}}\left( \sup _{r\le s\le T} K(r,s)^p\right) <\infty , \end{aligned}$$which, by applying Hölder inequality again, is equivalent to check
$$\begin{aligned}{} & {} \sup _{0\le r\le T}{{\mathbb {E}}}\left( |a_r|^{2p}\right)<\infty ,\\{} & {} \sup _{0\le r\le T}{{\mathbb {E}}}\left( \sup _{r\le s\le T} \left| \int _r^s D_r\,a_u dW_u\right| ^{2p}\right) <\infty \end{aligned}$$and
$$\begin{aligned} \sup _{0\le r\le T}{{\mathbb {E}}}\int _r^T |a_u|^p\cdot |D_r\,a_u|^pdu<\infty . \end{aligned}$$The first and third statements are obvious from (A2T). The second one is true thanks to the Burkholder-Davis-Gundy inequality and (A2T). Therefore, \(Z_{t,(n)} (A_t,x)\) is a well-defined object in \({\mathbb D}^{1,p}_T\) and \(Z_{\cdot }(A_{\cdot },x)\in L^p([0,T], {\mathbb D}_T^{1,p}).\)
Finally, (4.8) follows from (4.6) and Remark 4.5. \(\square \)
5 Stability of the solution
Remember that Theorem 3.1, under Hypotheses (X1), (A2T) and (B1T), implies that there exists a unique solution of Eq. (3.1) in \(L^1(\Omega \times [0,T])\) for any \(T>0.\)
5.1 Auxiliary results
In this section we establish some auxiliary tools that we need to study the stability of the solution of Eq. (3.1).
Lemma 5.1
Let \(T>0.\) Assume (X2T), (A2T) and (B3T) hold. Then, \(Z_t(A_t,X_0(A_t))\) belongs to \(\mathbb {L}^{1,2,f}_T\) and for \(s>t\) we have
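Namely, the identity in question is the chain rule appearing as the first equality in (5.6) below:

```latex
D_s Z_t\big( A_t, X_0(A_t)\big)
= \partial _x Z_t\big( A_t, X_0(A_t)\big)\, \big( D_s X_0\big)(A_t),
\qquad s>t.
```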
Proof
By Lemma 4.2 the process \(t\mapsto Z_t(A_t,x)\) is in the space \(\mathbb {L}^{1,2,f}_T.\) Assume first that \(X_0\in {{\mathcal {S}}}.\) Proceeding as in Ocone and Pardoux [15] (proofs of Lemmas 2.3 and 2.4), together with (4.1) and (4.2), we obtain that for \(s>t,\)
where the last equality is a consequence of Buckdhan [5] (equality (2.2.26)). Hence, the result is satisfied due to Buckdhan [5] (Proposition 2.1.2) and (4.4). \(\square \)
Lemma 5.2
Let \(T>0.\) Assume (X2T), (A2T) and (B3T) hold. Let X be the solution of (3.1). Then, \(X\in L^p ([0,T], {\mathbb D}^{1,p}_T)\) for all \(p\ge 1.\)
Proof
Observe that (2.7), Propositions 1.3.8 and 1.5.5 in [14] and (A2T) establish that \(L_{0,\cdot }\in L^p([0,T], {{\mathbb {D}}}^{1,p}_T)\) for any \(p\ge 1.\) Hence, by Theorem 3.1 it is enough to show that \(Z_{\cdot }(A_{\cdot }, X_0(A_{\cdot }))\in L^p([0,T], {\mathbb D}^{1,p}_T)\) for all \(p\ge 1.\) Toward this end we first assume \(X_0\in {{\mathcal {S}}}.\) In this case, Lemmas 4.1 and 4.4, (4.8) together with the dominated convergence theorem, (A2T) and (B2T) yield that we can proceed as in the proof of Lemma 2.1 in [10] to see that \(Z_{\cdot }(A_{\cdot }, X_0(A_{\cdot }))\in L^p([0,T], {\mathbb D}^{1,p}_T)\) with
Hence, Buckdhan [5] (Proposition 2.1.2, Lemma 2.2.13, and (2.2.26)) yield, for any \(s,t\in [0,T],\)
Finally, the result follows from (A2T), (B2T), (4.4), (4.8), Lemma 4.1, Buckdhan [5] (Proposition 2.1.2) and the dominated convergence theorem. \(\square \)
For any \(\nu \in (0,1]\), we consider the Lyapunov function
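presumably, as in Khasminskii [9],

```latex
V_{\nu }(x) := |x|^{\nu }, \qquad x\in {\mathbb {R}}.
```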
The following result is the main tool for the study of the stability of the solution to (3.1).
Theorem 5.3
Let \(T>0.\) Assume Hypotheses (X3T), (A2T) and (B4T) hold. Let X be the solution of (3.1) given by (3.3) and
where \(\varepsilon :=\{\varepsilon _s, s\in [0,T]\}\) is a positive adapted process belonging to \(L^{\infty }(\Omega \times [0,T]).\) Then, for any \(t\in [0,T]\) we get
For simplicity we will write
with
Remark 5.4
Note that as a consequence of Lemmas 4.1 and 4.3 we have, for either \(X_0>0\) a.s. or \(X_0<0\) a.s.,
Proof of Theorem 5.3
Note that by (X2T) the random variable \(X_0\) belongs to \({{\mathbb {D}}}_T^{1,2}\) and hence there exists a sequence \(\{X_0^{(n)}\in \mathcal {S}, n\ge 1\}\) that converges to \(X_0\) in \(\mathbb {D}^{1,2}_T.\) By (X3T) we can assume that \(|X_0^{(n)}|>\frac{\eta }{2}\) for all \(n\in \mathbb {N}\), where \(\eta >0\) is given by Hypothesis (X3T).
Since \(a, {{\bar{b}}}\) and \(\varepsilon \) are processes in \(L_{a}^2(\Omega \times [0,T]),\) it is well known that we can consider three sequences \(\{a^{(n)}, n\ge 1\}\), \(\{{\bar{b}}^{(n)}, n\ge 1\}\) and \(\{\varepsilon ^{(n)}, n\ge 1\}\) of adapted processes of the form
with \(F_{i,n}, G_{i,n}, E_{i,n}\in \mathcal {S}\), such that
and
Moreover, observe that given (X2T), (A2T) and (B4T), it is straightforward to prove that \(||X^{(n)}_0||_{\infty }\), \(||a^{(n)}||_{L^{\infty }(\Omega \times [0,T])}\), \(||{\bar{b}}^{(n)}||_{L^{\infty } (\Omega \times [0,T])}\) and \(||{\varepsilon }^{(n)}||_{L^{\infty } (\Omega \times [0,T])}\) are bounded, respectively, by \(c||X_0||_{\infty },\) \(c||a||_{L^{\infty }(\Omega \times [0,T])},\) \(c||{\bar{b}}||_{L^{\infty }(\Omega \times [0,T])}\) and \(c||{\varepsilon }||_{L^{\infty }(\Omega \times [0,T])}\) for a certain generic constant \(c\ge 1.\)
But we are interested in approximating \(a, {{\bar{b}}}\) and \(\varepsilon \) by continuous processes. Towards this end, for each \(n\in \mathbb {N}\), define
and
We can consider functions \(k_i\in \mathcal {C}_c^{\infty }(\mathbb {R})\), with values in [0, 1], that approximate indicator functions in the following sense:
Then, we can change the previous processes \(\{a^{(n)}, n\ge 1\}\), \(\{{\bar{b}}^{(n)}, n\ge 1\}\) and \(\{\varepsilon ^{(n)}, n\ge 1\}\) by continuous and adapted versions of the form
with \(F_{i,n}, G_{i,n}, E_{i,n}\in \mathcal {S}\), such that
and
The fact that we can change \({1\!\!1}_{(t_i,t_{i+1}]}\) by \(k_i^{(n)}(t)\) is proved by the arguments used in Sect. 6.2.
The approximation of \(\phi \) is slightly more complicated. See Sect. 6.2 for details. In Sect. 6.2, we consider the adapted random field
where
with g smooth enough,
and \(H_{i,j}^{(n)}\in \mathcal {S}.\) Taking into account the construction in Appendix (Sect. 6.2) we can prove that \({\bar{\psi }}^{(n)}(t,x)\) is bounded and that \({\bar{\psi }}^{(n)}(t,x) \longrightarrow \partial _x^2 \phi (t,x)\) as n tends to \(+\infty \) for almost all \((\omega ,t,x)\in \Omega \times [0,T]\times \mathbb {R}\). Moreover, we can also check that \(\psi ^{(n)}(t,x) \longrightarrow \partial _x \phi (t,x)\) almost surely as n tends to \(+\infty \), that the function \({\bar{\psi }}^{(n)}(t,x)\) is uniformly bounded with respect to all the parameters (including n), and
for any compact \(K\subset \mathbb {R}.\) As a consequence, we also have
Now, we divide the proof into two steps. First, we prove the result using the simple processes defined above, and then we treat the general case.
-
1.
Here, we fix \(n\in \mathbb {N}\). Let \(Z^{(n)}\) be the solution to (3.2) when we replace a and b by \(a^{(n)}\) and \(b^{(n)}\), respectively. Note that replacing a by \(a^{(n)}\) implies replacing the operators \(A_{s,t}\) and \(T_t\) accordingly. Here, we also replace \(X_0\) by \(X_0^{(n)}\).
By Lemma 4.4, we have that \(Z_t^{(n)}(A^{(n)}_t,x) \in L^p([0,T]; \mathbb {D}^{1,p}_T)\) for any \(p>1\) and \(x\in \mathbb {R}\). Moreover, from Lemma 5.1, we also have, for \(s>t\),
$$\begin{aligned} D_s Z_t^{(n)}(A^{(n)}_t,X_0^{(n)}(A_t^{(n)}))= & {} \partial _x Z_t^{(n)}(A^{(n)}_t, X_0^{(n)}(A_t^{(n)}))\left( D_s X_0^{(n)}\right) (A_t^{(n)})\nonumber \\= & {} \exp \left\{ \int _0^t \partial _x b^{(n)}(u,L^{(n)}_{0,u} Z^{(n)}_u(A^{(n)}_t, X_0^{(n)}(A_t^{(n)}))) du\right\} \nonumber \\{} & {} \times \left( D_s X_0^{(n)}\right) (A_t^{(n)}), \end{aligned}$$(5.6)where the last equality follows from (4.5). Remember that, as a consequence of the definition of \(b^{(n)}\) we have that \(\partial _x b^{(n)}\) is bounded on \(\Omega \times [0,T]\times \mathbb {R}\). Hence we have that \(Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)}))\) is a bounded process because of Hypothesis (B2T), (3.2) and (4.1). The fact that \(Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)}))\in \mathbb {L}^F\cap L^\beta (\Omega \times [0,T])\), for any \(\beta >2,\) is not obvious since in Lemma 4.2 the initial condition is deterministic. The fact of belonging to \(\mathbb {L}^F\) can be proved by considering the approximation
$$\begin{aligned} \sum _{j=-m}^{m} \partial _x Z_t^{(n)}(A^{(n)}_t,x_j) \int _0^{X_0^{(n)}(A_t^{(n)})} {1\!\!1}_{(x_j,x_{j+1}]}(x) dx,\end{aligned}$$(5.7)and taking into account (4.5), Lemmas 4.2 and 4.4 and the assumptions on the coefficients.
Now it is easy to see that \(X_t^{(n)}=L^{(n)}_{0,t} \, Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)}))\) belongs to \(\mathbb {L}^F\) with
$$\begin{aligned} D_s X^{(n)}_t{} & {} = L^{(n)}_{0,t} \, D_s Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)})),\nonumber \\ D_r D_s X^{(n)}_t{} & {} = \left( D_rL^{(n)}_{0,t}\right) D_s Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)}))\nonumber \\{} & {} \qquad + L^{(n)}_{0,t} D_r D_s Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)})), \end{aligned}$$(5.8)\(s>t\) and for any \(r\in [0,T]\).
Now our goal is to use Remark 4 of Theorem 3 in Alòs-Nualart [1] in order to apply the Itô formula (3.2) of that paper.
Note first of all that the hypotheses on \(a^{(n)}\) imply
$$\begin{aligned} L^{(n)}_{0,t} \in L^p(\Omega \times [0,T]), \quad \text {for any}\ p>1, \end{aligned}$$(5.9)and (4.1) and the hypotheses on \(X_0^{(n)}\) imply that
$$\begin{aligned} \mathbb {E}\left( \int _0^T \left| a^{(n)}_s L^{(n)}_{0,s} Z^{(n)}_s(A^{(n)}_s,X_0^{(n)}(A_s^{(n)})) \right| ^2 ds\right) ^2<\infty . \end{aligned}$$(5.10)From (5.6) and the hypotheses on \({{\bar{b}}}^{(n)}\), \(\phi ^{(n)}\) and \(X_0^{(n)}\) it is clear that
$$\begin{aligned} \left| D_s Z^{(n)}_t(A^{(n)}_t,X_0^{(n)}(A_t^{(n)})) \right| \le C, \qquad s>t. \end{aligned}$$(5.11)So, the hypotheses on \(a^{(n)}\) and \(X_0^{(n)}\), together with (5.9) and (4.2), imply that
$$\begin{aligned} \int _0^T\left( \int _0^s \left| \mathbb {E} D_s\left( a^{(n)}_r L^{(n)}_{0,r} Z^{(n)}_r(A^{(n)}_r,X_0^{(n)}(A_r^{(n)}))\right) \right| ^2 dr\right) ^2 ds<\infty . \end{aligned}$$(5.12)We also need to study (5.8). We divide it into three parts. Hypotheses on \(X_0^{(n)}\) and \(a^{(n)}\) (in particular \(a^{(n)}\) is adapted), (5.9) and (5.11) give
$$\begin{aligned} \int _0^T\mathbb {E}\left( \int _0^T\int _0^s \left| \left( D_u a_r^{(n)}\right) L^{(n)}_{0,r} D_s Z^{(n)}_r(A^{(n)}_r,X_0^{(n)}(A_r^{(n)})) \right| ^2dr du\right) ^2ds<\infty ,\qquad \end{aligned}$$(5.13)and
$$\begin{aligned} \int _0^T\mathbb {E}\left( \int _0^T\int _0^s \left| a_r^{(n)}\left( D_u L^{(n)}_{0,r}\right) D_s Z^{(n)}_r(A^{(n)}_r,X_0^{(n)}(A_r^{(n)})) \right| ^2dr du\right) ^2ds<\infty . \qquad \end{aligned}$$(5.14)In order to deal with the remaining term we need to take into account the following
$$\begin{aligned} \begin{array}{l} \displaystyle a_r^{(n)}L^{(n)}_{0,r} D_u D_s Z^{(n)}_r(A^{(n)}_r, X_0^{(n)}(A_r^{(n)}))\\ \displaystyle \qquad = a_r^{(n)}L^{(n)}_{0,r} D_u \left[ \exp \left\{ \int _0^r \partial _x b^{(n)}(v,L^{(n)}_{0,v} Z^{(n)}_v(A^{(n)}_r,X_0^{(n)}(A_r^{(n)}))) dv\right\} \left( D_s X_0^{(n)}\right) (A_r^{(n)})\right] . \end{array} \end{aligned}$$The factor with \(D_u\left( D_s X_0^{(n)}\right) (A_r^{(n)})\) is bounded as before thanks to the hypotheses on \(a^{(n)}\) and \(X_0^{(n)}.\) On the other hand, we have
$$\begin{aligned} \begin{array}{l} \displaystyle a_r^{(n)}L^{(n)}_{0,r} D_u \left[ \exp \left\{ \int _0^r \partial _x b^{(n)}(v,L^{(n)}_{0,v} Z^{(n)}_v(A^{(n)}_r,X_0^{(n)}(A_r^{(n)}))) dv\right\} \right] \left( D_s X_0^{(n)}\right) (A_r^{(n)})\\ \displaystyle \qquad =A(r,u,s)+B(r,u,s), \end{array} \end{aligned}$$with
$$\begin{aligned} A(r,u,s)= & {} a_r^{(n)}L^{(n)}_{0,r} D_u \left[ \exp \left\{ \int _0^r {\bar{b}}^{(n)}_v dv\right\} \right] \\{} & {} \times \exp \left\{ \int _0^r \partial _x \phi ^{(n)}(v, L^{(n)}_{0,v} Z^{(n)}_v(A^{(n)}_r,X_0^{(n)}(A_r^{(n)})))dv\right\} \left( D_s X_0^{(n)}\right) (A_r^{(n)}),\\ B(r,u,s)= & {} a_r^{(n)}L^{(n)}_{0,r} \exp \left\{ \int _0^r {\bar{b}}^{(n)}_v dv\right\} \left( D_s X_0^{(n)}\right) (A_r^{(n)}) \\{} & {} \times D_u \left[ \exp \left\{ \int _0^r \partial _x \phi ^{(n)}(v, L^{(n)}_{0,v}Z^{(n)}_v(A^{(n)}_r,X_0^{(n)}(A_v^{(n)})))dv\right\} \right] . \end{aligned}$$Using similar arguments as before we can show
$$\begin{aligned} \int _0^T\mathbb {E}\left( \int _0^T\int _0^s \left| A(r,u,s)\right| ^2dr du\right) ^2ds<\infty . \end{aligned}$$(5.15)Using the hypotheses on \(a^{(n)}\), \({\bar{b}}^{(n)}\) and \(X_0^{(n)}\), the construction of \(\phi ^{(n)}\), (5.9) and (5.11), and arguing as in (5.12), we obtain
$$\begin{aligned} \int _0^T\mathbb {E}\left( \int _0^T\int _0^s \left| B(r,u,s)\right| ^2dr du\right) ^2ds<\infty . \end{aligned}$$(5.16)Notice that using \(a^{(n)}\), \({{\bar{b}}}^{(n)}\) and \(\varepsilon ^{(n)}\) we can also define
$$\begin{aligned} Y^{(n)}_t:=\int _0^t \left( {{\bar{b}}}^{(n)}_s-\frac{(a^{(n)}_s)^2}{2} +\varepsilon ^{(n)}_s\right) ds, \, \, t\in [0,T]. \end{aligned}$$Moreover, we can consider \(F_m(x,y):=\alpha _m(x)^{\nu }e^{-\nu y}\) where \(\alpha _m\) is an infinitely differentiable function such that \(\alpha _m(x)=|x|\) on \((-\frac{1}{m}, \frac{1}{m})^c\) and \(\frac{1}{2m}\le \alpha _m(x)\le \frac{1}{m}\) on \((-\frac{1}{m}, \frac{1}{m}).\) The expression (5.8), together with the bounds (5.10) and (5.12)–(5.16) (we can argue in a similar way for points (ii) and (iii) in Remark 4 of [1]), allows us to apply the Itô formula for the Skorohod integral (see [1]) and to obtain, for \(\frac{1}{m}\le \frac{\eta }{2},\)
$$\begin{aligned} \begin{array}{l} \displaystyle F_m\left( X^{(n)}_t, Y^{(n)}_t\right) =|X^{(n)}_0|^\nu +\int _0^t \partial _x F_m(X^{(n)}_s, Y^{(n)}_s) {{\bar{b}}}^{(n)}_sX^{(n)}_s ds\\ \displaystyle \qquad + \int _0^t \partial _x F_m(X^{(n)}_s, Y^{(n)}_s) \phi ^{(n)}(s,X^{(n)}_s) ds+ \int _0^t \partial _x F_m(X^{(n)}_s, Y^{(n)}_s)a^{(n)}_sX^{(n)}_s \delta W_s\\ \displaystyle \qquad + \int _0^t \partial _y F_m(X^{(n)}_s, Y^{(n)}_s)\left( {{\bar{b}}}^{(n)}_s -\frac{(a^{(n)}_s)^2}{2}+\varepsilon ^{(n)}_s\right) ds\\ \displaystyle \qquad + \frac{1}{2} \int _0^t \partial ^2_{x,x} F_m(X^{(n)}_s, Y^{(n)}_s) \left( a^{(n)}_sX^{(n)}_s\right) ^2 ds\\ \displaystyle \qquad + \int _0^t \partial ^2_{x,x} F_m(X^{(n)}_s, Y^{(n)}_s) a^{(n)}_s X^{(n)}_s D^-_sX^{(n)}_s ds.\end{array} \end{aligned}$$Taking into account the definition of \(F_m\) and (B4T) we have
$$\begin{aligned} \begin{array}{l} \displaystyle F_m\left( X^{(n)}_t, Y^{(n)}_t\right) =|X^{(n)}_0|^\nu +\nu \int _0^t \partial _x \alpha _m(X^{(n)}_s)\ \alpha _m(X^{(n)}_s)^{\nu -1} \ e^{-\nu Y^{(n)}_s}\ {{\bar{b}}}^{(n)}_sX^{(n)}_s ds\\ \displaystyle \qquad + \nu \int _0^t \partial _x \alpha _m(X^{(n)}_s)\ \alpha _m(X^{(n)}_s)^{\nu -1}\ e^{-\nu Y^{(n)}_s}\ \phi ^{(n)}(s,X^{(n)}_s) ds\\ \displaystyle \qquad + \nu \int _0^t\partial _x \alpha _m(X^{(n)}_s)\ \alpha _m(X^{(n)}_s)^{\nu -1}\ e^{-\nu Y^{(n)}_s}\ a^{(n)}_sX^{(n)}_s \delta W_s\\ \displaystyle \qquad -\nu \int _0^t \alpha _m(X^{(n)}_s)^\nu \ e^{-\nu Y^{(n)}_s}\ \left( {{\bar{b}}}^{(n)}_s-\frac{(a^{(n)}_s)^2}{2}+\varepsilon ^{(n)}_s\right) ds\\ \displaystyle \qquad + \frac{\nu }{2} \int _0^t \partial ^2_{x,x} \alpha _m(X^{(n)}_s)\ \alpha _m(X^{(n)}_s)^{\nu -1} \ e^{-\nu Y^{(n)}_s}\ \left( a^{(n)}_sX^{(n)}_s\right) ^2 ds\\ \displaystyle \qquad + \frac{\nu (\nu -1)}{2} \int _0^t \left( \partial _{x} \alpha _m(X^{(n)}_s)\right) ^2\ \alpha _m(X^{(n)}_s)^{\nu -2} \ e^{-\nu Y^{(n)}_s}\ \left( a^{(n)}_sX^{(n)}_s\right) ^2 ds\\ \displaystyle \qquad + \nu \int _0^t \partial ^2_{x,x} \alpha _m(X^{(n)}_s)\ \alpha _m(X^{(n)}_s)^{\nu -1} \ e^{-\nu Y^{(n)}_s}\ a^{(n)}_s X^{(n)}_s D^-_sX^{(n)}_s ds \\ \displaystyle \qquad + \nu (\nu -1) \int _0^t \left( \partial _{x} \alpha _m(X^{(n)}_s)\right) ^2\ \alpha _m(X^{(n)}_s)^{\nu -2} \ e^{-\nu Y^{(n)}_s}\ a^{(n)}_s X^{(n)}_s D^-_sX^{(n)}_s ds. \end{array} \end{aligned}$$(5.17)Multiplying the two sides of (5.17) by \(\ {1\!\!1}_{A_m}\) with
$$\begin{aligned} A_m=\left\{ \omega \in \Omega ; \inf _{r\in [0,T]} \left| X_r^{(n)}\right| >\frac{1}{m}\right\} , \end{aligned}$$and thanks to the definition of \(\alpha _m\), and the local property of the Lebesgue and Skorohod integrals (see Lemma 5.2 and Proposition 1.3.15 in [14]) we get
$$\begin{aligned}\begin{array}{l} \displaystyle {1\!\!1}_{A_m} F\left( X^{(n)}_t, Y^{(n)}_t\right) = {1\!\!1}_{A_m} \Bigg \{|X^{(n)}_0|^\nu +\nu \int _0^t |X^{(n)}_s|^\nu \ e^{-\nu Y^{(n)}_s}\ {{\bar{b}}}^{(n)}_s ds\\ \displaystyle \qquad + \nu \int _0^t |X^{(n)}_s|^\nu \ e^{-\nu Y^{(n)}_s}\ \frac{\phi ^{(n)}(s,X^{(n)}_s)}{X^{(n)}_s} ds+ \nu \int _0^t |X^{(n)}_s|^\nu \ e^{-\nu Y^{(n)}_s}\ a^{(n)}_s \delta W_s\\ \displaystyle \qquad -\nu \int _0^t |X^{(n)}_s|^\nu \ e^{-\nu Y^{(n)}_s}\ \left( {{\bar{b}}}^{(n)}_s-\frac{(a^{(n)}_s)^2}{2}+\varepsilon ^{(n)}_s\right) ds\\ \displaystyle \qquad + \frac{\nu (\nu -1)}{2} \int _0^t |X^{(n)}_s|^\nu \ e^{-\nu Y^{(n)}_s}\ \left( a^{(n)}_s\right) ^2 ds\\ \displaystyle \qquad + \nu (\nu -1) \int _0^t |X^{(n)}_s|^\nu \ e^{-\nu Y^{(n)}_s}\ a^{(n)}_s \frac{D^-_sX^{(n)}_s}{X^{(n)}_s} ds\Bigg \}.\end{array} \end{aligned}$$Using that \(a^{(n)}, {{\bar{b}}}^{(n)}\) and \(\phi ^{(n)}\) are adapted to the underlying filtration \({\mathbb {F}}\), (3.3), Lemmas 5.1 and 4.3, Hypothesis (B4T), (4.5), Lemma 2.6 in [12], Proposition 2.1.4 in [5] and the fact that \(D^-_s L^{(n)}_{0,s}=0\), we have
$$\begin{aligned} D^-_sX^{(n)}_s= X^{(n)}_s \cdot \frac{\partial _xZ^{(n)}_s (A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))}{Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0 (A^{(n)}_s))}(D_sX^{(n)}_0)(A^{(n)}_s). \end{aligned}$$(5.18)Noting that Lemma 4.3 and (3.3) imply
$$\begin{aligned}\lim _{m \rightarrow +\infty } {1\!\!1}_{\{\inf _{r\in [0,T]} |X_r^{(n)}|>\frac{1}{m}\}} =\ {1\!\!1}_{\{\inf _{r\in [0,T]} |X_r^{(n)}|>0\}}=1, \end{aligned}$$(5.18) allows us to write
$$\begin{aligned} \begin{array}{l} \displaystyle F\left( X^{(n)}_t, Y^{(n)}_t\right) =|X^{(n)}_0|^\nu + \nu \int _0^t F(X^{(n)}_s, Y^{(n)}_s) {{\bar{b}}}^{(n)}_s ds\\ \displaystyle \qquad + \nu \int _0^t F(X^{(n)}_s, Y^{(n)}_s) \frac{\phi ^{(n)}(s,X^{(n)}_s)}{X^{(n)}_s} ds+\nu \int _0^t F(X^{(n)}_s, Y^{(n)}_s)a^{(n)}_s \delta W_s\\ \displaystyle \qquad -\nu \int _0^t F(X^{(n)}_s, Y^{(n)}_s)\left( {{\bar{b}}}^{(n)}_s- \frac{(a^{(n)}_s)^2}{2}+\varepsilon ^{(n)}_s\right) ds\\ \displaystyle \qquad + \frac{\nu (\nu -1)}{2} \int _0^t F(X^{(n)}_s, Y^{(n)}_s) \Bigg [2a^{(n)}_s \frac{\partial _xZ^{(n)}_s (A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))}{Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))} (D_sX^{(n)}_0)(A^{(n)}_s)\\ \displaystyle \qquad \qquad + \left( a^{(n)}_s\right) ^2\Bigg ]ds.\end{array} \end{aligned}$$So,
$$\begin{aligned} \begin{array}{l} \displaystyle F\left( X^{(n)}_t, Y^{(n)}_t\right) =|X^{(n)}_0|^\nu +\nu \int _0^t F(X^{(n)}_s, Y^{(n)}_s)a^{(n)}_s \delta W_s\\ \displaystyle \qquad +\nu \int _0^t F(X^{(n)}_s, Y^{(n)}_s) \left[ \nu \frac{\left( a^{(n)}_s\right) ^2}{2} +\frac{\phi ^{(n)}(s,X^{(n)}_s)}{X^{(n)}_s}-\varepsilon ^{(n)}_s\right] ds\\ \displaystyle \qquad +\nu (\nu -1)\int _0^t F(X^{(n)}_s, Y^{(n)}_s) a^{(n)}_s \frac{\partial _xZ^{(n)}_s (A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))}{Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))} (D_sX^{(n)}_0)(A^{(n)}_s)ds. \end{array} \end{aligned}$$Note that
$$\begin{aligned} F(X^{(n)}_s, Y^{(n)}_s)\ a^{(n)}_s=\left| Z^{(n)}_s(A_s^{(n)}, X_0^{(n)}(A_s^{(n)})) \right| ^{\nu } (L^{(n)}_{0,s})^{\nu }e^{-\nu Y_s^{(n)}} a^{(n)}_s \end{aligned}$$is an element of \(L^{1,2,f}_T\) (using the same argument applied to (5.7), together with Lemmas 4.3, 4.4 and 5.2). But this is not enough to show that the expectation of the Skorohod integral is zero. In order to prove it, we note that \(a^{(n)}, L_{0,s}^{(n)}, e^{- Y_s^{(n)}} \in L^p([0,T];\mathbb {D}^{1,p}_T)\) for all \(p>1\). From Lemma 4.3,
$$\begin{aligned} \left| Z^{(n)}_s(A_s^{(n)}, X_0^{(n)}(A_s^{(n)}))\right| \ge \frac{\eta }{2} \exp \left( -\int _0^T\Vert \gamma _s\Vert _\infty ds\right) . \end{aligned}$$As before, with \(\frac{1}{m} \le \frac{\eta }{2} \exp \left( -\int _0^T\Vert \gamma _s\Vert _\infty ds\right) \), we have
$$\begin{aligned} \left| Z^{(n)}_s(A_s^{(n)}, X_0^{(n)}(A_s^{(n)}))\right| ^{\nu }=\alpha _m \left( Z^{(n)}_s(A_s^{(n)}, X_0^{(n)}(A_s^{(n)}))\right) ^\nu . \end{aligned}$$Note that \(\partial _x \left( \alpha _m(x)^\nu \right) =\nu \ \alpha _m(x)^{\nu -1} \partial _x \alpha _m(x)\). For the case of a positive initial condition, we obtain \(\alpha _m(x)^{\nu -1}\le (2m)^{1-\nu }\) and we also know that \(\partial _x \alpha _m\) is bounded. So, the proof of Lemma 5.2 implies that \(Z^{(n)}_\cdot (A_\cdot ^{(n)}, X_0^{(n)}(A_\cdot ^{(n)}))\in L^p([0,T];\mathbb {D}^{1,p}_T)\) for all \(p>1\).
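For completeness, the reason the Skorohod integral term disappears after taking expectations is the duality relation defining \(\delta\); a minimal sketch, assuming the duality relation (2.2) has its standard form:

```latex
% For u \in \mathrm{Dom}\,\delta, the duality relation (2.2) reads, for smooth G,
%   \mathbb{E}[G\,\delta(u)] = \mathbb{E}\int_0^T u_s\, D_s G\, ds .
% Taking G \equiv 1 (so that D_s G = 0) yields
\mathbb{E}\left[\delta(u)\right]
  = \mathbb{E}\left[1\cdot\delta(u)\right]
  = \mathbb{E}\int_0^T u_s \, D_s 1 \, ds = 0 .
```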
Therefore, taking expectations in the penultimate equality, we obtain the result for the particular case of the simple processes introduced above:
$$\begin{aligned} \begin{array}{l} \displaystyle \mathbb {E}\left[ F\left( X^{(n)}_t, Y^{(n)}_t\right) \right] =\mathbb {E}|X^{(n)}_0|^\nu +\nu \ \mathbb {E} \int _0^t F(X^{(n)}_s, Y^{(n)}_s)\eta ^{(n)}(s)ds \end{array} \end{aligned}$$where
$$\begin{aligned} \eta ^{(n)}(s)= & {} \nu \frac{\left( a^{(n)}_s\right) ^2}{2}+\frac{\phi ^{(n)} (s,X^{(n)}_s)}{X^{(n)}_s}-\varepsilon ^{(n)}_s\\{} & {} +(\nu -1)a^{(n)}_s \frac{\partial _xZ^{(n)}_s (A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))}{Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))} (D_sX^{(n)}_0)(A^{(n)}_s). \end{aligned}$$
2.
In order to prove the general case, we take limits in the last equality as \(n\rightarrow \infty \) to show
$$\begin{aligned} {{\mathbb {E}}} \left[ F\left( X_t, Y_t\right) \right] = {{\mathbb {E}}}\left( |X_0|^\nu \right) +\nu \ \mathbb {E} \int _0^t F(X_s, Y_s) \eta (s)ds, \end{aligned}$$where \(\eta \) is introduced in (5.4). This claim is detailed as follows.
First of all, we have the convergence of \(\mathbb {E}(|X^{(n)}_0|^\nu )\) to \(\mathbb {E}(|X_0|^\nu )\) as a consequence of the fact that by construction \(X_0^{(n)}\) converges in \(L^2(\Omega )\) to \(X_0\).
Regarding the second term, it is enough to show that
$$\begin{aligned} \lim _{n\rightarrow \infty }{{\mathbb {E}}}\int _0^T |F(X_s, Y_s) \eta (s)-F(X^{(n)}_s, Y^{(n)}_s) \eta ^{(n)}(s)|ds=0 \end{aligned}$$in order to finish the proof. To do so, we use the inequality
$$\begin{aligned} {{\mathbb {E}}}\int _0^T |F(X_s, Y_s) \eta (s)-F(X^{(n)}_s, Y^{(n)}_s) \eta ^{(n)}(s)|ds\le B_{1,n}+B_{2,n}, \end{aligned}$$with
$$\begin{aligned} B_{1,n}={{\mathbb {E}}}\int _0^T |F(X_s, Y_s)-F(X^{(n)}_s, Y^{(n)}_s)| |\eta (s)|ds \end{aligned}$$and
$$\begin{aligned} B_{2,n}={{\mathbb {E}}}\int _0^T F(X^{(n)}_s, Y^{(n)}_s) |\eta (s)-\eta ^{(n)}(s)|ds. \end{aligned}$$We first deal with \(B_{1,n}\). Note that from Lemma 4.3, Remark 5.4 and Hypotheses (X3T), (A2T) and (B4T), the process \(\eta \) is bounded. That is, there exists a constant \(C>0\) such that
$$\begin{aligned} \sup _{(\omega ,t)\in \Omega \times [0,T]}|\eta (t)| \le C. \end{aligned}$$Therefore, (3.3), (4.3), Lemma 4.1, (6.20), the hypotheses on the coefficients \(a, \varepsilon \) and b, the definition of their approximations \(a^{(n)}, \varepsilon ^{(n)}\) and \(b^{(n)}\), and the Cauchy-Schwarz inequality yield
$$\begin{aligned} B_{1,n}\le & {} C\ {{\mathbb {E}}}\int _0^T | F(X_s, Y_s)-F(X^{(n)}_s, Y^{(n)}_s)|ds\nonumber \\\le & {} C\ {{\mathbb {E}}}\int _0^T \left| L_{0,s}^{\nu }-(L_{0,s}^{(n)})^{\nu }\right| ds + C\ {\mathbb E}\int _0^T \left( L^{(n)}_{0,s}\right) ^{\nu } \left| e^{-\nu Y_s}-e^{-\nu Y_s^{(n)}}\right| ds\nonumber \\{} & {} + C\ {{\mathbb {E}}}\int _0^T \left( L^{(n)}_{0,s}\right) ^{\nu } \left| |Z_s(A_s, X_0(A_s))|^{\nu }-|Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))|^{\nu }\right| ds\nonumber \\\le & {} C \ {{\mathbb {E}}}\int _0^T \left| L_{0,s}^{\nu } -(L_{0,s}^{(n)})^{\nu }\right| ds\nonumber \\{} & {} + C\left( {{\mathbb {E}}}\int _0^T \left( L^{(n)}_{0,s}\right) ^{2\nu } ds\right) ^{\frac{1}{2}} \left( {{\mathbb {E}}}\int _0^T \left| e^{-\nu Y_s} -e^{-\nu Y_s^{(n)}}\right| ^2ds\right) ^{\frac{1}{2}}\nonumber \\{} & {} + C \left( {{\mathbb {E}}}\int _0^T \left( L^{(n)}_{0,s}\right) ^{2\nu }ds\right) ^{\frac{1}{2}}\nonumber \\{} & {} \quad \times \left( {{\mathbb {E}}}\int _0^T \left| |Z_s(A_s, X_0(A_s))|^{\nu }-|Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))|^{\nu }\right| ^2 ds\right) ^{\frac{1}{2}}.\nonumber \\ \end{aligned}$$(5.19)We claim that our hypotheses allow us to show that all these terms converge to zero. Indeed, the last summand goes to zero as \(n \rightarrow +\infty \) due to (6.12) in Sect. 6.3, and the integral of \(|L_{0,s}^{\nu }-(L_{0,s}^{(n)})^{\nu }|\) also tends to zero because of the properties of Itô's integral. Moreover, it is easy to see that the second summand converges to zero.
Concerning the second term we can write
$$\begin{aligned} B_{2,n} ={{\mathbb {E}}}\int _0^T \left( L^{(n)}_{0,s}\right) ^{\nu } \left| Z^{(n)}_s(A_s^{(n)}, X_0^{(n)}(A_s^{(n)}))\right| ^{\nu }\ e^{-\nu Y_s^{(n)}}\ \left| \eta (s)-\eta ^{(n)}(s)\right| ds.\end{aligned}$$Note that by Lemma 4.1, the hypotheses on the coefficients and the Cauchy-Schwarz inequality, we have
$$\begin{aligned} B_{2,n}\le C \left( {{\mathbb {E}}}\int _0^T \left( L^{(n)}_{0,s}\right) ^{2\nu } ds\right) ^{\frac{1}{2}}\left( {{\mathbb {E}}}\int _0^T \left| \eta (s) -\eta ^{(n)}(s)\right| ^2 ds\right) ^{\frac{1}{2}}, \end{aligned}$$because \(|Z^{(n)}_s(A_s^{(n)}, X_0^{(n)}(A_s^{(n)}))|^{\nu } \ e^{-\nu Y_s^{(n)}}\le C\) (see (6.20)). The first factor on the right-hand side is also bounded. Finally, in a similar way as in Sect. 6.2, we may take a subsequence of \(\eta ^{(n)}(\cdot )\), denoted by the same subindex for simplicity, and then,
$$\begin{aligned} \lim _{n\rightarrow \infty }{{\mathbb {E}}}\int _0^T \left| \eta (s)-\eta ^{(n)}(s)\right| ^2 ds=0 \end{aligned}$$(5.20)thanks to (3.3) and (4.5), Remark 5.4, the assumptions on the coefficient \(a^{(n)}\) and the initial condition \(X_0^{(n)}\), the definition of \(\phi ^{(n)}\), and Sects. 6.2 and 6.3. Namely, we obtain that there is a constant \(C>0\) such that
$$\begin{aligned} \left| \eta (s)-\eta ^{(n)}(s)\right| ^2\le C\sum _{i=1}^4 \eta _i^{(n)}(s), \end{aligned}$$where
$$\begin{aligned} \eta _1^{(n)}(s)= & {} \left| a^2_s-\left( a^{(n)}_s\right) ^2\right| ^2,\\ \eta _2^{(n)}(s)= & {} \left| \frac{\phi ^{(n)}(s,X^{(n)}_s)}{X^{(n)}_s}-\frac{\phi (s,X_s)}{X_s}\right| ^2,\\ \eta _3^{(n)}(s)= & {} \left| \varepsilon _s-\varepsilon ^{(n)}_s\right| ^2 \end{aligned}$$and
$$\begin{aligned} \eta _4^{(n)}(s)= & {} \Bigg |a_s\frac{\partial _xZ_s (A_s, X_0(A_s))}{Z_s(A_s, X_0(A_s))} (D_sX_0)(A_s)\\{} & {} - a^{(n)}_s\frac{\partial _xZ^{(n)}_s (A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))}{Z^{(n)}_s(A^{(n)}_s, X^{(n)}_0(A^{(n)}_s))} (D_sX^{(n)}_0)(A^{(n)}_s)\Bigg |^2. \end{aligned}$$Using the construction of \(a^{(n)}\) and \(\varepsilon ^{(n)}\), it is obvious that
$$\begin{aligned} \lim _{n\rightarrow \infty }{{\mathbb {E}}}\int _0^T\left[ \eta _1^{(n)}(s) +\eta _3^{(n)}(s)\right] ds=0. \end{aligned}$$In order to deal with \(\eta _2^{(n)}\), we divide it into three parts as follows:
$$\begin{aligned} \eta _2^{(n)}(s)= & {} \frac{1}{|X^{(n)}_sX_s|^2} \left[ X_s\ \phi ^{(n)}(s,X^{(n)}_s) - X^{(n)}_s\ \phi (s,X_s) \right] ^2\\\le & {} \frac{C}{|X^{(n)}_sX_s|^2}\Bigg [\left| X_s\right| ^2 \left( \phi ^{(n)}(s,X^{(n)}_s) - \phi (s,X_s)\right) ^2 +\ \phi (s,X_s)^2 \left( X_s-X^{(n)}_s\right) ^2\Bigg ]\\\le & {} \frac{C}{|X^{(n)}_sX_s|^2}\Bigg [\left| X_s\right| ^2\left\{ \left( \phi ^{(n)}(s,X^{(n)}_s) -\phi (s,X^{(n)}_s)\right) ^2 +\left( \phi (s,X^{(n)}_s)- \phi (s,X_s)\right) ^2\right\} \\{} & {} \quad \qquad +\ \phi (s,X_s)^2 \left( X_s-X^{(n)}_s\right) ^2\Bigg ]. \end{aligned}$$Now, (3.3), (X3T), the construction of \(X^{(n)}_s\), Lemmas 4.1 and 4.3, and the arguments used in the study of \(B_{1,n}\) in (5.19), (6.11), (6.25) and (6.26) imply that
$$\begin{aligned} \lim _{n\rightarrow \infty }{{\mathbb {E}}}\int _0^T\eta _2^{(n)}(s) ds=0. \end{aligned}$$The convergence of the fourth term is more involved. It is a consequence of the construction of \(a^{(n)}\) and \(X_0^{(n)}\), Remark 5.4, Equality (4.5), the approximation of Z by \(Z^{(n)}\) given in Sect. 6.3, and an argument similar to that of (6.14), involving the second derivative of \(X_0\), in order to study the difference between \((D_sX_0)(A_s)\) and \((D_sX^{(n)}_0)(A^{(n)}_s)\). Finally, the most delicate point is to show that
$$\begin{aligned} \lim _{n\rightarrow \infty }{{\mathbb {E}}} \left[ \int _0^T \left| \partial _x \phi (s,X_s)- \partial _x\phi ^{(n)}(s,X^{(n)}_s)\right| ^2ds\right] =0. \end{aligned}$$This fact is true because of (4.5) and the mean value theorem. We can see that it holds by applying Sect. 6.2. Indeed,
$$\begin{aligned} {{\mathbb {E}}} \left[ \int _0^T \left| \partial _x \phi (s,X_s)- \partial _x\phi ^{(n)}(s,X^{(n)}_s)\right| ^2ds\right] \le 2{{\mathbb {E}}}\int _0^T\left( \eta _{4,1}^{(n)}(s)+\eta _{4,2}^{(n)}(s)\right) ds. \end{aligned}$$Here
$$\begin{aligned} {{\mathbb {E}}}\int _0^T \eta _{4,1}^{(n)}(s)ds= & {} {{\mathbb {E}}} \left[ \int _0^T \left| \partial _x \phi (s,X_s)- \partial _x\phi ^{(n)}(s,X_s)\right| ^2ds\right] ,\\ {{\mathbb {E}}}\int _0^T\eta _{4,2}^{(n)}(s)ds= & {} {{\mathbb {E}}} \left[ \int _0^T \left| \partial _x\phi ^{(n)}(s,X_s)- \partial _x\phi ^{(n)}(s,X^{(n)}_s)\right| ^2ds\right] . \end{aligned}$$The construction of \(\phi ^{(n)}\), together with (B4T), yields
$$\begin{aligned} {{\mathbb {E}}}\int _0^T\eta _{4,2}^{(n)}(s)ds\le {{\mathbb {E}}} \left[ \int _0^T \left| \int _{X_s^{(n)}}^{X_s} \partial _x^2 \phi ^{(n)}(s,y) dy\right| ^2ds\right] \le C {{\mathbb {E}}} \left[ \int _0^T \left| X_s - X_s^{(n)} \right| ^2ds\right] . \end{aligned}$$Section 6.3 implies that this quantity converges to zero as n tends to \(+\infty \). In order to study the other term we observe
$$\begin{aligned} {{\mathbb {E}}}\int _0^T \eta _{4,1}^{(n)}(s)ds={{\mathbb {E}}} \left[ \int _0^T \left| \int _0^{X_s} \left[ \partial _x^2 \phi (s,y)- \partial _x^2\phi ^{(n)}(s,y)\right] dy\right| ^2ds\right] , \end{aligned}$$(5.21)since \(\partial _x \phi ^{(n)}(s,0) =\partial _x \phi (s,0)\) by definition. Also by definition, \(\left| \partial _x^2 \phi (s,y)- \partial _x^2\phi ^{(n)}(s,y)\right| \) converges to zero almost surely in \((\omega ,s,y)\in \Omega \times [0,T]\times \mathbb {R}\), and it is bounded by a constant (see (B4T) and Sect. 6.2). Consequently, the dominated convergence theorem leads to
$$\begin{aligned} \lim _{n\rightarrow \infty }{{\mathbb {E}}}\int _0^T\eta _4^{(n)}(s) ds=0. \end{aligned}$$Thus, the proof of the theorem is finished.\(\square \)
5.2 Main results
In this section \(X(\cdot , X_0)\) stands for the unique solution to Eq. (3.1), under Hypotheses (X3T), (A2T) and (B4T). Now, we introduce three types of stability for this solution.
Definition 5.5
Assume that \(X_0\) satisfies (X3T). We say that \(X(\cdot , 0)\equiv 0\) is stable in probability if, for every \(\rho >0\) and \(\delta >0\), there exists \(r>0\) such that
for any \(X_0\) satisfying
Remark 5.6
Note that if we only consider deterministic initial conditions in Eq. (3.1), then the last definition agrees with the usual stability in probability for Eq. (3.1). See Section 1.5 in Khasminskii [9]. The same happens if \(X_0\) is a random variable independent of the Brownian filtration that satisfies (X3T). In this case we can assume that \(X_0\) is \(\mathcal{F}_0\)-measurable, so \(D_s X_0=0\) for all \(s\ge 0\) and (5.22) reduces to \(||X_0||_{\infty }\le r,\) the usual condition in the deterministic case (in addition to the requirement that its absolute value be bounded below by a positive constant).
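In this situation the stability criterion itself simplifies; a sketch, assuming \(\eta\) in (5.4) has the same form as its approximating version \(\eta^{(n)}\) appearing in the proof above:

```latex
% With D_s X_0 \equiv 0 the Malliavin correction term in (5.4) vanishes:
\eta(s) = \nu\,\frac{a_s^2}{2} + \frac{\phi(s,X_s)}{X_s} - \varepsilon_s
        + (\nu-1)\,a_s\,
          \frac{\partial_x Z_s(A_s, X_0(A_s))}{Z_s(A_s, X_0(A_s))}\,
          \underbrace{(D_s X_0)(A_s)}_{=0}
      = \nu\,\frac{a_s^2}{2} + \frac{\phi(s,X_s)}{X_s} - \varepsilon_s ,
% which is the classical (non-anticipating) stability coefficient.
```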
Definition 5.7
Assume that \(X_0\) satisfies (X3T) and \(p>0.\) We say that \(X(\cdot , 0)\equiv 0\) is exponentially p-stable if there exist constants \(A, r, \alpha >0\) such that
for any \(X_0\) satisfying
Remark 5.8
As an example of an initial condition satisfying this definition, we can consider \(X_0=\eta \exp \{\phi (F)\}\) with \(\eta \in \mathbb {R}-\{0\}\) and \(F=\int _0^\infty h(s) \delta W_s\), where \(\Vert h\Vert _\infty \) is assumed to be small enough and \(\phi '\) is bounded.
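To see why this example works, a short computation, assuming the Malliavin chain rule and that \(D_sF=h(s)\) for the integral \(F=\int_0^\infty h(s)\,\delta W_s\) of a deterministic h:

```latex
D_s X_0 = \eta\, e^{\phi(F)}\, \phi'(F)\, D_s F
        = X_0\, \phi'(F)\, h(s),
\qquad\text{so}\qquad
\sup_{(s,\omega)} \frac{|D_s X_0|}{|X_0|}
  \le \Vert \phi'\Vert_\infty\, \Vert h\Vert_\infty ,
% which is small whenever \Vert h\Vert_\infty is small.
```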
Definition 5.9
Suppose that \(X_0\) satisfies (X3T). We say that \(X(\cdot , 0)\equiv 0\) is exponentially stable in probability if, for a given \(\xi >0\), there are constants \(A, r, \alpha >0\) such that
for any \(X_0\) satisfying (5.23)
Remark 5.10
Note that the exponential p-stability implies
and the exponential stability in probability implies that
Theorem 5.11
Suppose that a, b and \(X_0\) satisfy (A2T), (B4T) and (X3T) for any \(T>0,\) respectively. Moreover, assume that \(X_0\) satisfies (5.22) and that
for a constant \(k>0\) and some positive adapted process \(\varepsilon \) such that
for some \(\nu \in (0,1]\), \(\delta _t\) defined in (B4T), \(c_1\) in (B1T) and r in (5.22). Then, the solution to equation (3.1) is stable in probability.
Proof. Lemmas 4.1 and 4.3, together with (5.3), (5.4) and (5.25), yield
Indeed, note that
and consequently, (5.25) and (5.22) give that (5.26) holds.
Therefore, for \(\rho >0\),
where we have used (5.24) and the definition of the process Y. Thus, (5.26) gives
Hence, choosing \({{\mathbb {E}}}(|X_0|^{\nu })\) small enough, the result holds. \(\square \)
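For reference, the elementary estimate behind passing from moment bounds to probability bounds in this argument is Markov's inequality applied to \(|X_t|^{\nu}\):

```latex
\mathbb{P}\left( |X_t| \ge \rho \right)
  = \mathbb{P}\left( |X_t|^{\nu} \ge \rho^{\nu} \right)
  \le \rho^{-\nu}\, \mathbb{E}\left( |X_t|^{\nu} \right),
\qquad \nu \in (0,1],\ \rho > 0 .
```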
Remark 5.12
Note also that if \(\varepsilon _s=\delta _s+\epsilon \) for a certain \(\epsilon >0\) and a is bounded on \((0,+\infty )\times \Omega \), we can always find positive constants \(\nu \) and r small enough such that (5.26) holds. So, a sufficient condition for the theorem to hold is that \(\delta _s+\epsilon \) satisfies (5.24).
We also have the following stability criterion.
Theorem 5.13
Assume (A2T), (B4T) and (X3T) are satisfied for any \(T>0\) and that (5.23) and (5.24) hold. Assume also that there exists a strictly positive constant \(k_0\) such that
for all \(t\ge 0\) and some \(\nu \in (0,1].\) Then, the solution to Eq. (3.1) is exponentially \(\nu \)-stable.
Remark 5.14
Note that in comparison with Theorem 5.11, now the condition is (5.27) instead of (5.25).
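To illustrate the type of conclusion in the classical adapted case (constant coefficients \(a,b\), deterministic \(X_0\), as in Khasminskii's setting, where no anticipative calculus is needed), the following sketch checks numerically that \(\mathbb {E}|X_t|^{\nu }=|X_0|^{\nu }\exp \{\nu (b+(\nu -1)a^2/2)t\}\) for the linear equation \(dX_t=bX_t\,dt+aX_t\,dW_t\), so that exponential \(\nu\)-stability holds exactly when \(b+(\nu -1)a^2/2<0\). The code and the parameter values are ours, not taken from the paper:

```python
import math
import random

# Linear SDE dX_t = b*X_t dt + a*X_t dW_t with constant a, b and deterministic
# X_0 has the explicit solution X_t = X_0 * exp((b - a^2/2)*t + a*W_t), hence
# E|X_t|^nu = |X_0|^nu * exp(nu*(b + (nu-1)*a^2/2)*t)  (lognormal moment).

def nu_moment_exact(x0, a, b, nu, t):
    """Closed-form nu-th absolute moment of geometric Brownian motion."""
    return abs(x0) ** nu * math.exp(nu * (b + (nu - 1) * a ** 2 / 2) * t)

def nu_moment_mc(x0, a, b, nu, t, n_paths=100_000, seed=0):
    """Monte Carlo estimate of E|X_t|^nu using the exact solution map."""
    rng = random.Random(seed)
    drift = (b - a ** 2 / 2) * t
    acc = 0.0
    for _ in range(n_paths):
        w = rng.gauss(0.0, math.sqrt(t))
        acc += abs(x0 * math.exp(drift + a * w)) ** nu
    return acc / n_paths

# Parameters chosen so that b + (nu-1)*a^2/2 = -0.45 < 0: nu-stability holds.
x0, a, b, nu, t = 1.5, 1.0, -0.2, 0.5, 1.0
exact = nu_moment_exact(x0, a, b, nu, t)
mc = nu_moment_mc(x0, a, b, nu, t)
print(f"exact E|X_t|^nu = {exact:.4f}, Monte Carlo = {mc:.4f}")
```

Note that the moment decays even though \(b-a^2/2<0<b+a^2(\nu-1)/2\) is not required termwise; only the combined exponent matters.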
Proof of Theorem 5.13
In order to apply Theorem 4.1 in Hartman [8] (p. 26) we use the approximation \(X^{(n)}_\cdot \), because if we use the original \(X_\cdot \), the derivative of \({{\mathbb {E}}}F(X_\cdot , Y_\cdot )\) may fail to be continuous. Using the same arguments given in the proof of Theorem 5.3, we have that there exists a sequence \(\{F(X^{(n)}_t, Y^{(n)}_t), n\ge 1\}\) that converges to \(F(X_t, Y_t)\) in \(L^1(\Omega \times [0,T])\) (see the study of (5.19)) and satisfies
Now, the goal is to apply Theorem 4.1 in Hartman [8]. Borrowing its notation, we define a function \(U(t,u)=-k_1\nu u\), on \((0,T)\times {{\mathbb {R}}}\). Then, the solution of \(u^{'}(t)=U(t,u)\) in [0, T] is \(u(t)=u(0)e^{-k_1\nu t}.\) Moreover, if we define \(v(t)={{\mathbb {E}}}[F(X^{(n)}_t, Y^{(n)}_t)]\), since \(\eta ^{(n)}(\cdot )\) is continuous thanks to the definitions of all the coefficients and the constructions of \(\phi ^{(n)}\), \(X^{(n)}\) and \(Z^{(n)}\), we have
Furthermore,
So, using Proposition 2.1.2 in [5], (5.20), (5.27) and proceeding as in the proof of Theorem 5.11, we have that there exists \(0<k_1<k_0\) such that, for any n large enough,
Then, defining \(u(0)=v(0)={{\mathbb {E}}}(|X_0^{(n)}|^{\nu })\) and applying Theorem 4.1 in [8] we have
for any \(t\in [0,T].\) Letting \(n\rightarrow \infty \) in (5.28) we have
Finally, (5.24) allows us to get
which implies the desired result. \(\square \)
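Schematically, the comparison argument from Hartman [8] used above can be summarized as follows (a sketch in the notation of this proof):

```latex
% If v(t) = \mathbb{E}[F(X_t^{(n)}, Y_t^{(n)})] is continuous and satisfies
% the differential inequality  v'(t) \le U(t, v(t)) = -k_1 \nu \, v(t),
% while u solves  u'(t) = -k_1 \nu \, u(t) with u(0) = v(0), then
v(t) \le u(t) = v(0)\, e^{-k_1 \nu t}
             = \mathbb{E}\left(|X_0^{(n)}|^{\nu }\right) e^{-k_1 \nu t},
\qquad t \in [0,T] .
```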
An immediate consequence of the previous theorem is the following result:
Corollary 5.15
Under the hypotheses of Theorem 5.13, the solution to equation (3.1) is exponentially stable in probability.
Proof
Observe that for any \(\rho >0,\)
Moreover, we also have the following result:
Theorem 5.16
Assume that (A2T), (B4T) and (X3T) hold for any \(T>0.\) Also assume that (5.24) is satisfied and that for some \(\nu \in (0,1]\) there exists \(\eta <0\) such that
Then, the solution to Eq. (3.1) is exponentially \(\nu -\)stable and exponentially stable in probability, for any initial condition \(X_0\in {{\mathbb {D}}}^{1,2}\) such that \(\sup _{(s,\omega )\in {{\mathbb {R}}}_{+}\times \Omega }\{\frac{|D_s X_0|}{|X_0|}\}\) is small enough.
Proof
The result is an immediate consequence of (5.3). \(\square \)
6 Appendix
6.1 Proof of Theorem 3.1
The purpose of this section is to provide a proof of Theorem 3.1.
Here, to simplify the notation, we assume that \(c_1\le L\) without loss of generality. As in Nualart [14] (Proof of Theorem 3.3.6), we apply Gronwall’s lemma and (B1T) to equation (3.2) and then, we use (3.3) to obtain
So, from (2.6), we have
as a consequence of the fact that \(\sup _{0\le t\le T} {{\mathbb {E}}}\left[ L_{0,t}^r\right] <+\infty \) and \(\sup _{0\le t\le T} {{\mathbb {E}}}\left[ L_{0,t}^{-r}\right] <+\infty ,\) for any \(r\ge 1\), which follows from (A1T). Moreover, we have
The proof that X, introduced in (3.3), is a solution to Eq. (3.1) is similar to that of Theorem 3.3.6 in Nualart [14]. Thus, using (2.6), (3.3), (6.1), Buckdhan [5] (Lemma 2.2.13), the integration by parts formula and Girsanov's theorem, we obtain
for any \(G\in \mathcal {S}\). Therefore the duality relation (2.2) implies that
because the right-hand side is an integrable process due to (6.1) and Hypothesis (B1T). Consequently, (3.1) holds.
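For the reader's convenience, the version of Gronwall's lemma invoked in this proof is the standard one:

```latex
% If f \ge 0 is measurable and bounded on [0,T] and
%   f(t) \le C + L \int_0^t f(s)\, ds , \qquad t \in [0,T],
% then
f(t) \le C\, e^{L t}, \qquad t \in [0,T] .
```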
Now, we prove the uniqueness of the solution to equation (3.1). To do so, we make use of the fact that there is a sequence \(\{a^n_s: s\in [0,T]\}\) of the form
where \(F_{i,n}\in \mathcal {S}\), \(i=0,\ldots , n-1\), and \(0=t_0<t_1<\ldots< t_{n-1}<t_n=T\), such that \(a^n\) goes to a in \(L^2([0,T];\mathbb {D}^{1,2}_T)\), \(\Vert a^n\Vert _{L^{\infty }(\Omega \times [0,T])}\le \Vert a\Vert _{L^{\infty }(\Omega \times [0,T])}\), \(\Vert Da^n\Vert _{L^{\infty }(\Omega \times [0,T]^2)}\le \Vert Da\Vert _{L^{\infty }(\Omega \times [0,T]^2)}+1\) and
where \(G \in \mathcal {S}\) and \(A^n\) is the solution to equation (2.4) when we change a by \(a^n\) (see Lemmas 3.2.3 and 3.2.4 in Buckdhan [5]).
Let Y be a solution to (3.1) such that Y belongs to \(L^1(\Omega \times [0,T])\) and \({1\!\!1}_{[0,t]}\, a\, Y \in \textrm{Dom}\ \delta \), for all \(t\in [0,T]\). Multiplying both sides of (3.1) by \(G(A_t^n)\) and taking expectations, we have
Integrating by parts and using (6.3), we get
Consequently, by Fubini’s theorem, and proceeding as in Buckdhan [5] (Proof of Theorem 3.2.1), we obtain
Hence, Girsanov's theorem (see (2.6)) implies
for any \(G\in \mathcal {S}\). So,
Thus, the uniqueness of the solution to Eq. (3.2) allows us to establish
This means that Y is equal to the right-hand side of (3.3), and the proof of Theorem 3.1 is complete.
6.2 Construction of \(\phi ^{(n)}\)
Let \(n\in \mathbb {N}\). Define the partition
and
Thanks to (B4T), \(\partial _x^2 \phi ((t_i-\frac{1}{n^2})\vee 0,x_j)\in L^2(\Omega )\) for any (i, j) and we can find \(F_{i,j}^{(n,m)} \in \mathcal {S}\) such that \(F_{i,j}^{(n,m)} \longrightarrow \partial _x^2 \phi ((t_i-\frac{1}{n^2})\vee 0,x_j)\), as \(m\rightarrow + \infty \), in \(L^2(\Omega )\) and a.s. So, let
which is finite due to (B4T). Let \(f\in \mathcal {C}_c^{\infty }(\mathbb {R})\) be a function taking values in [0, 1] such that
and \(f_{Q}(x)=f(\frac{x}{2Q})\). Then, if we define
we have that \({\tilde{F}}_{i,j}^{(n,m)} \in \mathcal {S}\), \(\left| \tilde{F}_{i,j}^{(n,m)}\right| \le 4Q\), \({\tilde{F}}_{i,j}^{(n,m)}= F_{i,j}^{(n,m)}\) if \(\left| F_{i,j}^{(n,m)}\right| \le 2Q,\) and, moreover, \({\tilde{F}}_{i,j}^{(n,m)} \longrightarrow \partial _x^2 \phi ((t_i-\frac{1}{n^2})\vee 0,x_j)\) in \(L^2(\Omega )\) and a.s., as m goes to \(+\infty \). So, now we can take \(H_{i,j}^{(n)}=\tilde{F}_{i,j}^{(n,n_0)}\) with \(n_0\in \mathbb {N}\) such that
Using the function \(k_i^{(n)}\) introduced in the proof of Theorem 5.3 we define the following bounded random field
where we take into account that the indicator depends on n because \(x_j\) does. The function \((t,x) \mapsto {\bar{\psi }}^{(n)}(t,x)\) is continuous in time with probability one and satisfies \(|\bar{\psi }^{(n)}(t,x)|\le 16 Q\). Our next step is to show that, for any compact \(K\subset \mathbb {R},\) we get
To do so, we observe
with
We first study \(I_1^{(n)}\) and \(I_2^{(n)}\). Let \(M>0\) be such that \(K\subseteq [-M,M].\) Then, for \(n>M\),
Now, due to the continuity of \(\partial _x^2 \phi \), uniformly on \([0,T]\times K\), we have
Secondly, we have that (6.5) gives
obtaining
We now study the last term. For n large enough, we have
It is not difficult to see that
and
So, using these facts we get
and, therefore
Now, putting together (6.8), (6.9) and (6.10) in (6.7), we get that (6.6) holds.
Finally, let \(g\in \mathcal {C}_c^{\infty } (\mathbb {R})\) be such that \(|g(x)|\le |x|\) and
where \(\Vert \delta \Vert _\infty =\sup _{(\omega ,t)\in \Omega \times [0,T]}|\delta _t(\omega )|\). With (6.6) in mind, we define, for any \((t,x)\in [0,T]\times \mathbb {R}\),
and
Recall that \(|{\bar{\psi }}^{(n)}(t,x)|\le 16 Q\), and observe that (6.6) implies
taking a subsequence if necessary. Furthermore, we have that, for any \(x\in \mathbb {R}\),
as a consequence of the dominated convergence theorem since \(|\partial _x^2 \phi (t,y) -{\bar{\psi }}^{(n)}(t,y)|\) is bounded. Indeed, \(\partial _x \phi ^{(n)}(t,x)=\psi ^{(n)}(t,x)\), the function \(\psi ^{(n)}(t,x)\) is bounded (\(|\psi ^{(n)}(t,x)|\le 9 \Vert \delta \Vert _\infty \)) and continuous in x, and
Similarly, the fact that \(\partial _x \phi ^{(n)}=\psi ^{(n)}\), for any \(x\in \mathbb {R}\), yields
Hence, we can find \(M>0\) such that \(K\subseteq [-M,M]\) and
as n goes to \(\infty \).
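The smooth truncation entering the definition of \({\tilde{F}}_{i,j}^{(n,m)}\) above can be realized, for instance, as follows; the explicit formulas are one admissible choice (an assumption on our part, since the displayed definition is not reproduced here), consistent with the stated properties of f and \({\tilde{F}}_{i,j}^{(n,m)}\):

```latex
% One admissible choice (assumption): a plateau cutoff f and
% truncation of F_{i,j}^{(n,m)} by composition with f_Q.
f(x) = 1 \ \text{for } |x|\le 1, \qquad
f(x) = 0 \ \text{for } |x|\ge 2, \qquad
0 \le f \le 1, \qquad
f_Q(x) = f\Big(\frac{x}{2Q}\Big), \\
{\tilde{F}}_{i,j}^{(n,m)} := F_{i,j}^{(n,m)}\, f_Q\big(F_{i,j}^{(n,m)}\big).
```

With this choice, \({\tilde{F}}_{i,j}^{(n,m)}=F_{i,j}^{(n,m)}\) whenever \(|F_{i,j}^{(n,m)}|\le 2Q\), and \(|{\tilde{F}}_{i,j}^{(n,m)}|\le 4Q\) because \(f_Q\) vanishes outside \([-4Q,4Q]\).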
6.3 Convergence of \(Z^{(n)}\) to Z
In this subsection of the Appendix we show the convergence of \(Z^{(n)}_t (A_t^{(n)}, X_0^{(n)}(A_t^{(n)}))\) to \(Z_t (A_t, X_0(A_t))\) in \(L^1(\Omega \times [0,T])\). That is,
Note that if (6.11) is true, then we also have
because \(|Z_t(A_t, X_0(A_t))|\) and \(|Z_t^{(n)}(A_t^{(n)},X_0^{(n)}(A_t^{(n)}))|\) are bounded by a constant independent of n due to Lemma 4.1, Hypothesis (X1) and Sect. 6.2 (see also inequality (6.20) below).
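Written out, the convergence (6.11) reads as follows (our transcription of the \(L^1(\Omega \times [0,T])\) convergence stated above; the displayed equation itself is not reproduced in this version):

```latex
\lim_{n\to\infty} \mathbb{E}\int_0^T
\Big| Z^{(n)}_t\big(A^{(n)}_t, X^{(n)}_0(A^{(n)}_t)\big)
   - Z_t\big(A_t, X_0(A_t)\big) \Big|\, dt = 0.
```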
Now we will prove that (6.11) is satisfied. For simplicity we write \(Z_s^{(n)}(x)\) and \(Z_s(x)\) instead of \(Z^{(n)}_s (A_s^{(n)}, x)\) and \(Z_s (A_s, x)\), respectively. Using the triangle inequality, we have
with
We first study \(\theta _1^n\). To this end, we observe that (3.2) allows us to obtain
and applying Gronwall’s Lemma we have, for \(c_1\) defined in (B1T),
Consequently, using the triangle inequality again, we can establish
with
By Propositions 2.1.4 and 2.2.12 in [5], we obtain
where CM denotes the Cameron-Martin norm. Now, we consider the last factor of (6.15). For \(s\le t\) and a certain generic constant \(C\ge 1\), we can apply (2.4) and [5] (Proposition 2.1.4) to conclude
Hence, taking expectation and using (2.6),
Thus, Gronwall’s Lemma implies that, for \(0\le s\le t\le T\),
Similarly, replacing a and \(a^{(n)}\) by \(X_0\) and \(X_0^{(n)}\), respectively, we are able to state
So, putting together (6.14), (6.15), (6.16) and (6.17) and considering the assumptions on \(a, a^{(n)}, X_0\) and \(X_0^{(n)}\), we get
Now, we analyze \(\theta _2^n\). Because of (4.6) we have, for \(t\in [0,T]\),
Applying Gronwall’s Lemma we obtain
So, from (B4T), we can decompose
with
As in Lemma 4.1, using that \(\Vert {\bar{b}}^{(n)}\Vert _{L^\infty (\Omega \times [0,T])} \le c \Vert {\bar{b}}\Vert _{L^\infty (\Omega \times [0,T])}\) for a certain generic \(c\ge 1\) (due to (B4T) and the definition of \({\bar{b}}^{(n)}\)) and that \(|\phi ^{(n)} (t,x)|\le 9 |x| \Vert \delta \Vert _{L^\infty (\Omega \times [0,T])}\) (thanks to the construction of \(\phi ^{(n)}\) in Sect. 6.2), we have
for all \(\omega \in \Omega \) and for \(n\ge 1\). Then, the Cauchy-Schwarz inequality, (6.20) and the fact that \(\Vert X_0^{(n)}\Vert _{L^\infty (\Omega )} \le c \Vert X_0\Vert _{L^\infty (\Omega )}\), for a certain generic \(c\ge 1\), give that
Proceeding as in (6.21), we obtain
Moreover, using \(\phi (s,0)=0\) in (B2T), we get
Now we deal with the last term \(H_{4,n}\). Note that \(H_{4,n}\) has the form
with
On the one hand, on \(\{L_{0,s}^{(n)}<M\}\), we know that \(L^{(n)}_{0,s}Z^{(n)}_s(X_0^{(n)}(A_t^{(n)}))\) is bounded. Then, for a compact K large enough, we establish, for certain constants \(C>0\) and \(L>0\) such that \(K\subset [-L,L]\),
and this converges to zero as a consequence of Sect. 6.2.
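The complementary event, on which \(L_{0,s}^{(n)}\ge M\), is controlled below via Chebyshev's inequality; for completeness, in the form used here it reads:

```latex
\mathbb{P}\big( |X| \ge M \big) \;\le\; \frac{\mathbb{E}\,|X|^p}{M^p},
\qquad M > 0, \quad p \ge 1.
```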
On the other hand, Lemma 4.1, (6.20) and (B4T) yield
and Chebyshev’s inequality implies
So, the last part of Sect. 6.2, the definitions of \(a^{(n)}\) and \(\bar{b}^{(n)}\), together with (6.19) and (6.21)–(6.26), allow us to obtain
Finally, (6.18), (6.27) and (6.13) yield that (6.11) holds. \(\square \)
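For completeness, we record the integral form of Gronwall's Lemma invoked repeatedly in this Appendix (for a measurable, bounded \(u:[0,T]\to [0,\infty )\) and constants \(\alpha ,\beta \ge 0\)):

```latex
u(t) \le \alpha + \beta \int_0^t u(s)\, ds \quad \text{for all } t\in[0,T]
\quad \Longrightarrow \quad
u(t) \le \alpha\, e^{\beta t} \quad \text{for all } t\in[0,T].
```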
Data Availability
Not applicable.
References
Alòs, E., Nualart, D.: An extension of Itô’s formula for anticipating processes. J. Theor. Probab. 11(2), 493–514 (1998)
Arnold, L.: Stochastic Differential Equations: Theory and Applications. Wiley, Hoboken (1974)
Bhatia, N.P., Szegö, G.P.: Stability Theory of Dynamical Systems. Springer, Berlin (1970)
Buckdahn, R.: Linear Skorohod stochastic differential equations. Probab. Theory Relat. Fields 90, 223–240 (1991)
Buckdahn, R.: Anticipative Girsanov Transformations and Skorohod Stochastic Differential Equations. Memoirs of AMS, Volume 111 (533) (1994)
Escudero, C., Ranilla-Cortina, S.: Optimal portfolios for different anticipating integrals under insider information. Mathematics (MDPI) 9, 75 (2021)
Gard, T.C.: Introduction to Stochastic Differential Equations. Marcel Dekker, New York (1988)
Hartman, P.: Ordinary Differential Equations, 2nd edn. SIAM, Philadelphia (2002)
Khasminskii, R.: Stochastic Stability of Differential Equations, 2nd edn. Springer, Berlin (2012)
Kohatsu-Higa, A., León, J.A.: Anticipating stochastic differential equation of Stratonovich type. Appl. Math. Optim. 36, 263–289 (1997)
Laning, J.H., Battin, R.H.: Random Processes in Automatic Control. McGraw-Hill, New York (1956)
León, J.A., Márquez-Carreras, D., Vives, J.: Anticipating linear stochastic differential equations driven by a Lévy process. Electron. J. Probab. 17(89), 1–26 (2012)
León, J.A., Navarro, R., Nualart, D.: An anticipating calculus approach to the utility maximization of an insider. Math. Finance 13(1), 171–185 (2003)
Nualart, D.: The Malliavin Calculus and Related Topics, 2nd edn. Springer, Berlin (2006)
Ocone, D., Pardoux, E.: A generalized Itô–Ventzell formula. Application to a class of anticipating stochastic differential equations. Annales de l’IHP Section B 25(1), 39–71 (1989)
Pugachev, V.S.: Theory of Random Functions: And Its Application to Control Problems. Fizmatgiz, Moscow (1960)
Skorokhod, A.V.: On a generalization of a stochastic integral. Theory Probab. Appl. 20, 219–233 (1975)
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Work of David Márquez-Carreras is partially supported by Grant PID2021-123733NB-I00 from the Spanish Ministerio de Ciencia e Innovación. Work of Josep Vives is partially supported by Grant PID2020-118339GB-100 (2021-2024) from the Spanish Ministerio de Ciencia e Innovación.
Author information
Authors and Affiliations
Contributions
All authors contributed equally to all aspects of the work of writing the paper.
Corresponding author
Ethics declarations
Ethical Approval
Not applicable.
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
León, J.A., Márquez-Carreras, D. & Vives, J. Stability of Some Anticipating Semilinear Stochastic Differential Equations of Skorohod Type. J Dyn Diff Equat (2023). https://doi.org/10.1007/s10884-023-10312-z
Keywords
- Anticipating stochastic differential equations
- Stability
- Malliavin calculus
- Anticipative Girsanov transformations