Abstract
We show that the comparison results for a backward SDE with jumps established in Royer (Stoch. Process. Appl. 116: 1358–1376, 2006) and Yin and Mao (J. Math. Anal. Appl. 346: 345–358, 2008) hold under simpler conditions. Moreover, we prove existence and uniqueness while allowing the coefficients in the linear growth and monotonicity conditions for the generator to be random and time-dependent. In the L2-case with linear growth, this also generalizes the results of Kruse and Popier (Stochastics 88: 491–539, 2016). For the proof of the comparison result, we introduce an approximation technique: given a BSDE driven by a Brownian motion and a Poisson random measure, we approximate it by BSDEs in which the Poisson random measure admits only jumps of size larger than 1/n.
1 Introduction
In this paper, we study backward stochastic differential equations (BSDEs) of the form
where W denotes a one-dimensional Brownian motion and \(\tilde {N}\) a compensated Poisson random measure belonging to a given Lévy process with Lévy measure ν. In particular, our focus lies on comparison results and existence and uniqueness of solutions.
Comparison theorems state that, under certain conditions, if ξ≤ξ′ and f≤f′, then the processes Y and Y′ of the corresponding solutions satisfy Yt≤Yt′ for all t∈[0,T]. Theorems of this type for one-dimensional Brownian BSDEs have been treated by Peng (1992), El Karoui et al. (1997, 2009), and Cao and Yan (1999).
In (Barles et al. (1997), Remark 2.7) a counterexample was given which shows that in the jump case the conditions ξ≤ξ′ and f≤f′ are not sufficient to guarantee Y≤Y′. The authors propose an additional sufficient condition which has been generalized by Kruse and Popier (2016), Royer (2006), Yin and Mao (2008), Becherer et al. (2018) (allowing more general jump processes), and Cohen et al. (2010) (for BSDEs driven by martingales). The condition of Kruse and Popier (2016) reads (in our L2-setting) as follows: for each \((s,y,z,u,u^{\prime}) \in [0,T]\times \mathbb {R} \times \mathbb {R} \times L^{2}(\nu)\times L^{2}(\nu)\) there is a progressively measurable process \(\gamma ^{y,z,u,u^{\prime}}\colon \Omega \times {[0,T]}\times \mathbb {R}\setminus \{0\}\to \mathbb {R}\) such that
One of the main results in the present paper is Theorem 3.5 which states that (2) can be replaced by the simpler condition
Notice that the r.h.s. is infinite if u′(x)−u(x)∉L1(ν). Clearly, (3) is a weaker condition than (2), because one only needs to check the inequality for those u,u′∈L2(ν) for which u≤u′ holds. Moreover, we do not need any L2(ν) condition for \(\gamma ^{y,z,u,u^{\prime}}_{s}\); instead, we simply choose \(\gamma ^{y,z,u,u^{\prime}}_{s}(x)=-1.\) Under the constraint \( -1 \leq \gamma ^{y,z,u,u^{\prime}}_{s}(x),\) the choice \(\gamma ^{y,z,u,u^{\prime}}_{s}(x)=-1\) yields, for u′−u≥0, the largest possible expression on the r.h.s. of (2), so that (3) can be seen as the weakest possible condition which (2) could impose on f.
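To make the relation between (2) and (3) fully explicit, suppose that (2) reads \(f(s,y,z,u)-f(s,y,z,u^{\prime})\leq \int_{\mathbb{R}_{0}}\gamma_{s}^{y,z,u,u^{\prime}}(x)\left(u(x)-u^{\prime}(x)\right)\nu(dx)\) with \(\gamma_{s}^{y,z,u,u^{\prime}}\geq -1\) (the display of (2) is not reproduced above, so this precise shape is an assumption of ours). The following pointwise estimate then shows that, for u≤u′, the choice γ≡−1 dominates all other admissible choices:

```latex
% For u \le u' and \gamma \ge -1 pointwise, (\gamma+1)(u-u') \le 0, hence
\gamma_{s}^{y,z,u,u^{\prime}}(x)\bigl(u(x)-u^{\prime}(x)\bigr)
  \;\le\; (-1)\bigl(u(x)-u^{\prime}(x)\bigr)
  \;=\; u^{\prime}(x)-u(x),
\qquad\text{so}\qquad
\int_{\mathbb{R}_{0}} \gamma_{s}^{y,z,u,u^{\prime}}(x)\bigl(u(x)-u^{\prime}(x)\bigr)\,\nu(dx)
  \;\le\; \int_{\mathbb{R}_{0}} \bigl(u^{\prime}(x)-u(x)\bigr)\,\nu(dx).
```

In particular, any generator satisfying (2) in this form automatically satisfies (3) for u≤u′.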
For a finite Lévy measure ν, Theorem 3.5 can be shown using only elementary means.
Another main result is a method of approximating a BSDE driven by a Lévy process with an infinite Lévy measure ν by a sequence of BSDEs whose driving processes have a finite Lévy measure. We apply this result to prove the comparison theorem for BSDEs driven by a general Lévy process. The proof relies on the Jankov–von Neumann theorem on measurable sections/uniformizations (this theorem is also important for dynamic programming, see El Karoui and Tan (2013)). Under certain conditions on the generator, the approximating solutions can be interpreted as nonlinear conditional expectations (in the sense of Peng (2010)), conditioned on a Lévy process whose jumps are not of arbitrarily small size. (See the comments after Theorem 3.4.)
Studying the existence, uniqueness, and comparison results by Darling and Pardoux (1997), Pardoux and Zhang (1996), Pardoux (1997), Fan and Jiang (2012), Royer (2006), Situ (1997), Yin and Mao (2008), Kruse and Popier (2016, 2017), Yao (2017), and Sow (2014), one notices that one can unify and generalize the assumptions on f.
Indeed, and this is our third main result, in the case of L2-solutions, for a progressively measurable generator f with linear growth it suffices to assume (cf. Theorems 3.1 and 3.5) the following growth and monotonicity conditions with time-dependent, random coefficients:
-
|f(ω,s,y,z,u)|≤F(s,ω)+K1(s,ω)|y|+K2(s,ω)(|z|+∥u∥),
-
(y−y′)(f1(ω,s,y,z,u)−f1(ω,s,y′,z′,u′))
≤α(s)ρ(|y−y′|2)+β(s,ω)|y−y′|(|z−z′|+∥u−u′∥),
with α∈L1([0,T]) and F being nonnegative and progressively measurable such that \(\mathbb {E}\left [ \left (\int _{0}^{T} F(\omega, t)dt \right)^{2}\right ] < \infty.\) The processes K1,K2, and β are nonnegative and progressively measurable such that for a constant c>0,
The concave function ρ in the monotonicity condition may grow faster than linearly at zero and satisfies \(\int _{0^{+}} 1/\rho (x)dx=\infty.\) This type of function already appeared in the context of BSDEs in Mao (1995).
These assumptions also extend the monotonicity condition of Kruse and Popier (2016, 2017) for the L2-case with linear growth, since the coefficients in our setting take randomness, time-dependence, and the function ρ into account. BSDEs with time-dependent coefficients appear, for example, in Gobet and Turkedjiev (2016).
The existence and uniqueness result Theorem 3.1 and the comparison result Theorem 3.5 are basic tools in the forthcoming paper (Geiss and Steinicke 2018) on Malliavin differentiability and boundedness of solutions to BSDEs. To compute the Malliavin derivative for the jump part of the Lévy process, more structure from the generator is required in its dependency on u, usually via an integral w.r.t. ν(dx), for example,
where \([0,T] \times \mathbb {R} \ni (s,v) \mapsto h(s,v).\) One can find h and κ such that the assumptions of Theorem 3.5 are satisfied while condition (2) does not hold: by the mean value theorem there exists a ζ∈]0,1[ and
such that
Assumption (3) holds if \( \gamma _{s}^{u,u{\prime }}(x):= \partial _{v} h \left (s,v_{\zeta }\right) \kappa (s,x) \ge -1\) for all (s,u,u′,x). Choosing, for example, a bounded function h such that also sups,v|∂vh(s,v)|<∞, but ∂vh(s,v)≠0 for a.e. s and v, and putting \(\kappa (s,x) = s^{-\frac {1}{4}} (|x|\wedge 1),\) then (2) does not hold since
However, the Assumptions (A2), (A3) of Section 3 are satisfied for
The paper is structured as follows: Section 2 contains preliminaries and basic definitions. In Section 3, we present the main theorems of this paper on existence and uniqueness of solutions, the approximation by BSDEs based on Lévy processes with finite Lévy measure, and the comparison result; the latter we also prove there. Having stated and proved some auxiliary results in Section 4, including an a priori estimate for our type of BSDEs, we prove existence and uniqueness and the approximation result from Section 3. In the appendix, we recall the Bihari–LaSalle inequality and the Jankov–von Neumann theorem.
2 Setting
Let X=(Xt)t∈[0,T] be a càdlàg Lévy process on a complete probability space \((\Omega,\mathcal {F},\mathbb {P})\) with Lévy measure ν. We will denote the augmented natural filtration of X by \(\left ({\mathcal {F}_{t}}\right)_{t\in {[0,T]}}\) and assume that \(\mathcal {F}=\mathcal {F}_{T}.\) For 0<p≤∞ we use the notation \(\left (L^{p},\|\cdot \|_{p}\right):=\left (L^{p}(\Omega,\mathcal {F},\mathbb {P}),\|\cdot \|_{L^{p}}\right)\). Equations or inequalities for objects of these spaces throughout the paper are considered up to \(\mathbb {P}\)-null sets.
The Lévy–Itô decomposition of a Lévy process X can be written as
where \(a\in \mathbb {R}\), σ≥0, W is a Brownian motion and N (\(\tilde N\)) is the (compensated) Poisson random measure corresponding to X, see Applebaum (2004) or Sato (1999).
Notation
-
Let \(\mathcal {S}^{2}\) denote the space of all \((\mathcal {F}_{t})\)-progressively measurable and càdlàg processes \(Y\colon \Omega \times {[0,T]} \rightarrow \mathbb {R}\) such that
$$\begin{array}{@{}rcl@{}} \left\|Y\right\|^{2}_{\mathcal{S}^{2}}:=\mathbb{E}\sup_{0\leq t\leq T} \left|Y_{t}\right|^{2} <\infty. \end{array} $$ -
We define L2(W) as the space of all \((\mathcal {F}_{t})\)-progressively measurable processes \(Z\colon \Omega \times {[0,T]}\rightarrow \mathbb {R}\) such that
$$\begin{array}{@{}rcl@{}} \left\|Z\right\|_{L^{2}(W) }^2:=\mathbb{E}{\int}_{0}^{T}\left|Z_{s}\right|^{2} ds<\infty. \end{array} $$ -
Let \(\mathbb {R}_{0}:= \mathbb {R}\!\setminus \!\{0\}\). We define \(L^{2}\left (\tilde N\right)\) as the space of all random fields \(U\colon \Omega \times {[0,T]}\times {\mathbb {R}_{0}}\rightarrow \mathbb {R}\) which are measurable with respect to \(\mathcal {P}\otimes \mathcal {B}\left (\mathbb {R}_{0}\right)\) (where \(\mathcal {P}\) denotes the predictable σ-algebra on Ω×[0,T] generated by the left-continuous \((\mathcal {F}_{t})\)-adapted processes) such that
$$\begin{array}{@{}rcl@{}} \left\|U\right\|_{L^{2}\left(\tilde N\right)}^{2}:=\mathbb{E}{\int}_{{[0,T]}\times{\mathbb{R}_{0}}}\left|U_{s}(x)\right|^{2} ds\,\nu(dx)<\infty. \end{array} $$ -
\(L^{2}(\nu):= L^{2}\left (\mathbb {R}_{0}, \mathcal {B}\left (\mathbb {R}_{0}\right), \nu \right), \|\cdot \|:=\|\cdot \|_{L^{2}(\nu)}.\)
-
\(L^{p}([0,T]):=L^{p}([0,T],\mathcal {B}([0,T]), \lambda)\) for p>0, where λ is the Lebesgue measure on [0,T].
-
With a slight abuse of the notation, we define
$$\begin{array}{@{}rcl@{}} && L^{2}\left(\Omega; L^{1}([0,T])\right) \\ & :=&\!\!\!\!\! \left \{F \in L^{0}(\Omega \times [0, T], \mathcal{F} \otimes \mathcal{B}([0,T]), \mathbb{P} \otimes \lambda): \mathbb{E} \!\left[{\int}_{0}^{T} \!\!|F(\omega, t)| dt\right]^{2}\!\! <\! \infty \right \} \end{array} $$(5)
For F∈L2(Ω;L1([0,T])), put
$$\begin{array}{@{}rcl@{}} I_{F}(\omega):= {\int}_{0}^{T} F(\omega, t)dt \quad \text{ and} \quad K_{F}(\omega, s) := \frac{F(\omega, s)}{I_{F}(\omega)}. \end{array} $$(6) -
A solution to a BSDE with terminal condition ξ and generator f is a triplet \((Y,Z,U)\in \mathcal {S}^{2}\times L^{2}(W)\times L^{2}\left (\tilde N\right)\) which satisfies for all t∈[0,T]:
$$ Y_{t}=\xi+{\int}_{t}^{T} f(s,Y_{s},Z_{s},U_{s})ds-{\int}_{t}^{T} Z_{s} {dW}_{s}-{\int}_{{]t,T]}\times\mathbb{R}_{0}}U_{s}(x)\tilde{N}(ds,dx). $$(7)
The BSDE (7) itself will be denoted by (ξ,f).
3 Main results
We start with a result about existence and uniqueness which is proved in Section 5.
Theorem 3.1
There exists a unique solution to the BSDE (ξ,f) with ξ∈L2 and generator \(f:\Omega \times {[0,T]}\times \mathbb {R}\times \mathbb {R}\times L^{2}(\nu)\to \mathbb {R}\) satisfying the properties
-
(A1)
For all (y,z,u), the mapping (ω,s)↦f(ω,s,y,z,u) is progressively measurable.
-
(A2)
There are nonnegative, progressively measurable processes K1,K2, and F with
$$\begin{array}{@{}rcl@{}} C_{K}:= \left \|{\int}_{0}^{T}\left(K_{1}(\cdot,s)+K_{2}(\cdot,s)^{2}\right)ds \right \|_{\infty}<\infty \end{array} $$(8)
and F∈L2(Ω;L1([0,T])) (see (5)) such that for all (y,z,u),
$$\begin{array}{@{}rcl@{}} &|f(s,y,z,u)|\leq F(s)+K_{1}(s)|y|+K_{2}(s)(|z|+\|u\|), \quad \mathbb{P}\otimes\lambda\text{-a.e.} \end{array} $$ -
(A3)
For λ-almost all s, the mapping (y,z,u)↦f(s,y,z,u) is \(\mathbb {P}\)-a.s. continuous. Moreover, there are a nonnegative function α∈L1([0,T]), a constant c>0, and a progressively measurable process β with \(\int _{0}^{T} \beta (\omega,s)^{2} ds< c\), \(\mathbb {P}\)-a.s., such that for all (y,z,u),(y′,z′,u′),
$$\begin{array}{@{}rcl@{}} &\left(y-y^{\prime}\right)\left(f\left(s,y,z,u\right)-f\left(s,y^{\prime},z^{\prime},u^{\prime}\right)\right)\\ &\leq \alpha(s)\rho\left(|y-y^{\prime}|^{2}\right)+\beta(s)\left|y-y^{\prime}\right|\left(\left|z-z^{\prime}\right|+\left\|u-u^{\prime}\right\|\right), \mathbb{P}\otimes\lambda\text{-a.e.}, \end{array} $$where ρ is a nondecreasing, continuous and concave function from [0,∞[ to itself, satisfying ρ(0)=0, and \(\int _{0^{+}}\frac {1}{\rho (x)}dx=\infty.\)
-
(A4)
The function ρ in (A3) satisfies \(\limsup _{x \downarrow 0} \frac {\rho (x^{2})}{x} =0.\)
If f satisfies only (A1)–(A3), then there exists at most one solution.
For ρ(x)=x, we are in the case of the ordinary monotonicity condition. Another example for a function ρ is given by
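The example displayed at this point is not reproduced in this version of the text. A classical function with all the properties required in (A3), and which also satisfies (A4), is the following (our illustration, for a fixed \(\delta\in\,]0,e^{-1}[\), with ρ(0):=0 by continuity):

```latex
\rho(x) :=
\begin{cases}
x\,\log(1/x), & 0 \le x \le \delta,\\[2pt]
\delta\,\log(1/\delta) + \bigl(\log(1/\delta)-1\bigr)\,(x-\delta), & x > \delta .
\end{cases}
```

This ρ is nondecreasing, concave, satisfies ρ(0)=0 and \(\int_{0^{+}} \frac{dx}{x\log(1/x)} = \lim_{\varepsilon\downarrow 0}\bigl[\log\log(1/\varepsilon)-\log\log(1/\delta)\bigr] = \infty\); moreover \(\rho(x^{2})/x = 2x\log(1/x)\to 0\) as x↓0, so (A4) holds as well.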
Remark 3.2
-
1.
Condition (A2) implies that f(s,y,z,u) is integrable for a.e. s∈[0,T] since, by Fubini’s theorem,
$$\begin{array}{@{}rcl@{}} {\int}_{0}^T&& \mathbb{E} |f(s,y,z,u)| ds \\ &\le& \mathbb{E} {\int}_{0}^{T} [F(s) + K_{1}(s)|y| + K_{2}(s)(|z|+\|u\|)]ds < \infty. \end{array} $$(9) -
2.
If \(\limsup _{x \downarrow 0} \frac {\rho (x^{2})}{x} =0\) is satisfied, one can derive Lipschitz continuity of f(s,y,z,u) in z and u from the monotonicity condition in (A3). We require (A4) since we later want to apply (Yin and Mao (2008), Theorem 2.1), where Lipschitz continuity in u is used to show uniqueness of solutions. If only (A1)–(A3) are satisfied but not (A4), and a Lipschitz condition in z,u nevertheless holds, all of the article's theorems remain valid. One can show that (A4) does not follow from the other conditions imposed on ρ in (A3): take a decreasing sequence \(\left (x_{n}\right)_{n=0}^{\infty }\) with x0=1 and \({\lim }_{n \to \infty } x_{n} =0.\) Define
$$\rho(x):= \left \{ \begin{array}{ll} \sqrt{x_{n}} &\text{if }\,\,\, x=x_{n}, \, n=0,1,2,... \\ \sqrt{x} &\text{if }\,\,\, x > 1 \text{ or } x=0, \end{array} \right. $$and let ρ be continuous and piecewise linear on ]0,1]. The so-defined ρ is a concave function with \(\limsup _{x \downarrow 0} \frac {\rho (x)}{\sqrt {x}} =1.\) The sequence \((x_{n})_{n=0}^{\infty }\) can be constructed such that \(\int _{0}^{1}\frac {1}{\rho (x)}dx=\infty.\) For example, choose x1 such that \(\int _{x_{1}}^{1}\frac {1}{\rho (x)}dx\ge 1,\) and if xn has been chosen, find xn+1 such that
$${\int}_{x_{n+1}}^{x_{n}}\frac{1}{\rho(x)}dx = \frac{1}{2}\left(\log(x_{n})-\log(x_{n+1})\right)\left(\sqrt{x_{n}}+\sqrt{x_{n+1}}\right) \ge 1.$$
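The choice of xn+1 above uses the exact value of \(\int_{x_{n+1}}^{x_{n}} \frac{dx}{\rho(x)}\) for the linear interpolation of ρ. The following numerical sanity check is not part of the paper; all function names and the test values are ours. It confirms the displayed closed form by comparing it with a midpoint-rule quadrature:

```python
import math

def rho_linear(x, x_lo, x_hi):
    """rho interpolated linearly between (x_lo, sqrt(x_lo)) and (x_hi, sqrt(x_hi))."""
    m = (math.sqrt(x_hi) - math.sqrt(x_lo)) / (x_hi - x_lo)
    return math.sqrt(x_lo) + m * (x - x_lo)

def closed_form(x_lo, x_hi):
    """The paper's expression: (1/2)(log x_hi - log x_lo)(sqrt(x_hi) + sqrt(x_lo))."""
    return 0.5 * (math.log(x_hi) - math.log(x_lo)) * (math.sqrt(x_hi) + math.sqrt(x_lo))

def numeric(x_lo, x_hi, steps=100_000):
    """Midpoint rule for int_{x_lo}^{x_hi} dx / rho(x)."""
    h = (x_hi - x_lo) / steps
    return sum(h / rho_linear(x_lo + (i + 0.5) * h, x_lo, x_hi) for i in range(steps))

# Illustrative pair x_n = 1, x_{n+1} = 1e-3: the two values agree, and the
# integral exceeds 1, so this pair would be an admissible step of the construction.
x_hi, x_lo = 1.0, 1e-3
assert abs(numeric(x_lo, x_hi) - closed_form(x_lo, x_hi)) < 1e-4
assert closed_form(x_lo, x_hi) >= 1.0
```

The closed form follows from \(\int \frac{dx}{c+m(x-x_{0})} = \frac{1}{m}\log(c+m(x-x_{0}))\) together with \(1/m = \sqrt{x_{n}}+\sqrt{x_{n+1}}\).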
The next result shows how a solution to a BSDE can be approximated by a sequence of solutions of BSDEs which are driven by Lévy processes with a finite Lévy measure. We do this by approximating the underlying Lévy process defined through
for n≥1 by
The process Xn has a finite Lévy measure νn. Furthermore, note that the compensated Poisson random measure associated with Xn can be expressed as \(\tilde {N}^{n}=\chi _{\{1/n \leq |x|\}}\tilde {N}.\) Let
where \(\mathcal {N}\) stands for the null sets of \(\mathcal {F}.\) Note that \(\left (\mathcal {J}^{n}\right)_{n=0}^{\infty }\) forms a filtration. The notation \(\left (\mathcal {J}^{n}\right)_{n=0}^{\infty }\) was chosen to indicate that this filtration describes the inclusion of smaller and smaller jumps of the Lévy process. We will use
for the conditional expectation.
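For a concrete picture of the truncated process Xn, consider the following minimal simulation sketch. It is not part of the paper; the Lévy measure \(\nu(dx)=|x|^{-1-\alpha}\chi_{\{0<|x|\le 1\}}dx\), the value α=1/2, and all names are illustrative assumptions. Although ν is infinite near 0, the truncated intensity \(\nu_{n}(\mathbb{R}_{0})\) is finite for every n, so Xn has only finitely many jumps on [0,T], all of size at least 1/n:

```python
import math
import random

ALPHA = 0.5  # illustrative index; nu(dx) = |x|^(-1-ALPHA) dx on 0 < |x| <= 1

def truncated_intensity(n, alpha=ALPHA):
    """nu_n(R_0) = integral of |x|^(-1-alpha) over 1/n <= |x| <= 1 (finite for all n)."""
    return (2.0 / alpha) * (n ** alpha - 1.0)

def sample_jumps(n, T=1.0, alpha=ALPHA, seed=0):
    """Sample the finitely many jumps of X^n on [0, T]: jump times from a Poisson
    process with rate nu_n(R_0), sizes via inverse transform, symmetric signs."""
    rng = random.Random(seed)
    lam = truncated_intensity(n, alpha)
    t, jumps = 0.0, []
    while True:
        t += rng.expovariate(lam)       # exponential waiting times
        if t > T:
            break
        # inverse CDF of |J| on [1/n, 1]: P(|J| <= x) = (a - x^(-alpha))/(a - 1), a = n^alpha
        u, a = rng.random(), n ** alpha
        size = (a - u * (a - 1.0)) ** (-1.0 / alpha)
        sign = 1.0 if rng.random() < 0.5 else -1.0
        jumps.append((t, sign * size))
    return jumps

jumps = sample_jumps(n=10)
assert all(1.0 / 10 <= abs(x) <= 1.0 for _, x in jumps)  # no jumps smaller than 1/n
```

As n grows, the intensity \(\nu_{n}(\mathbb{R}_{0})\) increases without bound, mirroring how Xn includes smaller and smaller jumps of the limiting process X.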
The intuitive idea now would be to work with a BSDE driven by Xn which uses the data \(\left (\mathbb {E}_{n} \xi, \mathbb {E}_{n} f\right).\) The problem is that the generator f needs to be progressively measurable, and also jointly measurable w.r.t. (ω,t,y,z,u), but it is not obvious whether the conditional expectation \( \mathbb {E}_{n} f\) preserves these properties of f. For BSDEs driven by a Brownian motion, this problem has been solved in (Ylinen (2017), Proposition 7.3), but that proposition does not apply to our situation. Therefore, we next propose a method for constructing a unique version of \( \mathbb {E}_{n} f\) which is progressively measurable and jointly measurable w.r.t. (ω,t,y,z,u).
Definition 3.3
(Definition of fn) Assume that f satisfies (A1), (A2) and that \(\mathbb {J}:= \left (\mathcal {J}^{[s]}\right)_{s\in [0,\infty [}\) is built using (10), where [·] denotes the floor function. Let \(^{o,\mathbb {J}} f\) be the optional projection of the process
in the variables (s,ω) with respect to \(\mathbb {J},\) and with parameters (t,y,z,u). For each n≥0, assume that the filtration \(\mathbb {F}^{n} :=\left (\mathcal {F}_{t}^{n}\right)_{t\in {[0,T]}}\) is given by \(\mathcal {F}_{t}^{n}:=\mathcal {F}_{t}\cap \mathcal {J}^{n}.\) Let fn be the optional projection of
with respect to \(\mathbb {F}^{n}\) with parameters (y,z,u).
The reason for using the filtration \(\left (\mathcal {J}^{[s]}\right)_{s\in [0,\infty [}\) instead of \( \left (\mathcal {J}^{n}\right)_{n=0}^{\infty }\) from (10) is that one can apply known measurability results for right-continuous filtrations instead of proving measurability here directly. Indeed, the optional projection \(^{o,\mathbb {J}}f\) defined above is jointly measurable in (s,ω,t,y,z,u). For this we refer to Meyer (1979), where optional and predictable projections of random processes depending on parameters were considered and their uniqueness up to indistinguishability was shown.
It follows that for all (t,y,z,u),
Then, since f is \(\left (\mathcal {F}_{t}\right)_{t\in {[0,T]}}\)-progressively measurable, for all n≥0, t∈[0,T] and all (y,z,u), it holds that
Hence, fn(t,y,z,u) is a jointly measurable version of \(\mathbb {E}_{n} f(t,y,z,u)\) which is \(\left (\mathcal {F}_{t}^{n}\right)_{t\in {[0,T]}}\)-optional and, in particular, progressively measurable.
We comment on the compatibility of the solutions (Yn,Zn,Un) from the BSDE corresponding to \(\left (\mathbb {E}_{n}\xi,f_{n}\right),\)
with the space \(\mathcal{S}^{2}\times L^{2}(W)\times L^{2}\left (\tilde N\right)\):
The triplet \(\left (Y^{n},Z^{n},U^{n}\right)\in \mathcal{S}^{2}\times L^{2}(W)\times L^{2}\left (\tilde N^{n}\right)\) can be canonically embedded into the space \(\mathcal{S}^{2}\times L^{2}(W)\times L^{2}\left (\tilde N\right)\) by extending \(U^{n}_{s}(x)\) to \(\mathbb {R}_{0}\) via \(U^{n}_{s}(x):=0\) for \(|x|<\frac {1}{n}\). Moreover, recall that \(\tilde {N}^{n}=\chi _{\{1/n \leq |x|\}}\tilde {N},\) so that
Therefore, \(\left (Y^{n},Z^{n},U^{n}\chi _{\mathbb {R}\setminus {]-1/n,1/n[}}\right)\) solves \(\left (\mathbb {E}_{n}\xi,f_{n}\right)\) in \(\mathcal{S}^{2}\times L^{2}(W)\times L^{2}\left (\tilde N\right)\).
Theorem 3.4
Let ξ∈L2 and let f satisfy (A1)–(A3). Assume that the BSDE driven by Xn with data \(\left (\mathbb {E}_{n}\xi,f_{n}\right)\) (where fn is given by Definition 3.3) has a unique solution denoted by (Yn,Zn,Un). If the solution (Y,Z,U) to (ξ,f) exists as well, then,
in \(L^{2}(W)\times L^{2}(W)\times L^{2}\left (\tilde {N}\right)\) on \((\Omega,\mathcal {F},\mathbb {P})\). Moreover, if f additionally satisfies (A4), then the mentioned solution triplets exist.
The benefit of this approximation becomes clear in the proof of the comparison theorem which we state next. There, we only need to prove the comparison result assuming a finite Lévy measure, since the general case then follows by approximation.
Another consequence of this approximation result concerns nonlinear expectations. (For a survey article on nonlinear expectations the reader is referred to Peng (2010)). In the case of Lévy processes, provided that f(s,y,0,0)=0 for all s and y, the process Yt has been described by Royer (2006) as a conditional nonlinear expectation, denoted by \(\mathbb {E}^{f}_{t} \xi :=Y_{t}.\) Hence, our theorem implies that
Theorem 3.5
Let f,f′ be two generators satisfying conditions (A1)–(A3) of Theorem 3.1 (f and f′ may have different coefficients). We assume ξ≤ξ′, \(\mathbb {P}\)-a.s., and f(s,y,z,u)≤f′(s,y,z,u) for all (y,z,u), for \(\mathbb {P}\otimes \lambda \)-a.a. (ω,s)∈Ω×[0,T]. Moreover, assume that f or f′ satisfies the following condition (here formulated for f)
-
(A γ) \(f(s,y,z,u)- f(s,y,z,u^{\prime })\leq \int _{\mathbb {R}_{0}}\left (u^{\prime }(x)-u(x)\right)\nu (dx), \quad \mathbb {P}\otimes \lambda \)-a.e.
for all u,u′∈L2(ν) with u≤u′.
Let (Y,Z,U) and (Y′,Z′,U′) be the solutions to (ξ,f) and (ξ′,f′), respectively.
Then, \(Y_{t}\leq Y^{\prime }_{t}\) for all t∈[0,T], \(\mathbb {P}\)-a.s.
Proof
The basic idea of this proof was inspired by that of Theorem 8.3 in El Karoui et al. (2009).
Step 1:
In this step, we assume that the Lévy measure ν is finite. We use the Tanaka–Meyer formula (cf. Protter (2004), Theorem 70) to see that for \(\eta (s):=2\beta (s)^{2}+\nu (\mathbb {R}_{0})\),
Here, M(t) is a stochastic integral term with zero expectation, which follows from \(Y,Y^{\prime }\in \mathcal {S}^{2}\) (this holds according to Theorem 3.1). Moreover, we used that on the set {ΔYs≥0} (where ΔY:=Y−Y′) we have \(\left (Y_{s}-Y^{\prime }_{s}\right)_{+}=\left |Y_{s}-Y^{\prime }_{s}\right |\). Taking expectations and denoting the differences by Δξ:=ξ−ξ′, ΔZ:=Z−Z′, ΔU:=U−U′, and Δf:=f−f′ leads us to
We split up the set \(\mathbb {R}_{0}\) into
Taking into account that ξ≤ξ′, we estimate
We focus on the term \((\Delta Y_{s})_{+}\left (f\left (s,Y_{s},Z_{s},U_{s}\right)-f^{\prime }\left (s,Y^{\prime }_{s},Z^{\prime }_{s}, U^{\prime }_{s}\right)\right)\), and denoting ((Y,Z),(Y′,Z′)) by (Θ,Θ′), we derive from f≤f′ that
We continue with the observation that on {ω:ΔYs>0} we have
so that
Therefore, we split \((\Delta Y_{s})_{+}\left (f\left (s,\Theta _{s},U_{s}\right)-f\left (s,\Theta ^{\prime }_{s}, U^{\prime }_{s}\right)\right)\) into two terms; one we estimate with (A3) and the first inequality of (14), while for the other we use (A γ):
Thus, by the last two inequalities, (13) evolves to
Because of \(\left \|\Delta U_{s}\chi _{B}\right \|^{2}={\int }_{B}\!|\Delta U_{s}(x)|^{2}\nu (dx)\), we cancel out terms and get
Bounding \({\int }_{B^{c}}\!(\Delta Y_{s})^{2}_{+}\nu (dx)\) by \(\nu (\mathbb {R}_{0})(\Delta Y_{s})^{2}_{+}\), leads us to
Using also the definition of η, we are left with
The term \(e^{\int _{0}^{T} \eta (\tau)d\tau }\) is \(\mathbb {P}\)-a.s. bounded by a constant C>0. Thus, by the concavity of ρ, we arrive at
Then, the Bihari–LaSalle inequality (Proposition 5.2)—a generalization of Gronwall’s inequality—shows that \(\mathbb {E}(\Delta Y_{t})^{2}_{+}=0\) for all t∈[0,T], which is the desired result for \(\nu (\mathbb {R}_{0})<\infty \).
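For orientation, a standard forward form of the Bihari–LaSalle inequality reads as follows; this is our paraphrase, while the backward version actually used in the proofs is recalled as Proposition 5.2 in the appendix:

```latex
% With \rho nondecreasing, concave, \rho(0)=0, \int_{0^+} dx/\rho(x)=\infty,
% and G(x) := \int_1^x dh/\rho(h):
g(t) \;\le\; g_{0} + \int_{0}^{t} \lambda(s)\,\rho\bigl(g(s)\bigr)\,ds
\quad\Longrightarrow\quad
g(t) \;\le\; G^{-1}\!\Bigl(G(g_{0}) + \int_{0}^{t} \lambda(s)\,ds\Bigr).
```

Since \(\int_{0^{+}} dx/\rho(x)=\infty\), one has G(g0)→−∞ as g0↓0, so g0=0 forces g≡0; this is exactly the mechanism yielding \(\mathbb{E}(\Delta Y_{t})^{2}_{+}=0\) above.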
Step 2:
The goal of this step is to extend the result of the first step to general Lévy measures. We adopt the notation of Theorem 3.4 for Yn,Yn′,fn, and \(f_{n}^{\prime }\). We claim that for the solutions Yn and Yn′ of \((\mathbb {E}_{n}\xi,f_{n})\) and \(\left (\mathbb {E}_{n}\xi ^{\prime },f_{n}^{\prime }\right),\) Step 1 yields Yn≤Yn′: indeed, fn≤fn′ holds by the monotonicity of \(\mathbb {E}_{n}\), and (A γ) holds for fn if it holds for f. Note that the process Xn related to \((\mathbb {E}_{n}\xi,f_{n})\) and \((\mathbb {E}_{n}\xi ^{\prime },f_{n}^{\prime })\) has a finite Lévy measure νn satisfying \(\nu _{n}(|x|<\frac {1}{n})=0,\) while in (A γ) we still have ν. However, the solution processes Un and Un′ are zero for \(|x|<\frac {1}{n}\) (see the comment before Theorem 3.4).
Hence, we need (A γ) only for u and u′ which are zero for \(|x|<\frac {1}{n},\) and for those u and u′ we may replace ν by νn and then apply Step 1. Finally, the convergence of the sequences to the solutions Y and Y′ of (ξ,f) and (ξ′,f′), respectively, in L2(W) shows Y≤Y′, and our theorem is proven. □
4 Auxiliary results
We will frequently use the following basic algebraic inequalities (special cases of Young’s inequality) which hold for all R>0:
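The display (14) is not reproduced in this extraction. Judging from how "the first inequality of (14)" is invoked below with a free parameter R>0, it presumably consists of standard special cases of Young's inequality of the following kind (recorded here as an assumption; both follow from \((\sqrt{R}\,a - b/\sqrt{R})^{2}\geq 0\)):

```latex
2ab \;\le\; R\,a^{2} + \frac{1}{R}\,b^{2},
\qquad
(a+b)^{2} \;\le\; (1+R)\,a^{2} + \Bigl(1+\frac{1}{R}\Bigr)\,b^{2},
\qquad a,b\in\mathbb{R},\; R>0.
```

The second inequality follows from the first by expanding \((a+b)^{2}=a^{2}+2ab+b^{2}\).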
The following proposition states, roughly speaking, that for the BSDEs considered here it is sufficient to find solution processes of a BSDE in the (larger) space \(L^{2}(W)\times L^{2}(W)\times L^{2}(\tilde {N})\).
Proposition 4.1
If \((Y,Z,U)\in L^{2}(W)\times L^{2}(W)\times L^{2}\left (\tilde {N}\right)\) is a triplet of processes that satisfies the BSDE (ξ,f) with ξ∈L2 and (A1), (A2), then (Y,Z,U) is a solution to (7), i.e., \((Y,Z,U)\in \mathcal {S}^{2}\times L^{2}(W)\times L^{2}\left (\tilde {N}\right)\). In particular, there exists a constant C1>0 such that
where CK was defined in (8) and IF in (6).
Proof
Since (Y,Z,U) satisfies (7), it holds that
We apply the first inequality of (14), where Yt takes the role of a, to get for an arbitrary R>0:
Condition (A2) implies
We estimate with the help of the inequalities (14),
Hence,
Note that \(\int _{0}^{T} K_{F}(s)ds =1\) and choose \(R=R_{0}:= 5+\int _{0}^{T}\left (K_{1}(s)+K_{2}(s)^{2}\right)ds\) so that
Since Y is a càdlàg process, we may apply (46) from the appendix which leads to
The inequality (a+b)2≤2a2+2b2 and then Doob’s martingale inequality used on
yield, since a.s. R0≤5+CK and \(\int _{0}^{T} K_{1}(s)ds \le C_{K},\)
with
For a progressively measurable process η, which we will determine later, Itô’s formula implies that
where
Provided that \(\left \| \int _{0}^{T}\eta (\tau)d\tau \right \|_{L^{\infty }(\mathbb {P})} < \infty,\) one gets \(\mathbb {E} M(t)=0\) as a consequence of (15) and the Burkholder–Davis–Gundy inequality (see, for instance, (He et al. (1992), Theorem 10.36)), where the term \(\left ((Y_{s-}+ U_{s}(x))^{2}-Y_{s-}^{2}\right)^{2}\) appearing in the integrand can be estimated by
By (A2) and (14), we have
We use this estimate for R=2, and taking the expectation in (17), we have
Then, we choose η(s)=2(K1(s)+2K2(s)2) and subtract the terms containing Y,Z, and U from the left hand side of (19). Moreover, we apply the first inequality of (14) to the term containing the supremum. It follows that
Note that
Hence, by (20) and \(\int _{0}^{T}\eta (\tau)d\tau \le 4C_{K}\) a.s., we have
Now we can plug (21) into (15) and vice versa, which yields for R:=48c1 that
and
Using (16) it is easy to see that there exists a constant C1>0 such that each factor in front of the expectations on the right side of the previous two inequalities is less than \(\phantom {\dot {i}\!}e^{C_{1}(1+C_{K})^{2}}.\) □
Our next proposition is an L2 a priori estimate for BSDEs of our type. For the Brownian case, Lp a priori estimates are established for p∈[1,∞[ in Briand et al. (2003) and, for quadratic BSDEs, for p∈[2,∞[ in Geiss and Ylinen (2018). For BSDEs with jumps and p∈]1,∞[, see Kruse and Popier (2016, 2017), while Becherer et al. (2018) contains an a priori estimate w.r.t. L∞. The following assertion is similar to (Barles et al. (1997), Proposition 2.2) but fits our extended setting.
Proposition 4.2
Let ξ,ξ′∈L2 and let f,f′ be two generator functions satisfying (A1)–(A3), where the bounds in (A2) and the coefficients in (A3) may differ for f and f′. The coefficients of f′ in (A3) will be referred to as α′ and β′. Moreover, let the triplets (Y,Z,U) and \((Y^{\prime },Z^{\prime },U^{\prime })\in L^{2}(W)\times L^{2}(W)\times L^{2}\left (\tilde {N}\right)\), satisfy the BSDEs (ξ,f) and (ξ′,f′), respectively.
Then,
where \(a=\int _{0}^{T}\alpha ^{\prime }(s)ds, b=\left \|\int _{0}^{T}\beta ^{\prime }(s)^{2}ds\right \|_{\infty },\) and
is a function such that h(a,b,x)→0=h(a,b,0) as x→0.
Proof
We start with the following observation, obtained from Itô's formula for the difference of the BSDEs (ξ,f) and (ξ′,f′). We denote differences of expressions by Δ. With η(s)=4β′(s)2, we have, analogously to (17),
where
By the same reasoning as for (18), we have \(\mathbb {E}M(t)=0\). We now proceed with the (standard) arguments similar to those used for (17)–(19). By (A3) and the first inequality from (14),
Taking the expectation in (22) and then using (23) with R=1 (such that we can cancel out the terms with Z and U on the left side), leads to
The choice η(s)=4β′(s)2 and the fact that \( \int _{0}^{T}\beta ^{\prime }(s)^{2}ds \le b\) a.s. leads to
since ρ is a concave function.
Proposition 5.2, a backward version of the Bihari–LaSalle inequality, shows
where \(G(x)=\int _{1}^{x}\frac {1}{{\rho }(h)}dh.\)
If we take the expectation in (22) but choose this time (23) with \(R=\frac {1}{2}\) and omit \(\mathbb {E} e^{\int _{0}^{t}\eta (s)ds}|\Delta Y_{t}|^{2},\) then
We subtract the quadratic terms with ΔY,ΔZ, and ΔU which appear on the right hand side. This results in the inequality
We continue our estimate by
since η(s)=4β′(s)2. We put
so that (24) reads now as \( \sup _{t\in {[0,T]}}\mathbb {E}|\Delta Y_{t}|^{2} \leq H.\) If we add this inequality to (25) and note that \({\rho }\left (\sup _{t\in {[0,T]}}\mathbb {E}|\Delta Y_{t}|^{2}\right)\le \rho (H),\) we have
Note that the integral condition on ρ implies that, if the argument of G approaches zero, then the right hand side vanishes. □
The following lemma will be used to estimate the expectation of integrals which contain |Ys|2.
Lemma 4.3
Let ξ∈L2 and assume that (A1) and (A2) hold. If (Y,Z,U) is a solution to (ξ,f) and H is a nonnegative, progressively measurable process with \(\left \|\int _{0}^{T} H(s)ds \right \|_{\infty } < \infty,\) then
Proof
From the relations (17), (18) and integration by parts applied to the term \(\int _{0}^{T} H(s)ds \cdot e^{\int _{0}^{T} \eta (s)ds}|Y_{T}|^{2}, \) we get
We take expectations and rearrange the equation so that
By Assumption (A2) and (14), we have
so that for η(s)=2K1(s)+2K2(s)2 it follows
□
5 Proofs of Theorems 3.1 and 3.4
5.1 Proof of Theorem 3.1
Step 1: Uniqueness
Uniqueness of the solution is a consequence of Proposition 4.2, since the terms |ξ−ξ′| and |f(s,Ys,Zs,Us)−f′(s,Ys,Zs,Us)| are zero.
The proof of existence will be split into further steps.
Step 2:
In this step, we construct an approximating sequence of generators f(n) for f and show several estimates for the solution processes (Yn,Zn,Un) to the BSDEs (ξ,f(n)).
For n≥1, define cn(z):= min(max(−n,z),n) and \(\tilde {c}_{n}(u)\in L^{2}(\nu)\) to be the projection of u onto {v∈L2(ν):∥v∥≤n}. Let (Yn,Zn,Un) be the unique solution of the BSDE (ξ,f(n)), with the definitions
and
and
Note that f(n) satisfies (A1)–(A4), with the same coefficients as f. Moreover, by (A4), f(n) satisfies a Lipschitz condition with respect to u (see Remark 3.2). Thus, thanks to (Yin and Mao (2008), Theorem 2.1), (ξ,f(n)) has a unique solution (Yn,Zn,Un). Moreover, by Proposition 4.1, we get that
uniformly in n. This implies that the families
are uniformly integrable with respect to \(\mathbb {P}\), \(\mathbb {P}\otimes \lambda \) and \(\mathbb {P}\otimes \lambda \), respectively.
Step 3:
The goal of this step is to use Proposition 4.2 to get convergence of (Yn,Zn,Un)n in \(L^{2}(W)\times L^{2}(W)\times L^{2}(\tilde N)\) for a subsequence nk↑∞ if \(\delta _{n_{k},n_{l}} \to 0\) for k>l→∞, where
We observe that the difference of the generators is zero if two conditions are satisfied at the same time: First, if \(|Z^{n}|,\|U^{n}_{s}\|< n\), and additionally, by the cut-off procedure for F,K1,K2, if
Thus, putting
we have
due to the linear growth condition (A2). We estimate this further by
For \(\delta ^{(1)}_{n,m},\) we use the Cauchy–Schwarz inequality,
Since \(\sup _{n} \|Y^{n}\|_{\mathcal {S}^{2}}< \infty \) according to (28), it remains to show that the integral term converges to 0 for a subsequence.
Since \(|Z^{n}_{s}|\) and \(\|U^{n}_{s}\|\) are uniformly integrable w.r.t. \(\mathbb {P}\otimes \lambda,\) we infer from (29) that χn→0 in \(L^{1}(\mathbb {P}\otimes \lambda).\) Hence, there exists a subsequence (nk)k≥1 such that
By dominated convergence, we have \( \mathbb {E}\left | \int _{0}^{T}\chi _{n_{k}}(s) F(s)ds \right |^{2} \to 0 \) for k→∞ since F∈L2(Ω;L1([0,T])).
For \( \delta ^{(2)}_{n,m},\) we start with the Cauchy–Schwarz inequality and get
By Lemma 4.3,
Hence, (31) implies \(\delta ^{(2)}_{n_{k},m} \to 0\) for k→∞.
Finally,
so that we can argue like in (32) to get that \(\delta ^{(3)}_{n_{k},m} \to 0\) for k→∞.
Thus \(\phantom {\dot {i}\!}(Y^{n_{k}},Z^{n_{k}},U^{n_{k}})_{k\ge 1}\) converges to an object (Y,Z,U) in \(L^{2}(W)\times L^{2}(W)\times L^{2}\left (\tilde {N}\right)\).
Step 4:
In the final step, we want to show that (Y,Z,U) solves (ξ,f). For the approximating sequence \(\left (Y^{n_{k}},Z^{n_{k}},U^{n_{k}}\right)_{k\ge 1},\) the stochastic integrals and the left hand side of the BSDEs \(\left (\xi,f^{(n_{k})}\right)\) obviously converge in L2 to the corresponding terms of (ξ,f). Therefore, this subsequence of \(\left (\int _{t}^{T} f^{(n)}\left (s,Y^{n}_{s},Z^{n}_{s},U^{n}_{s}\right)ds\right)_{n=1}^{\infty }\) converges to a random variable Vt. We need to show that \(V_{t}=\int _{t}^{T} f(s,Y_{s},Z_{s},U_{s})ds\). To achieve this, consider
We start with the first integrand where, by the definition of fn and (29), and the growth condition (A2),
The estimates are similar to those in the previous step. Thanks to (31), we have \(\mathbb {E} \int _{t}^{T}\kappa ^{(1)}_{n_{k}}(s)ds \to 0.\) For the next term, the Cauchy–Schwarz inequality yields
so that by (31) the first factor converges to zero along the subsequence (nk). The last term we estimate using the Cauchy–Schwarz inequality w.r.t. \(\mathbb {P}\otimes \lambda,\)
and again by (31), we have convergence to zero along the subsequence (nk).
We continue by showing the convergence of the second term in (33). We extract a sub-subsequence of (nk)k≥1, which—slightly abusing notation—we again call (nk)k≥1, such that \(\phantom {\dot {i}\!}(Y^{n_{k}},Z^{n_{k}},U^{n_{k}})\), regarded as a triplet of measurable functions with values in \(\mathbb {R}\times \mathbb {R}\times L^{2}(\nu)\), converges to (Y,Z,U) for \(\mathbb {P}\otimes \lambda \)-a.a. (ω,s). Then, for an arbitrary K>0, we have
By dominated convergence and the continuity of f,
since by (A2) we can bound the integrand by
which is integrable. We let
Then, the remaining terms of (34) are bounded by
If we choose K large enough, then \(\delta ^{(1)}_{n_{k}}\) can be made arbitrarily small since the families \(\left (|Y_{s}^{n}|,n\geq 0\right)\) and \(\left (|Z^{n}_{s}|+\|U^{n}_{s}\|,n\geq 0\right)\) are uniformly integrable with respect to \(\mathbb {P}\otimes \lambda \). The same holds for
and
Hence, for δn defined in (33), we have that \({\lim }_{k\to \infty } \delta _{n_{k}}=0,\) which implies
We infer that for a sub-subsequence \((n_{k_{l}},l\geq 0)\) we get the a.s. convergence
Thus, for the original sequence, a.s.
and therefore the triplet (Y,Z,U) satisfies the BSDE (ξ,f).
5.2 Proof of Theorem 3.4
We start with a preparatory lemma:
Lemma 5.1
If f satisfies (A1)–(A4), then for all n≥0, fn constructed in Definition 3.3 also satisfies (A1)–(A4) (with different coefficients).
Proof
By definition, (ω,t)↦fn(t,y,z,u) is progressively measurable for all (y,z,u), thus (A1) is satisfied. The inequalities in (A2) and (A3) are a.s. satisfied, with coefficients \(\mathbb {E}_{n} F, \mathbb {E}_{n} K_{1}, \mathbb {E}_{n} K_{2}, \mathbb {E}_{n}\beta \). To ensure that these coefficients have a \(\left (\mathcal {F}_{t}^{n}\right)_{t\in {[0,T]}}\)-progressively measurable version, one applies the procedure from Definition 3.3 to the inequalities in (A2) and (A3) and notes that an equation analogous to (11) holds true.
It remains to show the a.s. continuity of fn in the (y,z,u)-variables required in (A3) for a.e. t. In (Ylinen (2017), Proposition 7.3), this was shown using the fact that the approximation of the generators appearing there can be carried out in spaces of continuous functions. However, since our situation involves L2(ν), a non-locally compact space, we cannot easily adapt the proof from Ylinen (2017) and therefore use different means.
Let D[0,T] be the space of càdlàg functions endowed with the Skorohod metric (which makes this space a Polish space). The Borel σ-algebra \(\mathcal {B}(\mathrm {D}[0,T])\) is generated by the coordinate projections \(p_{t}\colon \mathrm {D}[0,T]\to \mathbb {R}, \mathrm {x}\mapsto \mathrm {x}(t)\) (see Theorem 12.5 of Billingsley (1968), for instance). On this σ-algebra, let \(\mathbb {P}_{X}\) be the image measure induced by the Lévy process X: Ω→D[0,T],ω↦X(ω). We denote by \(\mathcal {G}\) the completion of \(\mathcal {B}(\mathrm {D}[0,T])\) with respect to \(\mathbb {P}_{X}\). For t∈[0,T], the notation
induces the natural identification
By this identification, we define a filtration on this space through
where \(\mathcal {N}_{X}{[0,T]}\) denotes the null sets of \(\mathcal {B}\left (\mathrm {D}{[0,T]}\right)\) with respect to the image measure \(\mathbb {P}_{X}\) of the Lévy process X. The same procedure applied to the Lévy process Xn yields a filtration \((\mathcal {G}_{t}^{n})_{t\in {[0,T]}}\) defined in the same way.
According to (Steinicke (2016), Theorem 3.4), which is a generalization of Doob’s factorization lemma to random variables depending on parameters, there is a \(\mathcal {G}_{t}\otimes \mathcal {B}([0,t]\times \mathbb {R}^{2}\times L^{2}(\nu))\)-measurable functional
and a \(\mathcal {G}_{t}^{n}\otimes \mathcal {B}([0, t]\times \mathbb {R}^{2}\times L^{2}(\nu))\)-measurable functional
such that \(\mathbb {P}\)-a.s.,
Note also that if \(\mathbb {P}_{X}(M)=0\) for \(M\in \mathcal {G}\), then also \(\mathbb {P}(X^{-1}(M))=0\). Thus, without loss of generality, we may assume that \((\Omega,\mathcal {F},\mathbb {P})=(\mathrm {D}[0,T],\mathcal {G},\mathbb {P}_{X})\) and \((\Omega,\mathcal {F}_{t}^{n},\mathbb {P})=(\mathrm {D}([0,t]),\mathcal {G}_{t}^{n},\mathbb {P}_{X})\), which are standard Borel spaces. For more details on D[0,T], see Billingsley (1968) and (Delzeith (2004), Section 4).
Now, fix \(N\in \mathbb {N}\) and let \(c_{0}:=\{(a_{n})_{n}\in (\mathbb {R}^{2}\times L^{2}(\nu))^{\mathbb {N}}: a_{n}\to 0\}.\) For a∈c0, let \(\|a\|_{c_{0}}=\sup _{n\in \mathbb {N}} (|a_{n}(1)|+|a_{n}(2)|+\|a_{n}(3)\|)\), where \(a_{n}(k)\), k=1,2,3, denote the components of \(a_{n}\) in \(\mathbb {R}\), \(\mathbb {R}\), and L2(ν). The space c0 is a Polish space. Let BN be the ball with radius \(N\in \mathbb {N}\) in c0 and let \(B^{\prime }_{N}\) be the ball of radius N in \(\mathbb {R}^{2}\times L^{2}(\nu)\). The balls \(B_{N}, B^{\prime }_{N}\) are again Polish spaces.
We consider a Borel set \(M_{T}\subseteq [0,T]\) of times t for which f is continuous in (y,z,u) and for which f has an integrable bound:
From (A3) and (9) it follows that one can choose MT such that λ(MT)=T.
For a fixed t∈MT we define the function
where φ denotes a triplet \((y,z,u)\in \mathbb {R}^{2}\times L^{2}(\nu)\). This function is measurable since fn(·,t,·) is measurable, \(\pi _{m}:B_{N}\times B^{\prime }_{N}\to \mathbb {R}^{2}\times L^{2}(\nu), (a,\varphi)\mapsto (a_{m}+\varphi)\) is continuous and \(\text {id}\times \pi _{m}:\Omega \times B_{N}\times B^{\prime }_{N}\to \Omega \times \mathbb {R}^{2}\times L^{2}(\nu)\) is measurable.
Next, we consider the map
The set where the limit exists is measurable, since it can be written as
Therefore, H can be written as the pointwise limit of measurable functions and is thus measurable.
We now know that, for a fixed pair \((a,\varphi)\in B_{N}\times B^{\prime }_{N}\),
Thus, by (36)
By the continuity of f and the dominated convergence theorem for conditional expectations, we infer that up to a null set \(M(a,\varphi)\in \mathcal {F}^{n}_{t}\), we have the relation
In other words, on the complement of M(a,φ), we have H(ω,a,φ)=fn(ω,t,φ). This means that H and fn(·,t,·) are "versions" of each other. What we need, however, is "indistinguishability" of the processes.
For this purpose, let \((A,\Phi):\Omega \to B_{N}\times B^{\prime }_{N}\) be an arbitrary \(\mathcal {F}^{n}_{t}\)-measurable function. Like above, by the definition of the optional projection, (A2), and the continuity of f, we get the equation
which also holds \(\mathbb {P}\)-a.s. This equality means that
All \(\mathcal {F}^{n}_{t}\) are complete σ-algebras (in fact, they contain all null sets of \(\mathcal {F}\)) and the spaces \(B_{N},B^{\prime }_{N}\) are Polish. Thus we may use a generalized version of the section theorem, the Jankov–von Neumann theorem (Theorem 5.3), by choosing a uniformizing function \(\left (\hat A,\hat \Phi \right)\) for the set
Note that P is a Borel set and therefore in particular analytic, since H and fn(·,t,·) (interpreted as a constant map w.r.t. a) are measurable functions in (ω,a,φ). Since for this choice of \(\left (\hat A,\hat \Phi \right)\) it holds, as seen above, that
it follows that the projection of P to Ω is a null set. Therefore, H and fn are indistinguishable. Hence, we find a null set \(M_{N}\in \mathcal {F}^{n}_{t}\), such that for ω outside this set and for all \((a,\varphi)\in B_{N}\times B^{\prime }_{N}\):
But this means continuity at all points of \(B^{\prime }_{N}\) a.s. It remains to take the union of the sets MN over all \(N\in \mathbb {N}\) to obtain a set on whose complement the function is continuous at all points of \(\mathbb {R}^{2}\times L^{2}(\nu)\). □
Proof
Step 1:
If f satisfies (A1)–(A4), then by Lemma 5.1 all fn do so as well. In this case, for all n≥0, the equations \((\mathbb {E}_{n}\xi,f_{n})\) have unique solutions by Theorem 3.1. In general, the coefficients in (A2) and β depend on n, since F,K1,K2,β are replaced by the coefficients \(\mathbb {E}_{n} F, \mathbb {E}_{n} K_{1}, \mathbb {E}_{n} K_{2}, \mathbb {E}_{n}\beta \).
Let us compare the solutions (Yn,Zn,Un) and (Y,Z,U). We start by comparing (Yn,Zn,Un) and \((\mathbb {E}_{n} Y, \mathbb {E}_{n} Z, \mathbb {E}_{n} U)\). Here, for instance, the process \(((\mathbb {E}_{n} Y)_{t})_{t\in {[0,T]}}\) is defined as an optional projection with respect to the filtration \(\left (\mathcal {F}^{n}_{t}\right)_{t\in {[0,T]}}\), similar to Definition 3.3. The processes defined in this way are versions of the processes \((\mathbb {E}_{n} Y_{t},\mathbb {E}_{n} Z_{t}, \mathbb {E}_{n} U_{t})_{t\in {[0,T]}}\).
Using the BSDE for (Y,Z,U), we get \(\mathbb {P}\)-a.s.
since
Now, to estimate \(\|Y^{n}-\mathbb {E}_{n} Y\|_{L^{2}(W)}+\|Z^{n}-\mathbb {E}_{n} Z\|_{L^{2}(W)}+\|U^{n}-\mathbb {E}_{n} U\|_{L^{2}(\tilde N)}\), we apply Itô’s formula to the difference of the BSDE \((\mathbb {E}_{n} \xi,f_{n})\) and (37). Similar to the proof of Proposition 4.2, we get, denoting differences by Δn and η:=4β(s)2,
By the measurability of (Yn,Zn,Un), the equality
holds \(\mathbb {P}\)-a.s. for all s. We now estimate
Now, we can carry out exactly the same steps as in the standard procedure used in the proof of Proposition 4.2. This means that \(\|\Delta ^{n} Y\|_{L^{2}(W)}+\|\Delta ^{n} Z\|_{L^{2}(W)} +\|\Delta ^{n} U\|_{L^{2}(\tilde N)}\) converges to zero if
does, which we will show in the following steps.
Step 2:
In this step, we show that the solution processes (Yn,Zn,Un) satisfy the estimate
This, as in the proof of Theorem 3.1, leads to the uniform integrability of the processes (|Yn|,n≥0) and (|Zn|+∥Un∥,n≥0) with respect to \(\mathbb {P}\otimes \lambda \).
By Proposition 4.1, we get that
where \(C_{K,n}= \left \|\int _{0}^{T}\left (\mathbb {E}_{n} K_{1}(s)+(\mathbb {E}_{n} K_{2}(s))^{2}\right)ds\right \|_{\infty }\). By the monotonicity of \(\mathbb {E}_{n}\) and Jensen’s inequality, we get that
Doob’s martingale inequality applied to \(n\mapsto \mathbb {E}_{n}\xi \) and \(n\mapsto I_{\mathbb {E}_{n}F}=\mathbb {E}_{n}\int _{0}^{T}F(s)ds\) yields that
Furthermore,
follows from martingale convergence and Jensen’s inequality and implies uniform integrability of the processes \(\left ({|\mathbb {E}_{n} Y|},n\geq 0\right)\text { and }\left (|\mathbb {E}_{n} Z|+\|\mathbb {E}_{n} U\|,n\geq 0\right)\) with respect to \(\mathbb {P}\otimes \lambda \).
Step 3:
In this step, we show the convergence (38). From martingale convergence, we get that for all t∈[0,T], \(\mathbb {E}_{n} Y_{t}\to Y_{t}\), \(\mathbb {E}_{n} Z_{t}\to Z_{t}\) and \(\mathbb {E}_{n} U_{t}\to U_{t}\), \(\mathbb {P}\)-a.s. This implies that \(f(s,\mathbb {E}_{n} Y_{s},\mathbb {E}_{n} Z_{s},\mathbb {E}_{n} U_{s})\to f(s,Y_{s},Z_{s},U_{s})\) in \(\mathbb {P}\otimes \lambda \)-measure. Therefore,
since the integrals form a uniformly integrable sequence with respect to \(\mathbb {P}\otimes \lambda \). Indeed, we have, using (A2) for f and the first equation of (14), the estimate
where \( n\mapsto \mathbb {E}_{n} Z_{s}, n\mapsto \mathbb {E}_{n} U_{s}\) converge since they are closable martingales.
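The closability of these martingales is what drives the convergence \(\mathbb {E}_{n}X\to X\). As a purely illustrative sketch—using a hypothetical dyadic filtration on [0,1] rather than the filtrations \(\left (\mathcal {F}^{n}_{t}\right)\) of this paper—one can watch Lévy's upward martingale convergence theorem numerically: for U uniform on [0,1] and \(X=U^{2}\), the conditional expectation given the first n binary digits of U is available in closed form and converges to X.

```python
import math

def cond_exp_sq(u, n):
    """E[U^2 | first n binary digits of U], for U uniform on [0,1].

    Given the first n digits, U = a + 2^{-n} V with V uniform on [0,1]
    and a = floor(u * 2^n) / 2^n, so
    E[U^2 | F_n] = a^2 + a * 2^{-n} + 2^{-2n} / 3.
    """
    a = math.floor(u * 2**n) / 2**n
    return a * a + a * 2.0**-n + 2.0**(-2 * n) / 3.0

u = 0.6180339887  # one fixed sample point omega
errs = [abs(cond_exp_sq(u, n) - u * u) for n in (1, 5, 10, 20, 30)]

# martingale convergence: the approximation error shrinks along n
assert all(e1 >= e2 for e1, e2 in zip(errs, errs[1:]))
assert errs[-1] < 1e-8
```

The same mechanism, with \(\mathbb {E}_{n}\) the projections onto the filtrations generated by the truncated Lévy processes, underlies the limits used in this step.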
Next, we will show that
can be made arbitrarily small by the choice of K>0, uniformly in n. Again by (A2) and using the notation \(\chi ^{n}_{K}(s):= \chi _{\{|Y^{n}_{s}|+|\mathbb {E}_{n} Y_{s}|> K\}},\) we estimate like in (30)
and get
For \(\delta ^{(1)}_{n,K},\) we estimate
which tends to zero as K→∞, since \(\chi ^{n}_{K}\to 0\) in \(\mathbb {P}\otimes \lambda \)-measure, uniformly in n, as K→∞. The latter is implied by the uniform integrability of the families (|Yn|)n≥0 and \((|\mathbb {E}_{n} Y|)_{n\geq 0}\) with respect to \(\mathbb {P}\otimes \lambda.\) We continue with the next summands,
and
where, for \(\mathbb {E}\int _{0}^{T}\chi ^{n}_{K}(s) \left (|Y_{s}|^{2} +|Y^{n}_{s}|^{2}\right)K_{1}(s)ds\) and \(\mathbb {E}\int _{0}^{T} \chi ^{n}_{K}(s) |Y^{n}_{s}|^{2} K_{2}(s)^{2}ds,\) we will apply the estimate (27) from the proof of Lemma 4.3. For example (the other terms can be treated similarly), we get
with \(\int _{0}^{T} \eta _{n}(s)ds =\int _{0}^{T} \left (\mathbb {E}_{n} K_{1}(s)+(\mathbb {E}_{n} K_{2}(s))^{2}\right)ds \le C_{K}\) a.s. Now, one gets that
Furthermore, using \(\sup _{n\geq 0}\mathbb {E}_{n}\int _{0}^{T}F(s)ds<\infty \), \(\mathbb {P}\)-a.s. (which follows from martingale convergence),
independently of n. Since, by Doob’s maximal inequality,
dominated convergence is applicable to the last expression in (45). The first summand containing ξ can be treated in the same way.
The terms containing \(|\mathbb {E}_{n} Y_{s}|\) in the inequalities (43) and (44), e.g., the expression \(\mathbb {E}\int _{0}^{T} \chi ^{n}_{K}(s) |\mathbb {E}_{n} Y_{s}|^{2}K_{1}(s)ds,\) can be estimated by
where we used Doob’s maximal inequality again. Since \(\int _{0}^{T}\chi ^{n}_{K}(s) K_{1}(s)ds\to 0\) in probability as K→∞, all the terms in (43) and (44) become small, uniformly in n, if K is large. So the expressions \(\delta ^{(2)}_{n,K}\) and \(\delta ^{(3)}_{n,K}\) can be made arbitrarily small by the choice of K, which gives us the desired convergence
Step 4:
Since, by the last step,
and also, by martingale convergence,
we get
□
6 Appendix
The Bihari–LaSalle inequality. For the Bihari–LaSalle inequality we refer to (Mao (1997), pp. 45-46). Here, we formulate a backward version of it which has been applied in Yin and Mao (2008). The proof is analogous to that in Mao (1997).
Proposition 5.2
Let c>0. Assume that ρ:[0,∞[→[0,∞[ is a continuous and non-decreasing function such that ρ(x)>0 for all x>0. Let K be a non-negative, integrable Borel function on [0,T], and y a non-negative, bounded Borel function on [0,T], such that
Then, it holds that
for all t∈[0,T] such that \(G(c) + \int _{t}^{T} K(s) ds \in \text {dom}\left (G^{-1}\right).\) Here
and G−1 is the inverse function of G. In particular, if ρ(r)=r for r∈[0,∞[, it holds that
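Assuming, as in (Mao (1997), pp. 45–46), that \(G(x)=\int _{1}^{x} dr/\rho (r)\), the linear case can be written out explicitly:

```latex
\rho(r)=r \;\Longrightarrow\; G(x)=\int_{1}^{x}\frac{dr}{r}=\log x,
\qquad G^{-1}(y)=e^{y},
\qquad\text{so}\qquad
y(t) \;\le\; G^{-1}\!\Big(\log c+\int_{t}^{T}K(s)\,ds\Big)
     \;=\; c\,\exp\!\Big(\int_{t}^{T}K(s)\,ds\Big),
```

which is the backward Gronwall inequality.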
The Jankov–von Neumann theorem. If X and Y are sets and P⊆X×Y, then P∗⊆P is called a uniformization of P if and only if P∗ is the graph of a function f:projX(P)→Y, i.e., P∗={(x,f(x)):x∈projX(P)}. Such a function f is called a uniformizing function for P. Let \( \Sigma _{1}^{1}(X)\) denote the class of analytic subsets of X. The following theorem can be found, for example, in (Kechris (1994), Theorem 18.1).
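Before stating the theorem, the notion of a uniformizing function can be illustrated on a small, hypothetical finite example (finite sets are trivially Borel; the force of the theorem below lies entirely in the analytic/standard Borel setting, where making the selection measurably is the hard part):

```python
# Toy sets, chosen only to illustrate the definition of uniformization:
# P ⊆ X × Y, and f picks, for each x in proj_X(P), one y with (x, y) in P.
P = {(1, 'a'), (1, 'b'), (2, 'c'), (4, 'a'), (4, 'c')}

proj_X = {x for (x, _) in P}                                # {1, 2, 4}
f = {x: min(y for (x2, y) in P if x2 == x) for x in proj_X}  # one choice per x
P_star = {(x, f[x]) for x in proj_X}                         # graph of f

assert P_star <= P                          # P* is a subset of P
assert {x for (x, _) in P_star} == proj_X   # f is defined on all of proj_X(P)
assert len(P_star) == len(proj_X)           # exactly one y per x, so f is a function
```

Here the selection rule (taking the minimum) is arbitrary; the theorem guarantees that in the analytic setting a selection exists that is \(\sigma \left (\Sigma _{1}^{1}(X)\right)\)-measurable.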
Theorem 5.3
(Jankov–von Neumann theorem) Assume that X and Y are standard Borel spaces and P⊆X×Y is an analytic set. Then, P has a uniformizing function that is \(\sigma \left (\Sigma _{1}^{1}(X)\right)\)-measurable.
References
Applebaum, D: Lévy Processes and Stochastic Calculus. Cambridge University Press, Cambridge (2004).
Barles, G, Buckdahn, R, Pardoux, É: Backward stochastic differential equations and integral-partial differential equations. Stoch. Stoch. Rep. 60(1-2), 57–83 (1997).
Becherer, D, Büttner, M, Kentia, K: On the monotone stability approach to BSDEs with jumps: Extensions, concrete criteria and examples (2018). https://arxiv.org/abs/1607.06644.
Billingsley, P: Convergence of probability measures. Wiley, New York (1968).
Briand, P, Delyon, B, Hu, Y, Pardoux, E, Stoica, L: \(L^{p}\) solutions of backward stochastic differential equations. Stoch. Proc. Appl. 108, 109–129 (2003).
Cao, Z, Yan, J: A comparison theorem for solutions of backward stochastic differential equations. Adv. Math. 28(4), 304–308 (1999).
Cohen, S, Elliott, R, Pearce, C: A General Comparison Theorem for Backward Stochastic Differential Equations. Adv. Appl. Probab. 42(3), 878–898 (2010).
Darling, R, Pardoux, É: Backwards sde with random terminal time and applications to semilinear elliptic pde. Ann. Probab. 25(3), 1135–1159 (1997).
Delzeith, O: On Skorohod spaces as universal sample path spaces (2004). https://arxiv.org/abs/math/0412092v1.
El Karoui, N, Hamadène, S, Matoussi, A: Backward stochastic differential equations. In: Carmona, R (ed.)Indifference Hedging: Theory and Applications, pp. 267–320. Princeton University Press (2009).
El Karoui, N, Tan, X: Capacities, measurable selection and dynamic programming part I: abstract framework (2013). https://arxiv.org/abs/1310.3363.
El Karoui, N, Peng, S, Quenez, M: Backward Stochastic Differential Equations in Finance. Math. Financ. 7(1), 1–71 (1997).
Fan, S, Jiang, L: A Generalized Comparison Theorem for BSDEs and Its Applications. J. Theor. Probab. 25, 50–61 (2012).
Geiss, C, Steinicke, A: Existence, Uniqueness and Malliavin Differentiability of Lévy-driven BSDEs with locally Lipschitz Driver (2018). https://arxiv.org/abs/1805.05851.
Geiss, S, Ylinen, J: Decoupling on the Wiener Space, Related Besov Spaces, and Applications to BSDEs. To appear in Memoirs AMS (2018).
Gobet, E, Turkedjiev, P: Linear regression MDP scheme for discrete backward stochastic differential equations under general conditions. Math. Comp. 85, 1359–1391 (2016).
He, SW, Wang, JG, Yan, JA: Semimartingale Theory and Stochastic Calculus. CRC Press, Boca Raton (1992).
Kechris, A: Classical Descriptive Set Theory. Springer, New York (1994).
Kruse, T, Popier, A: BSDEs with monotone generator driven by Brownian and Poisson noises in a general filtration. Stochastics. 88(4), 491–539 (2016).
Kruse, T, Popier, A: \(L^{p}\)-solution for BSDEs with jumps in the case p<2. Stochastics (2017). https://doi.org/10.1080/17442508.2017.1290095.
Mao, X: Adapted solutions of backward stochastic differential equations with non-Lipschitz coefficients. Stoch. Process. Appl. 58, 281–292 (1995).
Mao, X: Stochastic Differential Equations and Applications. Woodhead Publishing Limited, Cambridge (1997).
Meyer, PA: Une remarque sur le calcul stochastique dépendant d’un paramètre. Séminaire probabilités (Strasbourg), tome. 13, 199–203 (1979).
Pardoux, É: Generalized discontinuous backward stochastic differential equations. In: El Karoui, N, Mazliak, L (eds.)Backward Stochastic Differential Equations, Pitman Res. Notes Math., vol. 364, pp. 207–219. Longman, Harlow (1997).
Pardoux, É, Zhang, S: Generalized BSDEs and nonlinear Neumann boundary value problems. Probab. Theory Relat. Fields. 110, 535–558 (1996).
Peng, S: A generalized dynamic programming principle and hamilton-jacobi-bellman equation. Stoch. Stoch. Rep. 38, 119–134 (1992).
Peng, S: Backward stochastic differential equation, nonlinear expectation and their applications. In: Proceedings of the International Congress of Mathematicians, Volume I, pp. 393–432. Hindustan Book Agency, New Delhi (2010).
Protter, P: Stochastic Integration and Differential Equations. Springer, Berlin (2004).
Royer, M: Backward stochastic differential equations with jumps and related non-linear expectations. Stoch. Process. Appl. 116(10), 1358–1376 (2006).
Sato, K: Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, Cambridge (1999).
Situ, R: On solutions of backward stochastic differential equations with jumps and applications. Stoch. Process. Appl. 66, 209–236 (1997).
Steinicke, A: Functionals of a Lévy Process on Canonical and Generic Probability Spaces. J. Theoret. Probab. 29, 443–458 (2016).
Sow, AB: BSDE with jumps and non-Lipschitz coefficients: Application to large deviations. Braz. J. Probab. Stat. 28(1), 96–108 (2014).
Yao, S: \(\mathbb {L}^{p}\)-solutions of Backward Stochastic Differential Equations with Jumps. Stoch. Proc. Appl. 127(11), 3465–3511 (2017).
Yin, J, Mao, X: The adapted solution and comparison theorem for backward stochastic differential equations with Poisson jumps and applications. J. Math. Anal. Appl. 346, 345–358 (2008).
Ylinen, J: Weighted Bounded Mean Oscillation applied to Backward Stochastic Differential Equations (2017). https://arxiv.org/abs/1501.01183.
Acknowledgements
The authors thank Stefan Geiss and Juha Ylinen, University of Jyväskylä, for fruitful discussions and valuable suggestions.
Moreover, we are sincerely grateful to the anonymous reviewers for their helpful comments and questions.
Christel Geiss would like to thank the Erwin Schrödinger Institute, Vienna, for hospitality and support, where a part of this work was written.
Funding
Large parts of this article were written when Alexander Steinicke was member of the Institute of Mathematics and Scientific Computing, University of Graz, Austria, and supported by the Austrian Science Fund (FWF): Project F5508-N26, which is part of the Special Research Program “Quasi-Monte Carlo Methods: Theory and Applications.”
Availability of data and material
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Author information
Authors and Affiliations
Contributions
Both authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Geiss, C., Steinicke, A. Existence, uniqueness and comparison results for BSDEs with Lévy jumps in an extended monotonic generator setting. Probab Uncertain Quant Risk 3, 9 (2018). https://doi.org/10.1186/s41546-018-0034-y