
The Maximum Principle for Stochastic Control Problem with Jumps in Progressive Structure


Abstract

In this paper, we introduce a new structure, named the “progressive structure”, to deal with stochastic control problems with jumps. At the beginning, an example is given to motivate the new structure in comparison with the traditional one. We then obtain the maximum principle for a forward stochastic control problem by means of a new variational method. The control is allowed to enter both the diffusion and the jump terms, and the control domain is convex. Finally, we apply the theoretical results to another example to illustrate the efficiency of our method.



Acknowledgements

This work was supported by the Natural Science Foundation of China (11831010, 61961160732), Shandong Provincial Natural Science Foundation (ZR2019ZD42) and the Taishan Scholars Climbing Program of Shandong (TSPD20210302).

Author information

Correspondence to Zhen Wu.

Additional information

Communicated by Nizar Touzi.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Existence and Uniqueness of the Solution of the SDE

We are given the following SDE

$$\begin{aligned} X_t&=x_0+\int _0^t\int _Z b(s,e,X_s)\lambda (de)ds+\int _0^t \int _Z a(s,e, X_{s-})N(ds, de)\nonumber \\&\quad +\int _0^t\int _Z \sigma (s,e, X_s)\lambda (de)dB_s+\int _0^t \int _Z c(s,e, X_{s-}){\tilde{N}}(ds, de), \end{aligned}$$
(24)

where \(x_0\in R^n\), \(b,a,\sigma ,c:\varOmega \times [0, T]\times Z\times R^n \rightarrow R^n\), and n is the dimension of X. We introduce the Banach space

$$\begin{aligned} S^2[0, T]:=\left\{ X\mid X \text { has c\`adl\`ag paths, is adapted, and }E\left[ \sup _{0\le t\le T}|X_t|^2\right] <\infty \right\} \end{aligned}$$

with norm \(\Vert X\Vert ^2=E\left[ \sup _{0\le t\le T}|X_t|^2\right] \).
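To make the structure of (24) concrete, here is a minimal Euler-type simulation sketch in Python. It assumes a scalar state, a two-point mark space Z with a finite intensity measure \(\lambda \), and toy coefficients b, a, \(\sigma \), c; all of these choices are illustrative and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

marks = np.array([-1.0, 1.0])           # mark space Z (two illustrative marks)
lam_w = np.array([0.5, 0.5])            # lambda({e}) for each mark e
lam_Z = lam_w.sum()                     # total mass lambda(Z)

def b(t, e, x):     return -0.5 * x          # drift coefficient
def a(t, e, x):     return 0.1 * e           # jump coefficient driven by N
def sigma(t, e, x): return 0.2 * x           # diffusion coefficient
def c(t, e, x):     return 0.05 * e * x      # jump coefficient driven by compensated N

def simulate(x0=1.0, T=1.0, n_steps=1000):
    dt = T / n_steps
    x = x0
    for k in range(n_steps):
        t = k * dt
        # dt-terms: int_Z b lambda(de), minus the lambda-compensator of the c-term
        drift = sum(w * (b(t, e, x) - c(t, e, x)) for e, w in zip(marks, lam_w))
        # Brownian term: (int_Z sigma lambda(de)) dB
        diff = sum(w * sigma(t, e, x) for e, w in zip(marks, lam_w))
        x += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        # jumps of N on (t, t+dt]: Poisson(lambda(Z) dt) many, marks drawn from lambda/lambda(Z)
        for _ in range(rng.poisson(lam_Z * dt)):
            e = rng.choice(marks, p=lam_w / lam_Z)
            x += a(t, e, x) + c(t, e, x)     # the a-term and the jump part of the c-term
    return x

print(simulate())
```

The compensated integral against \({\tilde{N}}\) is handled by subtracting the \(\lambda \)-compensator of c in the drift and adding c at the actual jump times of N.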

Lemma A.1

Suppose that \(X_t\) is an adapted process with càdlàg paths such that

$$\begin{aligned} E\left[ \sup _{0\le t\le T}|X_t|^p\right] <\infty , \end{aligned}$$

for some \(p>0\). Then, we have

$$\begin{aligned} E\left( \int _0^T|X_{s-}|N(ds,Z)\right) ^p\le Ce^{CT}T E\left[ \sup _{0\le s\le T}|X_s|^p\right] , \end{aligned}$$

where C is a constant depending only on p.

Proof

Set \(A_t=\int _0^t |X_{s-}| N(ds,Z)\). Since \(N([0,t]\times Z)\) is a pure jump process, so is \(A_t\). Notice that every jump time of \(A_t\) is also a jump time of \(N([0,t]\times Z)\) and that the jump size of \(N([0,t]\times Z)\) is always equal to 1, so we have

$$\begin{aligned} A_t^p&=\sum _{s\le t}\left( A_s^p-A_{s-}^p\right) =\sum _{s\le t}\left( A_s^p-A_{s-}^p\right) I_{\{N(\{s\}\times Z)\ne 0\}}\\&=\sum _{s\le t}\left( (A_{s-}+|X_{s-}|)^p-A_{s-}^p\right) N(\{s\}\times Z)\\&=\int _0^t\left[ (A_{s-}+|X_{s-}|)^p-A_{s-}^p\right] N(ds, Z)\\&\le C\int _0^t\left( A_{s-}^p+|X_{s-}|^p\right) N(ds, Z), \end{aligned}$$

for some constant C depending only on p. Let \(T_k\), \(k\ge 1\), denote the k-th jump time of N, so that \(A_{t\wedge T_k}\) contains at most k jumps. Since \(A_{\cdot -}\), \(X_{\cdot -}\) and \(I_{[0,T_k]}\) are predictable, taking expectations at the stopped time \(t\wedge T_k\), we have

$$\begin{aligned} E\left[ A_{t\wedge T_k}^p\right]&\le CE\left[ \int _0^t\left( A_{s\wedge T_k}^p+|X_{s\wedge T_k}|^p\right) ds\right] \le C\int _0^tE\left[ A_{s\wedge T_k}^p\right] ds\\&\quad +CTE\left[ \sup _{0\le t\le T}|X_t|^p\right] . \end{aligned}$$

Since \(E\left[ A_{s\wedge T_k}^p\right] \le k^pE\left[ \sup _{0\le t\le T}|X_t|^p\right] <\infty \), by Gronwall’s inequality, we have

$$\begin{aligned} E\left[ A_{t\wedge T_k}^p\right] \le CTE\left[ \sup _{0\le t\le T}|X_t|^p\right] e^{Ct}\le Ce^{CT}TE\left[ \sup _{0\le t\le T}|X_t|^p\right] . \end{aligned}$$

Letting k go to infinity, by Fatou’s lemma, we have

$$\begin{aligned} E\left[ A_{t}^p\right] \le \varliminf _{k\rightarrow \infty }E\left[ A_{t\wedge T_k}^p\right] \le Ce^{CT}TE\left[ \sup _{0\le t\le T}|X_t|^p\right] . \end{aligned}$$

\(\square \)
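As a rough Monte Carlo illustration of the two sides of Lemma A.1 (without tracking the constant C), the sketch below takes p = 2, X a standard Brownian motion (so \(X_{s-}=X_s\)) and N a rate-one Poisson process with \(\lambda (Z)=1\); these choices are purely illustrative and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n, n_mc = 1.0, 1000, 5000
dt = T / n

lhs, sup_moment = 0.0, 0.0
for _ in range(n_mc):
    B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))))
    dN = rng.poisson(dt, n)                  # N-increments on each grid cell (rate 1)
    A_T = np.sum(np.abs(B[:-1]) * dN)        # int_0^T |X_{s-}| N(ds, Z), left endpoints
    lhs += A_T ** 2 / n_mc
    sup_moment += np.max(B ** 2) / n_mc

print("E[(int_0^T |X_{s-}| N(ds,Z))^2] ~", lhs)
print("T * E[sup_{s<=T} |X_s|^2]       ~", T * sup_moment)
```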

We have the following assumptions:

Assumption H1:

  1. (i)

    \(b,a,\sigma ,c\) are \({\mathscr {G}} \otimes {\mathscr {Z}}\otimes {\mathscr {B}}(R^n)/{\mathscr {B}}(R^n)\) measurable.

  2. (ii)

    \(b, a,\sigma , c\) are uniformly Lipschitz continuous with respect to x.

  3. (iii)

    \(E\left( \int _0^T \left( \int _Z|b(\omega ,t,e, 0)|\lambda (de)\right) dt\right) ^2<\infty \), \(E\int _0^T \left( \int _Z|\sigma (\omega ,t,e, 0)|\lambda (de)\right) ^2dt<\infty \), \(E\left( \int _0^T\int _Z |a(\omega ,t,e,0)|N(dt, de)\right) ^2<\infty \), \(E\int _0^T\int _Z |c(\omega ,t,e, 0)|^2N(dt, de)<\infty \).

Theorem A.1

Under Assumption H1, (24) has a unique solution in \(S^2[0, T]\).

Proof

First, we show that there is a unique solution on a small time interval. We define a map \({\mathscr {T}}\) from \(S^2[0, T]\) to \(S^2[0, T]\) by

$$\begin{aligned} {\mathscr {T}}(X)_t&=x_0+\int _0^t \int _Zb(s,e, X_s)\lambda (de)ds+\int _0^t\int _Z a(s,e, X_{s-})N(ds, de)\\&\quad +\int _0^t\int _Z\sigma (s,e, X_s)\lambda (de)dB_s+\int _0^t\int _Z c(s,e, X_{s-}){\tilde{N}}(ds, de). \end{aligned}$$

By (ii) and (iii) of Assumption H1 and Lemma A.1, it is easy to show that \({\mathscr {T}}\) indeed maps \(S^2[0, T]\) into itself. For any \(X, Y\in S^2[0, T]\),

$$\begin{aligned} \Vert {\mathscr {T}}(X)-{\mathscr {T}}(Y)\Vert ^2&\le CE\left[ \left( \int _0^T \int _Z|b(t,e, X_t)-b(t, e, Y_t)|\lambda (de)dt\right) ^2\right] \\&\quad +CE\left[ \sup _{0\le t\le T}\left| \int _0^t \int _Z\left[ \sigma (s,e, X_s)-\sigma (s,e, Y_s)\right] \lambda (de)dB_s\right| ^2\right] \\&\quad +CE\left[ \left( \int _0^T \int _Z|a(t,e, X_{t-})-a(t, e, Y_{t-})|N(dt,de)\right) ^2\right] \\&\quad +CE\left[ \sup _{0\le t\le T}\left| \int _0^t\int _Z \left[ c(s, e,X_{s-})-c(s,e, Y_{s-})\right] {\tilde{N}}(ds, de)\right| ^2\right] \\&\le C\Vert X-Y\Vert ^2(T+T^2)+CE\left[ \left( \int _0^T|X_{t-}-Y_{t-}|N(dt,Z)\right) ^2\right] \\&\le C\left( T+T^2+e^{CT}T\right) \Vert X-Y\Vert ^2. \end{aligned}$$

If we choose T small enough that \(C\left( T+T^2+e^{CT}T\right) <1\), then \({\mathscr {T}}\) is a contraction.

For an arbitrary T, we split [0, T] into finitely many small subintervals, obtain a unique solution on each subinterval, and concatenate them. \(\square \)
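The contraction argument can be mimicked numerically. The sketch below fixes one discretized realization of B and N, uses a single mark (so \(\lambda (Z)=1\)) and toy Lipschitz coefficients, and iterates the discretized map \({\mathscr {T}}\) (Picard iteration); the printed sup-distances between successive iterates shrink. This is only a sketch under these illustrative assumptions: the contraction in the proof is in the \(S^2\)-norm, while here we simply watch one sample path.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 2000
dt = T / n
t_grid = np.linspace(0.0, T, n + 1)

lam_Z = 1.0                                  # single illustrative mark, lambda(Z) = 1
dB = np.sqrt(dt) * rng.standard_normal(n)    # fixed Brownian increments
dN = rng.poisson(lam_Z * dt, n)              # fixed increments of N([0, t] x Z)

def b(t, x):     return -0.3 * x
def a(t, x):     return 0.1
def sigma(t, x): return 0.1 * x
def c(t, x):     return 0.02 * x

x0 = 1.0

def T_map(X):
    """Discretised version of the map T from the proof of Theorem A.1."""
    Y = np.empty_like(X)
    Y[0] = x0
    acc = 0.0
    for k in range(n):
        xk = X[k]                            # plays the role of X_s and X_{s-}
        acc += lam_Z * (b(t_grid[k], xk) - c(t_grid[k], xk)) * dt \
               + lam_Z * sigma(t_grid[k], xk) * dB[k] \
               + (a(t_grid[k], xk) + c(t_grid[k], xk)) * dN[k]
        Y[k + 1] = x0 + acc
    return Y

X = np.full(n + 1, x0)                       # start the Picard iteration from a constant path
for it in range(8):
    X_new = T_map(X)
    print(it, float(np.max(np.abs(X_new - X))))
    X = X_new
```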

Now, we give the \(L^2\) estimate of the solution.

Theorem A.2

Suppose that we are given two SDEs:

$$\begin{aligned} X^i_t&=x^i_0+\int _0^t\int _Z b^i(s,e,X^i_s)\lambda (de)ds+\int _0^t \int _Z a^i(s,e, X^i_{s-})N(ds, de)\nonumber \\&\quad +\int _0^t\int _Z \sigma ^i(s,e, X^i_s)\lambda (de)dB_s+\int _0^t \int _Z c^i(s,e, X^i_{s-}){\tilde{N}}(ds, de), \end{aligned}$$
(25)

where \(i=1,2\). Suppose that the two equations satisfy Assumption H1 and that \(X^1\) (resp. \(X^2\)) is the solution of the first (resp. second) equation. Then we have the following estimate:

$$\begin{aligned} E\left[ \sup _{0\le t\le T}|X^1_t-X^2_t|^2\right]&\le C|x_0^1-x_0^2|^2\nonumber \\&\quad +CE\left( \int _0^{T}\left| \int _Z\left[ b^1(t,e,X^2_t)-b^2(t,e,X^2_t)\right] \lambda (de)\right| dt\right) ^2\nonumber \\&\quad +CE\left[ \int _0^{T}\left| \int _Z\left[ \sigma ^1(t,e,X^2_t)-\sigma ^2(t,e,X^2_t)\right] \lambda (de)\right| ^2dt\right] \nonumber \\&\quad +CE\left( \int _0^{T}\int _Z|a^1(t,e,X^2_{t-})-a^2(t,e,X^2_{t-})|N(dt,de)\right) ^2\nonumber \\&\quad +CE\left[ \int _0^{T}\int _Z|c^1(t,e,X^2_{t-})-c^2(t,e,X^2_{t-})|^2N(dt,de)\right] . \end{aligned}$$
(26)

Proof

We first suppose that T is sufficiently small. Denote by \({\mathscr {T}}^i\) the contraction mapping associated with the i-th equation. Set

$$\begin{aligned} L(T):=C\left( T+T^2+e^{CT}T\right) ; \end{aligned}$$

then, by the argument in Theorem A.1, we have

$$\begin{aligned} \Vert X^1-X^2\Vert&=\Vert {\mathscr {T}}^1(X^1)-{\mathscr {T}}^2(X^2)\Vert =\Vert {\mathscr {T}}^1(X^1)-{\mathscr {T}}^1(X^2)+{\mathscr {T}}^1(X^2)-{\mathscr {T}}^2(X^2)\Vert \\&\le L(T)\Vert X^1-X^2\Vert +\Vert {\mathscr {T}}^1(X^2)-{\mathscr {T}}^2(X^2)\Vert . \end{aligned}$$

Choosing T small enough such that \(L(T)<1\), we have

$$\begin{aligned} \Vert X^1-X^2\Vert \le \frac{1}{1-L(T)}\Vert {\mathscr {T}}^1(X^2)-{\mathscr {T}}^2(X^2)\Vert , \end{aligned}$$

which gives the estimate (26). For an arbitrary T, we split [0, T] into finitely many small subintervals, apply the estimate on each subinterval, and combine them to obtain the result. \(\square \)
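The stability estimate (26) can also be explored numerically. The sketch below drives two toy equations of the form (25) with the same Brownian and Poisson noise (single mark, \(\lambda (Z)=1\), slightly different drift and initial value; all illustrative choices not from the paper) and reports Monte Carlo estimates of the left-hand side of (26) and of the nonzero right-hand-side integrals, without the constant C.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, n_mc = 1.0, 400, 500
dt = T / n

def step(x, bf, af, sf, cf, dB, dN, t):
    # one Euler step of (25) with lambda(Z) = 1
    return x + (bf(t, x) - cf(t, x)) * dt + sf(t, x) * dB + (af(t, x) + cf(t, x)) * dN

b1 = lambda t, x: -0.4 * x
b2 = lambda t, x: -0.4 * x + 0.05            # the drifts differ slightly
a1 = a2 = lambda t, x: 0.1
s1 = s2 = lambda t, x: 0.2 * x
c1 = c2 = lambda t, x: 0.05 * x
x01, x02 = 1.0, 1.1                          # different initial values

lhs, rhs = 0.0, 0.0
for _ in range(n_mc):
    x1, x2 = x01, x02
    sup_diff, int_b = 0.0, 0.0
    for k in range(n):
        t = k * dt
        dB = np.sqrt(dt) * rng.standard_normal()
        dN = rng.poisson(dt)                 # the same noise drives both equations
        int_b += abs(b1(t, x2) - b2(t, x2)) * dt
        x1 = step(x1, b1, a1, s1, c1, dB, dN, t)
        x2 = step(x2, b2, a2, s2, c2, dB, dN, t)
        sup_diff = max(sup_diff, abs(x1 - x2))
    lhs += sup_diff ** 2 / n_mc
    rhs += ((x01 - x02) ** 2 + int_b ** 2) / n_mc

print("E[sup_t |X^1_t - X^2_t|^2]                 ~", lhs)
print("|x_0^1 - x_0^2|^2 + E(int |b^1 - b^2| dt)^2 ~", rhs)
```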

Appendix B: Proof that \(\llbracket T_{n}, U_n\rrbracket \) is Z-Progressive

Lemma B.1

For each \(n\ge 1\), \(\llbracket T_{n}, U_n\rrbracket \) is Z-progressive.

Proof

For any \(t\ge 0\) and \(U\in {\mathscr {Z}}\), we have

$$\begin{aligned} N(\omega ,[0,t]\times U)=\sum _{n=1}^{\infty }I_{\{U_n\in U\}}I_{\{T_n\le t\}}. \end{aligned}$$
(27)

Since for any \(U\in {\mathscr {Z}}\), \(X_t:=N(\omega ,[0,t]\times U)\) is a progressive process, \(X_{T_1}=I_{\{U_1\in U\}}\) is \({\mathscr {F}}_{T_1}\) measurable. Inductively, we can show that for each \(n\ge 1\), \(U_n\) is \({\mathscr {F}}_{T_n}\) measurable.
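Formula (27) is simply a counting recipe; the following toy sketch (with made-up jump times and marks, used only for illustration) evaluates it directly.

```python
import numpy as np

T_n = np.array([0.3, 0.7, 0.9, 1.4])      # made-up jump times T_1 < T_2 < ...
U_n = np.array([-0.2, 1.1, 0.4, -0.5])    # corresponding made-up marks in Z = R

def N(t, in_U):
    """N([0, t] x U) = sum_n 1_{U_n in U} 1_{T_n <= t}, as in (27)."""
    return int(np.sum((T_n <= t) & in_U(U_n)))

print(N(1.0, lambda e: e > 0))    # jumps up to time 1 whose mark is positive
```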

For simplicity, we take Z to be R (the real line) here. For fixed n, define the following set:

$$\begin{aligned} A:=\{(\omega , t, e)\mid T_n(\omega )\le t \text { and }U_n(\omega )\le e\}. \end{aligned}$$

For fixed e, since \(U_n\) is \({\mathscr {F}}_{T_n}\) measurable, \(A(e)= \{T_n(\omega )\le t\}\cap \{U_n(\omega )\le e\}\) is a progressive set. For fixed \((\omega , t)\), \(I_{A(\omega ,t)}(e)\) is a right-continuous function on R. Therefore, A is Z-progressive. In the same way, we can show that

$$\begin{aligned} \begin{aligned} A_1:&=\{(\omega , t, e)\mid T_n(\omega )\le t \text { and }U_n(\omega )< e\},\\ A_2:&=\{(\omega , t, e)\mid T_n(\omega )< t \text { and }U_n(\omega )\le e\} \end{aligned} \end{aligned}$$

are also Z-progressive. Then, \(\llbracket T_{n}, U_n\rrbracket =A-(A_1\cup A_2)\) is Z-progressive. \(\square \)

Appendix C: Complement to the Proof of Lemma 4.1

In the proof of Lemma 4.1, the convergence of the term related to a was not proved. To prove it, we need some lemmas. First, let us introduce some notation.

Suppose that \((E,{\mathscr {E}},\mu )\) is a measure space with \(\mu \) a finite measure, that \((F,{\mathscr {F}})\) is a measurable space, and that \(K:E\times {\mathscr {F}}\rightarrow R_+\) is a finite transition kernel from E to F such that \(\mu \times K\) is a finite measure on \((E\times F,{\mathscr {E}}\otimes {\mathscr {F}})\).

Lemma C.1

Let \(f_n,f: E\times F\rightarrow R\) be \({\mathscr {E}}\otimes {\mathscr {F}}\) measurable functions such that \(f_n\) converges to f in measure \(\mu \times K\). If there exists an \({\mathscr {E}}\otimes {\mathscr {F}}\) measurable function g such that \(|f_n|\le g\) and \(\int _{F}g(x,y)K(x,dy)<\infty \), \(\mu \)-almost surely, then \(\int _Ff_n(x,y)K(x,dy)\) converges to \(\int _Ff(x,y)K(x,dy)\) in measure \(\mu \) as n tends to infinity.

Proof

We set \(h_n(x):=\int _Ff_n(x,y)K(x,dy)\) and \(h(x):=\int _Ff(x,y)K(x,dy)\). In order to show that \(h_n\) converges to h in measure \(\mu \), since \(\mu \) is a finite measure, it suffices to show that every subsequence \(h_{n_k}\) of \(h_n\) has a further subsequence \(h_{n_{k_l}}\) that converges to h \(\mu \)-almost surely. Let \(h_{n_k}\) be any subsequence of \(h_n\). As \(f_{n_k}\) converges to f in measure \(\mu \times K\), there exists a subsequence \(f_{n_{k_l}}\) of \(f_{n_k}\) such that \(f_{n_{k_l}}\) converges to f \(\mu \times K\)-almost surely. Let \(A\in {\mathscr {E}}\otimes {\mathscr {F}}\) denote the corresponding almost sure set.

Now, for any \(x\in E\), define the section of A:

$$\begin{aligned} A_x:=\{y\in F \mid (x,y)\in A \}. \end{aligned}$$

It is obvious that \(A_x\in {\mathscr {F}}\). Since A is a \((\mu \times K)\) almost sure set, we have

$$\begin{aligned} \int _E K(x,F)\mu (dx)&=(\mu \times K)(E\times F)=(\mu \times K)(A)\\&=\int _E \left( \int _F I_{A_x}(y)K(x,dy)\right) \mu (dx)=\int _E K(x,A_x)\mu (dx). \end{aligned}$$

Since moreover \(K(x,A_x)\le K(x,F)\) for every \(x\in E\), it follows that \(K(x,A_x)=K(x,F)\) for \(\mu \)-a.s. \(x\in E\), which means that \(A_x\) is a \(K(x,\cdot )\) almost sure set. Let B be the \(\mu \) almost sure subset of E on which \(K(x,A_x)=K(x,F)\), and let C be the \(\mu \) almost sure subset of E on which \(\int _{F}g(x,y)K(x,dy)<\infty \). Set \(D=B\cap C\); then for any \(x\in D\), \(f_{n_{k_l}}(x,\cdot )\) converges to \(f(x,\cdot )\), \(K(x,\cdot )\)-a.s., and the \(f_{n_{k_l}}(x,\cdot )\) are bounded by the integrable function \(g(x,\cdot )\). By the dominated convergence theorem, \(h_{n_{k_l}}\) converges to h, \(\mu \)-a.s. \(\square \)

Lemma C.2

Suppose that the conditions of Lemma C.1 are satisfied, except that the integrability condition on g is replaced by the following condition:

$$\begin{aligned} \int _E\left( \int _{F}g(x,y)K(x,dy)\right) ^2\mu (dx)<\infty ; \end{aligned}$$
(28)

then,

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _E\left( \int _Ff_n(x,y)K(x,dy)\right) ^2\mu (dx)=\int _E\left( \int _Ff(x,y)K(x,dy)\right) ^2\mu (dx). \end{aligned}$$

Proof

By (28), we have \(\int _{F}g(x,y)K(x,dy)<\infty \), \(\mu \)-a.s., so by Lemma C.1, \(\int _Ff_n(x,y)K(x,dy)\) converges to \(\int _Ff(x,y)K(x,dy)\) in measure \(\mu \). Since \(\left| \int _Ff_n(x,y)K(x,dy)\right| \le \int _Fg(x,y)K(x,dy)\) and the dominating function is square \(\mu \)-integrable by (28), we get the result by applying the dominated convergence theorem along almost surely convergent subsequences. \(\square \)

Now, in order to show the convergence of the term related to a in Lemma 4.1, we set \((E,{\mathscr {E}},\mu )=(\varOmega , {\mathscr {F}},P)\), \((F,{\mathscr {F}})=([0,T]\times Z\times [0,1], {\mathscr {B}}([0,T])\otimes {\mathscr {Z}}\otimes {\mathscr {B}}([0,1]))\), \(K(\omega ,A\times B\times C)=Leb(C)N(\omega ,A\times B)\), \(g=M\left( |{\hat{X}}_t|^2+|v_t-u_t|^2\right) \). Then, applying Lemma C.2, we can obtain the convergence.
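As a toy numerical check of Lemma C.2, one can take E = F = [0, 1], \(\mu \) Lebesgue measure, the kernel \(K(x,dy)=2x\,dy\), \(f(x,y)=xy\), \(f_n=f+\sin (ny)/n\) and \(g\equiv 2\); these objects are illustrative and not taken from the paper.

```python
import numpy as np

# E = [0, 1] with mu = Lebesgue, F = [0, 1], K(x, dy) = 2x dy: a finite kernel
# with mu x K finite.  f_n -> f (even uniformly), |f_n| <= g = 2, so (28) holds.

x = np.linspace(0.0, 1.0, 1001)
y = np.linspace(0.0, 1.0, 1001)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")

def lhs(h):
    """int_E ( int_F h(x, y) K(x, dy) )^2 mu(dx), evaluated on the grid."""
    inner = (h * 2.0 * X).sum(axis=1) * dy      # int_F h(x, y) 2x dy for each x
    return float((inner ** 2).sum() * dx)

f = X * Y
target = lhs(f)
for n in (1, 10, 100, 1000):
    f_n = f + np.sin(n * Y) / n                 # converges to f in measure mu x K
    print(n, abs(lhs(f_n) - target))
```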

About this article


Cite this article

Song, Y., Wu, Z. The Maximum Principle for Stochastic Control Problem with Jumps in Progressive Structure. J Optim Theory Appl 199, 415–438 (2023). https://doi.org/10.1007/s10957-023-02302-4
