Forward–Backward Stochastic Differential Games and Stochastic Control under Model Uncertainty

Journal of Optimization Theory and Applications

Abstract

We study optimal stochastic control problems with jumps under model uncertainty. We rewrite such problems as stochastic differential games of forward–backward stochastic differential equations. We prove general stochastic maximum principles for such games, both in the zero-sum case (conditions for saddle points) and in the nonzero-sum case (conditions for Nash equilibria). We then apply these results to study robust optimal portfolio-consumption problems with a penalty term. We establish a connection between market viability under model uncertainty and equivalent martingale measures. In the case of an entropic penalty, we prove a general reduction theorem, stating that an optimal portfolio-consumption problem under model uncertainty can be reduced to a classical portfolio-consumption problem under model certainty, with a change of utility function, and we relate this to risk-sensitive control. In particular, this result shows that model uncertainty increases the Arrow–Pratt risk aversion index.


References

1. Øksendal, B., Sulem, A.: Applied Stochastic Control of Jump Diffusions, 2nd edn. Springer, Berlin (2007)

2. Øksendal, B., Sulem, A.: Maximum principles for optimal control of forward-backward stochastic differential equations with jumps. SIAM J. Control Optim. 48(5), 2945–2976 (2009)

3. Hamadène, S.: Backward-forward SDE's and stochastic differential games. Stoch. Process. Appl. 77, 1–15 (1998)

4. An, T.T.K., Øksendal, B.: A maximum principle for stochastic differential games with g-expectation and partial information. Stochastics (2011). doi:10.1080/17442508.2010.532875

5. Bordigoni, G., Matoussi, A., Schweizer, M.: A stochastic control approach to a robust utility maximization problem. In: Benth, F.E., et al. (eds.) Stochastic Analysis and Applications, The Abel Symposium 2005, pp. 125–152. Springer, Berlin (2007)

6. Jeanblanc, M., Matoussi, A., Ngoupeyou, A.: Robust Utility Maximization in a Discontinuous Filtration (2012)

7. Lim, T., Quenez, M.-C.: Exponential utility maximization and indifference price in an incomplete market with defaults. Electron. J. Probab. 16, 1434–1464 (2011)

8. Øksendal, B., Sulem, A.: Robust stochastic control and equivalent martingale measures. In: Kohatsu-Higa, A., et al. (eds.) Stochastic Analysis and Applications. Progress in Probability, vol. 65, pp. 179–189 (2011)

9. Øksendal, B., Sulem, A.: Portfolio optimization under model uncertainty and BSDE games. Quant. Finance 11(11), 1665–1674 (2011)

10. Pliska, S.: Introduction to Mathematical Finance. Blackwell, Oxford (1997)

11. Kreps, D.: Arbitrage and equilibrium in economics with infinitely many commodities. J. Math. Econ. 8, 15–35 (1981)

12. Loewenstein, M., Willard, G.: Local martingales, arbitrage, and viability. Econ. Theory 16, 135–161 (2000)

13. Øksendal, B., Sulem, A.: Viability and martingale measures in jump diffusion markets under partial information. Manuscript (2011)

14. Aase, K., Øksendal, B., Privault, N., Ubøe, J.: White noise generalizations of the Clark–Haussmann–Ocone theorem, with application to mathematical finance. Finance Stoch. 4, 465–496 (2000)

15. Di Nunno, G., Øksendal, B., Proske, F.: Malliavin Calculus for Lévy Processes with Applications to Finance. Springer, Berlin (2009)

16. Maenhout, P.: Robust portfolio rules and asset pricing. Rev. Financ. Stud. 17, 951–983 (2004)

17. Royer, M.: Backward stochastic differential equations with jumps and related non-linear expectations. Stoch. Process. Appl. 116, 1358–1376 (2006)

18. Quenez, M.C., Sulem, A.: BSDEs with jumps, optimization and applications to dynamic risk measures. INRIA Research Report RR-7997 (2012)

19. Föllmer, H., Schied, A., Weber, S.: Robust preferences and robust portfolio choice. In: Ciarlet, P., Bensoussan, A., Zhang, Q. (eds.) Mathematical Modelling and Numerical Methods in Finance. Handbook of Numerical Analysis, vol. 15, pp. 29–88 (2009)

20. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)


Acknowledgements

We thank Olivier Menoukeu Pamen and Marie-Claire Quenez for helpful comments.

The research leading to these results has received funding from the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. [228087].

Author information

Corresponding author

Correspondence to Bernt Øksendal.

Appendices

Appendix A: Proofs of the Maximum Principles for FBSDE Games

We first recall some basic concepts and results from Banach space theory. Let V be an open subset of a Banach space \(\mathcal{X}\) with norm ∥⋅∥, and let F:V→ℝ.

(i) We say that F has a directional derivative (or Gâteaux derivative) at \(x \in V\) in the direction \(y \in\mathcal{X}\) if

    $$D_y F(x) := \lim_{\varepsilon\rightarrow0} \frac{1}{\varepsilon} \bigl(F(x + \varepsilon y) - F(x)\bigr) $$

    exists.

(ii) We say that F is Fréchet differentiable at \(x \in V\) if there exists a linear map

    $$L: \mathcal{X}\rightarrow\mathbb{R} $$

    such that

    $$\lim_{\substack{h \rightarrow0 \\ h \in\mathcal{X}}} \frac{1}{ \|h\| } \big| F(x+h) - F(x) - L(h)\big| = 0. $$

    In this case, we call L the gradient (or Fréchet derivative) of F at x, and we write

    $$L = \nabla_x F. $$
(iii) If F is Fréchet differentiable at \(x \in V\), then F has a directional derivative at x in all directions \(y \in\mathcal{X}\), and

    $$D_y F(x) = \nabla_x F(y). $$
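
As a simple illustration of these definitions (an example added here for the reader's convenience, not taken from the paper): let \(\mathcal{X}\) be a Hilbert space with inner product \(\langle\cdot,\cdot\rangle\) and let \(F(x) = \|x\|^2\). Then

$$D_y F(x) = \lim_{\varepsilon\rightarrow0} \frac{\|x+\varepsilon y\|^2 - \|x\|^2}{\varepsilon} = \lim_{\varepsilon\rightarrow0} \bigl(2\langle x,y\rangle + \varepsilon\|y\|^2\bigr) = 2\langle x,y\rangle, $$

and the linear map \(L(h) = 2\langle x,h\rangle\) satisfies \(|F(x+h) - F(x) - L(h)| = \|h\|^2 = o(\|h\|)\), so \(\nabla_x F = 2\langle x,\cdot\rangle\) and, as stated in (iii), \(D_y F(x) = \nabla_x F(y)\).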

Proof of Theorem 2.1

(Sufficient maximum principle) We first prove that

$$J_1(u_1,\hat{u}_2) \leq J_1( \hat{u}_1, \hat{u}_2)\quad \text{ for all } u_1 \in \mathcal{A}_1. $$

To this end, fix \(u_{1} \in\mathcal{A}_{1}\) and consider

$$ \Delta:= J_1(u_1, \hat{u}_2) - J_1 (\hat{u}_1, \hat{u}_2) = I_1 + I_2 + I_3, $$
(A.1)

where

(A.2)
(A.3)
(A.4)

By (8) we have

(A.5)

By the concavity of \(\varphi_1\), (10), and the Itô formula,

(A.6)

By the concavity of \(\psi_1\), (5), (9), and the concavity of \(\varphi\),

(A.7)

Adding (A.5), (A.6), and (A.7), we get

(A.8)

Since \(\hat{\mathcal{H}}_{1}(x,y,z,k)\) is concave, it follows by a standard separating hyperplane argument (see, e.g., [20], Chap. 5, Sect. 23) that there exists a supergradient \(a=(a_{0}, a_{1}, a_{2}, a_{3}(\cdot)) \in \mathbb{R}^{3} \times\mathcal{R}\) for \(\hat{\mathcal{H}}_{1}(x,y,z,k)\) at \(x = \hat {X}(t),\ y = \hat{Y}_{1}(t),\ z = \hat{Z}_{1}(t^{-})\), and \(k = \hat{K}_{1}(t^{-}, \cdot)\) such that, if we define

$$\varphi_1(x,y,z,k) := \hat{\mathcal{H}}_1(x,y,z,k) - \hat{\mathcal{H}}_1\bigl(\hat{X}(t), \hat{Y}_1(t), \hat{Z}_1\bigl(t^-\bigr), \hat{K}_1\bigl(t^-,\cdot\bigr)\bigr) - a_0\bigl(x - \hat{X}(t)\bigr) - a_1\bigl(y - \hat{Y}_1(t)\bigr) - a_2\bigl(z - \hat{Z}_1\bigl(t^-\bigr)\bigr) - \int_{\mathbb{R}} a_3(\zeta) \bigl(k(\zeta) - \hat{K}_1\bigl(t^-,\zeta\bigr)\bigr) \nu(d\zeta), $$

then

$$\varphi_1(x,y,z,k) \leq0\quad \text{ for all } x,y,z,k. $$

On the other hand, we clearly have

$$\varphi_1\bigl(\hat{X}(t), \hat{Y}_1(t), \hat{Z}_1(t), \hat{K}_1(t,\cdot)\bigr) = 0. $$

It follows that

Combining this with (A.8), we get

Hence,

$$J_1(u_1, \hat{u}_2) \leq J_1( \hat{u}_1,\hat{u}_2) \quad\text{ for all } u_1 \in\mathcal{A}_1. $$

The inequality

$$J_2(\hat{u}_1,u_2) \leq J_2( \hat{u}_1, \hat{u}_2)\quad \text{ for all } u_2 \in \mathcal{A}_2 $$

is proved similarly. This completes the proof of Theorem 2.1. □

Proof of Theorem 2.2

(Necessary maximum principle) Consider

(A.9)

By (10), (13), and the Itô formula,

(A.10)

By (9), (13), and the Itô formula,

(A.11)

Adding (A.10) and (A.11), we get, by (A.9),

(A.12)

If \(D_1 = 0\) for all bounded \(\beta_{1} \in\mathcal{A}_{1}\), then this holds in particular for \(\beta_1\) of the form in (a1), i.e.,

$$\beta_1(t) = \chi_{(t_0,T]}(t) \alpha_1(\omega), $$

where \(\alpha_1(\omega)\) is bounded and \(\mathcal{E}^{(1)}_{t_{0}}\)-measurable. Hence,

$$E \biggl[ \int_{t_0}^T E \biggl[ \frac{\partial H_1}{\partial u_1}(t) \mid\mathcal{E}^{(1)}_t \biggr] \alpha_1 \,dt \biggr] = 0. $$

Differentiating with respect to \(t_0\), we get

$$E \biggl[ \frac{\partial H_1}{\partial u_1} (t_0) \alpha_1 \biggr] = 0 \quad\text{ for a.a. } t_0. $$
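
This differentiation step can be spelled out as follows (assuming, as is standard in this setting, that \(\mathcal{E}^{(1)}_{t_0} \subseteq\mathcal{E}^{(1)}_t\) for \(t \geq t_0\) and that the integrand is integrable in \(t\)): since \(\alpha_1\) is \(\mathcal{E}^{(1)}_{t_0}\)-measurable, the tower property of conditional expectation gives

$$E \biggl[ E \biggl[ \frac{\partial H_1}{\partial u_1}(t) \mid\mathcal{E}^{(1)}_t \biggr] \alpha_1 \biggr] = E \biggl[ \frac{\partial H_1}{\partial u_1}(t) \alpha_1 \biggr], \quad t \geq t_0, $$

so the identity before differentiation reads \(\int_{t_0}^T E [ \frac{\partial H_1}{\partial u_1}(t) \alpha_1 ] \,dt = 0\), and differentiating with respect to the lower limit \(t_0\) yields the equality above for a.a. \(t_0\).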

Since this holds for all bounded \(\mathcal{E}^{(1)}_{t_{0}}\)-measurable random variables \(\alpha_1\), we conclude that

$$E \biggl[ \frac{\partial H_1}{\partial u_1}(t) \mid\mathcal{E}^{(1)}_t \biggr] = 0 \quad\text{ for a.a. } t \in[0,T]. $$

A similar argument gives that

$$E \biggl[ \frac{\partial H_2}{\partial u_2}(t) \mid\mathcal{E}^{(2)}_t \biggr] = 0, $$

provided that

$$D_2 := \frac{d}{ds} J_2(u_1,u_2 + s \beta_2) \mid_{s=0} = 0\quad \text{ for all bounded } \beta_2 \in\mathcal{A}_2. $$

This shows that (i) ⇒ (ii). The argument above can be reversed to give that (ii) ⇒ (i). We omit the details. □

Appendix B: Linear BSDEs with Jumps

Lemma B.1

(Linear BSDEs with jumps)

Let Λ be an \(\mathcal{F}_{T}\)-measurable and square-integrable random variable. Let β and \(\xi_0\) be bounded predictable processes, and \(\xi_1\) a predictable process such that \(\xi_1(t,\zeta) \geq C_1\) with \(C_1 > -1\) and \(|\xi_1(t,\zeta)| \leq C_2(1\wedge|\zeta|)\) for a constant \(C_2 \geq 0\). Let φ be a predictable process such that \(E[\int_{0}^{T} \varphi^{2}(t) \,dt] < \infty\). Then the linear BSDE

(B.1)

has the unique solution

$$ Y(t) = E\biggl[\varLambda\varUpsilon(t,T) + \int_t^T \varUpsilon(t,s) \varphi(s) \, ds \mid \mathcal{F}_t\biggr], \quad 0 \leq t \leq T, $$
(B.2)

where \(\varUpsilon(t,s)\), \(0 \leq t \leq s \leq T\), is defined by

(B.3)

i.e.,

(B.4)

Hence,

$$\varUpsilon(t,s) = \frac{\varUpsilon(0,s)}{\varUpsilon(0,t)}. $$
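
For orientation, here is a sketch of the standard linear BSDE with jumps that Lemma B.1 addresses, written under the assumption (consistent with the setting of [1, 15, 18]) that the noise consists of a Brownian motion \(B\) and a compensated Poisson random measure \(\tilde{N}(dt,d\zeta)\) with Lévy measure \(\nu\), and that the driver is linear in \((Y,Z,K)\) with the coefficients \(\beta, \xi_0, \xi_1\) and inhomogeneity \(\varphi\) of the lemma; the precise form in the paper may differ in notation. The BSDE (B.1) would then read

$$ dY(t) = -\Bigl[ \varphi(t) + \beta(t) Y(t) + \xi_0(t) Z(t) + \int_{\mathbb{R}} \xi_1(t,\zeta) K(t,\zeta)\, \nu(d\zeta) \Bigr]\, dt + Z(t)\, dB(t) + \int_{\mathbb{R}} K(t,\zeta)\, \tilde{N}(dt,d\zeta), \qquad Y(T) = \varLambda, $$

with \(\varUpsilon(t,\cdot)\) the associated Doléans-Dade (stochastic) exponential,

$$ d_s \varUpsilon(t,s) = \varUpsilon\bigl(t,s^-\bigr) \Bigl[ \beta(s)\, ds + \xi_0(s)\, dB(s) + \int_{\mathbb{R}} \xi_1(s,\zeta)\, \tilde{N}(ds,d\zeta) \Bigr], \qquad \varUpsilon(t,t) = 1, $$

which has the explicit solution

$$ \varUpsilon(t,s) = \exp\biggl( \int_t^s \Bigl( \beta(r) - \tfrac12 \xi_0^2(r) \Bigr) dr + \int_t^s \xi_0(r)\, dB(r) + \int_t^s\!\!\int_{\mathbb{R}} \bigl\{ \ln\bigl(1+\xi_1(r,\zeta)\bigr) - \xi_1(r,\zeta) \bigr\}\, \nu(d\zeta)\, dr + \int_t^s\!\!\int_{\mathbb{R}} \ln\bigl(1+\xi_1(r,\zeta)\bigr)\, \tilde{N}(dr,d\zeta) \biggr). $$

The assumption \(C_1 > -1\) ensures \(1 + \xi_1 > 0\), so the logarithms are well defined and \(\varUpsilon(t,s) > 0\), which is what makes the quotient \(\varUpsilon(0,s)/\varUpsilon(0,t)\) above meaningful.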

Proof

For completeness, we include the proof, which is also given in [18]. Existence and uniqueness follow from general theorems for BSDEs with Lipschitz coefficients; see, e.g., [17]. Hence, it only remains to prove that if we define Y(t) to be the solution of (B.1), then (B.2) holds. To this end, define

$$\varUpsilon(s) = \varUpsilon(0,s). $$

Then by the Itô formula (see, e.g., [1], Chap. 1),

Hence, \(\varUpsilon(t) Y(t) + \int_{0}^{t} \varUpsilon(s)\varphi(s) \, ds \) is a martingale, and therefore

$$\varUpsilon(t) Y(t) + \int_0^t \varUpsilon(s) \varphi(s) \, ds = E\biggl[ \varLambda\varUpsilon(T) + \int_0^T \varUpsilon(s)\varphi(s) \, ds \mid \mathcal{F}_t\biggr] $$

or

$$Y(t) = E\biggl[ \varLambda\frac{\varUpsilon(T)}{\varUpsilon(t)} + \int_t^T \frac {\varUpsilon (s)}{\varUpsilon(t)} \varphi(s) \, ds \mid\mathcal{F}_t\biggr], $$

as claimed. □
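
Under the same assumed linear forms as in the sketch following Lemma B.1, the martingale property used in the proof comes from a drift cancellation in the Itô product rule:

$$ d\bigl( \varUpsilon(s) Y(s) \bigr) = \varUpsilon\bigl(s^-\bigr)\, dY(s) + Y\bigl(s^-\bigr)\, d\varUpsilon(s) + d\bigl[ \varUpsilon, Y \bigr](s), $$

whose drift part collects to

$$ \varUpsilon(s) \Bigl( -\varphi(s) - \beta(s) Y(s) - \xi_0(s) Z(s) - \int_{\mathbb{R}} \xi_1(s,\zeta) K(s,\zeta)\, \nu(d\zeta) + \beta(s) Y(s) + \xi_0(s) Z(s) + \int_{\mathbb{R}} \xi_1(s,\zeta) K(s,\zeta)\, \nu(d\zeta) \Bigr) ds = -\varUpsilon(s) \varphi(s)\, ds; $$

here the first group of terms comes from \(\varUpsilon\, dY\), the term \(+\beta Y\) from \(Y\, d\varUpsilon\), and the remaining positive terms from the drift of the quadratic covariation \([\varUpsilon, Y]\) (its Brownian part and the compensator of its jump part). Hence \(\varUpsilon(t) Y(t) + \int_0^t \varUpsilon(s) \varphi(s)\, ds\) has zero drift, i.e., it is a local martingale, and under the stated integrability assumptions a true martingale, as used in the proof.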

Cite this article

Øksendal, B., Sulem, A. Forward–Backward Stochastic Differential Games and Stochastic Control under Model Uncertainty. J Optim Theory Appl 161, 22–55 (2014). https://doi.org/10.1007/s10957-012-0166-7