
Dynamic Robust Duality in Utility Maximization

B. Øksendal, A. Sulem

Applied Mathematics & Optimization 75, 117–147 (2017). https://doi.org/10.1007/s00245-016-9329-5

Abstract

A celebrated financial application of convex duality theory gives an explicit relation between the following two quantities:

  (i) The optimal terminal wealth \(X^*(T) := X_{\varphi ^*}(T)\) of the problem of maximizing the expected U-utility of the terminal wealth \(X_{\varphi }(T)\) generated by admissible portfolios \(\varphi (t); 0 \le t \le T\), in a market where the risky asset price process is modeled as a semimartingale;

  (ii) The optimal scenario \(\frac{dQ^*}{dP}\) of the dual problem of minimizing the expected V-value of \(\frac{dQ}{dP}\) over a family of equivalent local martingale measures Q, where V is the convex conjugate function of the concave function U.

In this paper we consider markets modeled by Itô-Lévy processes. In the first part we use the maximum principle of stochastic control theory to extend the above relation to a dynamic relation, valid for all \(t \in [0,T]\). In particular, we prove that the optimal adjoint process for the primal problem coincides with the optimal density process, and that the optimal adjoint process for the dual problem coincides with the optimal wealth process, for all \(0 \le t \le T\). At the terminal time \(t=T\) we recover the classical duality connection above. Moreover, we obtain an explicit relation between the optimal portfolio \(\varphi ^*\) and the optimal measure \(Q^*\), and we show that the existence of an optimal scenario is equivalent to the replicability of a related T-claim. In the second part we present robust (model uncertainty) versions of the optimization problems in (i) and (ii), and we prove a similar dynamic relation between them. In particular, we show how to get from the solution of one of these problems to the solution of the other. We illustrate the results with explicit examples.
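Here V denotes the convex conjugate of the concave utility function U,

$$\begin{aligned} V(y) := \sup _{x > 0} \{ U(x) - xy \} \; ; \quad y > 0 . \end{aligned}$$

As a standard illustration (not spelled out in the abstract itself), for logarithmic utility \(U(x) = \ln x\) one gets \(V(y) = - \ln y - 1\), so the dual problem in (ii) amounts to minimizing \(E\left[ - \ln \frac{dQ}{dP}\right] \) over the family of equivalent local martingale measures Q, i.e. to minimizing the relative entropy of P with respect to Q.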


References

  1. Bordigoni, G., Matoussi, A., Schweizer, M.: A stochastic control approach to a robust utility maximization problem. In: Benth, F.E., et al. (eds.) Stochastic Analysis and Applications. The Abel Symposium 2005. Springer, Berlin (2007)

  2. El Karoui, N., Quenez, M.-C.: Dynamic programming and pricing of contingent claims in an incomplete market. SIAM J. Control Optim. 33, 29–66 (1995)

  3. El Karoui, N., Peng, S., Quenez, M.-C.: Backward stochastic differential equations in finance. Math. Financ. 7, 1–71 (1997)

  4. Föllmer, H., Schied, A., Weber, S.: Robust preferences and robust portfolio choice. In: Ciarlet, P., Bensoussan, A., Zhang, Q. (eds.) Mathematical Modelling and Numerical Methods in Finance. Handbook of Numerical Analysis 15, pp. 29–88. Springer, New York (2009)

  5. Fontana, C., Øksendal, B., Sulem, A.: Viability and martingale measures in jump diffusion markets under partial information. Methodol. Comput. Appl. Probab. 1(2), 209–222 (2014). doi:10.1007/s11009-014-9397-4

  6. Gushchin, A.A.: Dual characterization of the value function in the robust utility maximization problem. Theory Probab. Appl. 55, 611–630 (2011)

  7. Jeanblanc, M., Matoussi, A., Ngoupeyou, A.: Robust utility maximization in a discontinuous filtration. arXiv:1201.2690v3 (2013)

  8. Kramkov, D., Schachermayer, W.: Necessary and sufficient conditions in the problem of optimal investment in incomplete markets. Ann. Appl. Probab. 13, 1504–1516 (2003)

  9. Kreps, D.: Arbitrage and equilibrium in economies with infinitely many commodities. J. Math. Econ. 8, 15–35 (1981)

  10. Lim, T., Quenez, M.-C.: Exponential utility maximization and indifference price in an incomplete market with defaults. Electron. J. Probab. 16, 1434–1464 (2011)

  11. Loewenstein, M., Willard, G.: Local martingales, arbitrage, and viability. Econ. Theory 16, 135–161 (2000)

  12. Maenhout, P.: Robust portfolio rules and asset pricing. Rev. Financ. Stud. 17, 951–983 (2004)

  13. Øksendal, B., Sulem, A.: Applied Stochastic Control of Jump Diffusions, 2nd edn. Springer, Berlin (2007)

  14. Øksendal, B., Sulem, A.: Forward-backward stochastic differential games and stochastic control under model uncertainty. J. Optim. Theory Appl. 161, 22–55 (2014). doi:10.1007/s10957-012-0166-7

  15. Øksendal, B., Sulem, A.: Risk minimization in financial markets modeled by Itô-Lévy processes. Afr. Mat. 26, 939–979 (2015). doi:10.1007/s13370-014-0248-9

  16. Quenez, M.-C.: Optimal portfolio in a multiple-priors model. In: Dalang, R.C., Dozzi, M., Russo, F. (eds.) Seminar on Stochastic Analysis, Random Fields and Applications IV, pp. 291–321. Birkhäuser, Basel (2004)

  17. Quenez, M.-C., Sulem, A.: BSDEs with jumps, optimization and applications to dynamic risk measures. Stoch. Process. Appl. 123, 3328–3357 (2013)

  18. Royer, M.: Backward stochastic differential equations with jumps and related non-linear expectations. Stoch. Process. Appl. 116, 1358–1376 (2006)

  19. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

  20. Tang, S.J., Li, X.J.: Necessary conditions for optimal control of stochastic systems with random jumps. SIAM J. Control Optim. 32, 1447–1475 (1994)

Acknowledgments

The research leading to these results has received funding from the European Research Council under the European Community’s Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no [228087].

Appendix: Maximum Principles for Optimal Control

Consider the following controlled stochastic differential equation

$$\begin{aligned} dX(t)&= b(t, X(t), u(t), \omega ) dt + \sigma (t, X(t), u(t), \omega ) dB(t) \nonumber \\&\quad + \int _\mathbb {R}\gamma (t, X(t), u(t), \omega , \zeta ) \tilde{N}(dt, d\zeta ) \; ; \; 0 \le t \le T \; ; \; X(0) = x \in \mathbb {R}. \end{aligned}$$
(4.1)

The performance functional is given by

$$\begin{aligned} J(u) = E \left[ \int _0^T f(t, X(t), u(t), \omega ) dt + \phi (X(T), \omega ) \right] \end{aligned}$$
(4.2)

where \(T>0\) and u is in a given family \(\mathcal {A}\) of admissible \(\mathcal {F}\)-predictable controls. For \(u \in \mathcal {A}\) we let \(X^u(t)\) be the solution of (4.1). We assume this solution exists, is unique and satisfies

$$\begin{aligned} E\left[ \int _0^T |X^u(t) |^{2 } dt\right] < \infty . \end{aligned}$$
(4.3)

We want to find \(u^* \in \mathcal {A}\) such that

$$\begin{aligned} \sup _{u \in \mathcal {A}} J(u) = J(u^*). \end{aligned}$$
(4.4)

We make the following assumptions

$$\begin{aligned} f \in C^1 \text{ and } E\left[ \int _0^T | \nabla f |^2(t) dt\right] < \infty , \end{aligned}$$
(4.5)
$$\begin{aligned} b, \sigma , \gamma \in C^1 \text{ and } E\left[ \int _0^T \big ( | \nabla b |^2 + | \nabla \sigma |^2 + \Vert \nabla \gamma \Vert ^2\big ) (t) dt\right] < \infty , \end{aligned}$$
(4.6)
where \(\Vert \nabla \gamma (t, \cdot ) \Vert ^2 := \int _\mathbb {R}| \nabla \gamma |^2(t, \zeta ) \nu (d \zeta )\), and

$$\begin{aligned} \phi \in C^1 \text{ and, for all } u \in \mathcal {A}, \; E\left[ \phi {^\prime }(X(T))^{2 }\right] < \infty . \end{aligned}$$
(4.7)

Let \(\mathbb {U}\) be a convex closed set containing all possible control values \(u(t); t \in [0,T] \).

The Hamiltonian associated to the problem (4.4) is defined by

$$\begin{aligned} H: [0,T] \times \mathbb {R}\times \mathbb {U}\times \mathbb {R}\times \mathbb {R}\times \mathcal{R } \times \Omega \rightarrow \mathbb {R}\end{aligned}$$
$$\begin{aligned} H(t,x, u,p,q,r, \omega )= & {} f(t,x,u, \omega ) + b (t,x,u, \omega ) p + \sigma (t,x,u, \omega ) q\\&+ \displaystyle \int _\mathbb {R}\gamma (t,x,u, \zeta , \omega ) r(t, \zeta ) \nu (d \zeta ). \end{aligned}$$

Here \(\mathcal{R }\) denotes a suitable set of functions \(r : [0,T] \times \mathbb {R}\rightarrow \mathbb {R}\) for which the integral in the definition of H converges. For simplicity of notation the dependence on \(\omega \) is suppressed in the following. We assume that H is Fréchet differentiable in the variables x and u. We let m denote the Lebesgue measure on [0, T].
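As a simple illustration, which we refer to again below (the special case is ours and is not spelled out in this appendix), consider the controlled wealth dynamics corresponding to the utility maximization problem in (i): take \(f \equiv 0\), \(\phi = U\) and

$$\begin{aligned} b(t,x,u) = u\, b_0(t), \quad \sigma (t,x,u) = u\, \sigma _0(t), \quad \gamma (t,x,u,\zeta ) = u\, \gamma _0(t,\zeta ), \end{aligned}$$

where \(b_0, \sigma _0, \gamma _0\) are given predictable coefficients of the risky asset and the control u(t) is the amount invested in it at time t. The Hamiltonian then reduces to

$$\begin{aligned} H(t,x,u,p,q,r) = u \left[ b_0(t)\, p + \sigma _0(t)\, q + \int _\mathbb {R}\gamma _0(t,\zeta )\, r(t, \zeta )\, \nu (d\zeta ) \right] , \end{aligned}$$

which is independent of x and linear in u.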

The associated BSDE for the adjoint processes \((p, q, r)\) is

$$\begin{aligned} \left\{ \begin{array}{ll} dp(t) = - \dfrac{\partial H}{\partial x}(t, X(t), u(t), p(t), q(t), r(t, \cdot ))\, dt + q(t)\, dB(t) + \displaystyle \int _\mathbb {R}r(t, \zeta )\, \tilde{N}(dt, d\zeta ) \; ; \; 0 \le t \le T \\ p(T) = \phi {^\prime }(X(T)). \end{array}\right. \end{aligned}$$
(4.8)

Here and in the following we are using the abbreviated notation

$$\begin{aligned} \frac{\partial H}{\partial x} (t) = \frac{\partial H}{\partial x} (t, X(t), u(t)) \text { etc } \end{aligned}$$
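In the illustrative special case above (coefficients independent of x and \(f \equiv 0\)) we have \(\frac{\partial H}{\partial x} \equiv 0\), so (4.8) reduces to

$$\begin{aligned} dp(t) = q(t)\, dB(t) + \int _\mathbb {R}r(t,\zeta )\, \tilde{N}(dt, d\zeta ) \; ; \quad p(T) = U{^\prime }(X(T)), \end{aligned}$$

i.e. the adjoint process p is a (local) martingale with terminal value \(U{^\prime }(X(T))\). Suitably normalized, this is the process that plays the role of the density process in the duality relation described in the abstract.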

We first formulate a sufficient maximum principle.

Theorem 4.1

(Sufficient maximum principle) Let \(\hat{u}\in \mathcal {A}\) be given, with corresponding solution \(\hat{X}\) of (4.1) and \((\hat{p}, \hat{q}, \hat{r})\) of (4.8). Assume the following:

  • The function \(x \mapsto \phi (x)\) is concave.

  • (The Arrow condition) The function

    $$\begin{aligned} \mathcal{H }(x) := \sup _{v \in \mathbb {U}} H(t, x, v, \hat{p}(t),\hat{q}(t),\hat{r}(t, \cdot )) \end{aligned}$$
    (4.9)

    is concave for all \(t \in [0,T]\).

  • $$\begin{aligned} \sup _{v \in \mathbb {U}} H(t, \hat{X}(t), v, \hat{p}(t),\hat{q}(t),\hat{r}(t, \cdot )) = H(t, \hat{X}(t), \hat{u}(t), \hat{p}(t),\hat{q}(t),\hat{r}(t, \cdot )) ; \; t \in [0,T]. \end{aligned}$$
    (4.10)

Then \(\hat{u}\) is an optimal control for the problem (4.4).
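For the illustrative special case above (our notation), with \(\mathbb {U}= \mathbb {R}\), the conditions of Theorem 4.1 are easy to read off: \(\phi = U\) is concave, H does not depend on x, so \(\mathcal{H }\) is trivially concave, and since H is linear in u the maximum condition (4.10) can only hold if the coefficient of u vanishes along \((\hat{p}, \hat{q}, \hat{r})\):

$$\begin{aligned} b_0(t)\, \hat{p}(t) + \sigma _0(t)\, \hat{q}(t) + \int _\mathbb {R}\gamma _0(t,\zeta )\, \hat{r}(t,\zeta )\, \nu (d\zeta ) = 0 \; ; \quad t \in [0,T]. \end{aligned}$$

This is the familiar martingale-measure condition for the normalized adjoint process.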

Next, we state a necessary maximum principle. For this, we need the following assumptions:

  • For all \(t_0 \in [0, T] \) and all bounded \(\mathcal {F}_{t_0}\)-measurable random variables \(\alpha (\omega )\) the control

    $$\begin{aligned} \theta (t) := \chi _{[t_0, T]}(t) \alpha (\omega ) \end{aligned}$$
    (4.11)

    belongs to \(\mathcal {A}\).

  • For all \(u, \beta \in \mathcal {A}\) with \(\beta \) bounded, there exists \(\delta >0\) such that the control

    $$\begin{aligned} \tilde{u}(t) := u(t) + a \beta (t) ; \; t \in [0,T] \end{aligned}$$

    belongs to \(\mathcal {A}\) for all \(a \in ( - \delta , \delta )\).

  • The derivative process

    $$\begin{aligned} x(t) := \frac{d}{da} X^{u + a \beta } (t) \mid _{a =0}, \end{aligned}$$

    exists and belongs to \(L^2(dm \times dP)\), and

    $$\begin{aligned} \left\{ \begin{array}{ll} dx(t) = \{ \frac{\partial b}{\partial x} (t) x(t) + \frac{\partial b}{\partial u} (t) \beta (t)\} dt + \{ \frac{\partial \sigma }{\partial x} (t) x(t) + \frac{\partial \sigma }{\partial u} (t) \beta (t) \} dB(t) \\ \qquad \displaystyle + \displaystyle \int _\mathbb {R}\{ \frac{\partial \gamma }{\partial x} (t, \zeta ) x(t) + \frac{\partial \gamma }{\partial u} (t, \zeta ) \beta (t) \} \tilde{N}(dt, d \zeta ) \\ x(0) =0 \end{array}\right. \end{aligned}$$
    (4.12)
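For the illustrative special case above (coefficients independent of x), the derivative process is particularly simple: (4.12) reduces to

$$\begin{aligned} dx(t) = \beta (t) \left[ b_0(t)\, dt + \sigma _0(t)\, dB(t) + \int _\mathbb {R}\gamma _0(t,\zeta )\, \tilde{N}(dt, d\zeta ) \right] \; ; \quad x(0) = 0. \end{aligned}$$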

Theorem 4.2

(Necessary maximum principle) The following are equivalent:

$$\begin{aligned} \bullet&\;\; \frac{d}{da} J(u + a \beta ) \mid _{a=0} = 0 \text { for all bounded } \beta \in \mathcal {A}\\ \bullet&\;\; \frac{\partial H}{\partial u}(t) = 0 \text { for all } t \in [0, T]. \end{aligned}$$
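In the illustrative special case above, \(\frac{\partial H}{\partial u}\) can be written down explicitly, so Theorems 4.1 and 4.2 single out the same relation:

$$\begin{aligned} \frac{\partial H}{\partial u}(t) = b_0(t)\, p(t) + \sigma _0(t)\, q(t) + \int _\mathbb {R}\gamma _0(t,\zeta )\, r(t,\zeta )\, \nu (d\zeta ) = 0 \; ; \quad t \in [0,T]. \end{aligned}$$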

For detailed proofs of Theorems 4.1 and 4.2 we refer to the proofs of Theorems 2.1 and 2.2 in [15]. Below we sketch the ideas of the proofs.

Sketch of proof of Theorem 4.1: By introducing a suitable family of stopping times as in the proof of Theorem 2.1 in [15], we may assume that all the local martingales below are martingales and hence have expectation 0.

Choose \(u \in \mathcal {A}\) and consider

$$\begin{aligned} J(u) - J(\hat{u}) = J_1 + J_2 , \end{aligned}$$

where

$$\begin{aligned} J_1 = E \left[ \int _0^T \{ f(t) - \hat{f}(t)\} dt \right] ,\; J_2 = E [ \phi (X(T)) - \phi (\hat{X}(T))], \end{aligned}$$

where \(f(t) = f(t, X(t), u(t))\), with \(X(t) = X^u(t)\) etc.

By the definition of H we have

$$\begin{aligned} J_1&= E \left[ \int _0^T \{ H(t) - \hat{H}(t) - \hat{p}(t) (b(t) - \hat{b}(t)) \right. \nonumber \\&\left. \left. \quad - \hat{q}(t) (\sigma (t) - \hat{\sigma }(t)) - \int _\mathbb {R}\hat{r}(t,\zeta )(\gamma (t,\zeta ) - \hat{\gamma }(t,\zeta )) \nu (d\zeta ) \right\} dt \right] . \end{aligned}$$
(4.13)

By concavity of \(\phi \) and the Itô formula,

$$\begin{aligned} J_2&\le E[ \phi {^\prime }(\hat{X}(T))(X(T) - \hat{X}(T))] = E [ \hat{p}(T) (X(T) - \hat{X}(T))] \nonumber \\&= E \left[ \int _0^{T} \hat{p}(t^-) (dX(t) - d \hat{X}(t)) + \int _0^{T} (X(t^-) - \hat{X}(t^-)) d \hat{p}(t) \right. \nonumber \\&\quad \left. + \int _0^{T} \hat{q}(t) (\sigma (t) - \hat{\sigma }(t))dt + \int _0^{T} \int _\mathbb {R}\hat{r}(t,\zeta )(\gamma (t,\zeta ) - \hat{\gamma }(t,\zeta )) \nu (d \zeta ) dt\right] \nonumber \\&= E \left[ \int _0^T \hat{p}(t) (b(t) - \hat{b}(t))dt + \int _0^T (X(t) - \hat{X}(t)) \left( - \frac{\partial \hat{H}}{\partial x}(t)\right) dt \right. \nonumber \\&\quad \left. + \int _0^T \hat{q}(t)(\sigma (t) - \hat{\sigma }(t))dt + \int _0^T \int _\mathbb {R}\hat{r}(t,\zeta ) (\gamma (t,\zeta ) - \hat{\gamma }(t,\zeta )) \nu (d\zeta ) dt \right] . \end{aligned}$$
(4.14)

Adding (4.13) and (4.14) we get

$$\begin{aligned} J(u)&- J(\hat{u}) = J_1 +J_2 \nonumber \\&\le E \left[ \int _0^T\left\{ H(t) - \hat{H}(t) - \frac{\partial \hat{H}}{\partial x} (X(t) - \hat{X}(t)) \right\} dt \right] . \end{aligned}$$
(4.15)

By the maximum condition (4.10) and a separating hyperplane argument (see e.g. [19], Chapt. 5, Sec. 23), the concavity of \(\mathcal{H }\) gives, for each t,

$$\begin{aligned} H(t) - \hat{H}(t) - \frac{\partial \hat{H}}{\partial x}(t) (X(t) - \hat{X}(t))&\le \mathcal{H }(X(t)) - \mathcal{H }(\hat{X}(t)) - \frac{\partial \mathcal{H }}{\partial x} (\hat{X}(t))(X(t) - \hat{X}(t)) \\&\le 0 . \end{aligned}$$

Combining this with (4.15) we conclude that \(J(u) - J(\hat{u}) \le 0\), i.e. \(\hat{u}\) is optimal.

\(\square \)

Sketch of proof of Theorem 4.2: By introducing a suitable sequence of stopping times as in the proof of Theorem 2.2 in [15], we may assume that all the local martingales below are martingales and hence have expectation 0.

We can write \(\displaystyle \frac{d}{da} J(u + a \beta ) \mid _{a=0} = I_1 + I_2 \), where

$$\begin{aligned} I_1&= \frac{d}{da} E \left[ \int _0^T f(t,X^{u+a \beta }(t), u(t) + a \beta (t))\, dt\right] _{a=0} \\ I_2&= \frac{d}{da} E[ \phi (X^{u+a \beta }(T))]_{a=0}. \end{aligned}$$

By our assumptions on f and \(\phi \) we have

$$\begin{aligned} I_1&= E\left[ \int _0^T \left\{ \frac{\partial f}{\partial x}(t) x(t) + \frac{\partial f}{\partial u}(t) \beta (t)\right\} dt\right] \nonumber \\ I_2&= E[ \phi {^\prime } (X(T)) x(T)] = E[p(T) x(T)]. \end{aligned}$$
(4.16)

By the Itô formula

$$\begin{aligned} I_2&= E[p(T) x(T)] \nonumber \\&=E \left[ \int _0^{T} p(t) dx(t) + \int _0^{T} x(t) dp(t) + \int _0^{T} d[p,x](t) \right] \nonumber \\&= E \left[ \int _0^{T} p(t) \left\{ \frac{\partial b}{\partial x}(t) x(t) + \frac{\partial b}{\partial u}(t) \beta (t) \right\} dt + \int _0^{T}x(t) \left( - \frac{\partial H}{\partial x}(t)\right) dt \right. \nonumber \\&\quad + \int _0^{T} q(t) \left\{ \frac{\partial \sigma }{\partial x}(t) x(t) + \frac{\partial \sigma }{\partial u}(t) \beta (t) \right\} dt \nonumber \\&\quad \left. + \int _0^{T}\int _\mathbb {R}r(t,\zeta ) \left\{ \frac{\partial \gamma }{\partial x}(t,\zeta )x(t) + \frac{\partial \gamma }{\partial u}(t,\zeta )\beta (t)\right\} \nu (d\zeta ) dt \right] \nonumber \\&= E \left[ \int _0^{T} x(t) \left\{ \frac{\partial b}{\partial x}(t) p(t) + \frac{\partial \sigma }{\partial x}(t) q(t) + \int _\mathbb {R}\frac{\partial \gamma }{\partial x}(t,\zeta ) r(t,\zeta ) \nu (d\zeta ) - \frac{\partial H}{\partial x}(t) \right\} dt \right. \nonumber \\&\quad \left. + \int _0^{T} \beta (t) \left\{ \frac{\partial b}{\partial u}(t) p(t) + \frac{\partial \sigma }{\partial u}(t) q(t) + \int _\mathbb {R}\frac{\partial \gamma }{\partial u}(t,\zeta ) r(t,\zeta ) \nu (d\zeta ) \right\} dt \right] \nonumber \\&= E \left[ \int _0^{T} x(t) \left\{ - \frac{\partial f}{\partial x}(t) \right\} dt + \int _0^{T} \beta (t) \left\{ \frac{\partial H}{\partial u}(t)- \frac{\partial f}{\partial u}(t) \right\} dt \right] \nonumber \\&= - I_1 + E \left[ \int _0^T \frac{\partial H}{\partial u} (t) \beta (t) dt\right] . \end{aligned}$$
(4.17)

Combining (4.16) and (4.17) we get

$$\begin{aligned} \frac{d}{da} J(u + a \beta ) \mid _{a=0} = I_1 + I_2 = E \left[ \int _0^T \frac{\partial H}{\partial u}(t) \beta (t)dt \right] . \end{aligned}$$

We conclude that

$$\begin{aligned} \frac{d}{da} J(u + a \beta ) \mid _{a=0} = 0 \end{aligned}$$

if and only if

$$\begin{aligned} E \left[ \int _0^T \frac{\partial H}{\partial u}(t) \beta (t) dt \right] = 0 \; ; \; \text { for all bounded } \beta \in \mathcal {A}. \end{aligned}$$

In particular, applying this to \(\beta (t) = \theta (t)\) as in (4.11), we see that this is in turn equivalent to

$$\begin{aligned} E \left[ \frac{\partial H}{\partial u}(t) \; \Big | \; \mathcal {F}_t\right] = 0 \text { for all } t \in [0,T], \end{aligned}$$

which, since \(\frac{\partial H}{\partial u}(t)\) is \(\mathcal {F}_t\)-measurable, is the second condition of the theorem.

\(\square \)
