
Dynamic credit investment in partially observed markets

  • Published in: Finance and Stochastics

Abstract

We consider the problem of maximizing expected utility for a power investor who can allocate his wealth among a stock, a defaultable security, and a money market account. The dynamics of these security prices are governed by geometric Brownian motions modulated by a hidden continuous-time finite-state Markov chain. We reduce the partially observed stochastic control problem to a complete-observation risk-sensitive control problem via the filtered regime-switching probabilities. We separate the latter into predefault and postdefault dynamic optimization subproblems and obtain two coupled Hamilton–Jacobi–Bellman (HJB) partial differential equations. We prove the existence and uniqueness of a globally bounded classical solution to each HJB equation and give the corresponding verification theorem. We provide a numerical analysis showing that the investor increases his stock holdings as the filter probability of being in high-growth regimes increases, and decreases his credit risk exposure as the filter probability of being in high-default-risk regimes increases.


Figures 1–4 (omitted).

References

  1. Bélanger, A., Shreve, S., Wong, D.: A general framework for pricing credit risk. Math. Finance 14, 317–350 (2004)

  2. Bielecki, T., Jang, I.: Portfolio optimization with a defaultable security. Asia-Pac. Financ. Mark. 13, 113–127 (2006)

  3. Bielecki, T., Rutkowski, M.: Credit Risk: Modelling, Valuation and Hedging. Springer, New York (2001)

  4. Bo, L., Capponi, A.: Optimal investment in credit derivatives portfolio under contagion risk. Math. Finance (2014, forthcoming). doi:10.1111/mafi.12074. Available online at http://onlinelibrary.wiley.com/doi/10.1111/mafi.12074/abstract

  5. Bo, L., Wang, Y., Yang, X.: An optimal portfolio problem in a defaultable market. Adv. Appl. Probab. 42, 689–705 (2010)

  6. Capponi, A., Figueroa-López, J.E.: Dynamic portfolio optimization with a defaultable security and regime-switching markets. Math. Finance 24, 207–249 (2014)

  7. Carr, P., Linetsky, V., Mendoza-Arriaga, R.: Time-changed Markov processes in unified credit-equity modeling. Math. Finance 20, 527–569 (2010)

  8. Di Francesco, M., Pascucci, A., Polidoro, S.: The obstacle problem for a class of hypoelliptic ultraparabolic equations. Proc. R. Soc., Math. Phys. Eng. Sci. 464, 155–176 (2008)

  9. El Karoui, N., Jeanblanc, M., Jiao, Y.: What happens after a default: the conditional density approach. Stoch. Process. Appl. 120, 1011–1032 (2010)

  10. Elliott, R.J., Aggoun, L., Moore, J.B.: Hidden Markov Models: Estimation and Control. Springer, Berlin (1994)

  11. Elliott, R.J., Siu, T.K.: A hidden Markov model for optimal investment of an insurer with model uncertainty. Int. J. Robust Nonlinear Control 22, 778–807 (2012)

  12. Frey, R., Runggaldier, W.J.: Credit risk and incomplete information: a nonlinear-filtering approach. Finance Stoch. 14, 495–526 (2010)

  13. Frey, R., Runggaldier, W.J.: Nonlinear filtering in models for interest-rate and credit risk. In: Crisan, D., Rozovski, B. (eds.) Oxford Handbook on Nonlinear Filtering, pp. 923–959. Oxford University Press, London (2011)

  14. Frey, R., Schmidt, T.: Pricing and hedging of credit derivatives via the innovations approach to nonlinear filtering. Finance Stoch. 16, 105–133 (2012)

  15. Friedman, A.: Partial Differential Equations of Parabolic Type. Prentice Hall, New York (1964)

  16. Fujimoto, K., Nagai, H., Runggaldier, W.J.: Expected log-utility maximization under incomplete information and with Cox-process observations. Asia-Pac. Financ. Mark. 21, 35–66 (2014)

  17. Fujimoto, K., Nagai, H., Runggaldier, W.J.: Expected power-utility maximization under incomplete information and with Cox-process observations. Appl. Math. Optim. 67, 33–72 (2013)

  18. Giesecke, K., Longstaff, F., Schaefer, S., Strebulaev, I.: Corporate bond default risk: a 150-year perspective. J. Financ. Econ. 102, 233–250 (2011)

  19. Jacod, J., Shiryaev, A.: Limit Theorems for Stochastic Processes. Springer, New York (2003)

  20. Jiao, Y., Kharroubi, I., Pham, H.: Optimal investment under multiple defaults risk: a BSDE-decomposition approach. Ann. Appl. Probab. 23, 455–491 (2013)

  21. Jiao, Y., Pham, H.: Optimal investment with counterparty risk: a default density approach. Finance Stoch. 15, 725–753 (2011)

  22. Kharroubi, I., Lim, T.: Progressive enlargement of filtrations and backward SDEs with jumps. J. Theor. Probab. 27, 683–724 (2014)

  23. Kliemann, W., Koch, G., Marchetti, F.: On the unnormalized solution of the filtering problem with counting process observations. IEEE Trans. Inf. Theory 36, 1415–1425 (1990)

  24. Kraft, H., Steffensen, M.: Portfolio problems stopping at first hitting time with application to default risk. Math. Methods Oper. Res. 63, 123–150 (2005)

  25. Kraft, H., Steffensen, M.: Asset allocation with contagion and explicit bankruptcy procedures. J. Math. Econ. 45, 147–167 (2009)

  26. Kraft, H., Steffensen, M.: How to invest optimally in corporate bonds. J. Econ. Dyn. Control 32, 348–385 (2008)

  27. Liechty, J., Roberts, G.: Markov chain Monte Carlo methods for switching diffusion models. Biometrika 88, 299–315 (2001)

  28. Linetsky, V.: Pricing equity derivatives subject to bankruptcy. Math. Finance 16, 255–282 (2006)

  29. Nagai, H., Runggaldier, W.: PDE approach to utility maximization for market models with hidden Markov factors. In: Dalang, R.C., Dozzi, M., Russo, F. (eds.) Seminar on Stochastic Analysis, Random Fields, and Applications V. Progress in Probability, vol. 59, pp. 493–506. Birkhäuser, Basel (2008)

  30. Pascucci, A.: PDE and Martingale Methods in Option Pricing. Bocconi & Springer Series. Springer, Milan; Bocconi University Press, Milan (2011)

  31. Pham, H.: Stochastic control under progressive enlargement of filtrations and applications to multiple defaults risk management. Stoch. Process. Appl. 120, 1795–1820 (2010)

  32. Protter, P., Shimbo, K.: No arbitrage and general semimartingales. In: Ethier, S.N., Feng, J., Stockbridge, R.H. (eds.) Markov Processes and Related Topics: A Festschrift for Thomas G. Kurtz. IMS Collections, vol. 4, pp. 267–283 (2008)

  33. Rogers, C., Williams, D.: Diffusions, Markov Processes, and Martingales: Itô Calculus. Wiley, New York (1987)

  34. Sass, J., Haussmann, U.: Optimizing the terminal wealth under partial information: the drift process as a continuous time Markov chain. Finance Stoch. 8, 553–577 (2004)

  35. Siu, T.K.: A BSDE approach to optimal investment of an insurer with hidden regime switching. Stoch. Anal. Appl. 31, 1–18 (2013)

  36. Sotomayor, L., Cadenillas, A.: Explicit solutions of consumption-investment problems in financial markets with regime switching. Math. Finance 19, 251–279 (2009)

  37. Tamura, T., Watanabe, Y.: Risk-sensitive portfolio optimization problems for hidden Markov factors on infinite time horizon. Asymptot. Anal. 75, 169–209 (2011)

  38. Wong, E., Hajek, B.: Stochastic Processes in Engineering Systems. Springer, Berlin (1985)

  39. Zariphopoulou, T.: Investment–consumption models with transaction fees and Markov-chain parameters. SIAM J. Control Optim. 30, 613–636 (1992)

Acknowledgements

The authors gratefully acknowledge two anonymous reviewers and Wolfgang Runggaldier for constructive and insightful comments, which significantly improved the quality of the manuscript. Agostino Capponi would also like to thank Ramon van Handel for very useful discussions and insights on the original model setup.

Author information

Correspondence to Agostino Capponi.

Additional information

The second author’s research was partially supported by NSF grant DMS-1149692.

Appendices

Appendix A: Proofs related to Sect. 3

Lemma A.1

Let

$$ q_{t}^{i} = \mathbb{E}^{\hat{\mathbb{P}}} \big[L_{t} {\mathbf{1}_{\{X_{t}=e_{i}\} }} \big| \mathcal{G}_{t}^{I} \big]. $$
(A.1)

Then the dynamics of \((q_{t}^{i})_{t\geq0}\), \(i=1,\dots,N\), under the measure \(\hat{\mathbb{P}}\) is given by the system of stochastic differential equations (SDEs)

$$\begin{aligned} dq_{t}^{i} =& {\sum_{\ell=1}^{N}\varpi_{\ell,i}(t)q^{\ell}_{t}} \,dt + q_{t-}^{i} Q^{\top}(t,e_{i},\pi_{t}) {\varSigma_{Y}} \,d \hat{W}_{t} \\ &{} + {q_{t- }^{i}} (h_{i}-1) \,d\hat{\xi}_{t} - \gamma \eta(t,e_{i},\pi_{t}) q_{t}^{i} \,dt,\\ q_{0}^{i} =& p_{0}^{i}. \end{aligned}$$
(A.2)

Proof

Let us introduce the notation \(H_{t}^{i}:=\mathbf{1}_{\{X_{t}=e_{i}\}}\). Note that \(X_{t}=(H^{1}_{t},\dots,H^{N}_{t})^{\top}\) and, by (2.1),

$$ H^{i}_{t}=H_{0}^{i}+\int_{0}^{t} \sum_{\ell=1}^{N}\varpi_{\ell ,i}(s)H^{\ell}_{s} \,ds + \varphi_{i}(t). $$

From (3.14) and (3.10) we deduce that, under \({\hat{\mathbb{P}}}\),

$$dL_{t} = L_{t-} (h_{t}-1) \,d\hat{\xi}_{t} + L_{t-} Q^{\top}(t,X_{t},\pi_{t}) \,dY_{t} - L_{t} \gamma{\eta(t,X_{t},\pi_{t})}\,dt, $$

which yields that

$$[L, {H^{i}}]_{t} = \int_{0}^{t} L_{s-} Q^{\top}(s,X_{s},\pi_{s}) \,d[Y, {H^{i}}]_{s} + \int_{0}^{t} L_{s-} (h_{s}-1) \,d[\hat{\xi}, {H^{i}} ]_{s}. $$

Since \((Y_{t})_{t\geq0}\) and \((H_{t})_{t\geq0}\) are independent of \((X_{t})_{t\geq0}\) (and hence of \(H^{i}\)), under \({\hat{\mathbb{P}}}\), we have (see also [38, Lemma 7.3.1]) that, \({\hat{\mathbb {P}}}\)-almost surely,

$$[Y, {H^{i}}]_{s} = [\hat{\xi}, {H^{i}} ]_{s} = 0\quad\text{for all } s\geq0. $$

Thus, applying Itô’s formula, we obtain

$$\begin{aligned} L_{t} {H^{i}_{t}} =& {H^{i}_{0}} + \int_{0}^{t} {H^{i}_{s- }} \,dL_{s} + \int_{0}^{t} L_{s-} \,d{H_{s}^{i}} \\ =& {H^{i}_{0}} + \int_{0}^{t} {H^{i}_{s-}} L_{s-} Q^{\top}(s,X_{s},\pi_{s}) \,dY_{s} + \int_{0}^{t} {H^{i}_{s- } L_{s- } (h_{s- } - 1)} \,d\hat{\xi}_{s} \\ &{} - \int_{0}^{t} {H^{i}_{s}} L_{s} \gamma \eta(s,X_{s},\pi_{s}) \,ds + \int_{0}^{t} L_{s} {\sum_{\ell=1}^{N}\varpi_{\ell,i}(s)H^{\ell}_{s}} \,ds + \int_{0}^{t} L_{s-} \,d{\varphi_{i}(s).} \end{aligned}$$
(A.3)

Since \((\varphi_{i}(t))_{t\geq0}\) is an \(((\mathcal{F}_{t}^{X})_{t\geq 0}, {\hat{\mathbb{P}}})\)-martingale and \(\mathcal{G}_{T}^{I}\) is independent of \(\mathcal{F}_{T}^{X}\) under \({\hat{\mathbb{P}}}\), we have that \(\mathbb{E}^{{\hat{\mathbb{P}}}}[\int_{0}^{t} L_{s-} d{\varphi_{i}(s)} | {\mathcal{G}_{t}^{I}} ] = 0\). Therefore, taking the \(\mathcal{G}_{t}^{I}\)-conditional expectations in (A.3), we obtain

$$\begin{aligned} \mathbb{E}^{{\hat{\mathbb{P}}}}\big[L_{t} {H^{i}_{t} \big|\mathcal{G}_{t}^{I}}\big] =& p_{0}^{i} + \int_{0}^{t} \mathbb{E}^{{\hat{\mathbb{P}}}} \big[L_{s-} {H^{i}_{s-}} Q^{\top}(s,{e_{i}}, \pi_{s}) \big| \mathcal{G}_{s}^{I} \big] \,dY_{s} \\ &{} + \int_{0}^{t} \mathbb{E}^{{\hat{\mathbb{P}}}} \big[L_{s- } {H^{i}_{s- }} ({h_{s- }} - 1) \big| \mathcal{G}_{s}^{I} \big] \,d{\hat{\xi}_{s}} \\ &{} - \int_{0}^{t} \mathbb{E}^{{\hat{\mathbb{P}}}} \big[{H^{i}_{s}} L_{s} \gamma\eta(s,e_{i}, \pi_{s}) \big| \mathcal{G}_{s}^{I} \big] \,ds \\ &{} + \int_{0}^{t} \mathbb{E}^{{\hat{\mathbb{P}}}} \bigg[ {\sum_{\ell =1}^{N}\varpi_{\ell,i}(s)L_{s} H^{\ell}_{s}} \bigg| \mathcal{G}_{s}^{I} \bigg] \,ds, \end{aligned}$$
(A.4)

where we have used that if \(\phi\) is \(\mathbb{G}\)-predictable, then (see, e.g., [38, Lemma 7.3.2])

$$\begin{aligned} \mathbb{E}^{{\hat{\mathbb{P}}}} \bigg[\int_{0}^{t} \phi_{s} L_{s-} \,dY_{s} \bigg| \mathcal{G}_{t}^{I} \bigg] =& \int_{0}^{t} \mathbb{E}^{{\hat {\mathbb{P}}}}\big[\phi_{s} L_{s-} \big| \mathcal{G}_{s}^{I} \big] \,dY_{s}, \\ \mathbb{E}^{{\hat{\mathbb{P}}}} \bigg[\int_{0}^{t} \phi_{s} L_{s-} \,d\hat{\xi}_{s} \bigg| \mathcal{G}_{t}^{I} \bigg] =& \int_{0}^{t} \mathbb {E}^{{\hat{\mathbb{P}}}}\big[\phi_{s} L_{s-} \big| \mathcal{G}_{s}^{I} \big] \,d\hat{\xi}_{s}, \\ \mathbb{E}^{{\hat{\mathbb{P}}}} \bigg[\int_{0}^{t} \phi_{s} L_{s-} \,ds \bigg| \mathcal{G}_{t}^{I} \bigg] =& \int_{0}^{t} \mathbb{E}^{{\hat{\mathbb {P}}}}\big[\phi_{s} L_{s-} \big| \mathcal{G}_{s}^{I} \big] \,ds. \end{aligned}$$

Observing that \(dY_{t} = {\varSigma_{Y}} d \hat{W}_{t}\) under \({\hat{\mathbb {P}}}\), using that \(Q(t,{e_{i}},\pi_{t})\) and \(\eta(t,e_{i}, \pi_{t})\) are \((\mathcal {G}_{t}^{I})_{t\geq0}\)-adapted and that the Markov chain generator \(A(t)\) is deterministic, we obtain (A.2) upon taking the differential of (A.4). □
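The filtering recursion (A.2) lends itself to direct numerical approximation. The following sketch (not from the paper) applies an Euler scheme to a simplified two-state version of (A.2), dropping the default-jump and risk-sensitive correction terms; the generator, diffusion loadings, and step size are purely illustrative, and the final normalization anticipates (A.6).

```python
import numpy as np

# Toy Euler scheme for a simplified version of the unnormalized filter SDE (A.2):
#   dq^i = sum_l varpi_{l,i}(t) q^l dt + q^i kappa_i dW,
# with the default-jump and risk-sensitive correction terms omitted.
rng = np.random.default_rng(0)
A = np.array([[-1.0, 1.0],
              [0.5, -0.5]])          # toy generator: varpi_{l,i} = A[l, i]
kappa = np.array([0.3, -0.2])        # per-state diffusion loadings (hypothetical)
dt, n_steps = 1e-3, 5000
q = np.array([0.5, 0.5])             # q_0 = p_0
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt))
    q = q + (A.T @ q) * dt + q * kappa * dW   # drift mixes states; noise is multiplicative
p = q / q.sum()                      # normalized filter probabilities, cf. (A.6)
print(p)
```

The normalization in the last line is exactly the passage from the unnormalized filter \(q\) to the probability vector \(p\) established in Lemma A.2.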

Lemma A.2

We have the following identities:

$$\begin{aligned} q_{t}^{i} =& \hat{L}_{t} p_{t}^{i}, \end{aligned}$$
(A.5)
$$\begin{aligned} p_{t}^{i} =& \frac{q_{t}^{i}}{\sum_{j=1}^{N} q_{t}^{j}}, \end{aligned}$$
(A.6)

where \(q_{t}^{i}\), \(\hat{L}_{t}\), and \(p_{t}^{i}\) are defined, respectively, by (A.1), (3.20), and (3.18).

Proof

We first establish relation (A.5) by comparing the dynamics of \((q_{t}^{i})\) and of \(({\hat{L}_{t} p_{t}^{i}})\). The former is known from Lemma A.1 and given in (A.2). Next, we derive the latter. We have

$$ d (\hat{L}_{t} p_{t}^{i}) = \hat{L}_{{t- }} \,dp_{t}^{i} + p_{{t- }}^{i} \,d \hat{L}_{t} + d[\hat{L},p^{i} ]_{t}. $$

From (3.21) and (3.19) we obtain

$$\begin{aligned} d[\hat{L},p^{i} ]_{t} =& p_{t}^{i} \hat{L}_{t} \hat{\vartheta}^{\top}(t,p_{t}) (\varSigma_{Y} \varSigma_{Y}^{\top})^{-1} \big(\vartheta(t,e_{i}) - \hat{\vartheta}(t,p_{t}) \big) \,dt \\ &{} + p_{t}^{i} \hat{L}_{t} \gamma\pi_{t}^{\top} \big( \vartheta(t,e_{i}) - \hat {\vartheta}(t,p_{t}) \big) \,dt \\ &{} + (\hat{h}_{{t- }} -1 ) \frac{h_{i} - \hat{h}_{{t- }}}{\hat{h}_{{t-}}} \hat{L}_{{t- }} p_{{t- }}^{i} \,d H_{t}. \end{aligned}$$

Using these equations, along with (3.19), we obtain

$$\begin{aligned} d (\hat{L}_{t} p_{t}^{i} ) =& \hat{L}_{t} \bigg(\sum_{\ell=1}^{N}\varpi_{\ell ,i}(t) p^{\ell}_{t} \,dt \bigg) \\ &{} + \hat{L}_{t-} p_{t-}^{i} \big( \vartheta^{\top}(t,e_{i}) - \hat {\vartheta}^{\top}(t,p_{t-}) \big) (\varSigma_{Y} \varSigma^{\top}_{Y} )^{-1} \big(dY_{t} - \hat{\vartheta}(t,p_{t}) \,dt \big) \\ &{} + \hat{L}_{{t- }} p_{{t- }}^{i} \frac{h_{i} - \hat{h}_{{t- }}}{\hat{h}_{{t- }}} \big(d H_{t} - \hat{h}_{{t- }}{\bar{H}_{t- }}\,dt \big) \\ &{} + p_{t-}^{i} \hat{L}_{t} \hat{Q}^{\top}(t,p_{t-},\pi_{t}) \,dY_{t} - p_{t}^{i} {\hat{L}_{t} \gamma\hat{\eta}(t,p_{t},\pi_{t})}\,dt \\ &{} + p_{{t- }}^{i} \hat{L}_{{t- }} (\hat{h}_{{t- }} - 1) (d H_{t} - \bar{H}_{t- }\,dt) + (\hat{h}_{{t- }}-1) \frac{h_{i} - \hat{h}_{{t- }}}{\hat{h}_{{t- }}} \hat{L}_{{t- }} p_{{t- }}^{i} \,d H_{t} \\ &{}+ p_{t}^{i} \hat{L}_{t} \hat{\vartheta}^{\top}(t,p_{t}) (\varSigma_{Y} \varSigma_{Y}^{\top} )^{-1} {\big(\vartheta(t,e_{i}) - {\hat {\vartheta}}(t,p_{t}) \big) \,dt} \\ &{} + \gamma p_{t}^{i} \hat{L}_{t} \pi_{t}^{\top} \big( \vartheta (t,e_{i}) - \hat{\vartheta}(t,p_{t}) \big) \,dt. \end{aligned}$$
(A.7)

Next, observe that

$$\begin{aligned} &\hat{L}_{t-} p_{t-}^{i} \big(\vartheta^{\top}(t,e_{i}) - \hat{\vartheta }^{\top}(t,p_{t-}) \big) (\varSigma_{Y} \varSigma_{Y}^{\top})^{-1} \big(dY_{t} - \hat{\vartheta}(t,p_{t})\,dt\big) \\ &\qquad{}+ p_{t-}^{i} \hat{L}_{t-} \hat{Q}^{\top}(t,p_{t-},\pi_{t}) \,dY_{t} \\ &\quad{}=\hat{L}_{t-} p_{t-}^{i} Q^{\top}(t,e_{i},\pi_{t}) \,dY_{t} \\ &\qquad{}- \hat{L}_{t} p_{t}^{i} \big( \vartheta^{\top}(t,e_{i})-\hat {\vartheta}^{\top}(t,p_{t}) \big) (\varSigma_{Y} \varSigma_{Y}^{\top})^{-1} \hat{\vartheta}(t,p_{t}) \,dt. \end{aligned}$$
(A.8)

Moreover,

$$ \eta(t,e_{i},\pi_{t}) - \hat{\eta}(t,p_{t},\pi_{t}) = \pi_{t}^{\top} \big(\hat {\vartheta}(t,p_{t}) -\vartheta(t,e_{i}) \big). $$
(A.9)

Using (A.8) and (A.9) and straightforward simplifications, we may simplify (A.7) to

$$\begin{aligned} d(\hat{L}_{t} p_{t}^{i}) =& \bigg( \sum_{\ell=1}^{N}\varpi_{\ell,i}(t) \hat {L}_{t} p^{\ell}_{t} \,dt \bigg) + \hat{L}_{t-} p_{t-}^{i} Q^{\top}(t,e_{i},\pi_{t}) \,dY_{t} \\ &{} - \gamma\hat{L}_{t} p_{t}^{i} \eta(t,e_{i},\pi_{t}) \,dt + \hat{L}_{{t- }} p_{{t- }}^{i} (h_{i}-1) \,d\hat{\xi}_{t}. \end{aligned}$$
(A.10)

Using that \(dY_{t} = \varSigma_{Y} d\hat{W}_{t}\), we have that (A.5) holds via a direct comparison of (A.10) and (A.2). Next, we establish (A.6). Using (A.5) and \(\sum _{i=1}^{N} p_{t}^{i} = 1\), we deduce that

$$d \bigg(\sum_{i=1}^{N} q_{t}^{i} \bigg) = d \bigg(\sum_{i=1}^{N} \hat{L}_{t} p_{t}^{i} \bigg) = d\hat{L}_{t}, $$

hence obtaining that \(\sum_{i=1}^{N} q_{t}^{i} = \hat{L}_{t}\). Using again (A.5), this gives

$$p_{t}^{i} = \frac{q_{t}^{i}}{\hat{L}_{t}} = \frac{q_{t}^{i}}{\sum_{j=1}^{N} q_{t}^{j}}. $$

This completes the proof. □
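The pair of identities (A.5)–(A.6) states that the unnormalized filter \(q\) is the probability vector \(p\) scaled by the common factor \(\hat{L}\), so that normalizing \(q\) recovers \(p\) and summing \(q\) recovers \(\hat{L}\). A minimal numerical illustration (toy values, not from the paper):

```python
import numpy as np

# Illustration of (A.5)-(A.6): with q^i = Lhat * p^i for a common positive
# factor Lhat, normalizing q recovers p and sum(q) recovers Lhat.
p = np.array([0.2, 0.5, 0.3])   # filter probabilities, sum to 1 (placeholder values)
Lhat = 1.7                      # positive normalizing factor (placeholder value)
q = Lhat * p                    # unnormalized filter, relation (A.5)

p_recovered = q / q.sum()       # relation (A.6)
Lhat_recovered = q.sum()        # uses sum_i p^i = 1
```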

Proof of Proposition 3.3

Using (3.13), (A.1), and relation (A.5) established in Lemma A.2, we have that

$$\begin{aligned} J(v,\pi,T) =&\frac{1}{\gamma} \mathbb{E}^{\mathbb{P}} [V_{T}^{\gamma} ] = \frac{v^{\gamma}}{\gamma} \mathbb{E}^{\hat{\mathbb {P}}} [L_{T} ] = \frac{v^{\gamma}}{\gamma} \mathbb{E}^{\hat{\mathbb{P}}} \big[ \mathbb{E}^{\hat{\mathbb{P}}} [L_{T} \mid\mathcal{G}_{T}^{I} ] \big] \\ =& \frac{v^{\gamma}}{\gamma} \sum_{i=1}^{N} \mathbb{E}^{\hat {\mathbb{P}}} \left[ \mathbb{E}^{\hat{\mathbb{P}}} \big[L_{T} \mathbf {1}_{{\{X_{T}=e_{i}\}}} \big| \mathcal{G}_{T}^{I} \big] \right] = \frac {v^{\gamma}}{\gamma} {\sum_{i=1}^{N} \mathbb{E}^{\hat{\mathbb{P}}} [q_{T}^{i} ]} \\ =& \frac{v^{\gamma}}{\gamma} \sum_{i=1}^{N} \mathbb{E}^{\hat{\mathbb {P}}} [\hat{L}_{T} p_{T}^{i} ] = \frac{v^{\gamma}}{\gamma} \mathbb{E}^{\hat {\mathbb{P}}} [\hat{L}_{T} ], \end{aligned}$$

thus proving the statement. □

Appendix B: Proofs related to Sect. 5

We start with a lemma that will be needed in the proof of the verification theorem.

Lemma B.1

For any \(T>0\) and \(i\in\{1,\dots,N\}\), we have:

  1. \({\mathbb{P}}[p^{i}_{t}> 0\ \textit{for all}\ t\in[0,T)]=1\).

  2. \({\mathbb{P}}[p^{i}_{t}< 1\ \textit{for all}\ t\in[0,T)]=1\).

Proof

Define \(\varsigma= \inf\{t : p^{i}_{t} = 0\} \wedge T\). Suppose, to the contrary, that \(p^{i}\) can hit zero; then \(\mathbb{P}[p^{i}_{\varsigma} = 0] > 0\). Recall that \(p_{t}^{i} = \frac{q_{t}^{i}}{\sum_{j} q_{t}^{j}}\) from (A.6); hence, \(p_{\varsigma}^{i} = \frac{q_{\varsigma}^{i}}{\sum_{j} q_{\varsigma}^{j}}\), where

$$q_{\varsigma}^{i} = \mathbb{E}^{\hat{\mathbb{P}}}\big[L_{\varsigma} \mathbf{1}_{\{X_{\varsigma} = e_{i}\}} \big| \mathcal{G}_{\varsigma}^{I} \big] $$

by the optional projection property; see [33, Thm. VI.7.10]. Define the two-dimensional (observed) log-price process \(Y_{t} = (\log S_{t}, {\log P_{t}})^{\top}\). Since \(q_{\varsigma}^{i} = \mathbb{E}^{\hat{\mathbb{P}}} [L_{\varsigma} \mathbf{1}_{\{X_{\varsigma}=e_{i}\}} \mid\mathcal{G}_{\varsigma}^{I} ]\) and \(L_{\varsigma}>0\), we can choose a modification \(g(Y,H,X_{\varsigma})\) of \(\mathbb{E}^{{\hat{\mathbb{P}}}}[L_{\varsigma} \mid\mathcal{G}_{\varsigma}^{I}, X_{\varsigma} ]\) such that \(g>0\) and, for each \(e_{i}\), \(g(Y,H,{e_{i}})\) is \(\mathcal{G}_{\varsigma}^{I}\)-measurable. By the tower property,

$$\begin{aligned} q_{\varsigma}^{i} =& \mathbb{E}^{\hat{\mathbb{P}}}\big[g(Y,H,X_{\varsigma }) \mathbf{1}_{\{X_{\varsigma}=e_{i}\}} \big| \mathcal{G}_{\varsigma }^{I}\big] \\ =& g(Y,H,e_{i}) \hat{\mathbb{P}}[X_{\varsigma}=e_{i}\mid\mathcal {G}_{\varsigma}^{I}] \\ =& g(Y,H,e_{i}) \mathbb{P}[X_{t}=e_{i}]\big|_{t=\varsigma}, \end{aligned}$$

where the first equality follows because \(\varsigma\) is \(\mathcal{G}^{I}_{\varsigma}\)-measurable, and the last two because \(X\) is independent of \(\mathcal{G}^{I}\) under \(\hat{\mathbb{P}}\). Since \(\mathbb{P}[X_{t}=e_{i}] >0\) and \(g>0\), we get that \(q_{\varsigma}^{i}>0\) a.s., which contradicts \(\mathbb{P}[p_{\varsigma}^{i}=0]>0\). This proves the first statement in the lemma. Next, we notice that, for every \(j\in\{1,\dots,N\}\),

$$\begin{aligned} \mathbb{P}[p^{j}_{t} = 0 \mbox{ for some } t\in[0,T) ] = 1- \mathbb {P}[p^{j}_{t} > 0 \mbox{ for all } t\in[0,T) ] = 0, \end{aligned}$$

where the last equality follows from the first statement. Since \(\sum_{j=1}^{N} p_{t}^{j}=1\), the event \(\{p^{i}_{t}=1 \mbox{ for some } t\in[0,T)\}\) is contained in \(\{p^{j}_{t}=0 \mbox{ for some } t\in[0,T)\}\) for every \(j\neq i\), which yields the second statement. □

Proof of (5.14)

Let us first analyze the first term \(\beta_{\gamma}^{\top} (\nabla_{\tilde{p}} \bar{w})^{\top}\) in the sup of (5.12). For brevity, we use \(\beta_{\varpi} := \beta_{\varpi}(t,\tilde{p},0)\). By definition of \(\beta_{\gamma}\) and using the maximizer \(\pi:=\pi^{*}\) in (5.13), we have

$$\begin{aligned} \beta^{\top}_{\gamma} =&\beta_{\varpi}^{\top} + \gamma{\pi ^{\top}} \varSigma_{Y} \bar{\kappa}^{\top} \\ =& \beta_{\varpi}^{\top} + \frac{\gamma}{1-\gamma} \big(\varSigma_{Y} \bar{\kappa}^{\top} (\nabla_{\tilde{p}} \bar{w})^{\top} - \varUpsilon\big)^{\top} (\varSigma_{Y}^{\top} \varSigma_{Y})^{-1} \varSigma_{Y} \bar {\kappa}^{\top} \\ =& \beta_{\varpi}^{\top} + \frac{\gamma}{1-\gamma} (\nabla_{\tilde {p}} \bar{w}) \bar{\kappa} \varSigma_{Y}^{\top} (\varSigma_{Y}^{\top} \varSigma _{Y})^{-1} \varSigma_{Y} \bar{\kappa}^{\top} \\ &{} - \frac{\gamma}{1-\gamma} \varUpsilon^{\top} (\varSigma_{Y}^{\top} \varSigma _{Y})^{-1} \varSigma_{Y} \bar{\kappa}^{\top}. \end{aligned}$$
(B.1)

Further, again using the expression for \(\pi=\pi^{*}\), the second term in the sup is equal to

$$\begin{aligned} {-}\gamma\pi^{\top} \varUpsilon =& {-}\frac{\gamma}{1-\gamma} \big(- \varUpsilon+ \varSigma_{Y} \bar{\kappa}^{\top} {(\nabla_{\tilde{p}} \bar {w})^{\top}} \big)^{\top} (\varSigma_{Y}^{\top} \varSigma_{Y})^{-1} \varUpsilon\\ =& \frac{\gamma}{1-\gamma} \varUpsilon^{\top} (\varSigma_{Y}^{\top} \varSigma _{Y})^{-1} \varUpsilon{-} \frac{\gamma}{1-\gamma} {(\nabla_{\tilde{p}} \bar {w})} \bar{\kappa} \varSigma_{Y}^{\top} (\varSigma_{Y}^{\top} \varSigma_{Y})^{-1} {\varUpsilon.} \end{aligned}$$
(B.2)

The third term in the sup may be simplified to

$$\begin{aligned} &-\frac{1}{2} \frac{\gamma}{1-\gamma} \big( - \varUpsilon+ \varSigma_{Y} \bar{\kappa}^{\top} {(\nabla_{\tilde{p}} \bar{w})^{\top}} \big)^{\top} (\varSigma_{Y}^{\top} \varSigma_{Y})^{-1} \big(- \varUpsilon+ \varSigma_{Y} \bar {\kappa}^{\top} {(\nabla_{\tilde{p}} \bar{w})^{\top}}\big) \\ &\quad{}=-\frac{1}{2} \frac{\gamma}{1-\gamma}\varUpsilon^{\top} (\varSigma _{Y}^{\top} \varSigma_{Y})^{-1} \varUpsilon + \frac{1}{2} \frac{\gamma}{1-\gamma }\varUpsilon^{\top}(\varSigma_{Y}^{\top} \varSigma_{Y})^{-1}\varSigma_{Y} \bar{\kappa }^{\top} {(\nabla_{\tilde{p}} \bar{w})^{\top}} \\ &\qquad{} + \frac{1}{2} \frac{\gamma}{1-\gamma} {(\nabla_{\tilde{p}} \bar{w})} \bar{\kappa} \varSigma_{Y}^{\top} (\varSigma_{Y}^{\top} \varSigma_{Y})^{-1} \varUpsilon\\ &\qquad{} - \frac{1}{2} \frac{\gamma}{1-\gamma} {(\nabla_{\tilde{p}} \bar{w})} \bar{\kappa} \varSigma_{Y}^{\top}(\varSigma_{Y}^{\top} \varSigma_{Y})^{-1} \varSigma_{Y} \bar{\kappa}^{\top} {(\nabla_{\tilde{p}} \bar{w})^{\top}.} \end{aligned}$$
(B.3)

Using (B.1)–(B.3), we obtain that

$$\begin{aligned} & \sup_{\pi}\Big(\beta_{\gamma}^{\top} {(\nabla_{\tilde{p}} \bar{w})^{\top}} - \gamma\pi_{t}^{\top} \varUpsilon- \frac{1}{2} \gamma(1-\gamma) \pi_{t}^{\top} \varSigma^{\top}_{Y} \varSigma_{Y} \pi _{t} \Big) \\ &\quad{}= \beta_{\varpi}^{\top} (\nabla_{\tilde{p}} \bar{w})^{\top} + \frac {1}{2} \frac{\gamma}{1-\gamma} {(\nabla_{\tilde{p}} \bar{w})} \bar {\kappa} \varSigma_{Y}^{\top} (\varSigma_{Y}^{\top} \varSigma_{Y})^{-1} \varSigma_{Y} \bar {\kappa}^{\top} {(\nabla_{\tilde{p}} \bar{w})^{\top}} \\ &\qquad{}+ \frac{1}{2} \frac{\gamma}{1-\gamma} \varUpsilon^{\top} (\varSigma_{Y}^{\top} \varSigma_{Y})^{-1} \varUpsilon{- \frac{\gamma}{1-\gamma} (\nabla_{\tilde{p}} \bar{w})} \bar{\kappa} \varSigma_{Y}^{-1} \varUpsilon, \end{aligned}$$

and therefore, after rearrangement, we obtain (5.14). □
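The computations in (B.1)–(B.3) are mechanical quadratic expansions and can be sanity-checked numerically. The sketch below (not part of the paper; dimensions and values are arbitrary placeholders) verifies the expansion (B.3) with random matrices.

```python
import numpy as np

# Numerical check of the quadratic expansion in (B.3):
#   -c/2 * v^T M v  with  v = -Upsilon + Sigma_Y kappa^T (grad w)^T,
#   M = (Sigma_Y^T Sigma_Y)^{-1},  c = gamma / (1 - gamma),
# expanded into the four terms on the right-hand side.
rng = np.random.default_rng(1)
d, m = 2, 3
S = rng.normal(size=(d, d))     # plays the role of Sigma_Y (random toy matrix)
kap = rng.normal(size=(m, d))   # plays the role of kappa-bar
ups = rng.normal(size=d)        # plays the role of Upsilon
g = rng.normal(size=m)          # plays the role of the gradient row vector
gamma = 0.4
c = gamma / (1.0 - gamma)
M = np.linalg.inv(S.T @ S)

v = -ups + S @ kap.T @ g
lhs = -0.5 * c * v @ M @ v
rhs = (-0.5 * c * ups @ M @ ups
       + 0.5 * c * ups @ M @ (S @ kap.T @ g)
       + 0.5 * c * (g @ kap @ S.T) @ M @ ups
       - 0.5 * c * (g @ kap @ S.T) @ M @ (S @ kap.T @ g))
```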

Proof of Theorem 5.3

In order to ease the notational burden, throughout the proof, we write \(\tilde{p}\) for \(\tilde{p}^{\circ}\), \(\tilde{p}_{s}\) for \(\tilde{p}^{t}_{s}\), \(\pi\) for \(\pi^{t}\), \(\tilde{\mathbb{P}}\) for \(\tilde{\mathbb{P}}^{t}\), \(\mathbb{P}\) for \(\mathbb{P}^{t}\), \(\tilde{W}\) for \(\tilde{W}^{t}\), \(X\) for \(X^{t}\), and \(\mathcal{G}_{s}^{I}\) for \(\mathcal{G}_{s}^{t,I}\). Let us first remark that

$$ {\mathbb{P}}[\tilde{p}_{s}\in\tilde{\Delta}_{N-1},\ t\leq s\leq T]=1. $$
(B.4)

Indeed, set \(\tilde{p}^{N}_{s}=1-\sum_{j=1}^{N-1}\tilde{p}^{j}_{s}\) and recall from Remark 3.4 that the process \(\tilde{p}^{i}\) is given by

$$ \tilde{p}_{s}^{i} := {{\mathbb{P}}}\big[X_{s} = e_{i} \big| \mathcal {G}_{s}^{I}\big], \quad t\leq s\leq T,\ i=1,\dots,N. $$

Therefore, using Lemma B.1, we deduce that all the \(\tilde{p}^{i}\), \(i=1,\dots,N\), remain positive in \([t,T]\) a.s., and hence (B.4) is satisfied.

Next, we prove that the feedback strategy \(\widetilde{\pi}_{s}:=(\tilde {\pi}^{S}_{s},\tilde{\pi}^{P}_{s})^{\top}\), \(\tilde{\pi}^{P}_{s} := 0\), is admissible, that is,

$$ \mathbb{E}^{\mathbb{P}}\left[\exp\left(\frac{\sigma^{2}\gamma ^{2}}{2}\int_{t}^{T}\big(\widetilde{\pi}^{S}(s,\tilde{p}_{s})\big)^{2}ds\right)\right]< \infty. $$
(B.5)

We have that (B.5) follows from (B.4) and the fact that \((\widetilde{\pi}^{S}(s,\tilde{p}))^{2}\) is uniformly bounded on \([0,T]\times\widetilde{\Delta}_{N-1}\). To see the latter property, note that

$$\begin{aligned} \sup_{(s,\tilde{p})\in[0,T]\times\widetilde{\Delta}_{N-1}}\big(\widetilde{\pi}^{S}(s,\tilde{p})\big)^{2} \leq& \frac{2}{\sigma^{4}(1-\gamma)^{2}}\sup_{(s,\tilde{p})\in[0,T]\times \widetilde{\Delta}_{N-1}}\big(\tilde{\mu}(\tilde{p})-r\big)^{2}\\ &{} +\frac{2}{\sigma^{2}(1-\gamma)^{2}} \sup_{(s,\tilde{p})\in[0,T]\times\widetilde{\Delta}_{N-1}}\left ({\nabla_{\tilde{p}} \underline{w}(s,\tilde{p})\underline{\kappa}(\tilde {p})} \right)^{2}. \end{aligned}$$

The first term on the right-hand side is clearly bounded since \(|\tilde{\mu}(\tilde{p})|\leq\max_{i}|\mu_{i}|\) for any \(\tilde{p}\in \widetilde{\Delta}_{N-1}\). For the second term, using the definition of \(\underline{\kappa}\) given in (5.3), we have

$$\begin{aligned} &\sup_{(s,\tilde{p})\in[0,T]\times\widetilde{\Delta}_{N-1}}\left ({\nabla_{\tilde{p}} \underline{w}(s,\tilde{p})\underline{\kappa}(\tilde {p})} \right)^{2} \\ &\quad{}= \frac{1}{\sigma^{2}}\sup_{(s,\tilde{p})\in[0,T]\times \widetilde{\Delta}_{N-1}}\bigg(\sum_{j=1}^{N-1}\partial_{\tilde {p}^{j}}\underline{w}(s,\tilde{p})\tilde{p}^{j}\bigg(\mu_{j}-\sum _{i=1}^{N}\mu_{i}\tilde{p}^{i}\bigg)\bigg)^{2}, \end{aligned}$$
(B.6)

where \(\tilde{p}^{N}:=1-\sum_{i=1}^{N-1}\tilde{p}^{i}\). The last expression is bounded since each partial derivative term \(\partial_{\tilde{p}^{j}}\underline{w}(s,\tilde{p})\), \(j=1,\dots,N-1\), is bounded on \([0,T]\times\widetilde{\Delta}_{N-1}\) by Lemma 5.1 and the subsequent Remark 5.2, where we established \(\mathcal{C}_{P}^{2,\alpha}\) regularity for \(\underline{w}(s,\tilde{p})\); this directly implies bounded first- and second-order space derivatives on \(\widetilde{\Delta}_{N-1}\). Now fix an arbitrary feedback control \(\pi_{s}^{S}:=\pi^{S}(s,\tilde {p}_{s})\) such that \((\pi^{S},\pi^{P})\in{\mathcal{A}}(t,T;\tilde{p},1)\), where \(\pi^{P}_{s}\equiv0\) and \({\mathcal{A}}(t,T;\tilde{p},1)\) is as in Definition 4.1, and define the process

$$ M_{s}^{\pi^{S}} := e^{-\gamma\int_{t}^{s} \underline{\eta}(u,\tilde {p}_{u},\pi^{S}_{u}) du} e^{\underline{w}(s,\tilde{p}_{s})}, \quad t\leq s\leq T, $$

where

$$\begin{aligned} \underline{\eta}(u,\tilde{p},\pi^{S})= {\eta}\big(u,\tilde{p},(\pi^{S},0)^{\top}\big)=-r + {\pi^{S}}\big(r-\tilde{\mu}(\tilde{p})\big) + \frac{1-\gamma}{2} \sigma^{2}(\pi^{S})^{2}. \end{aligned}$$
(B.7)

In what follows, we write for simplicity \(M^{\pi}\) for \(M^{\pi^{S}}\) and \(\pi\) for \(\pi^{S}\). Note that the process \((M^{\pi}_{s})_{t\leq s\leq T}\) is uniformly bounded. Indeed, (B.7) is convex in \(\pi^{S}\), and by minimizing it over \(\pi^{S}\), it follows that for any \(\tilde{p}\in\tilde{\Delta}_{N-1}\),

$$- \underline{\eta}(t,\tilde{p},\pi)\leq r +\frac{(\tilde{\mu}(\tilde {p})-r)^{2}}{2(1-\gamma)\sigma^{2}} \leq r +\frac{(\max_{i}\mu_{i}^{2}+r^{2})}{(1-\gamma)\sigma^{2}} < {\infty}. $$

Therefore, since \(\underline{w}\in C([0,T]\times\tilde{\Delta }_{N-1})\), there exists a constant \(K<\infty\) for which

$$ M_{s}^{\pi}=e^{-\gamma\int_{t}^{s} \underline{\eta}(u,\tilde{p}_{u},\pi_{u}) du} e^{\underline{w}(s,\tilde{p}_{s})}\leq K e^{\gamma\|\underline{\eta }\|_{\infty} (T-t)}=:A< \infty. $$
(B.8)
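The upper bound on \(-\underline{\eta}\) obtained above by minimizing the convex quadratic (B.7) over \(\pi^{S}\) can also be checked numerically. The sketch below uses purely illustrative parameter values and compares a grid search against the closed-form bound \(r + (\tilde{\mu}-r)^{2}/(2(1-\gamma)\sigma^{2})\).

```python
import numpy as np

# Check of the bound  -eta(pi) <= r + (mu - r)^2 / (2 (1-gamma) sigma^2),
# where eta(pi) = -r + pi (r - mu) + (1-gamma)/2 * sigma^2 * pi^2 as in (B.7).
# Parameter values are illustrative only.
r, mu, sigma, gamma = 0.02, 0.08, 0.2, 0.5

def eta(pi):
    return -r + pi * (r - mu) + 0.5 * (1.0 - gamma) * sigma**2 * pi**2

grid = np.linspace(-50.0, 50.0, 200001)
max_neg_eta = np.max(-eta(grid))                 # numerical maximum of -eta
bound = r + (mu - r) ** 2 / (2.0 * (1.0 - gamma) * sigma**2)
pi_star = (mu - r) / ((1.0 - gamma) * sigma**2)  # analytic maximizer of -eta
```

With these values the grid maximum of \(-\eta\) agrees with the closed-form bound, which is attained at \(\pi^{*}\).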

We prove the two assertions of Theorem 5.3 through the following steps:

(i) Define the process \(\mathcal{Y}_{s}=e^{\underline{w}(s,\tilde {p}_{s})}\). By Itô’s formula and the generator formula (4.7) with \(f(s,\tilde{p})=e^{\underline{w}(s,\tilde{p})}\),

$$\begin{aligned} M_{s}^{\pi} =& M_{t}^{\pi}+\int_{t}^{s}e^{-\gamma\int_{t}^{u} \underline {\eta}(r,\tilde{p}_{r},\pi_{r}) \,dr} \,d \mathcal{Y}_{u} -\gamma\int_{t}^{s} \underline{\eta}(u,\tilde{p}_{u},\pi_{u}) e^{-\gamma \int_{t}^{u} \underline{\eta}(r,\tilde{p}_{r},\pi_{r}) \,dr}\mathcal{Y}_{u}\,du\\ =&M_{t}^{\pi}+\int_{t}^{s} M_{u}^{\pi}\left({\frac{\partial\underline {w}}{\partial u}} + \frac{1}{2} \operatorname{tr}(\underline{\kappa} \underline {\kappa}^{\top} D^{2} \underline{w}) + \frac{1}{2}{(\nabla_{\tilde{p}} \underline{w}) \underline{\kappa} \underline{\kappa}^{\top} (\nabla _{\tilde{p}} \underline{w})^{\top}} \right) \,du \\ &{}+ \int_{t}^{s} M_{u}^{\pi} \big( {(\nabla_{\tilde{p}} \underline{w})\underline{\beta}_{\gamma}} -\gamma\underline{\eta} \big)\,du +\int_{t}^{s}M_{u}^{\pi} \nabla_{\tilde{p}} \underline{w} \underline {\kappa}\,d {\tilde{W}}^{(1)}_{u}. \end{aligned}$$

Using the expression of \(\underline{\eta}\) in (B.7) and some rearrangement, we may write \(M^{\pi}\) as

$$M_{s}^{\pi} =M_{t}^{\pi}+\int_{t}^{s} M_{u}^{\pi}R(u,\tilde{p}_{u},\pi_{u})\,du +\int_{t}^{s}M_{u}^{\pi} \nabla_{\tilde{p}} \underline{w} \underline {\kappa} \,d {\tilde{W}}^{(1)}_{u} $$

with

$$\begin{aligned} R(u,\tilde{p},\pi) =& {\underline{w}}_{u} + \frac{1}{2} \operatorname{tr}(\underline{\kappa} \underline{\kappa}^{\top} D^{2} \underline{w}) + \frac{1}{2} {(\nabla_{\tilde{p}} \underline{w}) \underline{\kappa} \underline{\kappa}^{\top} (\nabla_{\tilde{p}} \underline{w})^{\top}} \\ &{} + \gamma r +{(\nabla_{\tilde{p}} \underline{w})\underline{\beta}_{\gamma}} - \gamma\pi\big(r- \tilde{\mu}(\tilde{p})\big) - \frac{{\sigma^{2}}}{2} \gamma(1-\gamma) \pi^{2}. \end{aligned}$$
(B.9)

Clearly, \(R(u,\tilde{p},\pi)\) is a concave function in \(\pi\) since \(R_{\pi\pi}=-\sigma^{2}\gamma(1-\gamma) < 0\). If we maximize \(R(u,\tilde{p},\pi)\) as a function of \(\pi\) for each \((u,\tilde{p})\), then we find that the optimum is given by (5.7). Upon substituting (5.7) into (B.9), we get that

$$\begin{aligned} R(u,\tilde{p},\pi) \leq& R\big(u,\tilde{p},{\widetilde{\pi}^{S}(u,\tilde {p})}\big)\\ =&\underline{w}_{u} + \frac{1}{2} \operatorname{tr}(\underline{\kappa} \underline{\kappa}^{\top} D^{2} \underline{w}) + \frac{1}{2(1-\gamma)} {(\nabla_{\tilde{p}} \underline{w}) \underline{\kappa} \underline{\kappa}^{\top} (\nabla_{\tilde{p}} \underline{w})^{\top}}+ {(\nabla_{\tilde{p}} \underline{w})\underline{\varPhi}} + \underline{\varPsi }\\ =& 0, \end{aligned}$$

where the last equality follows from (5.5). Therefore, we get the inequality

$$\begin{aligned} {\mathbb{E}^{\tilde{\mathbb{P}}}}[M^{\pi}_{T}]&\leq M_{t}^{\pi} + {\mathbb{E}^{\tilde{\mathbb{P}}}}\left[\int_{t}^{T} M_{u}^{\pi} (\nabla_{\tilde{p}} \underline{w}) \underline{\kappa} \,d {\tilde{W}}^{(1)}_{u} \right] \end{aligned}$$

with equality if \(\pi=\widetilde{\pi}^{S}\). From (5.3), \(\sup_{\tilde{p}\in\tilde{\Delta }_{N-1}}\|\underline{\kappa}(\tilde{p})\|^{2}\leq2\max_{i}\{\mu_{i}\} /\sigma\). Since the partial derivatives \(\partial_{\tilde {p}^{j}}\underline{w}(s,\tilde{p})\) are uniformly bounded on \([0,T]\times\tilde{\Delta}_{N-1}\) (see also the argument after (B.6)), (B.8) implies that

$$\sup_{t\leq u\leq T} |M_{u}^{\pi}(\nabla_{\tilde{p}} \underline{w}) \underline{\kappa}|^{2} \leq A \sup_{t\leq u\leq T}\|\underline{\kappa}(\tilde{p}_{u})\|^{2} \sup_{t\leq u\leq T}\|\nabla_{\tilde{p}} \underline{w}(u,\tilde {p}_{u})\|^{2} \leq B $$

for some nonrandom constant \(B<\infty\). We conclude that

$$ \mathbb{E}^{\tilde{\mathbb{P}}} [M^{\pi}_{T}]\leq M_{t}^{\pi }=e^{\underline{w}(t,\tilde{p}_{t})}=e^{\underline{w}(t,\tilde{p})} $$
(B.10)

with equality if \(\pi=\widetilde{\pi}^{S}\).
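The maximization step above is the usual completion of a square. Schematically, write the \(\pi\)-dependent part of (B.9) as \(b\pi-\frac{q}{2}\pi^{2}\) with \(q=\sigma^{2}\gamma(1-\gamma)>0\) and \(b\) collecting all coefficients linear in \(\pi\) (including \(\gamma(\tilde{\mu}(\tilde{p})-r)\) and any \(\pi\)-linear contribution of \(\underline{\beta}_{\gamma}\)); this decomposition into \(b\) and \(q\) is ours and only indicates the structure. Then

$$b\pi-\frac{q}{2}\pi^{2} = \frac{b^{2}}{2q}-\frac{q}{2}\Big(\pi-\frac{b}{q}\Big)^{2} \le\frac{b^{2}}{2q}, $$

with equality precisely at \(\pi^{*}=b/q\). It is this \(b^{2}/(2q)\) term that turns the factor \(\frac{1}{2}\) in front of the quadratic gradient term into \(\frac{1}{2(1-\gamma)}\) after substituting the optimizer; the exact maximizer is the one given in (5.7).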

(ii) For simplicity, let us write \(\widetilde{\pi}_{s}:=\widetilde{\pi}^{S}(s,\tilde{p}_{s})\). First, note that since equality holds in (B.10) when \(\pi=\tilde{\pi}\), it follows that

$$\begin{aligned} e^{\underline{w}(t,\tilde{p})} =& \mathbb{E}^{\tilde{\mathbb{P}}}\big[M^{\tilde{\pi}}_{T}\big]\\ =&\mathbb{E}^{\tilde{\mathbb{P}}} \left[ e^{-\gamma\int _{t}^{T} \underline{\eta}(u,\tilde{p}_{u},\tilde{\pi}_{u}) \,du} e^{\underline {w}(T,\tilde{p}_{T})}\right]\\ =&\mathbb{E}^{\tilde{\mathbb{P}}}\left[ e^{-\gamma\int_{t}^{T} \underline {\eta}(u,\tilde{p}_{u},\tilde{\pi}_{u}) \,du} \right]. \end{aligned}$$
(B.11)

Similarly, for every feedback control \(\pi_{s}=\pi(s,\tilde{p}_{s})\) such that \((\pi,0)\in\mathcal{A}(t,T;\tilde{p},1)\),

$$\begin{aligned} {{\mathbb{E}^{\tilde{\mathbb{P}}}\left[ e^{-\gamma\int_{t}^{T} \underline {\eta}(u,\tilde{p}_{u},\pi_{u}) \,du} \right]=\mathbb{E}^{\tilde{\mathbb{P}}} [M^{\pi}_{T}] \leq e^{\underline{w}(t,\tilde{p})}=\mathbb{E}^{{\tilde{\mathbb {P}}}}\left[ e^{-\gamma\int_{t}^{T} \underline{\eta}(u,\tilde{p}_{u},\tilde{\pi}_{u}) \,du} \right],}} \end{aligned}$$

where the inequality in the previous equation comes from (B.10), and the last equality follows from (B.11). The previous relationships show the optimality of \(\tilde{\pi}\) and prove the assertions (1) and (2). □

Proof of Theorem 5.4

For brevity, define the operator

$$\mathcal{B}=\partial_{t}+ \frac{1}{2} \operatorname{tr}(\bar{\kappa} \bar{\kappa }^{\top} D^{2})+ \nabla_{\tilde{p}} \bar{\varPhi} $$

and denote by

$$H(t,\tilde{p},u)=-\tilde{h}({\tilde{p}}) e^{\underline{w} (t, \frac {1}{\tilde{h}({\tilde{p}})} {\tilde{p}} \cdot h^{\prime})}\frac {u^{\gamma}}{1-\gamma}, \quad u\in\mathbb{R}_{+}, $$

the nonlinear term of the PDE (5.17). Notice that since \(\tilde{h} > 0\) by construction, we have \(H\le0\). Moreover, \(u\mapsto H(t,{\tilde{p}},u)\) is smooth and Lipschitz-continuous on \([\bar{c},+\infty)\) for any \(\bar{c}>0\), uniformly with respect to \((t,{\tilde{p}})\). We set

$$ \bar{\psi}_{0}(t,{\tilde{p}})= e^{c(T-t)},\quad t\in[0,T], $$

where \(c\) is a suitably large positive constant such that

$$ c u+H(t,\tilde{p},u)-\frac{{\bar{\varPsi}(t,\tilde{p})}}{1-\gamma}u\ge0, \quad (t,\tilde{p})\in(0,T)\times\tilde{\Delta}_{N-1},\ u\ge1. $$
(B.12)

Then we define recursively the sequence \((\bar{\psi}_{j})_{j \in\mathbb {N}}\) by

$$ \textstyle\begin{cases} (\mathcal{B}+\frac{\bar{\varPsi}}{1-\gamma}) \bar{\psi}_{j}-\lambda\bar{\psi }_{j}=H(\cdot,\cdot,\bar{\psi}_{j-1})-\lambda\bar{\psi}_{j-1},\\ \bar{\psi}_{j}(T,\cdot) =1, \end{cases} $$
(B.13)

where \(\lambda\) is the Lipschitz constant of \(u\mapsto H(\cdot,\cdot,u)\) on \([\bar{c},+\infty)\), and \(\bar{c}\) is the strictly positive constant defined as

$$ \bar{c}=e^{-\frac{T}{1-\gamma}\|\bar{\varPsi}\|_{\infty}}. $$
(B.14)
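To see why such a Lipschitz constant exists, note that for \(\gamma<1\), \(\gamma\neq0\), differentiating \(H\) in \(u\) gives

$$\partial_{u}H(t,\tilde{p},u)=-\frac{\gamma}{1-\gamma}\,\tilde{h}(\tilde{p})\, e^{\underline{w} (t, \frac{1}{\tilde{h}(\tilde{p})} {\tilde{p}} \cdot h^{\prime})}\,u^{\gamma-1}, $$

and since \(\gamma-1<0\), on \([\bar{c},+\infty)\) we have \(|\partial_{u}H|\le\frac{|\gamma|}{1-\gamma}\,\|\tilde{h}\|_{\infty}\, e^{\|\underline{w}\|_{\infty}}\,\bar{c}^{\,\gamma-1}\), uniformly in \((t,\tilde{p})\), because \(\tilde{h}\) and \(\underline{w}\) are bounded. Any such bound can serve as the Lipschitz constant \(\lambda\).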

Let us recall that the linear problem (B.13) has a classical solution in \(C^{2,\alpha}_{P}\) whose existence can be proved as in Lemma 5.1; see also the following Remark 5.2. Next, we prove by induction that

  1. (i)

    \((\bar{\psi}_{j})\) is a decreasing sequence, that is,

    $$ \bar{\psi}_{j+1}\le\bar{\psi}_{j},\quad j\ge0; $$
    (B.15)
  2. (ii)

    \((\bar{\psi}_{j})\) is uniformly strictly positive, and in particular

    $$ \bar{\psi}_{j+1}\ge\bar{c},\quad j\ge0, $$
    (B.16)

    with \(\bar{c}\) as in (B.14).
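Before giving the proof, the scheme (B.13) and the two claims (B.15), (B.16) can be illustrated numerically on a toy one-dimensional analogue. Everything below is an illustration only, not the paper's model: we replace the simplex by the interval \([0,1]\) with reflecting ends, take constant coefficients `D` (diffusion) and `a` (standing in for \(\bar{\varPsi}/(1-\gamma)\)), and set \(\tilde{h}\equiv1\), \(\underline{w}\equiv0\), so that \(H(u)=-u^{\gamma}/(1-\gamma)\); all constants are chosen by us for the sketch.

```python
import numpy as np

# Toy version of the monotone iteration (B.13):
#   (d/dt + D d^2/dx^2 + a - lam) psi_j = H(psi_{j-1}) - lam*psi_{j-1},  psi_j(T,.) = 1,
# solved by implicit Euler backward in time (tau = T - t).
gamma = 0.5
T = 1.0
D = 0.05        # toy diffusion, stand-in for (1/2) kappa kappa^T
a = 0.1         # toy stand-in for bar-Psi/(1-gamma), constant here
Nx, Nt = 21, 200
dx = 1.0 / (Nx - 1)
dt = T / Nt

def H(u):
    # toy nonlinearity with h-tilde = 1, w-underline = 0
    return -u**gamma / (1.0 - gamma)

c_bar = np.exp(-T * a)                               # lower barrier, cf. (B.14)
lam = gamma * c_bar**(gamma - 1.0) / (1.0 - gamma)   # Lipschitz const of H on [c_bar, inf)

# Laplacian with reflecting (Neumann) ends
L = np.zeros((Nx, Nx))
for i in range(Nx):
    L[i, i] = -2.0
    if i > 0:
        L[i, i - 1] = 1.0
    if i < Nx - 1:
        L[i, i + 1] = 1.0
L[0, 1] = 2.0
L[-1, -2] = 2.0
L *= D / dx**2

# implicit Euler matrix in tau = T - t; A is an M-matrix, so comparison holds discretely
A = np.eye(Nx) - dt * (L + (a - lam) * np.eye(Nx))

def sweep(psi_prev):
    """One sweep of (B.13): given psi_{j-1} on the grid, return psi_j."""
    psi = np.ones((Nt + 1, Nx))          # terminal condition psi_j(T,.) = 1 at tau index 0
    for n in range(Nt):
        src = H(psi_prev[n + 1]) - lam * psi_prev[n + 1]
        psi[n + 1] = np.linalg.solve(A, psi[n] - dt * src)
    return psi

c0 = a + 2.5                             # large enough that c*u + H(u) - a*u >= 0 for u >= 1, cf. (B.12)
tau = np.linspace(0.0, T, Nt + 1)
psi_prev = np.exp(c0 * tau)[:, None] * np.ones((1, Nx))   # psi_0 = e^{c(T-t)}

for j in range(40):
    psi = sweep(psi_prev)
    assert np.all(psi <= psi_prev + 1e-9)   # monotone decreasing, cf. (B.15)
    assert psi.min() >= c_bar - 1e-6        # uniformly strictly positive, cf. (B.16)
    if np.max(np.abs(psi - psi_prev)) < 1e-9:
        break
    psi_prev = psi

print("iterations:", j + 1, "min:", float(psi.min()), "max:", float(psi.max()))
```

The two `assert` statements check the discrete analogues of (B.15) and (B.16) at every sweep; the M-matrix structure of `A` plays the role of the maximum principle in the proof below.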

First, we observe that

$$ \bar{\psi}_{0}\ge1,\quad\text{and}\quad\Big(\mathcal{B}+\frac{\bar {\varPsi}}{1-\gamma}\Big) \bar{\psi}_{0}=\Big(-c+\frac{\bar{\varPsi}}{1-\gamma}\Big)\bar{\psi}_{0}. $$
(B.17)

Next, we prove (B.15) and (B.16) for \(j=0\). By (B.17) and (B.12) we have

$$ \textstyle\begin{cases} (\mathcal{B}+\frac{\bar{\varPsi}}{1-\gamma}-\lambda)(\bar{\psi}_{1}-\bar{\psi}_{0})= H(\cdot,\cdot,\bar{\psi}_{0})+c \bar{\psi}_{0}-\frac{\bar{\varPsi}}{1-\gamma }\bar{\psi}_{0}\ge0,\\ (\bar{\psi}_{1}-\bar{\psi}_{0})(T,{\tilde{p}}) =0, \end{cases} $$
(B.18)

where the inequality follows from the fact that \(c\) is chosen as in (B.12) and \(\bar{\psi}_{0} \geq1\) as observed in (B.17). Since the process \(\tilde{p}\) never reaches the boundary of the simplex by Lemma B.1, it follows from the Feynman–Kac representation theorem (or, equivalently, the maximum principle) that \(\bar{\psi}_{1}\le \bar{\psi}_{0}\). Indeed, we have

$$\begin{aligned} & (\bar{\psi}_{0}-\bar{\psi}_{1})(t,\tilde{p}) \\ &\quad{}= \mathbb{E}^{\tilde{\mathbb{P}}} \bigg[\int_{t}^{T} e^{\int_{t}^{s} (\frac{\bar{\varPsi}(r,\tilde{p}_{r})}{1-\gamma} - \lambda) \,dr} \\ &\qquad{} \times\bigg(H(s,\tilde{p}_{s},\bar{\psi}_{0}) + c \bar{\psi}_{0}(s,\tilde{p}_{s}) - \frac{\bar{\varPsi}(s,\tilde{p}_{s}) \bar{\psi }_{0}(s,\tilde{p}_{s})}{1-\gamma} \bigg) \bigg| \tilde{p}_{t}=\tilde{p} \bigg] \\ &\quad{}\geq0, \end{aligned}$$
(B.19)

where the last inequality follows directly from (B.18). This proves (B.15) when \(j=0\). Using the recursive definition (B.13) along with the facts that \(H\le0\) and \(\lambda>0\) and inequality (B.19), we obtain

$$ \Big(\mathcal{B}+\frac{\bar{\varPsi}}{1-\gamma}\Big)\bar{\psi}_{1}=H(\cdot ,\cdot,\bar{\psi}_{0})+\lambda(\bar{\psi}_{1}-\bar{\psi}_{0})\le0. $$
(B.20)

Then (B.16) with \(j=0\) follows again from the Feynman–Kac theorem; indeed, by (B.20) we have

$$\begin{aligned} \bar{\psi}_{1}(t,{\tilde{p}}) =& {\mathbb{E}^{\tilde{\mathbb {P}}}}\left[-\int_{t}^{T} e^{\int_{t}^{s} \frac{\bar{\varPsi}(r,\tilde {p}_{r})}{1-\gamma} \,dr} \left(H(s,\tilde{p}_{s},\bar{\psi}_{0})+\lambda(\bar{\psi }_{1}-\bar{\psi}_{0})(s,\tilde{p}_{s})\right) \bigg| \tilde{p}_{t} = \tilde{p} \right] \\ &{} + \mathbb{E}^{\tilde{\mathbb{P}}}\Big[e^{\frac{1}{1-\gamma }\int_{t}^{T}\bar{\varPsi}(s,\tilde{p}_{s})\,ds} \Big| \tilde{p}_{t} = \tilde{p} \Big] \\ \ge& e^{-\frac{T}{1-\gamma}\|\bar{\varPsi}\|_{\infty}}, \end{aligned}$$
(B.21)

where the last inequality follows from the positivity of the first expectation guaranteed by (B.20).
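Both (B.19) and (B.21), as well as the comparisons below, are instances of one representation. If \((\mathcal{B}+c)u=f\) on \((0,T)\times\tilde{\Delta}_{N-1}\) with \(u(T,\cdot)=g\), and \(\tilde{p}\) stays in the open simplex (Lemma B.1), then, under the usual integrability conditions,

$$u(t,\tilde{p}) = \mathbb{E}^{\tilde{\mathbb{P}}}\bigg[e^{\int_{t}^{T} c(s,\tilde{p}_{s})\,ds}\, g(\tilde{p}_{T}) - \int_{t}^{T} e^{\int_{t}^{s} c(r,\tilde{p}_{r})\,dr}\, f(s,\tilde{p}_{s})\,ds \,\bigg|\, \tilde{p}_{t}=\tilde{p}\bigg], $$

which follows by applying Itô's formula to \(e^{\int_{t}^{s}c\,dr}\,u(s,\tilde{p}_{s})\). In particular, \(f\le0\) together with \(g\ge0\) forces \(u\ge0\): this is the maximum principle in its probabilistic form.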

Next, we assume the inductive hypothesis, that is,

$$ \bar{c}\le\bar{\psi}_{j}\le\bar{\psi}_{j-1}, $$
(B.22)

and prove (B.15), (B.16). Recalling that \(\lambda\) is the Lipschitz constant of the function \(u\mapsto H(\cdot,\cdot,u)\) on \([\bar{c},+\infty)\), by (B.22) we have

$$ \textstyle\begin{cases} (\mathcal{B}+\frac{\bar{\varPsi}}{1-\gamma}-\lambda)(\bar{\psi}_{j+1}-\bar{\psi}_{j})= H(\cdot,\cdot,\bar{\psi}_{j})-H(\cdot,\cdot,\bar{\psi}_{j-1})-\lambda(\bar {\psi}_{j}-\bar{\psi}_{j-1})\ge0, \\ (\bar{\psi}_{j+1}-\bar{\psi}_{j})(T,{\tilde{p}}) =0. \end{cases} $$

Thus, (B.15) follows from the Feynman–Kac theorem using the same procedure as in (B.18) and (B.19). Moreover, we have

$$ \Big(\mathcal{B}+\frac{\bar{\varPsi}}{1-\gamma}\Big)\bar{\psi }_{j+1}=H(\cdot,\cdot,\bar{\psi}_{j})+\lambda(\bar{\psi}_{j+1}-\bar{\psi }_{j})\le0, $$

where the inequality above follows by (B.15) and using that \(H\le 0\) and \(\lambda>0\). Then, as in (B.21), we have that (B.16) follows from the Feynman–Kac theorem.

In conclusion, for \(j\in\mathbb{N}\), we have

$$ \bar{c} \leq\bar{\psi}_{j+1} \leq\bar{\psi}_{j} \leq\bar{\psi}_{0}. $$
(B.23)

The assertion now follows by proceeding as in the proof of Theorem 3.3 in [8]. Indeed, denote by \(\bar{\psi}\) the pointwise limit of \((\bar{\psi}_{j})\) as \(j\to+\infty\). Since \(\bar{\psi}_{j}\) solves (B.13) and satisfies the uniform estimate (B.23), we can apply standard a priori Morrey–Sobolev-type estimates (see Theorems 2.1 and 2.2 in [8]) to conclude that for any \(\alpha\in(0,1)\), \(\|\bar{\psi}_{j}\|_{C_{P}^{1,\alpha}((0,T)\times\tilde{\Delta}_{N-1})}\) is bounded by a constant depending only on ℬ, \(\alpha\), and \(\lambda\). Hence, by the classical Schauder interior estimate (see, e.g., Theorem 2.3 in [8]), we deduce that \(\|\bar{\psi}_{j}\|_{C_{P}^{2,\alpha}((0,T)\times\tilde{\Delta}_{N-1})}\) is bounded uniformly in \(j\in\mathbb{N}\). It follows that \((\bar{\psi}_{j})_{j\in\mathbb{N}}\) admits a subsequence (still denoted by \((\bar{\psi}_{j})\)) that converges in \(C^{2,\alpha}\). Passing to the limit in (B.13) as \(j\to\infty\), we obtain

$$\Big(\mathcal{B}+\frac{\bar{\varPsi}}{1-\gamma}\Big) \bar{\psi} = H(\cdot ,\cdot,\bar{\psi}) \quad\text{in } (0,T)\times\tilde{\Delta}_{N-1} $$

and \(\bar{\psi}(T,\cdot)=1\).

Finally, to prove that \(\bar{\psi}\in C((0,T]\times\tilde{\Delta}_{N-1})\), we use a standard barrier-function argument. Recall that \(w\) is a barrier function for the operator \((\mathcal{B}+\frac{\bar{\varPsi}}{1-\gamma})\) on the domain \((0,T]\times\tilde{\Delta}_{N-1}\) at the point \((T,\bar{p})\) if \(w\in C^{2} (V \cap((0,T]\times\tilde{\Delta}_{N-1}))\), where \(V\) is a neighborhood of \((T,\bar{p})\), and we have

  1. (i)

    \((\mathcal{B}+\frac{\bar{\varPsi}}{1-\gamma}) w \leq-1\) in \(V \cap((0,T)\times\tilde{\Delta}_{N-1})\);

  2. (ii)

    \(w > 0\) in \(V \cap((0,T)\times\tilde{\Delta}_{N-1}) \setminus \{(T,\bar{p})\}\) and \(w(T,\bar{p}) = 0\).

Next, we fix \(\bar{p}\in\tilde{\Delta}_{N-1}\). Following [15, Sect. 3.4], it is not difficult to check that

$$w(t,\tilde{p})=\big(|\tilde{p}-\bar{p}|^{2}+c_{1}(T-t)\big)e^{c_{2} (T-t)} $$

is a barrier at \((T,\bar{p})\), provided that \(c_{1},c_{2}\) are sufficiently large. Then we set

$$v^{\pm}(t,\tilde{p})=1\pm k w(t,\tilde{p}), $$

where \(k\) is a suitably large positive constant, independent of \(j\), such that

$$\begin{aligned} \Big(\mathcal{B}+\frac{\bar{\varPsi}}{1-\gamma}\Big)(\bar{\psi}_{j}-v^{+}) \ge& H(\cdot,\cdot,\bar{\psi}_{j-1})-\lambda(\bar{\psi}_{j-1} - \bar{\psi}_{j})\\ &{} -\frac{\bar{\varPsi}}{1-\gamma}-k\Big(\mathcal{B}+\frac{\bar{\varPsi }}{1-\gamma}\Big)w\\ \ge& 0, \end{aligned}$$

and \(\bar{\psi}_{j}\le v^{+}\) on \(\partial(V \cap((0,T)\times\tilde{\Delta}_{N-1}))\). The maximum principle yields \(\bar{\psi}_{j}\le v^{+}\) on \(V \cap ((0,T)\times\tilde{\Delta}_{N-1})\); analogously, \(\bar{\psi}_{j}\ge v^{-}\) on the domain \(V \cap ((0,T)\times\tilde{\Delta}_{N-1})\), and letting \(j\to\infty\), we get

$$1-k w(t,\tilde{p})\le\bar{\psi}(t,\tilde{p})\le1+k w(t,\tilde{p}), \quad (t,\tilde{p})\in V \cap \left((0,T)\times\tilde{\Delta}_{N-1}\right). $$

Therefore, we deduce that

$$\lim_{(t,\tilde{p})\to(T,\bar{p})} \bar{\psi}(t,\tilde{p})=1, $$

which concludes the proof. □

Proof of Theorem 5.5

As in the proof of Theorem 5.3, to ease the notational burden, we write \(\tilde{p}\) for \(\tilde{p}^{\circ}\), \(\tilde{p}_{s}\) for \(\tilde{p}^{t}_{s}\), \(\pi\) for \(\pi^{t}\), \(\tilde{W}\) for \(\tilde{W}^{t}\), \(\tilde{\mathbb{P}}\) for \(\tilde{\mathbb{P}}^{t}\), ℙ for \(\mathbb{P}^{t}\), and \(\mathcal{G}_{s}^{I}\) for \(\mathcal{G}_{s}^{t,I}\). Similarly to the proof of Theorem 5.3, it is easy to see that the strategy \({\widetilde{\pi}_{s}:=(\widetilde{\pi}^{S}_{s},\widetilde{\pi }^{P}_{s})^{\top}}=(\widetilde{\pi}^{S}(s,{\tilde{p}_{s- },H^{t}_{s- }}), {\widetilde{\pi}^{P}(s,{\tilde{p}_{s- },H_{s- }})})^{\top}\) defined from (5.20), (5.21) is admissible, that is, satisfies (4.5). This essentially follows from condition (2.6) and the fact that both \(\underline{w}(s,\tilde{p})\) and \(\bar{w}(s,\tilde{p})\) belong to \(\mathcal{C}_{P}^{2,\alpha}\); hence, their first- and second-order space derivatives are bounded on \([0,T]\times\widetilde{\Delta}_{N-1}\). Here, it is also useful to recall that \({\mathbb{P}}[\tilde{p}_{s}\in \tilde{\Delta}_{N-1},\ t\leq s\leq T]=1\) as shown in the proof of Theorem 5.3.

For a feedback control \(\pi_{s}:=(\pi_{s}^{S},\pi^{P}_{s}):=(\pi ^{S}(s,\tilde{p}_{s- },H_{s- }),\pi^{P}(s,\tilde{p}_{s- },H_{s- }))\) such that \((\pi^{S},\pi^{P})\in\bar{\mathcal{A}}(t,T;\tilde{p},0)\), define the process

$$ M_{s}^{\pi} := e^{-\gamma\int_{t}^{s} {\tilde{\eta}(u,\tilde{p}_{u},\pi _{u})} \,du} e^{w(s,\tilde{p}_{s},H_{s})}, \quad t\leq s\leq T, $$

where \(w(s,\tilde{p},z):=(1-z)\bar{w}(s,\tilde{p})+z\underline {w}(s,\tilde{p})\), and \(\tilde{\eta}\) is defined as in (4.13). Note that \(\tilde{\eta}\) can be written as

$$\begin{aligned} {\tilde{\eta}(t,\tilde{p},\pi)} =&-r + \pi^{S}\big(r-\tilde{\mu}(\tilde {p})\big) + \frac{1-\gamma}{2} \sigma^{2}(\pi^{S})^{2} \\ &{} +\pi^{P}\big(r- {\tilde{a}(t,\tilde{p})} \big)+ \frac{1-\gamma}{2} \upsilon^{2}(\pi^{P})^{2}, \end{aligned}$$

and thus \(-\tilde{\eta}\) is concave in \(\pi\). This in turn implies that there exists a nonrandom constant \(A<\infty\) such that

$$ 0< M_{s}^{\pi}\leq A< \infty,\quad{t\leq s\leq T}, $$
(B.24)

since \(\underline{w}, \bar{w}\in C([0,T]\times\tilde{\Delta}_{N-1})\). We prove the two assertions of Theorem 5.5 through the following steps:

(i) Define the processes \(\mathcal{Y}_{s}=e^{w(s,\tilde{p}_{s},H_{s})}\) and \(\mathcal{U}_{s}=e^{-\gamma\int_{t}^{s} {\tilde{\eta}(u,\tilde {p}_{u},\pi_{u})} \,du}\). By Itô’s formula, the generator formula (4.7) with \(f(s,\tilde{p},z)=e^{w(s,\tilde{p},z)}\), and the same arguments as those used to derive (4.11),

$$\begin{aligned} M_{s}^{\pi} =& M_{t}^{\pi}+\int_{t}^{s}{\mathcal{U}_{u- }} \,d \mathcal{Y}_{u} -\gamma\int_{t}^{s} {\tilde{\eta}(u,\tilde{p}_{u},\pi_{u})} {\mathcal {U}_{u}} \mathcal{Y}_{u}\,du\\ =&M_{t}^{\pi}+\int_{t}^{s} M_{u}^{\pi}\bigg(\frac{\partial w}{\partial u} + \frac{1}{2} \operatorname{tr}({\kappa} {\kappa}^{\top} D^{2} w) + \frac{1}{2}{(\nabla_{\tilde{p}} w) {\kappa} {\kappa}^{\top} (\nabla _{\tilde{p}} w)^{\top}} +(\nabla_{\tilde{p}} w){\beta}_{\gamma}\\ &{} +(1-H_{u}) \tilde{h}(\tilde{p}_{u})\Big( e^{\underline{w} (u,\frac {1}{\tilde{h}(\tilde{p}_{u})}\tilde{p}_{u}\cdot h^{\prime})-\bar{w} \left(u,\tilde{p}_{u}\right)}-1 \Big) -\gamma\tilde{\eta} \bigg)\,du +\mathcal{M}^{c}_{s}+\mathcal{M}^{d}_{s}, \end{aligned}$$

where

$$\begin{aligned} \mathcal{M}^{c}_{s} :=&\int_{t}^{s} M_{u}^{\pi}\nabla_{\tilde{p}}w {{\kappa }(u,\tilde{p}_{u})} \,d \tilde{W}_{u}, \\ \mathcal{M}^{d}_{s} :=&\int_{t}^{s} \mathcal{U}_{u- } \Big( e^{\underline{w} (u,\frac{1}{\tilde{h}(\tilde{p}_{u- })}\tilde{p}_{u- }\cdot h^{\prime})}-e^{\bar{w} (u,\tilde{p}_{u- })} \Big) \,{d\tilde{\xi }_{u}}. \end{aligned}$$
(B.25)

Using the expression of \(\tilde{\eta}\) in (4.13) and rearrangements similar to those in the proof of Theorem 5.3, we may write \(M^{\pi}\) as

$$\begin{aligned} M_{s}^{\pi} &=M_{t}^{\pi}+\int_{t}^{s} M_{u}^{\pi}R(u,\tilde{p}_{u},\pi _{u},H_{u})\,du+\mathcal{M}^{c}_{s}+\mathcal{M}^{d}_{s} \end{aligned}$$

with

$$\begin{aligned} R(u,\tilde{p},\pi,z) =& \frac{\partial w}{\partial u} + \frac{1}{2} \operatorname{tr}({\kappa} {\kappa}^{\top} D^{2} w) + \frac{1}{2} {(\nabla_{\tilde{p}} w ){\kappa} {\kappa}^{\top} ({\nabla_{\tilde{p}}} w)^{\top}} + \gamma r \\ &{} + (1-z) \tilde{h}(\tilde{p})\Big( e^{\underline{w} (u,\frac {1}{\tilde{h}(\tilde{p})}\tilde{p}\cdot h^{\prime})-\bar{w} (u,\tilde {p})}-1 \Big) \\ &{} + z\left((\nabla_{\tilde{p}} \underline{w}) {\beta}_{\gamma} - \gamma\pi^{S} \big(r-\tilde{\mu}(\tilde{p})\big)- \frac{\gamma(1-\gamma )}{2} \sigma^{2}(\pi^{S})^{2}\right) \\ &{} +(1-z) \left((\nabla_{\tilde{p}} \bar{w}) {\beta}_{\gamma} -\gamma \pi^{P} \big(r - \tilde{a}(u,\tilde{p})\big) - \frac{\gamma(1-\gamma)}{2} \upsilon ^{2}(\pi^{P})^{2}\right). \end{aligned}$$
(B.26)

Clearly, \(R(u,\tilde{p},\pi,z)\) is a concave function in \(\pi\) for each \((u,\tilde{p},z)\), and this function reaches its maximum at \(\tilde{\pi }(u,\tilde{p},z)=(\tilde{\pi}^{S}(u,\tilde{p},z),\tilde{\pi }^{P}(u,\tilde{p},z))\) as defined in (5.19), (5.20). Upon substituting this maximum into (B.26) and rearrangements similar to those leading to (5.5) and (5.14) (depending on whether \(z=1\) or \(z=0\)), we get

$$\begin{aligned} R(u,\tilde{p},\pi,z)&\leq R\big(u,\tilde{p},{\widetilde{\pi}(u,\tilde{p},z)},z\big)= 0, \end{aligned}$$

in light of (5.5) or (5.14), respectively. Therefore, we get the inequality

$$\begin{aligned} \mathbb{E}^{{\tilde{\mathbb{P}}}} [M^{\pi}_{T}]&\leq M_{t}^{\pi} +\mathbb{E}^{{\tilde{\mathbb{P}}}} [\mathcal{M}^{c}_{T}+\mathcal{M}^{d}_{T}] \end{aligned}$$

with equality if \(\pi=\widetilde{\pi}\). Note that \(\mathbb{E}^{{\tilde{\mathbb{P}}}}[\mathcal{M}^{c}_{T}]=0\) since it is possible to find a nonrandom constant \(B\) such that

$$\sup_{t\leq u\leq T} |M_{u}^{\pi}(\nabla_{\tilde{p}}w) \kappa(u,\tilde{p}_{u})|^{2}\leq A \sup_{t\leq u\leq T} \|{\kappa}(u,\tilde{p}_{u})\|^{2} \sup_{t\leq u\leq T}\|\nabla_{\tilde {p}} w(u,\tilde{p}_{u})\|^{2} \leq B, $$

in view of (B.24) and the fact that the partial derivatives of \(\underline{w}\) and \(\bar{w}\) are uniformly bounded on \([0,T]\times\widetilde{\Delta}_{N-1}\). The latter statement follows from the fact that both \(\underline{w}\) and \(\bar{w}\) are \(\mathcal{C}_{P}^{2,\alpha}\) on \(\widetilde{\Delta}_{N-1}\) in light of Lemma 5.1 and Theorem 5.4. To deal with \(\mathcal{M}^{d}\), note that since \(\underline{w},\bar{w}\in C([0,T]\times\tilde{\Delta}_{N-1})\) and \((\mathcal{U}_{s})_{t\leq s\leq T}\) is uniformly bounded (due to the fact that \(-\tilde{\eta}\) is concave), we have that the integrand of the second integral in (B.25) is uniformly bounded, and thus \(\mathbb{E}^{\tilde{\mathbb{P}}} [\mathcal{M}^{d}_{T}]=0\) as well. The two previous facts, together with the initial conditions \(H_{t}=0\) and \(\tilde{p}_{t}=\tilde{p}\), lead to

$$ {{\mathbb{E}^{\tilde{\mathbb{P}}}[M^{\pi}_{T}]\leq M_{t}^{\pi }=e^{w(t,\tilde{p}_{t},H_{t})}=e^{w(t,\tilde{p},0)}}=e^{\bar{w}(t,\tilde{p})}} $$
(B.28)

with equality if \(\pi=\widetilde{\pi}\).
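For the record, the maximization of (B.26) performed in step (i) decouples across the two components: the \(\pi^{S}\)-terms carry the factor \(z\) and the \(\pi^{P}\)-terms the factor \(1-z\), so for \(z\in\{0,1\}\) only one concave quadratic is active at a time. Ignoring any \(\pi\)-linear contribution hidden in \(\beta_{\gamma}\) (a simplification of ours), the unconstrained maximizers take the Merton-type form

$$\pi^{S,*}=\frac{\tilde{\mu}(\tilde{p})-r}{(1-\gamma)\sigma^{2}}, \qquad \pi^{P,*}=\frac{\tilde{a}(u,\tilde{p})-r}{(1-\gamma)\upsilon^{2}}, $$

which indicates the structure of the exact optimizers (5.19), (5.20); the latter also carry the filter-gradient corrections coming from \((\nabla_{\tilde{p}}\underline{w})\beta_{\gamma}\) and \((\nabla_{\tilde{p}}\bar{w})\beta_{\gamma}\).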

(ii) The rest of the proof is similar to the postdefault case. Concretely, using the fact that we have equality in (B.28) when \(\pi =\tilde{\pi}\), we get

$$\begin{aligned} e^{\bar{w}(t,\tilde{p})} =& \mathbb{E}^{\tilde{\mathbb{P}}} [M^{\tilde{\pi}}_{T}]=\mathbb{E}^{\tilde {\mathbb{P}}}\left[ e^{-\gamma\int_{t}^{T} {\tilde{\eta}(u,\tilde {p}_{u},\tilde{\pi}_{u})} \,du} e^{w(T,\tilde{p}_{T},H_{T})}\right]\\ =&\mathbb{E}^{\tilde{\mathbb{P}}} \left[ e^{-\gamma\int_{t}^{T} {\tilde{\eta}(u,\tilde{p}_{u},\tilde{\pi}_{u})} \,du}\right] \end{aligned}$$
(B.29)

since \(w(T,\tilde{p}_{T},H_{T}):=(1-H_{T})\bar{w}(T,\tilde {p}_{T})+H_{T}\underline{w}(T,\tilde{p}_{T})\equiv0\). Also, by (B.28), for every feedback control \(\pi_{s}=\pi(s,\tilde{p}_{s},H_{s})\) such that \((\pi^{S},\pi^{P})\in \bar{\mathcal{A}}(t,T;\tilde{p},0)\), we have

$$ \mathbb{E}^{\tilde{\mathbb{P}}}\left[ e^{-\gamma\int_{t}^{T} {\tilde {\eta}(u,\tilde{p}_{u},\pi_{u})}\, du} \right]=\mathbb{E}^{\tilde{\mathbb {P}}}[M^{\pi}_{T}] \leq M_{t}^{\pi} =e^{\bar{w}(t,\tilde{p})}=\mathbb{E}^{\tilde{\mathbb{P}}}\left[ e^{-\gamma\int_{t}^{T} {\tilde{\eta}(u,\tilde{p}_{u},\tilde{\pi}_{u})} \,du}\right], $$

where the last equality follows from (B.29). This proves assertions (1) and (2). □

Capponi, A., Figueroa-López, J.E. & Pascucci, A. Dynamic credit investment in partially observed markets. Finance Stoch 19, 891–939 (2015). https://doi.org/10.1007/s00780-015-0272-0
