
Robust Time-Inconsistent Stochastic Linear-Quadratic Control with Drift Disturbance

Published in: Applied Mathematics & Optimization

Abstract

This paper studies stochastic linear-quadratic control with a time-inconsistent objective and a worst-case drift disturbance. We allow the agent to introduce disturbances to reflect her uncertainty about the drift coefficient of the controlled state process. We adopt a two-step equilibrium control approach to characterize robust time-consistent controls, which preserves the order of preference. Under a general framework allowing random parameters, we derive a sufficient condition for equilibrium controls using the forward-backward stochastic differential equation approach. We also provide analytical solutions to mean-variance portfolio problems in various settings. Our empirical studies confirm that incorporating robustness improves the portfolio's performance in terms of out-of-sample Sharpe ratio.

Fig. 1


References

  1. Anderson, E.W., Hansen, L.P., Sargent, T.J.: A quartet of semigroups for model specification, robustness, prices of risk, and model detection. J. Eur. Econ. Assoc. 1(1), 68–123 (2003)

  2. Basak, S., Chabakauri, G.: Dynamic mean-variance asset allocation. Rev. Financ. Stud. 23(8), 2970–3016 (2010)

  3. Björk, T., Khapko, M., Murgoci, A.: On time-inconsistent stochastic control in continuous time. Financ. Stoch. 21(2), 331–360 (2017)

  4. Björk, T., Murgoci, A.: A theory of Markovian time-inconsistent stochastic control in discrete time. Financ. Stoch. 18(3), 545–592 (2014)

  5. Björk, T., Murgoci, A., Zhou, X.Y.: Mean-variance portfolio optimization with state-dependent risk aversion. Math. Financ. 24(1), 1–24 (2014)

  6. Ellsberg, D.: Risk, ambiguity, and the savage axioms. Q. J. Econ. 75(4), 643–669 (1961)

  7. Fouque, J.P., Pun, C.S., Wong, H.Y.: Portfolio optimization with ambiguous correlation and stochastic volatilities. SIAM J. Control Optim. 54(5), 2309–2338 (2016)

  8. Han, B., Pun, C.S., Wong, H.Y.: Robust state-dependent mean-variance portfolio selection: a closed-loop approach. Financ. Stoch. 25, 1–33 (2021)

  9. Hu, Y., Huang, J., Li, X.: Equilibrium for time-inconsistent stochastic linear-quadratic control under constraint. arXiv preprint arXiv:1703.09415 (2017)

  10. Hu, Y., Jin, H., Zhou, X.Y.: Time-inconsistent stochastic linear-quadratic control. SIAM J. Control Optim. 50(3), 1548–1572 (2012)

  11. Hu, Y., Jin, H., Zhou, X.Y.: Time-inconsistent stochastic linear-quadratic control: characterization and uniqueness of equilibrium. SIAM J. Control Optim. 55(2), 1261–1279 (2017)

  12. Huang, J., Huang, M.: Robust mean field linear-quadratic-Gaussian games with unknown \(L^2\)-disturbance. SIAM J. Control Optim. 55(5), 2811–2840 (2017)

  13. Ismail, A., Pham, H.: Robust Markowitz mean-variance portfolio selection under ambiguous covariance matrix. Math. Financ. 29(1), 174–207 (2019)

  14. Jin, H., Zhou, X.Y.: Continuous-time portfolio selection under ambiguity. Math. Control Relat. Fields 5(3), 475–488 (2015)

  15. Knight, F.H.: Risk, Uncertainty and Profit. Houghton Mifflin, New York (1921)

  16. Kobylanski, M.: Backward stochastic differential equations and partial differential equations with quadratic growth. Ann. Probab. 28(2), 558–602 (2000)

  17. Li, D., Ng, W.L.: Optimal dynamic portfolio selection: multiperiod mean-variance formulation. Math. Financ. 10(3), 387–406 (2000)

  18. Lim, A.E., Zhou, X.Y.: Mean-variance portfolio selection with random parameters in a complete market. Math. Oper. Res. 27(1), 101–120 (2002)

  19. Ma, J., Yin, H., Zhang, J.: On non-Markovian forward-backward SDEs and backward stochastic PDEs. Stoch. Process. Appl. 122(12), 3980–4004 (2012)

  20. Maenhout, P.J.: Robust portfolio rules and asset pricing. Rev. Financ. Stud. 17(4), 951–983 (2004)

  21. Moon, J., Yang, H.J.: Linear-quadratic time-inconsistent mean-field type Stackelberg differential games: time-consistent open-loop solutions. IEEE Trans. Autom. Control 66(1), 375–382 (2020)

  22. Morlais, M.A.: Quadratic BSDEs driven by a continuous martingale and applications to the utility maximization problem. Financ. Stoch. 13(1), 121–150 (2009)

  23. Peng, S.: A general stochastic maximum principle for optimal control problems. SIAM J. Control Optim. 28(4), 966–979 (1990)

  24. Pun, C.S.: Robust time-inconsistent stochastic control problems. Automatica 94, 249–257 (2018)

  25. Pun, C.S., Wong, H.Y.: Robust investment-reinsurance optimization with multiscale stochastic volatility. Insurance 62, 245–256 (2015)

  26. Richou, A.: Numerical simulation of BSDEs with drivers of quadratic growth. Ann. Appl. Probab. 21(5), 1933–1964 (2011)

  27. Richou, A.: Markovian quadratic and superquadratic BSDEs with an unbounded terminal condition. Stoch. Process. Appl. 122(9), 3173–3208 (2012)

  28. Sun, J.: Mean-field stochastic linear quadratic optimal control problems: open-loop solvabilities. ESAIM 23(3), 1099–1127 (2017)

  29. Sun, J., Li, X., Yong, J.: Open-loop and closed-loop solvabilities for stochastic linear quadratic optimal control problems. SIAM J. Control Optim. 54(5), 2274–2308 (2016)

  30. Van Den Broek, W., Engwerda, J., Schumacher, J.M.: Robust equilibria in indefinite linear-quadratic differential games. J. Optim. Theory Appl. 119(3), 565–595 (2003)

  31. Wald, A.: Statistical decision functions which minimize the maximum risk. Ann. Math. 57, 265–280 (1945)

  32. Wang, T.: Equilibrium controls in time inconsistent stochastic linear quadratic problems. Appl. Math. Optim. 81(2), 591–619 (2020)

  33. Wang, T.: On closed-loop equilibrium strategies for mean-field stochastic linear quadratic problems. ESAIM 26, 41 (2020)

  34. Wei, Q., Yu, Z.: Time-inconsistent recursive zero-sum stochastic differential games. Math. Control Relat. Fields 8(3&4), 1051 (2018)

  35. Yan, T., Han, B., Pun, C.S., Wong, H.Y.: Robust time-consistent mean-variance portfolio selection problem with multivariate stochastic volatility. Math. Financ. Econ. 14(4), 699–724 (2020). https://doi.org/10.1007/s11579-020-00271-0

  36. Yong, J.: Linear-quadratic optimal control problems for mean-field stochastic differential equations. SIAM J. Control Optim. 51(4), 2809–2838 (2013)

  37. Yong, J.: Linear-quadratic optimal control problems for mean-field stochastic differential equations-time-consistent solutions. Trans. Am. Math. Soc. 369(8), 5467–5523 (2017)

  38. Yong, J., Zhou, X.Y.: Stochastic Controls: Hamiltonian Systems and HJB Equations, vol. 43. Springer Science & Business Media, New York (1999)

  39. Zhou, X.Y., Li, D.: Continuous-time mean-variance portfolio selection: a stochastic LQ framework. Appl. Math. Optim. 42(1), 19–33 (2000)


Acknowledgements

The authors would like to thank the anonymous referee and the editors for their careful reading and valuable comments, which have greatly improved the manuscript.

Funding

The authors have not disclosed any funding.

Author information

Correspondence to Chi Seng Pun.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Bingyan Han is supported by UIC Start-up Research Fund (Reference No: R72021109). Chi Seng Pun gratefully acknowledges the Ministry of Education (MOE), AcRF Tier 2 Grant (Reference No: MOE2017-T2-1-044) for the funding of this research. Hoi Ying Wong acknowledges the support from the Research Grants Council of Hong Kong via GRF 14303915.

Appendices

Proofs of Results in Section 3

1.1 Proof of Lemma 3.5

Proof

Let k be a generic positive constant, which may vary from line to line. For \(j=1,\ldots ,d\),

$$\begin{aligned}&\Big \Vert \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon }{\mathbb {E}}^{\mathbb {P}}_t\int _t^{t+\varepsilon } \left\{ \langle \varLambda ^\varepsilon (s;t),\eta \rangle \right\} ds - \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\int _t^{t+\varepsilon } \left\{ \langle \varLambda (s;t),\eta \rangle \right\} ds \Big \Vert _2 \nonumber \\&\quad \le \Big \Vert \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon }{\mathbb {E}}^{\mathbb {P}}_t\int _t^{t+\varepsilon } \Big \{ (C^j_s X^*_s + D^j_s u^*_s + \sigma ^j_s)' p(s;t) - \frac{h^*_j(X^*_s , u^*_s, s)}{\varPhi _j (X^*_s , u^*_s, s)} \Big \} ds \nonumber \\&\qquad - \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\int _t^{t+\varepsilon } \Big \{ (C^j_s {\tilde{X}}^*_s + D^j_s u^{t,\varepsilon ,v}_s + \sigma ^j_s)' p^\varepsilon (s;t) - \frac{h^*_j({\tilde{X}}^*_s , u^{t,\varepsilon ,v}_s, s)}{\varPhi _j ({\tilde{X}}^*_s , u^{t,\varepsilon ,v}_s, s)} \Big \} ds\Big \Vert _2 \nonumber \\&\quad \le \lim _{\varepsilon \downarrow 0} \frac{k}{\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\int _t^{t+\varepsilon } \Big \{ \Big \Vert (C^j_s {\tilde{X}}^*_s)' p^\varepsilon (s;t)- (C^j_s X^*_s)' p(s;t) \Big \Vert _2 + \Big \Vert (\sigma ^j_s)'(p^\varepsilon (s;t) - p(s;t)) \Big \Vert _2 \nonumber \\&\qquad + \Big \Vert (D^j_s u_s^{t,\varepsilon ,v})' p^\varepsilon (s;t) - \frac{h^*_j ({\tilde{X}}^*_s , u^{t,\varepsilon ,v}_s, s)}{\varPhi _j ({\tilde{X}}^*_s , u^{t,\varepsilon ,v}_s, s)} - (D^j_s u^*_s)' p(s;t) + \frac{h^*_j (X^*_s , u^*_s, s)}{\varPhi _j (X^*_s , u^*_s, s)} \Big \Vert _2 \Big \} ds. \end{aligned}$$
(A.1)

Note that \(C^j,~j=1,\ldots ,d\) are essentially bounded. Then we have

$$\begin{aligned}&\lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\int _t^{t+\varepsilon } \Big \Vert (C^j_s {\tilde{X}}^*_s)' p^\varepsilon (s;t)- (C^j_s X^*_s)' p(s;t) \Big \Vert _2 ds \\&\quad \le \lim _{\varepsilon \downarrow 0} \frac{k}{\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\int _t^{t+\varepsilon } \Vert {\tilde{X}}^*_s\Vert _2 \Vert p^\varepsilon (s;t)-p(s;t)\Vert _2 + \Vert p(s;t)\Vert _2 \Vert {\tilde{X}}^*_s - X^*_s\Vert _2 ds \\&\quad \le \lim _{\varepsilon \downarrow 0} \Big ( \frac{k}{\varepsilon } \sqrt{\int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\Vert {\tilde{X}}^*_s\Vert ^2_2ds} \sqrt{\int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\Vert p^\varepsilon (s;t)-p(s;t)\Vert ^2_2ds} \\&\qquad + \frac{k}{\varepsilon } \sqrt{\int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\Vert p(s;t)\Vert ^2_2 ds} \sqrt{\int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\Vert {\tilde{X}}^*_s - X^*_s\Vert ^2_2 ds} \Big ). \end{aligned}$$

We have

$$\begin{aligned}&\frac{k}{\varepsilon } \sqrt{\int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\Vert {\tilde{X}}^*_s\Vert ^2_2 ds} \sqrt{\int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\Vert p^\varepsilon (s;t)-p(s;t)\Vert ^2_2 ds}\\&\quad \le k \sqrt{{\mathbb {E}}^{\mathbb {P}}_t\big [\sup _{s\in [t,T]} \Vert p^\varepsilon (s;t)-p(s;t)\Vert ^2_2\big ]}\sqrt{{\mathbb {E}}^{\mathbb {P}}_t\big [ \sup _{s\in [t,T]} \Vert {\tilde{X}}^*_s\Vert ^2_2 \big ]}. \end{aligned}$$

By the stability results for BSDEs (see, e.g., [38, Chapter 7, Theorem 3.3]), we have

$$\begin{aligned}&{\mathbb {E}}^{\mathbb {P}}_t\left[ \sup _{s\in [t,T]} \Vert p^\varepsilon (s;t)-p(s;t)\Vert ^2_2\right] \rightarrow 0,\quad {\mathbb {E}}^{\mathbb {P}}_t\big [\sup _{s\in [t,T]}\Vert {\tilde{X}}^*_s - X^*_s\Vert ^2_2\big ]=O(\varepsilon ), \quad \\&\quad {\mathbb {E}}^{\mathbb {P}}_t\big [ \sup _{s\in [t,T]} \Vert X^*_s\Vert ^2_2 \big ] < \infty . \end{aligned}$$

Hence, we have

$$\begin{aligned} k \sqrt{{\mathbb {E}}^{\mathbb {P}}_t\big [\sup _{s\in [t,T]} \Vert p^\varepsilon (s;t)-p(s;t) \Vert ^2_2\big ]}\sqrt{{\mathbb {E}}^{\mathbb {P}}_t\big [ \sup _{s\in [t,T]} \Vert {\tilde{X}}^*_s\Vert ^2_2 \big ]} \rightarrow 0. \end{aligned}$$

The other two terms in (A.1) can be handled in a similar manner, and the result follows. \(\square \)

1.2 Proof of Theorem 3.7

Proof

Let \(X^{t,\varepsilon ,v}\) be the state process corresponding to the pair \((u^{t,\varepsilon ,v}, h^*)\). Define \(Y\equiv Y^{t,\varepsilon ,v}\) and \(Z\equiv Z^{t,\varepsilon ,v}\) satisfying

$$\begin{aligned}&\left\{ \begin{array}{ll} dY_s=&{} \mu ^*_x (X^*_s, u^*_s, s)Y_sds + \sum _{j=1}^d[C_s^jY_s+D_s^j v{{\mathbf {1}}}_{s\in [t, t+\varepsilon )}] dW^{\mathbb {P}}_{js},\;\;s\in [t,T],\\ Y_t =&{} 0; \end{array}\right. \\&\left\{ \begin{array}{ll} dZ_s=&{} [\mu ^*_x (X^*_s, u^*_s, s) Z_s+ \delta \mu ^*(X^*_s, u^*_s, s) + \delta \mu ^*_x(X^*_s, u^*_s, s) Y_s]ds\\ &{}+ \frac{1}{2} \mu ^*_{xx} (X^*_s, u^*_s, s) Y_s Y_s ds \\ &{} + \sum _{j=1}^dC_s^jZ_sdW^{\mathbb {P}}_{js},\;\; s\in [t,T],\\ Z_t =&{} 0. \end{array}\right. \end{aligned}$$

Since Assumption 3.6 holds, by Lemma 1 in [23], we have the following moment estimates,

$$\begin{aligned} {\mathbb {E}}^{\mathbb {P}}_t\big [ \sup _{s\in [t,T]} \Vert X^{t,\varepsilon ,v}_s - X^*_s - Y^{t,\varepsilon ,v}_s - Z^{t,\varepsilon ,v}_s\Vert ^2_2 \big ]= & {} o(\varepsilon ^2), \quad \\ {\mathbb {E}}^{\mathbb {P}}_t\big [\sup _{s\in [t,T]} \Vert Y_s\Vert ^2_2\big ]= & {} O(\varepsilon ), \quad \\ {\mathbb {E}}^{\mathbb {P}}_t\big [\sup _{s\in [t,T]}\Vert Z_s\Vert ^2_2\big ]= & {} O(\varepsilon ^2). \end{aligned}$$

Furthermore, since \(\mu ^*_x (X^*_s, u^*_s, s)\) is deterministic, taking conditional expectations on both sides of the SDE for Y shows that \({\mathbb {E}}^{\mathbb {P}}_t[Y_s]\) satisfies a linear ODE whose unique solution is 0. Therefore, \({\mathbb {E}}^{\mathbb {P}}_t[Y_s]=0, \; s\in [t,T]\).
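
Spelling out the ODE argument: with \(m(s) := {\mathbb {E}}^{\mathbb {P}}_t[Y_s]\), the stochastic integral in the SDE for Y vanishes under \({\mathbb {E}}^{\mathbb {P}}_t\), so

$$\begin{aligned} m(s) = \int _t^s \mu ^*_x (X^*_v, u^*_v, v)\, m(v)\, dv, \quad s\in [t,T]. \end{aligned}$$

Since \(\mu ^*_x\) is deterministic, this is a linear equation with zero initial condition, whose unique solution is \(m \equiv 0\).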

Then, we obtain the following

$$\begin{aligned}&2[{\mathcal {L}}(t,X^*_t; u^{t,\varepsilon ,v},h^*({\tilde{X}}^*_s,u^{t,\varepsilon ,v}_s,s))-{\mathcal {L}}(t,X^*_t;u^*,h^*(X^*_s,u^*_s,s))]\\&\quad ={\mathbb {E}}^{\mathbb {P}}_t\int _t^T\Big [\langle 2 f^*_x, Y_s +Z_s \rangle + \langle f^*_{xx} (Y_s + Z_s), Y_s + Z_s \rangle \\&\qquad + \langle 2f^*_u, v \rangle {{\mathbf {1}}}_{s\in [t, t+\varepsilon )} + \langle f^*_{uu} v, v \rangle {{\mathbf {1}}}_{s\in [t, t+\varepsilon )} \Big ]ds\\&\qquad +{\mathbb {E}}^{\mathbb {P}}_t[ 2\langle GX^*_T-\nu {\mathbb {E}}^{\mathbb {P}}_t[X^*_T]-\mu _1 X_t^*-\mu _2, Y_T+Z_T \rangle ]\\&\qquad +{\mathbb {E}}^{\mathbb {P}}_t[\langle G(Y_T+Z_T),Y_T+Z_T \rangle ] + o(\varepsilon ). \end{aligned}$$

Using the definition of \(({\tilde{p}}(\cdot ; t), {\tilde{k}}(\cdot ; t))\) and \(({\tilde{P}}(\cdot ; t), {\tilde{K}}(\cdot ; t))\) in (3.7) and (3.8) and applying Itô’s lemma to the last two terms, we have

$$\begin{aligned}&{\mathbb {E}}^{\mathbb {P}}_t[\langle GX^*_T-\nu {\mathbb {E}}^{\mathbb {P}}_t[X^*_T]-\mu _1 X_t^*-\mu _2, Y_T+Z_T \rangle ]\\&\quad ={\mathbb {E}}^{\mathbb {P}}_t\int _t^T \Big [\langle - f^*_x, Y_s+Z_s \rangle + \langle (\mu ^*_u)' {\tilde{p}}(s;t)+\sum _{j=1}^d(D_s^j)'{\tilde{k}}^j(s;t), v {{\mathbf {1}}}_{s\in [t, t+\varepsilon )} \rangle \\&\qquad +\frac{1}{2} \langle \mu ^*_{xx} (X^*_s, u^*_s, s) Y_s {\tilde{p}}(s;t) , Y_s \rangle + \frac{1}{2} \langle \mu ^*_{uu}vv, {\tilde{p}}(s;t) \rangle {{\mathbf {1}}}_{s\in [t, t+\varepsilon )}\Big ]ds + o(\varepsilon ), \end{aligned}$$

and

$$\begin{aligned} {\mathbb {E}}^{\mathbb {P}}_t[\langle G(Y_T+Z_T), Y_T+Z_T \rangle ]&={\mathbb {E}}^{\mathbb {P}}_t\int _t^T\Big [ -\langle f^*_{xx}(Y_s+Z_s), Y_s+Z_s \rangle \\&\quad +\sum _{j=1}^d \langle (D_s^j)' {\tilde{P}}(s;t)D^j_s v, v \rangle {{\mathbf {1}}}_{s\in [t, t+\varepsilon )} \\&\quad - \langle \mu ^*_{xx}(Y_s+Z_s){\tilde{p}}(s;t), Y_s+Z_s \rangle \Big ]ds + o(\varepsilon ). \end{aligned}$$

After simplifications, we prove (3.11). \(\square \)

1.3 Proof of Theorem 3.9

Proof

Since \(H^\varepsilon (s;t)\preceq 0\) and \( {\tilde{H}}(s;t) \succeq 0\), from Theorems 3.2 and 3.7, \((h^*, u^*)\) is an equilibrium control pair if and only if

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\int _t^{t+\varepsilon } \varLambda ^\varepsilon (s;t)ds = 0, \quad \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\int _t^{t+\varepsilon } {\tilde{\varLambda }}(s;t)ds = 0. \end{aligned}$$

Using the notation in (3.13) and (3.14), we only need to show

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\int _t^{t+\varepsilon } \lambda (s;t)ds = 0. \end{aligned}$$
(A.2)

The first step is to prove

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\int _t^{t+\varepsilon } \lambda (s;t)ds = \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\int _t^{t+\varepsilon } \lambda (s;s)ds. \end{aligned}$$
(A.3)

The proof is analogous to that of [11, Proposition 3.3]. Define \(\varPsi (\cdot )\) as the solution of

$$\begin{aligned} d\varPsi (s) = \varPsi (s) \alpha '_sds, \; \varPsi (T) = I_n. \end{aligned}$$
(A.4)

Here, \(I_n\) is the \(n\times n\) identity matrix. Since \(\alpha _s\) is bounded and deterministic, \(\varPsi (\cdot )\) is invertible, and both \(\varPsi (\cdot )\) and \(\varPsi ^{-1}(\cdot )\) are bounded.
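
As a quick numerical sanity check on (A.4), the sketch below (assuming, purely for illustration, a constant diagonal matrix in place of \(\alpha '_s\); the paper only requires \(\alpha _s\) to be bounded and deterministic) integrates the terminal-value ODE backward with an Euler scheme and confirms that \(\varPsi \) matches the closed form and stays invertible:

```python
import numpy as np

# Hypothetical constant, diagonal stand-in for alpha'_s.
A = np.diag([0.3, -0.5])
T, n_steps = 1.0, 10_000
dt = T / n_steps

# Integrate d Psi(s) = Psi(s) A ds backward from the terminal condition Psi(T) = I_2.
Psi = np.eye(2)
for _ in range(n_steps):
    Psi = Psi - Psi @ A * dt  # Euler step from s down to s - dt

# For constant diagonal A, the exact solution at s = 0 is Psi(0) = exp(-A T).
closed_form = np.diag(np.exp(-np.diag(A) * T))
err = np.abs(Psi - closed_form).max()
```

Invertibility is visible here since \(\det \varPsi (s) > 0\) along the whole trajectory; in general it follows from Liouville's formula, as \(\det \varPsi \) solves a linear ODE driven by \(\mathrm {tr}(\alpha '_s)\) and so never vanishes.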

Denote \({\hat{p}}(s;t) = \varPsi (s) p(s;t) + \nu {\mathbb {E}}^{\mathbb {P}}_t[X^*_T] + \mu _1 X_t^* + \mu _2\) and \({\hat{k}}^j(s;t) = \varPsi (s) k^j(s;t)\). Then

$$\begin{aligned} \left\{ \begin{array}{ll} d{\hat{p}} (s;t) &{}= -\Big [ \sum _{j=1}^d \varPsi (s) (\beta _s^j)' \varPsi ^{-1}(s) \hat{k}^j(s;t) + \varPsi (s)\gamma _s \Big ]ds \\ &{}\quad + \sum _{j=1}^d {\hat{k}}^j(s;t)dW^{\mathbb {P}}_{js},\;\;s\in [t,T],\\ {\hat{p}}(T;t) &{}= G X^*_T \end{array}\right. \end{aligned}$$

has a unique solution that does not depend on t, so we may write \({\hat{p}}(s;t) = {\hat{p}}(s)\) and \({\hat{k}}^j(s;t) = {\hat{k}}^j(s)\). Therefore, \(p(s;t) = \varPsi ^{-1}(s){\hat{p}}(s) + \varPsi ^{-1}(s) w_t\), where \(w_t = - \nu {\mathbb {E}}^{\mathbb {P}}_t[X^*_T] - \mu _1 X_t^* - \mu _2\). It follows that \(\lambda (s;t) = f_1(s) + f_2(s) w_t\), where

$$\begin{aligned}&f_1(s) \!=\! \lambda _0(s)\varPsi ^{-1}(s){\hat{p}}(s) \!+\! \sum ^d_{j=1} (\lambda ^j_s)' \varPsi ^{-1}(s)\hat{k}^j(s) \!+\! \lambda _{d+1}(s),\quad \!\! f_2(s) \!=\! \lambda _0(s) \varPsi ^{-1}(s). \end{aligned}$$

Since \({\mathbb {E}}^{\mathbb {P}}_t\left[ \sup _{s\in [t,T]} \Vert f_2(s)\Vert ^2_2 \right] < \infty \), we have

$$\begin{aligned}&\lim _{\varepsilon \downarrow 0} {\mathbb {E}}^{\mathbb {P}}_t\left[ \frac{1}{\varepsilon } \int _t^{t+\varepsilon } \Vert f_2(s) (w_s - w_t) \Vert _2 ds \right] \\&\quad \le \lim _{\varepsilon \downarrow 0} \sqrt{ \frac{1}{\varepsilon } \int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t[\Vert f_2(s)\Vert ^2_2] ds} \sqrt{\frac{1}{\varepsilon } \int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\Vert w_s - w_t\Vert ^2_2 ds} \\&\quad \le \lim _{\varepsilon \downarrow 0} \sqrt{{\mathbb {E}}^{\mathbb {P}}_t\left[ \sup _{s\in [t,T]} \Vert f_2(s)\Vert ^2_2 \right] } \sqrt{\frac{1}{\varepsilon } \int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t\Vert w_s - w_t\Vert ^2_2 ds} = 0. \end{aligned}$$

Then (A.3) is proved.

For the equivalence between (3.12) and (A.2), we note that

  • If (3.12) is true, then \({\mathbb {E}}^{\mathbb {P}}_t[\lambda (s;s)] = 0\). Therefore,

    $$\begin{aligned} \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } \int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t[\lambda (s;t)]ds = \lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } \int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t[\lambda (s;s)]ds = 0. \end{aligned}$$
  • If \(\lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } \int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t[\lambda (s;t)]ds = 0\), then \(\lim _{\varepsilon \downarrow 0} \frac{1}{\varepsilon } \int _t^{t+\varepsilon } {\mathbb {E}}^{\mathbb {P}}_t[\lambda (s;s)]ds = 0\). By [9, Lemma 3.5] (the stochastic Lebesgue differentiation theorem), we have (3.12).

\(\square \)

Proofs of Results in Section 4

1.1 Proof of Lemma 4.3

Proof

With a slight abuse of notation, suppose there is another solution pair \((X, h)\) and let the corresponding adjoint process be \(p(s;t)\). Note that \( X_T - {\mathbb {E}}^{\mathbb {P}}_t[ X_T] - \mu _2 \in L^2_{{{\mathcal {F}}}_T}(\varOmega ; {\mathbb {R}}, {\mathbb {P}})\); by [38, Chapter 7, Theorem 2.2], we still have a unique adapted solution

$$\begin{aligned} p(s;t) = {\mathbb {E}}^{\mathbb {P}}[e^{\int ^T_s r_vdv} (X_T - {\mathbb {E}}^{\mathbb {P}}_t[X_T] - \mu _2) | {{\mathcal {F}}}_s]. \end{aligned}$$
(B.1)

Since \(r\) is deterministic, evaluating (B.1) at \(s=t\) gives \(p(t;t)= e^{\int ^T_t r_v dv}({\mathbb {E}}^{\mathbb {P}}_t[X_T] - {\mathbb {E}}^{\mathbb {P}}_t[X_T] - \mu _2) = -\mu _2 e^{\int ^T_t r_s ds}\), and we recover the same solution \( h_t = - \xi e^{\int ^T_t r_s ds} u^*_t = h^*_t\). \(\square \)

1.2 Proof of Lemma 4.5

Proof

Suppose there is another solution pair \((X, {\bar{u}})\), and \({\bar{u}}\) is admissible. Let

$$\begin{aligned} {\bar{p}}(s;t)&= {\tilde{p}}(s;t) - \Big [ {\tilde{M}}_s X_s + {\tilde{\varGamma }}_s^{(2)} - {\mathbb {E}}^{\mathbb {P}}_t[{\tilde{M}}_s X_s +{\tilde{\varGamma }}_s^{(3)}] \Big ],\quad \\ {\bar{k}}(s;t)&={\tilde{k}}(s;t) - \Big [ X_s {\tilde{U}}_s + {\tilde{M}}_s {\bar{u}}_s + {\tilde{\gamma }}^{(2)}_s \Big ]. \end{aligned}$$

By \({\tilde{\varLambda }}(t;t)=0\), we derive

$$\begin{aligned} {\bar{u}}_t&= - (2\alpha _t {\bar{p}}(t;t) + \alpha _t ({\tilde{\varGamma }}^{(2)}_t - {\tilde{\varGamma }}^{(3)}_t) + {\tilde{M}}_t )^{-1} \\&\quad \Big [ \theta _t {\bar{p}}(t;t) + \theta _t ({\tilde{\varGamma }}^{(2)}_t - {\tilde{\varGamma }}^{(3)}_t) + {\bar{k}}(t;t)+{\tilde{\gamma }}_t^{(2)} \Big ]. \end{aligned}$$

Then

$$\begin{aligned} d[{\tilde{M}}_sX_s] = [ -r{\tilde{M}}X + {\tilde{M}}(\theta +\alpha {\bar{u}})' {\bar{u}}]ds + {\tilde{M}} {\bar{u}}'dW^{\mathbb {P}}_s. \end{aligned}$$

Finally,

$$\begin{aligned} d {\bar{p}}(s;t)&= d {\tilde{p}}(s;t) - d \Big [ {\tilde{M}}_s X_s+{\tilde{\varGamma }}_s^{(2)} - {\mathbb {E}}^{\mathbb {P}}_t[{\tilde{M}}_s X_s + {\tilde{\varGamma }}_s^{(3)}] \Big ] \\&= -r {\bar{p}}(s;t) ds + {\bar{k}}(s;t)'dW^{\mathbb {P}}_s \\&\quad - [ {\tilde{M}}(\theta +\alpha {\bar{u}})' {\bar{u}} - {\tilde{M}}(\theta +\alpha u^*)' u^*]ds \\&\quad + {\mathbb {E}}^{\mathbb {P}}_t[ {\tilde{M}}(\theta +\alpha {\bar{u}})' {\bar{u}} - {\tilde{M}}(\theta +\alpha u^*)' u^*] ds. \end{aligned}$$

Taking conditional expectations on both sides and noting that \(r_s\) is deterministic, we obtain \({\bar{p}}(t;t)=0\). Hence,

$$\begin{aligned} {\bar{u}}_t= & {} - [\alpha _t ({\tilde{\varGamma }}^{(2)}_t - {\tilde{\varGamma }}^{(3)}_t) + {\tilde{M}}_t ]^{-1} \Big [ \theta _t ({\tilde{\varGamma }}^{(2)}_t - {\tilde{\varGamma }}^{(3)}_t) + {\bar{k}}(t;t)+{\tilde{\gamma }}_t^{(2)} \Big ] \\= & {} u^*_t - [\alpha _t ({\tilde{\varGamma }}^{(2)}_t - {\tilde{\varGamma }}^{(3)}_t) + {\tilde{M}}_t ]^{-1} {\bar{k}}(t;t). \end{aligned}$$

Hence, \({\bar{k}}(t;t)\) must be essentially bounded, and

$$\begin{aligned} d {\bar{p}}(s;t)&= -r {\bar{p}}(s;t) ds + {\bar{k}}(s;t)'dW^{\mathbb {P}}_s \\&\quad + (\alpha ({\tilde{\varGamma }}^{(2)}_s - {\tilde{\varGamma }}^{(3)}_s) + {\tilde{M}})^{-1} {\tilde{M}} {\bar{k}}(s;s)'( \theta + 2 \alpha u^*) ds\\&\quad -\alpha {\tilde{M}} (\alpha ({\tilde{\varGamma }}^{(2)}_s - {\tilde{\varGamma }}^{(3)}_s) + {\tilde{M}})^{-2} {\bar{k}}(s;s)'{\bar{k}}(s;s) ds \\&\quad - {\mathbb {E}}^{\mathbb {P}}_t\Big [ (\alpha ({\tilde{\varGamma }}^{(2)}_s - {\tilde{\varGamma }}^{(3)}_s) + {\tilde{M}})^{-1} {\tilde{M}} {\bar{k}}(s;s)'(\theta + 2 \alpha u^*) \Big ] ds \\&\quad + {\mathbb {E}}^{\mathbb {P}}_t\Big [ \alpha {\tilde{M}} (\alpha ({\tilde{\varGamma }}^{(2)}_s - {\tilde{\varGamma }}^{(3)}_s) + {\tilde{M}})^{-2} {\bar{k}}(s;s)'{\bar{k}}(s;s) \Big ] ds. \end{aligned}$$

The existence of a solution is obvious. For uniqueness, consider \(({\bar{p}}(\cdot ;t), {\bar{k}}(\cdot ; t) )\) in the space \(L^2_{{\mathcal {F}}}(\varOmega ; \,C(t,T;{\mathbb {R}}), {\mathbb {P}}) \times L^\infty _{{\mathcal {F}}}(t, T;\, {\mathbb {R}}^d)\).

Without loss of generality, let \(r=0\). As \({\bar{k}}(s;t)\) does not depend on t, we denote \({\bar{k}}(s) = {\bar{k}}(s;s) = {\bar{k}}(s;t)\). Moreover, we introduce essentially bounded processes \(a_1(s)\) and \(a_2(s)\) to rewrite \(d{\bar{p}}(s;t)\) as follows.

$$\begin{aligned} d {\bar{p}}(s;t)&= {\bar{k}}(s)'dW^{\mathbb {P}}_s + a'_1(s) {\bar{k}}(s) ds - a_2(s) {\bar{k}}(s)'{\bar{k}}(s) ds \\&\quad - {\mathbb {E}}^{\mathbb {P}}_t\big [ a'_1(s) {\bar{k}}(s) \big ] ds + {\mathbb {E}}^{\mathbb {P}}_t\big [ a_2(s) {\bar{k}}(s)'{\bar{k}}(s) \big ] ds. \end{aligned}$$

Define

$$\begin{aligned} p_0(s;t) = {\bar{p}}(s;t) - \int ^T_s {\mathbb {E}}^{\mathbb {P}}_t\big [ a'_1(v) {\bar{k}}(v) \big ] dv + \int ^T_s {\mathbb {E}}^{\mathbb {P}}_t\big [ a_2(v) {\bar{k}}(v)'{\bar{k}}(v) \big ] dv. \end{aligned}$$

Then

$$\begin{aligned}&d p_0 (s;t) = a'_1(s) {\bar{k}}(s) ds - a_2(s) {\bar{k}}(s)'{\bar{k}}(s) ds + {\bar{k}}(s)'dW^{\mathbb {P}}_s, \quad p_0(T;t) = 0. \end{aligned}$$

As \({\bar{p}}(\cdot ;t) \in L^2_{{\mathcal {F}}}(\varOmega ; \,C(t,T;{\mathbb {R}}), {\mathbb {P}})\), \(p_0(\cdot ;t) \in L^2_{{\mathcal {F}}}(\varOmega ; \,C(t,T;{\mathbb {R}}), {\mathbb {P}})\).

Suppose there are two solutions \((p^{(1)}_0 , {\bar{k}}^{(1)})\) and \((p^{(2)}_0 , {\bar{k}}^{(2)})\). Denote \(p_\varDelta (s;t) = p^{(1)}_0(s;t) - p^{(2)}_0(s;t)\), \(k_\varDelta (s) = {\bar{k}}^{(1)}(s) - {\bar{k}}^{(2)}(s)\), then

$$\begin{aligned} dp_\varDelta (s;t) = \Big [ a'_1(s)k_\varDelta (s) - a_2(s) k'_\varDelta (s) \big ({\bar{k}}^{(1)}(s) + {\bar{k}}^{(2)}(s) \big ) \Big ] ds + k'_\varDelta (s) dW^{\mathbb {P}}_s, \quad p_\varDelta (T; t)=0. \end{aligned}$$

Applying Itô’s lemma to \(\Vert p_\varDelta (s;t)\Vert ^2_2\) on s, taking expectation, and noting that \({\bar{k}}^{(1)}(s)\) and \({\bar{k}}^{(2)}(s)\) are essentially bounded, we have

$$\begin{aligned} {\mathbb {E}}^{\mathbb {P}}\Big [ \Vert p_\varDelta (s;t)\Vert ^2_2 \Big ] + {\mathbb {E}}^{\mathbb {P}}\Big [ \int ^T_s \Vert k_\varDelta (v) \Vert ^2_2 dv \Big ]&\le C {\mathbb {E}}^{\mathbb {P}}\Big [ \int ^T_s \Vert p_\varDelta (v; t)\Vert _2 \Vert k_\varDelta (v) \Vert _2 dv \Big ]\\&\le C{\mathbb {E}}^{\mathbb {P}}\Big [ \int ^T_s \Vert p_\varDelta (v; t) \Vert ^2_2 dv \Big ] \\&\quad + \frac{1}{2} {\mathbb {E}}^{\mathbb {P}}\Big [ \int ^T_s \Vert k_\varDelta (v)\Vert ^2_2 dv \Big ]. \end{aligned}$$
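
Absorbing the last term on the right into the left-hand side yields the integral inequality

$$\begin{aligned} \phi (s) \le C \int ^T_s \phi (v)\, dv, \qquad \phi (v) := {\mathbb {E}}^{\mathbb {P}}\big [ \Vert p_\varDelta (v;t)\Vert ^2_2 \big ], \end{aligned}$$

which is the backward form to which Gronwall's inequality applies.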

By Gronwall’s inequality, \({\mathbb {E}}^{\mathbb {P}}\big [ \Vert p_\varDelta (s;t)\Vert ^2_2 \big ] = 0\). Therefore, \({\bar{p}}(s;t) = 0,~{\bar{k}}(s;t) = 0\). \(\square \)

1.3 Proof of Proposition 4.7

Proof

A direct calculation shows

$$\begin{aligned} \tilde{F}^{(2)}_t= & {} r_t {\tilde{\varGamma }}^{(2)}_t + \frac{\xi \mu _2 - 1}{(\xi \mu _2 + 1)^2} \theta '_t {\tilde{\gamma }}^{(2)}_t - \frac{\xi }{(\xi \mu _2 + 1)^2} e^{-\int ^T_t r_s ds} \Vert {\tilde{\gamma }}^{(2)}_t \Vert ^2_2 \nonumber \\&+ \frac{\mu _2}{(\xi \mu _2 + 1)^2} e^{\int ^T_t r_s ds} \Vert \theta _t\Vert ^2_2. \end{aligned}$$
(B.2)

Since \(\theta \) is essentially bounded, we can introduce a new probability measure \({\tilde{{\mathbb {Q}}}}\) under which \({\tilde{W}}\) is a standard Brownian motion, where

$$\begin{aligned} {\tilde{W}}_t \triangleq - \int ^t_0 \frac{\xi \mu _2 - 1}{(\xi \mu _2 + 1)^2} \theta _s ds + W^{\mathbb {P}}_t. \end{aligned}$$
(B.3)

Under \({\tilde{{\mathbb {Q}}}}\), the driver of \({\tilde{\varGamma }}^{(2)}_t\) has no cross term \(\theta '_t {\tilde{\gamma }}^{(2)}_t\) and

$$\begin{aligned} d\vartheta _s = \Big [b_{\vartheta }(s, \vartheta _s) + \frac{\xi \mu _2 - 1}{(\xi \mu _2 + 1)^2} \sigma _{\vartheta }(s) \theta _s \Big ]ds + \sigma _{\vartheta }(s)d{\tilde{W}}_s. \end{aligned}$$
(B.4)
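
As a sanity check on the change of measure (B.3), the following Monte Carlo sketch (with hypothetical constant values standing in for \(\xi \), \(\mu _2\), \(\theta \)) verifies that the Doléans-Dade density of the drift shift has unit expectation, which is what makes \({\tilde{{\mathbb {Q}}}}\) a probability measure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical constants standing in for the paper's coefficients.
xi, mu2, theta = 0.5, 1.0, 0.4
lam = (xi * mu2 - 1.0) / (xi * mu2 + 1.0) ** 2 * theta  # Girsanov drift in (B.3)

T, paths = 1.0, 200_000
# With a constant drift, int_0^T lam dW = lam * W_T, so one Gaussian draw per path suffices.
W_T = rng.standard_normal(paths) * np.sqrt(T)

# Doleans-Dade exponential Z_T = exp(lam * W_T - 0.5 * lam**2 * T).
# Novikov's condition holds trivially for bounded lam, so E[Z_T] = 1.
Z = np.exp(lam * W_T - 0.5 * lam**2 * T)
mc_mean = Z.mean()  # close to 1 up to Monte Carlo error
```

The same exponential-martingale argument justifies the later measure change in the proof of Lemma 4.14, where boundedness is replaced by the BMO property of the integrand.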

Then it is straightforward to verify the conditions in [27, Theorem 2.5]. The Lipschitz constant in [27, Assumption (F.1)] is \(K_b + \frac{|\xi \mu _2 - 1|\Vert \sigma _\vartheta \Vert _\infty }{(\xi \mu _2 + 1)^2}\). We can take constants in [27, Assumption (B.1)] as follows.

$$\begin{aligned} K_{f,y} = \Vert r\Vert _\infty , \quad l = 1, \quad \alpha = 0, \quad \beta = \frac{2\mu _2}{(\xi \mu _2 +1)^2} e^{\int ^T_0 r_s ds}, \quad \gamma = \frac{2\xi }{(\xi \mu _2 +1)^2}.\nonumber \\ \end{aligned}$$
(B.5)

In particular, [27, Assumption (B.1)(4)] becomes Condition (2) above. Therefore, we can apply [27, Theorem 2.5] under the measure \({\tilde{{\mathbb {Q}}}}\). Moreover, \({\tilde{\gamma }}^{(2)}_t\) is essentially bounded since \(\theta _t\) remains essentially bounded under \({\tilde{{\mathbb {Q}}}}\). \(\square \)

1.4 Proof of Proposition 4.8

Proof

The proof is a direct application of [27, Proposition 3.1]. With the truncation function \(\rho _M(\cdot )\) in [27, Proposition 3.1], our truncated driver (corresponding to \(f_M\) in [27, Proposition 3.1]) is Lipschitz in \(\vartheta \) with constant \(\frac{|\xi \mu _2 - 1|}{(\xi \mu _2 + 1)^2} M + \frac{2\mu _2}{(\xi \mu _2 + 1)^2} e^{\int ^T_0 r_s ds} ( C_\theta + 1) \), Lipschitz in \({\tilde{\gamma }}^{(2)}\) with constant \(\frac{|\xi \mu _2 - 1|}{(\xi \mu _2 + 1)^2} ( C_\theta + 1) + \frac{2\xi }{(\xi \mu _2 + 1)^2} M\), and Lipschitz in \({\tilde{\varGamma }}^{(2)}\) with constant \( \Vert r\Vert _\infty \). The remaining proof follows exactly from [27, Proposition 3.1]. \(\square \)

1.5 Proof of Lemma 4.13

Proof

Suppose there is another solution pair \((X, h)\) with corresponding adjoint process \(p(s;t)\). With the same idea as in the proof of Lemma 4.3, we have a unique adapted solution

$$\begin{aligned} p(s;t) = {\mathbb {E}}^{\mathbb {P}}[e^{\int ^T_s r_vdv} (X_T - {\mathbb {E}}^{\mathbb {P}}_t[X_T] - \mu _1 X^*_t) | {{\mathcal {F}}}_s]. \end{aligned}$$
(B.6)

Therefore, we have \(h_t = h^*_t\). \(\square \)

1.6 Proof of Lemma 4.14

Proof

Let \(J=\frac{{\tilde{M}}}{{\tilde{N}}}\) and \(K=\frac{J}{{\tilde{M}}} {\tilde{U}}- \frac{J^2}{{\tilde{M}}} {\tilde{V}}\). Then, to prove the existence and uniqueness of \(({\tilde{M}}, {\tilde{U}})\) and \(({\tilde{N}}, {\tilde{V}})\), it suffices to prove the existence and uniqueness of \(({\tilde{M}}, {\tilde{U}})\) and \((J, K)\). It is easy to show

$$\begin{aligned} \left\{ \begin{array}{rcl} d{\tilde{M}}_s&{}=&{} \big [-2(r+\alpha ){\tilde{M}} + \theta '\theta (1-\frac{1}{J}) {\tilde{M}} + \frac{\alpha ^2}{\xi } - \theta '\theta {\tilde{\varGamma }} \big ] ds \\ &{}&{}+ \big [ (2-\frac{1}{J}- \frac{\tilde{\varGamma }}{\tilde{M}} )\theta '{\tilde{U}} + \frac{{\tilde{U}}' {\tilde{U}}}{{\tilde{M}}} \big ]ds \\ &{} &{} + {\tilde{U}}_s 'dW_s^{\mathbb {P}},\quad {\tilde{M}}_T=1,\\ dJ_s&{}=&{}\big [\frac{\alpha ^2}{\xi } \frac{J}{{\tilde{M}}} + (1-\frac{1}{J}-\frac{{\tilde{\varGamma }}}{{\tilde{M}}}) \theta 'K + \frac{K'K}{J} \big ]ds + K_s' dW_s^{\mathbb {P}},\quad J_T=1. \end{array}\right. \end{aligned}$$
(B.7)
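Although (B.7) is a fully coupled quadratic BSDE system, it is instructive to see it in the special case of deterministic (here constant) coefficients, where the martingale integrands \({\tilde{U}}\) and \(K\) vanish and (B.7) reduces to a pair of backward ODEs. A minimal backward-Euler sketch; all parameter values are hypothetical, chosen only for illustration:

```python
import math

# Backward Euler integration of (B.7) under constant coefficients,
# where the martingale terms vanish and the system becomes two ODEs.
# All parameter values below are hypothetical, for illustration only.
r, alpha, theta2, xi, Gamma = 0.03, 0.05, 0.09, 2.0, 0.5  # theta2 = theta'theta
T, n = 1.0, 10_000
ds = T / n

M, J = 1.0, 1.0  # terminal conditions: M_T = J_T = 1
for _ in range(n):
    drift_M = (-2.0 * (r + alpha) * M + theta2 * (1.0 - 1.0 / J) * M
               + alpha**2 / xi - theta2 * Gamma)
    drift_J = (alpha**2 / xi) * (J / M)
    # step backward in time: X_{s - ds} = X_s - drift * ds
    M -= drift_M * ds
    J -= drift_J * ds
```

For these values \({\tilde{M}}\) stays bounded away from zero and \(J\) stays in \((0, 1]\), consistent with the lower bounds established in the proof.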

We consider \({\tilde{M}}^c = {\tilde{M}}\vee c\) and \( J^c = J \vee c\), where \(c\le 1\) is a constant, and denote the corresponding diffusion terms by \({\tilde{U}}^c\) and \(K^c\). Then,

$$\begin{aligned} \left\{ \begin{array}{rcl} d{\tilde{M}}^c_s&{}=&{} \big [-2(r+\alpha ){\tilde{M}}^c + \theta '\theta (1 -\frac{1}{J^c}) {\tilde{M}}^c + \frac{\alpha ^2}{\xi } - \theta '\theta {\tilde{\varGamma }} \big ]ds \\ &{}&{}+ \big [ (2-\frac{1}{J^c}- \frac{\tilde{\varGamma }}{\tilde{M}^c} )\theta '{\tilde{U}}^c + \frac{({\tilde{U}}^c)' {\tilde{U}}^c}{{\tilde{M}}^c} \big ]ds \\ &{} &{} + ({\tilde{U}}^c_s) 'dW_s^{\mathbb {P}},\quad {\tilde{M}}^c_T=1,\\ dJ^c_s&{}=&{}\big [ \frac{\alpha ^2 }{\xi } \frac{J^c}{{\tilde{M}}^c} + (1-\frac{1}{J^c}-\frac{{\tilde{\varGamma }}}{{\tilde{M}}^c}) \theta 'K^c + \frac{(K^c)'K^c}{J^c} \big ]ds + (K^c_s)' dW_s^{\mathbb {P}},\quad J^c_T=1. \end{array}\right. \end{aligned}$$
(B.8)

(B.8) is a standard quadratic BSDE system, so there exist solution pairs \(({\tilde{M}}^c, {\tilde{U}}^c) \in L^\infty _{{\mathcal {F}}}(0,T;{\mathbb {R}}) \times L^2_{{\mathcal {F}}}(0,T;{\mathbb {R}}^d, {\mathbb {P}})\) and \((J^c, K^c) \in L^\infty _{{\mathcal {F}}}(0,T;{\mathbb {R}}) \times L^2_{{\mathcal {F}}}(0,T;{\mathbb {R}}^d, {\mathbb {P}})\); moreover, \({\tilde{U}}^c\cdot W^{\mathbb {P}}\) and \(K^c\cdot W^{\mathbb {P}}\) are BMO martingales; see [16, 22]. Since \(1-\frac{1}{J^c}\le 0\), the comparison principle for quadratic BSDEs in [16] gives \({\tilde{M}}^c \ge {\hat{M}}^c\), where \({\hat{M}}^c\) is the solution to the following BSDE:

$$\begin{aligned} d{\hat{M}}^c_s= & {} \bigg [ -2(r+\alpha ){\hat{M}}^c+ \frac{\alpha ^2}{\xi } +\left( 2-\frac{1}{J^c}- \frac{\tilde{\varGamma }}{\hat{M}^c} \right) \theta ' {\hat{U}}^c + \frac{({\hat{U}}^c)' {\hat{U}}^c}{\hat{M}^c} \bigg ]ds\nonumber \\&+ ({\hat{U}}^c_s) 'dW_s^{\mathbb {P}},\quad {\hat{M}}^c_T=1. \end{aligned}$$
(B.9)

Since \(\big [(2-\frac{1}{J^c}- \frac{\tilde{\varGamma }}{\hat{M}^c})\theta + \frac{{\hat{U}}^c}{\hat{M}^c}\big ]\cdot W^{\mathbb {P}}\) is a BMO martingale, we can introduce a new probability measure \({\mathbb {Q}}\) and, under \({\mathbb {Q}}\), define a new Brownian motion \(W^{\mathbb {Q}}_s\) by \(W^{\mathbb {Q}}_s = W^{\mathbb {P}}_s + \int ^s_0 \big [ (2-\frac{1}{J^c}- \frac{\tilde{\varGamma }}{\hat{M}^c} )\theta + \frac{{\hat{U}}^c}{\hat{M}^c} \big ] dt\). Then

$$\begin{aligned} \hat{M}^c_s= & {} {\mathbb {E}}^{\mathbb {Q}}_s \big [ e^{2\int ^T_s (r_t+\alpha _t) dt} - \int ^T_s \frac{\alpha ^2_v}{\xi } e^{2\int ^v_s (r_t+\alpha _t) dt} dv \big ] \\= & {} e^{2\int ^T_s r_t dt}\big [ e^{2\int ^T_s \alpha _t dt} - \mu ^2_1 \xi \int ^T_s e^{2\int ^v_s \alpha _t dt} dv \big ] \ge {\underline{l}} > 0. \end{aligned}$$

Therefore, \({\tilde{M}}^c \ge {\underline{l}}\), where \({\underline{l}}\) does not depend on c. By the comparison principle for quadratic BSDEs in [16], since \(\frac{\alpha ^2}{\xi {\tilde{M}}^c} \le \frac{\alpha ^2}{\xi {\underline{l}}}\), we have \(J^c_s \ge \exp (-\int ^T_s\frac{\alpha ^2_t}{\xi {\underline{l}}} dt)\). This lower bound also does not depend on c. Finally, choosing a constant \(0<c< \min \big \{\exp (-\int ^T_0 \frac{\alpha ^2_t}{\xi {\underline{l}}}dt), \;\; {\underline{l}}\big \}\), we have \({\tilde{M}} = {\tilde{M}}^c\), \( J =J^c\), \({\tilde{U}} = {\tilde{U}}^c\), and \(K = K^c\). The existence of solutions is guaranteed.
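For constant coefficients, the lower bound \({\underline{l}}\) and the resulting admissible truncation level \(c\) can be evaluated explicitly from the closed-form expression for \({\hat{M}}^c_s\) above. A minimal sketch; all parameter values are hypothetical:

```python
import math

# Evaluating the lower bound underline-l and the truncation level c
# for constant coefficients; all parameter values are hypothetical.
r, alpha, xi, T = 0.03, 0.05, 2.0, 1.0

def M_lower(s):
    # e^{2r(T-s)} [ e^{2*alpha*(T-s)} - (alpha^2/xi) * int_s^T e^{2*alpha*(v-s)} dv ]
    tau = T - s
    integral = (math.exp(2.0 * alpha * tau) - 1.0) / (2.0 * alpha)
    return math.exp(2.0 * r * tau) * (
        math.exp(2.0 * alpha * tau) - (alpha**2 / xi) * integral)

# underline-l: minimum of the bound over a grid of s in [0, T]
l_bar = min(M_lower(i * T / 1000) for i in range(1001))

# lower bound for J^c, and a constant c strictly below both bounds
J_bound = math.exp(-alpha**2 * T / (xi * l_bar))
c = 0.5 * min(J_bound, l_bar)
```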

Next, we prove the uniqueness. Since \({\tilde{M}}, J \ge c >0\), define \(Y=\frac{1}{{\tilde{M}}}, Z=-\frac{{\tilde{U}}}{{\tilde{M}}^2}, G = \frac{1}{J}\), \(E=-\frac{K}{J^2}\). Then \(Y\) and \(G\) are essentially bounded, and

$$\begin{aligned} \left\{ \begin{array}{rcl} dY_s &{}=&{}\big [ [2(r+\alpha )-\theta '\theta ] Y + \theta '\theta YG +(\theta '\theta {\tilde{\varGamma }} - \frac{\alpha ^2}{\xi })Y^2 \big ]ds+\big [ 2\theta 'Z-\theta 'ZG-\theta 'ZY{\tilde{\varGamma }} \big ]ds \\ &{} &{} + Z'_s dW^{\mathbb {P}}_s,\quad Y_T=1,\\ dG_s&{}=&{}\big [ -\frac{\alpha ^2}{\xi } YG + \theta 'E-\theta 'EG- \theta 'E Y \tilde{\varGamma } \big ]ds + E_s' dW^{\mathbb {P}}_s,\quad G_T=1. \end{array}\right. \nonumber \\ \end{aligned}$$
(B.10)

Suppose there are two solutions \((Y^{(1)},Z^{(1)}),(G^{(1)},E^{(1)})\) and \((Y^{(2)},Z^{(2)}),(G^{(2)},E^{(2)})\). Denote \({\bar{Y}} = Y^{(1)} - Y^{(2)}, {\bar{Z}} = Z^{(1)} - Z^{(2)}, {\bar{G}} = G^{(1)} - G^{(2)}, {\bar{E}} = E^{(1)} - E^{(2)}\). Then

$$\begin{aligned} \left\{ \begin{array}{rcl} d{\bar{Y}}_s &{}=&{} \big [[2(r+\alpha )-\theta '\theta ] {\bar{Y}} + \theta '\theta [Y^{(1)}{\bar{G}}+G^{(2)}{\bar{Y}}] \big ]ds + (\theta '\theta {\tilde{\varGamma }} - \frac{\alpha ^2}{\xi }) {\bar{Y}}[Y^{(1)}+Y^{(2)}] ds\\ &{}&{}+\big [2\theta ' {\bar{Z}} - \theta '[Z^{(1)}{\bar{G}}+G^{(2)}{\bar{Z}}] \big ]ds - \theta '{\tilde{\varGamma }} [Z^{(1)}{\bar{Y}} + Y^{(2)}{\bar{Z}}]ds + {\bar{Z}}'_s dW^{\mathbb {P}}_s,\qquad {\bar{Y}}_T=0,\\ d{\bar{G}}_s&{}=&{} \big [ -\frac{\alpha ^2}{\xi } [Y^{(1)}{\bar{G}}+G^{(2)}{\bar{Y}}] + \theta '{\bar{E}}-\theta '[E^{(1)}{\bar{G}}+G^{(2)}{\bar{E}}] \big ]ds - \tilde{\varGamma } \theta ' [Y^{(1)}{\bar{E}} + E^{(2)}{\bar{Y}}]ds \\ &{} &{} + {\bar{E}}_s' dW^{\mathbb {P}}_s,\quad {\bar{G}}_T=0. \end{array}\right. \nonumber \\ \end{aligned}$$
(B.11)

Applying Itô’s lemma to \(\Vert {\bar{G}}_s\Vert ^2_2+\Vert {\bar{Y}}_s\Vert ^2_2\) and taking conditional expectation, we have

$$\begin{aligned}&\Vert {\bar{G}}_s\Vert ^2_2 + {\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert {\bar{E}}_v \Vert ^2_2 dv \Big ] + \Vert {\bar{Y}}_s\Vert ^2_2 + {\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert {\bar{Z}}_v \Vert ^2_2 dv \Big ] \\&\quad \le C {\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert {\bar{G}}_v \Vert ^2_2 + \Vert {\bar{Y}}_v \Vert ^2_2 + \Vert {\bar{G}}_v \Vert _2 \Vert {\bar{Y}}_v\Vert _2 + \Vert {\bar{G}}_v \Vert _2 \Vert {\bar{E}}_v\Vert _2 \\&\qquad + \Vert E^{(1)}_v \Vert _2 \Vert {\bar{G}}_v\Vert ^2_2 + \Vert E^{(2)}_v \Vert _2 \Vert {\bar{G}}_v \Vert _2 \Vert {\bar{Y}}_v\Vert _2 \\&\qquad + \Vert {\bar{Y}}_v \Vert _2 \Vert {\bar{Z}}_v\Vert _2 + \Vert Z^{(1)}_v \Vert _2 \Vert {\bar{Y}}_v\Vert ^2_2 + \Vert Z^{(1)}_v \Vert _2 \Vert {\bar{G}}_v \Vert _2 \Vert {\bar{Y}}_v\Vert _2 dv \Big ]. \end{aligned}$$

By Hölder's inequality, we have

$$\begin{aligned} {\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert E^{(2)}_v \Vert _2 \Vert {\bar{G}}_v \Vert _2 \Vert {\bar{Y}}_v\Vert _2 dv \Big ]&\le C \sqrt{{\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert E^{(2)}_v \Vert ^2_2 dv \Big ] {\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert {\bar{G}}_v \Vert ^2_2 \Vert {\bar{Y}}_v\Vert ^2_2 dv \Big ]} \\&\le C \sqrt{ {\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert {\bar{G}}_v \Vert ^2_2 \Vert {\bar{Y}}_v\Vert ^2_2 dv \Big ]} \\&\le C \sqrt{ {\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert {\bar{G}}_v \Vert ^4_2 + \Vert {\bar{Y}}_v\Vert ^4_2 dv \Big ]}. \end{aligned}$$

Other terms can be treated in a similar way. Finally,

$$\begin{aligned}&\Vert {\bar{G}}_s\Vert ^2_2 + {\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert {\bar{E}}_v \Vert ^2_2 dv \Big ] + \Vert {\bar{Y}}_s\Vert ^2_2 + {\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert {\bar{Z}}_v \Vert ^2_2 dv \Big ] \\&\quad \le C{\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert {\bar{G}}_v \Vert ^2_2 + \Vert {\bar{Y}}_v \Vert ^2_2 dv \Big ] + \frac{1}{2} {\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert {\bar{E}}_v \Vert ^2_2 dv \Big ] \\&\qquad + \frac{1}{2} {\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert {\bar{Z}}_v \Vert ^2_2 dv \Big ]\\&\qquad + C \sqrt{ {\mathbb {E}}^{\mathbb {P}}_s \Big [ \int ^T_s \Vert {\bar{G}}_v \Vert ^4_2 + \Vert {\bar{Y}}_v\Vert ^4_2 dv \Big ]}. \end{aligned}$$

Consider \(s\in [T-\delta ,T]\) and denote \(G_\delta = \Vert {\bar{G}}_. \Vert _{L^\infty _{{\mathcal {F}}}(T -\delta , T; {\mathbb {R}})}\) and \(Y_\delta = \Vert {\bar{Y}}_. \Vert _{L^\infty _{{\mathcal {F}}}(T -\delta , T; {\mathbb {R}})}\); we obtain

$$\begin{aligned} \Vert {\bar{G}}_s\Vert ^2_2 + \Vert {\bar{Y}}_s\Vert ^2_2\le & {} C \delta [G^2_\delta + Y^2_\delta ] + C\sqrt{\delta } \sqrt{G^4_\delta + Y^4_\delta } \\\le & {} C \delta [G^2_\delta + Y^2_\delta ] + C\sqrt{\delta } [G^2_\delta + Y^2_\delta ] \le C\sqrt{\delta } [G^2_\delta + Y^2_\delta ]. \end{aligned}$$

Let \(I_\delta = G_\delta \vee Y_\delta \). Taking the supremum of the left-hand side over \(s\in [T-\delta ,T]\) yields

$$\begin{aligned} I^2_\delta \le C\sqrt{\delta } [G^2_\delta + Y^2_\delta ] \le C\sqrt{\delta } I^2_\delta . \end{aligned}$$

By choosing \(\delta \) sufficiently small such that \(C\sqrt{\delta } < 1\), we have \(I^2_\delta = 0\) and hence \(G_\delta = Y_\delta = 0\). The same steps are repeated on \([T-2\delta ,T-\delta ], [T-3\delta ,T-2\delta ],\ldots \), until time 0 is reached. The uniqueness follows. \(\square \)
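The interval-bootstrap argument above can be made concrete: choose \(\delta \) so that \(C\sqrt{\delta } < 1\) and cover \([0,T]\) by finitely many subintervals of length \(\delta \). A minimal sketch, where the stability constant is a hypothetical stand-in for the \(C\) of the estimate:

```python
import math

# Bootstrap step count for the uniqueness argument: pick delta with
# C * sqrt(delta) < 1, then step backward from T to 0 over n_intervals.
# C is a hypothetical stability constant, for illustration only.
C, T = 5.0, 1.0

delta = min(T, 0.5 * (1.0 / C) ** 2)   # gives C*sqrt(delta) <= 1/sqrt(2) < 1
n_intervals = math.ceil(T / delta)     # [T-delta, T], [T-2*delta, T-delta], ...
```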

1.7 Proof of Lemma 4.16

Proof

Suppose there is another solution pair \((X, {\bar{u}})\) and let

$$\begin{aligned} {\bar{p}}(s;t)&= {\tilde{p}}(s;t) - \big [ {\tilde{M}}_s X_s - {\tilde{\varGamma }}_s X_t - {\mathbb {E}}^{\mathbb {P}}_t[{\tilde{N}}_s X_s] \big ], \\ {\bar{k}}(s;t)&={\tilde{k}}(s;t) - \big [ X_s{\tilde{U}}_s + {\tilde{M}}_s {\bar{u}}_s \big ]. \end{aligned}$$

By \({\tilde{\varLambda }}(t;t)=0\), we derive

$$\begin{aligned} {\bar{u}}_t = - {\tilde{M}}_t^{-1} [ \theta _t {\bar{p}}(t;t) + {\bar{k}}(t;t)] + L_t X_t. \end{aligned}$$
(B.12)

Then

$$\begin{aligned} d[{\tilde{M}}_sX_s]&= \big [ -(r+\alpha ){\tilde{M}}X + \frac{\alpha ^2}{\xi } X - \big (\theta + \frac{{\tilde{U}}}{{\tilde{M}}}\big )'[ \theta _s {\bar{p}}(s;s) + {\bar{k}}(s;s)] \big ]ds \\&\quad + ({\tilde{U}} X + {\tilde{M}} {\bar{u}}) 'dW^{\mathbb {P}}_s. \end{aligned}$$

Finally, we can show

$$\begin{aligned} d {\bar{p}}(s;t)= & {} -(r+\alpha ) {\bar{p}}(s;t) ds + {\bar{k}}(s;t)'dW^{\mathbb {P}}_s+\big \{ \theta ' {\bar{k}}(s;s) \\&+ \frac{{\tilde{U}}'}{{\tilde{M}}} {\bar{k}}(s;s)\big \}ds - {\mathbb {E}}^{\mathbb {P}}_t\big \{ \theta ' {\bar{k}}(s;s) + \frac{{\tilde{V}}'}{{\tilde{M}}} {\bar{k}}(s;s)\big \}ds. \end{aligned}$$

This BSDE has exactly the same form as in [11], so it admits a unique solution \(({\bar{p}}(\cdot ;t), {\bar{k}}(\cdot ; t) ) \) in the space \(L^q_{{\mathcal {F}}}(\varOmega ; \,C(t,T;{\mathbb {R}}), {\mathbb {P}})\) \(\times \) \(L^q_{{\mathcal {F}}}(t, T;\, {\mathbb {R}}^d, {\mathbb {P}}) \) for any \(q \in (1,2)\). Therefore, \({\bar{p}}(s;t) = 0\) and \({\bar{k}}(s;t) = 0\). \(\square \)

1.8 Proof of Proposition 4.18

Proof

Consider the BSDE (B.10) for \((Y, G, Z, E)\). For notational simplicity, we introduce functions \(f^y\) and \(f^g\) for the drivers and rewrite (B.10) as

$$\begin{aligned} \left\{ \begin{array}{rcl} dY_s &{}=&{}- f^y(s, \theta _s, Y_s, Z_s, G_s) ds + Z'_s dW^{\mathbb {P}}_s,\quad Y_T=1,\\ dG_s&{}=&{}- f^g(s, \theta _s, G_s, E_s, Y_s) ds + E_s' dW^{\mathbb {P}}_s,\quad G_T=1. \end{array}\right. \end{aligned}$$
(B.13)

Lemma 4.14 shows that \(0 < Y, G \le \frac{1}{c}\) for a constant \(0< c < 1\). Since \(c\) depends on the value of \(T\), we use the notation \(c(T)\) to highlight this dependence. As in [27, Proposition 3.1], consider a truncation function \(\rho _\kappa \), which is a smooth modification of the centered Euclidean ball of radius \(\kappa \). Denote by \((Y^\kappa , G^\kappa , Z^\kappa , E^\kappa )\) the solution to the truncated BSDE

$$\begin{aligned} \left\{ \begin{array}{rcl} dY^\kappa _s &{}=&{}- f^y_\kappa (s, \theta _s, Y^\kappa _s, Z^\kappa _s, G^\kappa _s) ds + (Z^\kappa _s)' dW^{\mathbb {P}}_s,\quad Y^\kappa _T=1,\\ dG^\kappa _s&{}=&{} - f^g_\kappa (s, \theta _s, G^\kappa _s, E^\kappa _s, Y^\kappa _s) ds + (E^\kappa _s)' dW^{\mathbb {P}}_s,\quad G^\kappa _T=1, \end{array}\right. \end{aligned}$$
(B.14)

where \(f^y_\kappa \triangleq f^y(\cdot , \varphi (\cdot ), \cdot , \rho _\kappa (\cdot ), \cdot )\) and \(f^g_\kappa \triangleq f^g(\cdot , \varphi (\cdot ), \cdot , \rho _\kappa (\cdot ), \cdot )\) truncate in \((Z, E)\). First, consider the case where \(b_\vartheta \) is differentiable. Then \((\vartheta , Y, G, Z, E)\) is differentiable with respect to \(\zeta \) in (4.12) and

$$\begin{aligned} \nabla \vartheta _t&= I_d + \int ^t_0 \nabla b_\vartheta (s, \vartheta _s) \nabla \vartheta _s ds, \\ \nabla Y^\kappa _t&= \int ^T_t \big (\nabla _\vartheta f^y_\kappa \nabla \vartheta _s + \nabla _y f^y_\kappa \nabla Y^\kappa _s + \nabla _z f^y_\kappa \nabla Z^\kappa _s - \theta '_s \theta _s Y^\kappa _s \nabla G^\kappa _s + \theta '_s \rho _\kappa (Z^\kappa _s) \nabla G^\kappa _s \big )ds \\&\quad - \int ^T_t \nabla Z^\kappa _s dW^{\mathbb {P}}_s, \\ \nabla G^\kappa _t&= \int ^T_t \big (\nabla _\vartheta f^g_\kappa \nabla \vartheta _s + \nabla _g f^g_\kappa \nabla G^\kappa _s + \nabla _e f^g_\kappa \nabla E^\kappa _s + \frac{\alpha ^2_s}{\xi } G^\kappa _s \nabla Y^\kappa _s + \theta '_s \rho _\kappa (E^\kappa _s) {\tilde{\varGamma }}_s \nabla Y^\kappa _s \big )ds \\&\quad - \int ^T_t \nabla E^\kappa _s dW^{\mathbb {P}}_s, \end{aligned}$$

where \(\nabla \vartheta _t = (\partial \vartheta _{it}/ \partial \zeta _{jt} )_{1 \le i, j \le d}\), etc., are defined like their counterparts in [26, Theorem 3.1], and we omit the arguments of \(f^y_\kappa \) and \(f^g_\kappa \) for simplicity. One difficulty is that \(\nabla Y^\kappa _t\) and \(\nabla G^\kappa _t\) are coupled. The idea is to truncate \(\nabla Y^\kappa _t\) in the equation for \(\nabla G^\kappa _t\), and \(\nabla G^\kappa _t\) in the equation for \(\nabla Y^\kappa _t\). Without loss of generality, we use the same constant \(\kappa \), and the truncation function is denoted by \({\bar{\rho }}_\kappa \). Denote by \((\nabla {\bar{Y}}^\kappa , \nabla {\bar{G}}^\kappa , \nabla {\bar{Z}}^\kappa , \nabla {\bar{E}}^\kappa )\) the solution to

$$\begin{aligned} \nabla {\bar{Y}}^\kappa _t&= \int ^T_t \big (\nabla _\vartheta f^y_\kappa \nabla \vartheta _s + \nabla _y f^y_\kappa \nabla {\bar{Y}}^\kappa _s + \nabla _z f^y_\kappa \nabla {\bar{Z}}^\kappa _s - \theta '_s \theta _s Y^\kappa _s {\bar{\rho }}_\kappa ( \nabla {\bar{G}}^\kappa _s) \\&\quad + \theta '_s \rho _\kappa (Z^\kappa _s) {\bar{\rho }}_\kappa (\nabla {\bar{G}}^\kappa _s) \big )ds \\&\quad - \int ^T_t \nabla {\bar{Z}}^\kappa _s dW^{\mathbb {P}}_s, \\ \nabla {\bar{G}}^\kappa _t&= \int ^T_t \big (\nabla _\vartheta f^g_\kappa \nabla \vartheta _s + \nabla _g f^g_\kappa \nabla {\bar{G}}^\kappa _s + \nabla _e f^g_\kappa \nabla {\bar{E}}^\kappa _s + \frac{\alpha ^2_s}{\xi } G^\kappa _s {\bar{\rho }}_\kappa (\nabla {\bar{Y}}^\kappa _s) \\&\quad + \theta '_s \rho _\kappa (E^\kappa _s) {\tilde{\varGamma }}_s {\bar{\rho }}_\kappa (\nabla {\bar{Y}}^\kappa _s) \big )ds \\&\quad - \int ^T_t \nabla {\bar{E}}^\kappa _s dW^{\mathbb {P}}_s. \end{aligned}$$

Note the BSDE above uses \((Y^\kappa , G^\kappa , Z^\kappa , E^\kappa )\). Since \(\nabla _z f^y_\kappa \) and \(\nabla _e f^g_\kappa \) are essentially bounded, we can apply Girsanov’s theorem to introduce a Brownian motion \(W^y_t \triangleq W^{\mathbb {P}}_t - \int ^t_0 \nabla _z f^y_\kappa ds\) under measure \({\mathbb {Q}}^y\) and \(W^g_t \triangleq W^{\mathbb {P}}_t - \int ^t_0 \nabla _e f^g_\kappa ds\) under \({\mathbb {Q}}^g\). Then

$$\begin{aligned} \nabla {\bar{Y}}^\kappa _t&= {\mathbb {E}}^{{\mathbb {Q}}^y}_t \Big [\int ^T_t e^{\int ^s_t \nabla _y f^y_\kappa du} \big (\nabla _\vartheta f^y_\kappa \nabla \vartheta _s - \theta '_s \theta _s Y^\kappa _s {\bar{\rho }}_\kappa ( \nabla {\bar{G}}^\kappa _s) + \theta '_s \rho _\kappa (Z^\kappa _s) {\bar{\rho }}_\kappa (\nabla {\bar{G}}^\kappa _s) \big )ds \Big ], \\ \nabla {\bar{G}}^\kappa _t&= {\mathbb {E}}^{{\mathbb {Q}}^g}_t \Big [ \int ^T_t e^{\int ^s_t \nabla _g f^g_\kappa du} \big (\nabla _\vartheta f^g_\kappa \nabla \vartheta _s + \frac{\alpha ^2_s}{\xi } G^\kappa _s {\bar{\rho }}_\kappa (\nabla {\bar{Y}}^\kappa _s) + \theta '_s \rho _\kappa (E^\kappa _s) {\tilde{\varGamma }}_s {\bar{\rho }}_\kappa (\nabla {\bar{Y}}^\kappa _s) \big )ds \Big ]. \end{aligned}$$

Since \(\Vert \nabla \vartheta _t\Vert _2 \le e^{K_b T}\) and

$$\begin{aligned} \Vert \nabla _\vartheta f^y_\kappa \Vert _2&\le 2 (C_\theta + 1) \Big [ \frac{1}{c(T)} + \frac{1 + \Vert \tilde{\varGamma } \Vert _\infty }{c^2(T)} \Big ] + 2 \kappa + \frac{\kappa }{c(T)} + \frac{\kappa \Vert \tilde{\varGamma } \Vert _\infty }{c(T)} \triangleq C^y_\vartheta , \\ \Vert \nabla _y f^y_\kappa \Vert _2&\le 2(\Vert r\Vert _\infty + \Vert \alpha \Vert _\infty )+ (C_\theta + 1)^2 \Big [1 + \frac{1}{c(T)} + 2 \frac{\Vert \tilde{\varGamma } \Vert _\infty }{c(T)} \Big ] + 2\frac{\Vert \alpha \Vert ^2_\infty }{\xi c(T)} \\&\quad + \kappa \Vert \tilde{\varGamma } \Vert _\infty (C_\theta + 1) \triangleq C^y_y, \\ \Vert \nabla _\vartheta f^g_\kappa \Vert _2&\le \kappa + \frac{\kappa }{c(T)} + \frac{\kappa \Vert \tilde{\varGamma } \Vert _\infty }{c(T)} \triangleq C^g_\vartheta , \quad \Vert \nabla _g f^g_\kappa \Vert _2 \le \frac{\Vert \alpha \Vert ^2_\infty }{\xi c(T)} + \kappa (C_\theta + 1) \triangleq C^g_g. \end{aligned}$$

It is then straightforward to show

$$\begin{aligned} \Vert \nabla {\bar{Y}}^\kappa _t \Vert _2&\le Te^{ C^y_y T} \left( C^y_\vartheta e^{K_b T} + \frac{\kappa (C_\theta + 1)^2}{c(T)} + (C_\theta + 1) \kappa ^2 \right) , \\ \Vert \nabla {\bar{G}}^\kappa _t \Vert _2&\le T e^{C^g_g T} \left( C^g_\vartheta e^{K_b T} + \frac{ \kappa \Vert \alpha \Vert ^2_\infty }{\xi c(T)} + \kappa ^2 (C_\theta + 1) \Vert {\tilde{\varGamma }} \Vert _\infty \right) . \end{aligned}$$

Then, by setting \(T\) sufficiently small, the right-hand sides of the two inequalities can be made smaller than \(\kappa \). Therefore, the truncation \({\bar{\rho }}_\kappa \) is not binding and \((\nabla {\bar{Y}}^\kappa , \nabla {\bar{G}}^\kappa , \nabla {\bar{Z}}^\kappa , \nabla {\bar{E}}^\kappa ) =(\nabla Y^\kappa , \nabla G^\kappa , \nabla Z^\kappa , \nabla E^\kappa )\). With Malliavin calculus as in [26, Theorem 3.1], a version of \((Z^\kappa _t, E^\kappa _t)\) is given by \((\nabla Y^\kappa _t (\nabla \vartheta _t)^{-1}\sigma _\vartheta (t)\), \(\nabla G^\kappa _t (\nabla \vartheta _t)^{-1}\sigma _\vartheta (t))\). Since \(\Vert (\nabla \vartheta _t)^{-1}\sigma _\vartheta (t)\Vert _2 \le C_\sigma e^{K_b T}\), we can further select \(T\) small enough such that the truncation \(\rho _\kappa \) is also not binding. Then we deduce that \((Z, E)\) is essentially bounded, and therefore \({\tilde{U}}\) is essentially bounded.
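The smallness condition on \(T\) can be illustrated numerically: both right-hand sides vanish as \(T \rightarrow 0\) and grow in \(T\), so a bisection locates a horizon for which they stay below \(\kappa \). A minimal sketch with hypothetical stand-ins for \(C^y_y, C^y_\vartheta , C^g_g, C^g_\vartheta \) and the remaining constant terms:

```python
import math

# Choosing T small enough that the right-hand sides of the gradient
# bounds fall below kappa, so the truncation is not binding.
# All constants below are hypothetical stand-ins, for illustration only.
kappa, K_b = 2.0, 1.0
Cy_y, Cy_th, Cg_g, Cg_th, extra_y, extra_g = 1.0, 3.0, 1.0, 2.0, 4.0, 3.0

def bound_y(T):
    # stand-in for T e^{C^y_y T} ( C^y_theta e^{K_b T} + remaining kappa terms )
    return T * math.exp(Cy_y * T) * (Cy_th * math.exp(K_b * T) + extra_y)

def bound_g(T):
    # stand-in for T e^{C^g_g T} ( C^g_theta e^{K_b T} + remaining kappa terms )
    return T * math.exp(Cg_g * T) * (Cg_th * math.exp(K_b * T) + extra_g)

# Both bounds vanish as T -> 0 and increase in T, so bisection finds a
# horizon T_small with max(bound_y, bound_g) < kappa.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if max(bound_y(mid), bound_g(mid)) < kappa:
        lo = mid
    else:
        hi = mid
T_small = lo
```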

When \(b_\vartheta \) is not differentiable, the result can be proved by a standard approximation argument and the stability results for Lipschitz BSDEs, as noted in [26, 27]. \(\square \)


Han, B., Pun, C.S. & Wong, H.Y. Robust Time-Inconsistent Stochastic Linear-Quadratic Control with Drift Disturbance. Appl Math Optim 86, 4 (2022). https://doi.org/10.1007/s00245-022-09871-2
