
Risk-Sensitive Asset Management with Lognormal Interest Rates

  • Original Research
  • Published in Asia-Pacific Financial Markets

Abstract

Risk-sensitive asset management on both finite and infinite time horizons is treated in a market consisting of a bank account and a risky stock. The risk-free interest rate is modeled as a geometric Brownian motion and affects the return of the risky stock. The problems become standard risk-sensitive control problems. We derive the associated Hamilton–Jacobi–Bellman equations and study their solutions. Using these solutions, we construct optimal strategies and optimal values.

Fig. 1
Fig. 2
Fig. 3


References

  • Bensoussan, A. (1992). Stochastic control of partially observable systems. Cambridge: Cambridge University Press.

  • Bielecki, T. R., & Pliska, S. R. (1999). Risk sensitive dynamic asset management. Applied Mathematics and Optimization, 39(3), 337–360.

  • Bielecki, T. R., & Pliska, S. R. (2004). Risk sensitive intertemporal CAPM, with application to fixed-income management. IEEE Transactions on Automatic Control (special issue on stochastic control methods in financial engineering), 49(3), 420–432.

  • Bielecki, T. R., Pliska, S. R., & Sheu, S. J. (2005). Risk-sensitive portfolio management with Cox–Ingersoll–Ross interest rates: The HJB equation. SIAM Journal on Control and Optimization, 44(5), 1811–1843.

  • Black, F., Derman, E., & Toy, W. (1990). A one-factor model of interest rates and its application to treasury bond options. Financial Analysts Journal, 46(1), 33–39.

  • Davis, M., & Lleo, S. (2008). Risk-sensitive benchmarked asset management. Quantitative Finance, 8(4), 415–426.

  • Davis, M., & Lleo, S. (2011). Jump-diffusion risk-sensitive asset management I: Diffusion factor model. SIAM Journal on Financial Mathematics, 2(1), 22–54.

  • Davis, M., & Lleo, S. (2013). Jump-diffusion risk-sensitive asset management II: Jump-diffusion factor model. SIAM Journal on Control and Optimization, 51(2), 1441–1480.

  • Fouque, J. P., Papanicolaou, G., & Sircar, K. R. (2000). Derivatives in financial markets with stochastic volatility. Cambridge: Cambridge University Press.

  • Fleming, W. H., & McEneaney, W. M. (1995). Risk-sensitive control on an infinite time horizon. SIAM Journal on Control and Optimization, 33(6), 1881–1915.

  • Fleming, W. H., & Sheu, S. J. (1999). Optimal long term growth rate of expected utility of wealth. The Annals of Applied Probability, 9(3), 871–903.

  • Fleming, W. H., & Sheu, S. J. (2000). Risk-sensitive control and an optimal investment model. Mathematical Finance, 10(2), 197–213.

  • Fleming, W. H., & Sheu, S. J. (2002). Risk-sensitive control and an optimal investment model II. The Annals of Applied Probability, 12(2), 730–767.

  • Fleming, W. H., & Soner, M. (2006). Controlled Markov processes and viscosity solutions (2nd ed.). New York: Springer.

  • Hata, H. (2011). Down-side risk large deviations control problem with Cox–Ingersoll–Ross's interest rates. Asia-Pacific Financial Markets, 18(1), 69–87.

  • Hata, H. (2017). Risk-sensitive asset management in a general diffusion factor model: Risk-seeking case. Japan Journal of Industrial and Applied Mathematics, 34(1), 59–98.

  • Hata, H., & Iida, Y. (2006). A risk-sensitive stochastic control approach to an optimal investment problem with partial information. Finance and Stochastics, 10(3), 395–426.

  • Hata, H., Nagai, H., & Sheu, S. J. (2010). Asymptotics of the probability minimizing a "down-side" risk. The Annals of Applied Probability, 20(1), 52–89.

  • Hata, H., & Sekine, J. (2006). Solving long term optimal investment problems with Cox–Ingersoll–Ross interest rates. Advances in Mathematical Economics, 8, 231–255.

  • Hata, H., & Sekine, J. (2010). Explicit solution to a certain non-ELQG risk-sensitive stochastic control problem. Applied Mathematics and Optimization, 62(3), 341–380.

  • Hata, H., & Sekine, J. (2013). Risk-sensitive asset management with Wishart-autoregressive-type factor model. Journal of Mathematical Finance, 3(1A), 222–229.

  • Hata, H., & Sekine, J. (2017). Risk-sensitive asset management with jump-Wishart-autoregressive-type factor model. Asia-Pacific Financial Markets, 24(3), 221–252.

  • Kaise, H., & Sheu, S. J. (2004). Risk sensitive optimal investment: Solutions of the dynamical programming equation. In Mathematics of finance (Contemporary Mathematics, Vol. 351, pp. 217–230). Providence, RI: American Mathematical Society.

  • Kaise, H., & Sheu, S. J. (2006). On the structure of solutions of ergodic type Bellman equations related to risk-sensitive control. The Annals of Probability, 34(1), 284–320.

  • Karatzas, I., & Shreve, S. (1991). Brownian motion and stochastic calculus (2nd ed.). New York: Springer.

  • Kuroda, K., & Nagai, H. (2002). Risk sensitive portfolio optimization on infinite time horizon. Stochastics and Stochastics Reports, 73(3–4), 309–331.

  • Nagai, H. (2000). Risk-sensitive dynamic asset management with partial information. In Rajput et al. (Eds.), Stochastics in finite and infinite dimensions: In honor of G. Kallianpur (pp. 321–339). Boston: Birkhäuser.

  • Nagai, H. (2003). Optimal strategies for risk-sensitive portfolio optimization problems for general factor models. SIAM Journal on Control and Optimization, 41(6), 1779–1800.

  • Nagai, H. (2011). Asymptotics of the probability minimizing a "down-side" risk under partial information. Quantitative Finance, 11(5), 789–803.

  • Nagai, H. (2012). Downside risk minimization via a large deviations approach. The Annals of Applied Probability, 22(2), 608–669.

  • Nagai, H., & Peng, S. (2002). Risk-sensitive dynamic portfolio optimization with partial information on infinite time horizon. The Annals of Applied Probability, 12(1), 173–195.

  • Pham, H. (2003). A large deviations approach to optimal long term investment. Finance and Stochastics, 7(2), 169–195.

  • Sekine, J. (2006). A note on long-term optimal portfolios under drawdown constraints. Advances in Applied Probability, 38(3), 673–692.

  • Tamura, T., & Watanabe, Y. (2011). Risk-sensitive portfolio optimization problems for hidden Markov factors on infinite time horizon. Asymptotic Analysis, 75(3–4), 169–209.

  • Watanabe, Y. (2013). Asymptotic analysis for a downside risk minimization problem under partial information. Stochastic Processes and Their Applications, 123(3), 1046–1082.

  • Yor, M. (1992). On some exponential functionals of Brownian motion. Advances in Applied Probability, 24(3), 509–531.


Acknowledgements

The authors would like to thank the referees for helpful comments and suggestions. Hiroaki Hata's research is supported by a Grant-in-Aid for Young Scientists (B), No. 15K17584, from the Japan Society for the Promotion of Science.

Author information


Correspondence to Hiroaki Hata.

Appendices

Appendix 1: Formal Derivation of HJB Equation (1.5)

We give a formal derivation of (1.5). Consider

$$\begin{aligned} V(t, x, y):=\inf _{\pi \in {\mathcal {A}}_{t,T}}E_{t,x,y} \left[ \left( X_{t,T}^\pi \right) ^\gamma \right] , \end{aligned}$$

where \(X_{t,T}^\pi :=X_{T}^\pi /X_{t}^\pi\).

Let \(0\le t \le T\) and \(\delta >0\) be such that \(t+\delta <T\). By the dynamic programming principle (Section III.7 of Fleming and Soner 2006), we have

$$\begin{aligned} V(t, x, y)&= \inf _{\pi \in {\mathcal {A}}_{t, t+\delta }} E_{t, x, y}\left[ V(t+\delta , X^\pi _{t+\delta }, Y_{t+\delta })\right] . \end{aligned}$$
(A.1)

Here, we recall

$$\begin{aligned} dX_{s}^\pi&=X_{s}^\pi \{r(Y_s)+\pi _s \lambda (Y_s) \}ds+X^\pi _s\pi _s e dw(s), \quad X^\pi _t=x, \\ dY_s&=b(Y_s)ds+c(Y_s)dw_2(s), \quad Y_t=y. \end{aligned}$$

Assume \(V(t, x, y) \in {\mathcal {C}}^{1,2,2}([0,T]\times (0, \infty ) \times (0, \infty ))\). Applying Itô’s formula, we have

$$\begin{aligned} V(t+\delta , X^\pi _{t+\delta }, Y_{t+\delta })&= V(t, x, y)+\int ^{t+\delta }_t \left[ \partial _s V(s, X^\pi _{s}, Y_{s}) \right. \\&\quad \left. + \frac{c(Y_s)^2}{2} \partial _{yy}V(s, X^\pi _{s}, Y_{s})+b(Y_s)\partial _yV(s, X^\pi _{s}, Y_{s})\right. \\&\quad \left. +\frac{1}{2}(X_s^\pi )^2 \pi _s^2 \partial _{xx}V(s, X^\pi _{s}, Y_{s}) \right. \\&\quad \left. +X_s^\pi \{ r(Y_s)+\pi _s \lambda (Y_s)\}\partial _xV(s, X^\pi _{s}, Y_{s}) \right. \\&\left. +\,X_s^\pi \pi _s \rho c(Y_s) \partial _{xy}V(s, X^\pi _{s}, Y_{s}) \right] ds \\&\qquad + \int ^{t+\delta }_t \partial _xV(s, X^\pi _{s}, Y_{s})X_s^\pi \pi _s e dw(s)\\&+\int ^{t+\delta }_t \partial _{y}V(s, X^\pi _{s}, Y_{s}) c(Y_s)dw_2(s). \end{aligned}$$

Assuming suitable conditions on the derivatives of V and on \(\pi\) such that

$$\begin{aligned} E_{t,x,y}\left[ \int ^{t+\delta }_t \partial _xV(s, X^\pi _{s}, Y_{s})X_s^\pi \pi _s e dw(s) \right] =0, \end{aligned}$$

and

$$\begin{aligned} E_{t,x,y}\left[ \int ^{t+\delta }_t \partial _{y}V(s, X^\pi _{s}, Y_{s}) c(Y_s)dw_2(s) \right] =0, \end{aligned}$$

we have

$$\begin{aligned}&E_{t, x, y}\left[ V(t+\delta , X^\pi _{t+\delta }, Y_{t+\delta }) \right] =V(t, x, y) + E_{t,x,y} \left[ \int ^{t+\delta }_t \left[ \partial _s V(s, X^\pi _{s}, Y_{s}) \right. \right. \\&\qquad + \frac{c(Y_s)^2}{2} \partial _{yy}V(s, X^\pi _{s}, Y_{s})+b(Y_s)\partial _yV(s, X^\pi _{s}, Y_{s})+\frac{1}{2}(X_s^\pi )^2 \pi _s^2 \partial _{xx}V(s, X^\pi _{s}, Y_{s}) \\&\quad \left. \left. +X_s^\pi \{ r(Y_s)+\pi _s \lambda (Y_s)\}\partial _xV(s, X^\pi _{s}, Y_{s}) +X_s^\pi \pi _s \rho c(Y_s) \partial _{xy}V(s, X^\pi _{s}, Y_{s}) \right] ds \right] . \end{aligned}$$

Hence, (A.1) becomes

$$\begin{aligned}&\inf _{ \pi \in {\mathcal {A}}_{t, t+\delta }} E_{t,x, y}\left[ \int ^{t+\delta }_t \left[ \partial _s V(s, X^\pi _{s}, Y_{s})+ \frac{c(Y_s)^2}{2}\partial _{yy}V(s, X^\pi _{s}, Y_{s}) \right. \right. \\&\left. \left. +b(Y_s)\partial _yV(s, X^\pi _{s}, Y_{s})+\frac{1}{2}(X_s^\pi )^2 \pi _s^2 \partial _{xx}V(s, X^\pi _{s}, Y_{s}) \right. \right. \\&\left. \left. +X_s^\pi \{ r(Y_s)+\pi _s \lambda (Y_s)\}\partial _xV(s, X^\pi _{s}, Y_{s}) +X_s^\pi \pi _s \rho c(Y_s) \partial _{xy}V(s, X^\pi _{s}, Y_{s})\right] ds \right] =0. \end{aligned}$$

Dividing by \(\delta\) and letting \(\delta \rightarrow 0\), we can formally derive

$$\begin{aligned} \begin{aligned}&-\partial _{t}V= \frac{c(y)^2}{2} \partial _{yy}V +\inf _{\pi \in {{\mathbb {R}}}} \left[ b(y)\partial _{y}V + \rho c(y) \pi x \partial _{xy}V +\frac{1}{2}\pi ^2 x^2 \partial _{xx}V \right. \\&\quad \left. + \{ r(y)+\pi \lambda (y)\}x\partial _xV \right] ,\\&V(T, x, y)=x^\gamma . \end{aligned} \end{aligned}$$

Setting \(V(t,x,y)=x^\gamma {{\mathrm {e}}}^{\gamma v(t,y)}\), we obtain (1.5).
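The value in (A.1) depends on the wealth only through the ratio \(X^\pi_{t,T}=X^\pi_T/X^\pi_t\), so it can be sanity-checked by simulation without fixing x. The sketch below applies a log-Euler scheme to the wealth and factor dynamics above and forms a Monte Carlo estimate of \(E_{t,x,y}[(X^\pi_{t,T})^\gamma]\) for a constant proportion \(\pi\). It assumes, for illustration only, \(r(y)=y\), \(\lambda(y)=\lambda_0+\lambda_1 y\), the unit vector \(e=(\sqrt{1-\rho^2},\rho)\), and arbitrary parameter values.

```python
import numpy as np

def value_mc(t, T, y, pi, gamma, b=0.05, c=0.3, lam0=0.3, lam1=-0.2,
             rho=-0.5, n_steps=200, n_paths=20000, seed=0):
    """Monte Carlo estimate of E_{t,x,y}[(X_{t,T}^pi)^gamma] for a constant
    proportion pi.  Illustrative assumptions: r(y) = y,
    lambda(y) = lam0 + lam1*y, e = (sqrt(1-rho^2), rho)."""
    rng = np.random.default_rng(seed)
    dt = (T - t) / n_steps
    e1, e2 = np.sqrt(1.0 - rho**2), rho    # unit vector e, correlation rho
    log_ratio = np.zeros(n_paths)          # log(X_s^pi / X_t^pi)
    Y = np.full(n_paths, y)
    for _ in range(n_steps):
        dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        r, lam = Y, lam0 + lam1 * Y
        # dX = X{r + pi*lam}ds + X*pi*(e1 dw1 + e2 dw2), with |e| = 1
        log_ratio += (r + pi * lam - 0.5 * pi**2) * dt \
            + pi * (e1 * dw1 + e2 * dw2)
        # lognormal factor dY = bY ds + cY dw2, stepped exactly to keep Y > 0
        Y *= np.exp((b - 0.5 * c**2) * dt + c * dw2)
    return float(np.mean(np.exp(gamma * log_ratio)))

est = value_mc(0.0, 1.0, 0.5, pi=0.5, gamma=0.5)
```

Varying pi in such a sketch gives a rough numerical picture of the infimum that the HJB equation characterizes analytically.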

Appendix 2: Preliminaries

In this section, we present the following results, which will be used several times in the proofs of our theorems.

Lemma B.1

(Hata and Sekine (2006), Theorem 3.1) For \(f:=(f_1,f_2): (0,\infty ) \rightarrow {{\mathbb {R}}}^2\), denote \(f(Y):=\left( f(Y_t)\right) _{t\in [0,T]}\). Suppose f(Y) is progressively measurable and satisfies \(\int _0^T |f(Y_t)|^2 dt<\infty\) a.s. Then the martingale property of \((M_t)_{t\in [0,T]}\) is equivalent to that of \((M_{2,t})_{t\in [0,T]}\). Here, \((M_t)_{t\in [0,T]}\) and \((M_{2,t})_{t\in [0,T]}\) are defined as follows:

$$\begin{aligned} M_t:&={{\mathrm {e}}}^{\int ^t_0 f(Y_s)dw(s)-\frac{1}{2}\int ^t_0 |f(Y_s)|^2 ds },\\ M_{2,t}&:={{\mathrm {e}}}^{\int ^t_0 f_2(Y_s)dw_2(s)-\frac{1}{2}\int ^t_0 |f_2(Y_s)|^2 ds }. \end{aligned}$$

Lemma B.2

Assume \((\mathbf{A1})\)-\((\mathbf{A3})\). Define \((M_t)_{t\in [0,T]}\) as

$$\begin{aligned} M_t:&={{\mathrm {e}}}^{\int ^t_0 (\beta _0+\beta _1Y_s)dw_2(s)-\frac{1}{2}\int ^t_0 (\beta _0+\beta _1Y_s)^2 ds }. \end{aligned}$$

If \(\beta _1<0\) and \(\beta _0 \in {\mathbb {R}}\), then we have

$$\begin{aligned} E\left[ M_t \right] =1, \ t\in [0,T]. \end{aligned}$$
(B.1)

Proof

We apply the idea of Lemma 4.1.1 in Bensoussan (1992). Recall that

$$\begin{aligned} dM_t =M_t(\beta _0+\beta _1 Y_t)dw_2(t). \end{aligned}$$
(B.2)

Let \(\epsilon >0\) be arbitrary. Applying Itô’s formula to \(\frac{M_t}{1+\epsilon M_t}\), we have

$$\begin{aligned}&d\left( \frac{M_t}{1+\epsilon M_t} \right) = dN_t-A_t dt, \end{aligned}$$
(B.3)

where \(N_t\) and \(A_t\) are defined by

$$\begin{aligned} N_t:=\int ^{t}_{0} \frac{M_s(\beta _0+\beta _1 Y_s)}{\left( 1+\epsilon M_s\right) ^2} dw_2(s),\ \mathrm{and} \ A_t:=\frac{\epsilon M_t^2(\beta _0+\beta _1 Y_t)^2}{\left( 1+\epsilon {M_t}\right) ^3}. \end{aligned}$$

If we can check that there is \(K_{1,T, y}>0\) such that

$$\begin{aligned} E\left[ \int ^{t}_{0} M_s(1+Y^2_s)ds\right] \le K_{1,T, y}, \end{aligned}$$
(B.4)

then we see that

$$\begin{aligned} E\left[ |N_t|^2 \right]&\le \overline{C}_{\epsilon } E\left[ \int ^{t}_{0} M_s(1+Y^2_s) ds \right] \\&\le \overline{C}_{\epsilon }K_{1,T,y}, \end{aligned}$$

and that \(N_t\) is a square-integrable martingale. Therefore, integrating (B.3) over [0, t] and taking expectations on both sides, we have

$$\begin{aligned} E\left[ \frac{M_t}{1+\epsilon M_t} \right] =\frac{1}{1+\epsilon }-E\left[ \int ^t_0 A_s ds \right] . \end{aligned}$$
(B.5)

Here, we observe the following:

  • \(\displaystyle A_s \rightarrow 0\) for a.e. \(s\), a.s., as \(\epsilon \rightarrow 0\).

  • \(\displaystyle A_s \le K_2 M_s(1+Y^2_s), \ \exists K_2>0\).

Hence, from (B.4) and the dominated convergence theorem, we have

$$\begin{aligned} E\left[ \int ^t_0 A_s ds \right] \rightarrow 0 \ \text {as} \ \epsilon \rightarrow 0. \end{aligned}$$

Meanwhile, since \(E\left[ M_t \right] \le 1\),

$$\begin{aligned} E\left[ \frac{M_t}{1+\epsilon M_t} \right] \rightarrow E \left[ M_t \right] \ \text {as} \ \epsilon \rightarrow 0. \end{aligned}$$

Hence, letting \(\epsilon \rightarrow 0\) in (B.5), we obtain \(E\left[ M_t \right] = 1\).

Finally, we prove (B.4). Applying Itô’s formula to \(Y_t^2\), we have

$$\begin{aligned} dY_t^2=(2b+c^2)Y^2_tdt+2cY^2_t dw_2(t). \end{aligned}$$
(B.6)

Using (B.2) and (B.6), we have

$$\begin{aligned} d\left\{ M_t Y^2_t\right\}&=M_t dY^2_t+Y^2_t dM_t + dM_t \cdot dY^2_t \\&=M_t Y^2_t (2c\beta _1 Y_t +2c\beta _0+2b+c^2)dt+M_t Y^2_t(\beta _1 Y_t+\beta _0+2c)dw_2(t). \end{aligned}$$

Setting \(\tau _n:=\inf \{t>0 \,;\, Y_t<1/n \ \text {or} \ Y_t>n\}\), we have

$$\begin{aligned} E\left[ M_{T \wedge \tau _n} Y^2_{T \wedge \tau _n} \right] -y^2=E\left[ \int ^{T \wedge \tau _n} _0 M_s Y^2_s (2c\beta _1 Y_s +2c\beta _0+2b+c^2)ds \right] . \end{aligned}$$

Observing that

$$\begin{aligned}&M_s Y^2_s (2c\beta _1 Y_s +2c\beta _0+2b+c^2)\\&\quad =M_s Y^2_s \left\{ 2c\beta _1 Y_s +(2c\beta _0+2b+c^2+1)-1\right\} \\&\quad \le K_3 M_s-M_s Y^2_s, \quad \exists K_3>0, \end{aligned}$$

we have

$$\begin{aligned} E\left[ \int ^{T \wedge \tau _n}_0 M_s Y^2_s ds \right] \le y^2+K_3T. \end{aligned}$$

As \(n\rightarrow \infty\), and using \(E[M_s]\le 1\) once more, we obtain (B.4) with \(K_{1,T,y}=y^2+(K_3+1)T\). \(\square\)
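Lemma B.2 lends itself to a quick numerical check: for a lognormal factor \(dY_t=bY_t\,dt+cY_t\,dw_2(t)\) and an integrand \(\beta_0+\beta_1 Y_t\) with \(\beta_1<0\), a Monte Carlo average of \(M_t\) should be close to one. The sketch below uses illustrative parameter values; the discretized \(M\) is a product of conditionally lognormal factors with unit conditional mean, so the estimator is unbiased.

```python
import numpy as np

def expected_M(t=1.0, y=0.5, b=0.05, c=0.3, beta0=0.2, beta1=-0.4,
               n_steps=400, n_paths=40000, seed=1):
    """Monte Carlo check of (B.1): E[M_t] = 1 for the stochastic exponential
    with integrand beta0 + beta1*Y, beta1 < 0.  Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    Y = np.full(n_paths, y)
    logM = np.zeros(n_paths)
    for _ in range(n_steps):
        dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        f = beta0 + beta1 * Y
        # log of the stochastic exponential: f dw2 - (1/2) f^2 dt
        logM += f * dw2 - 0.5 * f**2 * dt
        # lognormal factor dY = bY dt + cY dw2, exact step keeps Y > 0
        Y *= np.exp((b - 0.5 * c**2) * dt + c * dw2)
    return float(np.mean(np.exp(logM)))

em = expected_M()
```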

Appendix 3: The Smoothness of the Function \({\widehat{v}}(t,y)\)

From (2.11) we recall that

$$\begin{aligned} {\widehat{v}}(t,y)&=\frac{1}{k}\log E\left[ F_{T-t}\right] \\&=\frac{1}{k}\log \tilde{E}\left[ {{\mathrm {e}}}^{k\int ^{T-t}_0 \{r(Y_s)+\frac{\lambda (Y_s)^2}{2(1-\gamma )} \} ds } \right] , \end{aligned}$$

where \(\tilde{E}[\cdot ]\) denotes the expectation with respect to the probability measure \(\tilde{P}\) on \((\Omega , {\mathcal {F}})\) defined by

$$\begin{aligned} \frac{d\tilde{P}}{dP}\biggr |_{{\mathcal {F}}_{t}}:={{\mathrm {e}}}^{\frac{\gamma \rho }{1-\gamma }\int ^t_0 \lambda (Y_{s})dw_{2}(s)-\frac{1}{2}(\frac{\gamma \rho }{1-\gamma })^2 \int ^t_0 \lambda (Y_{s})^2 ds}. \end{aligned}$$
(C.1)

From Lemma B.2, \(\tilde{P}\) is well-defined. Under \(\tilde{P}\), \(Y_t\) solves

$$\begin{aligned} dY_t=\left\{ \left( b+\frac{\gamma c\rho \lambda _0}{1-\gamma } \right) Y_t+\frac{\gamma c\rho \lambda _1}{1-\gamma }Y_t^2 \right\} dt+cY_t d\tilde{w}_2(t), \quad Y_0=y, \end{aligned}$$
(C.2)

where \(\tilde{w}_2(t)\) is a Brownian motion under \(\tilde{P}\):

$$\begin{aligned} \tilde{w}_2(t):=w_2(t)-\int ^t_0 \left( \frac{\gamma \rho \lambda _0}{1-\gamma }+\frac{\gamma \rho \lambda _1}{1-\gamma } Y_s \right) ds. \end{aligned}$$

To show the smoothness of \({\widehat{v}}\), it suffices to show the smoothness of \(\phi\):

$$\begin{aligned} \phi (t,y):=\tilde{E}\left[ {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s)ds } \right] , \end{aligned}$$
(C.3)

where \(\theta (y)\) is defined by

$$\begin{aligned} \theta (y):=k\left\{ r(y)+\frac{\lambda (y)^2}{2(1-\gamma )} \right\} . \end{aligned}$$
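The expectation \(\phi(t,y)\) in (C.3) can be approximated by simulating (C.2) with an Euler scheme and averaging the exponential functional. The sketch below assumes, for illustration, \(r(y)=y\) and \(\lambda(y)=\lambda_0+\lambda_1 y\); the constant \(k\) and all parameter values are placeholders rather than the paper's calibrated quantities.

```python
import numpy as np

def phi_mc(t, y, k=0.5, gamma=0.5, rho=-0.5, b=0.05, c=0.3,
           lam0=0.3, lam1=-0.2, n_steps=200, n_paths=20000, seed=2):
    """Monte Carlo sketch of phi(t,y) = E~[exp(int_0^t theta(Y_s) ds)],
    theta(y) = k*(r(y) + lambda(y)^2 / (2(1-gamma))), with Y solving (C.2)
    under P~.  Illustrative assumptions: r(y)=y, lambda(y)=lam0+lam1*y."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    a1 = b + gamma * c * rho * lam0 / (1.0 - gamma)   # linear drift coeff.
    a2 = gamma * c * rho * lam1 / (1.0 - gamma)       # quadratic drift coeff.
    Y = np.full(n_paths, y)
    integral = np.zeros(n_paths)                      # int_0^t theta(Y_s) ds
    for _ in range(n_steps):
        theta = k * (Y + (lam0 + lam1 * Y) ** 2 / (2.0 * (1.0 - gamma)))
        integral += theta * dt
        dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        Y += (a1 * Y + a2 * Y**2) * dt + c * Y * dw2  # Euler step for (C.2)
        Y = np.maximum(Y, 1e-12)                      # keep the factor positive
    return float(np.mean(np.exp(integral)))

phi_est = phi_mc(1.0, 0.5)
```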

We first prove that \(\phi (t,y)\) is differentiable with respect to t. We observe

$$\begin{aligned} \begin{aligned} \frac{\phi (t+h,y)-\phi (t,y)}{h}&=\tilde{E}\left[ \frac{1}{h}\left( {{\mathrm {e}}}^{\int ^{t+h}_0 \theta (Y_s)ds } - {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s)ds } \right) \right] \\&=\tilde{E}\left[ \frac{1}{h} \int ^h_0 \frac{\partial }{\partial \epsilon } \left( {{\mathrm {e}}}^{\int ^{t+\epsilon }_0 \theta (Y_s)ds } \right) d\epsilon \right] . \end{aligned} \end{aligned}$$
(C.4)

We also observe

$$\begin{aligned} \begin{aligned} \left| \frac{1}{h} \int ^h_0 \frac{\partial }{\partial \epsilon } \left( {{\mathrm {e}}}^{\int ^{t+\epsilon }_0 \theta (Y_s)ds } \right) d\epsilon \right|&\le \frac{1}{h} \int ^h_0|\theta (Y_{t+\epsilon })|d\epsilon \\&\le K \left( \sup _{t\in [0,T]}|Y_t|^2+1 \right) . \end{aligned} \end{aligned}$$
(C.5)

Now, we recall that, for \(p>1\),

$$\begin{aligned} \begin{aligned}&dY_t^p=pY_t^{p-1}dY_t+\frac{p(p-1)}{2}Y_t^{p-2}d\langle Y_{\cdot } \rangle _t\\&=p\left[ \frac{\gamma c\rho \lambda _1}{1-\gamma }Y_t^{p+1} +\left( b+\frac{\gamma c\rho \lambda _0}{1-\gamma } +\frac{c^2(p-1)}{2} \right) Y_t^p\right] dt+pcY_t^p d\tilde{w}_2(t) \end{aligned} \end{aligned}$$
(C.6)

Using (C.6) with \(p=2\), we have

$$\begin{aligned} dY_t^2=2\left[ \frac{\gamma c\rho \lambda _1}{1-\gamma }Y_t^{3} +\left( b+\frac{\gamma c\rho \lambda _0}{1-\gamma } +\frac{c^2}{2} \right) Y_t^2\right] dt+2cY_t^2 d\tilde{w}_2(t). \end{aligned}$$

Define

$$\begin{aligned} \tilde{M}_p:=\sup _{y>0} \left[ p\left\{ \frac{\gamma c\rho \lambda _1}{1-\gamma }y^{p+1} +\left( b+\frac{\gamma c\rho \lambda _0}{1-\gamma } +\frac{c^2(p-1)}{2} \right) y^p \right\} \right] . \end{aligned}$$
(C.7)

Then, we have

$$\begin{aligned} \tilde{E}\left[ \sup _{t\in [0,T]}Y_t^2 \right]&\le y^2+\tilde{M}_2 T+2c \tilde{E}\left[ \sup _{t\in [0,T]} \int _0 ^t Y_s^2 d\tilde{w}_2(s) \right] \nonumber \\&\le y^2+\tilde{M}_2 T+2c \tilde{E}\left[ \left( \sup _{t\in [0,T]} \int _0 ^t Y_s^2 d\tilde{w}_2(s) \right) ^2 \right] ^{1/2} \nonumber \\&\le y^2+\tilde{M}_2 T+8c \tilde{E}\left[ \langle \int _0 ^{(\cdot )} Y_s^2 d\tilde{w}_2(s) \rangle _T \right] ^{1/2}\nonumber \\&\le y^2+\tilde{M}_2 T+8c \left( \int _0^T \tilde{E}\left[ Y_t^4\right] dt\right) ^{1/2}. \end{aligned}$$
(C.8)

In the third inequality we use the Burkholder–Davis–Gundy inequality. Using (C.6) and (C.7) with \(p=4\), we have

$$\begin{aligned} \tilde{E}\left[ Y_{t \wedge \tau _n}^4 \right] \le y^4 +\tilde{M}_4 T, \end{aligned}$$

where \(\tau _n:=\inf \{t>0 \,;\, Y_t<1/n \ \text {or} \ Y_t>n \}\). As \(n \rightarrow \infty\), we have

$$\begin{aligned} \tilde{E}\left[ Y_{t}^4 \right] \le y^4 +\tilde{M}_4 T. \end{aligned}$$

Hence, we have

$$\begin{aligned} \tilde{E}\left[ \sup _{t\in [0,T]}Y_t^2 \right] \le y^2+\tilde{M}_2 T+8c\sqrt{(y^4 +\tilde{M}_4 T)T}. \end{aligned}$$

Using (C.5) and the dominated convergence theorem, we have

$$\begin{aligned} \frac{\partial \phi }{\partial t}(t,y)&=\lim _{h\rightarrow 0}\tilde{E}\left[ \frac{1}{h} \int ^h_0 \frac{\partial }{\partial \epsilon } \left( {{\mathrm {e}}}^{\int ^{t+\epsilon }_0 \theta (Y_s)ds } \right) d\epsilon \right] \\&=\tilde{E}\left[ \theta (Y_t) {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s)ds } \right] . \end{aligned}$$

Next, we show that \(\phi (t,y)\) is differentiable with respect to y. We observe

$$\begin{aligned} \begin{aligned} \frac{\phi (t,y+h)-\phi (t,y)}{h}&=\tilde{E}\left[ \frac{1}{h}\left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+h})ds } - {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^y)ds } \right) \right] \\&=\tilde{E}\left[ \frac{1}{h} \int ^h_0 \frac{\partial }{\partial \epsilon } \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) d\epsilon \right] . \end{aligned} \end{aligned}$$
(C.9)

Here, \(Y_s^{z}\) solves (C.2) with \(Y_0^{z}=z\). We also observe

$$\begin{aligned} \begin{aligned} \left| \frac{1}{h} \int ^h_0 \frac{\partial }{\partial \epsilon } \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) d\epsilon \right|&= \frac{1}{h} \int ^h_0 \left| \frac{\partial }{\partial \epsilon } \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) \right| d\epsilon . \end{aligned} \end{aligned}$$
(C.10)

We also have

$$\begin{aligned} \begin{aligned} \left| \frac{\partial }{\partial \epsilon } \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) \right|&\le \left| k \int ^t_0 \left\{ 1+\frac{\lambda (Y^{y+\epsilon }_s)}{1-\gamma } \right\} \frac{\partial Y^{y+\epsilon }_s}{\partial \epsilon } ds \right| \\&\le K_T \int ^t_0 (1+Y^{y+\epsilon }_s) \left| \frac{\partial Y^{y+\epsilon }_s}{\partial \epsilon }\right| ds. \end{aligned} \end{aligned}$$
(C.11)

Following the arguments of Lemma 2.2, we see that (C.2) has a unique solution:

$$\begin{aligned} Y^{y+\epsilon }_{t}=\frac{\displaystyle (y+\epsilon ) {{\mathrm {e}}}^{c\tilde{w}_2(t)+c^{2}\tilde{\delta } t}}{\displaystyle 1-\frac{\gamma c\rho \lambda _1}{1-\gamma }(y+\epsilon ) \int _{0}^{t} {{\mathrm {e}}}^{c\tilde{w}_2(s)+c^{2}\tilde{\delta } s}ds}, \end{aligned}$$

where

$$\begin{aligned} \tilde{\delta }:=\frac{1}{c^2}\left( b+\frac{\gamma c \rho \lambda _0}{1-\gamma }-\frac{c^2}{2} \right) . \end{aligned}$$
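This explicit formula can be evaluated pathwise and compared with an Euler discretization of (C.2) driven by the same Brownian increments. The sketch below uses a trapezoidal rule for the time integral in the denominator; all parameter values are illustrative placeholders.

```python
import numpy as np

GAMMA, RHO, B, C, LAM0, LAM1 = 0.5, -0.5, 0.05, 0.3, 0.3, -0.2  # illustrative

def closed_form_Y(t_grid, w2, y):
    """Pathwise evaluation of the explicit solution of (C.2)."""
    delta = (B + GAMMA * C * RHO * LAM0 / (1 - GAMMA) - 0.5 * C**2) / C**2
    expo = np.exp(C * w2 + C**2 * delta * t_grid)
    # trapezoidal approximation of int_0^t exp(c w2(s) + c^2 delta s) ds
    I = np.concatenate(([0.0],
                        np.cumsum(0.5 * (expo[1:] + expo[:-1]) * np.diff(t_grid))))
    kappa = GAMMA * C * RHO * LAM1 / (1 - GAMMA)
    return y * expo / (1.0 - kappa * y * I)

# Euler discretization of (C.2) on the same Brownian path, for comparison
rng = np.random.default_rng(3)
n, T, y0 = 4000, 1.0, 0.5
t_grid = np.linspace(0.0, T, n + 1)
dw = rng.normal(0.0, np.sqrt(T / n), n)
w2 = np.concatenate(([0.0], np.cumsum(dw)))
a1 = B + GAMMA * C * RHO * LAM0 / (1 - GAMMA)   # linear drift coefficient
a2 = GAMMA * C * RHO * LAM1 / (1 - GAMMA)       # quadratic drift coefficient
Y_eu = np.empty(n + 1)
Y_eu[0] = y0
for i in range(n):
    Y_eu[i + 1] = (Y_eu[i] + (a1 * Y_eu[i] + a2 * Y_eu[i] ** 2) * (T / n)
                   + C * Y_eu[i] * dw[i])
Y_cf = closed_form_Y(t_grid, w2, y0)
err = float(np.max(np.abs(Y_cf - Y_eu)))
```

The two paths should agree up to discretization error, confirming that the displayed formula solves (C.2) pathwise.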

By a direct calculation we have

$$\begin{aligned} \frac{\partial Y^{y+\epsilon }_{t}}{\partial \epsilon }=\frac{\displaystyle {{\mathrm {e}}}^{c\tilde{w}_2(t)+c^{2}\tilde{\delta } t}}{\displaystyle \left\{ 1-\frac{\gamma c\rho \lambda _1}{1-\gamma }(y+\epsilon ) \int _{0}^{t} {{\mathrm {e}}}^{c\tilde{w}_2(s)+c^{2}\tilde{\delta } s}ds\right\} ^2 }. \end{aligned}$$

Therefore, we have

$$\begin{aligned} 0<Y^{y+\epsilon }_{t}< (y+1) {{\mathrm {e}}}^{c\tilde{w}_2(t)+c^{2}\tilde{\delta } t}, \ \text {and} \ \ 0<\frac{\partial Y^{y+\epsilon }_{t}}{\partial \epsilon }<{{\mathrm {e}}}^{c\tilde{w}_2(t)+c^{2}\tilde{\delta } t}. \end{aligned}$$
(C.12)

Using (C.11) and (C.12), we have

$$\begin{aligned} \sup _{\epsilon \in [0,1]} \left| \frac{\partial }{\partial \epsilon } \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) \right| \le K_T (y+1) \int ^t_0 \left( {{\mathrm {e}}}^{2c\tilde{w}_2(s)+2c^{2}\tilde{\delta } s} +1 \right) ds. \end{aligned}$$
(C.13)

From (C.10) and (C.13) we have

$$\begin{aligned} \begin{aligned} \left| \frac{1}{h} \int ^h_0 \frac{\partial }{\partial \epsilon } \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) d\epsilon \right|&\le K_T (y+1) \int ^t_0 \left( {{\mathrm {e}}}^{2c\tilde{w}_2(s)+2c^{2}\tilde{\delta } s} +1 \right) ds. \end{aligned} \end{aligned}$$
(C.14)

Moreover, we have

$$\begin{aligned} \tilde{E}\left[ \int ^t_0 {{\mathrm {e}}}^{2c\tilde{w}_2(s)+2c^{2}\tilde{\delta } s} ds \right] =\int ^t_0 {{\mathrm {e}}}^{2c^2s+2c^{2}\tilde{\delta } s}ds < \infty . \end{aligned}$$
(C.15)

Using (C.9), (C.14), (C.15) and the dominated convergence theorem, we have

$$\begin{aligned} \frac{\partial \phi }{\partial y}(t,y)&=\lim _{h\rightarrow 0}\tilde{E}\left[ \frac{1}{h} \int ^h_0 \frac{\partial }{\partial \epsilon } \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) d\epsilon \right] \\&=\tilde{E}\left[ \frac{\partial }{\partial y}\left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^y)ds } \right) \right] . \end{aligned}$$

Finally, we show that \(\phi (t,y)\) is twice differentiable with respect to y. We have

$$\begin{aligned} \begin{aligned} \frac{\partial _y \phi (t,y+h)-\partial _y \phi (t,y)}{h}&=\tilde{E}\left[ \frac{1}{h} \int ^h_0 \frac{\partial ^2 }{\partial \epsilon \partial y} \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) d\epsilon \right] . \end{aligned} \end{aligned}$$
(C.16)

Here, we note

$$\begin{aligned} \begin{aligned} \left| \frac{1}{h} \int ^h_0 \frac{\partial ^2 }{\partial \epsilon \partial y} \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) d\epsilon \right|&= \frac{1}{h} \int ^h_0 \left| \frac{\partial ^2 }{\partial \epsilon \partial y} \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) \right| d\epsilon . \end{aligned} \end{aligned}$$
(C.17)

By a direct calculation, we have

$$\begin{aligned} \begin{aligned}&\frac{\partial }{\partial \epsilon } \left\{ \frac{\partial }{\partial y}\left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) \right\} =\int ^t_0 \frac{\partial ^2 }{\partial \epsilon \partial y} \theta (Y^{y+\epsilon }_s)ds {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \\&\quad +\left( \int ^t_0 \frac{\partial }{\partial y} \theta (Y^{y+\epsilon }_s)ds \right) \left( \int ^t_0 \frac{\partial }{\partial \epsilon } \theta (Y^{y+\epsilon }_s)ds \right) {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds }. \end{aligned} \end{aligned}$$
(C.18)

Moreover, we have

$$\begin{aligned} \frac{\partial Y^{y+\epsilon }_{t}}{\partial y}=\frac{\displaystyle {{\mathrm {e}}}^{c\tilde{w}_2(t)+c^{2}\tilde{\delta } t}}{\displaystyle \left\{ 1-\frac{\gamma c\rho \lambda _1}{1-\gamma }(y+\epsilon ) \int _{0}^{t} {{\mathrm {e}}}^{c\tilde{w}_2(s)+c^{2}\tilde{\delta } s}ds\right\} ^2 }, \end{aligned}$$

and

$$\begin{aligned} \frac{\partial ^2 Y^{y+\epsilon }_{t}}{\partial y \partial \epsilon }=-2 \frac{\displaystyle -\frac{\gamma c\rho \lambda _1}{1-\gamma } \int _{0}^{t} {{\mathrm {e}}}^{c\tilde{w}_2(s)+c^{2}\tilde{\delta } s}ds \cdot {{\mathrm {e}}}^{c\tilde{w}_2(t)+c^{2}\tilde{\delta } t}}{\displaystyle \left\{ 1-\frac{\gamma c\rho \lambda _1}{1-\gamma }(y+\epsilon ) \int _{0}^{t} {{\mathrm {e}}}^{c\tilde{w}_2(s)+c^{2}\tilde{\delta } s}ds\right\} ^3 }. \end{aligned}$$

Then, we observe

$$\begin{aligned} \begin{aligned}&0<\frac{\partial Y^{y+\epsilon }_{t}}{\partial y}<{{\mathrm {e}}}^{c\tilde{w}_2(t)+c^{2}\tilde{\delta } t}, \\&\quad \left| \frac{\partial ^2 Y^{y+\epsilon }_{t}}{\partial y \partial \epsilon }\right| \le \frac{\displaystyle 2 {{\mathrm {e}}}^{c\tilde{w}_2(t)+c^{2}\tilde{\delta } t}}{\displaystyle y \left\{ 1-\frac{\gamma c\rho \lambda _1}{1-\gamma }(y+\epsilon ) \int _{0}^{t} {{\mathrm {e}}}^{c\tilde{w}_2(s)+c^{2}\tilde{\delta } s}ds\right\} ^2 }. \end{aligned} \end{aligned}$$
(C.19)

Using (C.18) and (C.19), we have

$$\begin{aligned} \sup _{\epsilon \in [0,1]} \left| \frac{\partial ^2 }{\partial \epsilon \partial y} \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) \right| \le K_T (y+1) \int ^t_0 \left( {{\mathrm {e}}}^{4c\tilde{w}_2(s)+4c^{2}\tilde{\delta } s} +1 \right) ds. \end{aligned}$$
(C.20)

From (C.17) and (C.20) we have

$$\begin{aligned} \begin{aligned} \left| \frac{1}{h} \int ^h_0 \frac{\partial ^2 }{\partial \epsilon \partial y} \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) d\epsilon \right|&\le K_T (y+1) \int ^t_0 \left( {{\mathrm {e}}}^{4c\tilde{w}_2(s)+4c^{2}\tilde{\delta } s} +1 \right) ds. \end{aligned} \end{aligned}$$
(C.21)

Moreover, we have

$$\begin{aligned} \tilde{E}\left[ \int ^t_0 {{\mathrm {e}}}^{4c\tilde{w}_2(s)+4c^{2}\tilde{\delta } s} ds \right] =\int ^t_0 {{\mathrm {e}}}^{8c^2s+4c^{2}\tilde{\delta } s}ds < \infty . \end{aligned}$$
(C.22)

Using (C.16), (C.21), (C.22) and the dominated convergence theorem, we have

$$\begin{aligned} \frac{\partial ^2 \phi }{\partial y^2}(t,y)&=\lim _{h\rightarrow 0}\tilde{E}\left[ \frac{1}{h} \int ^h_0 \frac{\partial ^2 }{\partial \epsilon \partial y} \left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^{y+\epsilon })ds } \right) d\epsilon \right] \\&=\tilde{E}\left[ \frac{\partial ^2 }{\partial y^2}\left( {{\mathrm {e}}}^{\int ^{t}_0 \theta (Y_s^y)ds } \right) \right] . \end{aligned}$$


About this article


Cite this article

Hata, H. Risk-Sensitive Asset Management with Lognormal Interest Rates. Asia-Pac Financ Markets 28, 169–206 (2021). https://doi.org/10.1007/s10690-020-09312-6
