
Risk-sensitive portfolio optimization problem for a large trader with inside information

Original Paper
Japan Journal of Industrial and Applied Mathematics

Abstract

We consider a financial model that captures the characteristics of a large trader who is also an insider. The trader's orders influence the price dynamics, and the insider's additional information consists of the final price perturbed by a blurring noise that vanishes as the terminal time approaches. In this setting, we derive an explicit solution of a risk-sensitive portfolio optimization problem with a finite time horizon.


References

  1. Back, K.: Insider trading in continuous time. Rev. Financ. Stud. 5, 387–409 (1992)


  2. Bensoussan, A.: Stochastic Control of Partially Observable Systems. Cambridge University Press, Cambridge (1992)

  3. Corcuera, J.M., Imkeller, P., Kohatsu-Higa, A., Nualart, D.: Additional utility of insiders with imperfect dynamical information. Financ. Stoch. 8, 437–450 (2004)


  4. Danilova, A., Monoyios, M., Ng, A.: Optimal investment with inside information and parameter uncertainty. Math. Financ. Econ. 3, 13–38 (2010)


  5. Hata, H.: Risk-sensitive asset management in a general diffusion factor model: risk-seeking case. Jpn. J. Ind. Appl. Math. 34(1), 59–98 (2017)


  6. Hata, H., Iida, Y.: A risk-sensitive stochastic control approach to an optimal investment problem with partial information. Financ. Stoch. 10, 395–426 (2006)


  7. Hata, H., Kohatsu-Higa, A.: Two examples of an insider with medium/long term effects on the underlying. In: Recent Advances in Financial Engineering (Proceedings of KIER-TMU International Workshop on Financial Engineering 2010), pp. 19–42 (2011)

  8. Hata, H., Kohatsu-Higa, A.: A market model with medium/long-term effects due to an insider. Quant. Financ. 13(3), 421–437 (2013)


  9. Hata, H., Sekine, J.: Solving long term optimal investment problems with Cox–Ingersoll–Ross interest rates. Adv. Math. Econ. 8, 231–255 (2006)


  10. Hata, H., Sekine, J.: Explicit solution to a certain non-ELQG risk-sensitive stochastic control problem. Appl. Math. Optim. 62(3), 341–380 (2010)


  11. Imkeller, P., Sheu, S.J.: Malliavin's calculus in insiders models: additional utility and free lunches. Math. Financ. 1, 153–169 (2003)


  12. Karatzas, I., Pikovsky, I.: Anticipative portfolio optimization. Adv. Appl. Probab. 28, 1095–1122 (1996)


  13. Kohatsu-Higa, A.: Models for insider trading with finite utility. Paris-Princeton Lectures on Mathematical Finance Series: Lecture Notes in Mathematics, Vol. 1919, pp. 103–172 (2007)

  14. Kohatsu-Higa, A., Sulem, A.: Utility maximization in an insider influenced market. Math. Financ. 16, 153–179 (2006)


  15. Kohatsu-Higa, A., Yamazato, M.: Insider models with finite utility in markets with jumps. Appl. Math. Optim. 64, 217–255 (2011)


  16. Pham, H.: A large deviations approach to optimal long term investment. Financ. Stoch. 7(2), 169–195 (2003)


  17. Protter, P.: Stochastic Integration and Differential Equations. Springer, Berlin (2004)


  18. Sekine, J.: A note on long-term optimal portfolios under drawdown constraints. Adv. Appl. Probab. 38(3), 673–692 (2006)



Acknowledgements

The author thanks the anonymous referee for valuable comments and suggestions. The author also thanks Professor A. Kohatsu-Higa for his helpful comments. This work was supported by Grant-in-Aid for Young Scientists (B) No. 15K17584 from the Japan Society for the Promotion of Science.

Author information


Corresponding author

Correspondence to Hiroaki Hata.

Appendices

Appendix A: Proof of Lemma 2.1

Define

$$\begin{aligned} X(t):=\int ^{(T-t)^\theta }_0 g_2(u)dW'(u). \end{aligned}$$

Then, using (2.2) and (2.3), we obtain

$$\begin{aligned} X(t)&=X(0)+\int ^t_0 G(u)d\alpha (u)+\int ^t_0 g_1(u)d\hat{W}(u)\nonumber \\ {}&\quad +\int ^t_0 \left\{ \dot{G}(u)+g_1^2(u) \right\} \alpha (u)du. \end{aligned}$$
(A.1)

In particular, X is a semimartingale in the filtration \({\mathcal {G}}\). From (2.2), recall that W is also a semimartingale in the filtration \({\mathcal {G}}\). Now, for each filtration \(\mathcal {K}\) we denote by \(\langle X_1, X_2 \rangle ^{\mathcal {K}}\) the quadratic covariation process of \(X_1, X_2\) with respect to \(\mathcal {K}\). Using Theorem 23 (ii) of Chapter II in [17] and taking a partition \(0=t_0<t_1< \cdots <t_n=t\), we have

$$\begin{aligned} \langle W(\cdot ), X(\cdot ) \rangle ^{\mathcal {G}}_t&=\left\langle W(\cdot ), \int ^{(T-(\cdot ))^\theta }_0 g_2(u)dW'(u)\right\rangle ^{ \mathcal {F}}_t\nonumber \\&=-\lim _{n\rightarrow \infty }\sum _{k=1}^n (W(t_k)-W(t_{k-1}))\left( \int ^{(T-t_{k-1})^\theta }_{(T-t_{k})^\theta } g_2(s)dW'(s) \right) \quad \text {in probability}. \nonumber \end{aligned}$$
(A.2)

Now, we compute, for \(\eta \in \mathbb {R}\)

$$\begin{aligned}&E\left[ {\mathrm {e}}^{-i \eta \sum _{k=1}^n (W(t_k)-W(t_{k-1}))(X(t_{k-1})-X(t_k))}\right] \\&\quad =\prod _{k=1}^n E\left[ {\mathrm {e}}^{-i \eta (W(t_k)-W(t_{k-1}))(X(t_{k-1})-X(t_k))}\right] \\&\quad = \displaystyle \prod _{k=1}^n E\left[ E\left[ {\mathrm {e}}^{-i \eta (W(t_k)-W(t_{k-1}))(X(t_{k-1})-X(t_k))} \Bigr |X(t_{k-1}), X(t_k) \right] \right] \\&\quad =\prod _{k=1}^nE \left[ \frac{1}{\sqrt{2\pi (t_k-t_{k-1})}} \int ^{\infty }_{-\infty } {\mathrm {e}}^{-i \eta (X(t_{k-1})-X(t_k)) x-\frac{x^2}{2(t_k-t_{k-1})}}dx \right] \\&\quad =\prod _{k=1}^nE \left[ {\mathrm {e}}^{-\frac{1}{2}(t_k-t_{k-1}) \eta ^2(X(t_{k-1})-X(t_k))^2} \right] \\&\quad =\prod _{k=1}^n\left\{ 1+\eta ^2 (t_k-t_{k-1}) \int ^{(T-t_{k-1})^\theta }_{(T-t_{k})^\theta } g^2_2(s)ds \right\} ^{-1/2}. \end{aligned}$$
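The last two equalities use two elementary Gaussian facts, recorded here for convenience: for \(Z\sim N(0,v)\), \(u\in \mathbb {R}\) and \(\lambda \ge 0\),

$$\begin{aligned} E\left[ {\mathrm {e}}^{iuZ}\right] ={\mathrm {e}}^{-\frac{u^2 v}{2}}, \qquad E\left[ {\mathrm {e}}^{-\lambda Z^2}\right] =\left( 1+2\lambda v\right) ^{-1/2}, \end{aligned}$$

applied first with \(Z=W(t_k)-W(t_{k-1})\), \(v=t_k-t_{k-1}\), and then with \(Z=X(t_{k-1})-X(t_k)\), \(v=\int ^{(T-t_{k-1})^\theta }_{(T-t_{k})^\theta } g^2_2(s)ds\), \(\lambda =\frac{1}{2}(t_k-t_{k-1})\eta ^2\).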

Hence, we see that

$$\begin{aligned} E\left[ {\mathrm {e}}^{-i \eta \sum _{k=1}^n (W(t_k)-W(t_{k-1}))(X(t_{k-1})-X(t_k))}\right] \rightarrow 1 \ \mathrm{as} \ n\rightarrow \infty . \end{aligned}$$

Hence, \(\sum _{k=1}^n (W(t_k)-W(t_{k-1}))(X(t_{k-1})-X(t_k))\) converges in distribution to the constant 0 and therefore also converges to 0 in probability. Consequently, there exists a subsequence \(\{n_\ell \}\) such that \(\sum _{k=1}^{n_\ell } (W(t_k)-W(t_{k-1}))(X(t_{k-1})-X(t_k))\) converges to 0 almost surely. Using (A.2), we see that

$$\begin{aligned} \langle W(\cdot ), X(\cdot ) \rangle ^{\mathcal {G}}_t=0, \end{aligned}$$

and

$$\begin{aligned} \langle \hat{W}(\cdot ), X(\cdot ) \rangle ^{\mathcal {G}}_t=0. \end{aligned}$$
(A.3)

For \(s<t\) we have

$$\begin{aligned} E\left[ X(t)-X(s) \Bigr | \mathcal {G}_s \right]&=-E\left[ \int ^{(T-s)^\theta }_{(T-t)^\theta }g_2(u)dW'(u)\biggr | \mathcal {G}_s \right] \\&=-E\left[ \int ^{(T-s)^\theta }_{(T-t)^\theta }g_2(u)dW'(u)\biggr | \int ^T_s g_1(u)dW(u)\right. \\ {}&\quad +\left. \int ^{(T-s)^\theta }_0 g_2(u)dW'(u) \right] \\&=-\int ^{(T-s)^\theta }_{(T-t)^\theta }g^2_2(u)du \cdot \frac{1}{G(s)} \left( \int ^T_s g_1(u)dW(u)\right. \\ {}&\quad +\left. \int ^{(T-s)^\theta }_0 g_2(u)dW'(u) \right) \\&=-\int ^{(T-s)^\theta }_{(T-t)^\theta }g^2_2(u)du \cdot \alpha (s) \\&=-\int ^{t}_{s}f^2(u)du \cdot \alpha (s). \end{aligned}$$
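The third equality above uses the projection formula for jointly Gaussian random variables, recorded here for convenience: if \((Y,Z)\) is centered and jointly Gaussian with \(\mathrm {Var}(Z)>0\), then

$$\begin{aligned} E\left[ Y | Z \right] =\frac{\mathrm {Cov}(Y,Z)}{\mathrm {Var}(Z)}\,Z. \end{aligned}$$

Here \(Y=\int ^{(T-s)^\theta }_{(T-t)^\theta }g_2(u)dW'(u)\) and \(Z=\int ^T_s g_1(u)dW(u)+\int ^{(T-s)^\theta }_0 g_2(u)dW'(u)\); since \(W\) and \(W'\) are independent, the Itô isometry gives \(\mathrm {Cov}(Y,Z)=\int ^{(T-s)^\theta }_{(T-t)^\theta }g^2_2(u)du\), while the factor \(1/G(s)\) corresponds to \(\mathrm {Var}(Z)=G(s)\) and the fourth equality identifies \(Z/G(s)\) with \(\alpha (s)\).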

Moreover, we have

$$\begin{aligned}&E\left[ X(t)-X(s)+\int ^t_s \frac{f^2(u)}{G(u)} \left( \int ^T_u g_1(v)dW(v)\right. \right. \nonumber \\ {}&\quad +\left. \left. \int ^{(T-u)^\theta }_0 g_2(v)dW'(v) \right) du \biggr |\mathcal {G}_s \right] \\&= -\int ^{t}_{s}f^2(u)du \cdot \alpha (s)+ \int ^t_s \frac{f^2(u)}{G(u)}E\left[ \int ^T_u g_1(v)dW(v)\right. \nonumber \\ {}&\quad +\left. \int ^{(T-u)^\theta }_0 g_2(v)dW'(v) \biggr |\mathcal {G}_s \right] du \nonumber \\&= -\int ^{t}_{s}f^2(u)du \cdot \alpha (s)+ \int ^t_s \frac{f^2(u)}{G(u)} \nonumber \\&\qquad \cdot E\left[ \int ^T_u g_1(v)dW(v)+\int ^{(T-u)^\theta }_0 g_2(v)dW'(v) \biggr |\int ^T_s g_1(v)dW(v)\right. \nonumber \\ {}&\quad +\left. \int ^{(T-s)^\theta }_0 g_2(v)dW'(v) \right] du \nonumber \\&= -\int ^{t}_{s}f^2(u)du \cdot \alpha (s)+ \int ^t_s \frac{f^2(u)}{G(u)}\cdot \frac{G(u)}{G(s)}\left( \int ^T_s g_1(v)dW(v)\right. \nonumber \\&\quad + \left. \int ^{(T-s)^\theta }_0 g_2(v)dW'(v) \right) du \nonumber \\&= -\int ^{t}_{s}f^2(u)du \cdot \alpha (s)+\int ^{t}_{s}f^2(u)du \cdot \alpha (s) \nonumber \\&=0. \nonumber \end{aligned}$$
(A.4)

Setting

$$\begin{aligned} M^X(t):=X(t)+\int ^t_0 f(s)^2 \alpha (s) ds, \end{aligned}$$

and using (A.4), we see that \(\left\{ M^X(t); t\in [0, T) \right\} \) is a \(\mathcal {G}_t\)-martingale. Then, we have

$$\begin{aligned} \langle M^X(\cdot )\rangle ^{\mathcal {G}}_t=\langle X(\cdot ) \rangle ^{\mathcal {G}}_t = \langle X(\cdot ) \rangle ^{\mathcal {F}}_t=\int ^t_0f^2(s)ds. \end{aligned}$$
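Here the first equality holds because \(M^X\) and \(X\) differ by the continuous finite-variation process \(\int ^{\cdot }_0 f^2(s)\alpha (s)ds\). This computation is what makes the process \(\widehat{B}\) introduced next a Wiener process: as a brief worked step,

$$\begin{aligned} \langle \widehat{B}(\cdot ) \rangle ^{\mathcal {G}}_t=\int ^t_0 \frac{1}{f^2(s)}d\langle M^X(\cdot )\rangle ^{\mathcal {G}}_s=\int ^t_0 \frac{f^2(s)}{f^2(s)}ds=t, \end{aligned}$$

so Lévy's characterization theorem applies.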

Then, \(\left\{ \widehat{B}(t); t\in [0, T) \right\} \) defined by

$$\begin{aligned} \widehat{B}(t):=\int ^t_0 \frac{1}{f(s)}dM^X(s) \end{aligned}$$

is a \(\mathcal {G}_t\)-Wiener process. Moreover, we have

$$\begin{aligned} \langle \widehat{W}(\cdot ), \widehat{B}(\cdot ) \rangle ^{\mathcal {G}}_t=\int ^t_0 \frac{1}{f(s)}d\langle \widehat{W}(\cdot ), M^X(\cdot ) \rangle ^{\mathcal {G}}_s=\int ^t_0 \frac{1}{f(s)}d \langle \widehat{W}(\cdot ), X(\cdot ) \rangle ^{\mathcal {G}}_s=0, \end{aligned}$$

where the last equality comes from (A.3). Hence, by Lévy's characterization theorem applied to the pair \((\widehat{W}, \widehat{B})\), \(\widehat{W}\) and \(\widehat{B}\) are mutually independent \(\mathcal {G}_t\)-Wiener processes. On the other hand, we recall

$$\begin{aligned} X(t)=X(0)-\int ^t_0 f^2(s) \alpha (s)ds +\int ^t_0 f(s) d\widehat{B}(s). \end{aligned}$$

and

$$\begin{aligned} G(t)\alpha (t)=\int ^T_t g_1(s)d\widehat{W}(s)+\int ^T_t g^2_1(s)\alpha (s)ds+X(t). \end{aligned}$$

Then, we have

$$\begin{aligned} d\alpha (t)=\frac{1}{G(t)}\left\{ -g_1(t)d\widehat{W}(t)+f(t)d\widehat{B}(t) \right\} . \end{aligned}$$
(A.5)
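For completeness, (A.5) can be checked as follows. This is a sketch; it uses the relation \(\dot{G}(t)=-g^2_1(t)-f^2(t)\), which holds if \(G(t)=\int ^T_t g^2_1(u)du+\int ^{(T-t)^\theta }_0 g^2_2(u)du\) and \(\int ^{(T-s)^\theta }_{(T-t)^\theta }g^2_2(u)du=\int ^t_s f^2(u)du\), as in the computation of \(E[X(t)-X(s)|\mathcal {G}_s]\) above. Differentiating the identity for \(G(t)\alpha (t)\) gives

$$\begin{aligned} \dot{G}(t)\alpha (t)dt+G(t)d\alpha (t)=d\left( G(t)\alpha (t)\right) =-g_1(t)d\widehat{W}(t)-g^2_1(t)\alpha (t)dt+dX(t), \end{aligned}$$

and inserting \(dX(t)=-f^2(t)\alpha (t)dt+f(t)d\widehat{B}(t)\), solving for \(d\alpha (t)\) and using \(\dot{G}(t)=-g^2_1(t)-f^2(t)\) yields (A.5).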

Appendix B: Proof of Lemma 3.1

(1) This is obtained immediately.

(2) Now, we perform the change of variables \(G_{1}(s)=g_{1}(s)(\dot{g}_{1}(s))^{-1}\). Then, the equation (3.11) becomes

$$\begin{aligned} 1-\dot{G}_{1}(s)=\frac{a+\sigma ^{2}+\{(1-\gamma )\sigma ^{2}-2a\} \theta (T-s)^{\theta -1}}{a}. \end{aligned}$$

The general solution of the above equation is (for a suitable constant \(C_1\))

$$\begin{aligned} G_{1}(s)=C_{1}-\frac{\sigma ^{2}}{a}s+\frac{(1-\gamma ) \sigma ^{2}-2a}{a}\left( (T-s)^{\theta }-T^{\theta }\right) . \end{aligned}$$

From here, one obtains that for a fixed \(t_{0}\in [0,T]\) and a constant \(C_{2}\), the solution is

$$\begin{aligned} g_{1}(t)=C_{2}\exp \left( \int _{t_{0}}^{t}\left( C_{1}-\frac{\sigma ^{2}s}{a}+\frac{(1-\gamma )\sigma ^{2}-2a}{a}\left( (T-s)^{\theta }-T^{\theta }\right) \right) ^{-1}ds\right) . \end{aligned}$$

In order to determine the constants, we take \(t_{0}=0\); then \(C_{2}=\sigma \) follows from the condition \(g_{1}(0)=\sigma \). Moreover, if we take \(C_{1}\) as

$$\begin{aligned} C_{1}:=\frac{\sigma ^{2}T}{a}+\frac{(1-\gamma )\sigma ^{2}-2a}{a}T^{\theta }, \end{aligned}$$

then we obtain (3.13).
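Indeed, with this choice of \(C_{1}\) the expression inside the integral simplifies; a short check gives

$$\begin{aligned} G_{1}(t)=C_{1}-\frac{\sigma ^{2}}{a}t+\frac{(1-\gamma )\sigma ^{2}-2a}{a}\left( (T-t)^{\theta }-T^{\theta }\right) =\frac{1}{a}\left[ \sigma ^{2}(T-t)+\{(1-\gamma )\sigma ^{2}-2a\}(T-t)^{\theta }\right] , \end{aligned}$$

which is the expression recalled as (B.1) below.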

Further, under (H2), we observe \(\dot{g}_1(t)>0\) and that

$$\begin{aligned} \sigma \le g_{1}(t) \le \sigma \exp \left\{ \frac{a}{(1-\gamma )\sigma ^{2}-2a}\int ^{t}_{0}\frac{du}{(T-u)^{\theta }}\right\} . \end{aligned}$$
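In particular, the exponent in the upper bound stays bounded as \(t\uparrow T\); as a short check, assuming \(\theta \in (0,1)\) (which the estimates below, with factors such as \(T^{1-\theta }\) and \((T-t)^{1-\theta }\), presuppose),

$$\begin{aligned} \int ^{t}_{0}\frac{du}{(T-u)^{\theta }}=\frac{T^{1-\theta }-(T-t)^{1-\theta }}{1-\theta }\le \frac{T^{1-\theta }}{1-\theta }. \end{aligned}$$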

Hence, we have (3.13), and the assertion on \(\int _{0}^{T}\frac{\left| g_{1}(s)\right| ^{2}}{G(s)}ds\) follows immediately.

(3) Note that \(G(t)=\int ^{T}_{t}\left\{ 1+\theta (T-u)^{\theta -1}\right\} g^{2}_{1}(u)du\). From (3.15) we have

$$\begin{aligned} \dot{P}(t)&=\frac{1}{\gamma }\left[ \dot{G}(t)-\frac{(1-\gamma )\sigma ^{2}-2a}{a}\frac{G(t)}{g^{4}_{1}(t)} \left[ \left\{ \ddot{g}_{1}(t)G(t)+2\dot{g}_{1} (t)\dot{G}(t)\right\} g_{1}(t)\right. \right. \\ {}&\quad \left. \left. -3\dot{g}^{2}_{1}(t)G(t)\right] \right] \end{aligned}$$

Therefore, we observe that

$$\begin{aligned}&\dot{P}(t)+\frac{\gamma }{G^{2}(t)}\left\{ -\dot{G}(t)+ \frac{\gamma \sigma ^{2}g^{2}_{1}(t)}{(1-\gamma )\sigma ^{2}-2a}\right\} P^{2}(t) \\&\qquad -\frac{2\gamma }{(1-\gamma ) \sigma ^{2}-2a}\frac{\sigma ^{2}g^{2}_{1}(t)}{G(t)}P(t)+ \frac{\sigma ^{2} g^{2}_{1}(t)}{(1-\gamma )\sigma ^{2}-2a} \\&\quad = \dot{P}(t)+\frac{\sigma ^{2}g^{2}_{1}(t)}{(1-\gamma ) \sigma ^{2}-2a} \left\{ -\frac{\gamma P(t)}{G(t)}+1\right\} ^{2}-\frac{\gamma \dot{G}(t)}{G^{2}(t)}P^{2}(t)\\&\quad =\frac{1}{\gamma }\left[ \dot{G}(t)- \frac{(1-\gamma )\sigma ^{2}-2a}{a} \frac{G(t)}{g^{4}_{1}(t)} \left\{ \left( \ddot{g}_{1}(t)G(t)+2\dot{g}_{1} (t)\dot{G}(t)\right) g_{1}(t)- 3\dot{g}^{2}_{1}(t)G(t)\right\} \right] \\&\qquad +\frac{\sigma ^{2}\{(1-\gamma ) \sigma ^{2}-2a\}}{a^{2}}\frac{\dot{g}^{2}_{1} (t)G^{2}(t)}{g^{4}_{1}(t)}- \frac{\dot{G}(t)}{\gamma }\left\{ 1- \frac{(1-\gamma )\sigma ^{2}-2a}{a} \frac{\dot{g}_{1}(t)G(t)}{g^{3}_{1}(t)}\right\} ^{2}\\&\quad =-\frac{(1-\gamma ) \sigma ^{2}-2a}{\gamma a^{2}}\frac{G^{2}(t)}{g^{6}_{1}(t)}\left[ a \ddot{g}_{1}(t)g^{3}_{1}(t)-(3a+\gamma \sigma ^{2})\dot{g}^{2}_{1}(t)g^{2}_ {1}(t)+\{(1-\gamma )\sigma ^{2}\right. \\&\qquad -\left. 2a\}\dot{g}^{2}_{1}(t)\dot{G}(t)\right] \\&\quad =\frac{(1-\gamma )\sigma ^{2}-2a}{\gamma a^{2}}\frac{G^{2}(t)}{g^{4}_{1} (t)}\left\{ -a\ddot{g}_{1}(t)g_{1}(t)+\left[ (a+\sigma ^{2})+\{ (1-\gamma )\sigma ^{2}\right. \right. \\&\qquad -\left. \left. 2a\}\theta (T-t)^{\theta -1}\right] \dot{g}^{2}_{1}(t)\right\} \\&\quad =0. \end{aligned}$$
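The final equality holds because the last bracket vanishes: combining the display in the proof of (2) with \(1-\dot{G}_{1}(t)=g_{1}(t)\ddot{g}_{1}(t)/\dot{g}^{2}_{1}(t)\), which follows from \(G_{1}=g_{1}/\dot{g}_{1}\), gives

$$\begin{aligned} a\ddot{g}_{1}(t)g_{1}(t)=\left[ (a+\sigma ^{2})+\{(1-\gamma )\sigma ^{2}-2a\}\theta (T-t)^{\theta -1}\right] \dot{g}^{2}_{1}(t). \end{aligned}$$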

Next, from the proof of (2), recall that

$$\begin{aligned} G_{1}(t)=\frac{g_{1}(t)}{\dot{g}_{1}(t)}=\frac{1}{a}\left[ \sigma ^{2}(T-t)+\{(1-\gamma )\sigma ^{2}-2a\}(T-t)^{\theta }\right] . \end{aligned}$$
(B.1)

Then, we have

$$\begin{aligned} P(t)&=\frac{G(t)}{\gamma }\left[ 1-\frac{(1-\gamma )\sigma ^{2}-2a}{\sigma ^{2}(T-t)+\{(1-\gamma )\sigma ^{2}-2a\} (T-t)^{\theta }}\cdot \frac{G(t)}{g^{2}_{1}(t)} \right] \\&=\frac{G(t)}{[\sigma ^{2}(T-t)+\{ (1-\gamma )\sigma ^{2}-2a\}(T-t)^{\theta }]g^{2}_{1}(t) } \kappa (t), \nonumber \end{aligned}$$
(B.2)

where \(\kappa (t)\) is defined by

$$\begin{aligned} \kappa (t):=\frac{1}{\gamma } \left( [\sigma ^{2}(T-t)+\{(1-\gamma )\sigma ^{2}-2a\}(T-t)^{\theta }]g^{2}_{1}(t) -\{(1-\gamma )\sigma ^{2}-2a\}G(t) \right) . \end{aligned}$$

Then, since \(\dot{\kappa }(t)=-\sigma ^2 g^2_1(t)<0\) for \(t\in [0,T]\) and \(\kappa (T)=0\), we see that \(\kappa (t)\ge 0\) for \(t\in [0,T]\) and \(\kappa (t)>0\) for \(t\in [0,T)\). Hence, \(P(t)\ge 0\) for \(t\in [0,T]\) and, in particular, \(P(t)> 0\) for \(t\in [0,T)\).

Note that under (H2), \(\dot{g}_1(t)>0\) holds; in particular, \(g_1\) is nondecreasing, so that \(g^2_1(u)\ge g^2_1(t)\) for \(u\ge t\). Therefore, we observe

$$\begin{aligned} \frac{G(t)}{g^{2}_{1}(t)} \ge (T-t)+(T-t)^{\theta }, \end{aligned}$$

and from (3.14) and (B.2) we see that there is \(K_T>0\) such that

$$\begin{aligned} P(t)&\le \frac{G(t)}{\gamma } \frac{(\gamma \sigma ^2+2a)(T-t)}{ \sigma ^{2}(T-t)+\{(1-\gamma )\sigma ^{2}-2a\}(T-t)^{\theta }}\\&\le K_T(T-t). \end{aligned}$$

Appendix C: Proof of Lemma 3.2

Set

$$\begin{aligned} \xi _{t}:=-\frac{\gamma }{(1-\gamma )\sigma ^{2}-2a}\frac{\sigma ^{2}g^{2}_{1}(t)}{G(t)} +\frac{\gamma }{G^{2}(t)}\left\{ -\dot{G}(t)+\frac{\gamma \sigma ^{2}g^{2}_{1}(t)}{(1-\gamma )\sigma ^{2}-2a}\right\} P(t). \end{aligned}$$

From (3.15) we have

$$\begin{aligned} \xi _{t}&\le K_T \frac{\{1+(T-t)^{\theta -1}\}(T-t)}{\{T-t+(T-t)^{1-\theta }\}^2}\le \frac{K_{T}}{(T-t)^{\theta }}, \end{aligned}$$

and

$$\begin{aligned} \int ^{s}_{t} \xi _{u}du \le K_{T}T^{1-\theta } \quad \text {for} \ s\ge t. \end{aligned}$$

Moreover, from (3.17) we observe that

$$\begin{aligned} \left| -\frac{\gamma P(t)}{G(t)}+1 \right|&\le K_T \frac{T-t}{T-t +(T-t)^{1-\theta }}+1 \le K_{T}. \end{aligned}$$

Noting that, by the variation of constants method, we have

$$\begin{aligned} Q(t)=\int _{t}^{T}{\mathrm {e}}^{\int ^{s}_{t}\xi _{u}du} \frac{(\mu -r)\sigma g_{1}(s)}{(1-\gamma )\sigma ^{2}-2a} \left\{ -\frac{\gamma P(s)}{G(s)}+1\right\} ds, \end{aligned}$$

we obtain \(|Q(t)|\le K_{T}(T-t)\).
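For the reader's convenience, the representation of \(Q\) used above is the standard variation-of-constants formula; as a sketch (we do not restate (3.16) here), for a linear terminal-value problem of the form \(\dot{q}(t)+\xi _{t}q(t)+\psi (t)=0\) with \(q(T)=0\), one has

$$\begin{aligned} \frac{d}{dt}\left( {\mathrm {e}}^{\int ^{t}_{0}\xi _{u}du}q(t)\right) =-{\mathrm {e}}^{\int ^{t}_{0}\xi _{u}du}\psi (t), \qquad \text {hence} \quad q(t)=\int _{t}^{T}{\mathrm {e}}^{\int ^{s}_{t}\xi _{u}du}\psi (s)ds, \end{aligned}$$

and the display corresponds to \(\psi (s)=\frac{(\mu -r)\sigma g_{1}(s)}{(1-\gamma )\sigma ^{2}-2a}\left\{ -\frac{\gamma P(s)}{G(s)}+1\right\} \).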

As for the estimation of R(t), we have

$$\begin{aligned} |R(t)|&\le K_{T}\int ^{T}_{t}\left\{ \left| \frac{\dot{G}(u)}{G^{2}(u)} \right| |P(u)|+\left| \frac{\dot{G}(u)}{G^{2}(u)}\right| |Q(u)|^{2}+ \left| \frac{Q(u)}{G(u)}\right| ^{2}+1 \right\} du\\&\le K_{T} (T-t)^{1-\theta }. \end{aligned}$$

Moreover, from (B.2) we have, for \(t\in [0, T)\)

$$\begin{aligned} \frac{Q^{2}(t)}{P(t)}&=\frac{\gamma [\sigma ^{2}(T-t)+\{(1-\gamma )\ \sigma ^{2}-2a\}(T-t)^{\theta }]g^{2}_{1}(t) Q^{2}(t)}{G(t)[(\sigma ^{2}(T-t)+ \{(1-\gamma )\sigma ^{2}-2a\}(T-t)^{\theta })g^{2}_{1}(t)-\{(1-\gamma ) \sigma ^{2}-2a\}G(t)]}\\&\le K_{T}J(t), \end{aligned}$$

where

$$\begin{aligned} J(t):=\frac{\gamma (T-t)^{2}}{[\sigma ^{2}(T-t)+\{(1-\gamma )\sigma ^{2}-2a\} (T-t)^{\theta }]g^{2}_{1}(t)-\{(1-\gamma )\sigma ^{2}-2a\}G(t)}. \end{aligned}$$

Here, we observe

$$\begin{aligned} \lim _{t\uparrow T}J(t)&=\lim _{t\uparrow T}\frac{-2\gamma (T-t)}{(-\gamma \sigma ^{2}-2a)g^{2}_{1}(t)+2[\sigma ^{2}(T-t)+\{(1-\gamma ) \sigma ^{2}-2a\}(T-t)^{\theta }]\dot{g}_{1}(t)g_{1}(t)}\\&=\lim _{t\uparrow T}\frac{-2\gamma (T-t)}{(-\gamma \sigma ^{2}-2a)g^{2}_{1}(t)+2a g^{2}_{1}(t)}\\&=\frac{2}{\sigma ^{2}}\lim _{t \uparrow T}\frac{T-t}{g^{2}_{1}(t)}, \end{aligned}$$

where the first equality is obtained by l'Hôpital's rule (both the numerator and the denominator of \(J\) vanish as \(t\uparrow T\)), and the second equality follows from (B.1), since \(2\left[ \sigma ^{2}(T-t)+\{(1-\gamma )\sigma ^{2}-2a\}(T-t)^{\theta }\right] \dot{g}_{1}(t)g_{1}(t)=2aG_{1}(t)\dot{g}_{1}(t)g_{1}(t)=2ag^{2}_{1}(t)\). Hence, we see that \(J(t)=\mathcal {O}\left( T-t\right) \) as \(t\uparrow T\).

Appendix D: The martingale property of \(\{ Z_{t}(\widehat{\pi })\}_{t\in [0,T)}\)

Lemma D.1

Assume (3.13). Let \(h_i(t, \alpha )\ (i=1,2)\) be continuous functions satisfying \(|h_i(t, \alpha )|\le C(1+|\alpha |)\). Define \(\rho _t\) by

$$\begin{aligned} \rho _t:=\mathcal {E}\left( \int ^{(\cdot )}_{0}h_1(s,\alpha (s) ) d\widehat{W}(s)+\int ^{(\cdot )}_{0}h_2(s,\alpha (s)) d\widehat{B}(s)\right) _{t}. \end{aligned}$$

Then, \(\{ \rho _{t}\}_{t\in [0,T)}\) is a \(\mathcal {G}_t\)-martingale.

Proof

The proof follows arguments similar to those of Lemma 4.1.1 in [2]. Recall that

$$\begin{aligned}&d\rho _t=\rho _t \left\{ h_1(t,\alpha (t) ) d\widehat{W}(t)+h_2(t,\alpha (t)) d\widehat{B}(t)\right\} . \end{aligned}$$
(D.1)

For \(s\le t\) we set \(\rho _{t,s}:=\rho _t \rho _s^{-1}\). Let \(\epsilon >0\) be arbitrary. Then, we have

$$\begin{aligned}&d\left( \frac{\rho _{t,s}}{1+\epsilon \rho _{t,s}} \right) = d\overline{N}_{t,s}-\overline{A}_{t,s} dt, \end{aligned}$$
(D.2)

where \(\overline{N}_{t,s}\) and \(\overline{A}_{t,s}\) are defined by

$$\begin{aligned} \overline{N}_{t,s}&:=\int ^{t}_{s} \frac{\rho _{u,s}}{\left( 1+\epsilon \rho _{u,s}\right) ^2} \left\{ h_1(u,\alpha (u)) d\widehat{W}(u)+h_2(u,\alpha (u)) d\widehat{B}(u)\right\} ,\\ \overline{A}_{t,s}&:=\frac{\epsilon \rho _{t,s}^2}{(1+\epsilon \rho _{t,s})^3}\left\{ h_1^2(t,\alpha (t))+h_2^2(t,\alpha (t)) \right\} . \end{aligned}$$
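For completeness, (D.2) is Itô's formula applied to \(\varphi (x)=x/(1+\epsilon x)\) along (D.1): since \(\varphi '(x)=(1+\epsilon x)^{-2}\), \(\varphi ''(x)=-2\epsilon (1+\epsilon x)^{-3}\) and \(d\langle \rho _{\cdot ,s}\rangle _t=\rho ^2_{t,s}\left\{ h^2_1(t,\alpha (t))+h^2_2(t,\alpha (t))\right\} dt\), we have

$$\begin{aligned} d\varphi (\rho _{t,s})=\frac{\rho _{t,s}}{(1+\epsilon \rho _{t,s})^2}\left\{ h_1(t,\alpha (t))d\widehat{W}(t)+h_2(t,\alpha (t))d\widehat{B}(t)\right\} -\frac{\epsilon \rho ^2_{t,s}}{(1+\epsilon \rho _{t,s})^3}\left\{ h^2_1(t,\alpha (t))+h^2_2(t,\alpha (t))\right\} dt, \end{aligned}$$

which is (D.2) with \(\overline{N}_{t,s}\) and \(\overline{A}_{t,s}\) as above.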

If we can check

$$\begin{aligned} E\left[ \rho _{t,s} \alpha ^2(t)\right] \le K_{t,T}, \end{aligned}$$
(D.3)

we see that

$$\begin{aligned} E\left[ |\overline{N}_{t,s}|^2 \right]&\le \overline{C}_{\epsilon } E\left[ \int ^{t}_{s} \left\{ \rho _{u,s}+ \rho _{u,s} \alpha ^2(u)\right\} du \right] \\&\le \overline{C}_{\epsilon }(1+K_{t,T})T, \end{aligned}$$

and that \(\overline{N}_{t,s}\) is a square-integrable martingale. Here, we use \(E\left[ \rho _{t,s} \right] \le 1\). Therefore, integrating (D.2) on \([s,t]\) and taking expectations on both sides, we have

$$\begin{aligned} E\left[ \frac{\rho _{t,s}}{1+\epsilon \rho _{t,s}} \Bigr | \mathcal {G}_s \right]&=\frac{1}{1+\epsilon }-E\left[ \int ^t_s \overline{A}_{u,s} du \Bigr | \mathcal {G}_s \right] \\&=\frac{1}{1+\epsilon }-E\left[ \int ^t_s \overline{A}_{u,s} du \right] . \nonumber \end{aligned}$$
(D.4)

Here, we observe the following:

  • \(\displaystyle \overline{A}_{u,s} \rightarrow 0\) for a.e. \((u, \omega ) \in [s, T] \times \Omega \) as \(\epsilon \rightarrow 0\),

  • \(\displaystyle \overline{A}_{u,s} \le K \left\{ \rho _{u,s}+ \rho _{u,s} \alpha ^2(u)\right\} \).

Hence, from (D.3) and the dominated convergence theorem, we have

$$\begin{aligned} E\left[ \int ^t_s \overline{A}_{u,s} du \right] \rightarrow 0 \ \text {as} \ \epsilon \rightarrow 0. \end{aligned}$$

Meanwhile, since \(E\left[ \rho _{t,s} \right] \le 1\),

$$\begin{aligned} E\left[ \frac{\rho _{t,s}}{1+\epsilon \rho _{t,s}} \Bigr | \mathcal {G}_s \right] \rightarrow E \left[ \rho _{t,s} | \mathcal {G}_s \right] \ \text {as} \ \epsilon \rightarrow 0. \end{aligned}$$

Letting \(\epsilon \rightarrow 0\) in (D.4), we have \(E \left[ \rho _{t,s} | \mathcal {G}_s \right] =1\), or equivalently

$$\begin{aligned} E \left[ \rho _{t} | \mathcal {G}_s \right] =\rho _{s}. \end{aligned}$$

Finally, we shall prove (D.3). Recalling that

$$\begin{aligned} d\langle \alpha (\cdot )\rangle _t=\frac{1}{G^2(t)}\left\{ g_1^2(t)+f^2(t)\right\} dt =-\frac{\dot{G}(t)}{G^2(t)}dt, \end{aligned}$$

we have

$$\begin{aligned} d\alpha ^2(t)=\frac{2\alpha (t)}{G(t)}\left\{ -g_1(t)d\widehat{W}(t)+ f(t)d\widehat{B}(t) \right\} -\frac{\dot{G}(t)}{G^2(t)}dt. \end{aligned}$$
(D.5)

From (D.2) and (D.5) we have

$$\begin{aligned} d\{\rho _{t,s} \alpha ^2(t)\}&= \frac{\rho _{t,s}}{G(t)} \left\{ -\frac{\dot{G} (t)}{G(t)}+2\alpha (t)\{-g_1(t)h_1(t, \alpha (t))+f(t)h_2(t, \alpha (t)) \} \right\} dt\\&\quad +\rho _{t,s} \alpha (t) \left[ \left( -\frac{2g_1(t)}{G(t)}+ \alpha (t)h_1(t,\alpha (t))\right) d\widehat{W}(t)+\left( \frac{2f(t)}{G(t)}+ \alpha (t)h_2(t,\alpha (t))\right) d\widehat{B}(t) \right] . \end{aligned}$$

Hence, for \(\epsilon >0\) we have

$$\begin{aligned}&d\left( \frac{\rho _{t,s} \alpha ^2(t)}{1+\epsilon \rho _{t,s} \alpha ^2(t)} \right) = d\check{N}_{t,s}+\check{A}_{t,s} dt, \end{aligned}$$
(D.6)

where

$$\begin{aligned} \check{N}_{t,s}&:=\int ^t_s \frac{\rho _{u,s} \alpha (u)}{\{1+\epsilon \rho _{u,s} \alpha ^2(u)\}^2} \left[ \left( -\frac{2g_1(u)}{G(u)}+ \alpha (u)h_1(u,\alpha (u))\right) d\widehat{W}(u) \right. \\&\quad \left. +\left( \frac{2f(u)}{G(u)}+\alpha (u)h_2(u,\alpha (u))\right) d\widehat{B}(u) \right] \end{aligned}$$

is the local-martingale part and \(\check{A}_{t,s}\) is the integrand of the bounded-variation part, which satisfies

$$\begin{aligned} \check{A}_{t,s} \le \frac{\rho _{t,s} }{\{1+\epsilon \rho _{t,s} \alpha ^2(t)\}^2}\left[ -\frac{\dot{G}(t)}{G^2(t)}+\frac{2\alpha (t)}{G(t)}\{-g_1(t)h_1(t, \alpha (t))+f(t)h_2(t, \alpha (t)) \} \right] . \end{aligned}$$

Here, observing that \(\dot{g}_1(t)>0\), and that

$$\begin{aligned} G(t)=\int ^T_t \{1+\theta (T-u)^{\theta -1}\}g_1^2(u)du \ge \{T-t+(T-t)^{\theta }\} g_1^2(t), \end{aligned}$$

we have

$$\begin{aligned} E[\check{N}_{t,s}^2]&\le E\left[ \int ^t_s \frac{\rho _{u,s}^2 \alpha ^2(u)}{\{1+\epsilon \rho _{u,s} \alpha ^2(u)\}^4} \left[ \left( -\frac{2g_1(u)}{G(u)}+\alpha (u)h_1(u,\alpha (u))\right) ^2 \right. \right. \\&\quad \left. \left. +\left( \frac{2f(u)}{G(u)}+\alpha (u)h_2(u,\alpha (u))\right) ^2 \right] du\right] \\&\le C_\epsilon \left( 1+\int ^t_s\left\{ -\frac{\dot{G}(u)}{G^2(u)}+ 1 \right\} E[\rho _{u,s}]du + \int ^t_s E[\alpha ^2(u)]du \right) \\&\le C_\epsilon \left( 1+\frac{1}{G(t)} \right) . \end{aligned}$$

Hence, \(\check{N}\) is a square-integrable martingale. Moreover, we have

$$\begin{aligned} \check{A}_{t,s} \le C\left\{ 1+\frac{1}{(T-t)^{1+\theta }}\rho _{t,s}+ \frac{1}{(T-t)^\theta }+\frac{1}{(T-t)^\theta }\frac{\rho _{t,s} \alpha ^2(t)}{1+\epsilon \rho _{t,s} \alpha ^2(t)} \right\} . \end{aligned}$$

Then, we have

$$\begin{aligned} E\left[ \frac{\rho _{t,s} \alpha ^2(t)}{1+\epsilon \rho _{t,s} \alpha ^2(t)} \right] \le C_T \left\{ 1+ \frac{1}{(T-t)^{\theta }}+\frac{1}{(T-t)^\theta }\int ^t_s E\left[ \frac{\rho _{u,s} \alpha ^2(u)}{1+\epsilon \rho _{u,s} \alpha ^2(u)} \right] du \right\} . \end{aligned}$$

From Gronwall’s inequality we have

$$\begin{aligned} E\left[ \frac{\rho _{t,s} \alpha ^2(t)}{1+\epsilon \rho _{t,s} \alpha ^2(t)} \right] \le C_T \left\{ 1+\frac{1}{(T-t)^{\theta }} \right\} , \end{aligned}$$

and, by Fatou’s lemma, we obtain (D.3).

Appendix E: Proof of (3.28)

Now we shall compute

$$\begin{aligned} E^{(\widehat{\pi })}\left[ {\mathrm {e}}^{-\gamma \overline{u}(T_{n_k}, \alpha (T_{n_k}))}\biggl | \mathcal {G}_0\right] . \end{aligned}$$

Under \(P^{(\widehat{\pi })}\), \(\widehat{W}^{(\widehat{\pi })}(t)\) and \(\widehat{B}^{(\widehat{\pi })}(t)\) defined by

$$\begin{aligned} d\widehat{W}^{(\widehat{\pi })}(t)&=d\widehat{W}(t)-\gamma \left[ \left( -\left\{ 1+\frac{\gamma \sigma ^2}{(1-\gamma )\sigma ^2-2a}\right\} \frac{g_1(t)}{G(t)} P(t) \right. \right. \\ {}&\quad +\left. \left. \frac{\sigma ^2 g_1(t)}{(1-\gamma )\sigma ^2-2a}\right) \alpha (t) \right. \\&\quad \left. -\left\{ 1+\frac{\gamma \sigma ^2}{(1-\gamma )\sigma ^2-2a}\right\} \frac{g_1(t)}{G(t)} Q(t) +\frac{\sigma (\mu -r)}{(1-\gamma )\sigma ^2-2a} \right] dt \end{aligned}$$

and

$$\begin{aligned} d\widehat{B}^{(\widehat{\pi })}(t)=d\widehat{B}(t)-\gamma \frac{f(t)}{G(t)}\{P(t)\alpha (t)+Q(t) \}dt \end{aligned}$$

are \({\mathcal {G}}_t\)-Wiener processes and \(\alpha (t)\) satisfies

$$\begin{aligned} d\alpha (t)=\frac{1}{G(t)}\left\{ -g_1(t)d\widehat{W}^{(\widehat{\pi })}(t)+ f(t)d\widehat{B}^{(\widehat{\pi })}(t)\right\} + \left\{ b_1(t)\alpha (t)+b_0(t)\right\} dt, \end{aligned}$$

where \(b_0(t)\) and \(b_1(t)\) are defined by

$$\begin{aligned} b_0(t):=-\gamma \left\{ \frac{\dot{G}(t)}{G^2(t)}-\frac{\gamma \sigma ^2}{(1-\gamma )\sigma ^2-2a}\frac{g^2_1(t)}{G(t)} \right\} Q(t)- \frac{\gamma \sigma (\mu -r)}{(1-\gamma )\sigma ^2-2a}\frac{g_1(t)}{G(t)},\\ b_1(t):=-\gamma \left\{ \frac{\dot{G}(t)}{G^2(t)}-\frac{\gamma \sigma ^2}{(1-\gamma )\sigma ^2-2a}\frac{g^2_1(t)}{G(t)} \right\} P(t)-\frac{\gamma \sigma ^2}{(1-\gamma )\sigma ^2-2a} \frac{g^2_1(t)}{G(t)}. \end{aligned}$$

Define

$$\begin{aligned} m(t)&:=E[\alpha (t)|{\mathcal {G}}_0],\\ V(t)&:=E[\{\alpha (t)-m(t)\}^2 |{\mathcal {G}}_0]. \end{aligned}$$

Then, we have

$$\begin{aligned} m(t)&={\mathrm {e}}^{\int ^t_0 b_1(s)ds} \alpha (0)+\int ^t_0 {\mathrm {e}}^{\int ^t_s b_1(u)du}b_0(s)ds,\\ V(t)&=-\int ^t_0 {\mathrm {e}}^{2\int ^t_s b_1(u)du}\frac{\dot{G}(s)}{G^2(s)}ds. \end{aligned}$$
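These expressions follow from the explicit solution of the linear SDE for \(\alpha \) under \(P^{(\widehat{\pi })}\); as a sketch,

$$\begin{aligned} \alpha (t)={\mathrm {e}}^{\int ^t_0 b_1(u)du}\alpha (0)+\int ^t_0 {\mathrm {e}}^{\int ^t_s b_1(u)du}b_0(s)ds+\int ^t_0 {\mathrm {e}}^{\int ^t_s b_1(u)du}\frac{1}{G(s)}\left\{ -g_1(s)d\widehat{W}^{(\widehat{\pi })}(s)+f(s)d\widehat{B}^{(\widehat{\pi })}(s)\right\} , \end{aligned}$$

so that, conditionally on \({\mathcal {G}}_0\), \(\alpha (t)\) is Gaussian with mean \(m(t)\) and variance \(\int ^t_0 {\mathrm {e}}^{2\int ^t_s b_1(u)du}\frac{g^2_1(s)+f^2(s)}{G^2(s)}ds\), which equals \(V(t)\) above via \(g^2_1(s)+f^2(s)=-\dot{G}(s)\), as in Appendix D.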

Here we observe the following:

  • \(\displaystyle |b_1(t)|\le K_T \left\{ \frac{1}{(T-t)^\theta }+1\right\} \)

  • \(\displaystyle |b_0(t)|\le K_T \left\{ \frac{1}{(T-t)^\theta }+1 \right\} \),

  • \(\displaystyle |m(t)| \le K_T \left\{ 1+|\alpha (0)| \right\} \),

  • \(\displaystyle |V(t)| \le \frac{K_T}{(T-t)^\theta }.\)
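In the computation below we also use the elementary Gaussian identity, recorded here for convenience: for a standard normal random variable \(Z\), \(\lambda >-1\) and \(\beta \in \mathbb {R}\),

$$\begin{aligned} E\left[ {\mathrm {e}}^{-\frac{\lambda }{2}Z^2+\beta Z}\right] =\frac{1}{\sqrt{1+\lambda }}\,{\mathrm {e}}^{\frac{\beta ^2}{2(1+\lambda )}}, \end{aligned}$$

applied with \(\lambda =\gamma P(T_{n_k})V(T_{n_k})\) and \(\beta =-\gamma \{P(T_{n_k})m(T_{n_k})+Q(T_{n_k})\}\sqrt{V(T_{n_k})}\).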

Hence, we have

$$\begin{aligned}&E^{(\widehat{\pi })}\left[ {\mathrm {e}}^{-\gamma \overline{u}(T_{n_k}, \alpha (T_{n_k}))}\biggl | \mathcal {G}_0\right] =\frac{1}{\sqrt{2\pi }}\int _{{\mathbb {R}}}{\mathrm {e}}^{-\gamma \bar{u}(T_{n_k}, m(T_{n_k})+\sqrt{V(T_{n_k})}x)}{\mathrm {e}}^{-\frac{x^2}{2}} dx \\&\quad =\frac{1}{\sqrt{2\pi }}\int _{{\mathbb {R}}} {\mathrm {e}}^{-\frac{1}{2}\{1+\gamma P(T_{n_k})V(T_{n_k}) \} \left\{ x+\gamma \frac{P(T_{n_k})m(T_{n_k})+Q(T_{n_k})}{1+\gamma P(T_{n_k})V(T_{n_k})} \sqrt{V(T_{n_k})} \right\} ^2 }dx\\&\qquad \cdot {\mathrm {e}}^{-\gamma \left\{ \frac{1}{2}P(T_{n_k})m^2(T_{n_k})+Q(T_{n_k}) m(T_{n_k})+R(T_{n_k})\right\} +\frac{\gamma ^2}{2}\frac{\{P(T_{n_k})m(T_{n_k})+Q(T_{n_k})\}^2}{1+\gamma P(T_{n_k})V(T_{n_k})} V(T_{n_k}) }\\&\quad =\frac{1}{\sqrt{1+\gamma P(T_{n_k})V(T_{n_k})}} \\&\qquad \cdot {\mathrm {e}}^{-\gamma \left\{ \frac{1}{2}P(T_{n_k})m^2(T_{n_k})+ Q(T_{n_k})m(T_{n_k})+R(T_{n_k})\right\} +\frac{\gamma ^2}{2}\frac{\{P(T_{n_k})m(T_{n_k})+Q(T_{n_k})\}^2}{1+\gamma P(T_{n_k})V(T_{n_k})} V(T_{n_k})}, \end{aligned}$$

which implies (3.28).

About this article


Cite this article

Hata, H. Risk-sensitive portfolio optimization problem for a large trader with inside information. Japan J. Indust. Appl. Math. 35, 1037–1063 (2018). https://doi.org/10.1007/s13160-018-0318-8
