
Optimal Portfolio Selection Based on Expected Shortfall Under Generalized Hyperbolic Distribution

Asia-Pacific Financial Markets

Abstract

This paper discusses optimal portfolio selection problems under Expected Shortfall as the risk measure. We employ the multivariate Generalized Hyperbolic distribution as the joint distribution for the risk factors of the underlying portfolio assets, which include stocks, currencies, and bonds. Working under this distribution, we find the optimal portfolio strategy.


References

  • Aas, K., & Haff, I. H. (2006). The generalized hyperbolic skew student’s t-distribution. Journal of Financial Econometrics, 4(2), 275–309.

  • Acerbi, C., & Tasche, D. (2002). On the coherence of expected shortfall. Journal of Banking and Finance, 26(7), 1487–1503.

  • Artzner, P., Delbaen, F., Eber, J. M., & Heath, D. (1999). Coherent measures of risk. Mathematical Finance, 9, 203–228.

  • Barndorff-Nielsen, O. E. (1977). Exponentially decreasing distributions for the logarithm of the particle size. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 353, 401–419.

  • Barndorff-Nielsen, O. E. (1997). Normal inverse Gaussian distributions and the modelling of stock returns. Scandinavian Journal of Statistics, 24, 1–13.

  • Barndorff-Nielsen, O. E., Kent, J., & Sørensen, M. (1982). Normal variance-mean mixtures and z distributions. International Statistical Review, 50(2), 145–159.

  • Cont, R. (2001). Empirical properties of asset returns: Stylized facts and statistical issues. Quantitative Finance, 1, 223–236.

  • Eberlein, E., & Keller, U. (1995). Hyperbolic distributions in finance. Bernoulli, 1(3), 281–299.

  • Hogg, R. V., McKean, J. W., & Craig, A. T. (2005). Introduction to mathematical statistics (6th ed.). New Jersey: Pearson Prentice Hall.

  • Hu, W. (2005). Calibration of multivariate generalized hyperbolic distributions using the EM algorithm, with applications in risk management, portfolio optimization and portfolio credit risk. Doctoral dissertation, College of Arts and Sciences, Florida State University.

  • Madan, D. B., & Seneta, E. (1990). The variance gamma model for share market returns. Journal of Business, 63(4), 511–524.

  • Mandelbrot, B. (1963). The variation of certain speculative prices. Journal of Business, 36, 394–419.

  • McNeil, A., Frey, R., & Embrechts, P. (2005). Quantitative risk management: Concepts, techniques and tools. Princeton: Princeton University Press.

Author information

Corresponding author

Correspondence to Budhi Arta Surya.

Appendices

Appendix 1: Calibration of GH Distribution

1.1 EM Algorithm

The EM algorithm is a tool for estimating the unknown parameters of a distribution. The estimate is based on the maximum likelihood method.

Definition 9.1

(Likelihood Function) Suppose \(\mathbf {X} =(\mathbf {X_1},\ldots ,\mathbf {X_n})\) is a vector of \(n\) independent and identically distributed (i.i.d.) random variables \(\mathbf {X_i}\in \mathfrak {R}^d\), called random samples, with pdf \(f\). Denote the parameter space of the distribution by \(\Omega \). Define the likelihood function as the joint density of the random samples, denoted by

$$\begin{aligned} L(\varvec{\theta };\mathbf {X}):=\Pi _{i=1}^n f(\mathbf {X_i};\varvec{\theta }),\quad \varvec{\theta }\in \Omega \end{aligned}$$
(9.1)

Theorem 9.2

Suppose \(\mathbf {X}=(\mathbf {X_1},\ldots ,\mathbf {X_n})\) is a vector of \(n\) independent and identically distributed (i.i.d.) random variables \(\mathbf {X_i}\in \mathfrak {R}^d\) with pdf \(f\) and parameter space \(\Omega \). Let \(\varvec{\theta }_\mathbf{0}\) be the true parameter of this distribution. If the pdf has common support for all \(\varvec{\theta }\in \Omega \), then

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {P}(L(\varvec{\theta }_\mathbf{0};\mathbf {X})\ge L(\varvec{\theta };\mathbf {X}))=1,\quad \hbox { for all }\varvec{\theta }\ne \varvec{\theta }_\mathbf{0} \end{aligned}$$
(9.2)

For proof, see Hogg et al. (2005) p. 313.

Based on the theorem, an estimate of the true distribution parameter can be obtained as

$$\begin{aligned} {\hat{\varvec{\theta }}}=\text {argmax}_{\varvec{\theta }}\text { }L(\varvec{\theta };\mathbf {X}). \end{aligned}$$
(9.3)

This estimate is called the maximum likelihood estimator (MLE).
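As a concrete numerical illustration of (9.3) (not taken from the paper), the following minimal sketch computes the MLE of a univariate normal distribution by maximizing the log-likelihood with SciPy; the simulated sample and the starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Arbitrary i.i.d. sample; in practice these would be observed risk-factor returns.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.1, scale=0.2, size=500)

def neg_log_likelihood(theta, data):
    """Negative log-likelihood of N(mu, sigma^2); theta = (mu, log_sigma)."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)  # log-parameterization keeps sigma > 0
    return -np.sum(norm.logpdf(data, loc=mu, scale=sigma))

# Maximizing L in (9.1) is equivalent to minimizing -log L, cf. (9.3).
res = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]), args=(x,))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)  # close to the sample mean and standard deviation
```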

In some cases, maximizing the likelihood function directly is intractable due to the existence of unobserved latent random variables. In our case, this unobserved variable can be regarded as the mixing random variable \(W\) in (2.2), while \(\mathbf {X}\) is the observed variable. The EM algorithm uses the unobserved variable, together with the observed variable, to estimate the MLE.

Algorithm 9.3

(EM Algorithm) Denote the observed random variables by \(\mathbf {X}=(\mathbf {X_1},\ldots ,\mathbf {X_n})\) and the unobserved random variables by \(\mathbf {Y}=(\mathbf {Y_1},\ldots ,\mathbf {Y_n})\), where \(\mathbf {X_i}\in \mathfrak {R}^d\) and \(\mathbf {Y_i}\in \mathfrak {R}^{\tilde{d}}\). Assume that the \(\mathbf {X_i}\)s are i.i.d. and so are the \(\mathbf {Y_i}\)s. Furthermore, let \(\mathbf {X}\) be independent from \(\mathbf {Y}\). Let

$$\begin{aligned} {\tilde{L}}:={\tilde{L}}(\varvec{\theta };\mathbf {X},\mathbf {Y}), \end{aligned}$$
(9.4)

the joint pdf of \(\mathbf {X}\) and \(\mathbf {Y}\), where \(\varvec{\theta }\) is the parameter of the distribution of \(\mathbf {X}\). The EM algorithm is as follows:

  1.

    Give an initial estimate \({\hat{{\varvec{\theta }}}}^{\left( 0\right) }\) of the true distribution parameter.

  2.

    Expectation Step. Compute

    $$\begin{aligned} Q({\varvec{\theta }};{\hat{{\varvec{\theta }}}}^{\left( 0\right) }):=\mathbb {E}\left[ \log {\tilde{L}}\left( {\varvec{\theta }};\mathbf {X},\mathbf {Y}\right) |\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( 0\right) }\right] \end{aligned}$$
    (9.5)
  3.

    Maximization Step. Compute

    $$\begin{aligned} {\hat{{\varvec{\theta }}}}^{\left( 1\right) }=\text {argmax}_{{\varvec{\theta }}}\,Q({\varvec{\theta }};{\hat{{\varvec{\theta }}}}^{\left( 0\right) }), \end{aligned}$$
    (9.6)

    the estimate of the first iteration.

  4.

    Obtain the estimate of the \((m+1)\)-th iteration by executing steps 2 and 3 using the \(m\)-th estimate \({\hat{{\varvec{\theta }}}}^{\left( m\right) }\).

Theorem 9.4

The sequence of estimates \({\hat{{\varvec{\theta }}}}^{\left( m\right) }\), defined by Algorithm 9.3, satisfies

$$\begin{aligned} L({\hat{{\varvec{\theta }}}}^{\left( m+1\right) };\mathbf {X})\ge L({\hat{{\varvec{\theta }}}}^{\left( m\right) };\mathbf {X}). \end{aligned}$$
(9.7)

For proof, see Hogg et al. (2005) pp. 360–361.

The theorem does not guarantee that the EM estimates converge to the MLE, but it does guarantee that the likelihood function does not decrease at any iteration. This is the basis for using the EM estimates to approximate the MLE.
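Schematically, Algorithm 9.3 is the following loop. This is a generic sketch rather than the paper's implementation: e_step, m_step, and log_likelihood are hypothetical placeholders for the computations in (9.5), (9.6), and (9.1), and the stopping rule relies on the monotonicity of Theorem 9.4.

```python
def em(x, theta0, e_step, m_step, log_likelihood, tol=1e-8, max_iter=500):
    """Generic EM loop: alternate the E-step (9.5) and M-step (9.6) until the
    observed-data log-likelihood increases by less than `tol`."""
    theta = theta0
    ll_old = log_likelihood(theta, x)
    for _ in range(max_iter):
        expectations = e_step(theta, x)    # E-step: conditional expectations given x and theta
        theta = m_step(expectations, x)    # M-step: argmax of Q( . ; theta)
        ll_new = log_likelihood(theta, x)  # non-decreasing by Theorem 9.4
        if ll_new - ll_old < tol:
            break
        ll_old = ll_new
    return theta
```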

1.2 Calibration Using EM

To use the EM algorithm to calibrate the GH distribution, the observed and unobserved variables must be identified. From representation (2.2), the random variable \(\mathbf {X}\) can be regarded as the observed variable, while \(W\) acts as the unobserved variable. The next step is to formulate \(Q({\varvec{\theta }};{\hat{{\varvec{\theta }}}}^{\left( k\right) })\), where \({\hat{{\varvec{\theta }}}}^{\left( k\right) }=(\lambda ^{\left( k\right) },\chi ^{\left( k\right) },\psi ^{\left( k\right) },{\varvec{\mu }}^{\left( k\right) },{\varvec{\Sigma }}^{\left( k\right) },{\varvec{\gamma }}^{\left( k\right) })\).

Let \(W=(W_1,\ldots ,W_n)\) be a vector of \(n\) independent and identically distributed GIG random variables and \(\mathbf {X}|W=(\mathbf {X_1}|W_1,\ldots ,\mathbf {X_n}|W_n)\) be a vector of \(n\) independent and identically distributed (i.i.d.) GH random variables \(\mathbf {X_i}\in \mathfrak {R}^d\) conditional on their mixing variables \(W_i\). Suppose the \(k\)-th estimate \({\hat{{\varvec{\theta }}}}^{\left( k\right) }\) has been obtained; then the log-likelihood function \(\log \tilde{L}\) needs to be calculated. Since from representation (2.2) we have \(\mathbf {X_i}|W_i\sim N(\varvec{\mu }+W_i\varvec{\gamma },W_i\varvec{\Sigma })\),

$$\begin{aligned} \log \tilde{L}(\varvec{\theta };\mathbf {X},{W})=\sum ^n_{i=1}\log f_{\mathbf {X_i}|W_i}(\mathbf {x_i}|w_i;\varvec{\mu },\varvec{\Sigma },\varvec{\gamma })+\sum ^n_{i=1}\log h_{W_i}(w_i;\lambda ,\chi ,\psi ). \end{aligned}$$
(9.8)

Let

$$\begin{aligned} L_1&:=\sum ^n_{i=1}\log f_{\mathbf {X_i}|W_i}(\mathbf {x_i}|w_i;\varvec{\mu },\varvec{\Sigma },\varvec{\gamma }),\quad \text { and}\end{aligned}$$
(9.9)
$$\begin{aligned} L_2&:=\sum ^n_{i=1}\log h_{W_i}(w_i;\lambda ,\chi ,\psi ). \end{aligned}$$
(9.10)

Then, to get the next estimate, we have to maximize

$$\begin{aligned} Q({{\varvec{\theta }}};{\hat{\varvec{\theta }}}^{\left( k\right) })=\mathbb {E}[L_1|\mathbf {X},{\hat{\varvec{\theta }}}^{\left( k\right) }]+ \mathbb {E}[L_2|\mathbf {X},{\hat{\varvec{\theta }}}^{\left( k\right) }]. \end{aligned}$$
(9.11)

From (9.8) and (9.11), it is evident that we can maximize each term of (9.11) separately. This is because, from (9.8), \(L_1\) does not contain \(\lambda ,\chi ,\psi \) from the current iteration, and \(L_2\) does not contain \({\varvec{\mu }},{\varvec{\Sigma }},{\varvec{\gamma }}\) from the current iteration. Note, however, that these parameters do appear in the conditional expectations, but only through the previous iteration's estimates, so they can be regarded as constants rather than variables.

From the pdf of the \(d\)-dimensional multivariate normal distribution \(N({\varvec{\mu }},{\varvec{\Sigma }})\),

$$\begin{aligned} f(\mathbf {x})=\frac{1}{\left( 2\pi \right) ^{\frac{d}{2}} \left| {\varvec{\Sigma }}\right| ^\frac{1}{2}}\exp \left\{ -\frac{1}{2}\left( \mathbf {x}- {\varvec{\mu }}\right) '{\varvec{\Sigma }}^{-1} \left( \mathbf {x}-{\varvec{\mu }}\right) \right\} , \end{aligned}$$
(9.12)

we have

$$\begin{aligned} f_{\mathbf {X_i}|W_i}(\mathbf {x_i}|w_i;{\varvec{\mu }}, {\varvec{\Sigma }},{\varvec{\gamma }})&= \frac{1}{\left( 2\pi \right) ^{\frac{d}{2}} \left| w_i{\varvec{\Sigma }}\right| ^\frac{1}{2}}\nonumber \\&\quad \times \exp \left\{ -\frac{1}{2}\left( \mathbf {x_i}- \left( {\varvec{\mu }}+w_i{\varvec{\gamma }}\right) \right) ' \frac{1}{w_i}{{\varvec{\Sigma }}^{-1}}\left( \mathbf {x_i}- \left( {\varvec{\mu }}+w_i{\varvec{\gamma }}\right) \right) \right\} \end{aligned}$$
(9.13)
$$\begin{aligned}&=\frac{1}{\left( 2\pi \right) ^{\frac{d}{2}}w_i^{\frac{d}{2}} \left| {\varvec{\Sigma }}\right| ^\frac{1}{2}} e^{\left( \mathbf {x_i}-{\varvec{\mu }}\right) '{{\varvec{\Sigma }}^{-1}} {\varvec{\gamma }}}e^{-\frac{\rho _{x_i}}{2w_i}} e^{-\frac{w_i}{2}{{{\varvec{\gamma }}}}'{{\varvec{\Sigma }}^{-1}} {\varvec{\gamma }}}. \end{aligned}$$
(9.14)

Hence,

$$\begin{aligned} L_1&= -\frac{n}{2}\log |{\varvec{\Sigma }}|-\frac{d}{2}\sum ^n_{i=1}\log w_i+\sum ^n_{i=1}\left( \mathbf {x_i}-{\varvec{\mu }}\right) ' {{\varvec{\Sigma }}^{-1}}{\varvec{\gamma }}\nonumber \\&\quad -\,\frac{1}{2}\sum ^n_{i=1}\frac{1}{w_i} {\rho _{x_i}}-\frac{1}{2} {{\varvec{\gamma }}}'{{\varvec{\Sigma }}^{-1}} {\varvec{\gamma }}\sum ^n_{i=1}w_i. \end{aligned}$$
(9.15)

If \(W_i\sim GIG(\lambda ,\chi ,\psi )\), \(L_2\) takes the form

$$\begin{aligned} L_2&= (\lambda -1)\sum ^n_{i=1}\log w_i-\frac{\chi }{2}\sum ^n_{i=1}{w_i}^{-1}-\frac{\psi }{2}\sum ^n_{i=1}w_i-\frac{n\lambda }{2}\log \chi \nonumber \\&\quad +\,\frac{n\lambda }{2}\log \psi -n\log \left( 2K_{\lambda }\left( \sqrt{\chi \psi }\right) \right) . \end{aligned}$$
(9.16)

In (9.15)–(9.16), \(w_i\) appears only in the forms \(w_i\), \(w_i^{-1}\), and \(\log w_i\). Hence, to compute the conditional expectations, it is necessary to compute

$$\begin{aligned} \eta _i^{\left( k\right) }:=\mathbb {E}[W_i|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }],\qquad \delta _i^{\left( k\right) }:=\mathbb {E}[W_i^{-1}|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }],\qquad \xi _i^{\left( k\right) }:=\mathbb {E}[\log W_i|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]. \end{aligned}$$
(9.17)

Also define

$$\begin{aligned} \bar{\eta }^{\left( k\right) }=\frac{1}{n}\sum ^n_{i=1}\eta _i^{\left( k\right) },\qquad \bar{\delta }^{\left( k\right) }=\frac{1}{n}\sum ^n_{i=1}\delta _i^{\left( k\right) },\qquad \bar{\xi }^{\left( k\right) }=\frac{1}{n}\sum ^n_{i=1}\xi _i^{\left( k\right) }. \end{aligned}$$
(9.18)

To compute them, the distribution of \(W_i|\mathbf {X_i};{\hat{{\varvec{\theta }}}}^{\left( k\right) }\) needs to be determined. It can be determined from its pdf, given by

$$\begin{aligned} f_{W_i|\mathbf {X_i}}\left( w_i|\mathbf {x_i};{\hat{{\varvec{\theta }}}}^{\left( k\right) } \right) =\frac{f_{\mathbf {X_i}|W_i}\left( \mathbf {x_i}|w_i;{\hat{{\varvec{\theta }}}}^{\left( k\right) }\right) h_{W_i} \left( w_i;{\hat{{\varvec{\theta }}}}^{\left( k\right) }\right) }{f_{\mathbf {X_i}}\left( \mathbf {x_i}; {\hat{{\varvec{\theta }}}}^{\left( k\right) }\right) }. \end{aligned}$$
(9.19)

By substituting (2.1), (2.4), and (9.14) into (9.19), one can check with formula (2.1) that the resulting pdf is that of the GIG distribution \(N^\sim \left( \lambda ^{\left( k\right) }-\frac{d}{2},\,{\rho _{x_i}}^{\left( k\right) }+\chi ^{\left( k\right) },\,\psi ^{\left( k\right) }+{\varvec{\gamma }^{\left( k\right) }}'{\varvec{\Sigma }^{-1}}^{\left( k\right) }\varvec{\gamma }^{\left( k\right) }\right) \). Hence, we have

$$\begin{aligned} \delta _i^{\left( k\right) }&= \left( \frac{{\rho _{x_i}}^{\left( k\right) }+\chi ^{\left( k\right) }}{\psi ^{\left( k\right) }+{\varvec{\gamma }^{\left( k\right) }}'{\varvec{\Sigma }^{-1}}^{\left( k\right) }\varvec{\gamma }^{\left( k\right) }}\right) ^{-\frac{1}{2}}\nonumber \\&\quad \times \,\frac{K_{\lambda ^{\left( k\right) }-\frac{d}{2}-1}\left( \sqrt{\left( {\rho _{x_i}}^{\left( k\right) }+\chi ^{\left( k\right) }\right) \left( \psi ^{\left( k\right) }+{\varvec{\gamma }^{\left( k\right) }}'{\varvec{\Sigma }^{-1}}^{\left( k\right) }\varvec{\gamma }^{\left( k\right) }\right) }\right) }{K_{\lambda ^{\left( k\right) }-\frac{d}{2}}\left( \sqrt{\left( {\rho _{x_i}}^{\left( k\right) }+\chi ^{\left( k\right) }\right) \left( \psi ^{\left( k\right) }+{\varvec{\gamma }^{\left( k\right) }}'{\varvec{\Sigma }^{-1}}^{\left( k\right) }\varvec{\gamma }^{\left( k\right) }\right) }\right) }\end{aligned}$$
(9.20)
$$\begin{aligned} \eta _i^{\left( k\right) }&= \left( \frac{{\rho _{x_i}}^{\left( k\right) }+\chi ^{\left( k\right) }}{\psi ^{\left( k\right) }+{\varvec{\gamma }^{\left( k\right) }}'{\varvec{\Sigma }^{-1}}^{\left( k\right) }\varvec{\gamma }^{\left( k\right) }}\right) ^{\frac{1}{2}}\nonumber \\&\quad \times \,\frac{K_{\lambda ^{\left( k\right) }-\frac{d}{2}+1}\left( \sqrt{\left( {\rho _{x_i}}^{\left( k\right) }+\chi ^{\left( k\right) }\right) \left( \psi ^{\left( k\right) }+{\varvec{\gamma }^{\left( k\right) }}'{\varvec{\Sigma }^{-1}}^{\left( k\right) }\varvec{\gamma }^{\left( k\right) }\right) }\right) }{K_{\lambda ^{\left( k\right) }-\frac{d}{2}}\left( \sqrt{\left( {\rho _{x_i}}^{\left( k\right) }+\chi ^{\left( k\right) }\right) \left( \psi ^{\left( k\right) }+{\varvec{\gamma }^{\left( k\right) }}'{\varvec{\Sigma }^{-1}}^{\left( k\right) }\varvec{\gamma }^{\left( k\right) }\right) }\right) }\end{aligned}$$
(9.21)
$$\begin{aligned} \xi _i^{\left( k\right) }&= \frac{1}{2}\log \left( \frac{{\rho _{x_i}}^{\left( k\right) }+\chi ^{\left( k\right) }}{\psi ^{\left( k\right) }+{\varvec{\gamma }^{\left( k\right) }}'{\varvec{\Sigma }^{-1}}^{\left( k\right) }\varvec{\gamma }^{\left( k\right) }}\right) \nonumber \\&\quad +\,\frac{\frac{\partial K_{\lambda ^{\left( k\right) }-\frac{d}{2}+\alpha }\left( \sqrt{\left( {\rho _{x_i}}^{\left( k\right) }+\chi ^{\left( k\right) }\right) \left( \psi ^{\left( k\right) }+{\varvec{\gamma }^{\left( k\right) }}'{\varvec{\Sigma }^{-1}}^{\left( k\right) }\varvec{\gamma }^{\left( k\right) }\right) }\right) }{\partial \alpha }|_{\alpha =0}}{K_{\lambda ^{\left( k\right) }-\frac{d}{2}}\left( \sqrt{\left( {\rho _{x_i}}^{\left( k\right) }+\chi ^{\left( k\right) }\right) \left( \psi ^{\left( k\right) }+{\varvec{\gamma }^{\left( k\right) }}'{\varvec{\Sigma }^{-1}}^{\left( k\right) }\varvec{\gamma }^{\left( k\right) }\right) }\right) }. \end{aligned}$$
(9.22)

So, to obtain \(\mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]\) and \(\mathbb {E}[L_2|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]\), we only have to substitute (9.20)–(9.22) into (9.15) and (9.16); a sketch of these E-step quantities is given below. We can then proceed to the maximization process.
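The sketch assumes \(\rho _{x_i}=(\mathbf {x_i}-\varvec{\mu })'\varvec{\Sigma }^{-1}(\mathbf {x_i}-\varvec{\mu })\) as in the main text. The modified Bessel function of the second kind \(K_\lambda \) is scipy.special.kv; the order derivative in (9.22) is approximated by a central finite difference, which is a practical workaround rather than part of the paper.

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind K_lambda

def e_step_weights(x, lam, chi, psi, mu, Sigma, gamma, h=1e-5):
    """Conditional expectations (9.20)-(9.22) for each observation x_i (rows of x)."""
    Sigma_inv = np.linalg.inv(Sigma)
    diff = x - mu                                           # shape (n, d)
    rho = np.einsum('ij,jk,ik->i', diff, Sigma_inv, diff)   # Mahalanobis terms rho_{x_i}
    a = rho + chi                                           # chi-parameter of the conditional GIG
    b = psi + gamma @ Sigma_inv @ gamma                     # psi-parameter of the conditional GIG
    nu = lam - x.shape[1] / 2.0                             # GIG index lambda - d/2
    s = np.sqrt(a * b)
    delta = np.sqrt(b / a) * kv(nu - 1, s) / kv(nu, s)      # (9.20): E[W_i^{-1} | x_i]
    eta = np.sqrt(a / b) * kv(nu + 1, s) / kv(nu, s)        # (9.21): E[W_i | x_i]
    dK_dorder = (kv(nu + h, s) - kv(nu - h, s)) / (2 * h)   # d K_{nu+alpha}(s)/d alpha at alpha=0
    xi = 0.5 * np.log(a / b) + dK_dorder / kv(nu, s)        # (9.22): E[log W_i | x_i]
    return delta, eta, xi
```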

Since \(L_1\) does not depend on the distribution of \(W\), as explained before, we only need to consider the values of the parameters \({{\varvec{\mu }}},{\varvec{\Sigma }},{\varvec{\gamma }}\) to maximize \(\mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]\). Following the standard optimization routine of Hu (2005), set

$$\begin{aligned} \frac{\partial \mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]}{\partial {{\varvec{\mu }}}}&=\mathbf {0}\end{aligned}$$
(9.23)
$$\begin{aligned} \frac{\partial \mathbb {E}[L_1| \mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]}{\partial {{\varvec{\gamma }}}}&= \mathbf {0}\end{aligned}$$
(9.24)
$$\begin{aligned} \frac{\partial \mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]}{\partial {\varvec{\Sigma }}}&=\mathbf {0}. \end{aligned}$$
(9.25)

We can find the next estimate \((\varvec{\mu }^{\left( k+1\right) },\varvec{\Sigma }^{\left( k+1\right) },\varvec{\gamma }^{\left( k+1\right) })\) by solving the above system. The system can be solved by first solving (9.23) to find \(\varvec{\mu }^{\left( k+1\right) }\), and then substituting it into (9.24) to find \(\varvec{\gamma }^{\left( k+1\right) }\). Finally, \(\varvec{\Sigma }^{\left( k+1\right) }\) can be obtained by substituting \((\varvec{\mu }^{\left( k+1\right) },\varvec{\gamma }^{\left( k+1\right) })\) into (9.25). The process is as follows.

Using (9.15), we have

$$\begin{aligned} \frac{\partial \mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]}{\partial \varvec{\mu }}&= -n\varvec{\gamma }'\varvec{\Sigma }^{-1}+\left( \sum ^n_{i=1}\delta _i^{\left( k\right) }\left( \mathbf {x_i}-\varvec{\mu }\right) '\right) \varvec{\Sigma }^{-1}\end{aligned}$$
(9.26)
$$\begin{aligned} \frac{\partial \mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]}{\partial \varvec{\gamma }}&= \varvec{\Sigma }^{-1}\left( \sum ^n_{i=1}\left( \mathbf {x_i}-\varvec{\mu }\right) -\sum ^n_{i=1}\eta _i^{\left( k\right) }\varvec{\gamma }\right) \end{aligned}$$
(9.27)
$$\begin{aligned} \frac{\partial \mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]}{\partial \varvec{\Sigma }}&= \varvec{\Sigma }^{-1}\left( -\frac{n}{2}-\sum ^n_{i=1}\left( \mathbf {x_i}-\varvec{\mu }\right) \varvec{\gamma }'\varvec{\Sigma }^{-1}+\frac{1}{2}\sum ^n_{i=1}\delta _i^{\left( k\right) }\left( \mathbf {x_i}-\varvec{\mu }\right) \left( \mathbf {x_i}-\varvec{\mu }\right) '\right. \nonumber \\&\quad \left. \times \,\varvec{\Sigma }^{-1}+\sum ^n_{i=1}\eta _i^{\left( k\right) }\varvec{\gamma }\varvec{\gamma }'\varvec{\Sigma }^{-1}\right) . \end{aligned}$$
(9.28)

By setting Eq. (9.26) to \(\mathbf {0}\), we can get the next estimate for \(\varvec{\mu }\) as

$$\begin{aligned} \varvec{\mu }^{\left( k+1\right) }=\frac{n^{-1}\sum ^n_{i=1}\delta _i^{\left( k\right) }\mathbf {x_i}-\varvec{\gamma }}{\bar{\delta }^{\left( k\right) }}. \end{aligned}$$
(9.29)

By setting Eq. (9.27) to \(\mathbf {0}\) and then substituting \(\varvec{\mu }\) with formula (9.29), we get the estimate

$$\begin{aligned} \varvec{\gamma }^{\left( k+1\right) }=\frac{n^{-1}\sum ^n_{i=1}\delta _i^{\left( k\right) }\left( \bar{\mathbf {x}}-\mathbf {x_i}\right) }{\bar{\delta }^{\left( k\right) }\bar{\eta }^{\left( k\right) }-1}. \end{aligned}$$
(9.30)

From (9.30), since \(\varvec{\gamma }^{\left( k+1\right) }\) does not depend on \(\varvec{\mu }^{\left( k+1\right) }\), the \(\varvec{\gamma }\) in (9.29) can be replaced by \(\varvec{\gamma }^{\left( k+1\right) }\) to get

$$\begin{aligned} \varvec{\mu }^{\left( k+1\right) }=\frac{n^{-1}\sum ^n_{i=1}\delta _i^{\left( k\right) }\mathbf {x_i}-\varvec{\gamma }^{\left( k+1\right) }}{\bar{\delta }^{\left( k\right) }}. \end{aligned}$$
(9.31)

By setting Eq. (9.28) to \(\mathbf {0}\) and then substituting \(\varvec{\mu }\) and \(\varvec{\gamma }\) with formula (9.31) and (9.30), we get the estimate

$$\begin{aligned} \varvec{\Sigma }^{\left( k+1\right) }=\frac{1}{n}\sum ^n_{i=1}\delta _i^{\left( k\right) }\left( \mathbf {x_i}-\varvec{\mu }^{\left( k+1\right) }\right) \left( \mathbf {x_i}-\varvec{\mu }^{\left( k+1\right) }\right) '-\bar{\eta }^{\left( k\right) }\varvec{\gamma }^{\left( k+1\right) }{\varvec{\gamma }^{\left( k+1\right) }}'. \end{aligned}$$
(9.32)
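Given the E-step weights, the updates (9.29)–(9.32) are explicit. The sketch below is one possible vectorized implementation, assuming the observations are stored row-wise in an \(n\times d\) array x and that delta and eta hold \(\delta _i^{\left( k\right) }\) and \(\eta _i^{\left( k\right) }\).

```python
import numpy as np

def m_step_location_scale(x, delta, eta):
    """Updates (9.29)-(9.32): mu^(k+1), gamma^(k+1), Sigma^(k+1)."""
    n, d = x.shape
    delta_bar, eta_bar = delta.mean(), eta.mean()
    x_bar = x.mean(axis=0)

    # (9.30): gamma^(k+1) does not involve mu^(k+1)
    gamma = (delta[:, None] * (x_bar - x)).mean(axis=0) / (delta_bar * eta_bar - 1.0)
    # (9.31): mu^(k+1) uses the new gamma
    mu = ((delta[:, None] * x).mean(axis=0) - gamma) / delta_bar
    # (9.32): weighted scatter matrix minus the skewness correction
    diff = x - mu
    Sigma = (delta[:, None, None] * np.einsum('ij,ik->ijk', diff, diff)).mean(axis=0) \
        - eta_bar * np.outer(gamma, gamma)
    return mu, gamma, Sigma
```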

The maximization of \(\mathbb {E}[L_2|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]\) is over the parameters \((\lambda ,\chi ,\psi )\). In calibrating the GH distribution, this maximization is intractable because of \(\lambda \); to make it tractable, we have to fix \(\lambda \). As Hu (2005, p. 29) states, \(\lambda \) has never been calibrated exactly, so what we can do is execute the algorithm for several choices of \(\lambda \) to obtain the values of the other parameters, evaluate the likelihood function over those sets of values, and then conjecture that the best value of \(\lambda \) is the one resulting in the highest likelihood value. So, from now on, we calibrate the GH distribution only for a fixed \(\lambda \).
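The profiling over \(\lambda \) described above can be organized as a simple outer loop. In the sketch below, calibrate_gh_fixed_lambda and gh_log_likelihood are hypothetical placeholders for the fixed-\(\lambda \) EM calibration of this appendix and the GH log-likelihood; the grid of candidate \(\lambda \) values is an arbitrary choice.

```python
import numpy as np

def profile_lambda(x, lambda_grid, calibrate_gh_fixed_lambda, gh_log_likelihood):
    """Run the fixed-lambda calibration on a grid and keep the highest-likelihood fit."""
    best = None
    for lam in lambda_grid:
        params = calibrate_gh_fixed_lambda(x, lam)  # EM with lambda held fixed
        ll = gh_log_likelihood(params, x)           # likelihood as in (9.1)
        if best is None or ll > best[0]:
            best = (ll, lam, params)
    return best  # (log-likelihood, lambda, calibrated parameters)

# Example (arbitrary) grid of candidate lambda values:
# profile_lambda(x, np.linspace(-3.0, 3.0, 13), calibrate_gh_fixed_lambda, gh_log_likelihood)
```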

Define

$$\begin{aligned} \phi :=\sqrt{\chi \psi }. \end{aligned}$$
(9.33)

To overcome the identifiability problem when calibrating the GH distribution, one of the free parameters must be fixed. For simplicity, either \(\chi \) or \(\psi \) is fixed. By substituting (9.20)–(9.22) into (9.16) we obtain

$$\begin{aligned} \mathbb {E}[L_2|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]&= (\lambda -1)\sum ^n_{i=1}\xi _i^{\left( k\right) }-\frac{\chi }{2}\sum ^n_{i=1}\delta _i^{\left( k\right) }-\frac{\psi }{2}\sum ^n_{i=1}\eta _i^{\left( k\right) }-\frac{n\lambda }{2}\log \chi \nonumber \\&\quad +\,\frac{n\lambda }{2}\log \psi -n\log \left( 2K_{\lambda }\left( \phi \right) \right) . \end{aligned}$$
(9.34)

If we fix \(\chi \), to maximize (9.34) we need to set

$$\begin{aligned} \frac{\partial \mathbb {E}[L_2|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]}{\partial \psi }=0. \end{aligned}$$
(9.35)

Using the formula

$$\begin{aligned} \frac{d \log K_\lambda \left( x\right) }{d x}=\frac{\lambda }{x}-\frac{K_{\lambda +1}\left( x\right) }{K_{\lambda }\left( x\right) }, \end{aligned}$$
(9.36)

we have

$$\begin{aligned} \frac{\partial \mathbb {E}\left[ L_2|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }\right] }{\partial \psi }=-\frac{n}{2}\left( \bar{\eta }^{\left( k\right) }-\sqrt{\frac{\chi }{\psi }}\frac{K_{\lambda +1}\left( \phi \right) }{K_{\lambda }\left( \phi \right) }\right) . \end{aligned}$$
(9.37)

Then, we can get \(\phi ^{\left( k+1\right) }\) by solving

$$\begin{aligned} \phi \bar{\eta }^{\left( k\right) }K_{\lambda }\left( \phi \right) -K_{\lambda +1}\left( \phi \right) \chi =0. \end{aligned}$$
(9.38)

Hence, we obtain \(\psi ^{\left( k+1\right) }\) as

$$\begin{aligned} \psi ^{\left( k+1\right) }=\frac{{\phi ^{\left( k+1\right) }}^2}{\chi } \end{aligned}$$
(9.39)

This method where \(\chi \) is fixed is called the \(\chi \)-algorithm.
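As an illustration, one \(\chi \)-algorithm update can be implemented by solving (9.38) numerically with a root finder and then applying (9.39). This is a sketch under the assumption that a sign change of (9.38) can be located on a coarse grid; the bracketing search is a practical heuristic, not part of the paper. (The \(\psi \)-algorithm that follows is the symmetric variant.)

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import kv

def chi_algorithm_update(eta_bar, lam, chi):
    """One chi-algorithm step: solve (9.38) for phi, then psi^(k+1) = phi^2 / chi via (9.39)."""
    def g(phi):
        # phi * eta_bar * K_lambda(phi) - K_{lambda+1}(phi) * chi, cf. (9.38)
        return phi * eta_bar * kv(lam, phi) - kv(lam + 1, phi) * chi

    grid = np.logspace(-6, 6, 200)                       # coarse grid to bracket the root
    values = g(grid)
    idx = np.where(np.sign(values[:-1]) * np.sign(values[1:]) < 0)[0]
    if idx.size == 0:
        raise ValueError("no sign change of (9.38) found on the search grid")
    phi_new = brentq(g, grid[idx[0]], grid[idx[0] + 1])  # refine with Brent's method
    psi_new = phi_new ** 2 / chi                         # (9.39)
    return phi_new, psi_new
```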

If we fix \(\psi \), to maximize (9.34) we need to set

$$\begin{aligned} \frac{\partial \mathbb {E}[L_2|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]}{\partial \chi }=0. \end{aligned}$$
(9.40)

Using the formula

$$\begin{aligned} \frac{d \log K_\lambda \left( x\right) }{d x}=-\frac{\lambda }{x}-\frac{K_{\lambda -1}\left( x\right) }{K_{\lambda }\left( x\right) }, \end{aligned}$$
(9.41)

we have

$$\begin{aligned} \frac{\partial \mathbb {E}[L_2|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]}{\partial \chi }=-\frac{n}{2}\left( \bar{\delta }^{\left( k\right) }-\sqrt{\frac{\psi }{\chi }}\frac{K_{\lambda -1}\left( \phi \right) }{K_{\lambda }\left( \phi \right) }\right) . \end{aligned}$$
(9.42)

Then, we can get \(\phi ^{\left( k+1\right) }\) by solving

$$\begin{aligned} \phi \bar{\delta }^{\left( k\right) }K_\lambda (\phi )-K_{\lambda -1}(\phi )\psi =0. \end{aligned}$$
(9.43)

Hence, we obtain \(\chi ^{\left( k+1\right) }\) as

$$\begin{aligned} \chi ^{\left( k+1\right) }=\frac{{\phi ^{\left( k+1\right) }}^2}{\psi }. \end{aligned}$$
(9.44)

This method where \(\psi \) is fixed is called the \(\psi \)-algorithm.

Appendix 2: Parameters of In-sample Multivariate GH Distribution

$$\begin{aligned} \begin{array}{l} \lambda =-1.1312\\ \chi =0.3849\\ \psi =0.0005 \end{array}\quad \quad {\varvec{\mu }}= \begin{pmatrix} 0.0046 \\ 0.0054 \\ -0.0047 \\ 0.1292 \\ -0.0057 \\ -0.0164 \\ 0.1035 \\ -0.0130 \\ -0.0162 \\ -0.0145 \\ \end{pmatrix} \quad \quad {\varvec{\gamma }}= \begin{pmatrix} -0.0095 \\ -0.0026 \\ 0.0112 \\ -0.0508 \\ -0.0443 \\ -0.0512 \\ -0.0820 \\ 0.0441 \\ -0.0188 \\ 0.0113 \\ \end{pmatrix} \end{aligned}$$
$$\begin{aligned}&\varvec{\Sigma }=\\&\quad \left( \begin{array}{llllllllll} 0.145875 &{}\ \ 0.053347 &{}\ \ 0.000563 &{}\quad -0.13281 &{}\quad 0.031179 &{}\quad -0.01317 &{}\quad -0.07941 &{}\quad 0.019424 &{}\quad -0.02864 &{}\quad 0.016901\\ &{} 0.168513 &{}\ \ 0.003838 &{}\ \ -0.16763 &{}\quad 0.083138 &{}\quad -0.00417 &{}\quad 0.043805 &{}\quad 0.024895 &{}\quad -0.0084 &{}\quad 0.02657\\ &{} \ \ &{}\ \ 0.048184 &{}\quad -0.21143 &{}\quad -0.04955 &{}\quad -0.03935 &{}\quad -0.06839 &{}\quad 0.030933 &{}\quad -0.0128 &{}\quad 0.036011\\ &{} \ \ &{}\ \ &{}\quad 29.72531 &{}\quad 1.893676 &{}\quad 1.709939 &{}\quad 2.470889 &{}\quad -1.13704 &{}\quad -0.14801 &{}\quad -1.07707\\ &{} \ \ &{}\ \ &{}\quad &{}\quad 12.68621 &{}\quad 0.018155 &{}\quad 0.939443 &{}\quad -0.44036 &{}\quad 0.124222 &{}\quad -0.41513\\ &{} \ \ &{}\ \ &{}\quad &{}\quad &{}\quad 16.80886 &{}\quad 4.61311 &{}\quad -0.18215 &{}\quad 0.302885 &{}\quad -0.18056\\ &{} \ \ &{}\ \ &{} \quad &{} \quad &{} \quad &{} \quad 9.505603 &{}\quad -0.31379 &{}\quad 0.320674 &{}\quad -0.40826\\ &{} \ \ &{}\ \ &{}\quad &{}\quad &{}\quad &{}\quad &{}\quad 0.974863 &{}\quad 0.383262 &{}\quad 0.947959\\ &{} \ \ &{}\ \ &{}\quad &{}\quad &{}\quad &{}\quad &{}\quad &{}\quad 2.15487&{}\quad 0.3622\\ &{} \ \ &{}\ \ &{}\quad &{}\quad &{}\quad &{}\quad &{}\quad &{}\quad &{}\quad 1.026299\\ \end{array}\right) \end{aligned}$$

About this article

Cite this article

Surya, B.A., Kurniawan, R. Optimal Portfolio Selection Based on Expected Shortfall Under Generalized Hyperbolic Distribution. Asia-Pac Financ Markets 21, 193–236 (2014). https://doi.org/10.1007/s10690-014-9183-x
