Abstract
This paper studies optimal portfolio selection under Expected Shortfall as the risk measure. We employ the multivariate generalized hyperbolic (GH) distribution as the joint distribution of the risk factors of the underlying portfolio assets, which include stocks, currencies, and bonds. Working under this distribution, we derive the optimal portfolio strategy.
References
Aas, K., & Haff, I. H. (2006). The generalized hyperbolic skew student’s t-distribution. Journal of Financial Econometrics, 4(2), 275–309.
Acerbi, C., & Tasche, D. (2002). On the coherence of expected shortfall. Journal of Banking and Finance, 26(7), 1487–1503.
Artzner, P., Delbaen, F., Eber, J. M., & Heath, D. (1999). Coherent measures of risk. Mathematical Finance, 9, 203–228.
Barndorff-Nielsen, O. E. (1977). Exponentially decreasing distributions for the logarithm of particle size. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 353, 401–419.
Barndorff-Nielsen, O. E. (1997). Normal inverse Gaussian distributions and the modelling of stock returns. Scandinavian Journal of Statistics, 24, 1–13.
Barndorff-Nielsen, O. E., Kent, J., & Sørensen, M. (1982). Normal variance-mean mixtures and z distributions. International Statistical Review, 50(2), 145–159.
Cont, R. (2001). Empirical properties of asset returns: Stylized facts and statistical issues. Quantitative Finance, 1, 223–236.
Eberlein, E., & Keller, U. (1995). Hyperbolic distributions in finance. Bernoulli, 1(3), 281–299.
Hogg, R. V., McKean, J. W., & Craig, A. T. (2005). Introduction to mathematical statistics (6th ed.). New Jersey: Pearson Prentice Hall.
Hu, W. (2005). Calibration of multivariate generalized hyperbolic distributions using the EM algorithm, with applications in risk management, portfolio optimization and portfolio credit risk. Doctoral dissertation, College of Arts and Sciences, The Florida State University.
Madan, D. B., & Seneta, E. (1990). The variance gamma model for share market returns. Journal of Business, 63(4), 511–524.
Mandelbrot, B. (1963). The variation of certain speculative prices. Journal of Business, 36, 394–419.
McNeil, A., Frey, R., & Embrechts, P. (2005). Quantitative risk management: Concepts, techniques and tools. Princeton: Princeton University Press.
Appendices
Appendix 1: Calibration of GH Distribution
1.1 EM Algorithm
The EM (expectation-maximization) algorithm is a tool for estimating the unknown parameters of a distribution. The estimate is based on the maximum likelihood method.
Definition 9.1
(Likelihood Function) Suppose \(\mathbf {X} =(\mathbf {X_1},\ldots ,\mathbf {X_n})\) is a vector of \(n\) independent and identically distributed (i.i.d.) random variables \(\mathbf {X_i}\in \mathfrak {R}^d\), called random samples, with pdf \(f\). Denote the parameter space of the distribution by \(\Omega \). Define the likelihood function as the joint density of the random samples, denoted by
$$\begin{aligned} L({\varvec{\theta }};\mathbf {X}):=\prod _{i=1}^{n}f(\mathbf {X_i};{\varvec{\theta }}),\qquad {\varvec{\theta }}\in \Omega . \end{aligned}$$
Theorem 9.2
Suppose \(\mathbf {X}=(\mathbf {X_1},\ldots ,\mathbf {X_n})\) is a vector of \(n\) independent and identically distributed (i.i.d.) random variables \(\mathbf {X_i}\in \mathfrak {R}^d\) with pdf \(f\) and parameter space \(\Omega \). Let \(\varvec{\theta }_\mathbf{0}\) be the true parameter of this distribution. If the pdf has common support for all \(\varvec{\theta }\in \Omega \), then
$$\begin{aligned} \lim _{n\rightarrow \infty }P_{{\varvec{\theta }}_\mathbf{0}}\left[ L({\varvec{\theta }}_\mathbf{0};\mathbf {X})>L({\varvec{\theta }};\mathbf {X})\right] =1\quad \text {for all }{\varvec{\theta }}\ne {\varvec{\theta }}_\mathbf{0}. \end{aligned}$$
For proof, see Hogg et al. (2005) p. 313.
Based on the theorem, an estimate of the true distribution parameter can be obtained as
$$\begin{aligned} {\hat{{\varvec{\theta }}}}=\mathop {\text {argmax}}\limits _{{\varvec{\theta }}\in \Omega }L({\varvec{\theta }};\mathbf {X}). \end{aligned}$$
This estimate is called the maximum likelihood estimator (MLE).
In some cases, maximization of the likelihood function is intractable due to the presence of unobserved latent random variables. In our case, the unobserved variable can be regarded as the mixing random variable \(W\) in (2.2), while \(\mathbf {X}\) is the observed variable. The EM algorithm estimates the MLE by working with the unobserved variable together with the observed variable.
Algorithm 9.3
(EM Algorithm) Denote the observed random variables by \(\mathbf {X}=(\mathbf {X_1},\ldots ,\mathbf {X_n})\) and the unobserved random variables by \(\mathbf {Y}=(\mathbf {Y_1},\ldots ,\mathbf {Y_n})\), where \(\mathbf {X_i}\in \mathfrak {R}^d\) and \(\mathbf {Y_i}\in \mathfrak {R}^{\tilde{d}}\). Assume that the \(\mathbf {X_i}\)s are i.i.d. and so are the \(\mathbf {Y_i}\)s. Furthermore, let \(\mathbf {X}\) be independent of \(\mathbf {Y}\). Let \({\tilde{L}}({\varvec{\theta }};\mathbf {X},\mathbf {Y})\) be the joint pdf of \(\mathbf {X}\) and \(\mathbf {Y}\), where \(\varvec{\theta }\) is the parameter of the distribution of \(\mathbf {X}\). The EM algorithm proceeds as follows:
1. Give an initial estimate \({\hat{{\varvec{\theta }}}}^{\left( 0\right) }\) of the true distribution parameter.
2. Expectation Step. Compute
$$\begin{aligned} Q({\varvec{\theta }};{\hat{{\varvec{\theta }}}}^{\left( 0\right) }):=\mathbb {E}\left[ \log {\tilde{L}}\left( {\varvec{\theta }};\mathbf {X},\mathbf {Y}\right) |\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( 0\right) }\right] . \end{aligned}$$(9.5)
3. Maximization Step. Compute
$$\begin{aligned} {\hat{{\varvec{\theta }}}}^{\left( 1\right) }=\mathop {\text {argmax}}\limits _{{\varvec{\theta }}}Q({\varvec{\theta }};{\hat{{\varvec{\theta }}}}^{\left( 0\right) }), \end{aligned}$$(9.6)
the estimate of the first iteration.
4. Obtain the estimate of the \((m+1)\)-th iteration by executing step 2 and step 3 using the \(m\)-th estimate \({\hat{{\varvec{\theta }}}}^{\left( m\right) }\).
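The four steps above can be illustrated on a simple case that is not part of the paper: a two-component univariate Gaussian mixture, where the unobserved component labels play the role of \(\mathbf {Y}\) and both the E-step and the M-step have closed forms. This is a minimal sketch; all function and variable names are ours.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Univariate normal density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def em_two_gaussians(x, n_iter=100):
    """EM for a two-component Gaussian mixture.

    The component label of observation i is the unobserved Y_i;
    theta = (w, mu1, sigma1, mu2, sigma2).  Returns the final estimate
    and the log-likelihood recorded after each iteration.
    """
    # Step 1: initial estimate theta^(0).
    w, mu1, mu2 = 0.5, x.min(), x.max()
    s1 = s2 = x.std()
    loglik = []
    for _ in range(n_iter):
        # Expectation step: E[Y | X, theta^(m)] reduces to the posterior
        # probability r_i that observation i belongs to component 1.
        p1 = w * normal_pdf(x, mu1, s1)
        p2 = (1.0 - w) * normal_pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # Maximization step: the argmax of Q is weighted moment matching.
        w = r.mean()
        mu1 = np.sum(r * x) / np.sum(r)
        mu2 = np.sum((1 - r) * x) / np.sum(1 - r)
        s1 = np.sqrt(np.sum(r * (x - mu1) ** 2) / np.sum(r))
        s2 = np.sqrt(np.sum((1 - r) * (x - mu2) ** 2) / np.sum(1 - r))
        # Observed-data log-likelihood at the current estimate.
        loglik.append(np.log(w * normal_pdf(x, mu1, s1)
                             + (1 - w) * normal_pdf(x, mu2, s2)).sum())
    return (w, mu1, s1, mu2, s2), loglik
```

Tracking `loglik` makes Theorem 9.4 below visible in practice: the recorded sequence is non-decreasing from one iteration to the next.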
Theorem 9.4
The sequence of estimates \({\hat{{\varvec{\theta }}}}^{\left( m\right) }\), defined by Algorithm 9.3, satisfies
$$\begin{aligned} L({\hat{{\varvec{\theta }}}}^{\left( m+1\right) };\mathbf {X})\ge L({\hat{{\varvec{\theta }}}}^{\left( m\right) };\mathbf {X}). \end{aligned}$$
For proof, see Hogg et al. (2005) pp. 360–361.
The theorem does not guarantee that the EM estimates converge to the MLE, but it does guarantee that the likelihood function never decreases from one iteration to the next. This monotonicity is the basis for using the algorithm to estimate the MLE.
1.2 Calibration Using EM
To use the EM algorithm to calibrate the GH distribution, the observed and unobserved variables must be identified. From representation (2.2), the random variable \(\mathbf {X}\) can be regarded as the observed variable, while \(W\) acts as the unobserved variable. The next step is to formulate \(Q({\varvec{\theta }};{\hat{{\varvec{\theta }}}}^{\left( k\right) })\), where \({\hat{{\varvec{\theta }}}}^{\left( k\right) }=(\lambda ^{\left( k\right) },\chi ^{\left( k\right) },\psi ^{\left( k\right) },{\varvec{\mu }}^{\left( k\right) },{\varvec{\Sigma }}^{\left( k\right) },{\varvec{\gamma }}^{\left( k\right) })\).
Let \(W=(W_1,\ldots ,W_n)\) be a vector of \(n\) i.i.d. GIG random variables and \(\mathbf {X}|W=(\mathbf {X_1}|W_1,\ldots ,\mathbf {X_n}|W_n)\) be a vector of \(n\) i.i.d. GH random variables \(\mathbf {X_i}\in \mathfrak {R}^d\) conditional on their mixing variables \(W_i\). Suppose the \(k\)-th estimate \({\hat{{\varvec{\theta }}}}^{\left( k\right) }\) has been obtained; then the log-likelihood function \(\log \tilde{L}\) needs to be calculated. Since from representation (2.2) we have \(\mathbf {X_i}|W_i\sim N(\varvec{\mu }+W_i\varvec{\gamma },W_i\varvec{\Sigma })\), the log-likelihood splits into a conditional normal part and a GIG part:
$$\begin{aligned} \log \tilde{L}({\varvec{\theta }};\mathbf {X},W)=\sum _{i=1}^{n}\log f_{\mathbf {X_i}|W_i}(\mathbf {x_i}|w_i;{\varvec{\mu }},{\varvec{\Sigma }},{\varvec{\gamma }})+\sum _{i=1}^{n}\log f_{W_i}(w_i;\lambda ,\chi ,\psi ). \end{aligned}$$(9.8)
Let \(L_1\) denote the first sum in (9.8) and \(L_2\) the second. Then, to get the next estimate, we have to maximize
$$\begin{aligned} Q({\varvec{\theta }};{\hat{{\varvec{\theta }}}}^{\left( k\right) })=\mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]+\mathbb {E}[L_2|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]. \end{aligned}$$(9.11)
From (9.8) and (9.11), it is evident that each term of (9.11) can be maximized separately: by (9.8), \(L_1\) does not contain \(\lambda ,\chi ,\psi \) from the current iteration, and \(L_2\) does not contain \({\varvec{\mu }},{\varvec{\Sigma }},{\varvec{\gamma }}\) from the current iteration. Those parameters do appear later through the conditional expectations, but there they come from the previous iteration and can therefore be regarded as constants rather than variables.
From the formula of the pdf of the \(d\)-dimensional multivariate normal distribution \(N({\varvec{\mu }},{\varvec{\Sigma }})\),
$$\begin{aligned} f(\mathbf {x})=(2\pi )^{-\frac{d}{2}}|{\varvec{\Sigma }}|^{-\frac{1}{2}}\exp \left( -\tfrac{1}{2}(\mathbf {x}-{\varvec{\mu }})'{\varvec{\Sigma }}^{-1}(\mathbf {x}-{\varvec{\mu }})\right) , \end{aligned}$$
we have, writing \(\rho _{\mathbf {x_i}}:=(\mathbf {x_i}-{\varvec{\mu }})'{\varvec{\Sigma }}^{-1}(\mathbf {x_i}-{\varvec{\mu }})\),
$$\begin{aligned} f(\mathbf {x_i}|w_i)=(2\pi )^{-\frac{d}{2}}w_i^{-\frac{d}{2}}|{\varvec{\Sigma }}|^{-\frac{1}{2}}\exp \left( -\frac{\rho _{\mathbf {x_i}}}{2w_i}+(\mathbf {x_i}-{\varvec{\mu }})'{\varvec{\Sigma }}^{-1}{\varvec{\gamma }}-\frac{w_i}{2}{\varvec{\gamma }}'{\varvec{\Sigma }}^{-1}{\varvec{\gamma }}\right) . \end{aligned}$$(9.14)
Hence,
$$\begin{aligned} L_1=-\frac{nd}{2}\log (2\pi )-\frac{n}{2}\log |{\varvec{\Sigma }}|-\frac{d}{2}\sum _{i=1}^{n}\log w_i-\frac{1}{2}\sum _{i=1}^{n}\frac{\rho _{\mathbf {x_i}}}{w_i}+\sum _{i=1}^{n}(\mathbf {x_i}-{\varvec{\mu }})'{\varvec{\Sigma }}^{-1}{\varvec{\gamma }}-\frac{1}{2}{\varvec{\gamma }}'{\varvec{\Sigma }}^{-1}{\varvec{\gamma }}\sum _{i=1}^{n}w_i. \end{aligned}$$(9.15)
If \(W\sim GIG(\lambda ,\chi ,\psi )\), \(L_2\) takes the form
$$\begin{aligned} L_2=\frac{n\lambda }{2}\log \left( \frac{\psi }{\chi }\right) -n\log \left( 2K_{\lambda }(\sqrt{\chi \psi })\right) +(\lambda -1)\sum _{i=1}^{n}\log w_i-\frac{\chi }{2}\sum _{i=1}^{n}\frac{1}{w_i}-\frac{\psi }{2}\sum _{i=1}^{n}w_i. \end{aligned}$$(9.16)
In (9.15)–(9.16), the factor \(w_i\) appears in the forms \(w_i\), \(w_i^{-1}\), and \(\log w_i\). Hence, to compute the conditional expectations, it is necessary to compute
$$\begin{aligned} \delta _i^{\left( k\right) }:=\mathbb {E}[W_i^{-1}|\mathbf {X_i};{\hat{{\varvec{\theta }}}}^{\left( k\right) }],\qquad \eta _i^{\left( k\right) }:=\mathbb {E}[W_i|\mathbf {X_i};{\hat{{\varvec{\theta }}}}^{\left( k\right) }]. \end{aligned}$$(9.17)
Also define
$$\begin{aligned} \xi _i^{\left( k\right) }:=\mathbb {E}[\log W_i|\mathbf {X_i};{\hat{{\varvec{\theta }}}}^{\left( k\right) }]. \end{aligned}$$(9.18)
To compute them, the distribution of \(W_i|\mathbf {X_i};{\hat{{\varvec{\theta }}}}^{\left( k\right) }\) needs to be determined. Its pdf is given by Bayes' formula
$$\begin{aligned} f_{W_i|\mathbf {X_i}}(w_i|\mathbf {x_i})=\frac{f(\mathbf {x_i}|w_i)f_{W_i}(w_i)}{f_{\mathbf {X_i}}(\mathbf {x_i})}. \end{aligned}$$(9.19)
By substituting (2.1), (2.4), and (9.14) into (9.19), it can be checked (with formula (2.1)) that the resulting pdf is that of the GIG distribution \(N^{-}\!\left( \lambda ^{\left( k\right) }-\frac{d}{2},\rho _{\mathbf {x_i}}^{\left( k\right) }+\chi ^{\left( k\right) },\psi ^{\left( k\right) }+{\varvec{\gamma }^{\left( k\right) }}'({\varvec{\Sigma }^{\left( k\right) }})^{-1}\varvec{\gamma }^{\left( k\right) }\right) \). Hence, writing \(\bar{\lambda }:=\lambda ^{\left( k\right) }-\frac{d}{2}\), \(\bar{\chi }_i:=\chi ^{\left( k\right) }+\rho _{\mathbf {x_i}}^{\left( k\right) }\), and \(\bar{\psi }:=\psi ^{\left( k\right) }+{\varvec{\gamma }^{\left( k\right) }}'({\varvec{\Sigma }^{\left( k\right) }})^{-1}\varvec{\gamma }^{\left( k\right) }\), the GIG moment formula \(\mathbb {E}[W^{\alpha }]=(\chi /\psi )^{\alpha /2}K_{\lambda +\alpha }(\sqrt{\chi \psi })/K_{\lambda }(\sqrt{\chi \psi })\) gives
$$\begin{aligned} \delta _i^{\left( k\right) }=\left( \frac{\bar{\chi }_i}{\bar{\psi }}\right) ^{-\frac{1}{2}}\frac{K_{\bar{\lambda }-1}(\sqrt{\bar{\chi }_i\bar{\psi }})}{K_{\bar{\lambda }}(\sqrt{\bar{\chi }_i\bar{\psi }})}, \end{aligned}$$(9.20)
$$\begin{aligned} \eta _i^{\left( k\right) }=\left( \frac{\bar{\chi }_i}{\bar{\psi }}\right) ^{\frac{1}{2}}\frac{K_{\bar{\lambda }+1}(\sqrt{\bar{\chi }_i\bar{\psi }})}{K_{\bar{\lambda }}(\sqrt{\bar{\chi }_i\bar{\psi }})}, \end{aligned}$$(9.21)
$$\begin{aligned} \xi _i^{\left( k\right) }=\left. \frac{\partial }{\partial \alpha }\left( \frac{\bar{\chi }_i}{\bar{\psi }}\right) ^{\frac{\alpha }{2}}\frac{K_{\bar{\lambda }+\alpha }(\sqrt{\bar{\chi }_i\bar{\psi }})}{K_{\bar{\lambda }}(\sqrt{\bar{\chi }_i\bar{\psi }})}\right| _{\alpha =0}. \end{aligned}$$(9.22)
So, to obtain \(\mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]\) and \(\mathbb {E}[L_2|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]\), we only have to substitute (9.20) to (9.22) into (9.15) and (9.16). Now, we can proceed to the maximization process.
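In practice, these conditional GIG moments reduce to ratios of modified Bessel functions of the second kind \(K_\nu\), with \(\mathbb {E}[\log W]\) obtained as a derivative in the moment order. A minimal numerical sketch in Python (our own illustration, not the paper's code), using the standard GIG moment formula \(\mathbb {E}[W^{\alpha }]=(\chi /\psi )^{\alpha /2}K_{\lambda +\alpha }(\sqrt{\chi \psi })/K_{\lambda }(\sqrt{\chi \psi })\):

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

def gig_moment(alpha, lam, chi, psi):
    """E[W^alpha] for W ~ GIG(lam, chi, psi), whose density is
    proportional to w^(lam-1) * exp(-(chi/w + psi*w)/2), w > 0."""
    z = np.sqrt(chi * psi)
    return (chi / psi) ** (alpha / 2.0) * kv(lam + alpha, z) / kv(lam, z)

def gig_log_moment(lam, chi, psi, h=1e-5):
    """E[log W] as the derivative of E[W^alpha] at alpha = 0,
    approximated here by a central finite difference."""
    return (gig_moment(h, lam, chi, psi)
            - gig_moment(-h, lam, chi, psi)) / (2.0 * h)
```

Evaluating these at the posterior parameters of each observation, i.e. `gig_moment(-1, ...)`, `gig_moment(1, ...)`, and `gig_log_moment(...)`, yields the conditional expectations needed for the E-step.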
Since \(L_1\) does not depend on the distribution of \(W\) as explained before, we only need to consider the values of the parameters \({\varvec{\mu }},{\varvec{\Sigma }},{\varvec{\gamma }}\) to maximize \(\mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]\). Following the standard optimization routine of Hu (2005), set
$$\begin{aligned} \frac{\partial }{\partial {\varvec{\mu }}}\mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]=\mathbf {0}, \end{aligned}$$(9.23)
$$\begin{aligned} \frac{\partial }{\partial {\varvec{\gamma }}}\mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]=\mathbf {0}, \end{aligned}$$(9.24)
$$\begin{aligned} \frac{\partial }{\partial {\varvec{\Sigma }}}\mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]=\mathbf {0}. \end{aligned}$$(9.25)
We can find the next estimate \((\varvec{\mu }^{\left( k+1\right) },\varvec{\Sigma }^{\left( k+1\right) },\varvec{\gamma }^{\left( k+1\right) })\) by solving the above system: first solve (9.23) to find \(\varvec{\mu }^{\left( k+1\right) }\), then substitute it into (9.24) to find \(\varvec{\gamma }^{\left( k+1\right) }\). Finally, \(\varvec{\Sigma }^{\left( k+1\right) }\) is obtained by substituting \((\varvec{\mu }^{\left( k+1\right) },\varvec{\gamma }^{\left( k+1\right) })\) into (9.25). The process is as follows.
Using (9.15), we have
$$\begin{aligned} \frac{\partial }{\partial {\varvec{\mu }}}\mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]={\varvec{\Sigma }}^{-1}\sum _{i=1}^{n}\delta _i^{\left( k\right) }(\mathbf {x_i}-{\varvec{\mu }})-n{\varvec{\Sigma }}^{-1}{\varvec{\gamma }}, \end{aligned}$$(9.26)
$$\begin{aligned} \frac{\partial }{\partial {\varvec{\gamma }}}\mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]={\varvec{\Sigma }}^{-1}\sum _{i=1}^{n}(\mathbf {x_i}-{\varvec{\mu }})-{\varvec{\Sigma }}^{-1}{\varvec{\gamma }}\sum _{i=1}^{n}\eta _i^{\left( k\right) }, \end{aligned}$$(9.27)
$$\begin{aligned} \frac{\partial }{\partial {\varvec{\Sigma }}^{-1}}\mathbb {E}[L_1|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]=\frac{n}{2}{\varvec{\Sigma }}-\frac{1}{2}\sum _{i=1}^{n}\delta _i^{\left( k\right) }(\mathbf {x_i}-{\varvec{\mu }})(\mathbf {x_i}-{\varvec{\mu }})'+\frac{1}{2}\sum _{i=1}^{n}\left[ (\mathbf {x_i}-{\varvec{\mu }}){\varvec{\gamma }}'+{\varvec{\gamma }}(\mathbf {x_i}-{\varvec{\mu }})'\right] -\frac{1}{2}{\varvec{\gamma }}{\varvec{\gamma }}'\sum _{i=1}^{n}\eta _i^{\left( k\right) }. \end{aligned}$$(9.28)
By setting Eq. (9.26) to \(\mathbf {0}\), we can get the next estimate for \(\varvec{\mu }\) as
$$\begin{aligned} {\varvec{\mu }}^{\left( k+1\right) }=\frac{\frac{1}{n}\sum _{i=1}^{n}\delta _i^{\left( k\right) }\mathbf {x_i}-{\varvec{\gamma }}}{\bar{\delta }^{\left( k\right) }},\qquad \bar{\delta }^{\left( k\right) }:=\frac{1}{n}\sum _{i=1}^{n}\delta _i^{\left( k\right) }. \end{aligned}$$(9.29)
By setting Eq. (9.27) to \(\mathbf {0}\) and then substituting \(\varvec{\mu }\) with formula (9.29), we get the estimate
$$\begin{aligned} {\varvec{\gamma }}^{\left( k+1\right) }=\frac{\frac{1}{n}\sum _{i=1}^{n}\delta _i^{\left( k\right) }(\bar{\mathbf {x}}-\mathbf {x_i})}{\bar{\delta }^{\left( k\right) }\bar{\eta }^{\left( k\right) }-1},\qquad \bar{\eta }^{\left( k\right) }:=\frac{1}{n}\sum _{i=1}^{n}\eta _i^{\left( k\right) },\quad \bar{\mathbf {x}}:=\frac{1}{n}\sum _{i=1}^{n}\mathbf {x_i}. \end{aligned}$$(9.30)
From (9.30), since \({\varvec{\gamma }}^{\left( k+1\right) }\) does not depend on \({\varvec{\mu }}^{\left( k+1\right) }\), the \({\varvec{\gamma }}\) in (9.29) can be replaced by \({\varvec{\gamma }}^{\left( k+1\right) }\) to get
$$\begin{aligned} {\varvec{\mu }}^{\left( k+1\right) }=\frac{\frac{1}{n}\sum _{i=1}^{n}\delta _i^{\left( k\right) }\mathbf {x_i}-{\varvec{\gamma }}^{\left( k+1\right) }}{\bar{\delta }^{\left( k\right) }}. \end{aligned}$$(9.31)
By setting Eq. (9.28) to \(\mathbf {0}\) and then substituting \(\varvec{\mu }\) and \(\varvec{\gamma }\) with formulas (9.31) and (9.30), we get the estimate
$$\begin{aligned} {\varvec{\Sigma }}^{\left( k+1\right) }=\frac{1}{n}\sum _{i=1}^{n}\delta _i^{\left( k\right) }(\mathbf {x_i}-{\varvec{\mu }}^{\left( k+1\right) })(\mathbf {x_i}-{\varvec{\mu }}^{\left( k+1\right) })'-\bar{\eta }^{\left( k\right) }{\varvec{\gamma }}^{\left( k+1\right) }{{\varvec{\gamma }}^{\left( k+1\right) }}'. \end{aligned}$$(9.32)
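The location-parameter part of the M-step can be sketched compactly. The following Python function is our own illustration of these standard GH updates (cf. McNeil et al. 2005); `delta` and `eta` hold the conditional expectations of \(W_i^{-1}\) and \(W_i\) from the E-step, and all names are ours.

```python
import numpy as np

def gh_location_step(x, delta, eta):
    """One M-step update of (mu, gamma, Sigma) for a GH distribution.

    x     : (n, d) array of observations
    delta : (n,) conditional expectations E[1/W_i | X_i]
    eta   : (n,) conditional expectations E[W_i | X_i]
    """
    n = x.shape[0]
    d_bar, e_bar = delta.mean(), eta.mean()
    x_bar = x.mean(axis=0)
    # gamma update: weighted deviation from the sample mean.
    gamma = (delta[:, None] * (x_bar - x)).mean(axis=0) / (d_bar * e_bar - 1.0)
    # mu update: weighted mean corrected by the skewness term.
    mu = ((delta[:, None] * x).mean(axis=0) - gamma) / d_bar
    # Sigma update: weighted scatter minus the skewness contribution.
    z = x - mu
    Sigma = (delta[:, None] * z).T @ z / n - e_bar * np.outer(gamma, gamma)
    return mu, gamma, Sigma
```

When the weights are constant across observations (an effectively degenerate mixing variable), the update collapses to `gamma = 0`, `mu` equal to the sample mean, and `Sigma` proportional to the sample scatter matrix, as one would expect.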
The maximization of \(\mathbb {E}[L_2|\mathbf {X},{\hat{{\varvec{\theta }}}}^{\left( k\right) }]\) depends on the parameters \((\lambda ,\chi ,\psi )\). In calibrating the GH distribution, this maximization is intractable because of \(\lambda \), so \(\lambda \) must be fixed. As Hu (2005, p. 29) states, no one has calibrated \(\lambda \) exactly; what we can do is execute the algorithm for several choices of \(\lambda \), obtain the values of the other parameters for each choice, evaluate the likelihood function over those parameter sets, and select the value of \(\lambda \) that results in the highest likelihood value. So, from now on, we calibrate the GH distribution only for fixed \(\lambda \).
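A full GH calibration is beyond a short sketch, but the profile-likelihood idea for \(\lambda \) can be illustrated on a one-dimensional GIG sample using scipy, whose `geninvgauss` family has a shape parameter `p` playing the role of \(\lambda \). The function name `profile_lambda` and the candidate grid are our own assumptions, not the paper's.

```python
import numpy as np
from scipy import stats

def profile_lambda(data, lambdas):
    """Profile-likelihood selection of a frozen shape parameter.

    For each candidate lam, fit the remaining GIG parameters by MLE with
    the shape p frozen at lam (scipy's fp keyword), record the maximized
    log-likelihood, and return the candidate with the highest value.
    """
    results = []
    for lam in lambdas:
        # fp freezes scipy's shape parameter p; floc pins the location at 0.
        p, b, loc, scale = stats.geninvgauss.fit(data, fp=lam, floc=0.0)
        ll = stats.geninvgauss.logpdf(data, p, b, loc=loc, scale=scale).sum()
        results.append((ll, lam))
    return max(results)[1]
```

The same outer loop applies to the multivariate GH case: run the \(\chi \)- or \(\psi \)-algorithm to convergence for each fixed \(\lambda \) on the grid, then compare the attained likelihood values.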
Define
To overcome the identifiability problem when calibrating GH distribution, one of the free parameters must be fixed. To make it simple, either \(\chi \) or \(\psi \) can be fixed. By substituting (9.20)–(9.21) into (9.16) we obtain
If we fix \(\chi \), to maximize (9.34) we need to set
Using the formula
we have
Then, we can get \(\phi ^{\left( k+1\right) }\) by solving
Hence, we obtain \(\psi ^{\left( k+1\right) }\) as
This method, where \(\chi \) is fixed, is called the \(\chi \)-algorithm.
If we fix \(\psi \), to maximize (9.34) we need to set
Using the formula
we have
Then, we can get \(\phi ^{\left( k+1\right) }\) by solving
Hence, we obtain \(\chi ^{\left( k+1\right) }\) as
This method, where \(\psi \) is fixed, is called the \(\psi \)-algorithm.
Appendix 2: Parameters of In-sample Multivariate GH Distribution
Surya, B.A., Kurniawan, R. Optimal Portfolio Selection Based on Expected Shortfall Under Generalized Hyperbolic Distribution. Asia-Pac Financ Markets 21, 193–236 (2014). https://doi.org/10.1007/s10690-014-9183-x