
A Stein-type shrinkage estimator of the covariance matrix for portfolio selections

Published in Metrika.

Abstract

The covariance matrix plays a crucial role in portfolio optimization problems as the measure of risk and correlation of asset returns, and an improved estimate of the covariance matrix can enhance portfolio performance. In this paper, based on the Cholesky decomposition of the covariance matrix, a Stein-type shrinkage strategy for portfolio weights is constructed under the mean-variance framework. Furthermore, a portfolio selection strategy is proposed according to the agent’s maximum expected utility value. Finally, simulation experiments and an empirical study are used to test the feasibility of the proposed strategy. The numerical results show that our portfolio strategy performs satisfactorily.


References

  • Bai JS, Shi SZ (2011) Estimating high dimensional covariance matrices and its applications. Ann Econ Finance 12:199–215

  • Best MJ, Grauer RR (1991) On the sensitivity of mean-variance efficient portfolios to changes in asset means: some analytical and computational results. Rev Financ Stud 4:315–342

  • Bodnar T, Mazur S, Podgórski K (2016) Singular inverse Wishart distribution and its application to portfolio theory. J Multivar Anal 143:314–326

  • Chan LKC, Karceski J, Lakonishok J (1999) On portfolio optimization: forecasting covariances and choosing the risk model. Rev Financ Stud 12:937–974

  • DeMiguel V, Garlappi L, Nogales FJ, Uppal R (2009) A generalized approach to portfolio optimization: improving performance by constraining portfolio norms. Manag Sci 55:798–812

  • Fan JQ, Fan YY, Lv JC (2008) High dimensional covariance matrix estimation using a factor model. J Econom 147:186–197

  • Fourdrinier D, Strawderman W (2015) Robust minimax Stein estimation under invariant data-based loss for spherically and elliptically symmetric distributions. Metrika 78:461–484

  • Gillen BJ (2014) An empirical Bayesian approach to Stein-optimal covariance matrix estimation. J Empir Finance 29:402–420

  • Grübel R (1988) A minimal characterization of the covariance matrix. Metrika 35:49–52

  • Haff LR (1979) Estimation of the inverse covariance matrix: random mixtures of the inverse Wishart matrix and identity. Ann Stat 7:1264–1276

  • Hannart A, Naveau P (2014) Estimating high dimensional covariance matrices: a new look at the Gaussian conjugate framework. J Multivar Anal 131:149–162

  • Ikeda Y, Kubokawa T (2016) Linear shrinkage estimation of large covariance matrices using factor models. J Multivar Anal 152:61–81

  • James W, Stein C (1961) Estimation with quadratic loss. In: Proceedings of the 4th Berkeley symposium on mathematical statistics and probability. University of California Press, Berkeley

  • Kan R, Zhou G (2007) Optimal portfolio choice with parameter uncertainty. J Financ Quant Anal 42:621–656

  • Kourtis A, Dotsis G, Markellos RN (2012) Parameter uncertainty in portfolio selection: shrinking the inverse covariance matrix. J Bank Finance 36:2522–2531

  • Kubokawa T, Srivastava MS (2008) Estimation of the precision matrix of a singular Wishart distribution and its application in high-dimensional data. J Multivar Anal 99:1906–1928

  • Lan W, Wang HS, Tsai C (2012) A Bayesian information criterion for portfolio selection. Comput Stat Data Anal 56:88–99

  • Ledoit O, Wolf M (2003) Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. J Empir Finance 10:603–621

  • Ledoit O, Wolf M (2004) A well-conditioned estimator for large-dimensional covariance matrices. J Multivar Anal 88:365–411

  • Liagkouras K, Metaxiotis K (2018) Multi-period mean-variance fuzzy portfolio optimization model with transaction costs. Eng Appl Artif Intell 67:260–269

  • Markowitz HM (1952) Portfolio selection. J Finance 7:77–91

  • Merton RC (1980) On estimating the expected return on the market: an exploratory investigation. J Financ Econ 8:323–361

  • Michaud RO (1989) The Markowitz optimization enigma: is optimized optimal? Financ Anal J 45:31–42

  • Muirhead R (1982) Aspects of multivariate statistical theory. Wiley, Chichester

  • Okhrin Y, Schmid W (2006) Distributional properties of portfolio weights. J Econom 134:235–256

  • Plachky D, Rukhin A (1999) Nonparametric covariance estimation in multivariate distributions. Metrika 50:131–136

  • Sharpe WF (1963) A simplified model for portfolio analysis. Manag Sci 9:277–293

  • Stevens GVG (1998) On the inverse of the covariance matrix in portfolio analysis. J Finance 53:1821–1827

  • Vigna E (2014) On efficiency of mean-variance based portfolio selection in defined contribution pension schemes. Quant Finance 14:237–258

  • Yu WT, Pang WK, Troutt MD (2009) Objective comparisons of the optimal portfolios corresponding to different utility functions. Eur J Oper Res 199:604–610


Acknowledgements

We would like to thank the Editor, Associate Editor and Referees very much for their constructive comments, which significantly helped us improve the manuscript. The first two authors’ research was supported by the Fundamental Research Funds for Central Universities (Nos. JBK1607121, JBK120509, JBK140507). This study was also supported by the National Natural Science Foundation of China (Nos. 11471264, 11401148, 11571282, 51437003).

Author information


Corresponding author

Correspondence to Shuangzhe Liu.

Appendix

1.1 Some preliminaries

To construct our Stein-type shrinkage strategy for portfolio selection, we first present Lemmas 4 and 5, which are needed to establish the main results of this paper; they are followed by the proofs of Lemmas 1–3, Theorem 1 and Corollary 1.

Lemma 4

(Haff 1979) Let A follow a Wishart distribution with n degrees of freedom and scale matrix \(\varSigma \), denoted as \(A \sim W_{p}(n,\varSigma )\), where \(n\ge p+4\). Then

$$\begin{aligned} E(A^{-1}VA^{-1})= & {} \frac{tr(\varSigma ^{-1}V)}{(n-p)(n-p-1)(n-p-3)}\varSigma ^{-1}\\&+\,\frac{1}{(n-p)(n-p-3)}\varSigma ^{-1}V\varSigma ^{-1}, \end{aligned}$$

where \(V\ge 0\) is a \(p\times p\) non-random matrix.
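As a quick sanity check (not part of the paper), Haff's identity can be verified by simulation. The sketch below assumes arbitrary illustrative values of p, n, \(\varSigma \) and V, and compares the Monte Carlo average of \(A^{-1}VA^{-1}\) with the right-hand side of the identity:

```python
# Monte Carlo sanity check of the Haff (1979) identity stated in Lemma 4.
# p, n, Sigma and V below are illustrative choices (n >= p + 4 required).
import numpy as np
from scipy.stats import wishart

p, n = 2, 30
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
V = np.array([[1.0, 0.2], [0.2, 3.0]])          # non-random, V >= 0

# Draw A ~ W_p(n, Sigma) and average A^{-1} V A^{-1} over the draws
A = wishart.rvs(df=n, scale=Sigma, size=200_000, random_state=12345)
Ainv = np.linalg.inv(A)
empirical = (Ainv @ V @ Ainv).mean(axis=0)

# Right-hand side of Lemma 4
Sinv = np.linalg.inv(Sigma)
theory = (np.trace(Sinv @ V) / ((n - p) * (n - p - 1) * (n - p - 3))) * Sinv \
         + (Sinv @ V @ Sinv) / ((n - p) * (n - p - 3))

print(empirical)
print(theory)
```

With 200,000 draws the two matrices agree entry-wise to within Monte Carlo error.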

Lemma 5

(Muirhead 1982, Theorem 3.2.14) Let \(A \sim W_{p}(n,I_{p})\), where \(n\ge p\) and \(I_{p}\) is a \(p\times p\) identity matrix, and \(A=C'C\), where C is an upper triangular \(p\times p\) matrix with positive diagonal elements. Then the elements \(c_{ij}(1\le i\le j\le p)\) of C are all independent, \(c_{ii}^{2}\sim \chi _{n-i+1}^{2}(i=1,\ldots ,p)\), and \(c_{ij}\sim N(0,1) (1\le i<j\le p)\).
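Lemma 5 can also be illustrated numerically. Since \(A=C'C\) with C upper triangular is equivalent to the standard Cholesky factorization \(A=LL'\) with \(C=L'\), the diagonal entries of the Cholesky factor of a \(W_{p}(n,I_{p})\) draw should satisfy \(c_{ii}^{2}\sim \chi _{n-i+1}^{2}\). A minimal sketch, with arbitrary illustrative dimensions:

```python
# Simulation sketch of Lemma 5 (Bartlett decomposition); p, n are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
p, n, reps = 3, 10, 50_000

X = rng.standard_normal((reps, n, p))
A = np.swapaxes(X, 1, 2) @ X                      # A = X'X ~ W_p(n, I_p)
L = np.linalg.cholesky(A)                         # A = L L'; C = L' is upper triangular
diag_sq = np.diagonal(L, axis1=1, axis2=2) ** 2   # c_ii^2 = l_ii^2

# E(chi^2_k) = k, so column i (1-based) should average to n - i + 1
print(diag_sq.mean(axis=0))                       # approximately [10, 9, 8]
```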

1.2 Proof of Lemma 1

We recall that \(\hat{\mu }=\frac{1}{n}\sum _{t=1}^{n}R_{t}\) and \(\widehat{\mu }\sim N(\mu ,\varSigma /n)\). Here \(\varGamma \) is the upper triangular matrix from the Cholesky decomposition of \(\varSigma \), written as \(\varSigma =\varGamma '\varGamma \), and \(\widetilde{\mu }=(\varGamma ')^{-1}\widehat{\mu }\). Similarly, F is the upper triangular matrix from the Cholesky decomposition of \(n\hat{\varSigma }\), written as \(n\hat{\varSigma }=F'F\), and \(\tilde{F}=(\tilde{f}_{ij})=F\varGamma ^{-1}\). Hence \(E(\widehat{\mu }\widehat{\mu }')=\varSigma /n+\mu \mu '\); moreover, \(\widehat{\mu }\) and \(\hat{\varSigma }\) are independent of each other.

Substituting \(\hat{w}^{s}=\frac{1}{\gamma }F^{-1}H(F')^{-1}\hat{\mu }\) into (1), we have

$$\begin{aligned} U(\hat{w}^{s})=\frac{1}{\gamma }\widehat{\mu }'F^{-1}H(F')^{-1}\mu -\frac{1}{2\gamma }\widehat{\mu }'F^{-1}H(F')^{-1}\varSigma F^{-1}H(F')^{-1}\widehat{\mu }. \end{aligned}$$

Taking the expectation of \(U(\hat{w}^{s})\), we obtain

$$\begin{aligned} E(U(\hat{w}^{s}))=\frac{1}{\gamma }E(\widehat{\mu }'F^{-1}H(F')^{-1}\mu )-\frac{1}{2\gamma }E(\widehat{\mu }'F^{-1}H(F')^{-1}\varSigma F^{-1}H(F')^{-1}\widehat{\mu }). \end{aligned}$$

We calculate the two parts of \(E(U(\hat{w}^{s}))\) as follows:

$$\begin{aligned}&E(\widehat{\mu }'F^{-1}H(F')^{-1}\mu )\\&\quad =E(\widehat{\mu }')E(F^{-1}H(F')^{-1})\mu =\mu 'E(F^{-1}H(F')^{-1})\mu \\&\quad =E(\mu '\varGamma ^{-1}\widetilde{F}^{-1}H(\widetilde{F}')^{-1}(\varGamma ')^{-1}\mu )=\mu '\varGamma ^{-1}A_{1}(\varGamma ')^{-1}\mu ,\\&\qquad E(\widehat{\mu }'F^{-1}H(F')^{-1}\varSigma F^{-1}H(F')^{-1}\widehat{\mu })\\&\quad =E(tr(\widehat{\mu }'F^{-1}H(F')^{-1}\varSigma F^{-1}H(F')^{-1}\widehat{\mu }))\\&\quad =E(tr(F^{-1}H(F')^{-1}\varSigma F^{-1}H(F')^{-1}\widehat{\mu }\widehat{\mu }'))\\&\quad =tr(E(F^{-1}H(F')^{-1}\varSigma F^{-1}H(F')^{-1})E(\widehat{\mu }\widehat{\mu }'))\\&\quad =\frac{1}{n}tr(E(F^{-1}H(F')^{-1}\varSigma F^{-1}H(F')^{-1}\varSigma ))\\&\qquad +E(\mu 'F^{-1}H(F')^{-1}\varSigma F^{-1}H(F')^{-1}\mu )\\&\quad =\frac{1}{n}tr(A_{2})+\mu '\varGamma ^{-1}A_{2}(\varGamma ')^{-1}\mu . \end{aligned}$$

The proof is complete. \(\square \)
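The moment identity \(E(\widehat{\mu }\widehat{\mu }')=\varSigma /n+\mu \mu '\) invoked in this proof is easy to check by simulation. A minimal sketch, assuming arbitrary illustrative values of \(\mu \), \(\varSigma \), n:

```python
# Monte Carlo check of E(mu_hat mu_hat') = Sigma/n + mu mu'.
# mu, Sigma, n, reps are illustrative choices, not from the paper.
import numpy as np

rng = np.random.default_rng(2)
p, n, reps = 2, 25, 50_000
mu = np.array([0.5, -0.3])
Sigma = np.array([[1.0, 0.4], [0.4, 2.0]])

R = rng.multivariate_normal(mu, Sigma, size=(reps, n))   # reps datasets of n returns
mu_hat = R.mean(axis=1)                                  # sample means, shape (reps, p)
empirical = np.einsum('ki,kj->ij', mu_hat, mu_hat) / reps
theory = Sigma / n + np.outer(mu, mu)
print(np.abs(empirical - theory).max())                  # close to zero
```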

1.3 Proof of Lemma 2

1.3.1 Step 1

In order to compute \(A_{1}\), we need \(\tilde{F}\) and H to be decomposed as

$$\begin{aligned} \tilde{F}=\left( \begin{array}{cc} \tilde{f}_{11} &{}\quad \tilde{f}_{21} \\ \mathbf 0 &{}\quad \tilde{F}_{22} \end{array} \right) , \ \ H=\left( \begin{array}{cc} h_{1} &{}\quad \mathbf 0 ' \\ \mathbf 0 &{} \quad H_{2} \end{array} \right) , \end{aligned}$$

where \(\tilde{f}_{11}\) and \(h_{1}\) are scalars, \(\tilde{F}_{22}\) and \(H_{2}\) are the \((p-1)\times (p-1)\) submatrices of \(\tilde{F}\) and H, respectively.

Then \(A_{1}\) can be expressed as

$$\begin{aligned} \begin{array}{lll} A_{1} &{} =&{}E[\tilde{F}^{-1}H(\tilde{F}')^{-1}] \\ &{} = &{}E\left( \begin{array}{cc} \tilde{f}_{11}^{-2}h_{1}+\tilde{f}_{11}^{-2}\tilde{f}_{21}\tilde{F}_{22}^{-1}H_{2}(\tilde{F}_{22}')^{-1}\tilde{f}_{21}'\quad &{}\quad -\tilde{f}_{11}^{-1}\tilde{f}_{21}\tilde{F}_{22}^{-1}H_{2}(\tilde{F}_{22}')^{-1} \\ -\tilde{f}_{11}^{-1}\tilde{F}_{22}^{-1}H_{2}(\tilde{F}_{22}')^{-1}\tilde{f}_{21}'&{} \tilde{F}_{22}^{-1}H_{2}(\tilde{F}_{22}')^{-1} \end{array} \right) .\\ \end{array} \end{aligned}$$

Note that \(\tilde{f}_{11}\), \(\tilde{f}_{21}\) and \(\tilde{F}_{22}\) are mutually independent, with \(\tilde{f}_{11}^{2}\sim \chi _{n-1}^{2}\), \(\tilde{f}_{21}\sim N(0,I_{p-1})\) and \(\tilde{F}_{22}'\tilde{F}_{22}\sim W_{p-1}(n-2,I_{p-1})\), according to Lemma 5. Calculating the means of the inverse Chi-square and normal distributions, we obtain

$$\begin{aligned} \begin{array}{lll} &{} A_{1}=&{}\left( \begin{array}{cc} \frac{h_{1}}{n-3}+\frac{1}{n-3}tr[E(\tilde{F}_{22}^{-1}H_{2}(\tilde{F}_{22}')^{-1})]&{} \mathbf 0 ' \\ \mathbf 0 &{} E(\tilde{F}_{22}^{-1}H_{2}(\tilde{F}_{22}')^{-1}) \end{array} \right) . \end{array} \end{aligned}$$

1.3.2 Step 2

\(E(\tilde{F}_{22}^{-1}H_{2}(\tilde{F}_{22}')^{-1})\) needs to be calculated.

Decomposing \(\tilde{F}_{22}\) and \(H_{2}\) as follows

$$\begin{aligned} \tilde{F}_{22}=\left( \begin{array}{cc} \tilde{f}_{22}&{}\quad \tilde{f}_{32} \\ \mathbf 0 &{}\quad \tilde{F}_{33} \end{array} \right) , \ \ H_{2}=\left( \begin{array}{cc} h_{2}&{}\quad \mathbf 0 ' \\ \mathbf 0 &{}\quad H_{3} \end{array} \right) , \end{aligned}$$

where \(\tilde{f}_{22}\) and \(h_{2}\) are scalars, \(\tilde{F}_{33}\) and \(H_{3}\) are the \((p-2)\times (p-2)\) lower right submatrices of \(\tilde{F}\) and H, respectively, we get \(E(\tilde{F}_{22}^{-1}H_{2}(\tilde{F}_{22}')^{-1})\) expressed as

$$\begin{aligned} \begin{array}{lll} &{}&{}E(\tilde{F}_{22}^{-1}H_{2}(\tilde{F}_{22}')^{-1}) \\ &{}&{}\quad = E\left( \begin{array}{cc} \tilde{f}_{22}^{-2}h_{2}+\tilde{f}_{22}^{-2}\tilde{f}_{32}\tilde{F}_{33}^{-1}H_{3}(\tilde{F}_{33}')^{-1}\tilde{f}_{32}'\quad &{}\quad -\tilde{f}_{22}^{-1}\tilde{f}_{32}\tilde{F}_{33}^{-1}H_{3}(\tilde{F}_{33}')^{-1} \\ -\tilde{f}_{22}^{-1}\tilde{F}_{33}^{-1}H_{3}(\tilde{F}_{33}')^{-1}\tilde{f}_{32}'&{} \tilde{F}_{33}^{-1}H_{3}(\tilde{F}_{33}')^{-1} \end{array} \right) .\\ \end{array} \end{aligned}$$

Note that \(\tilde{f}_{22}\), \(\tilde{f}_{32}\) and \(\tilde{F}_{33}\) are mutually independent, with \(\tilde{f}_{22}^{2}\sim \chi _{n-2}^{2}\), \(\tilde{f}_{32}\sim N(0,I_{p-2})\) and \(\tilde{F}_{33}'\tilde{F}_{33}\sim W_{p-2}(n-3,I_{p-2})\), according to Lemma 5. Calculating the means of the inverse Chi-square and normal distributions, we get

$$\begin{aligned} E(\tilde{F}_{22}^{-1}H_{2}(\tilde{F}_{22}')^{-1})=\left( \begin{array}{cc} \frac{h_{2}}{n-4}+\frac{1}{n-4}tr[E(\tilde{F}_{33}^{-1}H_{3}(\tilde{F}_{33}')^{-1})]&{} \mathbf 0 ' \\ \mathbf 0 &{} E(\tilde{F}_{33}^{-1}H_{3}(\tilde{F}_{33}')^{-1}) \end{array} \right) . \end{aligned}$$

1.3.3 Step 3

Decomposing \(\tilde{F}_{33}\) and \(H_{3}\) similarly and repeating the same procedure, we get

$$\begin{aligned} E(\tilde{F}_{33}^{-1}H_{3}(\tilde{F}_{33}')^{-1})=\left( \begin{array}{cc} \frac{h_{3}}{n-5}+\frac{1}{n-5}tr[E(\tilde{F}_{44}^{-1}H_{4}(\tilde{F}_{44}')^{-1})]&{} \mathbf 0 ' \\ \mathbf 0 &{} E(\tilde{F}_{44}^{-1}H_{4}(\tilde{F}_{44}')^{-1}) \end{array} \right) . \end{aligned}$$

1.3.4 Step p-1

$$\begin{aligned} \begin{array}{rcl} &{}&{}E(\tilde{F}_{p-1,p-1}^{-1}H_{p-1}(\tilde{F}_{p-1,p-1}')^{-1})\\ &{}&{}\quad = \left( \begin{array}{cc} \frac{h_{p-1}}{n-p-1}+\frac{1}{n-p-1}E(\tilde{f}_{pp}^{-2}h_{p})&{} 0 \\ 0 &{} E(\tilde{f}_{pp}^{-2}h_{p}) \end{array} \right) \\ &{}&{}\quad = \left( \begin{array}{cc} \frac{h_{p-1}}{n-p-1}+\frac{h_{p}}{(n-p-1)(n-p-2)}&{} 0 \\ 0&{} \frac{h_{p}}{n-p-2} \end{array} \right) . \end{array} \end{aligned}$$

1.3.5 Step p

Based on the computations, we have

$$\begin{aligned} A_{1}=\left( \begin{array}{cccc} \frac{h_{1}}{n-3}+\sum _{j=2}^{p}\frac{h_{j}}{(n-j-1)(n-j-2)} &{} 0 &{} \cdots &{} 0 \\ 0 &{} \frac{h_{2}}{n-4}+\sum _{j=3}^{p}\frac{h_{j}}{(n-j-1)(n-j-2)} &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} \frac{h_{p}}{n-p-2}\\ \end{array} \right) . \end{aligned}$$

The proof is complete. \(\square \)
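The closed form for the diagonal of \(A_{1}\) can be checked by simulation: with \(\tilde{F}'\tilde{F}\sim W_{p}(n-1,I_{p})\), the Monte Carlo mean of \(\tilde{F}^{-1}H(\tilde{F}')^{-1}\) should match \(A_{1}[i,i]=h_{i}/(n-i-2)+\sum _{j>i}h_{j}/((n-j-1)(n-j-2))\). A sketch with arbitrary illustrative n, p and diagonal H:

```python
# Monte Carlo check of the closed form for A_1 = E[F~^{-1} H (F~')^{-1}].
# n, p and the diagonal entries h of H are illustrative choices.
import numpy as np

rng = np.random.default_rng(3)
p, n, reps = 3, 30, 50_000
h = np.array([1.5, 1.0, 0.5])

X = rng.standard_normal((reps, n - 1, p))
A = np.swapaxes(X, 1, 2) @ X                  # A = F~'F~ ~ W_p(n-1, I_p)
Ft = np.linalg.cholesky(A)                    # lower factor; F~ = Ft' is upper
Finv = np.linalg.inv(np.swapaxes(Ft, 1, 2))   # batched F~^{-1}
empirical = (Finv @ np.diag(h) @ np.swapaxes(Finv, 1, 2)).mean(axis=0)

# Diagonal of A_1 from Lemma 2 (1-based index i)
theory = np.array([h[i - 1] / (n - i - 2)
                   + sum(h[j - 1] / ((n - j - 1) * (n - j - 2))
                         for j in range(i + 1, p + 1))
                   for i in range(1, p + 1)])
print(np.diag(empirical))
print(theory)
```

The off-diagonal entries of the empirical average should also be near zero, in agreement with the diagonal form of \(A_{1}\).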

1.4 Proof of Lemma 3

Proof

Using \(\tilde{F}'\tilde{F}\sim W_{p}(n-1,I_{p})\) and Lemma 4, the result follows after some calculations. \(\square \)

1.5 Proof of Theorem 1

Proof

\(E[U(\hat{w}^{s})]\) can be rewritten as follows:

$$\begin{aligned} E[U(\hat{w}^{s})]=\frac{1}{\gamma }\mu '\varGamma ^{-1}\left( A_{1}-\frac{1}{2}A_{2}\right) (\varGamma ')^{-1}\mu -\frac{1}{2n\gamma }tr\{A_{2}\}. \end{aligned}$$

Similarly,

$$\begin{aligned} E(U(\bar{w}_{3}))=\frac{1}{\gamma }\mu '\varGamma ^{-1}\left( \frac{k_{2}}{2}I_{p}\right) (\varGamma ')^{-1}\mu -\frac{1}{2\gamma }k_{3}, \end{aligned}$$

where \(k_{2}=(\frac{n}{n+1})(2-\frac{n(n-2)(n-p-2)}{(n+1)(n-p-1)(n-p-4)})\) and \(k_{3}=\frac{np(n-2)(n-p-2)}{(n+1)^{2}(n-p-1)(n-p-4)}\).

For \(E[U(\hat{w}^{s})]\ge E(U(\bar{w}_{3}))\) to hold, it suffices that the following conditions are satisfied:

$$\begin{aligned} \begin{array}{rcl} A_{1}-\frac{1}{2}A_{2} &{}\ge &{} \frac{k_{2}}{2}I_{p}\\ \frac{k_{3} }{2\gamma }&{}\ge &{} \frac{tr(A_{2})}{2n\gamma } \end{array}. \end{aligned}$$

Using Lemma 2, the above conditions can be rewritten as

$$\begin{aligned} \begin{array}{rcl} A_{1}-\frac{(h_{m})^{2}(n-2)}{2(n-p-1)(n-p-2)(n-p-4)}I_{p}&{}\ge &{} \frac{n}{n+1}I_{p}-\frac{n^{2}(n-p-2)(n-2)}{2(n+1)^{2}(n-p-1)(n-p-4)}I_{p}\\ \frac{n^{2}(n-p-2)(n-2)p}{(n+1)^{2}(n-p-1)(n-p-4)}&{} \ge &{}\frac{(h_{m})^{2}(n-2)p}{(n-p-1)(n-p-2)(n-p-4)} \end{array}.(*) \end{aligned}$$

Using the second condition in \((*)\) and the restriction \(h_{i}>0\), we get \(0<h_{m}\le \frac{n(n-p-2)}{n+1}\).

1.5.1 Step 1

From Lemma 2 and the first condition in \((*)\), we have

$$\begin{aligned}&\frac{h_{p}}{n-p-2}-\frac{(h_{m})^{2}(n-2)}{2(n-p-1)(n-p-2)(n-p-4)}\\&\quad \ge \frac{n}{n+1}-\frac{n^{2}(n-2)(n-p-2)}{2(n+1)^{2}(n-p-1)(n-p-4)}. \end{aligned}$$

Simplifying the above inequality, we have

$$\begin{aligned} h_{p}\ge \frac{n(n-p-2)}{n+1}-\frac{n^{2}(n-2)(n-p-2)^{2}}{2(n+1)^{2}(n-p-1)(n-p-4)}. \end{aligned}$$

Then, \(\max (0,\frac{n(n-p-2)}{n+1}-\frac{n^{2}(n-2)(n-p-2)^{2}}{2(n+1)^{2}(n-p-1)(n-p-4)})\le h_{p} \le \frac{n(n-p-2)}{n+1}\).

1.5.2 Step 2

From the first condition in \((*)\), we know

$$\begin{aligned}&\frac{h_{p-1}}{n-p-1}+\frac{h_{p}}{(n-p-1)(n-p-2)}-\frac{(h_{m})^{2}(n-2)}{2(n-p-1)(n-p-2)(n-p-4)}\\&\quad \ge \frac{n}{n+1}-\frac{n^{2}(n-2)(n-p-2)}{2(n+1)^{2}(n-p-1)(n-p-4)}. \end{aligned}$$

Because of \(h_{p} \le \frac{n(n-p-2)}{n+1}\), we obtain

$$\begin{aligned} \max \left( 0,\frac{n(n-p-2)}{n+1}-\frac{n^{2}(n-2)(n-p-2)}{2(n+1)^{2}(n-p-4)}\right) \le h_{p-1}\le \frac{n(n-p-2)}{n+1}. \end{aligned}$$

1.5.3 Step p

From the first condition in \((*)\), we know

$$\begin{aligned}&\frac{h_{1}}{n-3}+\cdots +\frac{h_{p}}{(n-p-1)(n-p-2)}-\frac{(h_{m})^{2}(n-2)}{2(n-p-1)(n-p-2)(n-p-4)}\\&\quad \ge \frac{n}{n+1}-\frac{n^{2}(n-2)(n-p-2)}{2(n+1)^{2}(n-p-1)(n-p-4)}. \end{aligned}$$

Using the upper limit of \(h_{i}\) \((i=2,\ldots ,p)\), we get

$$\begin{aligned} \max \left( 0,\frac{n(n-p-2)}{n+1}-\frac{n^{2}(n-2)(n-p-2)(n-3)}{2(n+1)^{2}(n-p-1)(n-p-4)}\right) \le h_{1}\le \frac{n(n-p-2)}{n+1}. \end{aligned}$$

The proof can be completed by repeating the above manipulation several times. \(\square \)
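The resulting admissible intervals are easy to evaluate numerically. The sketch below computes the bounds on \(h_{p}\) (Step 1) and \(h_{1}\) (Step p) for one arbitrary illustrative choice of n and p with \(n\ge p+5\):

```python
# Numerical illustration of the sufficient bounds on the shrinkage constants
# h_i from Steps 1 and p; n and p are illustrative choices (n >= p + 5).
n, p = 120, 10

upper = n * (n - p - 2) / (n + 1)          # common upper limit for every h_i

# Lower limit for h_p (Step 1)
low_p = max(0.0, upper - n**2 * (n - 2) * (n - p - 2)**2
            / (2 * (n + 1)**2 * (n - p - 1) * (n - p - 4)))

# Lower limit for h_1 (Step p)
low_1 = max(0.0, upper - n**2 * (n - 2) * (n - p - 2) * (n - 3)
            / (2 * (n + 1)**2 * (n - p - 1) * (n - p - 4)))

print(f"h_p in [{low_p:.3f}, {upper:.3f}], h_1 in [{low_1:.3f}, {upper:.3f}]")
```

Note that the lower limit for \(h_{1}\) is smaller than that for \(h_{p}\), since the subtracted term grows with the extra factor \(n-3>n-p-2\).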

1.6 Proof of Corollary 1

Proof

Using \(h_{1}=\cdots =h_{k}>h_{k+1}=\cdots =h_{p}\) where \(1<k<p\), the sufficient condition in Theorem 1 can be easily simplified and the above result is obtained after some computations. \(\square \)


Cite this article

Sun, R., Ma, T. & Liu, S. A Stein-type shrinkage estimator of the covariance matrix for portfolio selections. Metrika 81, 931–952 (2018). https://doi.org/10.1007/s00184-018-0663-2
