
MSE dominance of the positive-part shrinkage estimator when each individual regression coefficient is estimated

  • Regular Article · Statistical Papers

Abstract

In this paper we consider a regression model and a general family of shrinkage estimators of the regression coefficients. In some practical situations, the estimation of each individual regression coefficient is important. We therefore derive the formula for the mean squared error (MSE) of the general class of shrinkage estimators for each individual regression coefficient. It is shown analytically that the general family of shrinkage estimators is dominated in terms of MSE by its positive-part variant whenever the positive-part variant exists, that is, whenever the shrinkage factor can be negative for some parameter and data values.



Acknowledgments

The author is grateful to Kazuhiro Ohtani for his helpful comments and suggestions. He is also grateful to Aman Ullah, Tae-Hwy Lee and the other participants of the econometrics seminar held at the University of California, Riverside on May 12, 2010. He would also like to thank the anonymous referees and the editors for their very useful comments and suggestions. This work was supported by JSPS KAKENHI Grant Number 23243038.

Author information

Correspondence to Akio Namba.

Appendix

In this appendix, we derive the formulae for \(H(p,q;c)\) and \(J(p,q;c)\). First, we derive the formula for \(H(p,q;c)\). Let \(u_1=(h'\widehat{\gamma })^2/\sigma ^2\), \(u_2=\widehat{\gamma }'(I_k -hh')\widehat{\gamma }/\sigma ^2\) and \(u_3=e'e/\sigma ^2\). Then, \(u_1 \sim \chi _1^{\prime 2}(\lambda _1)\) and \(u_2 \sim \chi _{k-1}^{\prime 2}(\lambda _2)\), where \(\chi _f^{\prime 2}(\lambda )\) is the noncentral chi-square distribution with \(f\) degrees of freedom and noncentrality parameter \(\lambda \), \(\lambda _1=(h'\gamma )^2/\sigma ^2\) and \(\lambda _2=\gamma '(I_k - hh')\gamma /\sigma ^2\). Further, \(u_3\) is distributed as the chi-square distribution with \(\nu =n-k\) degrees of freedom, and \(u_1\), \(u_2\) and \(u_3\) are mutually independent.
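This distributional setup is easy to verify by simulation. The following sketch (in Python with NumPy; the values of \(k\), \(\nu \), \(\sigma ^2\), \(\gamma \) and \(h\) are arbitrary illustrative choices, not taken from the text) draws \(\widehat{\gamma }\sim N(\gamma ,\sigma ^2 I_k)\) and \(e'e\sim \sigma ^2\chi ^2_{\nu }\), forms \(u_1\), \(u_2\) and \(u_3\), and compares their sample means with the noncentral chi-square means \(f+\lambda \):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
k, nu, sigma2 = 3, 10, 1.0            # arbitrary illustrative values
gamma = np.array([1.0, -0.5, 2.0])    # arbitrary true coefficients
h = np.array([1.0, 0.0, 0.0])         # unit vector, so h'h = 1

# gamma_hat ~ N(gamma, sigma^2 I_k) and e'e ~ sigma^2 * chi^2_nu, independent
gamma_hat = gamma + np.sqrt(sigma2) * rng.standard_normal((N, k))
ee = sigma2 * rng.chisquare(nu, size=N)

u1 = (gamma_hat @ h) ** 2 / sigma2                                     # chi'^2_1(lam1)
u2 = (np.sum(gamma_hat ** 2, axis=1) - (gamma_hat @ h) ** 2) / sigma2  # chi'^2_{k-1}(lam2)
u3 = ee / sigma2                                                       # chi^2_nu

lam1 = (h @ gamma) ** 2 / sigma2
lam2 = (gamma @ gamma - (h @ gamma) ** 2) / sigma2

# a noncentral chi-square with f df and noncentrality lam has mean f + lam
print(u1.mean(), 1 + lam1)        # ~ 2.0
print(u2.mean(), k - 1 + lam2)    # ~ 6.25
print(u3.mean(), nu)              # ~ 10
```

The sample correlations between \(u_1\), \(u_2\) and \(u_3\) are likewise close to zero, consistent with the mutual independence claimed above.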

Using \(u_1\), \(u_2\) and \(u_3\), \(H(p,q;c)\) is expressed as

$$\begin{aligned} H(p,q;c)&= (\sigma ^2)^q \sum _{i=0}^{\infty }\sum _{j=0}^{\infty } K_{ij} \int \!\!\!\int \!\!\!\int \limits _R \left( 1-\phi \left( \frac{u_1+u_2}{u_3}\right) \right) ^p \nonumber \\&\times u_1^{1/2+q+i-1}u_2^{(k-1)/2+j-1}u_3^{\nu /2-1} \exp [-(u_1\!+\!u_2\!+\!u_3)/2] du_1\, du_2 \, du_3,\nonumber \\ \end{aligned}$$
(25)

where

$$\begin{aligned} K_{ij}= \frac{w_i(\lambda _1)w_j(\lambda _2)}{2^{(\nu +k)/2+i+j}\Gamma (1/2+i)\Gamma ((k-1)/2+j)\Gamma (\nu /2)}, \end{aligned}$$
(26)

\(w_i(\lambda )=\exp (-\lambda /2)(\lambda /2)^i/i!\), and \(R\) is the region such that \((u_1+u_2)/u_3\ge c\).

Making use of the change of variables, \(v_1=(u_1+u_2)/u_3\), \(v_2=u_1 u_3/(u_1+u_2)\) and \(v_3=u_3\), (25) reduces to

$$\begin{aligned}&(\sigma ^2)^q \sum _{i=0}^{\infty } \sum _{j=0}^{\infty } K_{ij} \int \limits _{0}^{\infty }\int \limits _{0}^{v_3} \int \limits _{c}^{\infty } \left( 1-\phi (v_1)\right) ^p v_1^{k/2+q+i+j-1} v_2^{1/2+q+i-1} v_3^{\nu /2} (v_3-v_2)^{(k-1)/2+j-1} \nonumber \\&\qquad \times \exp [-v_3(v_1+1)/2] dv_1\,dv_2\,dv_3. \end{aligned}$$
(27)

Again, making use of the change of variable, \(z_1=v_2/v_3\), (27) reduces to

$$\begin{aligned}&(\sigma ^2)^q \sum _{i=0}^{\infty } \sum _{j=0}^{\infty } K_{ij} \frac{\Gamma (1/2+q+i)\Gamma ((k-1)/2+j)}{\Gamma (k/2+q+i+j)} \nonumber \\&\quad \times \int \limits _{0}^{\infty } \int \limits _{c}^{\infty } \left( 1\!-\!\phi (v_1)\right) ^p v_1^{k/2+q+i+j-1}v_3^{(\nu +k)/2+q+i+j-1} \exp [-v_3(v_1+1)/2] dv_1\, dv_3.\nonumber \\ \end{aligned}$$
(28)

Further, making use of the change of variable, \(z_2=v_3(v_1+1)/2\), (28) reduces to

$$\begin{aligned}&(\sigma ^2)^q \sum _{i=0}^{\infty } \sum _{j=0}^{\infty } K_{ij} 2^{(\nu +k)/2+q+i+j} \frac{\Gamma (1/2\!+\!q\!+\!i)\Gamma ((k\!-\!1)/2\!+\!j)\Gamma ((\nu \!+\!k)/2\!+\!q\!+\!i\!+\!j)}{\Gamma (k/2\!+\!q\!+\!i\!+\!j)}\nonumber \\&\qquad \times \int \limits _{c}^{\infty } \left( 1-\phi (v_1)\right) ^p \frac{v_1^{k/2+q+i+j-1}}{(1+v_1)^{(\nu +k)/2+q+i+j}} dv_1. \end{aligned}$$
(29)

Finally, making use of the change of variable \(t=v_1/(1+v_1)\), we obtain (14) in the text.
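As a numerical check on the chain of substitutions, (29) can be evaluated directly (the final substitution \(t=v_1/(1+v_1)\) only reparametrizes the integral and does not change its value) and compared with a Monte Carlo evaluation of the defining expectation (25). The sketch below assumes a hypothetical Stein-type shrinkage factor \(\phi (F)=a/F\) with \(c>a\), together with arbitrary parameter values; neither choice is taken from the text. Absorbing \(K_{ij}\) and the power of 2 in (29), each term reduces to \(w_i(\lambda _1)w_j(\lambda _2)\) times a coefficient of gamma functions and a single integral, which is evaluated here by quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

k, nu, sigma2 = 3, 10, 1.0            # arbitrary illustrative values
gamma = np.array([1.0, -0.5, 2.0])
h = np.array([1.0, 0.0, 0.0])         # h'h = 1
a, p, q, c = 0.3, 1, 0, 0.5           # hypothetical phi(F) = a/F, with c > a

lam1 = (h @ gamma) ** 2 / sigma2
lam2 = (gamma @ gamma - (h @ gamma) ** 2) / sigma2

def w(i, lam):
    # Poisson weight w_i(lam) = exp(-lam/2) (lam/2)^i / i!
    return np.exp(-lam / 2 + i * np.log(lam / 2) - gammaln(i + 1))

def G(i, j):
    # term of (29) after absorbing K_ij and 2^{(nu+k)/2+q+i+j}:
    # Gamma-function coefficient times the remaining single integral over v1
    A = k / 2 + q + i + j
    val, _ = quad(lambda v: (1 - a / v) ** p * v ** (A - 1) / (1 + v) ** (A + nu / 2),
                  c, np.inf)
    logcoef = (gammaln(0.5 + q + i) + gammaln(nu / 2 + A)
               - gammaln(0.5 + i) - gammaln(nu / 2) - gammaln(A))
    return np.exp(logcoef) * val

# series form: H(p,q;c) = (2 sigma^2)^q sum_{i,j} w_i(lam1) w_j(lam2) G_ij
H_series = (2 * sigma2) ** q * sum(w(i, lam1) * w(j, lam2) * G(i, j)
                                   for i in range(25) for j in range(25))

# Monte Carlo of the defining expectation, cf. (25) and (32)
rng = np.random.default_rng(1)
N = 400_000
gamma_hat = gamma + np.sqrt(sigma2) * rng.standard_normal((N, k))
F = np.sum(gamma_hat ** 2, axis=1) / (sigma2 * rng.chisquare(nu, size=N))
H_mc = np.mean((F >= c) * (1 - a / F) ** p * (gamma_hat @ h) ** (2 * q))
print(H_series, H_mc)   # the two should agree to Monte Carlo accuracy
```

Truncating the double series at 25 terms per index is more than enough here, since the Poisson weights \(w_i(\lambda )\) decay factorially.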

Next, we derive the formula for \(J(p,q;c)\). Differentiating \(H(p,q;c)\) given in (14) with respect to \(\gamma \), we have

$$\begin{aligned} \frac{\partial H(p,q;c)}{\partial \gamma }&= (2\sigma ^2)^q \sum _{i=0}^{\infty } \sum _{j=0}^{\infty } \left[ \frac{\partial w_i(\lambda _1)}{\partial \gamma }w_j(\lambda _2) +w_i(\lambda _1)\frac{\partial w_j(\lambda _2)}{\partial \gamma }\right] G_{ij}(p,q;c) \nonumber \\&= -\frac{hh'\gamma }{\sigma ^2}(2\sigma ^2)^q \sum _{i=0}^{\infty }\sum _{j=0}^{\infty }w_i(\lambda _1)w_j(\lambda _2) G_{ij}(p,q;c) \nonumber \\&+\frac{hh'\gamma }{\sigma ^2}(2\sigma ^2)^q \sum _{i=0}^{\infty }\sum _{j=0}^{\infty }w_i(\lambda _1)w_j(\lambda _2) G_{i+1,j}(p,q;c) \nonumber \\&-\frac{(I_k-hh')\gamma }{\sigma ^2}(2\sigma ^2)^q \sum _{i=0}^{\infty }\sum _{j=0}^{\infty }w_i(\lambda _1)w_j(\lambda _2) G_{ij}(p,q;c) \nonumber \\&+\frac{(I_k-hh')\gamma }{\sigma ^2}(2\sigma ^2)^q \sum _{i=0}^{\infty }\sum _{j=0}^{\infty }w_i(\lambda _1)w_j(\lambda _2) G_{i,j+1}(p,q;c), \end{aligned}$$
(30)

where we define \(w_{-1}(\lambda _1) = w_{-1}(\lambda _2) =0\). Since \(h'h=1\), we obtain

$$\begin{aligned} h'\frac{\partial H(p,q;c)}{\partial \gamma }&= -\frac{h'\gamma }{\sigma ^2}H(p,q;c)\nonumber \\&+\frac{h'\gamma }{\sigma ^2} (2\sigma ^2)^q \sum _{i=0}^{\infty }\sum _{j=0}^{\infty }w_i(\lambda _1)w_j(\lambda _2) G_{i+1,j}(p,q;c). \end{aligned}$$
(31)

Expressing \(H(p,q;c)\) in terms of \(\widehat{\gamma }\) and \(e'e\), we have

$$\begin{aligned} H(p,q;c)&= \int \!\!\!\int \limits _{F\ge c} \left( 1-\phi \left( F \right) \right) ^p (h'\widehat{\gamma })^{2q} f_N(\widehat{\gamma }) f_e(e'e) d\widehat{\gamma }\, d(e'e), \end{aligned}$$
(32)

where \(F=(\widehat{\gamma }'\widehat{\gamma })/(e'e)\), \(f_e(e'e)\) is the density function of \(e'e\), and

$$\begin{aligned} f_N(\widehat{\gamma }) = \frac{1}{(2\pi )^{k/2}\sigma ^k} \exp \left[ -\frac{(\widehat{\gamma }-\gamma )'(\widehat{\gamma }-\gamma )}{2\sigma ^2}\right] , \end{aligned}$$
(33)

is the density function of \(\widehat{\gamma }\).

Differentiating \(H(p,q;c)\) given in (32) with respect to \(\gamma \), and premultiplying by \(h'\), we obtain

$$\begin{aligned} h'\frac{\partial H(p,q;c)}{\partial \gamma } \!=\! -\frac{h'\gamma }{\sigma ^2} H(p,q;c) \!+\!\frac{1}{\sigma ^2} E\left[ I(F\ge c) \left( 1-\phi \left( F \right) \right) ^p (h'\widehat{\gamma })^{2q}h'\widehat{\gamma } \right] .\nonumber \\ \end{aligned}$$
(34)

Equating (31) and (34), we obtain (15) in the text.
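Equating (31) and (34) and cancelling the common term \(-(h'\gamma /\sigma ^2)H(p,q;c)\) gives the identity \(E\left[ I(F\ge c)\left( 1-\phi (F)\right) ^p (h'\widehat{\gamma })^{2q}h'\widehat{\gamma }\right] = h'\gamma \,(2\sigma ^2)^q \sum _i \sum _j w_i(\lambda _1)w_j(\lambda _2) G_{i+1,j}(p,q;c)\). This identity can also be checked numerically. The sketch below assumes a hypothetical Stein-type factor \(\phi (F)=a/F\) and arbitrary parameter values, with \(G_{ij}(p,q;c)\) taken, consistent with (29) and (30), as the gamma-function coefficient times the remaining single integral:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

k, nu, sigma2 = 3, 10, 1.0            # arbitrary illustrative values
gamma = np.array([1.0, -0.5, 2.0])
h = np.array([1.0, 0.0, 0.0])         # h'h = 1
a, p, q, c = 0.3, 1, 0, 0.5           # hypothetical phi(F) = a/F, with c > a

lam1 = (h @ gamma) ** 2 / sigma2
lam2 = (gamma @ gamma - (h @ gamma) ** 2) / sigma2

def w(i, lam):
    # Poisson weight w_i(lam) = exp(-lam/2) (lam/2)^i / i!
    return np.exp(-lam / 2 + i * np.log(lam / 2) - gammaln(i + 1))

def G(i, j):
    # G_ij(p,q;c), consistent with the reduction to a single integral in (29)
    A = k / 2 + q + i + j
    val, _ = quad(lambda v: (1 - a / v) ** p * v ** (A - 1) / (1 + v) ** (A + nu / 2),
                  c, np.inf)
    logcoef = (gammaln(0.5 + q + i) + gammaln(nu / 2 + A)
               - gammaln(0.5 + i) - gammaln(nu / 2) - gammaln(A))
    return np.exp(logcoef) * val

# series side of the identity: h'gamma (2 sigma^2)^q sum w_i w_j G_{i+1,j}
J_series = (h @ gamma) * (2 * sigma2) ** q * sum(
    w(i, lam1) * w(j, lam2) * G(i + 1, j) for i in range(25) for j in range(25))

# Monte Carlo side: E[ I(F >= c) (1 - phi(F))^p (h'gamma_hat)^{2q+1} ]
rng = np.random.default_rng(2)
N = 400_000
gamma_hat = gamma + np.sqrt(sigma2) * rng.standard_normal((N, k))
F = np.sum(gamma_hat ** 2, axis=1) / (sigma2 * rng.chisquare(nu, size=N))
J_mc = np.mean((F >= c) * (1 - a / F) ** p * (gamma_hat @ h) ** (2 * q + 1))
print(J_series, J_mc)   # the two should agree to Monte Carlo accuracy
```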

Cite this article

Namba, A. MSE dominance of the positive-part shrinkage estimator when each individual regression coefficient is estimated. Stat Papers 56, 379–390 (2015). https://doi.org/10.1007/s00362-014-0586-6
