
Applications of resampling methods in multivariate Liu estimator

Computational Statistics (Original Paper)

Abstract

Multicollinearity among independent variables is one of the most common problems in regression models. Its consequences in the multivariate linear regression model (MLRM), such as ill-conditioning, instability of the estimators, and an inflated mean squared error of the ordinary least squares (OLS) estimator, are the same as in the univariate linear regression model. Several approaches to combating multicollinearity have been presented in the literature. The Liu estimator (LE), a well-known estimator in this context, has been applied to linear, generalized linear, and nonlinear regression models in recent years. In this paper, for the first time, the LE and the jackknifed Liu estimator (JLE) are investigated in the MLRM. To improve the estimators in the mean squared error sense, two well-known resampling methods, the jackknife and the bootstrap, are employed. Finally, the OLS, LE, and JLE are compared via resampling methods in the MLRM, through both a simulation study and a real data set.
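As a rough orientation (a minimal sketch, not the authors' code), the block below forms the three competing estimators for a multivariate response. It assumes the standard Liu form \(\hat{\textbf{B}}_{LE}=(\textbf{X}^{T}\textbf{X}+\textbf{I})^{-1}(\textbf{X}^{T}\textbf{X}+d\textbf{I})\hat{\textbf{B}}_{OLS}\) and the usual jackknife bias-corrected form \(\hat{\textbf{B}}_{JLE}=[\textbf{I}-(\textbf{I}-\textbf{F}_d)^{2}]\hat{\textbf{B}}_{OLS}\) with \(\textbf{F}_d=(\textbf{X}^{T}\textbf{X}+\textbf{I})^{-1}(\textbf{X}^{T}\textbf{X}+d\textbf{I})\); the shrinkage parameter \(d\in (0,1)\) and the toy data are placeholders.

```python
import numpy as np

def mlrm_estimators(X, Y, d=0.5):
    """OLS, Liu (LE), and jackknifed Liu (JLE) estimators in the MLRM Y = X B + E.

    Sketch only: assumes the standard Liu form and the usual jackknife
    bias-corrected form; d in (0, 1) is a placeholder choice.
    """
    p = X.shape[1]
    S = X.T @ X                                              # p x p cross-product matrix
    B_ols = np.linalg.solve(S, X.T @ Y)                      # OLS: (X'X)^{-1} X'Y
    F_d = np.linalg.solve(S + np.eye(p), S + d * np.eye(p))  # (X'X+I)^{-1}(X'X+dI)
    B_le = F_d @ B_ols                                       # Liu estimator
    B_jle = (2 * F_d - F_d @ F_d) @ B_ols                    # JLE: [I-(I-F_d)^2] B_ols
    return B_ols, B_le, B_jle

# toy data with two nearly collinear columns (illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=50)
Y = X @ np.ones((4, 2)) + rng.normal(size=(50, 2))
print(mlrm_estimators(X, Y, d=0.5))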


Acknowledgements

The authors would like to sincerely thank two anonymous referees for their constructive comments that appreciably improved the quality of the paper.

Author information


Corresponding author

Correspondence to Hamid Bidram.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Here, the proofs of Theorems 1, 2, and 3 are given.

Proof of Theorem 1

Let \(\textbf{B}_{0}\) be an arbitrary estimator of \(\textbf{B}\). Then,

$$\begin{aligned}&SSE(\textbf{B}_{0})=\left( \textbf{Y}-\textbf{X}\textbf{B}_{0}\right) ^{T}\left( \textbf{Y}-\textbf{X}\textbf{B}_{0}\right) +\left( d\hat{\textbf{B}}_{{OLS}}-\textbf{B}_{0}\right) ^{T}\left( d\hat{\textbf{B}}_{{OLS}}-\textbf{B}_{0}\right) =\nonumber \\&\left( \left( \textbf{Y}-\textbf{X}\hat{\textbf{B}}_{LE}\right) +\textbf{X}\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) \right) ^{T}\left( \left( \textbf{Y}-\textbf{X}\hat{\textbf{B}}_{LE}\right) +\textbf{X}\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) \right) +\nonumber \\&\left( \left( d\hat{\textbf{B}}_{{OLS}}-\hat{\textbf{B}}_{LE}\right) +\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) \right) ^{T}\left( \left( d\hat{\textbf{B}}_{{OLS}}-\hat{\textbf{B}}_{LE}\right) +\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) \right) \nonumber \\&=\left( \textbf{Y}-\textbf{X}\hat{\textbf{B}}_{LE}\right) ^{T}\left( \textbf{Y}-\textbf{X}\hat{\textbf{B}}_{LE}\right) +\left( d\hat{\textbf{B}}_{{OLS}}-\hat{\textbf{B}}_{LE}\right) ^{T}\left( d\hat{\textbf{B}}_{{OLS}}-\hat{\textbf{B}}_{LE}\right) \nonumber \\&+\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) ^{T}\textbf{X}^{T}\textbf{X}\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) +\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) ^{T}\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) \nonumber \\&=SSE(\hat{\textbf{B}}_{LE})+\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) ^{T}\textbf{X}^{T}\textbf{X}\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) \nonumber \\&+\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) ^{T}\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) . \end{aligned}$$
(20)

Finally, since \(\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) ^{T}\textbf{X}^{T}\textbf{X}\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right)\) and \(\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) ^{T}\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right)\) are PSD matrices, \(\hat{\textbf{B}}_{LE}\) minimizes the penalized sum of squares, i.e., it is the LE of \(\textbf{B}\). Note that this argument for the LE parallels the classical least-squares derivation of the OLS estimator in the MLRM.
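For clarity, the cross terms that disappear between the second and third equalities of (20) vanish because \(\hat{\textbf{B}}_{LE}\) satisfies the normal equations of this penalized least-squares problem. Assuming the standard Liu form \(\hat{\textbf{B}}_{LE}=\left( \textbf{X}^{T}\textbf{X}+\textbf{I}_{p}\right) ^{-1}\left( \textbf{X}^{T}\textbf{Y}+d\hat{\textbf{B}}_{{OLS}}\right)\), a one-line check gives

$$\begin{aligned} \textbf{X}^{T}\left( \textbf{Y}-\textbf{X}\hat{\textbf{B}}_{LE}\right) +\left( d\hat{\textbf{B}}_{{OLS}}-\hat{\textbf{B}}_{LE}\right) =\textbf{X}^{T}\textbf{Y}+d\hat{\textbf{B}}_{{OLS}}-\left( \textbf{X}^{T}\textbf{X}+\textbf{I}_{p}\right) \hat{\textbf{B}}_{LE}=\textbf{0}, \end{aligned}$$

so the cross product \(\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) ^{T}\left[ \textbf{X}^{T}\left( \textbf{Y}-\textbf{X}\hat{\textbf{B}}_{LE}\right) +\left( d\hat{\textbf{B}}_{{OLS}}-\hat{\textbf{B}}_{LE}\right) \right]\) and its transpose are both zero.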

The following lemma, given by Farebrother (1976), is used to prove Theorems 2 and 3:

Lemma 1

Let \(\textbf{M}\) be a positive definite (PD) matrix and \(\varvec{\upsilon }\) be a column vector. Then \(\textbf{M}-\varvec{\upsilon }\varvec{\upsilon }^T\) is a PSD matrix iff \(\varvec{\upsilon }^T\textbf{M}^{-1}\varvec{\upsilon }\le 1\).
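A quick numerical illustration of Lemma 1 (a sketch with arbitrary test values, not part of the proof): for a PD matrix \(\textbf{M}\) and a vector \(\varvec{\upsilon }\), it checks that \(\textbf{M}-\varvec{\upsilon }\varvec{\upsilon }^T\) has no negative eigenvalue exactly when \(\varvec{\upsilon }^T\textbf{M}^{-1}\varvec{\upsilon }\le 1\).

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(size=(4, 4))
M = R @ R.T + 4 * np.eye(4)        # a positive definite matrix
for scale in (0.5, 5.0):           # a small and a large test vector
    v = scale * rng.normal(size=(4, 1))
    quad = (v.T @ np.linalg.solve(M, v)).item()      # v' M^{-1} v
    min_eig = np.linalg.eigvalsh(M - v @ v.T).min()  # PSD iff this is >= 0
    # Lemma 1: quad <= 1 exactly when min_eig >= 0 (up to round-off)
    print(f"v'M^(-1)v = {quad:.3f}, min eigenvalue of M - vv' = {min_eig:.3f}")
```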

Proof of Theorem 2

From (15) and (16) we have:

$$\begin{aligned} \textbf{MSE}\left( vec\hat{\textbf{A}}_{{OLS}}\right) -\textbf{MSE}\left( vec\hat{\textbf{A}}_{LE}\right) =\textbf{M}_1-\varvec{\upsilon }_1\varvec{\upsilon }_1^T, \end{aligned}$$

where

$$\begin{aligned} \textbf{M}_1=\textbf{V}\left( vec\hat{\textbf{A}}_{{OLS}}\right) -\textbf{V}\left( vec\hat{\textbf{A}}_{LE}\right) , \end{aligned}$$

and

$$\begin{aligned} \varvec{\upsilon }_1=\left( \textbf{I}_q\otimes \textbf{G}_d-\textbf{I}_{pq}\right) vec\textbf{A}. \end{aligned}$$

First, we show that \(\textbf{M}_1\) is a PD matrix. We have

$$\begin{aligned} \textbf{M}_1=\varvec{\varSigma }\otimes \varvec{\varLambda }^{-1}- \varvec{\varSigma }\otimes \textbf{G}_d\varvec{\varLambda }^{-1}\textbf{G}_d^T=\varvec{\varSigma }\otimes \left( \varvec{\varLambda }^{-1}-\textbf{G}_d\varvec{\varLambda }^{-1}\textbf{G}_d^T\right) . \end{aligned}$$

Since \(\varvec{\varSigma }\) is a PD matrix, it is enough to show that \(\textbf{K}=\varvec{\varLambda }^{-1}-\textbf{G}_d\varvec{\varLambda }^{-1}\textbf{G}_d^T\) is a PD matrix. \(\textbf{K}\) is a diagonal matrix with ith element

$$\begin{aligned} k_{ii}=\frac{(\lambda _i+1)^2-(\lambda _i+d)^2}{\lambda _i(\lambda _i+1)^2};\; i=1,\ldots ,p, \end{aligned}$$

which is positive for \(0<d<1\), since \((\lambda _i+1)^2-(\lambda _i+d)^2=(1-d)(2\lambda _i+1+d)>0\); hence \(\textbf{K}\) is a PD matrix, and so is \(\textbf{M}_1\). According to Lemma 1, \(\textbf{M}_1-\varvec{\upsilon }_1\varvec{\upsilon }_1^T\) is a PSD matrix iff

$$\begin{aligned} \varvec{\upsilon }_1^T\textbf{M}_1^{-1}\varvec{\upsilon }_1\le 1, \end{aligned}$$

iff

$$\begin{aligned} vec^T\textbf{A}\left( \textbf{I}_q\otimes \textbf{G}_d-\textbf{I}_{pq}\right) ^T\left[ \varvec{\varSigma }\otimes \left( \varvec{\varLambda }^{-1}-\textbf{G}_d\varvec{\varLambda }^{-1}\textbf{G}_d^T\right) \right] ^{-1} \left( \textbf{I}_q\otimes \textbf{G}_d-\textbf{I}_{pq}\right) vec\textbf{A}\le 1, \end{aligned}$$

iff

$$\begin{aligned} vec^T\textbf{A}\left[ \textbf{I}_q\otimes \left( \textbf{G}_d-\textbf{I}_p\right) \right] ^T\left[ \varvec{\varSigma }^{-1}\otimes \left( \varvec{\varLambda }^{-1}-\textbf{G}_d\varvec{\varLambda }^{-1}\textbf{G}_d^T\right) ^{-1}\right] \left[ \textbf{I}_q\otimes \left( \textbf{G}_d-\textbf{I}_p\right) \right] vec\textbf{A}\le 1, \end{aligned}$$

iff

$$\begin{aligned} vec^T\textbf{A}\left[ \varvec{\varSigma }^{-1}\otimes \varvec{\varLambda }\left( \textbf{I}_p-\textbf{G}_d \right) ^2\left( \textbf{I}_p-\textbf{G}_d^2\right) ^{-1}\right] vec\textbf{A}\le 1. \end{aligned}$$

This completes the proof.
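The bound in Theorem 2 is straightforward to evaluate numerically once \(\varvec{\varSigma }\), \(\varvec{\varLambda }\), \(\textbf{A}\), and \(d\) are given. Below is a minimal sketch under the canonical parametrization used above, in which \(\varvec{\varLambda }\) carries the eigenvalues of \(\textbf{X}^T\textbf{X}\) and \(\textbf{G}_d\) is diagonal with entries \((\lambda _i+d)/(\lambda _i+1)\); all numerical values are placeholders.

```python
import numpy as np

def theorem2_condition(Sigma, lam, A, d):
    """Theorem 2 bound: the LE dominates OLS in MSE iff
    vec(A)' [Sigma^{-1} (x) Lam (I-G_d)^2 (I-G_d^2)^{-1}] vec(A) <= 1.
    lam holds the eigenvalues of X'X; G_d is diagonal with (lam_i+d)/(lam_i+1).
    """
    g = (lam + d) / (lam + 1.0)                   # diagonal of G_d
    w = lam * (1.0 - g) ** 2 / (1.0 - g ** 2)     # Lam (I-G_d)^2 (I-G_d^2)^{-1}
    middle = np.kron(np.linalg.inv(Sigma), np.diag(w))
    a = A.reshape(-1, order="F")                  # vec(A): stack the columns
    return a @ middle @ a                         # <= 1 means the LE dominates

# toy values (placeholders; the small eigenvalue mimics collinearity)
lam = np.array([10.0, 2.0, 0.05])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
A = 0.1 * np.ones((3, 2))
print(theorem2_condition(Sigma, lam, A, d=0.5))
```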

Proof of Theorem 3

The proof is similar to that of Theorem 2. From (15) and (17) we have:

$$\begin{aligned} \textbf{MSE}\left( vec\hat{\textbf{A}}_{{OLS}}\right) -\textbf{MSE}\left( vec\hat{\textbf{A}}_{JLE}\right) =\textbf{M}_2-\varvec{\upsilon }_2\varvec{\upsilon }_2^T, \end{aligned}$$

where

$$\begin{aligned} \textbf{M}_2=\textbf{V}\left( vec\hat{\textbf{A}}_{{OLS}}\right) -\textbf{V}\left( vec\hat{\textbf{A}}_{JLE}\right) , \end{aligned}$$

and

$$\begin{aligned} \varvec{\upsilon }_2=\left[ \textbf{I}_q\otimes \left( \textbf{I}_p-\textbf{G}_d\right) ^2\right] vec\textbf{A}. \end{aligned}$$

Here, \(\textbf{M}_2\) is obtained as:

$$\begin{aligned} \textbf{M}_2&=\varvec{\varSigma }\otimes \varvec{\varLambda }^{-1}- \varvec{\varSigma }\otimes \left( 2\textbf{I}_p-\textbf{G}_d\right) \textbf{G}_d\varvec{\varLambda }^{-1}\textbf{G}_d^T\left( 2\textbf{I}_p-\textbf{G}_d\right) ^T\\&=\varvec{\varSigma }\otimes \left[ \varvec{\varLambda }^{-1}-\left( 2\textbf{I}_p-\textbf{G}_d\right) \textbf{G}_d\varvec{\varLambda }^{-1}\textbf{G}_d^T\left( 2\textbf{I}_p-\textbf{G}_d\right) ^T\right] . \end{aligned}$$

We just show that \(\textbf{L}=\varvec{\varLambda }^{-1}-\left( 2\textbf{I}_p-\textbf{G}_d\right) \textbf{G}_d\varvec{\varLambda }^{-1}\textbf{G}_d^T\left( 2\textbf{I}_p-\textbf{G}_d\right) ^T\) is a PD matrix. \(\textbf{L}\) is a diagonal matrix with ith element

$$\begin{aligned} l_{ii}=\frac{(\lambda _i+1)^4-(\lambda _i+d)^2(\lambda _i+2-d)^2}{\lambda _i(\lambda _i+1)^4};\; i=1,\ldots ,p, \end{aligned}$$

which is positive for \(0<d<1\), since the numerator factors as a difference of squares whose first factor is \((\lambda _i+1)^2-(\lambda _i+d)(\lambda _i+2-d)=(1-d)^2>0\) and whose second factor is clearly positive. Thus, \(\textbf{L}\) is a PD matrix, and we conclude that \(\textbf{M}_2\) is a PD matrix, too. From Lemma 1, we know that \(\textbf{M}_2-\varvec{\upsilon }_2\varvec{\upsilon }_2^T\) is a PSD matrix iff

$$\begin{aligned} \varvec{\upsilon }_2^T\textbf{M}_2^{-1}\varvec{\upsilon }_2\le 1, \end{aligned}$$

iff

$$\begin{aligned}&vec^T\textbf{A}\left[ \textbf{I}_q\otimes \left( \textbf{I}_p-\textbf{G}_d\right) ^2\right] ^T\left\{ \varvec{\varSigma }\otimes \left[ \varvec{\varLambda }^{-1}-\left( 2\textbf{I}_p-\textbf{G}_d\right) \textbf{G}_d\varvec{\varLambda }^{-1}\textbf{G}_d^T\left( 2\textbf{I}_p-\textbf{G}_d\right) ^T\right] \right\} ^{-1}\\&\left[ \textbf{I}_q\otimes \left( \textbf{I}_p-\textbf{G}_d\right) ^2\right] vec\textbf{A}\le 1, \end{aligned}$$

iff

$$\begin{aligned} vec^T\textbf{A}\left\{ \varvec{\varSigma }^{-1}\otimes \varvec{\varLambda }\left( \textbf{I}_p-\textbf{G}_d\right) ^4\left[ \textbf{I}_p-\textbf{G}_d^2\left( 2\textbf{I}_p-\textbf{G}_d\right) ^2\right] ^{-1}\right\} vec\textbf{A}\le 1, \end{aligned}$$

which completes the proof.
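The Theorem 3 bound has the same structure with a different diagonal weight; an analogous sketch follows (same assumptions and toy values as the Theorem 2 block, for direct comparison).

```python
import numpy as np

def theorem3_condition(Sigma, lam, A, d):
    """Theorem 3 bound: the JLE dominates OLS in MSE iff
    vec(A)' [Sigma^{-1} (x) Lam (I-G_d)^4 (I - G_d^2 (2I-G_d)^2)^{-1}] vec(A) <= 1."""
    g = (lam + d) / (lam + 1.0)                                  # diagonal of G_d
    w = lam * (1.0 - g) ** 4 / (1.0 - g ** 2 * (2.0 - g) ** 2)   # diagonal weight
    a = A.reshape(-1, order="F")                                 # vec(A)
    return a @ np.kron(np.linalg.inv(Sigma), np.diag(w)) @ a    # <= 1: JLE dominates

# same toy values as in the Theorem 2 sketch
lam = np.array([10.0, 2.0, 0.05])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
A = 0.1 * np.ones((3, 2))
print(theorem3_condition(Sigma, lam, A, d=0.5))
```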

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Pirmohammadi, S., Bidram, H. Applications of resampling methods in multivariate Liu estimator. Comput Stat 39, 677–708 (2024). https://doi.org/10.1007/s00180-022-01316-2
