Abstract
Multicollinearity among the independent variables is one of the most common problems in regression models. Its aftereffects in the multivariate linear regression model (MLRM), such as ill-conditioning, instability of the estimators, and an inflated mean squared error of the ordinary least squares (OLS) estimator, are the same as those in linear regression models. Several approaches to combat multicollinearity have been presented in the literature. The Liu estimator (LE), a well-known estimator in this connection, has been used by researchers in recent years in linear, generalized linear, and nonlinear regression models. In this paper, for the first time, the LE and the jackknifed Liu estimator (JLE) are investigated in the MLRM. To improve the estimators in the sense of mean squared error, two well-known resampling methods, the jackknife and the bootstrap, are used. Finally, OLS, LE, and JLE are compared in the MLRM by a simulation study and on a real data set, using resampling methods.
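The estimators compared in the paper can be illustrated with a small sketch. This is not the authors' code: the data are simulated and hypothetical, the LE is the standard Liu (1993) form \((\mathbf{X}^T\mathbf{X}+\mathbf{I})^{-1}(\mathbf{X}^T\mathbf{X}+d\mathbf{I})\hat{\mathbf{B}}_{OLS}\) assumed to apply column-wise to the multivariate response, and the JLE uses the ordinary delete-one jackknife bias correction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated MLRM data Y = X B + E with nearly collinear
# predictors (n = 50 observations, p = 3 predictors, q = 2 responses).
n, p, q = 50, 3, 2
z = rng.normal(size=(n, 1))
X = z + 0.05 * rng.normal(size=(n, p))       # columns close to collinear
B = np.array([[1.0, 0.5], [0.5, 1.0], [1.0, 1.0]])
Y = X @ B + rng.normal(scale=0.5, size=(n, q))

def ols(X, Y):
    """Ordinary least squares, applied column-wise to Y."""
    return np.linalg.solve(X.T @ X, X.T @ Y)

def liu(X, Y, d):
    """Liu estimator (Liu 1993), assumed here to act column-wise:
    B_LE = (X'X + I)^{-1} (X'X + d I) B_OLS, with 0 < d < 1."""
    S = X.T @ X
    I = np.eye(X.shape[1])
    return np.linalg.solve(S + I, (S + d * I) @ ols(X, Y))

def jackknife_liu(X, Y, d):
    """Delete-one jackknife bias correction of the Liu estimator."""
    n = X.shape[0]
    loo = np.mean([liu(np.delete(X, i, 0), np.delete(Y, i, 0), d)
                   for i in range(n)], axis=0)
    return n * liu(X, Y, d) - (n - 1) * loo

d = 0.5
for name, est in [("OLS", ols(X, Y)), ("LE", liu(X, Y, d)),
                  ("JLE", jackknife_liu(X, Y, d))]:
    print(name, "squared error:", np.sum((est - B) ** 2))
```

Note that at \(d = 1\) the Liu estimator reduces to OLS, which is a quick sanity check on the implementation.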
References
Anderson TW (2003) An introduction to multivariate statistical analysis, 3rd edn. Wiley, Hoboken, New Jersey
Arumairajan S, Wijekoon P (2017) Modified almost unbiased Liu estimator in linear regression model. Commun Math Stat 5:261–276
Brown PJ, Zidek JV (1980) Adaptive multivariate ridge regression. Ann Stat 8(1):64–74
Davison AC, Hinkley DV (1997) Bootstrap methods and their application. Cambridge University Press, Cambridge
Duran EA, Akdeniz F (2012) Efficiency of the modified jackknifed Liu-type estimator. Stat Pap 53:265–280
Eck DJ (2018) Bootstrapping for multivariate linear regression models. Stat Probab Lett 134:141–149
Efron B (1979) Bootstrap methods: another look at the jackknife. Ann Stat 7(1):1–26
Efron B, Tibshirani RJ (1993) An introduction to the bootstrap. Chapman & Hall, New York
Farebrother RW (1976) Further results on the mean square error of ridge regression. J R Stat Soc Ser B 38(3):248–250
Haitovsky Y (1987) On multivariate ridge regression. Biometrika 74(3):563–570
Hinkley DV (1977) Jackknifing in unbalanced situations. Technometrics 19(3):285–292
Hoerl AE, Kennard RW (1970) Ridge regression: biased estimation for nonorthogonal problems. Technometrics 12(1):55–67
Izenman AJ (2008) Modern multivariate statistical techniques. Springer, Cham
Kurnaz FS, Akay KU (2015) A new Liu-type estimator. Stat Pap 56:495–517
Kurtoglu F, Ozkale MR (2016) Liu estimation in generalized linear models: application on gamma distributed response variable. Stat Pap 57:911–928
Li Y, Yang H (2012) A new Liu-type estimator in linear regression model. Stat Pap 53:427–437
Liu K (1993) A new class of biased estimate in linear regression. Commun Stat Theory Methods 22(2):393–402
Liu XQ (2011) Improved Liu estimator in a linear regression model. J Stat Plan Inference 141:189–196
Mansson K, Kibria BMG, Shukur G (2012) On Liu estimators for the logit regression model. Econ Model 29:1483–1488
Mardia KV, Kent JT, Bibby JM (1979) Multivariate analysis. Academic Press Inc, San Diego
Mori Y, Suzuki T (2018) Generalized ridge estimator and model selection criteria in multivariate linear regression. J Multivar Anal 165:243–261
Nyquist H (1988) Applications of the jackknife procedure in ridge regression. Comput Stat Data Anal 6(2):177–183
Perveen I, Suhail M (2021) Bootstrap Liu estimators for poisson regression model. Commun Stat Simul Comput. https://doi.org/10.1080/03610918.2021.1916825
Pirmohammadi S, Bidram H (2022) On the Liu estimator in the beta and Kumaraswamy regression models: a comparative study. Commun Stat Theory Methods 51:8553–8578
Qasim M, Amin M, Amanullah M (2018) On the performance of some new Liu parameters for the gamma regression model. J Stat Comput Simul 88:3065–3080
Quenouille M (1949) Approximate tests of correlation in time-series. J R Stat Soc Ser B 11:68–84
Quenouille M (1956) Notes on bias in estimation. Biometrika 43:353–360
Rencher AC (1998) Multivariate statistical inference and applications. Wiley, Hoboken, New Jersey
Sclove SL (1971) Improved estimation of parameters in multivariate regression. Sankhya Ser A 33:61–66
Siray GU, Toker S, Kaciranlar S (2015) On the restricted Liu estimator in the logistic regression model. Commun Stat Simul Comput 44:217–232
Tukey J (1958) Bias and confidence in not-quite large samples (abstract). Ann Math Stat 29:614
Wu J (2016) Improved Liu-type estimator in partial linear model. Int J Comput Math 93(3):498–510
Wu J (2016) Modified restricted Liu estimator in logistic regression model. Comput Stat 31:1557–1567
Wu J (2016) Restricted difference-based Liu estimator in partially linear model. J Comput Appl Math 300:97–102
Wu J, Asar Y (2017) More on the restricted Liu estimator in the logistic regression model. Commun Stat Simul Comput 46(5):3680–3689
Acknowledgements
The authors would like to sincerely thank two anonymous referees for their constructive comments that appreciably improved the quality of the paper.
Appendix
Here, the proofs of Theorems 1, 2, and 3 are given.
Proof of Theorem 1
Let \(\textbf{B}_{0}\) be an arbitrary estimator of \(\textbf{B}\). Then,
Finally, since \(\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) ^{T}\textbf{X}^{T}\textbf{X}\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right)\) and \(\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right) ^{T}\left( \hat{\textbf{B}}_{LE}-\textbf{B}_{0}\right)\) are PSD matrices, \(\hat{\textbf{B}}_{LE}\) is an LE for \(\textbf{B}\). As can be seen, the approach used for the LE is similar to that for OLS in the MLRM.
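For orientation, since the displayed equations of this proof are not reproduced here: in the univariate linear model the Liu estimator (Liu 1993) has the closed form below, and the MLRM version considered in the paper can be read as applying it column-wise to \(\hat{\textbf{B}}_{OLS}\) (the column-wise reading is our gloss, not a statement from the proof):

```latex
\hat{\boldsymbol{\beta}}_{LE}
  = \left(\mathbf{X}^{T}\mathbf{X} + \mathbf{I}_{p}\right)^{-1}
    \left(\mathbf{X}^{T}\mathbf{X} + d\,\mathbf{I}_{p}\right)
    \hat{\boldsymbol{\beta}}_{OLS},
  \qquad 0 < d < 1 .
```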
The following lemma, given by Farebrother (1976), is used to prove Theorems 2 and 3:
Lemma 1
Let \(\textbf{M}\) be a positive definite (PD) matrix and \(\varvec{\upsilon }\) be a column vector. Then \(\textbf{M}-\varvec{\upsilon }\varvec{\upsilon }^T\) is a PSD matrix iff \(\varvec{\upsilon }^T\textbf{M}^{-1}\varvec{\upsilon }\le 1\).
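The lemma is easy to verify numerically. The sketch below, with an arbitrary randomly generated PD matrix, checks that the PSD property of \(\textbf{M}-\varvec{\upsilon }\varvec{\upsilon }^T\) and the condition \(\varvec{\upsilon }^T\textbf{M}^{-1}\varvec{\upsilon }\le 1\) always agree:

```python
import numpy as np

def is_psd(A, tol=1e-10):
    """Positive semidefiniteness via eigenvalues of the symmetric part."""
    return bool(np.all(np.linalg.eigvalsh((A + A.T) / 2) >= -tol))

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
M = A @ A.T + np.eye(4)          # positive definite by construction
v = rng.normal(size=(4, 1))

lhs = is_psd(M - v @ v.T)                           # M - vv' is PSD?
rhs = float(v.T @ np.linalg.solve(M, v)) <= 1.0     # v'M^{-1}v <= 1?
print(lhs, rhs)                  # Lemma 1 says these always agree
```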
Proof of Theorem 2
where
and
First, we show that \(\textbf{M}_1\) is a PD matrix. We have
Since \(\varvec{\varSigma }\) is a PD matrix, it is enough to show that \(\textbf{K}=\varvec{\varLambda }^{-1}-\textbf{G}_d\varvec{\varLambda }^{-1}\textbf{G}_d^T\) is a PD matrix. \(\textbf{K}\) is a diagonal matrix with ith element
which is a positive number, and so \(\textbf{K}\) is a PD matrix. According to Lemma 1, \(\textbf{M}_1-\varvec{\upsilon }_1\varvec{\upsilon }_1^T\) is a PSD matrix iff
iff
iff
iff
This completes the proof.
Proof of Theorem 3
The proof is similar to that of Theorem 2. From (15) and (17) we have:
where
and
Then \(\textbf{M}_2\) is obtained as:
We just show that \(\textbf{L}=\varvec{\varLambda }^{-1}-\left( 2\textbf{I}_p-\textbf{G}_d\right) \textbf{G}_d\varvec{\varLambda }^{-1}\textbf{G}_d^T\left( 2\textbf{I}_p-\textbf{G}_d\right) ^T\) is a PD matrix. \(\textbf{L}\) is a diagonal matrix with ith element
where \(l_{ii}\) is a positive number. Thus, \(\textbf{L}\) is a PD matrix, and we conclude that \(\textbf{M}_2\) is a PD matrix, too. From Lemma 1, we know that \(\textbf{M}_2-\varvec{\upsilon }_2\varvec{\upsilon }_2^T\) is a PSD matrix iff
iff
iff
and this confirms the assertion.
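The positivity claims for \(\textbf{K}\) (Theorem 2) and \(\textbf{L}\) (Theorem 3) can also be checked numerically. The sketch below assumes the canonical diagonal form \(\textbf{G}_d=\left( \varvec{\varLambda }+\textbf{I}_p\right)^{-1}\left( \varvec{\varLambda }+d\textbf{I}_p\right)\) with \(0<d<1\), and the eigenvalues \(\lambda_i\) used are hypothetical:

```python
import numpy as np

# Hypothetical eigenvalues of X'X, including a near-zero one to mimic
# multicollinearity; G_d = (Λ + I)^{-1}(Λ + d I) is assumed diagonal.
lam = np.array([5.0, 1.0, 0.1, 0.001])
Lam_inv = np.diag(1.0 / lam)
for d in (0.1, 0.5, 0.9):
    G = np.diag((lam + d) / (lam + 1.0))   # g_i = (λ_i + d)/(λ_i + 1) < 1
    K = Lam_inv - G @ Lam_inv @ G.T        # Theorem 2: diagonal, k_ii > 0
    H = 2 * np.eye(4) - G
    L = Lam_inv - H @ G @ Lam_inv @ G.T @ H.T   # Theorem 3: l_ii > 0
    assert np.all(np.diag(K) > 0)
    assert np.all(np.diag(L) > 0)
print("K and L are positive definite for all tested d")
```

The assertions hold because \(g_i<1\) gives \(k_{ii}=(1-g_i^2)/\lambda_i>0\), and \((2-g_i)g_i=1-(1-g_i)^2<1\) gives \(l_{ii}=\left(1-g_i^2(2-g_i)^2\right)/\lambda_i>0\).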
Cite this article
Pirmohammadi, S., Bidram, H. Applications of resampling methods in multivariate Liu estimator. Comput Stat 39, 677–708 (2024). https://doi.org/10.1007/s00180-022-01316-2