Abstract
In the context of linear regression models, the ordinary least squares (OLS) estimator is well known to be very sensitive to outliers, whereas least absolute deviations (LAD) estimation is a robust alternative for estimating the unknown regression coefficients. Selecting significant variables is very important; however, discarding variables may sacrifice useful information. To guard against this, in our proposal we shrink the full-model estimates toward a candidate sub-model, resulting in estimators with improved risk. In this article, we consider shrinkage estimators in a sparse linear regression model and study their relative asymptotic properties. Advantages of the proposed estimators over the usual LAD estimator are demonstrated through a Monte Carlo simulation as well as a real data example.
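As an illustrative aside (not part of the paper's code), the robustness of LAD to outliers is easy to see numerically: the LAD fit can be computed as a linear program with slack variables for the absolute residuals. The sketch below uses SciPy's `linprog`; the helper name `lad_fit` is ours, not the authors'.

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """LAD estimate: minimize sum_i |y_i - x_i'b| as an LP with slacks u_i."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])      # objective: sum of u
    # |y - Xb| <= u  <=>  Xb - u <= y  and  -Xb - u <= -y
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n      # b free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# toy data on the line y = 2 + 3x, with one gross outlier
x = np.arange(10.0)
X = np.column_stack([np.ones_like(x), x])
y = 2 + 3 * x
y[9] = 100.0                                           # outlier
b_lad = lad_fit(X, y)                                  # stays on the true line
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]           # pulled toward the outlier
```

Because nine of the ten points lie exactly on the line, the LAD solution recovers \((2, 3)\) exactly, while the OLS slope is dragged far from 3 by the single outlier.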
References
Ahmed, S.E.: Penalty, Shrinkage and Pretest Estimation Strategies: Variable Selection and Estimation. Springer, Berlin (2014)
Ahmed, S.E., Hossain, S., Doksum, A.: Lasso and shrinkage estimation in Weibull censored regression models. J. Stat. Plan. Inference 142(6), 1273–1284 (2012)
Ahmed, S.E., Hussein, A.A., Sen, P.K.: Risk comparison of some shrinkage M-estimators in linear models. J. Nonparametric Stat. 18(4–6), 401–415 (2006)
Arashi, M., Norouzirad, M., Ahmed, S.E., Yüzbaşı, B.: Rank-based LIU regression. Comput. Stat. 1–37 (2018)
Barro, R., Lee, J.W.: Data Set for a Panel of 138 Countries. Discussion paper (1994)
Bassett, G., Koenker, R.: Asymptotic theory of least absolute error regression. J. Am. Stat. Assoc. 73(363), 618–622 (1978)
Hossain, S., Ahmed, S.E.: Shrinkage and penalty estimators of a Poisson regression model. Aust. N. Z. J. Stat. 54(3), 359–373 (2012)
Hurvich, C.M., Tsai, C.L.: Model selection for least absolute deviations regression in small samples. Stat. Probab. Lett. 9(3), 259–265 (1990)
Koenker, R.: quantreg: Quantile Regression. R package version 5.19 (2015)
Lawless, J.F., Singhal, K.: Efficient screening of nonnormal regression models. Biometrics 34(3), 318–327 (1978)
Montgomery, D.C., Peck, E.A., Vining, G.G.: Introduction to Linear Regression Analysis, 3rd edn. Wiley, New York (2011)
Saleh, A.K.M.E.: Theory of Preliminary Test and Stein-Type Estimation with Applications. Wiley, New York (2006)
Sen, P.K., Saleh, A.K.M.E.: On preliminary and shrinkage M-estimation in linear model. Ann. Stat. 15, 1580–1592 (1987)
Shumway, R.H., Azari, A.S., Pawitan, Y.: Modeling mortality fluctuations in Los Angeles as functions of pollution and weather effects. Environ. Res. 45(2), 224–241 (1988)
Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 58, 267–288 (1996)
Wang, H., Li, G., Jiang, G.: Robust regression shrinkage and consistent variable selection through the LAD-lasso. J. Bus. Econ. Stat. 25(3), 347–355 (2007)
Yüzbaşı, B., Ahmed, S.E., Güngör, M.: Improved penalty strategies in linear regression models. REVSTAT-Stat. J. 15(2), 251–276 (2017)
Yüzbaşı, B., Aşar, Y., Şık, Ş., Demiralp, A.: Improving estimations in quantile regression model with autoregressive errors. Therm. Sci. 22(1), S97–S107 (2018)
Acknowledgements
We would like to thank the anonymous reviewers for constructive comments, which significantly improved the presentation of the paper. M. Arashi’s research is supported in part by the National Research Foundation of South Africa (ref. IFR170227223754, grant number 109214). Prof. S. Ejaz Ahmed is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
Appendix
Lemma 1
[6] Whenever assumptions (A) and (B) hold, \(\sqrt{n}(\hat{\varvec{\beta }}^{F}-\varvec{\beta })\) converges in distribution to a p-dimensional normal distribution with mean \(\varvec{0}\) and covariance matrix \(\tau ^2 \varvec{C}^{-1}\), where \(\tau ^2=[2f(0)]^{-2}\) is the asymptotic variance of the sample median of the symmetric error distribution F with median 0. On the other hand,
Corollary 1
Assume (A) and (B) hold. Then
Proof
Suppose that the matrix \(\varvec{X}\) is partitioned into \(\varvec{X}_1\) and \(\varvec{X}_2\) with sizes \(n\times p_1\) and \(n\times p_2\), respectively. Accordingly, \(\varvec{\mu }\) and \(\varvec{\Sigma }\) are partitioned as follows:
and
Now, based on assumption (A), consider the covariance matrix \(\tau ^2 \varvec{C}^{-1}\). Using the Schur complement, the blockwise inverse is
The result then follows from Lemma 1.
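The Schur-complement identity used in this blockwise inversion is easy to check numerically. The following sketch (illustrative only; the matrix here is a random positive definite stand-in for \(\varvec{C}\)) verifies that the top-left block of \(\varvec{C}^{-1}\) equals \(\varvec{C}_{11.2}^{-1}\):

```python
import numpy as np

rng = np.random.default_rng(0)
p1, p2 = 3, 2
M = rng.standard_normal((p1 + p2, p1 + p2))
C = M @ M.T + np.eye(p1 + p2)                   # positive definite stand-in for C
C11, C12 = C[:p1, :p1], C[:p1, p1:]
C21, C22 = C[p1:, :p1], C[p1:, p1:]
C11_2 = C11 - C12 @ np.linalg.inv(C22) @ C21    # Schur complement C_{11.2}
# top-left p1 x p1 block of C^{-1} equals the inverse of the Schur complement
top_left = np.linalg.inv(C)[:p1, :p1]
```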
Proof
(Theorem 1). Based on Eq. (19), we can construct Hotelling’s T-squared statistic as
which follows a noncentral \(\chi ^2\)-distribution with \(p_2\) d.f. and non-centrality parameter \(\varDelta \). Under the null hypothesis, the test statistic \(\mathscr {L}_n\) has a central \(\chi ^2\) distribution with \(p_2\) d.f.
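As a quick sanity check (ours, not the authors'), the null distribution of such a Wald-type quadratic form can be simulated: if \(z \sim N_{p_2}(\varvec{0}, \varvec{\Sigma})\), then \(z'\varvec{\Sigma}^{-1}z \sim \chi^2_{p_2}\), whose mean is \(p_2\). The covariance below is an arbitrary positive definite stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)
p2, B = 4, 20000
M = rng.standard_normal((p2, p2))
Sigma = M @ M.T + np.eye(p2)        # stand-in for the asymptotic covariance under H0
Z = rng.multivariate_normal(np.zeros(p2), Sigma, size=B)
# Wald-type quadratic form z' Sigma^{-1} z for each draw
stat = np.einsum('ij,jk,ik->i', Z, np.linalg.inv(Sigma), Z)
# a central chi^2 with p2 d.f. has mean p2; stat.mean() should be close to 4
```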
Proof
(Theorem 2). The first equation follows from (18). To obtain the asymptotic distribution of the restricted LAD estimator and complete the proof, we use the conditional normal distribution \(\left( \hat{\varvec{\beta }}^{F}_1|\hat{\varvec{\beta }}_2^{F}=\varvec{\beta }_2=\varvec{0}\right) \).
The conditional distribution of \(X_1\) given \(X_2=x_2\) is normal with mean \(\varvec{\mu }_1+\varvec{\Sigma }_{12}\varvec{\Sigma }_{22}^{-1}(x_2-\varvec{\mu }_2)\) and covariance matrix \(\varvec{\Sigma }_{11}-\varvec{\Sigma }_{12}\varvec{\Sigma }_{22}^{-1}\varvec{\Sigma }_{21}\).
Now, substituting Eqs. (18)–(20) into (22), the result is obtained.
The mean vector is
Also, the covariance matrix has the form
Using the fact that \(\varvec{C}_{12}\varvec{C}_{22}^{-1}\varvec{C}_{21}=\varvec{C}_{11}-\varvec{C}_{11.2}\), Eq. (24) reduces to
The proof is complete.
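The conditional mean and covariance formulas used in this proof can be verified by simulation (an illustrative check, with an arbitrary positive definite joint covariance): subtracting the regression of \(X_1\) on \(X_2\) leaves a residual whose covariance is the Schur complement \(\varvec{\Sigma }_{11}-\varvec{\Sigma }_{12}\varvec{\Sigma }_{22}^{-1}\varvec{\Sigma }_{21}\).

```python
import numpy as np

rng = np.random.default_rng(2)
p1, p2, B = 2, 3, 200_000
M = rng.standard_normal((p1 + p2, p1 + p2))
Sigma = M @ M.T + np.eye(p1 + p2)                     # joint covariance (illustrative)
Z = rng.multivariate_normal(np.zeros(p1 + p2), Sigma, size=B)
X1, X2 = Z[:, :p1], Z[:, p1:]
A = Sigma[:p1, p1:] @ np.linalg.inv(Sigma[p1:, p1:])  # Sigma_12 Sigma_22^{-1}
R = X1 - X2 @ A.T                                     # X1 minus its regression on X2
# R is uncorrelated with X2 and its covariance is the conditional covariance
S11_2 = Sigma[:p1, :p1] - A @ Sigma[p1:, :p1]
emp_cov = np.cov(R, rowvar=False)
```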
© 2020 Springer Nature Switzerland AG
Cite this paper
Yüzbaşı, B., Ahmed, S.E., Arashi, M., Norouzirad, M. (2020). LAD, LASSO and Related Strategies in Regression Models. In: Xu, J., Ahmed, S., Cooke, F., Duca, G. (eds) Proceedings of the Thirteenth International Conference on Management Science and Engineering Management. ICMSEM 2019. Advances in Intelligent Systems and Computing, vol 1001. Springer, Cham. https://doi.org/10.1007/978-3-030-21248-3_32
DOI: https://doi.org/10.1007/978-3-030-21248-3_32
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-21247-6
Online ISBN: 978-3-030-21248-3