Abstract
In this chapter, we present large sample estimation strategies for the regression coefficients of a classical multiple regression model. These strategies are motivated by Stein-rule and pretest estimation procedures. In the context of two competing regression models, a full model and a candidate submodel, we suggest an adaptive shrinkage estimation technique that shrinks the full model estimate toward the submodel estimate. An estimator based on the pretest principle is also considered. Further, we apply the penalty estimation strategy for both variable selection and parameter estimation. We investigate the properties of the suggested estimators analytically and numerically, and we compare the relative performance of each listed estimator with that of the estimator based on the full model. Our analytical and simulation studies reveal that the shrinkage estimation strategy outperforms estimation based on the full model. Further, based on our limited simulation study, the shrinkage and pretest estimators outperform penalty estimators when there are many inactive covariates in the model. Finally, the suggested methodology is evaluated through an application to real prostate data.
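For concreteness, a Stein-type shrinkage estimator of the kind described above typically takes the following form. This is a sketch in illustrative notation, not the chapter's own: here \hat{\beta}^{FM} and \hat{\beta}^{SM} denote the full model and submodel estimators, p_2 the number of coefficients restricted to zero in the submodel, T_n a large sample test statistic for that restriction, and c_{\alpha} an upper \alpha-level critical value, all assumed for illustration:

\hat{\beta}^{S} = \hat{\beta}^{SM} + \left(1 - \frac{p_2 - 2}{T_n}\right)\left(\hat{\beta}^{FM} - \hat{\beta}^{SM}\right), \qquad p_2 \geq 3.

The positive-part variant \hat{\beta}^{S+} replaces the shrinkage factor 1 - (p_2 - 2)/T_n by its positive part, guarding against over-shrinkage when T_n is small, while a pretest estimator switches between the two models outright:

\hat{\beta}^{PT} = \hat{\beta}^{FM} - \left(\hat{\beta}^{FM} - \hat{\beta}^{SM}\right) I\left(T_n \leq c_{\alpha}\right).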
Copyright information
© 2014 The Author(s)
About this chapter
Cite this chapter
Ahmed, S.E. (2014). Estimation Strategies in Multiple Regression Models. In: Penalty, Shrinkage and Pretest Strategies. SpringerBriefs in Statistics. Springer, Cham. https://doi.org/10.1007/978-3-319-03149-1_4
DOI: https://doi.org/10.1007/978-3-319-03149-1_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-03148-4
Online ISBN: 978-3-319-03149-1
eBook Packages: Mathematics and Statistics (R0)