
Estimation Strategies in Multiple Regression Models

Chapter in: Penalty, Shrinkage and Pretest Strategies. Part of the book series: SpringerBriefs in Statistics.


Abstract

In this chapter we present several large-sample estimation strategies for the regression coefficients of a classical multiple regression model. These strategies are motivated by Stein-rule and pretest estimation procedures. In the context of two competing regression models (the full model and a candidate submodel), we suggest an adaptive shrinkage estimation technique that shrinks the full-model estimate in the direction of the submodel estimate. An estimator based on the pretest principle is also considered. Further, we apply the penalty estimation strategy for both variable selection and parameter estimation. We investigate the properties of the suggested estimators analytically and numerically, and assess the performance of each listed estimator relative to the estimator based on the full model. Our analytical and simulation studies reveal that the shrinkage estimation strategy outperforms estimation based on the full model. Moreover, based on our limited simulation study, the shrinkage and pretest estimators outperform penalty estimators when the model contains many inactive covariates. Finally, the suggested methodology is illustrated through an application to a real prostate data set.
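To make the shrinkage and pretest strategies described above concrete, the following is a minimal sketch in base R, not the chapter's own implementation. It assumes the usual setup in which the coefficient vector is partitioned into an active part and a suspected-inactive part, the candidate submodel restricts the suspected-inactive coefficients to zero, and the distance statistic is taken to be the Wald statistic for that restriction; the simulated data and all variable names are illustrative.

```r
## Minimal sketch (assumed setup): full model vs. candidate submodel,
## Stein-type shrinkage, positive-part shrinkage, and pretest estimators.
set.seed(1)
n  <- 100
p1 <- 3        # active covariates kept in the submodel
p2 <- 5        # covariates suspected to be inactive (restricted to zero)
X1 <- matrix(rnorm(n * p1), n, p1)
X2 <- matrix(rnorm(n * p2), n, p2)
y  <- drop(X1 %*% c(2, 1, -1)) + rnorm(n)   # beta2 = 0 in the generating model

full <- lm(y ~ X1 + X2 - 1)   # full model (FM)
sub  <- lm(y ~ X1 - 1)        # candidate submodel (SM)

b1_fm <- coef(full)[1:p1]     # full-model estimate of the active coefficients
b1_sm <- coef(sub)            # submodel estimate of the active coefficients

## Wald-type distance statistic for H0: beta2 = 0 (an assumed choice of T_n)
b2_fm <- coef(full)[(p1 + 1):(p1 + p2)]
V2    <- vcov(full)[(p1 + 1):(p1 + p2), (p1 + 1):(p1 + p2)]
Tn    <- drop(t(b2_fm) %*% solve(V2) %*% b2_fm)

## Stein-type shrinkage estimator: shrink the full-model estimate toward
## the submodel estimate; the positive-part version truncates overshrinkage.
shrink_factor <- 1 - (p2 - 2) / Tn
b1_s  <- b1_sm + shrink_factor * (b1_fm - b1_sm)
b1_sp <- b1_sm + max(shrink_factor, 0) * (b1_fm - b1_sm)

## Pretest estimator: keep the full-model estimate only if H0 is rejected
alpha <- 0.05
b1_pt <- if (Tn > qchisq(1 - alpha, df = p2)) b1_fm else b1_sm

rbind(FM = b1_fm, SM = b1_sm, Shrinkage = b1_s,
      PositivePart = b1_sp, Pretest = b1_pt)
```

A penalty strategy such as the lasso would instead select and estimate coefficients in a single penalized fit; it is omitted here because it requires a penalized-regression routine rather than the closed-form combinations shown above.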

Author information

Correspondence to S. Ejaz Ahmed.

Copyright information

© 2014 The Author(s)

About this chapter

Cite this chapter

Ahmed, S.E. (2014). Estimation Strategies in Multiple Regression Models. In: Penalty, Shrinkage and Pretest Strategies. SpringerBriefs in Statistics. Springer, Cham. https://doi.org/10.1007/978-3-319-03149-1_4
