Model selection via standard error adjusted adaptive lasso
The adaptive lasso is a model selection method shown to be both consistent in variable selection and asymptotically normal in coefficient estimation. Its practical variable selection performance, however, depends on the weights used. Weights computed from the OLS estimate (OLS-adaptive lasso) can perform very poorly when collinearity of the model matrix is a concern. To achieve better variable selection, we incorporate the standard errors of the OLS estimate into the weight calculation and propose two versions of the adaptive lasso, denoted SEA-lasso and NSEA-lasso. Numerical studies show that when the predictors are highly correlated, SEA-lasso and NSEA-lasso can outperform the OLS-adaptive lasso under a variety of linear regression settings while retaining the theoretical properties of the adaptive lasso.
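To make the idea concrete, here is a minimal sketch of an adaptive lasso whose weights are adjusted by the OLS standard errors. The specific weight formula below, w_j = se_j / |beta_ols_j| (the reciprocal of the OLS t-statistic), is one plausible reading of a standard-error-adjusted weight and is an assumption for illustration, not the paper's exact SEA-lasso or NSEA-lasso definition; the weighted lasso is then solved by rescaling columns and running an ordinary lasso via coordinate descent.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Plain lasso by cyclic coordinate descent:
    minimise (1/(2n))||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-column curvature
    r = y.copy()                        # residual y - X b
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]                         # remove j's contribution
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]                         # add it back
    return b

def sea_lasso(X, y, lam):
    """Illustrative standard-error-adjusted adaptive lasso (assumed form):
    weight w_j = se_j / |beta_ols_j|, i.e. the reciprocal OLS t-statistic,
    so imprecisely estimated coefficients are penalised more heavily."""
    n, p = X.shape
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols
    sigma2 = resid @ resid / (n - p)                    # error variance estimate
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    w = se / np.abs(beta_ols)
    # Weighted lasso via the usual rescaling trick: fit a plain lasso on
    # X / w, then undo the scaling on the returned coefficients.
    return lasso_cd(X / w, y, lam) / w
```

Because the weighted penalty is lam * sum_j w_j |b_j|, coefficients with small OLS t-statistics (large w_j) are shrunk hard toward zero, while well-estimated coefficients are nearly unpenalised.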
Keywords: BIC · Model selection consistency · Solution path · Variable selection