Simultaneous estimation and variable selection in median regression using Lasso-type penalty

  • Jinfeng Xu
  • Zhiliang Ying


We consider median regression with a LASSO-type penalty term for variable selection. With a fixed number of variables in the regression model, a two-stage method is proposed for simultaneous estimation and variable selection in which the degree of penalization is chosen adaptively. A Bayesian information criterion type approach yields a data-driven procedure that is proved to select asymptotically optimal tuning parameters automatically. The resulting estimator is shown to achieve the so-called oracle property. The combination of median regression and the LASSO penalty is computationally easy to implement via standard linear programming. A random perturbation scheme can be used to obtain a simple estimator of the standard error. Simulation studies assess the finite-sample performance of the proposed method, and we illustrate the methodology with a real example.
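The claim that the penalized median-regression (LAD-LASSO) objective reduces to standard linear programming can be sketched as follows: the objective min_β Σᵢ|yᵢ − xᵢ'β| + λΣⱼ|βⱼ| becomes an LP after introducing slack variables u ≥ |residuals| and v ≥ |β|. This is a minimal illustrative sketch, not the paper's implementation; the function name and the SciPy `linprog` backend are my own choices.

```python
import numpy as np
from scipy.optimize import linprog

def lad_lasso(X, y, lam):
    """Solve min_beta sum|y - X @ beta| + lam * sum|beta| as a linear program.

    Decision variables: beta (p, free), u (n, >= 0) bounding |residuals|,
    and v (p, >= 0) bounding |beta|.
    """
    n, p = X.shape
    # Objective: sum(u) + lam * sum(v); beta itself carries no cost.
    c = np.concatenate([np.zeros(p), np.ones(n), lam * np.ones(p)])
    # u_i >= +/-(y_i - x_i' beta):  -X beta - u <= -y  and  X beta - u <= y
    A1 = np.hstack([-X, -np.eye(n), np.zeros((n, p))])
    A2 = np.hstack([X, -np.eye(n), np.zeros((n, p))])
    # v_j >= +/- beta_j:  beta - v <= 0  and  -beta - v <= 0
    A3 = np.hstack([np.eye(p), np.zeros((p, n)), -np.eye(p)])
    A4 = np.hstack([-np.eye(p), np.zeros((p, n)), -np.eye(p)])
    A_ub = np.vstack([A1, A2, A3, A4])
    b_ub = np.concatenate([-y, y, np.zeros(p), np.zeros(p)])
    bounds = [(None, None)] * p + [(0, None)] * (n + p)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]
```

The abstract's random-perturbation standard-error scheme would amount to re-solving this LP repeatedly with the unit costs on u replaced by i.i.d. positive random weights and taking the empirical spread of the resulting coefficient estimates.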


Variable selection · Median regression · Least absolute deviations · Lasso · Perturbation · Bayesian information criterion





Copyright information

© The Institute of Statistical Mathematics, Tokyo 2008

Authors and Affiliations

  1. Department of Statistics and Applied Probability, National University of Singapore, Singapore
  2. Department of Statistics, Columbia University, New York, USA
