χ²-Confidence Sets in High-Dimensional Regression

  • Sara van de Geer
  • Benjamin Stucky
Conference paper
Part of the Abel Symposia book series (ABEL, volume 11)

Abstract

We study a high-dimensional regression model. Our aim is to construct a confidence set for a given group of regression coefficients, treating all other regression coefficients as nuisance parameters. We apply a one-step procedure with the square-root Lasso as initial estimator and a multivariate square-root Lasso for constructing a surrogate Fisher information matrix. The multivariate square-root Lasso is based on a nuclear-norm loss with an ℓ₁-penalty. We show that this procedure leads to an asymptotically χ²-distributed pivot, with a remainder term depending only on the ℓ₁-error of the initial estimator. We show that under ℓ₁-sparsity conditions on the regression coefficients β⁰ the square-root Lasso produces a consistent estimator of the noise variance, and we establish sharp oracle inequalities showing that the remainder term is small under further sparsity conditions on β⁰ and compatibility conditions on the design.
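
For orientation, here is a minimal numerical sketch, in Python with the generic convex solver cvxpy, of the three steps the abstract describes: the square-root Lasso initial estimator, a multivariate square-root Lasso residualizing the group of interest against the nuisance columns, and the one-step correction with its χ² pivot. The tuning constant in λ (taken of order √(log p / n)), the normalization of the nuclear-norm loss, and the exact form of the Studentized statistic are illustrative assumptions rather than the paper's precise constructions.

    import numpy as np
    import cvxpy as cp
    from scipy.stats import chi2

    def sqrt_lasso(X, y, lam):
        # Square-root Lasso: argmin_b ||y - Xb||_2 / sqrt(n) + lam * ||b||_1
        n, p = X.shape
        b = cp.Variable(p)
        cp.Problem(cp.Minimize(cp.norm(y - X @ b, 2) / np.sqrt(n)
                               + lam * cp.norm(b, 1))).solve()
        return b.value

    def mv_sqrt_lasso(X_mJ, X_J, lam):
        # Multivariate square-root Lasso: nuclear-norm loss plus l1-penalty
        # (the 1/n normalization of the loss is a simplifying assumption)
        G = cp.Variable((X_mJ.shape[1], X_J.shape[1]))
        obj = cp.normNuc(X_J - X_mJ @ G) / X_mJ.shape[0] + lam * cp.sum(cp.abs(G))
        cp.Problem(cp.Minimize(obj)).solve()
        return G.value

    rng = np.random.default_rng(0)
    n, p = 100, 20
    X = rng.standard_normal((n, p))
    beta0 = np.zeros(p)
    beta0[:3] = 1.0                          # sparse truth
    y = X @ beta0 + rng.standard_normal(n)

    J = np.array([0, 1])                     # group of interest
    mJ = np.setdiff1d(np.arange(p), J)       # nuisance coordinates
    lam = np.sqrt(2.0 * np.log(p) / n)       # order sqrt(log p / n); constant is a guess

    # Step 1: initial estimator; its residuals give the noise-variance estimate
    beta_hat = sqrt_lasso(X, y, lam)
    sigma_hat = np.linalg.norm(y - X @ beta_hat) / np.sqrt(n)

    # Step 2: surrogate Fisher information, residualizing X_J on the nuisance columns
    Gamma_hat = mv_sqrt_lasso(X[:, mJ], X[:, J], lam)
    Xres = X[:, J] - X[:, mJ] @ Gamma_hat

    # Step 3: one-step correction of the group-J coefficients
    A = Xres.T @ X[:, J]
    b_J = beta_hat[J] + np.linalg.solve(A, Xres.T @ (y - X @ beta_hat))

    # Studentized quadratic form: approximately chi^2 with |J| degrees of freedom
    # when the remainder term (driven by the l1-error of beta_hat) is negligible
    w = A @ (b_J - beta0[J])
    T = w @ np.linalg.solve(Xres.T @ Xres, w) / sigma_hat**2
    print(f"T = {T:.2f}  vs  chi2 0.95 quantile = {chi2.ppf(0.95, len(J)):.2f}")

With this statistic, a nominal 95% confidence set for β⁰_J collects all group parameters b for which the quadratic form stays below the χ² quantile; in the sketch, coverage corresponds to T landing below the printed value.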

Keywords

Tuning Parameter, Remainder Term, Initial Estimator, Structured Sparsity, Nuclear Norm

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Seminar for Statistics, ETH Zürich, Zürich, Switzerland
