
Bounds on the Prediction Error of Penalized Least Squares Estimators with Convex Penalty

  • Conference paper
  • In: Modern Problems of Stochastic Analysis and Statistics (MPSAS 2016)

Part of the book series: Springer Proceedings in Mathematics & Statistics (PROMS, volume 208)

Abstract

This paper considers the penalized least squares estimator with an arbitrary convex penalty. When the observation noise is Gaussian, we show that the prediction error is a subgaussian random variable concentrated around its median. We apply this concentration property to derive sharp oracle inequalities for the prediction error of the LASSO, the group LASSO, and the SLOPE estimators, both in probability and in expectation. In contrast to previous work on LASSO-type methods, our oracle inequalities in probability hold at any confidence level for estimators whose tuning parameters do not depend on that level. This is also why we can establish sparsity oracle bounds in expectation for LASSO-type estimators, whereas previously known techniques did not allow one to control the expected risk. In addition, we show that the concentration rate in the oracle inequalities is better than was commonly understood before.
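To fix ideas, the setting described in the abstract can be sketched as follows; the notation below (design matrix X, tuning parameter λ, penalty h) is ours for illustration and is not quoted from the paper:

    % Penalized least squares with a convex penalty h (illustrative notation):
    % y \in \mathbb{R}^n are the observations, X \in \mathbb{R}^{n \times p} the design matrix.
    \hat{\beta} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p}
        \left\{ \frac{1}{n} \lVert y - X\beta \rVert_2^2 + 2\lambda\, h(\beta) \right\},
    \qquad h \text{ convex}, \quad \lambda > 0.

In this notation, the LASSO corresponds to h(β) = ‖β‖₁, the group LASSO to a sum of Euclidean norms over prescribed groups of coordinates, and SLOPE to a sorted ℓ₁ norm with decreasing weights. The concentration result then says that the prediction error ‖X(β̂ − β*)‖₂, with β* the true parameter, is subgaussian around its median, which is what allows the oracle inequalities to hold at every confidence level with a single λ that does not depend on that level.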



Acknowledgements

This work was supported by GENES and by the French National Research Agency (ANR) under the grants IPANEMA (ANR-13-BSH1-0004-02) and Labex Ecodec (ANR-11-LABEX-0047). It was also supported by the "Chaire Economie et Gestion des Nouvelles Données", under the auspices of Institut Louis Bachelier, Havas-Media and Paris-Dauphine.

Author information

Corresponding author: Alexandre Tsybakov.


Copyright information

© 2017 Springer International Publishing AG

About this paper


Cite this paper

Bellec, P., Tsybakov, A. (2017). Bounds on the Prediction Error of Penalized Least Squares Estimators with Convex Penalty. In: Panov, V. (ed.) Modern Problems of Stochastic Analysis and Statistics. MPSAS 2016. Springer Proceedings in Mathematics & Statistics, vol 208. Springer, Cham. https://doi.org/10.1007/978-3-319-65313-6_13

