
Estimation and Testing Under Sparsity

École d'Été de Probabilités de Saint-Flour XLV – 2015

  • Book
  • © 2016

Overview

  • Starting with the popular Lasso method as its prime example, the book extends its treatment to a broad family of estimation methods for high-dimensional data
  • A theoretical basis for sparsity-inducing methods is provided, together with ways to construct confidence intervals and tests
  • The focus is on features common to methods for high-dimensional data, offering a potential starting point for the analysis of other methods not treated in the book

Part of the book series: Lecture Notes in Mathematics (LNM, volume 2159)

Part of the book sub series: École d'Été de Probabilités de Saint-Flour (LNMECOLE)



About this book

Taking the Lasso method as its starting point, this book describes the main ingredients needed to study general loss functions and sparsity-inducing regularizers. It also provides a semi-parametric approach to establishing confidence intervals and tests. Sparsity-inducing methods have proven to be very useful in the analysis of high-dimensional data. Examples include the Lasso and group Lasso methods, and the least squares method with other norm-penalties, such as the nuclear norm. The illustrations provided include generalized linear models, density estimation, matrix completion and sparse principal components. Each chapter ends with a problem section. The book can be used as a textbook for a graduate or PhD course.
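To make the book's starting point concrete: the Lasso estimator minimizes a least-squares loss plus an ℓ1 penalty, (1/2n)‖y − Xβ‖² + λ‖β‖₁. The sketch below (not taken from the book; the function names, the λ scaling, and the plain coordinate-descent solver are illustrative assumptions) shows how the penalty produces sparse solutions via soft-thresholding.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Elementwise soft-thresholding operator S(z, gamma)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - X b||^2 + lam * ||b||_1 by coordinate descent.

    Illustrative sketch only; notation and scaling are assumptions,
    not the book's own conventions.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_curv = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam) / col_curv[j]
    return beta
```

On data generated from a sparse coefficient vector, the soft-thresholding step sets most inactive coordinates exactly to zero, which is the sparsity-inducing behavior the book analyzes in generality for other losses and norm penalties.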


Table of contents (18 chapters)

Reviews

“This book is presented as a series of lecture notes on the theory of penalized estimators under sparsity. … The level of detail is high, and almost all proofs are given in full, with discussion. Each chapter ends with a section of problems, which could be used in a study setting to improve understanding of the proofs.” (Andrew Duncan A. C. Smith, Mathematical Reviews, August, 2017)


“The book provides several examples and illustrations of the methods presented and discussed, while each of its 17 chapters ends with a problem section. Thus, it can be used as textbook for students mainly at postgraduate level.” (Christina Diakaki, zbMATH 1362.62006, 2017)

Authors and Affiliations

  • Seminar für Statistik HGG 24.1, ETH Zentrum, Zürich, Switzerland

    Sara van de Geer
