Statistics and Computing, Volume 25, Issue 4, pp 781–795

Scalable estimation strategies based on stochastic approximations: classical results and new insights


DOI: 10.1007/s11222-015-9560-y

Cite this article as:
Toulis, P. & Airoldi, E.M. Stat Comput (2015) 25: 781. doi:10.1007/s11222-015-9560-y

Abstract

Estimation with large amounts of data can be facilitated by stochastic gradient methods, in which model parameters are updated sequentially using small batches of data at each step. Here, we review early work and modern results that illustrate the statistical properties of these methods, including convergence rates, stability, and asymptotic bias and variance. We then survey modern applications where these methods are useful, ranging from an online version of the EM algorithm to deep learning. In light of these results, we argue that stochastic gradient methods are poised to become benchmark principled estimation procedures for large datasets, especially those in the family of stable proximal methods, such as implicit stochastic gradient descent.
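To make the updates referred to in the abstract concrete, the following is a minimal sketch, assuming a linear model with squared-error loss (variable names and the simulated data are illustrative, not taken from the article). It contrasts the standard explicit stochastic gradient step with the implicit variant, for which this loss admits a closed-form solution of the fixed-point update.

```python
import numpy as np

def sgd_explicit(theta, x, y, gamma):
    # Explicit SGD step for least squares:
    # theta_n = theta_{n-1} + gamma_n * x_n * (y_n - x_n' theta_{n-1})
    return theta + gamma * x * (y - x @ theta)

def sgd_implicit(theta, x, y, gamma):
    # Implicit SGD step: the new iterate appears on both sides,
    # theta_n = theta_{n-1} + gamma_n * x_n * (y_n - x_n' theta_n).
    # For squared-error loss the fixed point has a closed form:
    scale = gamma / (1.0 + gamma * (x @ x))
    return theta + scale * x * (y - x @ theta)

# Illustrative use: recover theta_star from a stream of (x, y) pairs.
rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
for n in range(1, 10_001):
    x = rng.normal(size=3)
    y = x @ theta_star + rng.normal(scale=0.1)
    theta = sgd_implicit(theta, x, y, gamma=1.0 / n)  # decaying learning rate
print(theta)  # approaches theta_star
```

The implicit step shrinks the effective learning rate by 1 + gamma * ||x||^2, which is what makes it markedly less sensitive to a misspecified learning-rate schedule than the explicit step.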

Keywords

Maximum likelihood · Recursive estimation · Implicit stochastic gradient descent methods · Optimal learning rate · Asymptotic analysis · Big data

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. Department of Statistics, Harvard University, Cambridge, USA
