Constructive Approximation, Volume 26, Issue 2, pp 289–315

On Early Stopping in Gradient Descent Learning

  • Yuan Yao
  • Lorenzo Rosasco
  • Andrea Caponnetto

Abstract

In this paper we study a family of gradient descent algorithms for approximating the regression function from reproducing kernel Hilbert spaces (RKHSs), the family being characterized by polynomially decreasing step sizes (learning rates). By solving a bias-variance trade-off we obtain an early stopping rule and probabilistic upper bounds on the convergence of the algorithms. We also discuss the implications of these results in the context of classification, where fast convergence rates can be achieved for plug-in classifiers. Connections are addressed with Boosting, Landweber iterations, and online learning algorithms viewed as stochastic approximations of the gradient descent method.
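As a rough illustration only (not the authors' code), the sketch below runs gradient descent on the empirical least-squares risk in an RKHS with a polynomially decaying step size, and picks the stopping iteration by validation error as a practical surrogate for the bias-variance trade-off analyzed in the paper. The Gaussian kernel, the constants eta0 and theta, and the synthetic data are illustrative assumptions.

# Minimal sketch: kernel gradient descent with early stopping.
# Kernel choice, step-size constants, and data are assumptions, not from the paper.
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two sample sets.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_gradient_descent(K, y, n_iter, eta0=1.0, theta=0.5):
    # Coefficients c parameterize f_t = sum_i c[i] * K(x_i, .).
    # Step size at iteration t: eta_t = eta0 / (t + 1)**theta (polynomial decay).
    # Returns the whole coefficient path, so any iterate can serve as the
    # early-stopped estimator.
    n = len(y)
    c = np.zeros(n)
    path = []
    for t in range(n_iter):
        eta = eta0 / (t + 1) ** theta
        # Gradient of the empirical risk (1/n) * ||K c - y||^2 in coefficient form.
        grad = 2.0 * (K @ c - y) / n
        c = c - eta * grad
        path.append(c.copy())
    return path

# Usage: choose the stopping iteration t_star by held-out validation error.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 1))
y = np.sin(3 * X[:, 0]) + 0.3 * rng.standard_normal(80)
X_tr, y_tr, X_va, y_va = X[:60], y[:60], X[60:], y[60:]

K_tr = gaussian_kernel(X_tr, X_tr)
K_va = gaussian_kernel(X_va, X_tr)
path = kernel_gradient_descent(K_tr, y_tr, n_iter=200)
val_err = [np.mean((K_va @ c - y_va) ** 2) for c in path]
t_star = int(np.argmin(val_err))
print(f"early stopping at t = {t_star}, validation MSE = {val_err[t_star]:.4f}")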

Keywords

Convergence Rate, Gradient Descent, Tikhonov Regularization, Reproducing Kernel Hilbert Space, Gradient Descent Method


Copyright information

© Springer 2007

Authors and Affiliations

  1. Department of Mathematics, University of California, Berkeley, CA 94720, USA
  2. C.B.C.L., Massachusetts Institute of Technology, Bldg. E25-201, 45 Carleton St., Cambridge, MA 02142, USA
  3. DISI, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy
