Abstract
In this paper we study a family of gradient descent algorithms for approximating the regression function from reproducing kernel Hilbert spaces (RKHSs), the family being characterized by a polynomially decreasing rate of step sizes (or learning rates). By solving a bias-variance trade-off we obtain an early stopping rule and probabilistic upper bounds on the convergence of the algorithms. We also discuss the implications of these results in the context of classification, where fast convergence rates can be achieved for plug-in classifiers. Connections are addressed with boosting, Landweber iterations, and online learning algorithms viewed as stochastic approximations of the gradient descent method.
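To make the setting concrete, the following is a minimal sketch of gradient descent learning in an RKHS with a polynomially decaying step size and early stopping. The Gaussian kernel, the specific schedule eta_t = eta0 / (t+1)**theta, and the held-out-error stopping criterion are illustrative assumptions for this sketch; the paper's actual stopping rule is derived from the bias-variance analysis and depends on the sample size and the regularity of the regression function.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel between the rows of X and Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def kernel_gradient_descent(K, y, theta=0.5, eta0=None, n_iter=200):
    """Gradient descent on the empirical least-squares risk in the RKHS.

    The iterate is kept in dual form, f_t(x) = sum_i alpha_t[i] * k(x, x_i),
    and the step size decays polynomially: eta_t = eta0 / (t + 1)**theta
    (one illustrative instance of the polynomial schedule).
    Returns the whole path of dual coefficients so a stopping iteration
    can be selected afterwards.
    """
    n = len(y)
    if eta0 is None:
        eta0 = n / np.linalg.eigvalsh(K).max()  # keep the steps stable
    alpha = np.zeros(n)
    path = []
    for t in range(n_iter):
        eta_t = eta0 / (t + 1) ** theta
        residual = K @ alpha - y              # f_t(x_i) - y_i
        alpha = alpha - eta_t * residual / n  # gradient step on the empirical risk
        path.append(alpha.copy())
    return path

# Toy usage: choose the stopping time by the error on a held-out set,
# a simple empirical surrogate for the bias-variance-based stopping rule.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 1))
y = np.sin(3 * X[:, 0]) + 0.3 * rng.standard_normal(80)
Xv = rng.uniform(-1, 1, size=(40, 1))
yv = np.sin(3 * Xv[:, 0])
K, Kv = gaussian_kernel(X, X), gaussian_kernel(Xv, X)
path = kernel_gradient_descent(K, y)
errors = [np.mean((Kv @ a - yv) ** 2) for a in path]
print("early stopping iteration:", int(np.argmin(errors)))
```

Running the descent past the selected iteration typically lets the variance term dominate (overfitting the noise), which is why stopping early acts as a form of regularization.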
Cite this article
Yao, Y., Rosasco, L. & Caponnetto, A. On Early Stopping in Gradient Descent Learning. Constr Approx 26, 289–315 (2007). https://doi.org/10.1007/s00365-006-0663-2