New error bounds and their applications to convergence analysis of iterative algorithms

Mathematical Programming

Abstract.

We present two new error bounds for optimization problems over a convex set whose objective function f is either semianalytic or γ-strictly convex, with γ ≥ 1. We then apply these error bounds to analyze the rate of convergence of a wide class of iterative descent algorithms for the aforementioned optimization problem. Our analysis shows that the function sequence {f(x^k)} converges at least at the sublinear rate of k^{-ε} for some positive constant ε, where k is the iteration index. Moreover, the distances from the iterate sequence {x^k} to the set of stationary points of the optimization problem converge to zero at least sublinearly.
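
Read as statements about the iterates, the abstract's two rate claims can be sketched as below. This is a minimal LaTeX rendering based only on the abstract, not the paper's own statement: the limit value f̄ (of the descent sequence f(x^k)), the stationary-point set X*, and the constants C > 0, ε > 0 are notation introduced here for illustration.

% Minimal sketch of the abstract's convergence claims (assumed notation,
% not taken from the paper): \bar{f} is the limit of the monotone descent
% sequence f(x^k); X^* is the set of stationary points; C > 0 and
% \varepsilon > 0 are unspecified problem-dependent constants.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  f(x^k) - \bar{f} \;\le\; C\, k^{-\varepsilon}
  \quad \text{for all } k \ge 1,
  \qquad \text{and} \qquad
  \operatorname{dist}\bigl(x^k,\, X^\ast\bigr) \;\to\; 0
  \ \text{at least sublinearly}.
\]
\end{document}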

Additional information

Received: October 5, 1999 / Accepted: January 1, 2000 / Published online: July 20, 2000

Cite this article

Luo, ZQ. New error bounds and their applications to convergence analysis of iterative algorithms. Math. Program. 88, 341–355 (2000). https://doi.org/10.1007/s101070050020
