
Convergence of Implementable Descent Algorithms for Unconstrained Optimization

Published in: Journal of Optimization Theory and Applications

Abstract

Descent algorithms use sufficient descent directions combined with stepsize rules, such as the Armijo rule, to produce sequences of iterates whose cluster points satisfy some necessary optimality conditions. In this note, we present a proof that the whole sequence of iterates converges for quasiconvex objective functions.
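The framework described in the abstract can be sketched in code: pick a sufficient descent direction, choose a stepsize by the Armijo (backtracking) rule, and iterate. The sketch below is illustrative only and is not taken from the paper; the function names, constants (sigma, beta), and the use of steepest descent as the direction are assumptions.

```python
import numpy as np

def armijo_step(f, grad_f, x, d, sigma=1e-4, beta=0.5, t0=1.0):
    """Armijo backtracking rule: shrink t until
    f(x + t*d) <= f(x) + sigma * t * grad_f(x)^T d."""
    g = grad_f(x)
    t = t0
    while f(x + t * d) > f(x) + sigma * t * g.dot(d):
        t *= beta
    return t

def descent(f, grad_f, x0, tol=1e-8, max_iter=1000):
    """Generic descent iteration with the Armijo rule.
    Here d = -grad f(x) (steepest descent), which is one
    example of a sufficient descent direction; any direction
    satisfying an angle condition with -grad f would fit."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        t = armijo_step(f, grad_f, x, d)
        x = x + t * d
    return x

# Example: a convex (hence quasiconvex) quadratic, minimizer at (1, 2).
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] - 2.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] - 2.0)])
x_star = descent(f, grad_f, np.zeros(2))
```

On this convex example the whole sequence of iterates converges to the unique minimizer, consistent with the note's result for quasiconvex objectives.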


References

  1. Iusem, A. N., and Svaiter, B. F., A Proximal Regularization of the Steepest Descent Method, RAIRO-Recherche Opérationnelle, Vol. 29, pp. 123–130, 1995.

  2. Armijo, L., Minimum of Functions Having Lipschitz-Continuous First Partial Derivatives, Pacific Journal of Mathematics, Vol. 16, pp. 1–3, 1966.

  3. Wolfe, P., Convergence Conditions for Ascent Methods, SIAM Review, Vol. 11, pp. 226–235, 1969.



Cite this article

Dussault, J.P. Convergence of Implementable Descent Algorithms for Unconstrained Optimization. Journal of Optimization Theory and Applications 104, 739–745 (2000). https://doi.org/10.1023/A:1004602012151
