
Stepsize analysis for descent methods

  • Contributed Papers

Journal of Optimization Theory and Applications

Abstract

The convergence rates of descent methods with different stepsize rules are compared. Among the stepsize rules considered are: constant stepsize, exact minimization along a line, the Goldstein-Armijo rules, and the stepsize that yields the minimum of certain interpolatory polynomials. One of the major results is that the rate of convergence of descent methods with the Goldstein-Armijo stepsize rules can be made as close as desired to the rate of convergence of methods that require exact minimization along a line. In addition, a descent algorithm that combines a Goldstein-Armijo stepsize rule with a secant-type step is presented. This algorithm is shown to have a convergence rate equal to that of descent methods requiring exact minimization along a line, and, eventually (i.e., near the minimum), it requires no search to determine an acceptable stepsize.
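The two ingredients of the combined algorithm are standard enough to sketch. Below is a minimal, illustrative Python sketch, not the paper's algorithm: an Armijo backtracking rule, a secant-type stepsize, and a steepest-descent loop that accepts the secant step when it passes the sufficient-decrease test and backtracks otherwise. All parameter values (sigma, beta, t0) and the quadratic test function are assumptions, not taken from the paper.

```python
import numpy as np

def armijo_stepsize(f, grad_f, x, d, t0=1.0, sigma=1e-4, beta=0.5):
    """Armijo backtracking: shrink t until the sufficient-decrease
    condition f(x + t d) <= f(x) + sigma * t * <grad f(x), d> holds."""
    fx, slope = f(x), grad_f(x) @ d   # slope < 0 for a descent direction
    t = t0
    while f(x + t * d) > fx + sigma * t * slope:
        t *= beta
    return t

def secant_stepsize(grad_f, x, d, t0=1e-3):
    """Secant-type step: one secant iteration toward the zero of
    phi'(t) = <grad f(x + t d), d>; exact for quadratics."""
    g0 = grad_f(x) @ d
    g1 = grad_f(x + t0 * d) @ d
    return t0 * g0 / (g0 - g1)

def descent(f, grad_f, x0, tol=1e-8, max_iter=500):
    """Steepest descent: try the secant step first, fall back to
    Armijo backtracking if it fails the sufficient-decrease test."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        t = secant_stepsize(grad_f, x, d)
        if not (t > 0 and f(x + t * d) <= f(x) + 1e-4 * t * (g @ d)):
            t = armijo_stepsize(f, grad_f, x, d)
        x = x + t * d
    return x

# Illustrative convex quadratic (not from the paper)
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
f = lambda x: 0.5 * x @ Q @ x
grad_f = lambda x: Q @ x
print(descent(f, grad_f, np.array([2.0, -1.5])))  # -> approximately [0, 0]
```

On a quadratic the secant step lands on the exact line minimizer and always satisfies the sufficient-decrease test, so no backtracking occurs; this mirrors the abstract's observation that, near the minimum, the combined algorithm requires no search for an acceptable stepsize.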

Additional information

Communicated by D. Q. Mayne

Cite this article

Cohen, A.I. Stepsize analysis for descent methods. J Optim Theory Appl 33, 187–205 (1981). https://doi.org/10.1007/BF00935546
