
An acceleration of gradient descent algorithm with backtracking for unconstrained optimization

Numerical Algorithms


In this paper we introduce an acceleration of the gradient descent algorithm with backtracking. The idea is to modify the steplength t_k by a positive parameter θ_k, in a multiplicative manner, so as to improve the behaviour of the classical gradient algorithm. It is shown that the resulting algorithm remains linearly convergent, but the reduction in function value is significantly improved.
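The scheme described in the abstract can be sketched as follows: run steepest descent with an Armijo backtracking line search, then multiply the accepted steplength t_k by a factor θ_k. The code below is a minimal illustration; the factor θ_k = a_k / b_k is a curvature-based choice in the spirit of the paper, built here from a gradient difference along the step, and is not necessarily the paper's exact formula.

```python
import numpy as np

def backtracking(f, x, fx, g, d, t0=1.0, beta=0.5, sigma=1e-4):
    """Armijo backtracking: halve t until f(x + t d) <= f(x) + sigma t g.d."""
    t = t0
    slope = g @ d  # directional derivative, negative for a descent direction
    while f(x + t * d) > fx + sigma * t * slope:
        t *= beta
    return t

def accelerated_gd(f, grad_f, x0, tol=1e-8, max_iter=1000):
    """Steepest descent with backtracking and a multiplicative
    acceleration factor theta_k applied to the steplength.

    theta_k is an illustrative curvature-based choice (theta_k = a_k / b_k),
    assumed here for the sketch rather than taken verbatim from the paper.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        fx = f(x)
        t = backtracking(f, x, fx, g, d)
        # Curvature estimate along the step: y ~ t * (Hessian @ d)
        y = grad_f(x + t * d) - g
        a_k = t * (g @ g)
        b_k = -t * (y @ g)           # > 0 when f is locally convex along d
        theta = a_k / b_k if b_k > 0 else 1.0
        x_acc = x + theta * t * d    # accelerated step: t_k <- theta_k * t_k
        # keep the accelerated point only if it does not increase f
        x = x_acc if f(x_acc) <= f(x + t * d) else x + t * d
    return x

# Ill-conditioned quadratic test problem: f(x) = 0.5 * x' A x, A = diag(1, 10)
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad_f = lambda x: A @ x
x_star = accelerated_gd(f, grad_f, [10.0, 1.0])
```

On a convex quadratic this choice of θ_k scales the backtracking step up to the exact minimizer along the search direction, which is the kind of per-iteration improvement in function value the abstract refers to.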




Author information

Correspondence to Neculai Andrei.


Cite this article

Andrei, N. An acceleration of gradient descent algorithm with backtracking for unconstrained optimization. Numer Algor 42, 63–73 (2006).
