Accelerated gradient descent methods with line search

  • Original Paper
  • Published in: Numerical Algorithms

Abstract

We introduce an algorithm for unconstrained optimization based on transforming the Newton method with line search into a gradient descent method. The main idea underlying the construction of the algorithm is the approximation of the Hessian by an appropriate diagonal matrix. The steplength calculation is based on a Taylor expansion at two successive iterative points, combined with a backtracking line search procedure. Linear convergence of the algorithm is proved for uniformly convex functions and for strictly convex quadratic functions satisfying specified conditions.
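To make the scheme concrete, the sketch below implements the general pattern the abstract describes: a gradient step scaled by a scalar (diagonal) approximation of the Hessian, with the steplength chosen by Armijo backtracking and the curvature estimate refitted from a second-order Taylor model at two successive iterates. This is a minimal sketch of that pattern; the function names (`backtracking`, `accelerated_gd`) and the specific update rule for `gamma` are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def backtracking(f, x, g, d, t0=1.0, beta=0.5, sigma=1e-4):
    """Armijo backtracking: shrink t until the sufficient-decrease test holds."""
    t, fx, slope = t0, f(x), np.dot(g, d)    # slope < 0 for a descent direction
    while f(x + t * d) > fx + sigma * t * slope and t > 1e-12:
        t *= beta
    return t

def accelerated_gd(f, grad, x0, tol=1e-6, max_iter=1000):
    """Gradient descent accelerated by a scalar Hessian approximation gamma*I.

    The step is x_{k+1} = x_k - (t_k / gamma_k) * g_k, and gamma is refitted
    after each step from the second-order Taylor model
        f(x_{k+1}) ~ f(x_k) + g_k^T s_k + 0.5 * gamma * ||s_k||^2,
    where s_k = x_{k+1} - x_k is the step actually taken.
    """
    x, gamma = np.asarray(x0, dtype=float), 1.0
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -g / gamma                       # Newton-like direction with Hessian gamma*I
        t = backtracking(f, x, g, d)
        x_new = x + t * d
        s = x_new - x
        # Solve the one-dimensional Taylor fit for the new curvature estimate.
        num = 2.0 * (f(x_new) - f(x) - np.dot(g, s))
        gamma = num / np.dot(s, s) if num > 0 else 1.0   # reset if curvature is non-positive
        x, g = x_new, grad(x_new)
    return x

# Example: minimize the convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(accelerated_gd(f, grad, np.zeros(2)))   # converges to A^{-1} b = [0.2, 0.4]
```

In the paper itself the acceleration parameter and the backtracking parameters are chosen so that linear convergence can be proved under the stated convexity assumptions; the toy quadratic above merely illustrates the mechanics.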



Author information

Corresponding author

Correspondence to Predrag S. Stanimirović.

About this article

Cite this article

Stanimirović, P.S., Miladinović, M.B. Accelerated gradient descent methods with line search. Numer Algor 54, 503–520 (2010). https://doi.org/10.1007/s11075-009-9350-8
