
Computational Optimization and Applications, Volume 38, Issue 3, pp 401–416

Scaled conjugate gradient algorithms for unconstrained optimization

  • Neculai Andrei

Abstract

In this work we present and analyze a new scaled conjugate gradient algorithm and its implementation, based on an interpretation of the secant equation and on the inexact Wolfe line search conditions. The spectral conjugate gradient algorithm SCG of Birgin and Martínez (2001), which is essentially a scaled variant of Perry's (1977) method, is modified so as to overcome the lack of positive definiteness of the matrix defining the search direction. This modification is based on the quasi-Newton BFGS updating formula. The computational scheme is embedded in the Beale–Powell restart philosophy. The parameter scaling the gradient is selected either as the spectral gradient or in an anticipative manner, by means of a formula using the function values at two successive points. Under very mild conditions it is shown that, for strongly convex functions, the algorithm is globally convergent. Preliminary computational results on a set of 500 unconstrained optimization test problems show that the new scaled conjugate gradient algorithm substantially outperforms the spectral conjugate gradient algorithm SCG.
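To make the scheme concrete, the following Python sketch illustrates the general family of methods the abstract describes: a spectral (Barzilai–Borwein) scaling of the gradient combined with a Perry/Birgin–Martínez-type conjugate gradient direction, a Wolfe line search, and a restart when descent is lost. This is a minimal sketch under those assumptions, not Andrei's SCALCG implementation: it omits the BFGS-based positive-definiteness modification and the full Beale–Powell restart tests, and the names scaled_cg, theta, and beta are ours for illustration.

```python
import numpy as np
from scipy.optimize import line_search

def scaled_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Sketch of a scaled (spectral) conjugate gradient iteration.

    Illustrative only -- not Andrei's SCALCG algorithm. It combines a
    spectral (Barzilai-Borwein) scaling parameter theta, a
    Perry/Birgin-Martinez-type direction, a Wolfe line search, and a
    steepest-descent restart when descent is lost.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g, np.inf) <= tol:
            break
        # scipy's line_search enforces the (strong) Wolfe conditions
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:                   # line search failed: restart with
            d, alpha = -g, 1e-4             # a small steepest-descent step
        s = alpha * d                       # step s_k = x_{k+1} - x_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                       # gradient difference y_k
        sy = s @ y                          # s_k^T y_k > 0 under Wolfe conditions
        # spectral scaling parameter theta = s^T s / s^T y (Barzilai-Borwein)
        theta = (s @ s) / sy if sy > 1e-12 else 1.0
        # scaled direction in the Birgin-Martinez (Perry-type) form
        beta = ((theta * y - s) @ g_new) / sy if sy > 1e-12 else 0.0
        d = -theta * g_new + beta * s
        if d @ g_new >= 0:                  # not a descent direction: restart
            d = -theta * g_new
        x, g = x_new, g_new
    return x

# example: minimize the 10-dimensional Rosenbrock function
from scipy.optimize import rosen, rosen_der
x_star = scaled_cg(rosen, rosen_der, np.zeros(10))
```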

Keywords

Unconstrained optimization · Conjugate gradient method · Spectral gradient method · BFGS formula · Numerical comparisons


References

  1. Andrei, N.: A new gradient descent method for unconstrained optimization. ICI Technical Report, March 2004
  2. Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal. 8, 141–148 (1988)
  3. Birgin, E., Martínez, J.M.: A spectral conjugate gradient method for unconstrained optimization. Appl. Math. Optim. 43, 117–128 (2001)
  4. Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, P.L.: CUTE: constrained and unconstrained testing environments. ACM Trans. Math. Softw. 21, 123–160 (1995)
  5. Cauchy, A.: Méthode générale pour la résolution des systèmes d'équations simultanées. C. R. Acad. Sci. Paris 25, 536–538 (1847)
  6. Dai, Y.H., Liao, L.Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43, 87–101 (2001)
  7. Fletcher, R.: On the Barzilai–Borwein method. Numerical Analysis Report NA/207 (2001)
  8. Fletcher, R., Reeves, C.M.: Function minimization by conjugate gradients. Comput. J. 7, 149–154 (1964)
  9. Golub, G.H., O'Leary, D.P.: Some history of the conjugate gradient and Lanczos algorithms: 1948–1976. SIAM Rev. 31, 50–102 (1989)
  10. Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005)
  11. Hager, W.W., Zhang, H.: A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2, 35–58 (2006)
  12. Hestenes, M.R., Stiefel, E.: Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand. 49, 409–436 (1952)
  13. Perry, J.M.: A class of conjugate gradient algorithms with a two step variable metric memory. Discussion Paper 269, Center for Mathematical Studies in Economics and Management Science, Northwestern University (1977)
  14. Polak, E., Ribière, G.: Note sur la convergence de méthodes de directions conjuguées. Rev. Française Inform. Rech. Opér. 16, 35–43 (1969)
  15. Powell, M.J.D.: Restart procedures for the conjugate gradient method. Math. Program. 12, 241–254 (1977)
  16. Raydan, M.: The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem. SIAM J. Optim. 7, 26–33 (1997)
  17. Shanno, D.F.: Conjugate gradient methods with inexact searches. Math. Oper. Res. 3, 244–256 (1978)
  18. Shanno, D.F.: On the convergence of a new conjugate gradient algorithm. SIAM J. Numer. Anal. 15, 1247–1257 (1978)
  19. Shanno, D.F., Phua, K.H.: Algorithm 500: Minimization of unconstrained multivariate functions [E4]. ACM Trans. Math. Softw. 2, 87–94 (1976)
  20. Wolfe, P.: Convergence conditions for ascent methods. SIAM Rev. 11, 226–235 (1969)
  21. Wolfe, P.: Convergence conditions for ascent methods. II: Some corrections. SIAM Rev. 13, 185–188 (1971)

Copyright information

© Springer Science+Business Media, LLC 2007

Authors and Affiliations

  1. Center for Advanced Modeling and Optimization, Research Institute for Informatics, Bucharest, Romania
