Accelerated hybrid conjugate gradient algorithm with modified secant condition for unconstrained optimization

Numerical Algorithms

Abstract

This paper presents an accelerated hybrid conjugate gradient algorithm. The parameter \(\beta_k\) is computed as a convex combination of \(\beta_k^{HS}\) (Hestenes and Stiefel, J Res Natl Bur Stand 49:409–436, 1952) and \(\beta_k^{DY}\) (Dai and Yuan, SIAM J Optim 10:177–182, 1999), i.e. \(\beta_k^C = (1-\theta_k)\beta_k^{HS} + \theta_k\beta_k^{DY}\). The parameter \(\theta_k\) in the convex combination is computed in such a way that the direction of the conjugate gradient algorithm equals the best direction we know, the Newton direction, while the pair \((s_k, y_k)\) satisfies the modified secant condition given by Li et al. (J Comput Appl Math 202:523–539, 2007), \(B_{k+1}s_k = z_k\), where \(z_k = y_k + (\eta_k/\|s_k\|^2)s_k\), \(\eta_k = 2(f_k - f_{k+1}) + (g_k + g_{k+1})^T s_k\), \(s_k = x_{k+1} - x_k\), and \(y_k = g_{k+1} - g_k\). It is shown that, both for uniformly convex functions and for general nonlinear functions, the algorithm with a strong Wolfe line search is globally convergent. The algorithm uses an acceleration scheme that modifies the steplength \(\alpha_k\) to improve the reduction of the function values along the iterations. Numerical comparisons show that this hybrid computational scheme outperforms a variant of the hybrid conjugate gradient algorithm given by Andrei (Numer Algorithms 47:143–156, 2008), in which the pair \((s_k, y_k)\) satisfies the classical secant condition \(B_{k+1}s_k = y_k\), as well as several other conjugate gradient algorithms, including Hestenes-Stiefel, Dai-Yuan, Polak-Ribière-Polyak, Liu-Storey, hybrid Dai-Yuan, and Gilbert-Nocedal. A set of 75 unconstrained optimization problems, each in 10 different dimensions, is used (Andrei, Adv Model Optim 10:147–161, 2008).
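To make the quantities in the abstract concrete, the following NumPy sketch evaluates \(\beta_k^{HS}\), \(\beta_k^{DY}\), the convex combination \(\beta_k^C\), and the modified secant vector \(z_k\) from the pair \((s_k, y_k)\). It is an illustration only: the function name is hypothetical, and \(\theta_k\) is taken as a plain input clipped to [0, 1], whereas the paper derives it in closed form from the Newton-direction condition together with the modified secant pair \((s_k, z_k)\).

```python
import numpy as np

def hybrid_beta_sketch(gk, gk1, sk, fk, fk1, theta_k):
    """Illustrative beta_k^C = (1 - theta_k) beta_k^HS + theta_k beta_k^DY.

    gk, gk1  -- gradients g_k and g_{k+1}
    sk       -- step s_k = x_{k+1} - x_k
    fk, fk1  -- function values f_k and f_{k+1}
    theta_k  -- convex-combination weight (treated as an input here;
                the paper computes it from the Newton-direction condition)
    """
    yk = gk1 - gk                      # y_k = g_{k+1} - g_k
    curv = yk @ sk                     # y_k^T s_k, positive under strong Wolfe
    beta_hs = (gk1 @ yk) / curv        # Hestenes-Stiefel
    beta_dy = (gk1 @ gk1) / curv       # Dai-Yuan

    # Modified secant vector of Li et al. (2007), as stated in the abstract:
    #   z_k = y_k + (eta_k / ||s_k||^2) s_k,
    #   eta_k = 2 (f_k - f_{k+1}) + (g_k + g_{k+1})^T s_k.
    # In the paper z_k enters the derivation of theta_k; it is returned
    # here only for inspection.
    eta_k = 2.0 * (fk - fk1) + (gk + gk1) @ sk
    zk = yk + (eta_k / (sk @ sk)) * sk

    theta_k = min(max(theta_k, 0.0), 1.0)   # keep the combination convex
    return (1.0 - theta_k) * beta_hs + theta_k * beta_dy, zk
```

The search direction then follows the usual recurrence \(d_{k+1} = -g_{k+1} + \beta_k^C s_k\), with \(\alpha_k\) chosen by a strong Wolfe line search and, in the paper, rescaled by the acceleration step.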

References

  1. Andrei, N.: Numerical comparison of conjugate gradient algorithms for unconstrained optimization. Stud. Inform. Control 16, 333–352 (2007)

  2. Andrei, N.: An unconstrained optimization test functions collection. Adv. Model. Optim. 10, 147–161 (2008)

  3. Andrei, N.: Another hybrid conjugate gradient algorithm for unconstrained optimization. Numer. Algorithms 47, 143–156 (2008)

  4. Andrei, N.: A hybrid conjugate gradient algorithm for unconstrained optimization. J. Optim. Theory Appl. 141, 249–264 (2009)

  5. Andrei, N.: Performance profiles of conjugate gradient algorithms for unconstrained optimization. In: Floudas, C.A., Pardalos, P.M. (eds.) Encyclopedia of Optimization, 2nd edn., vol. P, pp. 2938–2953. Springer, New York (2009)

  6. Andrei, N.: Acceleration of conjugate gradient algorithms for unconstrained optimization. Appl. Math. Comput. 213, 361–369 (2009)

  7. Aris, R.: The Mathematical Theory of Diffusion and Reaction in Permeable Catalysts. Clarendon Press, Oxford (1975)

  8. Averick, B.M., Carter, R.G., Moré, J.J., Xue, G.L.: The MINPACK-2 Test Problem Collection. Mathematics and Computer Science Division, Argonne National Laboratory, Preprint MCS-P153-0692 (1992)

  9. Bongartz, I., Conn, A.R., Gould, N.I.M., Toint, P.L.: CUTE: constrained and unconstrained testing environments. ACM Trans. Math. Softw. 21, 123–160 (1995)

  10. Cimatti, G.: On a problem of the theory of lubrication governed by a variational inequality. Appl. Math. Optim. 3, 227–242 (1977)

  11. Dai, Y.H.: New properties of a nonlinear conjugate gradient method. Numer. Math. 89, 83–98 (2001)

  12. Dai, Y.H., Liao, L.Z.: New conjugacy conditions and related nonlinear conjugate gradient methods. Appl. Math. Optim. 43, 87–101 (2001)

  13. Dai, Y.H., Yuan, Y.: A nonlinear conjugate gradient method with a strong global convergence property. SIAM J. Optim. 10, 177–182 (1999)

  14. Dai, Y.H., Yuan, Y.: An efficient hybrid conjugate gradient method for unconstrained optimization. Ann. Oper. Res. 103, 33–47 (2001)

  15. Dai, Y.H., Han, J.Y., Liu, G.H., Sun, D.F., Yin, X., Yuan, Y.: Convergence properties of nonlinear conjugate gradient methods. SIAM J. Optim. 10, 348–358 (1999)

  16. Daniel, J.W.: The conjugate gradient method for linear and nonlinear operator equations. SIAM J. Numer. Anal. 4, 10–26 (1967)

  17. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)

  18. Fletcher, R.: Practical Methods of Optimization, vol. 1: Unconstrained Optimization. Wiley, New York (1987)

  19. Fletcher, R., Reeves, C.: Function minimization by conjugate gradients. Comput. J. 7, 149–154 (1964)

  20. Gilbert, J.C., Nocedal, J.: Global convergence properties of conjugate gradient methods for optimization. SIAM J. Optim. 2, 21–42 (1992)

  21. Glowinski, R.: Numerical Methods for Nonlinear Variational Problems. Springer, Berlin (1984)

  22. Goodman, J., Kohn, R., Reyna, L.: Numerical study of a relaxed variational problem from optimal design. Comput. Methods Appl. Mech. Eng. 57, 107–127 (1986)

  23. Hager, W.W., Zhang, H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16, 170–192 (2005)

  24. Hager, W.W., Zhang, H.: A survey of nonlinear conjugate gradient methods. Pacific J. Optim. 2, 35–58 (2006)

  25. Hestenes, M.R., Stiefel, E.L.: Methods of conjugate gradients for solving linear systems. J. Res. Natl. Bur. Stand. 49, 409–436 (1952)

  26. Hu, Y.F., Storey, C.: Global convergence result for conjugate gradient methods. J. Optim. Theory Appl. 71, 399–405 (1991)

  27. Li, G., Tang, C., Wei, Z.: New conjugacy condition and related new conjugate gradient methods for unconstrained optimization. J. Comput. Appl. Math. 202, 523–539 (2007)

  28. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45, 503–528 (1989)

  29. Liu, Y., Storey, C.: Efficient generalized conjugate gradient algorithms, Part 1: Theory. J. Optim. Theory Appl. 69, 129–137 (1991)

  30. Nitsche, J.C.C.: Lectures on Minimal Surfaces, vol. 1. Cambridge University Press (1989)

  31. Nocedal, J.: Conjugate gradient methods and nonlinear optimization. In: Adams, L., Nazareth, J.L. (eds.) Linear and Nonlinear Conjugate Gradient—Related Methods, pp. 9–23. SIAM, Philadelphia (1996)

  32. Perry, A.: A modified conjugate gradient algorithm. Discussion Paper no. 229, Center for Mathematical Studies in Economics and Management Science, Northwestern University (1976)

  33. Polak, E., Ribière, G.: Note sur la convergence de méthodes de directions conjuguées. Rev. Française Informat. Recherche Opérationnelle 3(16), 35–43 (1969)

  34. Polyak, B.T.: The conjugate gradient method in extreme problems. USSR Comput. Math. Math. Phys. 9, 94–112 (1969)

  35. Powell, M.J.D.: Restart procedures for the conjugate gradient method. Math. Program. 12, 241–254 (1977)

  36. Powell, M.J.D.: Nonconvex minimization calculations and the conjugate gradient method. In: Numerical Analysis (Dundee, 1983). Lecture Notes in Mathematics, vol. 1066, pp. 122–141. Springer, Berlin (1984)

  37. Touati-Ahmed, D., Storey, C.: Efficient hybrid conjugate gradient techniques. J. Optim. Theory Appl. 64, 379–397 (1990)

  38. Wolfe, P.: Convergence conditions for ascent methods. SIAM Rev. 11, 226–235 (1969)

  39. Zhang, J.Z., Deng, N.Y., Chen, L.H.: New quasi-Newton equation and related methods for unconstrained optimization. J. Optim. Theory Appl. 102, 147–167 (1999)


Author information

Correspondence to Neculai Andrei.

Cite this article

Andrei, N. Accelerated hybrid conjugate gradient algorithm with modified secant condition for unconstrained optimization. Numer Algor 54, 23–46 (2010). https://doi.org/10.1007/s11075-009-9321-0

