
A new variant of the memory gradient method for unconstrained optimization

  • Original Paper
  • Journal: Optimization Letters

Abstract

In this paper, we present a new memory gradient method whose search direction is a sufficient descent direction for the objective function at every iteration. We then analyze its global convergence under mild conditions and establish its convergence rate for uniformly convex functions. Finally, we report numerical results that demonstrate the efficiency of the proposed method.



Author information

Corresponding author: Zhongping Wan.


About this article

Cite this article

Zheng, Y., Wan, Z. A new variant of the memory gradient method for unconstrained optimization. Optim Lett 6, 1643–1655 (2012). https://doi.org/10.1007/s11590-011-0355-6
