A modified PRP conjugate gradient algorithm with nonmonotone line search for nonsmooth convex optimization problems

  • Gonglin Yuan
  • Zengxin Wei
Original Research

Abstract

It is well known that nonlinear conjugate gradient (CG) methods are preferred for solving large-scale smooth optimization problems because of their simplicity and low storage requirements. CG methods for nonsmooth optimization, however, have received little attention. In this paper, a modified Polak–Ribière–Polyak CG algorithm combined with a nonmonotone line search technique is proposed for nonsmooth convex minimization. The search direction of the given method not only possesses the sufficient descent property but also belongs to a trust region. Moreover, the search direction uses not only gradient information but also function value information. The global convergence of the presented algorithm is established under suitable conditions. Numerical results show that the given method is competitive with three other methods.
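The two ingredients named in the abstract, a Polak–Ribière–Polyak search direction and a nonmonotone (Grippo–Lampariello–Lucidi-type) line search, can be sketched generically as follows. This is an illustrative Python sketch for the smooth case only, not the paper's algorithm: the paper's direction additionally embeds function-value information and a trust-region bound, and its analysis targets nonsmooth convex objectives.

```python
import numpy as np
from collections import deque

def prp_nonmonotone_cg(f, grad, x0, M=5, sigma=1e-4, rho=0.5,
                       tol=1e-6, max_iter=1000):
    """Generic PRP conjugate gradient iteration with a
    nonmonotone Armijo line search (accept a step if it improves
    on the *maximum* of the last M function values, not just the
    most recent one)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    recent_f = deque([f(x)], maxlen=M)  # sliding window of f-values
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        gTd = g @ d
        if gTd >= 0:          # safeguard: restart with steepest descent
            d = -g
            gTd = g @ d
        # Nonmonotone Armijo backtracking against the window maximum.
        f_ref = max(recent_f)
        alpha = 1.0
        while f(x + alpha * d) > f_ref + sigma * alpha * gTd:
            alpha *= rho
            if alpha < 1e-16:
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        # PRP coefficient, truncated at zero (the PRP+ safeguard
        # of Gilbert and Nocedal).
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        x, g = x_new, g_new
        recent_f.append(f(x))
    return x
```

The nonmonotone window (`max(recent_f)` rather than the latest value) is what allows occasional increases in the objective, which can help the iteration escape narrow valleys without sacrificing global convergence.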

Keywords

Nonsmooth convex minimization · Conjugate gradient · Nonmonotone technique · Global convergence

Mathematics Subject Classification

65K05 · 90C30

Acknowledgments

We would like to thank the anonymous referees for catching several typos of the paper, and their useful suggestions and comments which improved the paper greatly. This work is supported by the Guangxi NSF (Grant No. 2012GXNSFAA053002) and the China NSF (Grant Nos. 11261006 and 11161003).

Copyright information

© Korean Society for Computational and Applied Mathematics 2015

Authors and Affiliations

  1. The Guangxi Colleges and Universities Key Laboratory of Mathematics and Its Applications, Department of Mathematics and Information Science, Guangxi University, Nanning, People's Republic of China