Abstract
In this paper, we present a unified convergence analysis of a second-order method of multipliers (i.e., a second-order augmented Lagrangian method) for solving nonlinear conic optimization problems. Specifically, the algorithm under investigation incorporates a specially designed nonsmooth (generalized) Newton step to furnish a second-order update rule for the multipliers. We first show, in a unified fashion, that under a few abstract assumptions the proposed method is locally convergent and possesses a (nonasymptotic) superlinear convergence rate, even when the penalty parameter is fixed and/or strict complementarity fails. We then demonstrate that, in three typical scenarios, namely classic nonlinear programming, nonlinear second-order cone programming, and nonlinear semidefinite programming, these abstract assumptions are exactly the implications of the well-known sufficient conditions assumed for establishing the Q-linear convergence rate of the method of multipliers without strict complementarity.
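To make the multiplier update concrete, the following is a minimal sketch, not the paper's algorithm (which treats general conic constraints), of a second-order multiplier update on a toy equality-constrained quadratic program. It contrasts the classical first-order rule, lambda+ = lambda + sigma*c(x), with a Newton step on the dual, lambda+ = lambda + (A H^{-1} A^T)^{-1} c(x), where H is the Hessian of the augmented Lagrangian and A the constraint Jacobian. All names (`solve_primal`, `sigma`, etc.) are illustrative, and the penalty parameter is kept fixed, as in the setting the abstract describes.

```python
import numpy as np

# Toy problem:  min  x1^2 + x2^2   s.t.  c(x) = x1 + x2 - 1 = 0.
# Its solution is x* = (0.5, 0.5) with multiplier lambda* = -1.
a = np.array([1.0, 1.0])   # constraint gradient: c(x) = a @ x - 1
sigma = 1.0                # penalty parameter, kept FIXED across iterations

def c(x):
    return a @ x - 1.0

def solve_primal(lam):
    """Minimize the augmented Lagrangian
       L(x) = |x|^2 + lam*c(x) + (sigma/2)*c(x)^2
    exactly (it is a convex quadratic in x here).
    Stationarity, 2x + (lam + sigma*c(x))*a = 0, rearranges to
    the linear system H x = (sigma - lam) a."""
    H = 2.0 * np.eye(2) + sigma * np.outer(a, a)
    x = np.linalg.solve(H, (sigma - lam) * a)
    return x, H

lam = 0.0
for _ in range(5):
    x, H = solve_primal(lam)
    # Second-order (Newton) multiplier update:
    #   lam+ = lam + (A H^{-1} A^T)^{-1} c(x),
    # versus the first-order rule lam+ = lam + sigma * c(x).
    S = a @ np.linalg.solve(H, a)   # scalar A H^{-1} A^T for this 1-constraint toy
    lam = lam + c(x) / S

x, _ = solve_primal(lam)
print(x, lam)   # -> [0.5 0.5] -1.0
```

On this quadratic toy problem the dual function is itself quadratic, so a single Newton step recovers the exact multiplier; on a general nonlinear problem the update yields the kind of local superlinear behavior the paper analyzes.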
Acknowledgements
The first author was supported by the National Natural Science Foundation of China (Grant No. 11801158), the Hunan Provincial Natural Science Foundation of China (Grant No. 2019JJ50040), and the Fundamental Research Funds for the Central Universities in China. The third author was supported by the National Natural Science Foundation of China (Grant No. 11871002) and the General Program of Science and Technology of Beijing Municipal Education Commission (Grant No. KM201810005004). The authors thank Professor Liwei Zhang of the Dalian University of Technology for pointing out a problem in an early version of this work. The authors also thank the anonymous referees for their comments and suggestions, which were very helpful in improving the quality of this paper.
Cite this article
Chen, L., Zhu, J. & Zhao, X. Unified convergence analysis of a second-order method of multipliers for nonlinear conic programming. Sci. China Math. 65, 2397–2422 (2022). https://doi.org/10.1007/s11425-021-1920-5
Keywords
- second-order method of multipliers
- augmented Lagrangian method
- convergence rate
- generalized Newton method
- second-order cone programming
- semidefinite programming