Abstract
The objective of this paper is to conduct a theoretical study of the convergence properties of a second-order augmented Lagrangian method for solving nonlinear programming problems with both equality and inequality constraints. Specifically, we use a specially designed generalized Newton method to furnish the second-order update of the multipliers, and we show that, when the linear independence constraint qualification and the strong second-order sufficient condition hold, the method is locally convergent with a superlinear rate, even if the penalty parameter is fixed and/or strict complementarity fails.
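For context, the second-order method studied here refines the classical first-order (Hestenes–Powell) augmented Lagrangian scheme, in which an inner minimization of the augmented Lagrangian alternates with a multiplier update under a fixed penalty parameter. The following is a minimal sketch of that classical scheme on a toy equality-constrained problem; the toy objective, constraint, and all parameter values are illustrative assumptions, not taken from the paper, and the inner solve uses plain gradient descent rather than the paper's generalized Newton step.

```python
import numpy as np

# Toy problem (illustrative, not from the paper):
#   minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0
# Known solution: x* = (0.5, 0.5), multiplier lambda* = -1.

def f_grad(x):
    return 2.0 * x                      # gradient of f

def h(x):
    return x[0] + x[1] - 1.0            # equality constraint value

h_grad = np.array([1.0, 1.0])           # constant gradient of h

def augmented_lagrangian(c=10.0, outer_iters=30, inner_iters=500, lr=0.02):
    """First-order augmented Lagrangian method with FIXED penalty c."""
    x = np.zeros(2)
    lam = 0.0
    for _ in range(outer_iters):
        # Inner loop: approximately minimize
        #   L_c(x, lam) = f(x) + lam*h(x) + (c/2)*h(x)^2
        # by gradient descent.
        for _ in range(inner_iters):
            grad = f_grad(x) + (lam + c * h(x)) * h_grad
            x = x - lr * grad
        # First-order (Hestenes-Powell) multiplier update.
        lam = lam + c * h(x)
    return x, lam

x, lam = augmented_lagrangian()
```

With the penalty parameter held fixed, this first-order update converges only linearly; the paper's contribution is a second-order multiplier update, built from a generalized Newton method, that achieves superlinear convergence in the same fixed-penalty setting.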
Acknowledgments
The authors would like to thank Professor Defeng Sun at National University of Singapore for discussions on topics covered in this paper. This research was supported by the National Natural Science Foundation of China (Grant No. 11271117). The research of the first author was supported by the Chinese government CSC scholarship while visiting the National University of Singapore.
Cite this article
Chen, L., Liao, A. On the Convergence Properties of a Second-Order Augmented Lagrangian Method for Nonlinear Programming Problems with Inequality Constraints. J Optim Theory Appl 187, 248–265 (2020). https://doi.org/10.1007/s10957-015-0842-5
Keywords
- Second-order augmented Lagrangian method
- Nonlinear programming
- Generalized Newton method
- Nonsmooth analysis