On the Convergence Properties of a Second-Order Augmented Lagrangian Method for Nonlinear Programming Problems with Inequality Constraints

Published in: Journal of Optimization Theory and Applications

Abstract

This paper presents a theoretical study of the convergence properties of a second-order augmented Lagrangian method for solving nonlinear programming problems with both equality and inequality constraints. Specifically, we use a specially designed generalized Newton method to furnish the second-order iteration of the multipliers, and we show that when the linear independence constraint qualification and the strong second-order sufficient condition hold, the method is locally convergent with a superlinear rate, even if the penalty parameter is fixed and/or strict complementarity fails.
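For orientation, the overall update structure of a classical augmented Lagrangian (method-of-multipliers) scheme for inequality constraints can be sketched as below. This is an illustrative sketch, not the paper's algorithm: it uses the standard first-order multiplier update λ ← max(0, λ + c·g(x)), whereas the paper replaces exactly this step with a generalized Newton iteration on the multipliers to obtain superlinear convergence. The test problem, step size, and iteration counts are all assumed for illustration.

```python
import numpy as np

# Illustrative problem:  min (x1-1)^2 + (x2-2)^2  s.t.  g(x) = x1 + x2 - 1 <= 0
# (solution x* = (0, 1) with multiplier lam* = 2; chosen only to make the
# sketch runnable, not taken from the paper)
def f_grad(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

def g(x):
    return x[0] + x[1] - 1.0

g_grad = np.array([1.0, 1.0])  # gradient of the (linear) constraint

def aug_lag_grad(x, lam, c):
    # gradient in x of L_c(x, lam) = f(x) + (max(0, lam + c*g(x))^2 - lam^2)/(2c)
    return f_grad(x) + max(0.0, lam + c * g(x)) * g_grad

def augmented_lagrangian(c=10.0, outer_iters=30, inner_iters=500, step=0.02):
    x, lam = np.zeros(2), 0.0
    for _ in range(outer_iters):
        # inexact inner minimization of L_c(., lam) by plain gradient descent
        for _ in range(inner_iters):
            x = x - step * aug_lag_grad(x, lam, c)
        # classical first-order multiplier update; the paper's method replaces
        # this step with a generalized Newton iteration on the multipliers
        lam = max(0.0, lam + c * g(x))
    return x, lam

x_star, lam_star = augmented_lagrangian()  # x_star ≈ (0, 1), lam_star ≈ 2
```

Note that the penalty parameter `c` stays fixed throughout, matching the setting analyzed in the paper; the first-order update above converges only linearly in that regime, which is precisely what the second-order multiplier iteration is designed to improve.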



Acknowledgments

The authors would like to thank Professor Defeng Sun at National University of Singapore for discussions on topics covered in this paper. This research was supported by the National Natural Science Foundation of China (Grant No. 11271117). The research of the first author was supported by the Chinese government CSC scholarship while visiting the National University of Singapore.


Corresponding author

Correspondence to Liang Chen.


Cite this article

Chen, L., Liao, A. On the Convergence Properties of a Second-Order Augmented Lagrangian Method for Nonlinear Programming Problems with Inequality Constraints. J Optim Theory Appl 187, 248–265 (2020). https://doi.org/10.1007/s10957-015-0842-5

