
Convergence of augmented Lagrangian methods in extensions beyond nonlinear programming

  • Full Length Paper
  • Series A

Mathematical Programming

Abstract

The augmented Lagrangian method (ALM) is extended to a broader-than-ever setting of generalized nonlinear programming in convex and nonconvex optimization that is capable of handling many common manifestations of nonsmoothness. With the help of a recently developed sufficient condition for local optimality, it is shown to be derivable from the proximal point algorithm through a kind of local duality corresponding to an optimal solution and accompanying multiplier vector that furnish a local saddle point of the augmented Lagrangian. This approach leads to surprising insights into stepsize choices and new results on linear convergence that draw on recent advances in convergence properties of the proximal point algorithm. Local linear convergence is shown to be assured for a class of model functions that covers more territory than before.
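As background for the discussion of ALM and its connection to the proximal point algorithm, the classical augmented Lagrangian iteration for smooth equality constraints can be sketched numerically. The toy problem, penalty parameter, and gradient-descent inner solver below are illustrative assumptions, not taken from the paper, which treats a far more general setting.

```python
# Classical ALM for min f(x) s.t. c(x) = 0: minimize the augmented Lagrangian
# L_r(x, y) = f(x) + y*c(x) + (r/2)*c(x)^2 in x, then update y <- y + r*c(x).
# Toy problem: min x1^2 + x2^2 s.t. x1 + x2 = 1, whose solution is
# x = (0.5, 0.5) with multiplier y = -1.

def alm(x, y, r=10.0, outer=30, inner=300, lr=0.02):
    for _ in range(outer):
        for _ in range(inner):  # inner loop: gradient descent on L_r(., y)
            cv = x[0] + x[1] - 1.0                            # c(x)
            g = [2 * x[i] + (y + r * cv) for i in range(2)]   # grad_x L_r
            x = [x[i] - lr * g[i] for i in range(2)]
        y += r * (x[0] + x[1] - 1.0)                          # multiplier step
    return x, y

x, y = alm([0.0, 0.0], 0.0)
# x approaches (0.5, 0.5) and y approaches -1
```

For this quadratic problem the multiplier error contracts by the factor 1/(1+r) per outer iteration, which is one concrete instance of the stepsize-dependent linear rates the paper analyzes in general.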


Notes

  1. The definite difference between the two conditions is confirmed by the example that answers [30, Question 2].

  2. Although the statement of their convergence result [9, Theorem 5.3] explicitly assumes this, the authors say that their proof mostly goes through without it. For more, see the discussion in Sect. 5 after our Example 5.3.

  3. As shown in Example 1 of [30].

  4. When \(F({\bar{x}})\) isn’t at the apex of the constraint cone, i.e., \(F({\bar{x}})\ne 0\), as shown in Example 3 of [30]. For more on this condition see [9, Proposition 2.1] and the discussion after it.

  5. It is the same as the modulus of strong convexity invoked in the assumption of strong variational convexity, as seen in the theorem’s proof in [30], although not brought out in the theorem’s statement.

  6. This is dual to the formula \(\varphi _{r}(x,\cdot ) =\varphi _{{\bar{r}}}(x,\cdot )+\frac{r-{\bar{r}}}{2}|\cdot |^2\) through the conjugacy of \(\varphi _{r}(x,\cdot )\) and \(\varphi _{{\bar{r}}}(x,\cdot )\) with the functions \(-l_r(x,\cdot )\) and \(-l_{{\bar{r}}}(x,\cdot )\) in the definition (1.3), along with the fact that addition of convex functions dualizes to infimal convolution [32, 11.23(a)].

  7. The strong convexity inequality \(l_{r_k}(x,y^k)\ge l_{r_k}(x^{k+1},y^k) + \nabla _x l_{r_k}(x^{k+1},y^k){\cdot }(x-x^{k+1})+\frac{s}{2}|x-x^{k+1}|^2\) leads to this by minimizing on both sides with respect to \(x\in {{\mathcal {X}}}\).

  8. In fact, to reach this conclusion it would suffice to assume the nonemptiness and boundedness of just one of the sets in (2.18) for \(y\in {\mathrm{int}}\,{{\mathcal {Y}}}\).

  9. Pennanen has an assumption in [15, Proposition 6] that is needed to get a rate of linear convergence, but that assumption is irrelevant to his proof of localization of the generated sequence.

  10. This is a special case of the minimax rule in [18, Theorem 37.3(b)].

  11. Specifically, it is shown that, by taking \(r_k\) large enough, it can be arranged with respect to the norm \(||(x,y)||=|x|+|y|\) that \(||(x^{k+1},y^{k+1})-({\bar{x}},{\bar{y}})||\le \frac{1}{2}||(x^k,y^k)-({\bar{x}},{\bar{y}})||\).
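The kind of contraction estimate quoted in note 11 can be seen in miniature in the proximal point algorithm itself: for a strongly monotone operator, each resolvent step shrinks the distance to the solution by a fixed factor that improves as the stepsize grows. The scalar operator T(z) = a*z and the parameter values below are illustrative assumptions, not taken from the paper.

```python
# Proximal point iteration z_{k+1} = (I + c*T)^{-1}(z_k) for the linear,
# strongly monotone operator T(z) = a*z with a > 0.  Each step contracts the
# distance to the fixed point 0 by exactly 1/(1 + c*a), so choosing the
# stepsize c large enough (here c*a >= 1) guarantees a ratio of at most 1/2.
def prox_point(a, c, z0, iters):
    zs = [z0]
    for _ in range(iters):
        zs.append(zs[-1] / (1.0 + c * a))  # resolvent of c*T in closed form
    return zs

zs = prox_point(a=2.0, c=1.0, z0=8.0, iters=5)
ratios = [zs[k + 1] / zs[k] for k in range(5)]
# every ratio equals 1/(1 + c*a) = 1/3, i.e. linear convergence at rate <= 1/2
```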

References

  1. Aragon, F.J., Geoffroy, M.H.: Characterization of metric regularity of subdifferential mappings. J. Convex Analysis 15, 365–380 (2008)

  2. Alizadeh, F., Goldfarb, D.: Second-order cone programming. Math. Programming 95, 3–51 (2003)

  3. Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press (1982)

  4. Bonnans, J.F., Ramirez, H.C.: Perturbation analysis of second-order cone programming problems. Math. Programming 104, 205–227 (2005)

  5. Buys, J.D.: Dual algorithms for constrained optimization. Thesis, Leiden (1972)

  6. Dontchev, A.D., Rockafellar, R.T.: Implicit Functions and Solution Mappings, 2nd edn. Springer Verlag (2014)

  7. Eckstein, J., Bertsekas, D.P.: On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Programming 55, 293–318 (1992)

  8. Fernandez, D., Solodov, M.V.: Local convergence of exact and inexact augmented Lagrangian methods under the second-order sufficient optimality condition. SIAM J. Optimization 22, 384–407 (2012)

  9. Hang, N.V.T., Mordukhovich, B.S., Sarabi, M.E.: Augmented Lagrangian method for second-order cone programs under second-order sufficiency. J. Global Optimization 82, 51–81 (2022)

  10. Hang, N.V.T., Sarabi, M.E.: Local convergence analysis of augmented Lagrangian methods for piecewise linear-quadratic composite optimization problems. Math. of Operations Research, to appear; arXiv: 2010.11379

  11. Haarhoff, P.C., Buys, J.D.: A new method for the optimization of a nonlinear function subject to nonlinear constraints. Computer J. 13, 178–184 (1970)

  12. Hestenes, M.: Multiplier and gradient methods. J. Optimization Theory Appl. 4, 303–320 (1969)

  13. Liu, Y.J., Zhang, L.: Convergence analysis of the augmented Lagrangian method for second-order cone optimization problems. Nonlinear Analysis 67, 1359–1373 (2007)

  14. Luque, F.J.: Asymptotic convergence analysis of the proximal point algorithm. SIAM J. Control Opt. 22, 277–293 (1984)

  15. Pennanen, T.: Local convergence of the proximal point algorithm and multiplier methods without monotonicity. Mathematics of Operations Research 27, 170–191 (2002)

  16. Poliquin, R.A., Rockafellar, R.T.: Tilt stability of a local minimum. SIAM J. Optimization 8, 287–299 (1998)

  17. Powell, M.J.D.: A method for nonlinear optimization in minimization problems. In: Optimization (R. Fletcher, ed.), Academic Press, 283–298 (1969)

  18. Rockafellar, R.T.: Convex Analysis. Princeton University Press (1970)

  19. Rockafellar, R.T.: New applications of duality in convex programming. Proc. 4th Conference on Probability, Brasov, Romania (1971). (This is the written version of a talk given at several conferences, including the 7th International Symposium on Mathematical Programming in The Hague, 1970.)

  20. Rockafellar, R.T.: A dual approach to solving nonlinear programming problems by unconstrained optimization. Math. Programming 5, 354–373 (1973)

  21. Rockafellar, R.T.: The multiplier method of Hestenes and Powell applied to convex programming. J. Optimization Theory 12, 555–562 (1973)

  22. Rockafellar, R.T.: Augmented Lagrange multiplier functions and duality in nonconvex programming. SIAM J. Control 12, 268–285 (1974)

  23. Rockafellar, R.T.: Conjugate Duality and Optimization, No. 16 in Conference Board of Math. Sciences Series, SIAM Publications (1974)

  24. Rockafellar, R.T.: Solving a nonlinear programming problem by way of a dual problem. Symposia Mathematica 19, 135–160 (1976)

  25. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Opt. 14, 877–898 (1976)

  26. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. of Operations Research 1, 97–116 (1976)

  27. Rockafellar, R.T.: Lagrange multipliers and optimality. SIAM Review 35, 183–238 (1993)

  28. Rockafellar, R.T.: Variational convexity and local monotonicity of subgradient mappings. Vietnam J. Math. 47, 547–561 (2019)

  29. Rockafellar, R.T.: Progressive decoupling of linkages in optimization and variational inequalities with elicitable convexity or monotonicity. Set-Valued and Variational Analysis 27, 863–893 (2019)

  30. Rockafellar, R.T.: Augmented Lagrangians and hidden convexity in sufficient conditions for local optimality. Math. Programming, published online January 2022; https://doi.org/10.1007/s10107-022-01768-w

  31. Rockafellar, R.T.: Advances in convergence and scope of the proximal point algorithm. J. Nonlinear and Convex Analysis 22, 2347–2375 (2021)

  32. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis, No. 317 in the series Grundlehren der Mathematischen Wissenschaften, Springer-Verlag (1997)

  33. Sun, D., Sun, J., Zhang, L.: The rate of convergence of the augmented Lagrangian method for nonlinear semidefinite programming. Math. Programming 114, 349–381 (2008)

Author information

Correspondence to R. Tyrrell Rockafellar.


About this article

Cite this article

Rockafellar, R.T. Convergence of augmented Lagrangian methods in extensions beyond nonlinear programming. Math. Program. 199, 375–420 (2023). https://doi.org/10.1007/s10107-022-01832-5
