A practical relative error criterion for augmented Lagrangians

  • Full Length Paper
  • Series A

Mathematical Programming

Abstract

This paper develops a new error criterion for the approximate minimization of augmented Lagrangian subproblems. This criterion is practical since it is readily testable given only a gradient (or subgradient) of the augmented Lagrangian. It is also “relative” in the sense of relative error criteria for proximal point algorithms: in particular, it uses a single relative tolerance parameter, rather than a summable parameter sequence. Our analysis first describes an abstract version of the criterion within Rockafellar’s general parametric convex duality framework, and proves a global convergence result for the resulting algorithm. Specializing this algorithm to a standard formulation of convex programming produces a version of the classical augmented Lagrangian method with a novel inexact solution condition for the subproblems. Finally, we present computational results drawn from the CUTE test set—including many nonconvex problems—indicating that the approach works well in practice.
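To make the setting concrete, the following Python sketch runs a classical method of multipliers for minimizing f(x) subject to g(x) = 0, solving each augmented Lagrangian subproblem only approximately, to an accuracy governed by a single relative tolerance sigma. The acceptance test used here (subproblem gradient norm at most sigma times the current constraint violation) is a simplified stand-in chosen for illustration, not the criterion developed in the paper, and every function and parameter name is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def inexact_alm(f, grad_f, g, jac_g, x0, lam0,
                c=10.0, sigma=0.1, tol=1e-8, max_outer=100):
    """Sketch of an augmented Lagrangian loop for min f(x) s.t. g(x) = 0,
    with inexact subproblem solves driven by one relative tolerance sigma.
    The inner stopping rule below is a simplified surrogate, NOT the
    paper's relative error criterion."""
    x = np.asarray(x0, dtype=float)
    lam = np.asarray(lam0, dtype=float)
    for _ in range(max_outer):
        # Augmented Lagrangian and its x-gradient for fixed (lam, c).
        def L(x_, lam=lam):
            gx = g(x_)
            return f(x_) + lam @ gx + 0.5 * c * (gx @ gx)
        def gradL(x_, lam=lam):
            return grad_f(x_) + jac_g(x_).T @ (lam + c * g(x_))
        # Inexact minimization: request a subproblem gradient norm that is
        # small only *relative* to the current infeasibility.
        inner_tol = max(sigma * np.linalg.norm(g(x)), tol)
        res = minimize(L, x, jac=gradL, method="L-BFGS-B",
                       options={"gtol": inner_tol})
        x = res.x
        gx = g(x)
        lam = lam + c * gx  # classical multiplier update
        # grad_x L_c(x, lam_old) = grad f(x) + Jg(x)^T lam_new, so
        # stationarity plus feasibility gives an approximate KKT test.
        kkt = max(np.linalg.norm(grad_f(x) + jac_g(x).T @ lam),
                  np.linalg.norm(gx))
        if kkt <= tol:
            break
    return x, lam

if __name__ == "__main__":
    # Toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1; solution (0.5, 0.5).
    f = lambda x: x @ x
    grad_f = lambda x: 2 * x
    g = lambda x: np.array([x[0] + x[1] - 1.0])
    jac_g = lambda x: np.array([[1.0, 1.0]])
    x, lam = inexact_alm(f, grad_f, g, jac_g, x0=[0.0, 0.0], lam0=[0.0])
    print(x, lam)  # approximately [0.5, 0.5] and multiplier [-1.0]
```

The point of the contrast drawn in the abstract is visible in the structure of this loop: the inner accuracy is tied multiplicatively to a quantity the outer iteration drives to zero, rather than to a prescribed summable sequence of tolerances.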

References

  1. Andreani R., Birgin E.G., Martínez J.M., Schuverdt M.L.: On augmented Lagrangian methods with general lower-level constraints. SIAM J. Optim. 18(4), 1286–1309 (2007)

  2. Andreani R., Birgin E.G., Martínez J.M., Schuverdt M.L.: Augmented Lagrangian methods under the constant positive linear dependence constraint qualification. Math. Program. 111(1–2), 5–32 (2008)

  3. Andreani R., Haeser G., Schuverdt M., Silva P.J.S.: A relaxed constant positive linear dependence constraint qualification and applications. Math. Program., published electronically, doi:10.1007/s10107-011-0456-0 (2011)

  4. Andreani R., Haeser G., Schuverdt M., Silva P.J.S.: Two new weak constraint qualifications and applications. Available at Optimization Online: http://www.optimization-online.org/DB_HTML/2011/07/3105.html (2011)

  5. Bertsekas D.P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York, NY (1982)

  6. Birgin E.G., Fernández D., Martínez J.M.: On the boundedness of penalty parameters in an augmented Lagrangian method with constrained subproblems. Optim. Meth. Softw. (2012, in press)

  7. Bongartz I., Conn A.R., Gould N., Toint P.L.: CUTE: constrained and unconstrained testing environment. ACM Trans. Math. Softw. 21(1), 123–160 (1995)

  8. Conn A.R., Gould N., Sartenaer A., Toint P.L.: Convergence properties of an augmented Lagrangian algorithm for optimization with a combination of general equality and linear constraints. SIAM J. Optim. 6(3), 674–703 (1996)

  9. Conn A.R., Gould N.I.M., Toint P.L.: A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds. SIAM J. Numer. Anal. 28(2), 545–572 (1991)

  10. Conn A.R., Gould N.I.M., Toint P.L.: LANCELOT: A Fortran package for Large-Scale Nonlinear Optimization (Release A). Springer, Berlin (1992)

  11. Dolan E.D., Moré J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)

  12. Eckstein J.: A practical general approximation criterion for methods of multipliers based on Bregman distances. Math. Program. 96(1), 61–86 (2003)

  13. Eckstein J., Silva P.J.S.: Proximal methods for nonlinear programming: double regularization and inexact subproblems. Comput. Optim. Appl. 46(2), 279–304 (2010)

  14. Fernández D., Solodov M.V.: Local convergence of exact and inexact augmented Lagrangian methods under the second-order sufficient optimality condition. Technical Report A677, Instituto Nacional de Matemática Pura e Aplicada (IMPA), Rio de Janeiro (2011)

  15. Friedlander M.P., Saunders M.A.: A globally convergent linearly constrained Lagrangian method for nonlinear optimization. SIAM J. Optim. 15(3), 863–897 (2005)

  16. Hager W.W., Zhang H.: ASA-CG source code. http://www.math.ufl.edu/~hager/papers/CG/

  17. Hager W.W., Zhang H.: A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 16(1), 170–192 (2005)

  18. Hager W.W., Zhang H.: A new active set algorithm for box constrained optimization. SIAM J. Optim. 17(2), 526–557 (2006)

  19. Jones E., Oliphant T., Peterson P., et al.: SciPy: open source scientific tools for Python. http://www.scipy.org/ (2001)

  20. Korpelevich G.M.: Extrapolation gradient methods and their relation to modified Lagrange functions. Ekonomika i Matematicheskie Metody 19(4), 694–703 (1983)

  21. Rockafellar R.T.: Local boundedness of nonlinear, monotone operators. Michigan Math. J. 16, 397–407 (1969)

  22. Rockafellar R.T.: Convex Analysis. Princeton University Press, Princeton, NJ (1970)

  23. Rockafellar R.T.: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 149, 75–88 (1970)

  24. Rockafellar R.T.: Conjugate Duality and Optimization. SIAM, Philadelphia (1974)

  25. Rockafellar R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1(2), 97–116 (1976)

  26. Rockafellar R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14(5), 877–898 (1976)

  27. Solodov M.V., Svaiter B.F.: A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator. Set-Valued Anal. 7(4), 323–345 (1999)

  28. Solodov M.V., Svaiter B.F.: A hybrid projection-proximal point algorithm. J. Convex Anal. 6(1), 59–70 (1999)

  29. Solodov M.V., Svaiter B.F.: An inexact hybrid generalized proximal point algorithm and some new results on the theory of Bregman functions. Math. Oper. Res. 25(2), 214–230 (2000)

  30. Spingarn J.E.: Partial inverse of a monotone operator. Appl. Math. Optim. 10(3), 247–265 (1983)

  31. van Rossum G., et al.: Python language website. http://www.python.org/

Author information

Corresponding author

Correspondence to Jonathan Eckstein.

Additional information

This material is based in part upon work supported by the National Science Foundation under Grant CCF-1115638. Jonathan Eckstein was also partially supported by Rutgers Business School Research Resources Committee grants. Paulo J. S. Silva was partially supported by CNPq (grants 303030/2007-0 and 474138/2008-9) and PRONEX–Optimization.

About this article

Cite this article

Eckstein, J., Silva, P.J.S. A practical relative error criterion for augmented Lagrangians. Math. Program. 141, 319–348 (2013). https://doi.org/10.1007/s10107-012-0528-9
