
The augmented Lagrangian method can approximately solve convex optimization with least constraint violation

  • Full Length Paper
  • Series B
  • Published in Mathematical Programming

Abstract

Many important practical optimization problems have feasible regions that are not known to be nonempty, and for such problems it is preferable to find optimizers of the objective function with the least constraint violation. A natural way to deal with these problems is to extend the nonlinear optimization problem to one that optimizes the objective function over the set of points with the least constraint violation; this leads to the study of the shifted problem. This paper focuses on the constrained convex optimization problem. A sufficient condition for the closedness of the set of feasible shifts is presented, and the continuity properties of the optimal value function and the solution mapping of the shifted problem are studied. Properties of the conjugate dual of the shifted problem are discussed through the relations between the dual function and the optimal value function. The solvability of the dual of the optimization problem with the least constraint violation is investigated. It is shown that, if the least violated shift lies in the domain of the subdifferential of the optimal value function, then this dual problem has an unbounded solution set. Under this condition, optimality conditions for the problem with the least constraint violation are established in terms of the augmented Lagrangian. It is shown that the augmented Lagrangian method generates a sequence of shifts converging to the least violated shift and a sequence of multipliers that is unbounded. Moreover, it is proved that the augmented Lagrangian method finds an approximate solution to the problem with the least constraint violation, with a linear rate of convergence under an error bound condition. The method is applied to an illustrative convex second-order cone constrained optimization problem with least constraint violation, and numerical results verify the theoretical results.
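The behavior described in the abstract can be seen on a toy instance. The sketch below is not the paper's algorithm or problem class; it applies the classical augmented Lagrangian method (multiplier update \(\lambda \leftarrow \lambda + r\,c(x)\)) to an assumed infeasible convex problem, min ½x² subject to the mutually inconsistent equalities x − 1 = 0 and x + 1 = 0. The constraint values c(x^k) converge to the least-violation shift (−1, 1), the iterate x^k solves the problem with least constraint violation, and the multipliers grow without bound, matching the qualitative behavior the abstract describes.

```python
import numpy as np

def c(x):
    """Stacked equality constraints x - 1 = 0 and x + 1 = 0 (jointly infeasible)."""
    return np.array([x - 1.0, x + 1.0])

def al_min(lam, r):
    """Exact minimizer in x of the augmented Lagrangian
    0.5*x**2 + lam @ c(x) + 0.5*r*||c(x)||**2, a one-dimensional quadratic."""
    # stationarity: x + lam[0] + lam[1] + r*(x - 1) + r*(x + 1) = 0
    return -(lam[0] + lam[1]) / (1.0 + 2.0 * r)

def augmented_lagrangian(r=10.0, iters=50):
    lam = np.zeros(2)
    for _ in range(iters):
        x = al_min(lam, r)
        lam = lam + r * c(x)   # classical multiplier update
    return x, c(x), lam

x, shift, lam = augmented_lagrangian()
# x tends to 0 (the least-violation solution), shift tends to (-1, 1),
# and ||lam|| keeps growing with the iteration count.
```

On this instance the least-violation shift is attained immediately because the augmented Lagrangian subproblem is solved exactly; the point of the sketch is that the multiplier sequence diverges even while the primal iterates behave well.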



Notes

  1. Theorem 10.1. A convex function f on \(\Re ^n\) is continuous relative to any relatively open convex set C in its effective domain, in particular relative to \(\mathrm{ri}\, (\mathrm{dom}\, f)\).

  2. Corollary 23.5.1. If f is a closed proper convex function, \(\partial f^*\) is the inverse of \(\partial f\) in the sense of multivalued mappings, i.e. \(x \in \partial f^*(x^*)\) if and only if \(x^*\in \partial f(x)\).

  3. Here \(\limsup \) stands for the outer limit of a sequence of sets from Chapter 4 of [23]:

    $$\begin{aligned} \limsup _{k \rightarrow +\infty } C^k=\Big \{z: \text{there exist a subsequence } N\subset \mathbf{N} \text{ and } z^k \in C^k \text{ for } k \in N \text{ such that } z^k{\mathop {\rightarrow }\limits ^{N}}z\Big \}. \end{aligned}$$
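As a quick numeric illustration of this definition (a toy example, not from the paper): take \(C^k = \{(-1)^k,\, 1/k\}\). Along the even subsequence the selection \((-1)^k\) converges to 1, along the odd subsequence to −1, and \(1/k \rightarrow 0\), so the outer limit is \(\{-1, 0, 1\}\). The detection-by-rounding scheme below is only a heuristic suited to this finite example.

```python
from collections import Counter

def C(k):
    """The toy set sequence C^k = {(-1)^k, 1/k}."""
    return [(-1.0) ** k, 1.0 / k]

# Round elements of the tail sets and keep values hit in many of them:
# these are the subsequential limits, i.e. the points of the outer limit.
counts = Counter(round(z, 2) for k in range(1000, 2000) for z in C(k))
outer_limit = sorted(p for p, n in counts.items() if n > 100)
# outer_limit == [-1.0, 0.0, 1.0]
```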

References

  1. Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York (1982)

  2. Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer, New York (2000)

  3. Burke, J.V., Curtis, F.E., Wang, H.: A Sequential Quadratic Optimization Algorithm with Rapid Infeasibility Detection. SIAM J. Optim. 24, 839–872 (2014)

  4. Byrd, R.H., Curtis, F.E., Nocedal, J.: Infeasibility Detection and SQP Methods for Nonlinear Optimization. SIAM J. Optim. 20(5), 2281–2299 (2010)

  5. Censor, Y., Zaknoon, M., Zaslavski, A.J.: Data-Compatibility of Algorithms for Constrained Convex Optimization. J. Appl. Numer. Optim. 3(1), 21–41 (2021)

  6. Chiche, A., Gilbert, J.Ch.: How the Augmented Lagrangian Algorithm Can Deal with an Infeasible Convex Quadratic Optimization Problem. J. Convex Anal. 23(2), 425–459 (2016)

  7. Clarke, F.H.: Optimization and Nonsmooth Analysis. John Wiley and Sons, New York (1983)

  8. Combettes, P.L., Bondon, P.: Hard-Constrained Inconsistent Signal Feasibility Problems. IEEE Trans. Signal Process. 47, 2460–2468 (1999)

  9. Conn, A.R., Gould, N.I.M., Toint, Ph.L.: A Globally Convergent Augmented Lagrangian Algorithm for Optimization with General Constraints and Simple Bounds. SIAM J. Numer. Anal. 28, 545–572 (1991)

  10. Contesse-Becker, L.: Extended Convergence Results for the Method of Multipliers for Non-Strictly Binding Inequality Constraints. J. Optim. Theory Appl. 79, 273–310 (1993)

  11. Dai, Y.H., Liu, X.W., Sun, J.: A Primal-Dual Interior-Point Method Capable of Rapidly Detecting Infeasibility for Nonlinear Programs. J. Ind. Manag. Optim. 16(2), 1009–1035 (2020)

  12. Dai, Y.H., Zhang, L.W.: Optimization with Least Constraint Violation. CSIAM Trans. Appl. Math. 2(3), 551–584 (2021)

  13. Hestenes, M.R.: Multiplier and Gradient Methods. J. Optim. Theory Appl. 4, 303–320 (1969)

  14. Ito, K., Kunisch, K.: The Augmented Lagrangian Method for Equality and Inequality Constraints in Hilbert Spaces. Math. Program. 46, 341–360 (1990)

  15. Luque, F.J.: Asymptotic Convergence Analysis of the Proximal Point Algorithm. SIAM J. Control Optim. 22(2), 277–293 (1984)

  16. Luo, Z.Q., Pang, J.S., Ralph, D.: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, Cambridge (1996)

  17. Powell, M.J.D.: A Method for Nonlinear Constraints in Minimization Problems. In: Fletcher, R. (ed.) Optimization, pp. 283–298. Academic Press, New York (1969)

  18. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton, New Jersey (1970)

  19. Rockafellar, R.T.: A Dual Approach to Solving Nonlinear Programming Problems by Unconstrained Optimization. Math. Program. 5, 354–373 (1973)

  20. Rockafellar, R.T.: The Multiplier Method of Hestenes and Powell Applied to Convex Programming. J. Optim. Theory Appl. 12, 555–562 (1973)

  21. Rockafellar, R.T.: Monotone Operators and the Proximal Point Algorithm. SIAM J. Control Optim. 14, 877–898 (1976)

  22. Rockafellar, R.T.: Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming. Math. Oper. Res. 1, 97–116 (1976)

  23. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer-Verlag, New York (1998)

  24. Sun, D.F., Sun, J., Zhang, L.W.: The Rate of Convergence of the Augmented Lagrangian Method for Nonlinear Semidefinite Programming. Math. Program. 114, 349–391 (2008)


Acknowledgements

The authors thank Prof. Ya-xiang Yuan for his long-term guidance and encouragement, and Profs. Xinwei Liu and Zhongwen Chen for their useful discussions and comments. They thank Dr. Jiani Wang for carrying out the numerical experiments that verify the theoretical results on the augmented Lagrangian method. Many thanks are also due to the two anonymous reviewers for their valuable comments and suggestions, which helped to improve the quality of this paper.

Author information


Corresponding author

Correspondence to Yu-Hong Dai.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The first author was supported by the Natural Science Foundation of China (Nos. 11991020, 12021001, 11631013, 11971372 and 11991021) and by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA27000000). The second author was supported by the Natural Science Foundation of China (Nos. 11971089 and 11731013) and in part by the Dalian High-Level Talent Innovation Project (No. 2020RD09).


About this article


Cite this article

Dai, YH., Zhang, L. The augmented Lagrangian method can approximately solve convex optimization with least constraint violation. Math. Program. 200, 633–667 (2023). https://doi.org/10.1007/s10107-022-01843-2

