Augmented Lagrangian Methods for Convex Matrix Optimization Problems

Journal of the Operations Research Society of China

Abstract

In this paper, we provide a gentle introduction to recent advances in augmented Lagrangian methods for solving large-scale convex matrix optimization problems (cMOP). Specifically, we review two types of sufficient conditions for ensuring the quadratic growth conditions of a class of constrained convex matrix optimization problems regularized by nonsmooth spectral functions. Under a mild quadratic growth condition on the dual of cMOP, we further discuss the R-superlinear convergence of the Karush–Kuhn–Tucker (KKT) residuals of the sequence generated by the augmented Lagrangian method (ALM) for solving convex matrix optimization problems. Implementation details of the ALM for solving core convex matrix optimization problems are also provided.
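
To fix notation, the display below sketches one (exactly solved) iteration of the ALM for a generic linearly constrained convex problem \(\min_x\,\{f(x) : \mathcal{A}x = b\}\); the symbols \(f\), \(\mathcal{A}\), \(b\), \(y^k\) and \(\sigma_k\) are illustrative and not taken from the paper, which treats the more general cMOP setting:

\[
\begin{aligned}
x^{k+1} &\in \mathop{\arg\min}_{x}\;\Big\{ f(x) + \langle y^{k}, \mathcal{A}x - b\rangle + \frac{\sigma_k}{2}\,\Vert \mathcal{A}x - b\Vert^{2} \Big\},\\
y^{k+1} &= y^{k} + \sigma_k\,(\mathcal{A}x^{k+1} - b),
\end{aligned}
\]

where \(\sigma_k > 0\) is the penalty parameter. The convergence results surveyed in the paper concern the KKT residuals of the sequence \(\{(x^k, y^k)\}\) generated by (inexact versions of) this type of update.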


Notes

  1. It is easy to check that the largest eigenvalue of P is 1 and all other eigenvalues are no more than 1 in magnitude.

  2. For the convergence of the PPA, the parameters \(\sigma_k\) need not be increasing as long as they are bounded away from zero; see the generic update sketched after these notes.

  3. The Lipschitz continuity of a set-valued mapping may refer to other properties elsewhere, such as in [74, Definition 9.26].
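
For reference, the generic PPA update mentioned in Note 2 takes the form below; here \(\mathcal{T}\) denotes a maximal monotone operator and \(z^0\) is an arbitrary starting point, notation that is illustrative rather than taken from the paper:

\[
z^{k+1} = (\mathcal{I} + \sigma_k \mathcal{T})^{-1}(z^k), \qquad k = 0, 1, 2, \ldots,
\]

so the claim in Note 2 is that global convergence only requires \(\inf_k \sigma_k > 0\); the sequence \(\{\sigma_k\}\) itself need not be nondecreasing.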

References

  1. Vandenberghe, L., Boyd, S.: Semidefinite programming. SIAM Rev. 38, 49–95 (1996)

  2. Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9, 717–772 (2009)

  3. Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56, 2053–2080 (2010)

  4. Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum rank solutions to linear matrix equations via nuclear norm minimization. SIAM Rev. 52, 471–501 (2010)

  5. Watson, G.A.: On matrix approximation problems with Ky Fan \(k\) norms. Numer. Algorithms 5, 263–272 (1993)

  6. Greenbaum, A., Trefethen, L.N.: GMRES/CR and Arnoldi/Lanczos as matrix approximation problems. SIAM J. Sci. Comput. 15, 359–368 (1994)

  7. Toh, K.-C., Trefethen, L.N.: The Chebyshev polynomials of a matrix. SIAM J. Matrix Anal. Appl. 20, 400–419 (1998)

  8. Boyd, S., Diaconis, P., Sun, J., Xiao, L.: Fastest mixing Markov chain on a path. Am. Math. Month. 113, 70–74 (2006)

  9. Boyd, S., Diaconis, P., Parrilo, P.A., Xiao, L.: Fastest mixing Markov chain on graphs with symmetries. SIAM J. Optim. 20, 792–819 (2009)

  10. Ding, C.: An Introduction to a Class of Matrix Optimization Problems, Ph.D Thesis, Department of Mathematics, National University of Singapore, (2012)

  11. Ding, C., Sun, D.F., Toh, K.-C.: An introduction to a class of matrix cone programming. Math. Program. 144, 141–179 (2014)

  12. Lewis, A.S.: The convex analysis of unitarily invariant matrix functions. J. Conv. Anal. 2, 173–183 (1995)

  13. Lewis, A.S.: Convex analysis on the Hermitian matrices. SIAM J. Optim. 6, 164–177 (1996)

  14. Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4, 303–320 (1969)

  15. Powell, M.J.D.: A method for nonlinear constraints in minimization problems. In: Fletcher, R. (ed.) Optimization, Academic, pp. 283–298 (1969)

  16. Ito, K., Kunisch, K.: The augmented Lagrangian method for equality and inequality constraints in Hilbert spaces. Math. Program. 46, 341–360 (1990)

  17. Conn, A.R., Gould, N., Toint, P.L.: A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds. SIAM J. Numer. Anal. 28, 545–572 (1991)

  18. Contesse-Becker, L.: Extended convergence results for the method of multipliers for non-strictly binding inequality constraints. J. Optim. Theory Appl. 79, 273–310 (1993)

  19. Conn, A.R., Gould, N., Sartenaer, A., Toint, P.L.: Convergence properties of an augmented Lagrangian algorithm for optimization with a combination of general equality and linear constraints. SIAM J. Optim. 6, 674–703 (1996)

  20. Pennanen, T.: Local convergence of the proximal point algorithm and multiplier methods without monotonicity. Math. Oper. Res. 27, 170–191 (2002)

  21. Shapiro, A., Sun, J.: Some properties of the augmented Lagrangian in cone constrained optimization. Math. Oper. Res. 29, 479–491 (2004)

  22. Sun, D.F., Sun, J., Zhang, L.W.: The rate of convergence of the augmented Lagrangian method for nonlinear semidefinite programming. Math. Program. 114, 349–391 (2008)

  23. Bertsekas, D.: Constrained Optimization and Lagrange Multipliers Methods. Academic Press, New York (1982)

  24. Golshtein, E.G., Tretyakov, N.V.: Modified Lagrangians and Monotone Maps in Optimization. Wiley, New York (1989)

  25. Fortin, M., Glowinski, R.: Augmented Lagrangian Methods: Applications to Numerical Solutions of Boundary Value Problems. North-Holland, Amsterdam (1983)

  26. Bergounioux, M.: Use of augmented Lagrangian methods for the optimal control of obstacle problems. J. Optim. Theory Appl. 95, 101–126 (1997)

  27. Nilssen, T.K., Mannseth, T., Tai, X.-C.: Permeability estimation with the augmented Lagrangian method for a nonlinear diffusion equation. Comput. Geosci. 7, 27–47 (2003)

  28. Attouch, H., Soueycatt, M.: Augmented Lagrangian and proximal alternating direction methods of multipliers in Hilbert spaces. Applications to games, PDE’s and control. Pac. J. Optim. 5, 17–37 (2009)

  29. Zhao, X.Y., Sun, D.F., Toh, K.-C.: A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM J. Optim. 20, 1737–1765 (2010)

  30. Yang, L.Q., Sun, D.F., Toh, K.-C.: SDPNAL+: a majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints. Math. Program. Comput. 7, 331–366 (2015)

  31. Li, X.D., Sun, D.F., Toh, K.-C.: QSDPNAL: a two-phase augmented Lagrangian method for convex quadratic semidefinite programming. Math. Program. Comput. 10, 703–743 (2018)

  32. Jiang, K. F., Sun, D. F., Toh, K.-C.: Solving nuclear norm regularized and semidefinite matrix least squares problems with linear equality constraints, In: Discrete Geometry and Optimization, Springer, 133–162 (2013)

  33. Chen, C.H., Liu, Y.J., Sun, D.F., Toh, K.-C.: A semismooth Newton-CG dual proximal point algorithm for matrix spectral norm approximation problems. Math. Program. 155, 435–470 (2016)

  34. Fernández, D., Solodov, M.V.: Local convergence of exact and inexact augmented Lagrangian methods under the second-order sufficient optimality condition. SIAM J. Optim. 22, 384–407 (2012)

  35. Dontchev, A.L., Rockafellar, R.T.: Characterizations of Lipschitzian stability in nonlinear programming. In Mathematical Programming With Data Perturbations, Marcel Dekker, New York, pp. 65–82 (1997)

  36. Klatte, D.: Upper Lipschitz behavior of solutions to perturbed \(C^{1, 1}\) programs. Math. Program. 88, 285–311 (2000)

  37. Izmailov, A.F., Kurennoy, A.S., Solodov, M.V.: A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems. Math. Program. 142, 591–604 (2013)

  38. Mordukhovich, B.S., Sarabi, M.E.: Critical multipliers in variational systems via second-order generalized differentiation. Math. Program. 169, 605–645 (2018)

  39. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1, 97–116 (1976)

  40. Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer, New York (2000)

  41. Luque, F.J.: Asymptotic convergence analysis of the proximal point algorithm. SIAM J. Control Optim. 22, 277–293 (1984)

  42. Cui, Y.: Large scale composite optimization problems with coupled objective functions: theory, algorithms and applications, Ph.D Thesis, Department of Mathematics, National University of Singapore, (2016)

  43. Cui, Y., Ding, C., Zhao, X.Y.: Quadratic growth conditions for convex matrix optimization problems associated with spectral functions. SIAM J. Optim. 27, 2332–2355 (2017)

  44. Cui, Y., Sun, D.F., Toh, K.-C.: On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming. Math. Program. 178, 381–415 (2019)

  45. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)

  46. Miao, W.M., Pan, S.H., Sun, D.F.: A rank-corrected procedure for matrix completion with fixed basis coefficients. Math. Program. 159, 289–338 (2016)

  47. Ding, C., Sun, D.F., Sun, J., Toh, K.-C.: Spectral operators of matrices. Math. Program. 168, 509–531 (2018)

  48. Negahban, S., Wainwright, M.J.: Restricted strong convexity and weighted matrix completion: optimal bounds with noise. J. Mach. Learn. Res. 13, 1665–1697 (2012)

  49. Klopp, O.: Noisy low-rank matrix completion with general sampling distribution. Bernoulli 20, 282–303 (2014)

  50. Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings. Springer, New York (2009)

  51. Ioffe, A.D., Outrata, J.V.: On metric and calmness qualification conditions in subdifferential calculus. Set-Valued Anal. 16, 199–227 (2008)

  52. Fabian, M., Henrion, R., Kruger, A.Y., Outrata, J.V.: Error bounds: necessary and sufficient conditions. Set-Valued Variat. Anal. 18, 121–149 (2010)

  53. Gfrerer, H.: First order and second order characterizations of metric subregularity and calmness of constraint set mappings. SIAM J. Optim. 21, 1439–1474 (2011)

  54. Li, G.Y., Mordukhovich, B.S.: Hölder metric subregularity with applications to proximal point method. SIAM J. Optim. 22, 1655–1684 (2012)

  55. Gfrerer, H.: On directional metric subregularity and second-order optimality conditions for a class of nonsmooth mathematical programs. SIAM J. Optim. 23, 632–665 (2013)

  56. Gfrerer, H., Outrata, J.V.: On Lipschitzian properties of implicit multifunctions. SIAM J. Optim. 26, 2160–2189 (2016)

  57. Luo, Z.Q., Tseng, P.: Error bounds and convergence analysis of feasible descent methods: a general approach. Ann. Oper. Res. 46, 157–178 (1993)

  58. Tseng, P.: Approximation accuracy, gradient methods, and error bound for structured convex optimization. Math. Program. 125, 263–295 (2010)

  59. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)

  60. Leventhal, D.: Metric subregularity and the proximal point method. J. Math. Anal. Appl. 360, 681–688 (2009)

  61. Fischer, A.: Local behavior of an iterative framework for generalized equations with nonisolated solutions. Math. Program. 94, 91–124 (2002)

  62. Facchinei, F., Fischer, A., Herrich, M.: An LP-Newton method: nonsmooth equations, KKT systems, and nonisolated solutions. Math. Program. 146, 1–36 (2014)

  63. Mordukhovich, B.S., Ouyang, W.: Higher-order metric subregularity and its applications. J. Global Optim. 63, 777–795 (2015)

  64. Artacho, F.J.A., Geoffroy, M.H.: Characterization of metric regularity of subdifferentials. J. Convex Anal. 15, 365–380 (2008)

  65. Artacho, F.J.A., Geoffroy, M.H.: Metric subregularity of the convex subdifferential in Banach spaces. J. Nonlinear Convex Anal. 15, 35–47 (2014)

  66. Bauschke, H.H., Borwein, J.M.: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367–426 (1996)

  67. Bauschke, H.H., Borwein, J.M., Li, W.: Strong conical hull intersection property, bounded linear regularity, Jameson’s property (G), and error bounds in convex optimization. Math. Program. 86, 135–160 (1999)

  68. Sun, D.F.: The strong second order sufficient condition and constraint nondegeneracy in nonlinear semidefinite programming and their implications. Math. Oper. Res. 31, 761–776 (2006)

  69. Ding, C.: Variational analysis of the Ky Fan \(k\)-norm. Set-Valued Variat. Anal. 25, 265–296 (2017)

  70. Mangasarian, O.L.: A simple characterization of solution sets of convex programs. Oper. Res. Lett. 7, 21–26 (1988)

  71. Zhou, Z.R., So, A.M.C.: A unified approach to error bounds for structured convex optimization problems. Math. Program. 165, 689–728 (2017)

  72. Cui, Y., Sun, D.F., Toh, K.-C.: On the asymptotic superlinear convergence of the augmented Lagrangian method for semidefinite programming with multiple solutions. arXiv:1610.00875 (2016)

  73. Overton, M., Womersley, R.: Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices. Math. Program. 62, 321–357 (1993)

  74. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, New York (1998)

  75. Robinson, S. M.: An implicit-function theorem for generalized variational inequalities. Technical Summary Report No. 1672, Mathematics Research Center, University of Wisconsin-Madison, 1976; available from National Technical Information Service under Accession No. ADA031952

  76. Robinson, S. M.: Some continuity properties of polyhedral multifunctions, In Mathematical Programming at Oberwolfach, vol. 14 of Mathematical Programming Studies, Springer, Heidelberg, pp. 206–214 (1981)

  77. Sun, J.: On Monotropic Piecewise Quadratic Programming, Ph.D Thesis, Department of Mathematics, University of Washington, Seattle (1986)

  78. Han, D.R., Sun, D.F., Zhang, L.W.: Linear rate convergence of the alternating direction method of multipliers for convex composite programming. Math. Oper. Res. 43, 622–637 (2018)

  79. Sun, D.F., Sun, J.: Semismooth matrix-valued functions. Math. Oper. Res. 27, 150–169 (2002)

  80. Fischer, A.: Solution of monotone complementarity problems with locally Lipschitzian functions. Math. Program. 76, 513–532 (1997)

  81. Clarke, F.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)

  82. Hiriart-Urruty, J.-B., Strodiot, J.-J., Nguyen, V.H.: Generalized Hessian matrix and second-order optimality conditions for problems with \({C}^{1,1}\) data. Appl. Math. Optim. 11, 43–56 (1984)

  83. Pang, J.-S., Sun, D.F., Sun, J.: Semismooth homeomorphisms and strong stability of semidefinite and Lorentz complementarity problems. Math. Oper. Res. 28, 39–63 (2003)

  84. Chen, L., Sun, D.F., Toh, K.-C.: An efficient inexact symmetric Gauss-Seidel based majorized ADMM for high-dimensional convex composite conic programming. Math. Program. 161, 237–270 (2017)

  85. Li, X.D., Sun, D.F., Toh, K.-C.: A Schur complement based semi-proximal ADMM for convex quadratic conic programming and extensions. Math. Program. 155, 333–373 (2016)

  86. Li, X.D.: A Two-Phase Augmented Lagrangian Method for Convex Composite Quadratic Programming, Ph.D Thesis, Department of Mathematics, National University of Singapore, (2015)

Author information

Corresponding author

Correspondence to Xu-Dong Li.

Additional information

Chao Ding’s research was supported by the National Natural Science Foundation of China (Nos. 11671387, 11531014, and 11688101) and Beijing Natural Science Foundation (No. Z190002). Xu-Dong Li’s research was supported by the National Key R&D Program of China (No. 2020YFA0711900), the National Natural Science Foundation of China (No. 11901107), the Young Elite Scientists Sponsorship Program by CAST (No. 2019QNRC001), the Shanghai Sailing Program (No. 19YF1402600), and the Science and Technology Commission of Shanghai Municipality Project (No. 19511120700). Xin-Yuan Zhao’s research was supported by the National Natural Science Foundation of China (No. 11871002) and the General Program of Science and Technology of Beijing Municipal Education Commission (No. KM201810005004).

About this article

Cite this article

Cui, Y., Ding, C., Li, XD. et al. Augmented Lagrangian Methods for Convex Matrix Optimization Problems. J. Oper. Res. Soc. China 10, 305–342 (2022). https://doi.org/10.1007/s40305-021-00346-9
