Abstract
In this paper, we provide a gentle introduction to recent advances in augmented Lagrangian methods for solving large-scale convex matrix optimization problems (cMOP). Specifically, we review two types of sufficient conditions for ensuring the quadratic growth conditions of a class of constrained convex matrix optimization problems regularized by nonsmooth spectral functions. Under a mild quadratic growth condition on the dual of cMOP, we further discuss the R-superlinear convergence of the Karush–Kuhn–Tucker (KKT) residuals of the sequence generated by the augmented Lagrangian methods (ALM) for solving convex matrix optimization problems. Implementation details of the ALM for solving core convex matrix optimization problems are also provided.
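To make the ALM described above concrete, the following is a minimal Python/NumPy sketch of an inexact augmented Lagrangian iteration applied to a toy nuclear-norm-regularized instance of cMOP with linear equality constraints. The problem data (G, A_ops, b, mu), the helper svt, and the proximal-gradient inner solver are illustrative assumptions, not the implementation described in the paper.

```python
# A minimal ALM sketch (illustrative only), assuming the toy instance
#   min_X  0.5*||X - G||_F^2 + mu*||X||_*   s.t.  <A_i, X> = b_i,  i = 1,...,m.
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal mapping of tau*||.||_* at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def alm(G, A_ops, b, mu=1.0, sigma=1.0, outer=50, inner=100, tol=1e-8):
    """Inexact ALM: the subproblem in X is solved approximately by proximal
    gradient steps; sigma may also be increased across the outer iterations."""
    X = np.zeros_like(G)
    y = np.zeros(len(A_ops))
    A = lambda X: np.array([np.tensordot(Ai, X) for Ai in A_ops])   # A(X)
    At = lambda y: sum(yi * Ai for yi, Ai in zip(y, A_ops))         # adjoint A^*(y)
    L = 1.0 + sigma * sum(np.linalg.norm(Ai)**2 for Ai in A_ops)    # crude Lipschitz bound
    for _ in range(outer):
        for _ in range(inner):                                      # approximate minimization of L_sigma(., y)
            grad = (X - G) + At(y) + sigma * At(A(X) - b)           # gradient of the smooth part of L_sigma
            X = svt(X - grad / L, mu / L)                           # proximal gradient step
        y = y + sigma * (A(X) - b)                                  # multiplier update
        # KKT residual: primal feasibility plus the natural-map residual of
        # the inclusion 0 in (X - G) + mu*d||X||_* + A^*(y)
        kkt = max(np.linalg.norm(A(X) - b),
                  np.linalg.norm(X - svt(X - ((X - G) + At(y)), mu)))
        if kkt < tol:
            break
    return X, y
```

Each outer pass approximately minimizes the augmented Lagrangian in X, updates the multiplier y, and monitors a KKT residual; the paper's analysis concerns how fast such residuals decrease under quadratic growth conditions.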
Notes
It is easy to check that the largest eigenvalue of P is 1 and all other eigenvalues are no more than 1 in magnitude.
For the convergence of the PPA, the parameters \(\sigma _k\) need not be increasing as long as they are bounded away from zero; see the sketch following these notes.
The Lipschitz continuity of a set-valued mapping may refer to other properties elsewhere, such as in [74, Definition 9.26].
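As a concrete illustration of the note on the PPA above, the sketch below runs the proximal point iteration \(x^{k+1} = \text{prox}_{\sigma_k f}(x^k)\) with a constant parameter \(\sigma_k \equiv 0.7\), which is bounded away from zero but never increased; the one-dimensional function f and its closed-form proximal mapping are illustrative assumptions, not taken from the paper.

```python
# Proximal point iteration with a constant parameter sigma_k (illustrative),
# assuming f(u) = |u| + 0.5*(u - a)**2 in one dimension.
import numpy as np

def prox_f(a, sigma, x):
    """Closed-form prox of f(u) = |u| + 0.5*(u - a)**2 with parameter sigma."""
    z = (sigma * a + x) / (sigma + 1.0)      # minimizer of the smooth part
    t = sigma / (sigma + 1.0)                # effective soft threshold for |.|
    return np.sign(z) * max(abs(z) - t, 0.0)

a, x, sigma = 2.0, 5.0, 0.7                  # sigma_k = 0.7 for every k
for _ in range(50):
    x = prox_f(a, sigma, x)                  # x^{k+1} = prox_{sigma f}(x^k)
print(x)                                     # approaches the minimizer a - 1 = 1.0
```

Here the iteration contracts toward the minimizer at the linear rate \(1/(\sigma+1)\), so a fixed \(\sigma_k\) already yields convergence.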
References
Vandenberghe, L., Boyd, S.: Semidefinite programming. SIAM Rev. 38, 49–95 (1996)
Candès, E.J., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9, 717–772 (2008)
Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56, 2053–2080 (2009)
Recht, B., Fazel, M., Parrilo, P.A.: Guaranteed minimum rank solutions to linear matrix equations via nuclear norm minimization. SIAM Rev. 52, 471–501 (2010)
Watson, G.A.: On matrix approximation problems with Ky Fan \(k\) norms. Numer. Algorithms 5, 263–272 (1993)
Greenbaum, A., Trefethen, L.N.: GMRES/CR and Arnoldi/Lanczos as matrix approximation problems. SIAM J. Sci. Comput. 15, 359–368 (1994)
Toh, K.-C., Trefethen, L.N.: The Chebyshev polynomials of a matrix. SIAM J. Matrix Anal. Appl. 20, 400–419 (1998)
Boyd, S., Diaconis, P., Sun, J., Xiao, L.: Fastest mixing Markov chain on a path. Am. Math. Mon. 113, 70–74 (2006)
Boyd, S., Diaconis, P., Parrilo, P.A., Xiao, L.: Fastest mixing Markov chain on graphs with symmetries. SIAM J. Optim. 20, 792–819 (2009)
Ding, C.: An Introduction to a Class of Matrix Optimization Problems. Ph.D. Thesis, Department of Mathematics, National University of Singapore (2012)
Ding, C., Sun, D.F., Toh, K.-C.: An introduction to a class of matrix cone programming. Math. Program. 144, 141–179 (2014)
Lewis, A.S.: The convex analysis of unitarily invariant matrix functions. J. Conv. Anal. 2, 173–183 (1995)
Lewis, A.S.: Convex analysis on the Hermitian matrices. SIAM J. Optim. 6, 164–177 (1996)
Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4, 303–320 (1969)
Powell, M.J.D.: A method for nonlinear constraints in minimization problems. In: Fletcher, R. (ed.) Optimization, pp. 283–298. Academic Press, New York (1969)
Ito, K., Kunisch, K.: The augmented Lagrangian method for equality and inequality constraints in Hilbert spaces. Math. Program. 46, 341–360 (1990)
Conn, A.R., Gould, N., Toint, P.L.: A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds. SIAM J. Numer. Anal. 28, 545–572 (1991)
Contesse-Becker, L.: Extended convergence results for the method of multipliers for non-strictly binding inequality constraints. J. Optim. Theory Appl. 79, 273–310 (1993)
Conn, A.R., Gould, N., Sartenaer, A., Toint, P.L.: Convergence properties of an augmented Lagrangian algorithm for optimization with a combination of general equality and linear constraints. SIAM J. Optim. 6, 674–703 (1996)
Pennanen, T.: Local convergence of the proximal point algorithm and multiplier methods without monotonicity. Math. Oper. Res. 27, 170–191 (2002)
Shapiro, A., Sun, J.: Some properties of the augmented Lagrangian in cone constrained optimization. Math. Oper. Res. 29, 479–491 (2004)
Sun, D.F., Sun, J., Zhang, L.W.: The rate of convergence of the augmented Lagrangian method for nonlinear semidefinite programming. Math. Program. 114, 349–391 (2008)
Bertsekas, D.: Constrained Optimization and Lagrange Multipliers Methods. Academic Press, New York (1982)
Golshtein, E.G., Tretyakov, N.V.: Modified Lagrangians and Monotone Maps in Optimization. Wiley, New York (1989)
Fortin, M., Glowinski, R.: Augmented Lagrangian Methods: Applications to Numerical Solutions of Boundary Value Problems. North-Holland, Amsterdam (1983)
Bergounioux, M.: Use of augmented Lagrangian methods for the optimal control of obstacle problems. J. Optim. Theory Appl. 95, 101–126 (1997)
Nilssen, T.K., Mannseth, T., Tai, X.-C.: Permeability estimation with the augmented Lagrangian method for a nonlinear diffusion equation. Comput. Geosci. 7, 27–47 (2003)
Attouch, H., Soueycatt, M.: Augmented Lagrangian and proximal alternating direction methods of multipliers in Hilbert spaces. Applications to games, PDE’s and control. Pac. J. Optim. 5, 17–37 (2009)
Zhao, X.Y., Sun, D.F., Toh, K.-C.: A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM J. Optim. 20, 1737–1765 (2010)
Yang, L.Q., Sun, D.F., Toh, K.-C.: SDPNAL+: a majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints. Math. Program. Comput. 7, 331–366 (2015)
Li, X.D., Sun, D.F., Toh, K.-C.: QSDPNAL: a two-phase augmented Lagrangian method for convex quadratic semidefinite programming. Math. Program. Comput. 10, 703–743 (2018)
Jiang, K.F., Sun, D.F., Toh, K.-C.: Solving nuclear norm regularized and semidefinite matrix least squares problems with linear equality constraints. In: Discrete Geometry and Optimization, pp. 133–162. Springer (2013)
Chen, C.H., Liu, Y.J., Sun, D.F., Toh, K.-C.: A semismooth Newton-CG dual proximal point algorithm for matrix spectral norm approximation problems. Math. Program. 155, 435–470 (2016)
Fernández, D., Solodov, M.V.: Local convergence of exact and inexact augmented Lagrangian methods under the second-order sufficient optimality condition. SIAM J. Optim. 22, 384–407 (2012)
Dontchev, A.L., Rockafellar, R.T.: Characterizations of Lipschitzian stability in nonlinear programming. In: Mathematical Programming with Data Perturbations, pp. 65–82. Marcel Dekker, New York (1997)
Klatte, D.: Upper Lipschitz behavior of solutions to perturbed \(C^{1, 1}\) programs. Math. Program. 88, 285–311 (2000)
Izmailov, A.F., Kurennoy, A.S., Solodov, M.V.: A note on upper Lipschitz stability, error bounds, and critical multipliers for Lipschitz-continuous KKT systems. Math. Program. 142, 591–604 (2013)
Mordukhovich, B.S., Sarabi, M.E.: Critical multipliers in variational systems via second-order generalized differentiation. Math. Program. 169, 605–645 (2018)
Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1, 97–116 (1976)
Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer, New York (2000)
Luque, F.J.: Asymptotic convergence analysis of the proximal point algorithm. SIAM J. Control Optim. 22, 277–293 (1984)
Cui, Y.: Large Scale Composite Optimization Problems with Coupled Objective Functions: Theory, Algorithms and Applications. Ph.D. Thesis, Department of Mathematics, National University of Singapore (2016)
Cui, Y., Ding, C., Zhao, X.Y.: Quadratic growth conditions for convex matrix optimization problems associated with spectral functions. SIAM J. Optim. 27, 2332–2355 (2017)
Cui, Y., Sun, D.F., Toh, K.-C.: On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming. Math. Program. 178, 381–415 (2019)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Miao, W.M., Pan, S.H., Sun, D.F.: A rank-corrected procedure for matrix completion with fixed basis coefficients. Math. Program. 159, 289–338 (2016)
Ding, C., Sun, D.F., Sun, J., Toh, K.-C.: Spectral operators of matrices. Math. Program. 168, 509–531 (2018)
Negahban, S., Wainwright, M.J.: Restricted strong convexity and weighted matrix completion: optimal bounds with noise. J. Mach. Learn. Res. 13, 1665–1697 (2012)
Klopp, O.: Noisy low-rank matrix completion with general sampling distribution. Bernoulli 20, 282–303 (2014)
Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings. Springer, New York (2009)
Ioffe, A.D., Outrata, J.V.: On metric and calmness qualification conditions in subdifferential calculus. Set-Valued Anal. 16, 199–227 (2008)
Fabian, M., Henrion, R., Kruger, A.Y., Outrata, J.V.: Error bounds: necessary and sufficient conditions. Set-Valued Variat. Anal. 18, 121–149 (2010)
Gfrerer, H.: First order and second order characterizations of metric subregularity and calmness of constraint set mappings. SIAM J. Optim. 21, 1439–1474 (2011)
Li, G.Y., Mordukhovich, B.S.: Hölder metric subregularity with applications to proximal point method. SIAM J. Optim. 22, 1655–1684 (2012)
Gfrerer, H.: On directional metric subregularity and second-order optimality conditions for a class of nonsmooth mathematical programs. SIAM J. Optim. 23, 632–665 (2013)
Gfrerer, H., Outrata, J.V.: On Lipschitzian properties of implicit multifunctions. SIAM J. Optim. 26, 2160–2189 (2016)
Luo, Z.Q., Tseng, P.: Error bounds and convergence analysis of feasible descent methods: a general approach. Ann. Oper. Res. 46, 157–178 (1993)
Tseng, P.: Approximation accuracy, gradient methods, and error bound for structured convex optimization. Math. Program. 125, 263–295 (2010)
Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
Leventhal, D.: Metric subregularity and the proximal point method. J. Math. Anal. Appl. 360, 681–688 (2009)
Fischer, A.: Local behavior of an iterative framework for generalized equations with nonisolated solutions. Math. Program. 94, 91–124 (2002)
Facchinei, F., Fischer, A., Herrich, M.: An LP-Newton method: nonsmooth equations, KKT systems, and nonisolated solutions. Math. Program. 146, 1–36 (2014)
Mordukhovich, B.S., Ouyang, W.: Higher-order metric subregularity and its applications. J. Global Optim. 63, 777–795 (2015)
Artacho, F.J.A., Geoffroy, M.H.: Characterization of metric regularity of subdifferentials. J. Convex Anal. 15, 365–380 (2008)
Artacho, F.J.A., Geoffroy, M.H.: Metric subregularity of the convex subdifferential in Banach spaces. J. Nonlinear Convex Anal. 15, 35–47 (2014)
Bauschke, H.H., Borwein, J.M.: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367–426 (1996)
Bauschke, H.H., Borwein, J.M., Li, W.: Strong conical hull intersection property, bounded linear regularity, Jameson’s property (G), and error bounds in convex optimization. Math. Program. 86, 135–160 (1999)
Sun, D.F.: The strong second order sufficient condition and constraint nondegeneracy in nonlinear semidefinite programming and their implications. Math. Oper. Res. 31, 761–776 (2006)
Ding, C.: Variational analysis of the Ky Fan \(k\)-norm. Set-Valued Variat. Anal. 25, 265–296 (2017)
Mangasarian, O.L.: A simple characterization of solution sets of convex programs. Oper. Res. Lett. 7, 21–26 (1988)
Zhou, Z.R., So, A.M.C.: A unified approach to error bounds for structured convex optimization problems. Math. Program. 165, 689–728 (2017)
Cui, Y., Sun, D.F., Toh, K.-C.: On the asymptotic superlinear convergence of the augmented Lagrangian method for semidefinite programming with multiple solutions. arXiv:1610.00875 (2016)
Overton, M., Womersley, R.: Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices. Math. Program. 62, 321–357 (1993)
Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, New York (1998)
Robinson, S.M.: An implicit-function theorem for generalized variational inequalities. Technical Summary Report No. 1672, Mathematics Research Center, University of Wisconsin-Madison (1976); available from National Technical Information Service under Accession No. ADA031952
Robinson, S.M.: Some continuity properties of polyhedral multifunctions. In: Mathematical Programming at Oberwolfach, Mathematical Programming Studies, vol. 14, pp. 206–214. Springer, Heidelberg (1981)
Sun, J.: On Monotropic Piecewise Quadratic Programming. Ph.D. Thesis, Department of Mathematics, University of Washington, Seattle (1986)
Han, D.R., Sun, D.F., Zhang, L.W.: Linear rate convergence of the alternating direction method of multipliers for convex composite programming. Math. Oper. Res. 43, 622–637 (2018)
Sun, D.F., Sun, J.: Semismooth matrix-valued functions. Math. Oper. Res. 27, 150–169 (2002)
Fischer, A.: Solution of monotone complementarity problems with locally Lipschitzian functions. Math. Program. 76, 513–532 (1997)
Clarke, F.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)
Hiriart-Urruty, J.-B., Strodiot, J.-J., Nguyen, V.H.: Generalized Hessian matrix and second-order optimality conditions for problems with \({C}^{1,1}\) data. Appl. Math. Optim. 11, 43–56 (1984)
Pang, J.-S., Sun, D.F., Sun, J.: Semismooth homeomorphisms and strong stability of semidefinite and Lorentz complementarity problems. Math. Oper. Res. 28, 39–63 (2003)
Chen, L., Sun, D.F., Toh, K.-C.: An efficient inexact symmetric Gauss-Seidel based majorized ADMM for high-dimensional convex composite conic programming. Math. Program. 161, 237–270 (2017)
Li, X.D., Sun, D.F., Toh, K.-C.: A Schur complement based semi-proximal ADMM for convex quadratic conic programming and extensions. Math. Program. 155, 333–373 (2016)
Li, X.D.: A Two-Phase Augmented Lagrangian Method for Convex Composite Quadratic Programming. Ph.D. Thesis, Department of Mathematics, National University of Singapore (2015)
Additional information
Chao Ding’s research was supported by the National Natural Science Foundation of China (Nos. 11671387, 11531014, and 11688101) and Beijing Natural Science Foundation (No. Z190002). Xu-Dong Li’s research was supported by the National Key R&D Program of China (No. 2020YFA0711900), the National Natural Science Foundation of China (No. 11901107), the Young Elite Scientists Sponsorship Program by CAST (No. 2019QNRC001), the Shanghai Sailing Program (No. 19YF1402600), and the Science and Technology Commission of Shanghai Municipality Project (No. 19511120700). Xin-Yuan Zhao’s research was supported by the National Natural Science Foundation of China (No. 11871002) and the General Program of Science and Technology of Beijing Municipal Education Commission (No. KM201810005004).
Cite this article
Cui, Y., Ding, C., Li, XD. et al. Augmented Lagrangian Methods for Convex Matrix Optimization Problems. J. Oper. Res. Soc. China 10, 305–342 (2022). https://doi.org/10.1007/s40305-021-00346-9
Keywords
- Matrix optimization
- Spectral functions
- Quadratic growth conditions
- Metric subregularity
- Augmented Lagrangian methods
- Fast convergence rates
- Semismooth Newton methods