Abstract
We propose QPALM, a nonconvex quadratic programming (QP) solver based on the proximal augmented Lagrangian method. This method solves a sequence of inner subproblems which can be enforced to be strongly convex and which therefore admit a unique solution. The resulting steps are shown to be equivalent to inexact proximal point iterations on the extended-real-valued cost function, which allows for a fairly simple analysis in which convergence to a stationary point at an \(R\)-linear rate is established. The QPALM algorithm solves the subproblems iteratively using semismooth Newton directions and an exact linesearch. The former can be computed efficiently in most iterations by making use of suitable factorization update routines, while the latter requires finding the zero of a monotone, one-dimensional, piecewise affine function. QPALM is implemented in open-source C code, with tailored linear algebra routines for the factorization provided by the dedicated package LADEL. The resulting implementation is shown to be extremely robust in numerical simulations, solving all of the Maros–Mészáros problems and finding a stationary point for most of the nonconvex QPs in the CUTEst test set. Furthermore, it is shown to be competitive with state-of-the-art convex QP solvers on typical QPs arising from application domains such as portfolio optimization and model predictive control. As such, QPALM strikes a unique balance between solving both easy and hard problems efficiently.
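The exact linesearch mentioned above amounts to finding the zero of a monotone, one-dimensional, piecewise affine function. The following is a minimal illustrative sketch of that subroutine, not QPALM's internal implementation: the function name and the particular parametrization \(f(\tau) = c_0 + c_1\tau + \sum_i w_i\max(0,\tau-t_i)\) are hypothetical, chosen only to show how sorting the breakpoints and scanning the affine segments locates the zero in one pass.

```python
def pwa_zero(c0, c1, w, t):
    """Zero of the strictly increasing piecewise affine function
        f(tau) = c0 + c1*tau + sum_i w[i] * max(0, tau - t[i]),
    assuming c1 > 0 and w[i] >= 0 for all i (so f is monotone)."""
    # sort breakpoints, carrying their slope increments along
    pairs = sorted(zip(t, w))
    slope, icpt = c1, c0          # leftmost affine segment of f
    tau = -icpt / slope           # candidate zero on that segment
    for ti, wi in pairs:
        if tau <= ti:             # zero lies on the current segment
            return tau
        # crossing breakpoint ti: the slope gains wi; adjust the
        # intercept so that f stays continuous at ti
        slope += wi
        icpt -= wi * ti
        tau = -icpt / slope
    return tau

# example: f(tau) = -3 + tau + 2*max(0, tau - 1) vanishes at tau = 5/3
```

Since the breakpoints must be sorted, the cost is dominated by the sort; the scan itself is linear in the number of pieces.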
Data Availability Statement
This manuscript has no associated data, but all the numerical examples can be reproduced using the software available in the GitHub repository https://github.com/kul-optec/QPALM. All data analyzed during this study are publicly available. URLs are included in this published article.
References
Absil, P.A., Tits, A.L.: Newton-KKT interior-point methods for indefinite quadratic programming. Comput. Optim. Appl. 36(1), 5–41 (2007)
Amestoy, P.R., Davis, T.A., Duff, I.S.: Algorithm 837: AMD, an approximate minimum degree ordering algorithm. ACM Trans. Math. Softw. (TOMS) 30(3), 381–388 (2004)
Banjac, G., Goulart, P., Stellato, B., Boyd, S.: Infeasibility detection in the alternating direction method of multipliers for convex optimization. J. Optim. Theory Appl. 183(2), 490–519 (2019)
Benzi, M., Golub, G.H., Liesen, J.: Numerical solution of saddle point problems. Acta Numer. 14, 1–137 (2005)
Bertsekas, D.P.: Convexification procedures and decomposition methods for nonconvex optimization problems. J. Optim. Theory Appl. 29(2), 169–197 (1979)
Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Computer Science and Applied Mathematics. Academic Press, Boston (1982)
Bertsekas, D.P.: Nonlinear Programming. Athena Scientific (2016)
Birgin, E.G., Martínez, J.M.: Practical Augmented Lagrangian Methods for Constrained Optimization. Society for Industrial and Applied Mathematics, Philadelphia, PA (2014)
Bolte, J., Sabach, S., Teboulle, M.: Nonconvex Lagrangian-based optimization: monitoring schemes and global convergence. Math. Operations Res. 43(4), 1210–1232 (2018)
Boţ, R.I., Nguyen, D.K.: The proximal alternating direction method of multipliers in the nonconvex setting: convergence analysis and rates. Math. Operations Res. 45(2), 682–712 (2020)
Burer, S., Vandenbussche, D.: A finite branch-and-bound algorithm for nonconvex quadratic programming via semidefinite relaxations. Math. Program. 113(2), 259–282 (2008)
Chen, J., Burer, S.: Globally solving nonconvex quadratic programming problems via completely positive programming. Math. Program. Comput. 4(1), 33–52 (2012)
Chen, Y., Davis, T.A., Hager, W.W., Rajamanickam, S.: Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate. ACM Trans. Math. Softw. (TOMS) 35(3), 1–14 (2008)
Combettes, P.L., Pennanen, T.: Proximal methods for cohypomonotone operators. SIAM J. Control Optim. 43(2), 731–742 (2004)
Cottle, R.W., Habetler, G., Lemke, C.: On classes of copositive matrices. Linear Algebra Appl. 3(3), 295–310 (1970)
Davis, T.A.: Algorithm 849: a concise sparse Cholesky factorization package. ACM Trans. Math. Softw. (TOMS) 31(4), 587–591 (2005)
Davis, T.A.: Direct Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics (2006)
Davis, T.A., Hager, W.W.: Modifying a sparse Cholesky factorization. SIAM J. Matrix Anal. Appl. 20(3), 606–627 (1999)
Davis, T.A., Hager, W.W.: Multiple-rank modifications of a sparse Cholesky factorization. SIAM J. Matrix Anal. Appl. 22(4), 997–1013 (2001)
Davis, T.A., Hager, W.W.: Row modifications of a sparse Cholesky factorization. SIAM J. Matrix Anal. Appl. 26(3), 621–639 (2005)
Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)
Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings, vol. 208. Springer, Berlin (2009)
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. II. Springer, Berlin (2003)
Ferreau, H.J., Kirches, C., Potschka, A., Bock, H.G., Diehl, M.: qpOASES: a parametric active-set algorithm for quadratic programming. Math. Program. Comput. 6(4), 327–363 (2014)
Frison, G., Diehl, M.: HPIPM: a high-performance quadratic programming framework for model predictive control. IFAC-PapersOnLine 53(2), 6563–6569 (2020)
Gertz, E.M., Wright, S.J.: Object-oriented software for quadratic programming. ACM Trans. Math. Softw. (TOMS) 29(1), 58–81 (2003)
Gill, P.E., Wong, E.: Methods for convex and general quadratic programming. Math. Program. Comput. 7(1), 71–112 (2015)
Golub, G.H., Van Loan, C.F.: Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, Baltimore (2013)
Gould, N., Scott, J.: A note on performance profiles for benchmarking software. ACM Trans. Math. Softw. (TOMS) 43(2), 1–5 (2016)
Gould, N.I., Orban, D., Toint, P.L.: GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization. ACM Trans. Math. Softw. (TOMS) 29(4), 353–372 (2003)
Gould, N.I., Orban, D., Toint, P.L.: CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Comput. Optim. Appl. 60(3), 545–557 (2015)
Gurobi Optimization, LLC: Gurobi optimizer reference manual (2018). http://www.gurobi.com
Hermans, B.: LADEL: Quasidefinite sparse LDL factorization package with rank 1 updates and (symmetric) row/column additions and deletes (2022). https://doi.org/10.5281/zenodo.5939513
Hermans, B., Themelis, A., Patrinos, P.: QPALM: a Newton-type proximal augmented Lagrangian method for quadratic programs. In: 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 4325–4330 (2019)
Hermans, B., Themelis, A., Patrinos, P.: QPALM (2022). https://doi.org/10.5281/zenodo.5939473
Iusem, A.N., Pennanen, T., Svaiter, B.F.: Inexact variants of the proximal point algorithm without monotonicity. SIAM J. Optim. 13(4), 1080–1097 (2003)
Knyazev, A.V.: Toward the optimal preconditioned eigensolver: locally optimal block preconditioned conjugate gradient method. SIAM J. Sci. Comput. 23(2), 517–541 (2001)
Kong, W., Melo, J.G., Monteiro, R.D.: Complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs. SIAM J. Optim. 29(4), 2566–2593 (2019)
Li, G., Pong, T.K.: Global convergence of splitting methods for nonconvex composite optimization. SIAM J. Optim. 25(4), 2434–2460 (2015)
Lin, Q., Ma, R., Xu, Y.: Inexact proximal-point penalty methods for non-convex optimization with non-convex constraints. arXiv preprint arXiv:1908.11518 (2019)
Luo, Z.Q., Tseng, P.: Error bounds and convergence analysis of feasible descent methods: a general approach. Ann. Operations Res. 46(1), 157–178 (1993)
Maros, I., Mészáros, C.: A repository of convex quadratic programming problems. Optim. Methods Softw. 11(1–4), 671–681 (1999)
Mészáros, C.: The BPMPD interior point solver for convex quadratic problems. Optim. Methods Softw. 11(1–4), 431–449 (1999)
MOSEK ApS: MOSEK optimization toolbox for MATLAB. User's guide and reference manual, version 9.2.22 (2019). https://docs.mosek.com/8.0/toolbox/index.html
Nocedal, J., Wright, S.: Numerical Optimization. Springer Science & Business Media, Berlin (2006)
Patrinos, P., Sarimveis, H.: A new algorithm for solving convex parametric quadratic programs based on graphical derivatives of solution mappings. Automatica 46(9), 1405–1418 (2010)
Polyak, B.T.: Introduction to Optimization. Optimization Software, Inc., Publications Division, New York (1987)
Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Operations Res. 1(2), 97–116 (1976)
Rockafellar, R.T., Wets, R.J.: Variational Analysis, vol. 317. Springer Science & Business Media, Berlin (2011)
Ruiz, D.: A scaling algorithm to equilibrate both rows and columns norms in matrices. Tech. rep., Rutherford Appleton Laboratory (2001)
Sherali, H.D., Tuncbilek, C.H.: A reformulation-convexification approach for solving nonconvex quadratic programming problems. J. Global Optim. 7(1), 1–31 (1995)
Stellato, B., Banjac, G., Goulart, P., Bemporad, A., Boyd, S.: OSQP: an operator splitting solver for quadratic programs. Math. Program. Comput. 12, 637–672 (2020)
Sun, T., Jiang, H., Cheng, L., Zhu, W.: A convergence framework for inexact nonconvex and nonsmooth algorithms and its applications to several iterations. arXiv preprint arXiv:1709.04072 (2017)
Themelis, A., Ahookhosh, M., Patrinos, P.: On the acceleration of forward-backward splitting via an inexact Newton method. In: Luke, R., Bauschke, H., Burachik, R. (eds.) Splitting Algorithms, Modern Operator Theory, and Applications. Springer, Berlin (2019)
Themelis, A., Patrinos, P.: Douglas–Rachford splitting and ADMM for nonconvex optimization: tight convergence results. SIAM J. Optim. 30(1), 149–181 (2020)
Vanderbei, R.J.: Symmetric quasidefinite matrices. SIAM J. Optim. 5(1), 100–113 (1995)
Wächter, A., Biegler, L.T.: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 106(1), 25–57 (2006)
Yannakakis, M.: Computing the minimum fill-in is NP-complete. SIAM J. Algebra. Disc. Methods 2(1), 77–79 (1981)
Ye, Y.: On affine scaling algorithms for nonconvex quadratic programming. Math. Program. 56(1–3), 285–300 (1992)
Acknowledgements
The authors are thankful to the associate editor and the anonymous reviewers for their careful reading and the insightful comments that helped to improve the paper.
Funding
The work of Ben Hermans was supported by KU Leuven-BOF PFV/10/002 Centre of Excellence: Optimization in Engineering (OPTEC), from project G0C4515N of the Research Foundation–Flanders (FWO–Flanders), from Flanders Make ICON: Avoidance of collisions and obstacles in narrow lanes, and from the KU Leuven Research project C14/15/067: B-spline based certificates of positivity with applications in engineering.
The work of Andreas Themelis was supported by the JSPS KAKENHI grant number JP21K17710.
The work of Panagiotis Patrinos was supported by the Research Foundation Flanders (FWO) research projects G081222N, G086518N, G086318N, and G0A0920N; Research Council KU Leuven C1 project No. C14/18/068; Fonds de la Recherche Scientifique—FNRS and the Fonds Wetenschappelijk Onderzoek—Vlaanderen under EOS project no 30468160 (SeLMA); European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 953348.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Proof of Theorem 2.2
Proof (Inexact nonconvex PP [53, §4.1]) The proximal inequality,
cf. (1.3), yields
proving assertions 2.2(ii) and 2.2(iv), and similarly 2.2(i) follows by invoking [47, Lem. 2.2.2]. Next, let \((x^k)_{k\in K}\) be a subsequence converging to a point \(x^\star \); then, it also holds that \((x^{k+1})_{k\in K}\) converges to \(x^\star \) owing to assertion 2.2(ii). From the proximal inequality (1.3) we have
so that passing to the limit for \(K\ni k\rightarrow \infty \) we obtain that \(\limsup _{k\in K}\varphi (x^{k+1})\le \varphi (x^\star )\). In fact, equality holds since \(\varphi \) is lsc; hence from assertion 2.2(i) we conclude that \(\varphi (x^{k+1})\rightarrow \varphi (x^\star )\) as \(k\rightarrow \infty \), and in turn from the arbitrariness of \(x^\star \) it follows that \(\varphi \) is constant, equal to this limit, on the whole set of cluster points. To conclude the proof of assertion 2.2(iii), observe that the inclusion \(\varSigma _{\mathsf{x}}^{-1}(x^k+e^k-x^{k+1}) \in \hat{\partial }\varphi (x^{k+1})\), cf. (1.4), implies that
and with limiting arguments (recall that \(\lim _{k\in K}\varphi (x^k)=\varphi (\lim _{k\in K}x^k)\)) the claimed stationarity of the cluster points is obtained.
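The proximal point scheme analyzed above can be illustrated numerically. The following is a minimal sketch under stated assumptions, not QPALM itself: the inner subproblems are solved (essentially) exactly, so \(e^k\approx 0\), and the one-dimensional nonconvex function, the parameter \(\sigma\), and the iteration counts are all chosen purely for exposition.

```python
# toy nonconvex function: phi(x) = x^4/4 - x^2/2, with stationary
# points at x = -1, 0, 1 and curvature phi''(x) = 3x^2 - 1 >= -1 (L = 1)
phi_grad = lambda x: x**3 - x

sigma = 0.5            # sigma < 1/L makes every inner subproblem strongly convex

x = 2.0                # starting point
for k in range(50):
    # inner subproblem: minimize phi(z) + (z - x)^2 / (2*sigma),
    # solved here by Newton's method on its strictly increasing gradient
    z = x
    for _ in range(30):
        g = phi_grad(z) + (z - x) / sigma
        h = 3 * z**2 - 1 + 1 / sigma   # inner second derivative, > 0 by choice of sigma
        z -= g / h
    x = z                              # proximal point update

# the iterates approach the stationary point x = 1 at a linear rate,
# consistent with the R-linear convergence asserted by the theorem
```

Each fixed point of the update is a stationary point of \(\varphi\), mirroring the role of the inclusion \(\varSigma_{\mathsf{x}}^{-1}(x^k+e^k-x^{k+1})\in\hat{\partial}\varphi(x^{k+1})\) in the proof.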
About this article
Cite this article
Hermans, B., Themelis, A. & Patrinos, P. QPALM: a proximal augmented Lagrangian method for nonconvex quadratic programs. Math. Prog. Comp. 14, 497–541 (2022). https://doi.org/10.1007/s12532-022-00218-0
Keywords
- Nonconvex QPs
- Proximal augmented Lagrangian
- Semismooth Newton method
- Exact linesearch
- Factorization updates