
QPALM: a proximal augmented Lagrangian method for nonconvex quadratic programs

  • Full Length Paper
  • Published in: Mathematical Programming Computation

Abstract

We propose QPALM, a nonconvex quadratic programming (QP) solver based on the proximal augmented Lagrangian method. This method solves a sequence of inner subproblems which can be enforced to be strongly convex and which therefore admit a unique solution. The resulting steps are shown to be equivalent to inexact proximal point iterations on the extended-real-valued cost function, which allows for a fairly simple analysis where convergence to a stationary point at an \(R\)-linear rate is shown. The QPALM algorithm solves the subproblems iteratively using semismooth Newton directions and an exact linesearch. The former can be computed efficiently in most iterations by making use of suitable factorization update routines, while the latter requires the zero of a monotone, one-dimensional, piecewise affine function. QPALM is implemented in open-source C code, with tailored linear algebra routines for the factorization provided by our own package LADEL. The resulting implementation is shown to be extremely robust in numerical simulations, solving all of the Maros–Mészáros problems and finding a stationary point for most of the nonconvex QPs in the CUTEst test set. Furthermore, it is shown to be competitive against state-of-the-art convex QP solvers in typical QPs arising from application domains such as portfolio optimization and model predictive control. As such, QPALM strikes a unique balance between solving both easy and hard problems efficiently.
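The exact linesearch mentioned in the abstract reduces to locating the zero of a monotone, one-dimensional, piecewise affine function. The following standalone sketch illustrates that root-finding idea only; it is not QPALM's actual routine, and the function name `pwa_zero` and the parameterization (slope `eta`, offset `beta`, kink slopes `deltas` at breakpoints `ts`) are hypothetical choices for illustration.

```python
def pwa_zero(eta, beta, deltas, ts):
    """Return the unique zero of the strictly increasing piecewise affine
    function  psi(t) = eta*t + beta + sum_i deltas[i]*max(0, t - ts[i]),
    assuming eta > 0 and deltas[i] >= 0.  The zero is found exactly by
    walking the breakpoints left to right, no iterative tolerance needed.
    Illustrative sketch only, not QPALM's implementation."""
    bps = sorted(zip(ts, deltas))  # breakpoints in increasing order of t
    psi = lambda t: eta * t + beta + sum(d * max(0.0, t - ti) for ti, d in bps)
    slope = eta  # slope of the leftmost affine segment
    for ti, di in bps:
        if psi(ti) >= 0.0:
            # psi crosses zero in the segment ending at ti; solve exactly
            return ti - psi(ti) / slope
        slope += di  # past breakpoint ti the slope grows by di
    # psi is still negative at the last breakpoint: zero lies to its right
    t_last = bps[-1][0] if bps else 0.0
    return t_last - psi(t_last) / slope
```

Because the function is monotone and affine between breakpoints, a single pass over the sorted breakpoints yields the zero exactly, which is what makes an *exact* (rather than backtracking) linesearch cheap in this setting.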


Data Availability Statement

This manuscript has no associated data, but all the numerical examples can be reproduced using the software available in the GitHub repository https://github.com/kul-optec/QPALM. All data analyzed during this study are publicly available. URLs are included in this published article.

Notes

  1. https://github.com/kul-optec/LADEL.

  2. https://github.com/DrTimothyAldenDavis/SuiteSparse/tree/master/AMD.

  3. https://github.com/kul-optec/QPALM.

References

  1. Absil, P.A., Tits, A.L.: Newton-KKT interior-point methods for indefinite quadratic programming. Comput. Optim. Appl. 36(1), 5–41 (2007)

  2. Amestoy, P.R., Davis, T.A., Duff, I.S.: Algorithm 837: AMD, an approximate minimum degree ordering algorithm. ACM Trans. Math. Softw. (TOMS) 30(3), 381–388 (2004)

  3. Banjac, G., Goulart, P., Stellato, B., Boyd, S.: Infeasibility detection in the alternating direction method of multipliers for convex optimization. J. Optim. Theory Appl. 183(2), 490–519 (2019)

  4. Benzi, M., Golub, G.H., Liesen, J.: Numerical solution of saddle point problems. Acta Numer. 14, 1 (2005)

  5. Bertsekas, D.P.: Convexification procedures and decomposition methods for nonconvex optimization problems. J. Optim. Theory Appl. 29(2), 169–197 (1979)

  6. Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Computer Science and Applied Mathematics. Academic Press, Boston (1982)

  7. Bertsekas, D.P.: Nonlinear Programming. Athena Scientific (2016)

  8. Birgin, E.G., Martínez, J.M.: Practical Augmented Lagrangian Methods for Constrained Optimization. Society for Industrial and Applied Mathematics, Philadelphia, PA (2014)

  9. Bolte, J., Sabach, S., Teboulle, M.: Nonconvex Lagrangian-based optimization: monitoring schemes and global convergence. Math. Oper. Res. 43(4), 1210–1232 (2018)

  10. Boţ, R.I., Nguyen, D.K.: The proximal alternating direction method of multipliers in the nonconvex setting: convergence analysis and rates. Math. Oper. Res. 45(2), 682–712 (2020)

  11. Burer, S., Vandenbussche, D.: A finite branch-and-bound algorithm for nonconvex quadratic programming via semidefinite relaxations. Math. Program. 113(2), 259–282 (2008)

  12. Chen, J., Burer, S.: Globally solving nonconvex quadratic programming problems via completely positive programming. Math. Program. Comput. 4(1), 33–52 (2012)

  13. Chen, Y., Davis, T.A., Hager, W.W., Rajamanickam, S.: Algorithm 887: CHOLMOD, supernodal sparse Cholesky factorization and update/downdate. ACM Trans. Math. Softw. (TOMS) 35(3), 1–14 (2008)

  14. Combettes, P.L., Pennanen, T.: Proximal methods for cohypomonotone operators. SIAM J. Control Optim. 43(2), 731–742 (2004)

  15. Cottle, R.W., Habetler, G., Lemke, C.: On classes of copositive matrices. Linear Algebra Appl. 3(3), 295–310 (1970)

  16. Davis, T.A.: Algorithm 849: a concise sparse Cholesky factorization package. ACM Trans. Math. Softw. (TOMS) 31(4), 587–591 (2005)

  17. Davis, T.A.: Direct Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics (2006)

  18. Davis, T.A., Hager, W.W.: Modifying a sparse Cholesky factorization. SIAM J. Matrix Anal. Appl. 20(3), 606–627 (1999)

  19. Davis, T.A., Hager, W.W.: Multiple-rank modifications of a sparse Cholesky factorization. SIAM J. Matrix Anal. Appl. 22(4), 997–1013 (2001)

  20. Davis, T.A., Hager, W.W.: Row modifications of a sparse Cholesky factorization. SIAM J. Matrix Anal. Appl. 26(3), 621–639 (2005)

  21. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2), 201–213 (2002)

  22. Dontchev, A.L., Rockafellar, R.T.: Implicit Functions and Solution Mappings, vol. 208. Springer, Berlin (2009)

  23. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. II. Springer, Berlin (2003)

  24. Ferreau, H.J., Kirches, C., Potschka, A., Bock, H.G., Diehl, M.: qpOASES: a parametric active-set algorithm for quadratic programming. Math. Program. Comput. 6(4), 327–363 (2014)

  25. Frison, G., Diehl, M.: HPIPM: a high-performance quadratic programming framework for model predictive control. IFAC-PapersOnLine 53(2), 6563–6569 (2020)

  26. Gertz, E.M., Wright, S.J.: Object-oriented software for quadratic programming. ACM Trans. Math. Softw. (TOMS) 29(1), 58–81 (2003)

  27. Gill, P.E., Wong, E.: Methods for convex and general quadratic programming. Math. Program. Comput. 7(1), 71–112 (2015)

  28. Golub, G.H., Van Loan, C.F.: Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, Baltimore (2013)

  29. Gould, N., Scott, J.: A note on performance profiles for benchmarking software. ACM Trans. Math. Softw. (TOMS) 43(2), 1–5 (2016)

  30. Gould, N.I., Orban, D., Toint, P.L.: GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization. ACM Trans. Math. Softw. (TOMS) 29(4), 353–372 (2003)

  31. Gould, N.I., Orban, D., Toint, P.L.: CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Comput. Optim. Appl. 60(3), 545–557 (2015)

  32. Gurobi Optimization, LLC: Gurobi Optimizer Reference Manual (2018). http://www.gurobi.com

  33. Hermans, B.: LADEL: quasidefinite sparse LDL factorization package with rank 1 updates and (symmetric) row/column additions and deletes (2022). https://doi.org/10.5281/zenodo.5939513

  34. Hermans, B., Themelis, A., Patrinos, P.: QPALM: a Newton-type proximal augmented Lagrangian method for quadratic programs. In: 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 4325–4330 (2019)

  35. Hermans, B., Themelis, A., Patrinos, P.: QPALM (2022). https://doi.org/10.5281/zenodo.5939473

  36. Iusem, A.N., Pennanen, T., Svaiter, B.F.: Inexact variants of the proximal point algorithm without monotonicity. SIAM J. Optim. 13(4), 1080–1097 (2003)

  37. Knyazev, A.V.: Toward the optimal preconditioned eigensolver: locally optimal block preconditioned conjugate gradient method. SIAM J. Sci. Comput. 23(2), 517–541 (2001)

  38. Kong, W., Melo, J.G., Monteiro, R.D.: Complexity of a quadratic penalty accelerated inexact proximal point method for solving linearly constrained nonconvex composite programs. SIAM J. Optim. 29(4), 2566–2593 (2019)

  39. Li, G., Pong, T.K.: Global convergence of splitting methods for nonconvex composite optimization. SIAM J. Optim. 25(4), 2434–2460 (2015)

  40. Lin, Q., Ma, R., Xu, Y.: Inexact proximal-point penalty methods for non-convex optimization with non-convex constraints. arXiv preprint arXiv:1908.11518 (2019)

  41. Luo, Z.Q., Tseng, P.: Error bounds and convergence analysis of feasible descent methods: a general approach. Ann. Oper. Res. 46(1), 157–178 (1993)

  42. Maros, I., Mészáros, C.: A repository of convex quadratic programming problems. Optim. Methods Softw. 11(1–4), 671–681 (1999)

  43. Mészáros, C.: The BPMPD interior point solver for convex quadratic problems. Optim. Methods Softw. 11(1–4), 431–449 (1999)

  44. MOSEK ApS: MOSEK Optimization Toolbox for MATLAB. User's Guide and Reference Manual, Version 9.2.22 (2019). https://docs.mosek.com/8.0/toolbox/index.html

  45. Nocedal, J., Wright, S.: Numerical Optimization. Springer Science & Business Media, Berlin (2006)

  46. Patrinos, P., Sarimveis, H.: A new algorithm for solving convex parametric quadratic programs based on graphical derivatives of solution mappings. Automatica 46(9), 1405–1418 (2010)

  47. Polyak, B.T.: Introduction to Optimization. Optimization Software, Inc., Publications Division, New York (1987)

  48. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1(2), 97–116 (1976)

  49. Rockafellar, R.T., Wets, R.J.: Variational Analysis, vol. 317. Springer Science & Business Media, Berlin (2011)

  50. Ruiz, D.: A scaling algorithm to equilibrate both rows and columns norms in matrices. Tech. rep., Rutherford Appleton Laboratory (2001)

  51. Sherali, H.D., Tuncbilek, C.H.: A reformulation-convexification approach for solving nonconvex quadratic programming problems. J. Global Optim. 7(1), 1–31 (1995)

  52. Stellato, B., Banjac, G., Goulart, P., Bemporad, A., Boyd, S.: OSQP: an operator splitting solver for quadratic programs. Math. Program. Comput. 12, 637–672 (2020)

  53. Sun, T., Jiang, H., Cheng, L., Zhu, W.: A convergence framework for inexact nonconvex and nonsmooth algorithms and its applications to several iterations. arXiv preprint arXiv:1709.04072 (2017)

  54. Themelis, A., Ahookhosh, M., Patrinos, P.: On the acceleration of forward-backward splitting via an inexact Newton method. In: Luke, R., Bauschke, H., Burachik, R. (eds.) Splitting Algorithms, Modern Operator Theory, and Applications. Springer, Berlin (2019)

  55. Themelis, A., Patrinos, P.: Douglas–Rachford splitting and ADMM for nonconvex optimization: tight convergence results. SIAM J. Optim. 30(1), 149–181 (2020)

  56. Vanderbei, R.J.: Symmetric quasidefinite matrices. SIAM J. Optim. 5(1), 100–113 (1995)

  57. Wächter, A., Biegler, L.T.: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 106(1), 25–57 (2006)

  58. Yannakakis, M.: Computing the minimum fill-in is NP-complete. SIAM J. Algebra. Disc. Methods 2(1), 77–79 (1981)

  59. Ye, Y.: On affine scaling algorithms for nonconvex quadratic programming. Math. Program. 56(1–3), 285–300 (1992)

Acknowledgements

The authors are grateful to the associate editor and the anonymous reviewers for their careful reading and insightful comments, which helped improve the paper.

Funding

The work of Ben Hermans was supported by the KU Leuven-BOF PFV/10/002 Centre of Excellence: Optimization in Engineering (OPTEC), by project G0C4515N of the Research Foundation–Flanders (FWO–Flanders), by the Flanders Make ICON project "Avoidance of collisions and obstacles in narrow lanes", and by the KU Leuven Research project C14/15/067: B-spline based certificates of positivity with applications in engineering.

The work of Andreas Themelis was supported by the JSPS KAKENHI grant number JP21K17710.

The work of Panagiotis Patrinos was supported by the Research Foundation Flanders (FWO) research projects G081222N, G086518N, G086318N, and G0A0920N; the Research Council KU Leuven C1 project No. C14/18/068; the Fonds de la Recherche Scientifique–FNRS and the Fonds Wetenschappelijk Onderzoek–Vlaanderen under EOS project No. 30468160 (SeLMA); and the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 953348.

Author information

Corresponding author

Correspondence to Panagiotis Patrinos.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Proof of Theorem 2.2

Proof (Inexact nonconvex PP [53, §4.1]). The proximal inequality

$$\begin{aligned} \varphi(x^{k+1}) + \tfrac{1}{2}\Vert x^{k+1}-x^k-e^k\Vert_{\varSigma_{\mathsf{x}}^{-1}}^2 \le \varphi(x^k) + \tfrac{1}{2}\Vert e^k\Vert_{\varSigma_{\mathsf{x}}^{-1}}^2, \end{aligned}$$

cf. (1.3), yields

$$\begin{aligned} \varphi(x^{k+1}) + \tfrac{1}{4}\Vert x^{k+1}-x^k\Vert_{\varSigma_{\mathsf{x}}^{-1}}^2 &\le \varphi(x^{k+1}) + \tfrac{1}{2}\Vert x^{k+1}-x^k-e^k\Vert_{\varSigma_{\mathsf{x}}^{-1}}^2 + \tfrac{1}{2}\Vert e^k\Vert_{\varSigma_{\mathsf{x}}^{-1}}^2 \\ &\le \varphi(x^k) + \Vert e^k\Vert_{\varSigma_{\mathsf{x}}^{-1}}^2, \end{aligned}$$
(A.1)

proving assertions 2.2(ii) and 2.2(iv), and similarly 2.2(i) follows by invoking [47, Lem. 2.2.2]. Next, let \((x^k)_{k\in K}\) be a subsequence converging to a point \(x^\star \); then, it also holds that \((x^{k+1})_{k\in K}\) converges to \(x^\star \) owing to assertion 2.2(ii). From the proximal inequality (1.3) we have

$$\begin{aligned} \varphi(x^{k+1}) + \tfrac{1}{2}\Vert x^{k+1}-x^k-e^k\Vert_{\varSigma_{\mathsf{x}}^{-1}}^2 \le \varphi(x^\star) + \tfrac{1}{2}\Vert x^\star-x^k-e^k\Vert_{\varSigma_{\mathsf{x}}^{-1}}^2, \end{aligned}$$

so that passing to the limit for \(K\ni k\rightarrow \infty \) we obtain \(\limsup _{k\in K}\varphi (x^{k+1})\le \varphi (x^\star )\). In fact, equality holds since \(\varphi \) is lower semicontinuous; hence from assertion 2.2(i) we conclude that \(\varphi (x^{k+1})\rightarrow \varphi (x^\star )\) as \(k\rightarrow \infty \), and in turn, by the arbitrariness of \(x^\star \), \(\varphi \) is constant and equal to this limit on the whole set of cluster points. To conclude the proof of assertion 2.2(iii), observe that the inclusion \(\varSigma_{\mathsf{x}}^{-1}(x^k+e^k-x^{k+1}) \in \hat{\partial }\varphi (x^{k+1})\), cf. (1.4), implies that

$$\begin{aligned} \operatorname{dist}\bigl(0,\partial \varphi(x^{k+1})\bigr) \le \operatorname{dist}\bigl(0,\hat{\partial }\varphi(x^{k+1})\bigr) \le \Vert \varSigma_{\mathsf{x}}^{-1}\Vert \left( \Vert x^k-x^{k+1}\Vert + \Vert e^k\Vert \right), \end{aligned}$$
(A.2)

and with limiting arguments (recall that \(\lim _{k\in K}\varphi (x^k)=\varphi (\lim _{k\in K}x^k)\)) the claimed stationarity of the cluster points is obtained.
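The middle step of (A.1) rests on the elementary bound \(\Vert a+b\Vert_W^2 \le 2\Vert a\Vert_W^2 + 2\Vert b\Vert_W^2\) in the weighted norm, applied with \(a = x^{k+1}-x^k-e^k\), \(b = e^k\), and \(W = \varSigma_{\mathsf{x}}^{-1}\), i.e. \(\tfrac14\Vert d\Vert_W^2 \le \tfrac12\Vert d-e\Vert_W^2 + \tfrac12\Vert e\Vert_W^2\). A quick numerical sanity check of this inequality (illustrative only; a randomly drawn positive diagonal stands in for \(\varSigma_{\mathsf{x}}^{-1}\)):

```python
import random

def wnorm2(v, w):
    # squared weighted norm ||v||_W^2 for the diagonal weight W = diag(w)
    return sum(wi * vi * vi for wi, vi in zip(w, v))

random.seed(0)
n = 5
for _ in range(1000):
    w = [random.uniform(0.1, 10.0) for _ in range(n)]  # diagonal of Sigma_x^{-1}
    d = [random.uniform(-1.0, 1.0) for _ in range(n)]  # stands for x^{k+1} - x^k
    e = [random.uniform(-1.0, 1.0) for _ in range(n)]  # stands for the error e^k
    de = [di - ei for di, ei in zip(d, e)]              # d - e
    lhs = 0.25 * wnorm2(d, w)
    rhs = 0.5 * wnorm2(de, w) + 0.5 * wnorm2(e, w)
    assert lhs <= rhs + 1e-12  # (1/4)||d||_W^2 <= (1/2)||d-e||_W^2 + (1/2)||e||_W^2
```

The bound holds for any positive definite weight, not just diagonal ones; the diagonal choice here merely keeps the check dependency-free.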

About this article

Cite this article

Hermans, B., Themelis, A. & Patrinos, P. QPALM: a proximal augmented Lagrangian method for nonconvex quadratic programs. Math. Prog. Comp. 14, 497–541 (2022). https://doi.org/10.1007/s12532-022-00218-0
