1 Introduction

In this paper we consider the linearly constrained convex minimization model with an objective function that is the sum of multiple separable functions and a coupled quadratic function:

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{{\mathbf{x}}}\,}}} &{} f({{\,\mathrm{{\mathbf{x}}}\,}}) = \frac{1}{2}{{\,\mathrm{{\mathbf{x}}}\,}}^T {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}+ {{\,\mathrm{{\mathbf{c}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}\\ \text{ s.t. }&{} \sum \limits _{i=1}^{p}{{\,\mathrm{\mathbf{A}}\,}}_i{{\,\mathrm{{\mathbf{x}}}\,}}_i ={{\,\mathrm{{\mathbf{b}}}\,}}\\ &{}{{\,\mathrm{{\mathbf{x}}}\,}}\in {{\,\mathrm{\mathcal {X}}\,}}\end{array} \end{aligned}$$
(1)

where \({{\,\mathrm{\mathbf{H}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{n\times n}\) is a symmetric positive semidefinite matrix, \({{\,\mathrm{{\mathbf{c}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^n\) is a vector, and the problem parameters are the matrix \({{\,\mathrm{\mathbf{A}}\,}}=[{{\,\mathrm{\mathbf{A}}\,}}_1,\dots ,{{\,\mathrm{\mathbf{A}}\,}}_p]\), with \({{\,\mathrm{\mathbf{A}}\,}}_i\in {{\,\mathrm{\mathbb {R}}\,}}^{m\times d_i}\), \(i = 1,2,\dots , p\), \(\sum _{i=1}^{p} d_i = n\), and the vector \({{\,\mathrm{{\mathbf{b}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^m\). The constraint set \(\mathcal X\) is the Cartesian product of possibly non-convex, real, closed, nonempty sets, \({\mathcal X} = {\mathcal X_1} \times \dots \times {\mathcal X_p}\), where \({{{\,\mathrm{{\mathbf{x}}}\,}}_i\in \mathcal X_i} \subseteq {{\,\mathrm{\mathbb {R}}\,}}^{d_i}\).

Problem (1) naturally arises from applications such as machine and statistical learning, image processing, portfolio management, tensor decomposition, matrix completion or decomposition, manifold optimization, data clustering and many other problems of practical importance. To solve problem (1), we consider in particular a randomly assembled multi-block and cyclic alternating direction method of multipliers (RAC-ADMM), a novel algorithm with which we hope to mitigate the problem of slow convergence and divergence issues of the classical alternating direction method of multipliers (ADMM) when applied to problems with cross-block coupled variables.

ADMM was originally proposed in the 1970s [31, 32] and, after a long period of relatively little attention, has recently gained popularity for a broad spectrum of applications [28, 41, 44, 57, 67]. Problems successfully solved by ADMM range from classical linear programming (LP), semidefinite programming (SDP) and quadratically constrained quadratic programming (QCQP), applied to partial differential equations, mechanics, image processing, statistical learning, computer vision and similar problems (for examples see [10, 39, 45, 53, 58, 70]), to emerging areas such as deep learning [71], medical treatment [81] and social networking [2]. ADMM has been shown to be a good choice for problems where high accuracy is not a requirement but a “good enough” solution needs to be found quickly.

Cyclic multi-block ADMM (C-ADMM) is an iterative algorithm that embeds a Gauss-Seidel decomposition into each iteration of the augmented Lagrangian method (ALM) [36, 59]. It consists of a cyclic update of the blocks of primal variables, \({{\,\mathrm{{\mathbf{x}}}\,}}_i\in {{\,\mathrm{\mathcal {X}}\,}}_{i}\), \({{\,\mathrm{{\mathbf{x}}}\,}}=({{\,\mathrm{{\mathbf{x}}}\,}}_1,\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_p)\), followed by a dual-ascent-type update of the variable \({{\,\mathrm{{\mathbf{y}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^m\), i.e.,

$$\begin{aligned} \text{ C-ADMM }:=\left\{ \begin{array}{l} {{\,\mathrm{{\mathbf{x}}}\,}}_1^{k+1}={\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{x}}}\,}}_1\in {{\,\mathrm{\mathcal {X}}\,}}_1}} \{L_{\beta }({{\,\mathrm{{\mathbf{x}}}\,}}_1,{{\,\mathrm{{\mathbf{x}}}\,}}_2^k,{{\,\mathrm{{\mathbf{x}}}\,}}_3^k,\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_p^k;{{\,\mathrm{{\mathbf{y}}}\,}}^k)\}, \\ \vdots \\ {{\,\mathrm{{\mathbf{x}}}\,}}_p^{k+1}={\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{x}}}\,}}_p\in {{\,\mathrm{\mathcal {X}}\,}}_p}} \{L_{\beta } ({{\,\mathrm{{\mathbf{x}}}\,}}_1^{k+1},{{\,\mathrm{{\mathbf{x}}}\,}}_2^{k+1},{{\,\mathrm{{\mathbf{x}}}\,}}_3^{k+1},\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_p;{{\,\mathrm{{\mathbf{y}}}\,}}^k)\},\\ {{\,\mathrm{{\mathbf{y}}}\,}}^{k+1}={{\,\mathrm{{\mathbf{y}}}\,}}^k -\beta (\sum _{i=1}^{p}{{\,\mathrm{\mathbf{A}}\,}}_i{{\,\mathrm{{\mathbf{x}}}\,}}_i^{k+1}- {{\,\mathrm{{\mathbf{b}}}\,}}) \end{array}\right. \end{aligned}$$
(2)

where \(\beta > 0\) is a penalty parameter of the Augmented Lagrangian function \( {L_{\beta }} \),

$$\begin{aligned} L_{\beta } ({{\,\mathrm{{\mathbf{x}}}\,}}_1,\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_p;{{\,\mathrm{{\mathbf{y}}}\,}}) := f({{\,\mathrm{{\mathbf{x}}}\,}}) -{{\,\mathrm{{\mathbf{y}}}\,}}^T\left( \sum \limits _{i=1}^{p}{{\,\mathrm{\mathbf{A}}\,}}_i{{\,\mathrm{{\mathbf{x}}}\,}}_i -{{\,\mathrm{{\mathbf{b}}}\,}}\right) + \frac{\beta }{2}\big \Vert \sum _{i=1}^p {{\,\mathrm{\mathbf{A}}\,}}_i{{\,\mathrm{{\mathbf{x}}}\,}}_i -{{\,\mathrm{{\mathbf{b}}}\,}}\big \Vert ^2. \end{aligned}$$
(3)

Note that the classical ADMM [31, 32] admits only optimization problems that are separable in blocks of variables and with \(p=2\).
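To make scheme (2)-(3) concrete, the following is a minimal sketch (in Python/NumPy, with function and variable names of our own choosing) of C-ADMM for an equality-constrained QP with unconstrained blocks; since the objective is quadratic, each block update is the exact minimizer of \(L_\beta\) over that block, obtained by one Newton step in the block coordinates.

```python
import numpy as np

def cyclic_admm_qp(H, c, A, b, blocks, beta=1.0, iters=100):
    """One possible realization of C-ADMM (2)-(3) for
    min 0.5*x'Hx + c'x  s.t.  Ax = b,  with unconstrained blocks."""
    n, m = H.shape[0], A.shape[0]
    x, y = np.zeros(n), np.zeros(m)
    S = H + beta * A.T @ A                      # Hessian of L_beta in x
    for _ in range(iters):
        for idx in blocks:                      # Gauss-Seidel sweep, fixed block order
            # gradient of L_beta w.r.t. x at the current iterate
            g = H @ x + c - A.T @ y + beta * A.T @ (A @ x - b)
            # exact minimization over block idx (others fixed): one Newton step
            x[idx] -= np.linalg.solve(S[np.ix_(idx, idx)], g[idx])
        y -= beta * (A @ x - b)                 # dual ascent update
    return x, y

# fixed cyclic blocks, e.g. p equally sized blocks of consecutive variables:
# blocks = np.array_split(np.arange(n), p)
```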

Another variant of multi-block ADMM was suggested in [5], where the authors introduce the distributed multi-block ADMM (D-ADMM) for separable problems. The method creates a Dantzig-Wolfe-Benders decomposition structure and sequentially solves a “master” problem followed by distributed multi-block “slave” problems. It converts the multi-block problem into an equivalent two-block problem via variable splitting [6] and performs a separate augmented Lagrangian minimization over each \({{\,\mathrm{{\mathbf{x}}}\,}}_i\). The method assumes that the objective function is separable across blocks, \(f({{\,\mathrm{{\mathbf{x}}}\,}})=\sum _{i}f_i({{\,\mathrm{{\mathbf{x}}}\,}}_i) +{{\,\mathrm{{\mathbf{c}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}\), and has no convergence guarantee for problems with non-separable objective functions.

$$\begin{aligned} \text{ D-ADMM }:=\left\{ \begin{array}{l} \hbox {Update }{{\,\mathrm{{\mathbf{x}}}\,}}_i, i=1,\dots ,p\\ \quad {{\,\mathrm{{\mathbf{x}}}\,}}_i^{k+1}={\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{x}}}\,}}_i\in {{\,\mathrm{\mathcal {X}}\,}}_i}} f_i({{\,\mathrm{{\mathbf{x}}}\,}}_i)-({{\,\mathrm{{\mathbf{y}}}\,}}^k)^T({{\,\mathrm{\mathbf{A}}\,}}_i{{\,\mathrm{{\mathbf{x}}}\,}}_i-{{\,\mathrm{\varvec{\lambda }}\,}}_i^k) + \frac{\beta }{2}\Vert A_i{{\,\mathrm{{\mathbf{x}}}\,}}_i-{{\,\mathrm{\varvec{\lambda }}\,}}_i^k\Vert ^2 \\ \hbox {Update }{{\,\mathrm{\varvec{\lambda }}\,}}_i, i=1,\dots ,p\\ \quad {{\,\mathrm{\varvec{\lambda }}\,}}_i^{k+1}={{\,\mathrm{\mathbf{A}}\,}}_i{{\,\mathrm{{\mathbf{x}}}\,}}_i^{k+1}-\frac{1}{p}\big (\sum _{j=1}^p{{\,\mathrm{\mathbf{A}}\,}}_j{{\,\mathrm{{\mathbf{x}}}\,}}_j^{k+1}-{{\,\mathrm{{\mathbf{b}}}\,}}\big )\\ {{\,\mathrm{{\mathbf{y}}}\,}}^{k+1}={{\,\mathrm{{\mathbf{y}}}\,}}^k -\frac{\beta }{p}(\sum _{i=1}^{p}{{\,\mathrm{\mathbf{A}}\,}}_i{{\,\mathrm{{\mathbf{x}}}\,}}_i^{k+1}- {{\,\mathrm{{\mathbf{b}}}\,}}). \end{array}\right. \end{aligned}$$
(4)

Because of the variable splitting, the distributed ADMM approach based on (4) increases the number of variables and constraints in the problem, which in turn makes the algorithm not very efficient for large p in practice.
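For illustration, here is a minimal sketch of scheme (4) for a block-separable QP, \(f_i(\mathbf{x}_i)=\tfrac{1}{2}\mathbf{x}_i^T\mathbf{H}_i\mathbf{x}_i+\mathbf{c}_i^T\mathbf{x}_i\), with unconstrained blocks; under these assumptions the per-block solves are closed-form and could run in parallel (names and defaults are ours).

```python
import numpy as np

def distributed_admm_qp(Hs, cs, As, b, beta=1.0, iters=100):
    # D-ADMM (4) for f(x) = sum_i (0.5*x_i'H_i x_i + c_i'x_i), s.t. sum_i A_i x_i = b.
    # Hs, cs, As are lists of per-block data; the blocks are unconstrained here.
    p, m = len(As), b.shape[0]
    xs = [np.zeros(A_i.shape[1]) for A_i in As]
    lams = [np.zeros(m) for _ in range(p)]       # splitting variables lambda_i
    y = np.zeros(m)
    for _ in range(iters):
        for i in range(p):                       # independent block solves (parallelizable)
            rhs = As[i].T @ (y + beta * lams[i]) - cs[i]
            xs[i] = np.linalg.solve(Hs[i] + beta * As[i].T @ As[i], rhs)
        resid = sum(As[i] @ xs[i] for i in range(p)) - b
        for i in range(p):                       # lambda_i update of (4)
            lams[i] = As[i] @ xs[i] - resid / p
        y -= (beta / p) * resid                  # averaged dual update of (4)
    return np.concatenate(xs), y
```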

The classical two-block ADMM (Eq. 2 with \(p=2\)) and its convergence have been extensively studied in the literature (e.g. [20, 22, 31, 35, 54]). However, the two-block variable structure of ADMM still limits the practical computational efficiency of the method, because a factorization of a large matrix is needed at least once even for linear and convex quadratic programming (e.g., [45, 65]). This drawback may be overcome by enforcing a multi-block structure of the decision variables in the original optimization problem. Indeed, due to the simplicity and practical implications of a direct extension of ADMM to the multi-block variant (2), there has recently been active research on developing ADMM variants with provable convergence, competitive numerical efficiency and iteration simplicity (e.g. [17, 35, 37, 58]), and on proving global convergence under some special conditions (e.g. [13, 24, 46, 47]). Unfortunately, in general the cyclic multi-block ADMM with more than two blocks is not guaranteed to converge even for solving a single system of linear equations, which settled a long-standing open question [15].

Moreover, in contrast to the work on separable convex problems, little work has been done on understanding the properties of multi-block ADMM for (1) with a non-separable convex quadratic or even non-convex objective function. One of the rare works that addresses coupled objectives is [17], where the authors describe convergence properties for non-separable convex minimization problems. A good description of the difficulties of obtaining a rigorous proof is given in [23]. For non-convex problems, a rigorous analysis of ADMM is by itself a very hard problem, with only a few works available for generalized, but still limited (in terms of the objective function), separable problems. For examples see [38, 40, 76, 77, 82].

Randomization is commonly used to reduce the information and computation complexity of solving large-scale optimization problems. Typical examples include Q-learning or reinforcement learning, stochastic gradient descent (SGD) for deep learning, randomized block coordinate descent (BCD) for convex programming, and so on. Randomization of ADMM has recently become a matter of interest as well. In [68] the authors devised the randomly permuted multi-block ADMM (RP-ADMM) algorithm, in which in every cyclic loop the blocks are solved or updated in a randomly permuted order. Surprisingly, the algorithm eliminated the divergence example constructed in [15], and RP-ADMM was shown to converge linearly in expectation for solving any square system of linear equations with any number of blocks. Subsequently, in [17] the authors focused on solving linearly constrained convex optimization with a coupled convex quadratic objective, and proved convergence in expectation of RP-ADMM for non-separable multi-block convex quadratic programming, which is a much broader class of computational problems.

$$\begin{aligned} \text{ RP-ADMM }:=\left\{ \begin{array}{l} \hbox {Randomly permute } (1,2,...,p) \hbox { into } (\sigma _1,\sigma _2,...,\sigma _p),\\ \hbox {then solve: }\\ \quad {{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _1}^{k+1}={\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _1}\in {{\,\mathrm{\mathcal {X}}\,}}_{\sigma _1}}} \{L_{\beta }({{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _1},{{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _2}^k,x_{\sigma _3}^k,\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _p}^k,{{\,\mathrm{{\mathbf{y}}}\,}}^k)\}, \\ \quad \vdots \\ \quad {{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _p}^{k+1}={\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _p}\in {{\,\mathrm{\mathcal {X}}\,}}_{\sigma _p}}} \{L_{\beta } ({{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _1}^{k+1},{{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _2}^{k+1},x_{\sigma _3}^{k+1}\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _p},{{\,\mathrm{{\mathbf{y}}}\,}}^k)\},\\ \quad {{\,\mathrm{{\mathbf{y}}}\,}}^{k+1}={{\,\mathrm{{\mathbf{y}}}\,}}^k -\beta ({{\,\mathrm{\mathbf{A}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}^{k+1}- {{\,\mathrm{{\mathbf{b}}}\,}}). \end{array}\right. \end{aligned}$$
(5)

The main goal of the work proposed in this paper is twofold. First, we add more randomness into ADMM by developing a randomly assembled cyclic ADMM (RAC-ADMM) in which the decision variables in each block are randomly assembled. In contrast to RP-ADMM, in which the variables in each block are fixed and unchanged, RAC-ADMM randomly assembles new blocks at each cyclic loop. It can be viewed as a decomposition-coordination procedure that decomposes the problem in a random fashion and combines the solutions to small local sub-problems to find the solution to the original large-scale problem. RAC-ADMM, in line with RP-ADMM, admits multiple blocks with possibly cross-block coupled variables and updates the blocks in cyclic order. The idea of re-constructing block variables at each cyclic loop was first mentioned in [51], where the authors present a framework for solving discrete optimization problems which decomposes a problem into sub-problems by randomly (without replacement) grouping variables into subsets. Each subset is then used to construct a sub-problem by treating the variables outside the subset as fixed, and the sub-problems are then solved in a cyclic fashion. Subsets are constructed once per iteration. The algorithm presented in that paper is a variant of the block coordinate descent (BCD) method with an added methodology to handle a small number of special constraints, and can be seen as a special case of RAC-ADMM. In the current paper we discuss the theoretical properties of RAC-ADMM and show when the additional random assembling helps and when it hurts.

Secondly, using the theoretical guidance on RAC-ADMM, we conduct multiple numerical tests on solving both randomly generated and benchmark quadratic optimization problems, which include continuous and binary graph-partitioning and quadratic assignment problems, and selected machine learning problems such as linear regression, LASSO, elastic-net, and support vector machine. Our numerical tests show that RAC-ADMM, with a systematic variable-grouping strategy (designating a set of variables to always belong to the same block), can significantly improve the computational efficiency of solving most quadratic optimization problems.

The current paper is organized as follows. In the next section we present the RAC-ADMM algorithm together with theoretical results on its convergence. Next we discuss the notion of special grouping, that is, selecting variables in a less-random fashion by analyzing the problem structure, and the use of a partial Lagrangian approach, both of which improve the convergence speed of the algorithm. In Sect. 3, we present RACQP, a solver we built that uses RAC-ADMM to address linearly constrained quadratic problems. The solver is implemented in Matlab [50] and the source code is available online [61]. The solver’s performance is investigated in Sect. 4, where we compare RACQP with the commercial solvers Gurobi [34] and Mosek [55], and with the academic OSQP, an ADMM-based solver developed by [65]. We also consider machine learning problems and compare our general-purpose solver with tailored heuristic solutions, Glmnet [30, 64] and LIBSVM [14]. A summary of our contributions with concluding remarks is given in Sect. 5.

2 RAC-ADMM

In this section we describe our randomly assembled cyclic alternating direction method of multipliers (RAC-ADMM). We start by presenting the algorithm, then analyze its convergence for linearly constrained quadratic problems, and finalize the section by introducing accelerated procedures that improve the convergence speed of RAC-ADMM by means of a grouping strategy for highly coupled variables and a partial Lagrangian approach. Note that although our analysis of convergence is restricted to quadratic and/or special classes of problems, it serves as a good indicator of the convergence of the algorithm in the more general case.

2.1 The algorithm

RAC-ADMM is an algorithm applied to solve convex problems of the form (1). The algorithm addresses equality and inequality constraints separately, with the latter converted into equalities using slack variables, \({{\,\mathrm{{\mathbf{s}}}\,}}\):

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{{\mathbf{x}}}\,}},{{\,\mathrm{{\mathbf{s}}}\,}}} &{} f({{\,\mathrm{{\mathbf{x}}}\,}}) = \frac{1}{2}{{\,\mathrm{{\mathbf{x}}}\,}}^T {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}+ {{\,\mathrm{{\mathbf{c}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}\\ \text{ s.t. }&{} {{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}={{\,\mathrm{{\mathbf{b}}}\,}}_{eq} \\ &{} {{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}+ {{\,\mathrm{{\mathbf{s}}}\,}}= {{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\\ &{}{{\,\mathrm{{\mathbf{x}}}\,}}\in {{\,\mathrm{\mathcal {X}}\,}}, {{\,\mathrm{{\mathbf{s}}}\,}}\ge {{\,\mathrm{{\mathbf{0}}}\,}}\end{array} \end{aligned}$$
(6)

where matrix \({{\,\mathrm{\mathbf{A}}\,}}_{eq}\in {{\,\mathrm{\mathbb {R}}\,}}^{m_e\times n}\) and vector \({{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\in {{\,\mathrm{\mathbb {R}}\,}}^{m_e}\) describe equality constraints and matrix \({{\,\mathrm{\mathbf{A}}\,}}_{ineq}\in {{\,\mathrm{\mathbb {R}}\,}}^{m_i\times n}\) and the vector \({{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\in {{\,\mathrm{\mathbb {R}}\,}}^{m_i}\) describe inequality constraints. Primal variables \({{\,\mathrm{{\mathbf{x}}}\,}}\in {{\,\mathrm{\mathcal {X}}\,}}\) are in constraint set \(\mathcal X\subseteq {{\,\mathrm{\mathbb {R}}\,}}^{n}\) which is the Cartesian product of possibly non-convex real, closed, nonempty sets, and slack variables \({{\,\mathrm{{\mathbf{s}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{m_i}_{+}\). The augmented Lagrangian function used by RAC-ADMM is then defined by

$$\begin{aligned} \begin{array}{cl} L_{\beta } ({{\,\mathrm{{\mathbf{x}}}\,}};{{\,\mathrm{{\mathbf{s}}}\,}};{{\,\mathrm{{\mathbf{y}}}\,}}_{eq};{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}) := &{}f({{\,\mathrm{{\mathbf{x}}}\,}}) -{{\,\mathrm{{\mathbf{y}}}\,}}_{eq}^T\bigl ({{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\bigr ) -{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}^T\bigl ({{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}+{{\,\mathrm{{\mathbf{s}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\bigr )\\ &{}+\, \frac{\beta }{2} \big (\big \Vert {{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\big \Vert ^2 + \big \Vert {{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}+{{\,\mathrm{{\mathbf{s}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\big \Vert ^2\big ) \end{array}\nonumber \\ \end{aligned}$$
(7)

with dual variables \({{\,\mathrm{{\mathbf{y}}}\,}}_{eq}\in {{\,\mathrm{\mathbb {R}}\,}}^{m_e}\) and \({{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}\in {{\,\mathrm{\mathbb {R}}\,}}^{m_i}\), and penalty parameter \(\beta > 0\). In (6) we keep the inequality and equality constraint matrices separate so as to underline the separate slack-variable update step of (8), which has a closed-form solution described in more detail in Sect. 3.
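As a preview of that closed form (the full derivation is given in Sect. 3), minimizing (7) over \({{\,\mathrm{{\mathbf{s}}}\,}}\ge {{\,\mathrm{{\mathbf{0}}}\,}}\) with everything else fixed is separable and convex in each component, so under these assumptions it amounts to projecting the unconstrained minimizer onto the nonnegative orthant; a one-line sketch:

```python
import numpy as np

def slack_update(A_ineq, b_ineq, x, y_ineq, beta):
    # argmin over s >= 0 of L_beta with x and the multipliers fixed: the
    # objective is separable and convex in each s_j, so the constrained
    # minimizer is the unconstrained stationary point clipped at zero.
    return np.maximum(0.0, b_ineq - A_ineq @ x + y_ineq / beta)
```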

RAC-ADMM is an iterative algorithm that embeds a Gauss-Seidel decomposition into each iteration of the augmented Lagrangian method (ALM). It consists of a cyclic update of randomly constructed blocks\(^\dagger \) of primal variables, \({{\,\mathrm{{\mathbf{x}}}\,}}_i\in {{\,\mathrm{\mathcal {X}}\,}}_{i}\), followed by the update of the slack variables \({{\,\mathrm{{\mathbf{s}}}\,}}\) and a dual-ascent-type update of the Lagrange multipliers \({{\,\mathrm{{\mathbf{y}}}\,}}_{eq}\) and \({{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}\):

$$\begin{aligned} \begin{array}{rl} \text{ RAC-ADMM }:=&{}\left\{ \begin{array}{l} \text{ Randomly } \text{(without } \text{ replacement) } \text{ assemble } \text{ primal } \\ \text{ variables } {{\,\mathrm{{\mathbf{x}}}\,}}^\dagger \text{ into } p \text{ blocks, } {{\,\mathrm{{\mathbf{x}}}\,}}_i, i=1,\dots ,p, \text{ then } \text{ solve }:\\ \quad {{\,\mathrm{{\mathbf{x}}}\,}}_1^{k+1}={\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{x}}}\,}}_1\in {{\,\mathrm{\mathcal {X}}\,}}_1}} \{L_{\beta }({{\,\mathrm{{\mathbf{x}}}\,}}_1,{{\,\mathrm{{\mathbf{x}}}\,}}_2^k,\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_p^k;{{\,\mathrm{{\mathbf{s}}}\,}}^k;{{\,\mathrm{{\mathbf{y}}}\,}}_{eq}^k;{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}^k)\}, \\ \quad \vdots \\ \quad {{\,\mathrm{{\mathbf{x}}}\,}}_p^{k+1}={\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{x}}}\,}}_p\in {{\,\mathrm{\mathcal {X}}\,}}_p}} \{L_{\beta } ({{\,\mathrm{{\mathbf{x}}}\,}}_1^{k+1},{{\,\mathrm{{\mathbf{x}}}\,}}_2^{k+1},\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_p;{{\,\mathrm{{\mathbf{s}}}\,}}^k;{{\,\mathrm{{\mathbf{y}}}\,}}_{eq}^k;{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}^k)\},\\ \quad {{\,\mathrm{{\mathbf{s}}}\,}}^{k+1}={\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{s}}}\,}}\ge {{\,\mathrm{{\mathbf{0}}}\,}}}} \{L_{\beta } ({{\,\mathrm{{\mathbf{x}}}\,}}_1^{k+1},{{\,\mathrm{{\mathbf{x}}}\,}}_2^{k+1},\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_p^{k+1};{{\,\mathrm{{\mathbf{s}}}\,}};{{\,\mathrm{{\mathbf{y}}}\,}}_{eq}^k;{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}^k)\},\\ \quad {{\,\mathrm{{\mathbf{y}}}\,}}_{eq}^{k+1}={{\,\mathrm{{\mathbf{y}}}\,}}_{eq}^k -\beta ({{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}^{k+1} -{{\,\mathrm{{\mathbf{b}}}\,}}_{eq}),\\ \quad {{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}^{k+1}={{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}^k -\beta ({{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}^{k+1} +{{\,\mathrm{{\mathbf{s}}}\,}}^{k+1} -{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}).\\ \end{array} \right. \\ &{} \dagger \text{ Structure } \text{ of } \text{ a } \text{ problem, } \text{ if } \text{ known, } \text{ can } \text{ be } \text{ used } \text{ to } \text{ guide } \text{ grouping }\\ &{}\quad \text{ as } \text{ described } \text{ in } \text{ Sect. } 2.3.1 \end{array} \end{aligned}$$
(8)

Randomly assembled cyclic alternating direction method of multipliers (RAC-ADMM) can be seen as a generalization of cyclic ADMM: cyclic multi-block ADMM is a special case of RAC-ADMM in which the blocks are constructed at each iteration using a deterministic rule and optimized following a fixed block order. By the same analogy, RP-ADMM can be seen as a special case of RAC-ADMM in which the blocks are constructed using some predetermined rule and kept fixed at each iteration, but the sub-problems (i.e. blocks minimizing primal variables) are solved in a random order.
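To make the random-assembly step concrete, below is a minimal sketch of (8) restricted to the equality-constrained, unconstrained-variable case (9); every cyclic loop draws a fresh partition of the variables into p blocks. Replacing the re-partition by a fixed partition with a shuffled block order would recover RP-ADMM (5), and dropping the randomization altogether recovers C-ADMM (2). Function and parameter names are illustrative.

```python
import numpy as np

def rac_admm_qp(H, c, A, b, p, beta=1.0, iters=100, seed=0):
    # RAC-ADMM for min 0.5*x'Hx + c'x s.t. Ax = b (problem (9)): at every
    # cyclic loop the variables are re-assembled, uniformly at random and
    # without replacement, into p blocks of (roughly) equal size.
    rng = np.random.default_rng(seed)
    n, m = H.shape[0], A.shape[0]
    x, y = np.zeros(n), np.zeros(m)
    S = H + beta * A.T @ A
    for _ in range(iters):
        blocks = np.array_split(rng.permutation(n), p)   # random assembly step
        for idx in blocks:                               # Gauss-Seidel sweep
            g = H @ x + c - A.T @ y + beta * A.T @ (A @ x - b)
            x[idx] -= np.linalg.solve(S[np.ix_(idx, idx)], g[idx])
        y -= beta * (A @ x - b)                          # dual ascent update
    return x, y
```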

The main advantage of RAC-ADMM over other multi-block ADMM variants is its potential to significantly reduce primal and, especially, dual residuals, which are a common obstacle for applying multi-block ADMMs. Intuitively, switching variables between the blocks increases the chances of finding descent directions, which favors RAC-ADMM. The following example further explains this intuition.

Example 1

Consider the problem

$$\begin{aligned} \begin{array}{cl} \min \limits _{x,y,u,v} &{} x^2-xy+u^2-uv\\ \text{ s.t. }&{} (x,y,u,v)\ge 0\\ \end{array} \end{aligned}$$

Starting from the origin with two blocks, if we group \((x,u)\) and \((y,v)\) then we cannot minimize further. But if we group \((x,y)\) and \((u,v)\) then we find that the problem is unbounded from below. Thus, re-grouping the variables gives RAC-ADMM higher chances of finding (better) descent directions, which consequently leads to better performance on dual residuals.
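A quick numerical check of Example 1 (an illustrative snippet, not part of the original experiments): with blocks \(\{x,u\}\) and \(\{y,v\}\) no progress from the origin is possible, while the block \(\{x,y\}\) alone already exposes an unbounded descent direction.

```python
import numpy as np

def f(x, y, u, v):
    return x**2 - x*y + u**2 - u*v

grid = np.linspace(0.0, 10.0, 101)
# blocks {x,u} and {y,v}: minimizing over either block, with the other fixed
# at zero, cannot improve on f = 0 (the restriction is x^2 + u^2 or constant 0)
best_xu = min(f(a, 0.0, c, 0.0) for a in grid for c in grid)   # 0.0
best_yv = min(f(0.0, b, 0.0, d) for b in grid for d in grid)   # 0.0
# blocks {x,y} and {u,v}: along x = t/2, y = t the value is -t^2/4, unbounded below
t = 100.0
print(best_xu, best_yv, f(t / 2, t, 0.0, 0.0))                  # 0.0 0.0 -2500.0
```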

To illustrate this feature we ran a simple experiment in which we fix the number of iterations and check the final residuals of the aforementioned multi-block ADMM variants. In Table 1 we show the performance of the ADMMs when solving a simple quadratic problem with a single constraint, represented by a regularized Markowitz min–variance problem (defined in Sect. 4.1.3). Figure 1 gives insight into the evolution of both residuals over the iterations. From the figure, it is noticeable that both D-ADMM (Eq. 4) and RP-ADMM (Eq. 5) suffer from a very slow convergence speed, the main difference being that the latter gives a slightly lower error on the dual residual. Multi-block cyclic ADMM (Eq. 2) does not converge to a KKT point for any k, but oscillates around a very weak solution. RAC-ADMM converges to the KKT solution very quickly, with both residual errors below 10\(^{-8}\) in less than 40 iterations.

Table 1 Primal and dual residuals of the result returned by ADMM variants after k iterations for a randomly generated Markowitz min–variance problem
Fig. 1 Iteration evolution of primal and dual residuals of ADMM variants

2.2 Convergence of RAC-ADMM

This section concerns the convergence properties of RAC-ADMM when applied to unbounded (i.e. \({{\,\mathrm{{\mathbf{x}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^n\)) linear-equality constrained quadratic optimization problems. To simplify the notation, we use \({{\,\mathrm{\mathbf{A}}\,}}={{\,\mathrm{\mathbf{A}}\,}}_{eq}\) and \({{\,\mathrm{{\mathbf{b}}}\,}}={{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\).

$$\begin{aligned} \begin{array}{cl} \min \limits _{x} &{} \frac{1}{2}{{\,\mathrm{{\mathbf{x}}}\,}}^T {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}+ {{\,\mathrm{{\mathbf{c}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}\\ \text{ s.t. }&{} {{\,\mathrm{\mathbf{A}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}={{\,\mathrm{{\mathbf{b}}}\,}}\end{array} \end{aligned}$$
(9)

with \({{\,\mathrm{\mathbf{H}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{n\times n}, {{\,\mathrm{\mathbf{H}}\,}}\succeq 0\), \({{\,\mathrm{{\mathbf{c}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^n\), \({{\,\mathrm{\mathbf{A}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{m\times n}\), \({{\,\mathrm{{\mathbf{b}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^m\) and \({{\,\mathrm{{\mathbf{x}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^n\).

Convergence analysis of problems that include inequalities (bounds on variables and/or inequality constraints) is still an open question and will be addressed in our subsequent work.

2.2.1 Preliminaries

Double Randomness Interpretation Let \(\Gamma _{RAC(n,p)}\) denote all possible updating combinations for RAC with n variables and p blocks, and let \(\sigma _{RAC}\in \Gamma _{RAC(n,p)}\) denote one specific updating combination for RAC-ADMM. Then the total number of updating combinations for RAC-ADMM is given by

$$\begin{aligned} {|\Gamma _{RAC(n,p)}|=\dfrac{n!}{(s!)^{p}}} \end{aligned}$$

where \(s\in {{\,\mathrm{\mathbb {Z}}\,}}_+\) denotes the size of each block, with \(p\cdot s=n\).

RAC-ADMM could be viewed as a double-randomness procedure based on RP-ADMM with different block compositions. Let \(\sigma _{RP}\in \Gamma _{RP(p)}\) denote an updating combination of RP-ADMM with p blocks, where the variable composition in each block is fixed. Clearly, the total number of updating combinations for RP-ADMM is given by

$$\begin{aligned} {|\Gamma _{RP(p)}|=p!} \end{aligned}$$

the total number of possible updating orders of the p blocks. One may therefore consider that RAC-ADMM first randomly chooses a block composition and then applies RP-ADMM. Let \(\upsilon _i\in \Upsilon (n,p)\) denote one specific block composition, i.e. a partition of the n decision variables into p blocks, where \(\Upsilon (n,p)\) is the set of all possible block compositions. Then, the total number of all possible block compositions is given by

$$\begin{aligned} {|\Upsilon (n,p)|=\dfrac{|\Gamma _{RAC(n,p)}|}{|\Gamma _{RP(p)}|}=\dfrac{n!}{(s!)^{p}p!}} \end{aligned}$$

For convenience, in what follows let \(\Gamma _{RP(p),\upsilon _i}\) denote all possible updating orders with a fixed block composition \(\upsilon _i\). To further illustrate the relations of RP-ADMM and RAC-ADMM, consider the following simple example.

Example 2

Let \(n=6\), \(p=3\), so \(|\Gamma _{RP(3)}|=3!=6\), and the total number of block compositions or partitions is 15:

$$\begin{aligned} \begin{array}{cl} \upsilon _{i}\in \Upsilon (6,3)=\big \{&{} \{[x_1,x_2],[x_3,x_4],[x_5,x_6]\}, \{[x_1,x_2],[x_3,x_5],[x_4,x_6]\},\\ &{}\{[x_1,x_2],[x_3,x_6],[x_4,x_5]\}, \{[x_1,x_3],[x_2,x_4],[x_5,x_6]\}, \\ &{}\{[x_1,x_3],[x_2,x_5],[x_4,x_6]\}, \{[x_1,x_3],[x_2,x_6],[x_4,x_5]\}, \\ &{}\{[x_1,x_4],[x_2,x_3],[x_5,x_6]\}, \{[x_1,x_4],[x_2,x_5],[x_3,x_6]\}, \\ &{}\{[x_1,x_4],[x_2,x_6],[x_3,x_5]\}, \{[x_1,x_5],[x_2,x_3],[x_4,x_6]\}, \\ &{}\{[x_1,x_5],[x_2,x_4],[x_3,x_6]\}, \{[x_1,x_5],[x_2,x_6],[x_3,x_4]\}, \\ &{}\{[x_1,x_6],[x_2,x_3],[x_4,x_5]\}, \{[x_1,x_6],[x_2,x_4],[x_3,x_5]\},\\ &{}\{[x_1,x_6],[x_2,x_5],[x_3,x_4]\} \big \} \end{array} \end{aligned}$$

RAC-ADMM could be viewed as if, at each cyclic loop, the algorithm first selects a block composition \(\upsilon _i\) uniformly random from all possible 15 block compositions \(\Upsilon (n,p)\), and then performs RP-ADMM with the chosen specific block composition \(\upsilon _i\). In other words, RAC-ADMM then randomly selects \(\sigma \in \Gamma _{RP(p),\upsilon _i}\), which leads to a total of 90 possible updating combinations.
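A short brute-force check of these counts for Example 2 (illustrative code of our own), confirming 90 updating combinations, 6 update orders per composition, and 15 block compositions:

```python
from itertools import permutations
from math import factorial

n, p = 6, 3
s = n // p
n_rac = factorial(n) // factorial(s) ** p        # |Gamma_RAC(6,3)| = 90
n_rp = factorial(p)                              # |Gamma_RP(3)|    = 6
n_comp = n_rac // n_rp                           # |Upsilon(6,3)|   = 15

# enumerate ordered sequences of p unordered blocks of size s, then forget the order
seqs = {tuple(tuple(sorted(perm[i*s:(i+1)*s])) for i in range(p))
        for perm in permutations(range(n))}
comps = {frozenset(seq) for seq in seqs}
print(n_rac, n_rp, n_comp, len(seqs), len(comps))   # 90 6 15 90 15
```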

RAC-ADMM as a linear transformation Recall that the augmented Lagrangian function for (9) is given by

$$\begin{aligned} {L_{\beta }({{\,\mathrm{{\mathbf{x}}}\,}},{{\,\mathrm{{\mathbf{y}}}\,}})=\frac{1}{2}{{\,\mathrm{{\mathbf{x}}}\,}}^T{{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}+{{\,\mathrm{{\mathbf{c}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{y}}}\,}}^T({{\,\mathrm{\mathbf{A}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}})+\frac{1}{2}{\beta }||{{\,\mathrm{\mathbf{A}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}||^2.} \end{aligned}$$

Consider one specific update order generated by RAC, \(\sigma _{RAC}\in \Gamma _{RAC(n,p)}\). Note that we write \(\sigma \) instead of \(\sigma _{RAC}\) when there is no confusion. One possible update combination generated by RAC, \(\sigma = [\sigma _1,\dots ,\sigma _p]\), where \(\sigma _i\) is an index vector of size s, is as follows,

$$\begin{aligned} \text{ RAC-ADMM}_{k+1}=\left\{ \begin{array}{l} {{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _1}^{k+1}={\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _1}}} \{L_{\beta }({{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _1},{{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _2}^k,\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _p}^k;{{\,\mathrm{{\mathbf{y}}}\,}}^k)\}, \\ \vdots \\ {{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _p}^{k+1}={\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _p}}} \{L_{\beta }({{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _1}^{k+1},{{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _2}^{k+1},\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_{\sigma _p};{{\,\mathrm{{\mathbf{y}}}\,}}^k)\}, \\ {{\,\mathrm{{\mathbf{y}}}\,}}^{k+1}={{\,\mathrm{{\mathbf{y}}}\,}}^k -\beta ({{\,\mathrm{\mathbf{A}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}^{k+1} -{{\,\mathrm{{\mathbf{b}}}\,}}). \end{array}\right. \end{aligned}$$

For convenience, we follow the notation in [17] and [68, 69] to describe the iterative scheme of RAC-ADMM in matrix form. Let \({{\,\mathrm{\mathbf{L}}\,}}_{\sigma }\in {{\,\mathrm{\mathbb {R}}\,}}^{n\times n}\) be the block matrix whose \(s\times s\) blocks, indexed by rows \(\sigma _i\) and columns \(\sigma _j\), are defined as

$$\begin{aligned} ({{\,\mathrm{\mathbf{L}}\,}}_{\sigma })_{\sigma _i,\sigma _j}:={\left\{ \begin{array}{ll} {{\,\mathrm{\mathbf{H}}\,}}_{\sigma _i,\sigma _j} + \beta {{\,\mathrm{\mathbf{A}}\,}}^T_{\sigma _i}{{\,\mathrm{\mathbf{A}}\,}}_{\sigma _j}, &{} \quad i\ge j\\ {{\,\mathrm{{\mathbf{0}}}\,}}, &{} \quad \hbox {otherwise} \end{array}\right. } \end{aligned}$$
(10)

and let \({{\,\mathrm{\mathbf{R}}\,}}_{\sigma }\) be defined as

$$\begin{aligned} {{{\,\mathrm{\mathbf{R}}\,}}_{\sigma }:= {{\,\mathrm{\mathbf{L}}\,}}_\sigma - ({{\,\mathrm{\mathbf{H}}\,}}+ \beta {{\,\mathrm{\mathbf{A}}\,}}^T{{\,\mathrm{\mathbf{A}}\,}}).} \end{aligned}$$

By setting \({{\,\mathrm{{\mathbf{z}}}\,}}:=({{\,\mathrm{{\mathbf{x}}}\,}};{{\,\mathrm{{\mathbf{y}}}\,}})\), RAC-ADMM could be viewed as a linear system mapping iteration

$$\begin{aligned} {{{\,\mathrm{{\mathbf{z}}}\,}}^{k+1}:= {{\,\mathrm{\mathbf{M}}\,}}_{\sigma } {{\,\mathrm{{\mathbf{z}}}\,}}^{k}+\bar{{{\,\mathrm{\mathbf{L}}\,}}}^{-1}_{\sigma }\bar{{{\,\mathrm{{\mathbf{b}}}\,}}}} \end{aligned}$$

where

$$\begin{aligned} {{\,\mathrm{\mathbf{M}}\,}}_{\sigma }:= \bar{{{\,\mathrm{\mathbf{L}}\,}}}^{-1}_{\sigma }\bar{{{\,\mathrm{\mathbf{R}}\,}}}_{\sigma } \end{aligned}$$
(11)

and

$$\begin{aligned} {\begin{array}{ccc} \bar{{{\,\mathrm{\mathbf{L}}\,}}}_{\sigma }:=\begin{bmatrix}{{\,\mathrm{\mathbf{L}}\,}}_{\sigma } &{} {{\,\mathrm{{\mathbf{0}}}\,}}\\ \beta {{\,\mathrm{\mathbf{A}}\,}}&{} {{\,\mathrm{\mathbf{I}}\,}}\end{bmatrix} &{} \bar{{{\,\mathrm{\mathbf{R}}\,}}}_{\sigma }:=\begin{bmatrix}{{\,\mathrm{\mathbf{R}}\,}}_{\sigma } &{} {{\,\mathrm{\mathbf{A}}\,}}^T\\ {{\,\mathrm{{\mathbf{0}}}\,}}&{} {{\,\mathrm{\mathbf{I}}\,}}\end{bmatrix} &{} \bar{{{\,\mathrm{{\mathbf{b}}}\,}}}:=\begin{bmatrix}-{{\,\mathrm{{\mathbf{c}}}\,}}+\beta {{\,\mathrm{\mathbf{A}}\,}}^T{{\,\mathrm{{\mathbf{b}}}\,}}\\ \beta {{\,\mathrm{{\mathbf{b}}}\,}}\end{bmatrix} \end{array}} \end{aligned}$$

Define the matrix \({{\,\mathrm{\mathbf{Q}}\,}}\) by

$$\begin{aligned} {\begin{array}{ll} {{\,\mathrm{\mathbf{Q}}\,}}:={{{\,\mathrm{\mathbb {E}}\,}}_{\sigma }}{[ {{\,\mathrm{\mathbf{L}}\,}}^{-1}_{\sigma }] }&{}=\frac{1}{|\Gamma _{RAC(n,p)}|}\sum \nolimits _{\sigma \in \Gamma _{RAC(n,p)}}{{\,\mathrm{\mathbf{L}}\,}}_{\sigma }^{-1}\\ &{}=\frac{1}{|\Upsilon (n,p)|}\sum \nolimits _{\upsilon _i\in \Upsilon (n,p)}\left\{ \frac{1}{p!}\sum \nolimits _{\sigma \in \Gamma _{RP(p),\upsilon _i}}{{\,\mathrm{\mathbf{L}}\,}}^{-1}_{\sigma }\right\} \end{array}} \end{aligned}$$

Notice that for any block structure \(\upsilon _{i}\) and any update order \(\sigma \in \Gamma _{RP(p),\upsilon _i}\) within this fixed block structure, we have \({{\,\mathrm{\mathbf{L}}\,}}_{\sigma }^T={{\,\mathrm{\mathbf{L}}\,}}_{\bar{\sigma }}\), where \(\bar{\sigma }\) is the reverse permutation of \(\sigma \in \Gamma _{RP(p),\upsilon _i}\). Specifically, for \(\sigma =[\sigma _1,\dots ,\sigma _p]\) we have \(\bar{\sigma }=[\bar{\sigma }_1,\dots ,\bar{\sigma }_p]\) with \(\bar{\sigma }_i=\sigma _{p+1-i}\). For a specific fixed block structure \(\upsilon _i\), define the matrix \({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}\) as

$$\begin{aligned} {{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}:={{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{L}}\,}}_{\sigma }^{-1}|\upsilon _i] }=\frac{1}{p!}\sum \nolimits _{\sigma \in \Gamma _{RP(p),\upsilon _i}}{{\,\mathrm{\mathbf{L}}\,}}^{-1}_{\sigma }, \end{aligned}$$

and because \({{\,\mathrm{\mathbf{L}}\,}}_{\sigma }^T={{\,\mathrm{\mathbf{L}}\,}}_{\bar{\sigma }}\), matrix \({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}\) is symmetric for all i, and

$$\begin{aligned} {{\,\mathrm{\mathbf{Q}}\,}}:=\frac{1}{|\Upsilon (n,p)|}\sum \nolimits _{\upsilon _i\in \Upsilon (n,p)} {{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i} \end{aligned}$$
(12)

Finally, the expected mapping matrix \({{\,\mathrm{\mathbf{M}}\,}}\) is given by

$$\begin{aligned} {{{\,\mathrm{\mathbf{M}}\,}}:={{{\,\mathrm{\mathbb {E}}\,}}_{\sigma }}{[ {{\,\mathrm{\mathbf{M}}\,}}_\sigma ] }=\frac{1}{|\Gamma _{RAC(n,p)}|}\sum \nolimits _{\sigma \in \Gamma _{RAC(n,p)}}{{\,\mathrm{\mathbf{M}}\,}}_{\sigma }} \end{aligned}$$

or, by direct computation,

$$\begin{aligned} {{\,\mathrm{\mathbf{M}}\,}}:=\begin{bmatrix} {{\,\mathrm{\mathbf{I}}\,}}-{{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}}&{} {{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{A}}\,}}^T\\ \beta ({{\,\mathrm{\mathbf{A}}\,}}{{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}}-{{\,\mathrm{\mathbf{A}}\,}}) &{} {{\,\mathrm{\mathbf{I}}\,}}-\beta {{\,\mathrm{\mathbf{A}}\,}}{{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{A}}\,}}^T \end{bmatrix} \end{aligned}$$
(13)

where \({{\,\mathrm{\mathbf{S}}\,}}={{\,\mathrm{\mathbf{H}}\,}}+ \beta {{\,\mathrm{\mathbf{A}}\,}}^T{{\,\mathrm{\mathbf{A}}\,}}\).
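For small instances the objects above can be formed by brute force. The following sketch (our own illustrative code, not part of the original development) enumerates every updating combination, builds \({{\,\mathrm{\mathbf{L}}\,}}_{\sigma }\), \(\bar{{{\,\mathrm{\mathbf{L}}\,}}}_{\sigma }\), \(\bar{{{\,\mathrm{\mathbf{R}}\,}}}_{\sigma }\) and \({{\,\mathrm{\mathbf{M}}\,}}_{\sigma }\), averages them, and cross-checks the result against the closed form (13).

```python
import numpy as np
from itertools import permutations

def expected_mapping(H, A, beta, p):
    # E[M_sigma] of (11)-(13) by enumeration over all equally sized block
    # assemblies (each combination is visited (s!)^p times, with equal weight).
    n, m = H.shape[0], A.shape[0]
    s = n // p
    S = H + beta * A.T @ A
    Ms, Linvs = [], []
    for perm in permutations(range(n)):
        blocks = [list(perm[i*s:(i+1)*s]) for i in range(p)]
        L = np.zeros((n, n))
        for i, bi in enumerate(blocks):          # lower block-triangular L_sigma, Eq. (10)
            for j, bj in enumerate(blocks):
                if i >= j:
                    L[np.ix_(bi, bj)] = S[np.ix_(bi, bj)]
        barL = np.block([[L, np.zeros((n, m))], [beta * A, np.eye(m)]])
        barR = np.block([[L - S, A.T], [np.zeros((m, n)), np.eye(m)]])
        Ms.append(np.linalg.solve(barL, barR))   # M_sigma of Eq. (11)
        Linvs.append(np.linalg.inv(L))
    M, Q = np.mean(Ms, axis=0), np.mean(Linvs, axis=0)
    M13 = np.block([[np.eye(n) - Q @ S, Q @ A.T],
                    [beta * (A @ Q @ S - A), np.eye(m) - beta * A @ Q @ A.T]])
    assert np.allclose(M, M13)                   # agrees with the closed form (13)
    return M, Q

# tiny example with H = 0: eig(QS) should lie in [0, 4/3) (Lemma 1 below)
rng = np.random.default_rng(0)
H, A, beta = np.zeros((4, 4)), rng.standard_normal((4, 4)), 1.0
M, Q = expected_mapping(H, A, beta, p=2)
print(max(abs(np.linalg.eigvals(Q @ (H + beta * A.T @ A)))))
```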

2.2.2 Expected convergence of RAC-ADMM

With the preliminaries defined, we are now ready to show that RAC-ADMM converges in expectation under the following assumption:

Assumption 1

Assume that for any block of indices \(\sigma _i\) generated by RAC-ADMM

$$\begin{aligned} {{{\,\mathrm{\mathbf{H}}\,}}_{\sigma _i,\sigma _i} + \beta {{\,\mathrm{\mathbf{A}}\,}}^T_{\sigma _i}{{\,\mathrm{\mathbf{A}}\,}}_{\sigma _i}\succ 0} \end{aligned}$$

where \(\sigma _i\) is the index vector describing indices of primal variables of the block i.

Theorem 2

Suppose that Assumption (1) holds, and that RAC-ADMM (8) is employed to solve problem (9). Then the expected output converges to some KKT point of (9).

Theorem 2 states that the expected output converges to some KKT point of (9). Such a convergence-in-expectation criterion has been widely used for many randomized algorithms, including the convergence analyses of RP-BCD and RP-ADMM (e.g. [16, 68]) and of stochastic quasi-Newton methods (e.g. [12]). It is worth mentioning that if the optimization problem is strictly convex (\({{\,\mathrm{\mathbf{H}}\,}}\succ 0\)), we are able to prove that the expected mapping matrix has spectral radius strictly less than 1 (Corollary 1).

Although convergence in expectation is widely used in the literature, it is still a relatively weak convergence criterion. This is why in Sect. 2.2.4 we propose a sufficient condition for almost sure convergence of RAC-ADMM. That section also provides an example of a problem with \(\rho (M)<1\) which does not converge but rather oscillates almost surely (Example 3). To the best of our knowledge, this is the first example showing that even if a randomized optimization algorithm has an expected spectral radius strictly less than 1, the algorithm may still oscillate; constructing an example with expected spectral radius equal to 1 that does not converge is an easy task. Consider, for example, a sequence \(\{x_t,t\ge 0\}\) with \(x_t=-1\) and \(x_t=1\) chosen with equal probabilities (prob = 1/2). The sequence does not converge with probability 1, and the expected spectral radius of this mapping procedure, \(\rho (M)\), equals 1, which already suggests that the sequence may not converge. Despite the fact that such an example exists for RAC-ADMM, in all the numerical tests provided in Sect. 4 RAC-ADMM converges to the KKT point of the optimization problem within a few iterations. Such strong numerical evidence implies that, in practice, our algorithm does not require taking an expectation over many iterations to converge.

The proof of Theorem 2 follows the proof structure of [17, 68, 69] to show that under Assumption 1:

1. \({{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}})\in [0,\frac{4}{3})\);

2. \(\forall \lambda \in {{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{M}}\,}}), {{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}})\in [0,\frac{4}{3})\implies \Vert \lambda \Vert <1\) or \(\lambda =1\);

3. if \(1\in {{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{M}}\,}})\), then the eigenvalue 1 has a complete set of eigenvectors;

4. steps (2) and (3) imply the convergence in expectation of RAC-ADMM.

The proof builds on Theorem 2 from [17], which describes RP-ADMM convergence in expectation under specific conditions on the matrices H and A, and on Weyl's inequality, which gives the upper bound on the maximum eigenvalue and the lower bound on the minimum eigenvalue of a sum of Hermitian matrices. Proofs for items (2) and (3) are identical to the proofs given in [17, Section 3.2], so here the focus is on proving item (1). The following lemma completes the proof of expected convergence of RAC-ADMM.

Lemma 1

Under Assumption 1, the matrix Q is positive definite, and

$$\begin{aligned} {{{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}})\subset [0,\frac{4}{3})} \end{aligned}$$

To prove Lemma 1, we first show that for any block structure \(\upsilon _i\), the following proposition holds:

Proposition 1

\({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{S}}\,}}\) is positive semi-definite and symmetric, and

$$\begin{aligned} {{{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{S}}\,}})\subseteq [0,\frac{4}{3})} \end{aligned}$$

Intuitively, a different block structure of RAC-ADMM iteration could be viewed as relabeling variables and performing RP-ADMM procedure as described in [17].

Proof

Define the block structure \(\{[x_1,\dots ,x_{s}],[x_{s+1},\dots ,x_{2s}],\dots ,[x_{(p-1)s+1},\dots ,x_{ps}]\}\) as \(\upsilon _1\). For any block structure \(\upsilon _i\), there exist \(\tilde{{{\,\mathrm{\mathbf{S}}\,}}}\) and \(\tilde{{{\,\mathrm{\mathbf{Q}}\,}}}_{\upsilon _1}\) s.t.

$$\begin{aligned}{ {{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{S}}\,}})={{\,\mathrm{\hbox {eig}}\,}}(\tilde{{{\,\mathrm{\mathbf{Q}}\,}}}_{\upsilon _1}\tilde{{{\,\mathrm{\mathbf{S}}\,}}})} \end{aligned}$$

where \(\tilde{{{\,\mathrm{\mathbf{Q}}\,}}}_{\upsilon _1}\) represents the formulation of the \({{{\,\mathrm{\mathbb {E}}\,}}_{\sigma }}{[ {{\,\mathrm{\mathbf{L}}\,}}_{\sigma }^{-1}] }\) matrix with respect to block structure \(\upsilon _1\) and matrix \(\tilde{{{\,\mathrm{\mathbf{S}}\,}}}\). To prove this, we introduce the permutation matrix \({{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}\) as follows. Given

$$\begin{aligned} \begin{array}{ll} \upsilon _1=&{} \{[1,\dots , {s}],[{s+1},\dots , {2s}],\dots ,[{(p-1)s+1},\dots ,{ps}]\}\\ \upsilon _i=&{} \{[\pi (1),\dots , \pi ({s})],[\pi ({s+1}),\dots , \pi ({2s})],\dots ,[\pi ({(p-1)s+1}),\dots ,\pi ({ps})]\} \end{array} \end{aligned}$$

define

$$\begin{aligned}{ {{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i} =\begin{bmatrix} {{\,\mathrm{{\mathbf{e}}}\,}}_{\pi (1)}\\ {{\,\mathrm{{\mathbf{e}}}\,}}_{\pi (2)}\\ \vdots \\ {{\,\mathrm{{\mathbf{e}}}\,}}_{\pi (ps)} \end{bmatrix}} \end{aligned}$$

where \({{\,\mathrm{{\mathbf{e}}}\,}}_{i}\) is the row vector with the i-th element equal to 1. Notice that \({{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}\) is an orthogonal matrix for any \(\upsilon _i\), i.e. \({{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}^T=I\). For any fixed block structure \(\upsilon _i\) and update order \(\sigma _{RP}\in \Gamma _{RP}(p)\), the following equality holds

$$\begin{aligned}{ {{\,\mathrm{\mathbf{L}}\,}}_{\sigma _{RP},{{\,\mathrm{\mathbf{S}}\,}},\upsilon _i}={{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}^T L_{\sigma _{RP},\tilde{{{\,\mathrm{\mathbf{S}}\,}}},\upsilon _1}{{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i} } \end{aligned}$$

where \({{\,\mathrm{\mathbf{L}}\,}}_{\sigma _{RP},{{\,\mathrm{\mathbf{S}}\,}},\upsilon _i}\) is the construction of \({{\,\mathrm{\mathbf{L}}\,}}\) following update order \(\sigma _{RP}\in \Gamma _{RP}(p)\) and block structure \(\upsilon _i\) with respect to \({{\,\mathrm{\mathbf{S}}\,}}\), and \( {{\,\mathrm{\mathbf{L}}\,}}_{\sigma _{RP},\tilde{{{\,\mathrm{\mathbf{S}}\,}}},\upsilon _1}\) is the construction of \({{\,\mathrm{\mathbf{L}}\,}}\) following update order \(\sigma _{RP}\in \Gamma _{RP}(p)\) and block structure \(\upsilon _1\), with coefficient matrix \(\tilde{{{\,\mathrm{\mathbf{S}}\,}}}\), and

$$\begin{aligned}{ \tilde{{{\,\mathrm{\mathbf{S}}\,}}}={{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i} {{\,\mathrm{\mathbf{S}}\,}}{{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}^T } \end{aligned}$$

and

$$\begin{aligned}{ {{\,\mathrm{\mathbf{L}}\,}}_{\sigma ,{{\,\mathrm{\mathbf{S}}\,}},\upsilon _i}^{-1}=({{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}^T {{\,\mathrm{\mathbf{L}}\,}}_{\sigma ,\tilde{{{\,\mathrm{\mathbf{S}}\,}}},\upsilon _1}{{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i})^{-1}={{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}^T {{\,\mathrm{\mathbf{L}}\,}}^{-1}_{\sigma ,\tilde{{{\,\mathrm{\mathbf{S}}\,}}},\upsilon _1}{{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}. } \end{aligned}$$

Then by the definition of the \({{\,\mathrm{\mathbf{Q}}\,}}\) matrix (Eq. 12), we get

$$\begin{aligned}{ {{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i,{{\,\mathrm{\mathbf{S}}\,}}}={{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}^T\tilde{{{\,\mathrm{\mathbf{Q}}\,}}}_{\upsilon _1,\tilde{{{\,\mathrm{\mathbf{S}}\,}}}}{{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}} \end{aligned}$$

so that

$$\begin{aligned}{ {{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i,{{\,\mathrm{\mathbf{S}}\,}}}{{\,\mathrm{\mathbf{S}}\,}}={{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}^T \tilde{{{\,\mathrm{\mathbf{Q}}\,}}}_{\upsilon _1,\tilde{{{\,\mathrm{\mathbf{S}}\,}}}} {{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i} {{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}^{-1} \tilde{{{\,\mathrm{\mathbf{S}}\,}}} {{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i} ={{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}^T \tilde{{{\,\mathrm{\mathbf{Q}}\,}}}_{\upsilon _1,\tilde{{{\,\mathrm{\mathbf{S}}\,}}}} \tilde{{{\,\mathrm{\mathbf{S}}\,}}} {{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}. } \end{aligned}$$

Considering the eigenvalues of \({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i,{{\,\mathrm{\mathbf{S}}\,}}}{{\,\mathrm{\mathbf{S}}\,}}\),

$$\begin{aligned}{ {{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i,{{\,\mathrm{\mathbf{S}}\,}}}{{\,\mathrm{\mathbf{S}}\,}})={{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i}^T \tilde{{{\,\mathrm{\mathbf{Q}}\,}}}_{\upsilon _1,\tilde{{{\,\mathrm{\mathbf{S}}\,}}}} \tilde{{{\,\mathrm{\mathbf{S}}\,}}} {{\,\mathrm{\mathbf{P}}\,}}_{\upsilon _i})={{\,\mathrm{\hbox {eig}}\,}}(\tilde{{{\,\mathrm{\mathbf{Q}}\,}}}_{\upsilon _1,\tilde{{{\,\mathrm{\mathbf{S}}\,}}}} \tilde{{{\,\mathrm{\mathbf{S}}\,}}} ) } \end{aligned}$$

and from [17], under Assumption (1), \(\tilde{{{\,\mathrm{\mathbf{Q}}\,}}}_{\upsilon _1,\tilde{{{\,\mathrm{\mathbf{S}}\,}}}}\) is positive definite, and

$$\begin{aligned}{ {{\,\mathrm{\hbox {eig}}\,}}(\tilde{{{\,\mathrm{\mathbf{Q}}\,}}}_{\upsilon _1,\tilde{{{\,\mathrm{\mathbf{S}}\,}}}} \tilde{{{\,\mathrm{\mathbf{S}}\,}}} )\subset [0,\frac{4}{3}) } \end{aligned}$$

which implies \({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}\) is positive definite, and

$$\begin{aligned}{ {{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i} {{\,\mathrm{\mathbf{S}}\,}})\subset [0,\frac{4}{3}). } \end{aligned}$$

Notice that by definition of \({{\,\mathrm{\mathbf{Q}}\,}}\), we have

$$\begin{aligned}{ {{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}}=\frac{1}{|\Upsilon (n,p)|}\sum _{\upsilon _i}{{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{S}}\,}}.} \end{aligned}$$

and since \({{\,\mathrm{\mathbf{S}}\,}}\) is positive definite and symmetric, we could write \({{\,\mathrm{\mathbf{S}}\,}}={{\,\mathrm{\mathbf{B}}\,}}^T{{\,\mathrm{\mathbf{B}}\,}}\), so

$$\begin{aligned}{ {{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}}=\frac{1}{|\Upsilon (n,p)|}\sum _{\upsilon _i}{{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{B}}\,}}^T{{\,\mathrm{\mathbf{B}}\,}}.} \end{aligned}$$

Because \(\frac{1}{|\Upsilon (n,p)|}\sum _{\upsilon _i}{{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}\) is real symmetric, we have

$$\begin{aligned}{ \begin{array}{ll} {{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}})&{}={{\,\mathrm{\hbox {eig}}\,}}\left( \frac{1}{|\Upsilon (n,p)|}\sum _{\upsilon _i}{{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{B}}\,}}^T{{\,\mathrm{\mathbf{B}}\,}}\right) \\ &{}={{\,\mathrm{\hbox {eig}}\,}}\left( {{\,\mathrm{\mathbf{B}}\,}}\left( \frac{1}{|\Upsilon (n,p)|}\sum _{\upsilon _i}{{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}\right) {{\,\mathrm{\mathbf{B}}\,}}^T\right) \\ &{}={{\,\mathrm{\hbox {eig}}\,}}\left( \frac{1}{|\Upsilon (n,p)|}\sum _{\upsilon _i}{{\,\mathrm{\mathbf{B}}\,}}{{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{B}}\,}}^T\right) \end{array}} \end{aligned}$$

Let \(\lambda _1({{\,\mathrm{\mathbf{A}}\,}})\) denote the maximum eigenvalue of matrix \({{\,\mathrm{\mathbf{A}}\,}}\), then as all \({{\,\mathrm{\mathbf{B}}\,}}{{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{B}}\,}}^T\) are Hermitian matrices, by Weyl’s theorem, we have

$$\begin{aligned}{ \begin{array}{ll} \lambda _1({{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}})&{}=\lambda _1\left( \frac{1}{|\Upsilon (n,p)|}\sum _{\upsilon _i}{{\,\mathrm{\mathbf{B}}\,}}{{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{B}}\,}}^T\right) \\ &{}\le \frac{1}{|\Upsilon (n,p)|}\sum _{\upsilon _i}\lambda _1({{\,\mathrm{\mathbf{B}}\,}}{{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{B}}\,}}^T)\\ &{}=\frac{1}{|\Upsilon (n,p)|}\sum _{\upsilon _i}\lambda _1({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{S}}\,}}) \end{array}} \end{aligned}$$

and as \(\lambda _1({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{S}}\,}})<\frac{4}{3}\) for each i,

$$\begin{aligned} {{{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}})\subseteq \Big [0,\frac{4}{3}\Big )} \end{aligned}$$

which completes the proof of Lemma 1, and thus establishes that RAC-ADMM is guaranteed to converge in expectation. \(\square \)

When the problem is strongly convex (\({{\,\mathrm{\mathbf{H}}\,}}\succ 0\)), we introduce the following corollary.

Corollary 1

Under Assumption 1, and \({{\,\mathrm{\mathbf{H}}\,}}\succ 0\),

$$\begin{aligned} {\rho ({{\,\mathrm{\mathbf{M}}\,}})<1} \end{aligned}$$

Proof

When \({{\,\mathrm{\mathbf{H}}\,}}\succ 0\), by definition \({{\,\mathrm{\mathbf{S}}\,}}={{\,\mathrm{\mathbf{H}}\,}}+\beta {{\,\mathrm{\mathbf{A}}\,}}^T{{\,\mathrm{\mathbf{A}}\,}}\succ 0\), and by Lemma 1, \({{\,\mathrm{\mathbf{Q}}\,}}\succ 0\), hence \({{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}})\subseteq (0,\dfrac{4}{3})\), and this implies \(\rho ({{\,\mathrm{\mathbf{M}}\,}})<1\). \(\square \)

Note that there exist random sequences that converge in expectation while their spectral radius equals one. Therefore, for solving non-separable strongly convex quadratic optimization, the expected convergence rate of RAC-ADMM is proved to be linear, a result that is stronger than just “convergence in expectation”.

2.2.3 Convergence speed of RAC-ADMM versus RP-ADMM

The following corollary shows that, on average or in expectation, RAC-ADMM outperforms RP-ADMM with a fixed block composition in the sense of the spectral radius of the mapping matrix.

Corollary 2

Under Assumption 1, with \({{\,\mathrm{\mathbf{H}}\,}}={{\,\mathrm{{\mathbf{0}}}\,}}\) so that \({{\,\mathrm{\mathbf{S}}\,}}=\beta {{\,\mathrm{\mathbf{A}}\,}}^T{{\,\mathrm{\mathbf{A}}\,}}\), where \({{\,\mathrm{\mathbf{A}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{n\times n}\) is a non-singular matrix, there exists some RP-ADMM (with a specific block composition) such that the expected spectral radius of the RAC-ADMM mapping matrix is (weakly) smaller than the expected spectral radius of the RP-ADMM mapping matrix.

Proof

We prove the corollary for solving a linear system with \({{\,\mathrm{\mathbf{A}}\,}}\) non-singular and a null objective function. In this setup, the expected output converges to the unique primal-dual optimal solution of (9).

Notice in this setup, we have

$$\begin{aligned}{ \begin{array}{l} \lambda \in {{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{M}}\,}})\Leftrightarrow \tau = \dfrac{(1-\lambda )^2}{1-2\lambda } \in {{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{A}}\,}}^T{{\,\mathrm{\mathbf{A}}\,}})\\ \lambda _{\upsilon _i}\in {{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{M}}\,}}_{RP,\upsilon _i})\Leftrightarrow \tau _{\upsilon _i} = \dfrac{(1-\lambda _{\upsilon _i})^2}{1-2\lambda _{\upsilon _i}} \in {{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{A}}\,}}^T{{\,\mathrm{\mathbf{A}}\,}}) \end{array}} \end{aligned}$$

By calculation, we can characterize \(\lambda \) as the roots of a quadratic polynomial [69],

$$\begin{aligned}{ \lambda _1 = 1-\tau +\sqrt{\tau (\tau -1)}, \quad \lambda _2 = 1-\tau -\sqrt{\tau (\tau -1)}. } \end{aligned}$$

Suppose the corollary does not hold, i.e. \(\rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_{RAC}] })> \rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_{RP,\upsilon _i}] })\) for all possible block structures. Define \(\underline{\tau }_{\upsilon _i}\) as the smallest eigenvalue of \({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{S}}\,}}\), and \(\bar{\tau }_{\upsilon _i}\) as the largest eigenvalue of \({{\,\mathrm{\mathbf{Q}}\,}}_{\upsilon _i}{{\,\mathrm{\mathbf{S}}\,}}\). Similarly, let \(\underline{\tau }\) be the smallest eigenvalue of \({{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}}\), and \(\bar{\tau }\) the largest eigenvalue of \({{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}}\). Consider the following two cases:

Case 1. \(\lambda ^*=\max _i|\lambda _i|\in \mathbb {C} \hbox { and } \lambda ^*\notin {{\,\mathrm{\mathbb {R}}\,}}\Leftrightarrow \tau _{\lambda ^*}<1\), where \(\tau _{\lambda ^*}\in {{\,\mathrm{\hbox {eig}}\,}}({{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{\mathbf{S}}\,}})\) satisfies \(\dfrac{(1-\lambda ^*)^2}{1-2\lambda ^*}=\tau _{\lambda ^*}\).

We have \(\rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_{RAC}] })> \rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_{RP,\upsilon _i}] })\ \forall \ i\), which implies that

$$\begin{aligned}{ \sqrt{1-\tau _{\lambda ^*}}>\max \left\{ \sqrt{1-\tau _{\upsilon _i}},\tau _{\upsilon _i}-1+\sqrt{\tau _{\upsilon _i}(\tau _{\upsilon _i}-1)}\right\} \quad \forall i. } \end{aligned}$$

Specifically

$$\begin{aligned}{ \sqrt{1-\tau _{\lambda ^*}}>\sqrt{1-\underline{\tau }_{\upsilon _i}} \quad \forall \upsilon _i, } \end{aligned}$$

As \(f(x)=\sqrt{1-x}\) is monotone decreasing with respect to x, the above implies that

$$\begin{aligned}{ \tau _{\lambda ^*}<\underline{\tau }_{\upsilon _i} \quad \forall \upsilon _i, } \end{aligned}$$

and as \(\tau _{\lambda ^*}\ge \underline{\tau }\), the above equation implies

$$\begin{aligned}{ \underline{\tau }<\underline{\tau }_{\upsilon _i} \quad \forall \upsilon _i, } \end{aligned}$$

which is impossible, as by Weyl’s theorem,

$$\begin{aligned}{ \underline{\tau }\ge \dfrac{1}{|\Upsilon (n,p)|}\sum _i\underline{\tau }_{\upsilon _i}\ge \min _{i}\ \underline{\tau }_{\upsilon _i}. } \end{aligned}$$

Case 2. \(\lambda ^*=\max _i|\lambda _i|\in {{\,\mathrm{\mathbb {R}}\,}}\Leftrightarrow \tau _{\lambda ^*}>1\).

We have \(\rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_{RAC}] })> \rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_{RP,\upsilon _i}] })\ \forall i\), which implies that

$$\begin{aligned}{ \tau _{\lambda ^*}-1+\sqrt{\tau _{\lambda ^*}(\tau _{\lambda ^*}-1)}>\max \{\sqrt{1-\tau _{\upsilon _i}},\tau _{\upsilon _i}-1+\sqrt{\tau _{\upsilon _i}(\tau _{\upsilon _i}-1)}\} \quad \forall i. } \end{aligned}$$

Specifically,

$$\begin{aligned}{ \tau _{\lambda ^*}-1+\sqrt{\tau _{\lambda ^*}(\tau _{\lambda ^*}-1)}>\tau _{\upsilon _i}-1+\sqrt{\tau _{\upsilon _i}(\tau _{\upsilon _i}-1)} \quad \forall i, \quad \forall \upsilon _i. } \end{aligned}$$

As \(g(x)=x-1+\sqrt{x(x-1)}\) is a monotone increasing function for \(x\in [1,\infty )\), the above implies

$$\begin{aligned}{ \overline{\tau }\ge \tau _{\lambda ^*}>\overline{\tau }_{\upsilon _i} \quad \forall \upsilon _i, } \end{aligned}$$

which is impossible, as by Weyl’s theorem,

$$\begin{aligned}{ \bar{\tau } \le \dfrac{1}{|\Upsilon (n,p)|}\sum _i\bar{\tau }_{\upsilon _i}\le \max _{i}\ \bar{\tau }_{\upsilon _i}. } \end{aligned}$$

\(\square \)
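A small numerical illustration of Corollary 2 (our own sketch, using hypothetical helper names and enumerating all compositions of a toy instance): it compares the expected spectral radius of the RAC-ADMM mapping matrix with the radii obtained by RP-ADMM restricted to each fixed block composition; the corollary asserts that the first printed value is no larger than the largest of the fixed-composition values.

```python
import numpy as np
from itertools import combinations, permutations

def lower_L(S, blocks):
    # lower block-triangular matrix of Eq. (10) for one ordered block sequence
    n = S.shape[0]
    L = np.zeros((n, n))
    for i, bi in enumerate(blocks):
        for j, bj in enumerate(blocks):
            if i >= j:
                L[np.ix_(bi, bj)] = S[np.ix_(bi, bj)]
    return L

def rho_expected_M(S, A, beta, orderings):
    # spectral radius of E[M_sigma], the expectation taken over `orderings`
    n, m = S.shape[0], A.shape[0]
    Q = np.mean([np.linalg.inv(lower_L(S, list(o))) for o in orderings], axis=0)
    M = np.block([[np.eye(n) - Q @ S, Q @ A.T],
                  [beta * (A @ Q @ S - A), np.eye(m) - beta * A @ Q @ A.T]])
    return max(abs(np.linalg.eigvals(M)))

rng = np.random.default_rng(1)
n, p, s, beta = 4, 2, 2, 1.0
A = rng.standard_normal((n, n))
S = beta * A.T @ A                                   # H = 0, as in Corollary 2
# RAC: every ordered partition into p blocks of size s
rac = [[list(perm[i*s:(i+1)*s]) for i in range(p)] for perm in permutations(range(n))]
# RP: for each of the 3 block compositions, the p! orders of its (fixed) blocks
rp_rhos = []
for c in combinations(range(1, n), s - 1):
    comp = [[0] + list(c), [i for i in range(n) if i != 0 and i not in c]]
    rp_rhos.append(rho_expected_M(S, A, beta, list(permutations(comp))))
print(rho_expected_M(S, A, beta, rac), min(rp_rhos), max(rp_rhos))
```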

2.2.4 Variance of RAC-ADMM

Convergence in expectation may not be a good indicator of convergence for all problems, as there may exist problems for which RAC-ADMM is not stable or possesses large variance. In order to give another probabilistic measure of the performance of RAC-ADMM, this section introduces convergence almost surely (a.s.) as an indicator of the algorithm's convergence. Almost sure convergence as a measure of stability has been used in linear control systems for quite some time, and is based on the mean-square stability criterion for stochastically varying systems [19]. The criterion establishes conditions for asymptotic convergence of the covariance of the system states (e.g. variables).

This section builds on those results and establishes a sufficient condition for RAC-ADMM to converge almost surely when applied to solve (9). The condition utilizes the Kronecker product of the mapping matrix with itself, which captures the dynamics of the second moments of the random sequences generated by the RAC-ADMM algorithm, and the expectation over the products of mapping matrices, which provides a bound on the variance of the distance between the KKT point and the random sequence generated by our algorithm.

Theorem 3

Suppose that Assumption 1 holds, and that RAC-ADMM (8) is employed to solve problem (9). Then the output of RAC-ADMM converges almost surely to some KKT point of (9) if

$$\begin{aligned} {\rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_\sigma \otimes {{\,\mathrm{\mathbf{M}}\,}}_\sigma ] })<1} \end{aligned}$$

where \({{\,\mathrm{\mathbf{M}}\,}}\otimes {{\,\mathrm{\mathbf{M}}\,}}\) is the Kronecker product of \({{\,\mathrm{\mathbf{M}}\,}}\) with itself.
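Before turning to the proof, note that for small instances the condition of Theorem 3 can be checked numerically by enumeration; the following sketch (illustrative code with names of our own choosing) forms \({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_\sigma \otimes {{\,\mathrm{\mathbf{M}}\,}}_\sigma ] }\) and returns its spectral radius.

```python
import numpy as np
from itertools import permutations

def as_convergence_radius(H, A, beta, p):
    # rho(E[M_sigma kron M_sigma]) of Theorem 3, by enumerating all equally
    # sized block assemblies (each visited (s!)^p times, with equal weight).
    n, m = H.shape[0], A.shape[0]
    s = n // p
    S = H + beta * A.T @ A
    T = np.zeros(((n + m) ** 2, (n + m) ** 2))
    count = 0
    for perm in permutations(range(n)):
        blocks = [list(perm[i*s:(i+1)*s]) for i in range(p)]
        L = np.zeros((n, n))
        for i, bi in enumerate(blocks):
            for j, bj in enumerate(blocks):
                if i >= j:
                    L[np.ix_(bi, bj)] = S[np.ix_(bi, bj)]
        barL = np.block([[L, np.zeros((n, m))], [beta * A, np.eye(m)]])
        barR = np.block([[L - S, A.T], [np.zeros((m, n)), np.eye(m)]])
        M_sigma = np.linalg.solve(barL, barR)
        T += np.kron(M_sigma, M_sigma)
        count += 1
    return max(abs(np.linalg.eigvals(T / count)))

rng = np.random.default_rng(0)
H, A = np.eye(4), rng.standard_normal((2, 4))
print(as_convergence_radius(H, A, beta=1.0, p=2))   # a value < 1 certifies a.s. convergence
```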

Proof

Let \(\overline{{{\,\mathrm{{\mathbf{z}}}\,}}}=[\overline{{{\,\mathrm{{\mathbf{x}}}\,}}};\overline{{{\,\mathrm{{\mathbf{y}}}\,}}}]\in {{\,\mathrm{\mathbb {R}}\,}}^{N}\) denote the KKT point of (9); then, at the \((k+1)\)-th iteration we have

$$\begin{aligned}{ ({{\,\mathrm{{\mathbf{z}}}\,}}_{k+1}-\overline{{{\,\mathrm{{\mathbf{z}}}\,}}})={{\,\mathrm{\mathbf{M}}\,}}_{\sigma _k}({{\,\mathrm{{\mathbf{z}}}\,}}_k-\overline{{{\,\mathrm{{\mathbf{z}}}\,}}}). } \end{aligned}$$

Define \({{\,\mathrm{{\mathbf{d}}}\,}}_k={{\,\mathrm{{\mathbf{z}}}\,}}_{k}-\overline{{{\,\mathrm{{\mathbf{z}}}\,}}}\), and

$$\begin{aligned}{ {{\,\mathrm{\mathbf{P}}\,}}_k={{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{{\mathbf{d}}}\,}}_k{{\,\mathrm{{\mathbf{d}}}\,}}_k^T] }. } \end{aligned}$$

There exists a linear operator \(\mathcal {T}\) s.t.

$$\begin{aligned} {{\,\mathrm{\hbox {vec}}\,}}({{\,\mathrm{\mathbf{P}}\,}}_{k+1})=\mathcal {T}{{\,\mathrm{\hbox {vec}}\,}}({{\,\mathrm{\mathbf{P}}\,}}_k) \end{aligned}$$
(14)

where \({{\,\mathrm{\hbox {vec}}\,}}(\cdot )\) is vectorization of a matrix, and \(\mathcal {T}={{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_\sigma \otimes {{\,\mathrm{\mathbf{M}}\,}}_\sigma ] }\), as

$$\begin{aligned}{ \begin{array}{ll} {{\,\mathrm{\hbox {vec}}\,}}({{\,\mathrm{\mathbf{P}}\,}}_{k+1})&{}={{\,\mathrm{\hbox {vec}}\,}}({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{{\mathbf{d}}}\,}}_{k+1}{{\,\mathrm{{\mathbf{d}}}\,}}_{k+1}^T] })\\ &{}=\dfrac{1}{|\Upsilon (n,p)|}\sum ^{|\Upsilon (n,p)|}_{i=1} {{\,\mathrm{\hbox {vec}}\,}}({{\,\mathrm{\mathbf{M}}\,}}_i{{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{{\mathbf{d}}}\,}}_k{{\,\mathrm{{\mathbf{d}}}\,}}_k^T] }{{\,\mathrm{\mathbf{M}}\,}}_i^T)\\ &{}={{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_\sigma \otimes {{\,\mathrm{\mathbf{M}}\,}}_\sigma ] } {{\,\mathrm{\hbox {vec}}\,}}({{\,\mathrm{\mathbf{P}}\,}}_k) \end{array}} \end{aligned}$$

and \(\rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_\sigma \otimes {{\,\mathrm{\mathbf{M}}\,}}_\sigma ] })<1\) implies \(d_k\overset{a.s.}{\rightarrow }{{\,\mathrm{{\mathbf{0}}}\,}}\). To prove this, let \(||\cdot ||\) be the Frobenius norm of a matrix, \(||{{\,\mathrm{\mathbf{A}}\,}}||=\sqrt{\sum ^m_{i=1}\sum ^{n}_{j=1}|a_{ij}|^2}\), thus we can write

$$\begin{aligned}{ {{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ ||{{\,\mathrm{{\mathbf{d}}}\,}}_k||^2] }=\text{ Trace }({{\,\mathrm{\mathbf{P}}\,}}_k)\le ||{{\,\mathrm{\hbox {vec}}\,}}({{\,\mathrm{\mathbf{P}}\,}}_k)||^2, } \end{aligned}$$

and by (14),

$$\begin{aligned}{ ||{{\,\mathrm{\hbox {vec}}\,}}({{\,\mathrm{\mathbf{P}}\,}}_k)||^2=||\mathcal {T}{{\,\mathrm{\hbox {vec}}\,}}({{\,\mathrm{\mathbf{P}}\,}}_{k-1})||^2=||\mathcal {T}^{k}{{\,\mathrm{\hbox {vec}}\,}}({{\,\mathrm{\mathbf{P}}\,}}_0)||^2\le ||\mathcal {T}^k||^2\cdot ||{{\,\mathrm{\hbox {vec}}\,}}({{\,\mathrm{\mathbf{P}}\,}}_0)||^2 } \end{aligned}$$

If \(\rho (\mathcal {T})<1\), we know that \(\mathcal {T}\) is convergent, and there exists \(\mu >0\), \(0<\gamma <1\), s.t.

$$\begin{aligned} {||\mathcal {T}^k||^2\le \mu \gamma ^k,} \end{aligned}$$

thus there exist constants \(M>0\) and \(C<\infty \) such that

$$\begin{aligned}{ \sum ^{\infty }_{k=0}{{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ ||{{\,\mathrm{{\mathbf{d}}}\,}}_k||^2] }\le M\sum ^{\infty }_{k=0}\gamma ^{k} \le C<\infty . } \end{aligned}$$

For any \(\epsilon >0\), by Markov inequality we have

$$\begin{aligned}{ \sum ^{\infty }_{k=0}{{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ ||{{\,\mathrm{{\mathbf{d}}}\,}}_k||^2] }\le C \Rightarrow \sum ^{\infty }_{k=0}{{\,\mathrm{\hbox {Prob}}\,}}(||{{\,\mathrm{{\mathbf{d}}}\,}}_k||^2>\epsilon )<\infty , } \end{aligned}$$

and since \(\sum ^{\infty }_{k=0}{{\,\mathrm{\hbox {Prob}}\,}}(||{{\,\mathrm{{\mathbf{d}}}\,}}_k||^2>\epsilon )<\infty \) and \(||{{\,\mathrm{{\mathbf{d}}}\,}}_k||^2\in m\mathcal {F}_{+}\), the Borel–Cantelli lemma gives

$$\begin{aligned}{ {{\,\mathrm{{\mathbf{d}}}\,}}_k\overset{a.s.}{\rightarrow }\varvec{0} \quad \hbox {as }k\rightarrow \infty } \end{aligned}$$

which then implies that randomized ADMM converges almost surely. \(\square \)
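
The condition of Theorem 3 is straightforward to evaluate numerically once the mapping matrices of all (equally likely) update orders are available. The sketch below is illustrative only; the function name and interface are ours, and constructing the matrices \({{\,\mathrm{\mathbf{M}}\,}}_i\) from the problem data via (11) is assumed to be done elsewhere.

```python
# Sketch: almost-sure convergence test of Theorem 3 (illustrative, not RACQP code).
import numpy as np

def converges_almost_surely(mapping_matrices):
    """mapping_matrices: list of square arrays M_i, one per (equally likely) update order."""
    n = mapping_matrices[0].shape[0]
    expected_kron = np.zeros((n * n, n * n))
    for M in mapping_matrices:
        expected_kron += np.kron(M, M)            # M_sigma (x) M_sigma for this order
    expected_kron /= len(mapping_matrices)        # uniform expectation over all orders
    rho = max(abs(np.linalg.eigvals(expected_kron)))
    return rho < 1.0, rho                         # True => d_k -> 0 almost surely
```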

To illustrate the stability issues with RAC-ADMM, consider the following example.

Example 3

Consider the following problem

$$\begin{aligned}{ \begin{array}{ll} \max &{}\ {{\,\mathrm{{\mathbf{0}}}\,}}\cdot {{\,\mathrm{{\mathbf{x}}}\,}}\\ \hbox {s.t.} &{}\ {{\,\mathrm{\mathbf{A}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}={{\,\mathrm{{\mathbf{0}}}\,}}\end{array}} \end{aligned}$$

where

$$\begin{aligned} {{\,\mathrm{\mathbf{A}}\,}}=\begin{bmatrix} 1 &{} 1 &{} 1 &{} 1 &{}1 &{}1 \\ 1 &{} 1 &{}1 &{}1 &{}1 &{}1+\gamma \\ 1 &{}1 &{} 1 &{} 1 &{} 1+\gamma &{} 1+\gamma \\ 1 &{} 1 &{}1 &{}1+\gamma &{}1+\gamma &{}1+\gamma \\ 1&{} 1 &{} 1+\gamma &{}1+\gamma &{}1+\gamma &{}1+\gamma \\ 1&{} 1+\gamma &{} 1+\gamma &{}1+\gamma &{}1+\gamma &{}1+\gamma \\ \end{bmatrix} \end{aligned}$$
(15)

Let \([x_0,y_0]\sim N(0,5I)\), \(\beta =1\), \(\gamma =1\), and number of blocks \(p=3\). Consider RP-ADMM with the fixed block composition \([x_1,\ x_2],[x_3,\ x_4],[x_5,\ x_6]\).

For this particular block structure, convergence in expectation gives \(\rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_{RP,\upsilon _1}] })=0.9887>\rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_{RAC}] })=0.8215\). In fact, for all block compositions of this example we have \(\rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_{RAC}] })<\rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_{RP,\upsilon _i}] })\). However, RAC-ADMM does not converge, as shown in Fig. 2, so this example demonstrates that convergence in expectation may not be a sufficient indicator of convergence.

Indeed, if we apply Theorem 3, we find that RAC-ADMM does not converge almost surely for this example, while RP-ADMM does: \(\rho ({{{\,\mathrm{\mathbb {E}}\,}}_{RAC}}{[ {{\,\mathrm{\mathbf{M}}\,}}_\sigma \otimes {{\,\mathrm{\mathbf{M}}\,}}_{\sigma }] })=1.0948>1\) while \(\rho ({{{\,\mathrm{\mathbb {E}}\,}}_{RP}}{[ {{\,\mathrm{\mathbf{M}}\,}}_\sigma \otimes {{\,\mathrm{\mathbf{M}}\,}}_{\sigma }] })=0.9852<1\), which explains the results shown in Fig. 2. In fact, RP-ADMM converges almost surely for all 15 block compositions of this example.

Fig. 2 Effect of variance on convergence for problem (15). Evaluation of \(x_1^k\) (optimal \(x_1^*=0\))

2.3 Variance reduction in RAC-ADMM

The previous section described a sufficient condition for the almost sure convergence of the RAC-ADMM algorithm. This section addresses controllability of the algorithm. More precisely, we ask: given a linearly constrained quadratic problem (LCQP, Eq. 9), what means do we have at our disposal to control the convergence of RAC-ADMM, i.e., how to bound the covariance and how to improve the convergence rate?

2.3.1 Detecting and utilizing a structure in LCQP

Although some problem types inherit a known structure (e.g. network-flow problems), in general the structure is not known. There are many sophisticated techniques for detecting the structure of a matrix that could be applied to improve the performance of RAC-ADMM. Although such elaborate methods have the potential to detect the hidden structure of Hessian and Jacobian matrices almost perfectly, using them or developing our own is beyond the scope of this paper. Instead, we adopt a simple matrix partitioning approach outlined in [27].

In general, for RAC-ADMM we are interested in the structure of the constraint matrix, which can be detected using the following simple approach. Given a constraint matrix \({{\,\mathrm{\mathbf{A}}\,}}\) (describing equalities, inequalities or both), a desirable structure such as the one shown in (16) can be derived by applying a graph partitioning method.

$$\begin{aligned} \begin{array}{ccc} \underbrace{\begin{bmatrix} {{\,\mathrm{\mathbf{V}}\,}}_1 &{} 0 &{}\cdots &{}0 \\ 0 &{} \ddots &{}&{} \vdots \\ \vdots &{} &{} {{\,\mathrm{\mathbf{V}}\,}}_v&{}0 \\ {{\,\mathrm{\mathbf{W}}\,}}_1&{}\cdots &{}{{\,\mathrm{\mathbf{W}}\,}}_v&{}{{\,\mathrm{\mathbf{W}}\,}}_{v+1}\\ \end{bmatrix}}_{\textstyle {{\,\mathrm{\mathbf{A}}\,}}} &{} \underbrace{\begin{bmatrix} {{\,\mathrm{{\mathbf{x}}}\,}}_1\\ \vdots \\ {{\,\mathrm{{\mathbf{x}}}\,}}_v\\ {{\,\mathrm{{\mathbf{x}}}\,}}_{v+1} \end{bmatrix}}_{\textstyle {{\,\mathrm{{\mathbf{x}}}\,}}} &{} =\underbrace{\begin{bmatrix} {{\,\mathrm{{\mathbf{b}}}\,}}_1 \\ \vdots \\ {{\,\mathrm{{\mathbf{b}}}\,}}_v\\ {{\,\mathrm{{\mathbf{b}}}\,}}_{v+1} \end{bmatrix}}_{\textstyle {{\,\mathrm{{\mathbf{b}}}\,}}} \end{array}. \end{aligned}$$
(16)

The outline of the process is as follows:

  1. Build a graph representation of matrix \({{\,\mathrm{\mathbf{A}}\,}}\): each row i and column j is a vertex; vertices are connected with edges if \(a_{i,j} \not = 0\).

  2. Partition the graph using a graph partitioning algorithm or solver, for example [43].

  3. Recreate \({{\,\mathrm{\mathbf{A}}\,}}\) as a block matrix from the graph partitions.

Using graph partitioning as a procedure for decomposing a problem in a way that maximizes modularity has been studied for a while. Although the problem itself is an NP-hard integer program, many efficient algorithms have been proposed that achieve near-optimal performance with good computational scalability [1, 7, 56]. In addition, successful implementations exist, such as those used by GCG (a generic solver for mixed integer programs, part of the SCIP Optimization Suite [63]). Currently our solver RACQP, described in Sect. 3, does not implement an automatic routine for modularity detection, but we plan to add one.
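
As an illustration of the three-step outline above, the following sketch builds the row/column graph of \({{\,\mathrm{\mathbf{A}}\,}}\) and partitions it with an off-the-shelf modularity heuristic (NetworkX's greedy modularity communities standing in for the partitioner of [43]); it is not part of RACQP and all names are ours.

```python
# Sketch: detecting a block structure of A via graph partitioning (illustrative only).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from scipy.sparse import coo_matrix

def detect_block_structure(A):
    """A: constraint matrix (dense or sparse). Returns groups of column indices."""
    A = coo_matrix(A)
    G = nx.Graph()
    # Step 1: one vertex per row ('r', i) and per column ('c', j); edge when a_ij != 0
    G.add_edges_from((("r", i), ("c", j)) for i, j in zip(A.row, A.col))
    # Step 2: partition the graph (modularity maximization heuristic used here)
    communities = greedy_modularity_communities(G)
    # Step 3: read off the column groups; each group induces one block V_i of A
    return [sorted(j for kind, j in comm if kind == "c") for comm in communities]
```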

2.3.2 Smart grouping

Smart-grouping is a pre-processing method in which we use the block structure of the constraint matrix \({{\,\mathrm{\mathbf{A}}\,}}\) to pre-group certain variables into a single “super-variable” (a group of variables which always stay together in one block). Following the block structure shown in (16), we make one super-variable \(\hat{\mathbf {x}}_i\) for each group \({{\,\mathrm{{\mathbf{x}}}\,}}_i\), \(i=1,\dots ,v\). The primal variables \({{\,\mathrm{{\mathbf{x}}}\,}}_{v+1}\) stay shared and are randomly assigned to sub-problems to complement the super-variables to which they are coupled via the block matrices \({{\,\mathrm{\mathbf{W}}\,}}_i\), \(i=1,\dots ,v\). More than one super-variable can be assigned to a single sub-problem, depending on the maximum sub-problem size, if one is defined. Note that matrix partitioning based on \({{\,\mathrm{\mathbf{H}}\,}}+{{\,\mathrm{\mathbf{A}}\,}}^T{{\,\mathrm{\mathbf{A}}\,}}\) may result in a better grouping, but is impractical and thus not considered a viable approach.
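
A minimal sketch of the grouping step follows; the names are hypothetical, and for brevity the shared variables \({{\,\mathrm{{\mathbf{x}}}\,}}_{v+1}\) are scattered uniformly at random rather than matched to the super-variables they are coupled with through the blocks \({{\,\mathrm{\mathbf{W}}\,}}_i\).

```python
# Sketch: assembling RAC blocks from super-variables and shared variables (illustrative).
import random

def assemble_blocks(super_vars, shared_vars, num_blocks, rng=random):
    """super_vars: list of index lists, one per super-variable x_hat_i;
       shared_vars: indices of the shared variables x_{v+1}."""
    blocks = [[] for _ in range(num_blocks)]
    order = list(range(len(super_vars)))
    rng.shuffle(order)                              # random order of super-variables
    for k, i in enumerate(order):                   # round-robin: several per block allowed
        blocks[k % num_blocks].extend(super_vars[i])
    for j in shared_vars:                           # shared variables scattered at random
        blocks[rng.randrange(num_blocks)].append(j)
    return blocks
```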

2.3.3 Partial Lagrangian

The idea of smart-grouping described in the previous section can be further extended by means of the partial Lagrangian approach. Consider an LCQP (6) having the constraint matrix \({{\,\mathrm{\mathbf{A}}\,}}\) structured as shown in (16). Now consider the scenario in which we split the matrix \({{\,\mathrm{\mathbf{A}}\,}}\) such that the block \({{\,\mathrm{\mathbf{W}}\,}}=[{{\,\mathrm{\mathbf{W}}\,}}_1,\dots ,{{\,\mathrm{\mathbf{W}}\,}}_{v+1}]\) is admitted by the augmented Lagrangian while the rest of the constraints (blocks \({{\,\mathrm{\mathbf{V}}\,}}_i\)) are solved exactly as part of a sub-problem, i.e. sub-problem i is solved as

$$\begin{aligned} {{{\,\mathrm{{\mathbf{x}}}\,}}_i^{k+1}=\arg \min \{L_{\mathcal {P}} ({{\,\mathrm{{\mathbf{x}}}\,}}_1^{k+1},\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_i,\dots ,{{\,\mathrm{{\mathbf{x}}}\,}}_p^k;{{\,\mathrm{{\mathbf{y}}}\,}}^k){{\,\mathrm{\ |\ }\,}}{{\,\mathrm{\mathbf{V}}\,}}_j\hat{\mathbf {x}}_j={{\,\mathrm{{\mathbf{b}}}\,}}_j, j\in \mathcal {J},\ {{\,\mathrm{{\mathbf{x}}}\,}}_i\in {{\,\mathrm{\mathcal {X}}\,}}_i\},} \end{aligned}$$

where \(\mathcal {J}\) is the set of indices of the super-variables \(\hat{\mathbf {x}}_j\) constituting sub-problem i at a given iteration. The partial augmented Lagrangian is defined as

$$\begin{aligned}{ L_{\mathcal {P}}({{\,\mathrm{{\mathbf{x}}}\,}},{{\,\mathrm{{\mathbf{y}}}\,}})=\frac{1}{2}{{\,\mathrm{{\mathbf{x}}}\,}}^T {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}\ +{{\,\mathrm{{\mathbf{c}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{y}}}\,}}^T({{\,\mathrm{\mathbf{W}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{v+1})+\frac{\beta }{2}\Vert {{\,\mathrm{\mathbf{W}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{v+1}\Vert ^2.} \end{aligned}$$

There are two advantages of the partial Lagrangian approach. First, the rank of the constraint matrix used for the global constraints (matrix \({{\,\mathrm{\mathbf{W}}\,}}\)) is lower than the rank of \({{\,\mathrm{\mathbf{A}}\,}}\), and the empirical results (Sect. 4) suggest a strong correlation between the rank of the constraint matrix and the stability of the algorithm and its rate of convergence. Second, the local constraints (matrices \({{\,\mathrm{\mathbf{V}}\,}}_i\)) define a feasible region in which \({{\,\mathrm{{\mathbf{x}}}\,}}_i\) must lie, and that region need not be unbounded. In other words, even when the variables themselves are unbounded (i.e. \({{\,\mathrm{{\mathbf{x}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^n\)), the local constraints may place implicit bounds on the variation of \({{\,\mathrm{{\mathbf{x}}}\,}}_i\).

Empirical results of the partial Lagrangian applied to mixed integer problems (Sect. 4.2) show the approach to be very useful. In such a scenario, the local constraints are the sets of rules that relate integer variables, while constraints between continuous variables are left global. In the case of problems where such a clean separation does not exist, or when problems are purely integer, the problem structure guides the decision on which constraints are kept local and which are kept global.

Although shown to be useful, the partial Lagrangian method suffers from being a mostly heuristic approach whose success depends on the quality of the solution methods applied to the sub-problems: in the case of continuous problems a simple barrier-based methodology can be applied, but for mixed integer problems (MIP) the sub-problems require a more complex solution (e.g. an external MIP solver).

Example 4

To illustrate the usefulness of the smart grouping and partial Lagrangian approaches, consider the following experiments done on selected instances taken from the Mittelmann LP test set [73], augmented with a diagonal quadratic objective to form a standard LCQP (9).

For each instance, a constraint matrix (\({{\,\mathrm{\mathbf{A}}\,}}_{eq}\), \({{\,\mathrm{\mathbf{A}}\,}}_{ineq}\) or \({{\,\mathrm{\mathbf{A}}\,}}=[{{\,\mathrm{\mathbf{A}}\,}}_{eq};{{\,\mathrm{\mathbf{A}}\,}}_{ineq}]\)) was subjected to the graph-partitioning procedure outlined in Sect. 2.3.1, and the instance was then solved using smart grouping (“s_grp”) and the partial Lagrangian approach (“partial_L”). Table 2 reports the number of iterations required by the RAC-ADMM algorithm to find a solution satisfying the primal/dual residual tolerance of \(\epsilon =10^{-4}\). If a solution was not found, the reason is noted (“time limit” for exceeding the sub-problem maximum run-time and “iter. limit” for exceeding the maximum number of iterations). Fields showing “divergence” or “oscillation” mark experiments for which the RAC-ADMM algorithm exhibited unstable behavior. The baseline for the comparison is the default approach (sub-problems created at random), shown in column “Default RAC”.

Table 2 Number of iterations until the termination criterion is met for various benchmark instances

The partial Lagrangian approach has the potential to improve both stability and rate of convergence. However, before generalizing, one needs to consider the following: stability (i.e. convergence) of the RAC-ADMM algorithm is a function, among other factors, of the mapping operators (matrices \({{\,\mathrm{\mathbf{M}}\,}}_{\sigma }\), Eq. 11), which are in turn functions of the constraint matrix of the problem being solved. In the case of the partial Lagrangian methodology, this matrix is \({{\,\mathrm{\mathbf{W}}\,}}\), meaning that if \({{\,\mathrm{\mathbf{W}}\,}}\) produces an unstable system (e.g. the conditions set by Theorem 3 are not met), no implicit bounding can help to stabilize it. At the same time, a “correct” \({{\,\mathrm{\mathbf{W}}\,}}\) can stabilize a problem which is unstable in its original form. Consider a problem that is not convergent in its original formulation. Amending \({{\,\mathrm{\mathbf{A}}\,}}\) by moving “bad” rows to sub-problems, thus constructing a \({{\,\mathrm{\mathbf{W}}\,}}\) that produces mapping matrices satisfying \(\rho ({{{\,\mathrm{\mathbb {E}}\,}}_{}}{[ {{\,\mathrm{\mathbf{M}}\,}}_\sigma \otimes {{\,\mathrm{\mathbf{M}}\,}}_\sigma ] })<1\), makes such a problem stable. Theoretical work on structures of \({{\,\mathrm{\mathbf{W}}\,}}\) and on conditions that stabilize and improve RAC-ADMM is in progress.

Using smart grouping alone does not make RAC-ADMM unstable, but in some cases it increases the number of iterations needed to satisfy the feasibility tolerance, a consequence of having less randomness, as described by Corollary 2.

3 RAC-ADMM quadratic programming solver

In this section we outline the implementation of the RAC-ADMM algorithm for linearly constrained quadratic problems as defined below:

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{{\mathbf{x}}}\,}}} &{} \frac{1}{2}{{\,\mathrm{{\mathbf{x}}}\,}}^T {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}+ {{\,\mathrm{{\mathbf{c}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}\\ \text{ s.t. }&{} {{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}={{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\\ &{} {{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}\le {{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\\ &{} {{\,\mathrm{{\mathbf{x}}}\,}}\in {{\,\mathrm{\mathcal {X}}\,}}\end{array} \end{aligned}$$
(17)

where symmetric positive semidefinite matrix \({{\,\mathrm{\mathbf{H}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{n\times n}\) and vector \({{\,\mathrm{{\mathbf{c}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^n\) define the quadratic objective while matrix \({{\,\mathrm{\mathbf{A}}\,}}_{eq}\in {{\,\mathrm{\mathbb {R}}\,}}^{m\times n}\) and the vector \({{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\in {{\,\mathrm{\mathbb {R}}\,}}^m\) describe equality constraints and matrix \({{\,\mathrm{\mathbf{A}}\,}}_{ineq}\in {{\,\mathrm{\mathbb {R}}\,}}^{s\times n}\) and the vector \({{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\in {{\,\mathrm{\mathbb {R}}\,}}^s\) describe inequality constraints. Primal variables \({{\,\mathrm{{\mathbf{x}}}\,}}\in {{\,\mathrm{\mathcal {X}}\,}}\), can be integer or continuous, thus the constraint set \({{\,\mathrm{\mathcal {X}}\,}}\) is the Cartesian product of nonempty sets \({{\,\mathrm{\mathcal {X}}\,}}_i\subseteq {{\,\mathrm{\mathbb {R}}\,}}\) or \({{\,\mathrm{\mathcal {X}}\,}}_i\subseteq {{\,\mathrm{\mathbb {Z}}\,}}\), \(i=1,\dots ,n\). QP problems arise from many important applications themselves, and are also fundamental in general nonlinear optimization.

Introducing auxiliary variables \({{\,\mathrm{{\mathbf{s}}}\,}}\) and \(\hat{\mathbf {x}}\) results in the following equivalent of (17):

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{{\mathbf{x}}}\,}}, \hat{\mathbf {x}}, {{\,\mathrm{{\mathbf{s}}}\,}}} &{} \frac{1}{2}{{\,\mathrm{{\mathbf{x}}}\,}}^T {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}+ {{\,\mathrm{{\mathbf{c}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}\\ \text{ s.t. }&{} {{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}={{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\\ &{} {{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}+{{\,\mathrm{{\mathbf{s}}}\,}}= {{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\\ &{} {{\,\mathrm{{\mathbf{x}}}\,}}-\hat{\mathbf {x}} = {{\,\mathrm{{\mathbf{0}}}\,}}\\ &{} \hat{\mathbf {x}}\in {{\,\mathrm{\mathcal {X}}\,}},\ {{\,\mathrm{{\mathbf{s}}}\,}}\ge {{\,\mathrm{{\mathbf{0}}}\,}}, {{\,\mathrm{{\mathbf{x}}}\,}}\hbox {free} \end{array}\nonumber \\ \end{aligned}$$
(18)

where the augmented Lagrangian, \(L_{\beta } ({{\,\mathrm{{\mathbf{x}}}\,}};{{\,\mathrm{{\mathbf{s}}}\,}};{{\,\mathrm{{\mathbf{y}}}\,}}_{eq};{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq};{{\,\mathrm{{\mathbf{z}}}\,}})\), is given as

$$\begin{aligned} \begin{array}{cl} L_{\beta } (\cdot ) := &{}\frac{1}{2}{{\,\mathrm{{\mathbf{x}}}\,}}^T {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}+ {{\,\mathrm{{\mathbf{c}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}\\ &{}-\,{{\,\mathrm{{\mathbf{y}}}\,}}_{eq}^T({{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{eq}) -{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}^T({{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}+{{\,\mathrm{{\mathbf{s}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}) -{{\,\mathrm{{\mathbf{z}}}\,}}^T({{\,\mathrm{{\mathbf{x}}}\,}}-\hat{\mathbf {x}})\\ &{}+ \,\frac{\beta }{2} (\Vert {{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\Vert ^2 + \Vert {{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}+{{\,\mathrm{{\mathbf{s}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\Vert ^2+\Vert {{\,\mathrm{{\mathbf{x}}}\,}}-\hat{\mathbf {x}}\Vert ^2) \end{array}\nonumber \\ \end{aligned}$$
(19)

The RAC-ADMM, or simply RAC, quadratic programming solver (RACQP) admits continuous, binary and mixed integer problems. Algorithm 1 outlines the solver: the solution vector is initialized to \(-\infty \) at the beginning of the algorithm, and the main RAC-ADMM loop spans lines 2–24. The main loop calls different procedures to optimize the blocks of \({{\,\mathrm{{\mathbf{x}}}\,}}\) (lines 4–16), followed by updates of the slack and then the dual variables.

Algorithm 1 RACQP solver (pseudocode)

The type of block-optimizing procedure called to update the blocks depends on the structure of the problem being solved. The default multi-block implementation for continuous problems is based on the Cholesky factorization, with a specialized one-block variant for very sparse problems that solves the iterates using the LDL factorization. Continuous problems that exhibit a structure (see Sect. 2.3.1) can be addressed using the partial Lagrangian approach. In such a case, sub-problems are solved using either a simple interior point method based methodology or, when sub-problems include only equality constraints, by employing Cholesky to solve the KKT conditions. In addition to the aforementioned methods, the solver supports calls to external solver(s) and specialized heuristic solutions to handle hard sub-problem instances.

Binary and mixed integer problems require specialized optimization techniques (e.g. branch-and-bound) whose implementations are beyond the scope of this paper, so we have decided to delegate the optimization of blocks with mixed variables to an external solver. Mixed integer problems are addressed by using the partial Lagrangian to solve for the primal variables and a simple procedure that helps to escape local optima, as described by Algorithm 2.

Note that the algorithms given in this section are pseudo-algorithms which describe the functionality of the solver rather than the actual implementation. The implementation can be downloaded from [61].

3.1 Solving continuous problems

For continuous QP problems, we consider (18) where \({{\,\mathrm{\mathcal {X}}\,}}\) represents possible simple lower and upper bounds on each individual variable:

$$\begin{aligned} {{{\,\mathrm{{\mathbf{l}}}\,}}_i\le {\hat{\mathbf {x}}_i}\le {{\,\mathrm{{\mathbf{u}}}\,}}_i,\ i=1,\dots ,n.} \end{aligned}$$

Continuous problems are solved as described by Algorithm 1, which repeats three steps until a termination criterion is met: first update or optimize the primal variables \({{\,\mathrm{{\mathbf{x}}}\,}}\) in the RAC fashion, then update \({\hat{\mathbf {x}}}\) and \({{\,\mathrm{{\mathbf{s}}}\,}}\) in closed form, and finally update the dual variables \({{\,\mathrm{{\mathbf{y}}}\,}}_{eq}\), \({{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}\) and \({{\,\mathrm{{\mathbf{z}}}\,}}\).

Step 1 Update primal variables \({{\,\mathrm{{\mathbf{x}}}\,}}\) Let \(\omega _i\in \Omega \) be a vector of indices of block i, \(i=1,\dots ,p\), where p is the number of blocks. The set of vectors \(\Omega \) is randomly generated (with smart grouping when applicable, as described in Sect. 2.3.2) at each iteration of Algorithm 1 (lines 2–24). Let \({{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}\) be the sub-vector of \({{\,\mathrm{{\mathbf{x}}}\,}}\) constructed of components of \({{\,\mathrm{{\mathbf{x}}}\,}}\) with indices \(\omega _i\), and let \({{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}\) be the sub-vector of \({{\,\mathrm{{\mathbf{x}}}\,}}\) with indices not chosen by \(\omega _i\). Algorithm 1 uses either Cholesky factorization or the partial Lagrangian to solve each block of variables \({{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}\) while holding \({{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}\) fixed. By rewriting (19) to reflect the sub-vectors, we get

$$\begin{aligned} \begin{array}{ll} L_\beta (\cdot )&{}=[{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i};{{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}]^T(\frac{1}{2}{{\,\mathrm{\mathbf{H}}\,}}+\frac{\beta }{2}{{\,\mathrm{\mathbf{A}}\,}}_{eq}^T{{\,\mathrm{\mathbf{A}}\,}}_{eq}+\frac{\beta }{2}{{\,\mathrm{\mathbf{A}}\,}}_{ineq}^T{{\,\mathrm{\mathbf{A}}\,}}_{ineq}+\frac{\beta }{2}{{\,\mathrm{\mathbf{I}}\,}})[{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i};{{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}]\\ &{}+\,({{\,\mathrm{{\mathbf{c}}}\,}}-{{\,\mathrm{\mathbf{A}}\,}}_{eq}^T{{\,\mathrm{{\mathbf{y}}}\,}}_{eq}-{{\,\mathrm{\mathbf{A}}\,}}_{ineq}^T{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}-\beta {{\,\mathrm{\mathbf{A}}\,}}_{eq}^T{{\,\mathrm{{\mathbf{b}}}\,}}_{eq}-\beta {{\,\mathrm{\mathbf{A}}\,}}_{ineq}^T{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq} )^T[{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i};{{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}]\\ &{}+\,(\beta {{\,\mathrm{\mathbf{A}}\,}}_{ineq}^T{{\,\mathrm{{\mathbf{s}}}\,}}-{{\,\mathrm{{\mathbf{z}}}\,}}-\beta {\hat{\mathbf {x}}})^T[{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i};{{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}]\\ &{}=\frac{1}{2}[{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i};{{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}]^T{{\,\mathrm{\mathbf{Q}}\,}}[{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i};{{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}]+{{\,\mathrm{{\mathbf{q}}}\,}}^T[{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i};{{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}] \end{array} \end{aligned}$$
(20)

where \( {{{\,\mathrm{\mathbf{Q}}\,}}=({{\,\mathrm{\mathbf{H}}\,}}+\beta {{\,\mathrm{\mathbf{A}}\,}}_{eq}^T{{\,\mathrm{\mathbf{A}}\,}}_{eq}+\beta {{\,\mathrm{\mathbf{A}}\,}}_{ineq}^T{{\,\mathrm{\mathbf{A}}\,}}_{ineq}+\beta I)} \). Then we can minimize in \({{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}\) by solving

$$\begin{aligned} {{\,\mathrm{\mathbf{Q}}\,}}_{\omega _i,\omega _i}{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}=-({{\,\mathrm{{\mathbf{q}}}\,}}_{\omega _i}+{\hat{{\,\mathrm{{\mathbf{q}}}\,}}}) \end{aligned}$$
(21)

using Cholesky factorization and back substitution. The linear term resulting from \({{\,\mathrm{\mathbf{Q}}\,}}\), \({\hat{{\,\mathrm{{\mathbf{q}}}\,}}}\), is given as

$$\begin{aligned} {\hat{{\,\mathrm{{\mathbf{q}}}\,}}}= ({{\,\mathrm{\mathbf{H}}\,}}{\hat{{\,\mathrm{{\mathbf{x}}}\,}}})_{\omega _i}+\beta {{\,\mathrm{\mathbf{A}}\,}}_{\omega _i}^T({{\,\mathrm{\mathbf{A}}\,}}{\tilde{{\,\mathrm{{\mathbf{x}}}\,}}})- ({{\,\mathrm{\mathbf{H}}\,}}_{\omega _i,\omega _i}+\beta {{\,\mathrm{\mathbf{A}}\,}}_{\omega _i}^T{{\,\mathrm{\mathbf{A}}\,}}_{\omega _i}){{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i} \end{aligned}$$
(22)

where \( {{{\,\mathrm{\mathbf{A}}\,}}=[{{\,\mathrm{\mathbf{A}}\,}}_{eq},{ 0 };{{\,\mathrm{\mathbf{A}}\,}}_{ineq},{{\,\mathrm{\mathbf{I}}\,}}]} \) and \({\tilde{{\,\mathrm{{\mathbf{x}}}\,}}}=[{{\,\mathrm{{\mathbf{x}}}\,}};{{\,\mathrm{{\mathbf{s}}}\,}}]\). A square sub-matrix \({{\,\mathrm{\mathbf{H}}\,}}_{\omega _i,\omega _i}\) and column sub-matrix \({{\,\mathrm{\mathbf{A}}\,}}_{\omega _i}\) are constructed by extracting \(\omega _i\) rows and columns from \({{\,\mathrm{\mathbf{H}}\,}}\) and \({{\,\mathrm{\mathbf{A}}\,}}\) respectively.

When \(p=1\), i.e. we are solving the problem using a single-block approach, we solve the block utilizing the LDL factorization to avoid calculating \({{\,\mathrm{\mathbf{A}}\,}}^T{{\,\mathrm{\mathbf{A}}\,}}\). Although the factorization can be relatively expensive when the problem is large, it is done only once and re-used in each iteration of the algorithm. From (19), we find the minimizer \({{\,\mathrm{{\mathbf{x}}}\,}}\) by solving

$$\begin{aligned} {{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}=-{{\,\mathrm{{\mathbf{q}}}\,}}\end{aligned}$$
(23)

where

$$\begin{aligned}{ {{\,\mathrm{{\mathbf{q}}}\,}}={{\,\mathrm{{\mathbf{c}}}\,}}-{{\,\mathrm{\mathbf{A}}\,}}_{eq}^T{{\,\mathrm{{\mathbf{y}}}\,}}_{eq}-{{\,\mathrm{\mathbf{A}}\,}}_{ineq}^T{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}-{{\,\mathrm{{\mathbf{z}}}\,}}-\beta {{\,\mathrm{\mathbf{A}}\,}}_{eq}^T{{\,\mathrm{{\mathbf{b}}}\,}}_{eq}-\beta {{\,\mathrm{\mathbf{A}}\,}}_{ineq}^T{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq} +\beta {{\,\mathrm{\mathbf{A}}\,}}_{ineq}^T{{\,\mathrm{{\mathbf{s}}}\,}}-\beta {\hat{\mathbf {x}}}. } \end{aligned}$$

With \( {{{\,\mathrm{\mathbf{A}}\,}}=[{{\,\mathrm{\mathbf{A}}\,}}_{eq};{{\,\mathrm{\mathbf{A}}\,}}_{ineq}]} \) we can express the equivalent condition to (23) with

$$\begin{aligned} \begin{array}{ccc} \begin{bmatrix}({{\,\mathrm{\mathbf{H}}\,}}+\beta {{\,\mathrm{\mathbf{I}}\,}}) &{} \sqrt{\beta }{{\,\mathrm{\mathbf{A}}\,}}^T\\ \sqrt{\beta }{{\,\mathrm{\mathbf{A}}\,}}&{} -{{\,\mathrm{\mathbf{I}}\,}}\end{bmatrix} &{} \begin{bmatrix}{{\,\mathrm{{\mathbf{x}}}\,}}\\ {{\,\mathrm{\varvec{\mu }}\,}}\end{bmatrix} &{} =\begin{bmatrix}-{{\,\mathrm{{\mathbf{q}}}\,}}\\ {{\,\mathrm{{\mathbf{0}}}\,}}\end{bmatrix} \end{array}. \end{aligned}$$
(24)

We factorize the left-hand side of the above expression once and use the resulting factors to find \({{\,\mathrm{{\mathbf{x}}}\,}}\) by back substitution at each iteration of the algorithm. For single-block RACQP, the LDL approach described above replaces lines 3–16 in Algorithm 1.
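
The factor-once/solve-many pattern is sketched below. For brevity the sketch uses SciPy's sparse LU factorization in place of the LDL factorization used by the solver; all names are ours.

```python
# Sketch: single-block (p = 1) mode, system (24) factorized once, reused every iteration.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def factorize_kkt(H, A, beta):
    """H (n x n) and A = [A_eq; A_ineq] (m x n) given as scipy.sparse matrices."""
    n, m = H.shape[0], A.shape[0]
    K = sp.bmat([[H + beta * sp.eye(n), np.sqrt(beta) * A.T],
                 [np.sqrt(beta) * A,    -sp.eye(m)]], format="csc")
    return splu(K)                                   # done once, before the main loop

def solve_single_block(kkt, q):
    rhs = np.concatenate([-q, np.zeros(kkt.shape[0] - q.size)])
    return kkt.solve(rhs)[: q.size]                  # x is the first n components
```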

Furthermore, if \( {{{\,\mathrm{\mathbf{H}}\,}}} \) is diagonal, one can rewrite the system as

$$\begin{aligned} \begin{array}{ccc} \begin{bmatrix}{{\,\mathrm{\mathbf{I}}\,}}&{} ({{\,\mathrm{\mathbf{H}}\,}}+\beta I)^{-1}\sqrt{\beta }{{\,\mathrm{\mathbf{A}}\,}}^T\\ \sqrt{\beta }{{\,\mathrm{\mathbf{A}}\,}}&{} -{{\,\mathrm{\mathbf{I}}\,}}\end{bmatrix} &{} \begin{bmatrix}{{\,\mathrm{{\mathbf{x}}}\,}}\\ {{\,\mathrm{\varvec{\mu }}\,}}\end{bmatrix} &{} =\begin{bmatrix}-{{\,\mathrm{{\mathbf{q}}}\,}}\\ {{\,\mathrm{{\mathbf{0}}}\,}}\end{bmatrix} \end{array}. \end{aligned}$$
(25)

Then we can factorize the matrix \( {({{\,\mathrm{\mathbf{I}}\,}}+\beta {{\,\mathrm{\mathbf{A}}\,}}({{\,\mathrm{\mathbf{H}}\,}}+\beta {{\,\mathrm{\mathbf{I}}\,}})^{-1}{{\,\mathrm{\mathbf{A}}\,}}^T)} \) to solve the system, which is extremely effective when the number of constraints is very small and/or \( {{{\,\mathrm{\mathbf{A}}\,}}} \) is sparse, since \( {({{\,\mathrm{\mathbf{H}}\,}}+\beta {{\,\mathrm{\mathbf{I}}\,}})^{-1}} \) is diagonal and does not change the sparsity of \( {{{\,\mathrm{\mathbf{A}}\,}}} \).
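
A sketch of this elimination is given below (names are ours): the multiplier is obtained from the small factorized system and \({{\,\mathrm{{\mathbf{x}}}\,}}\) is then recovered component-wise.

```python
# Sketch: diagonal-H variant of (25), factorizing only (I + beta A (H+beta I)^{-1} A^T).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def factorize_schur(h_diag, A, beta):
    d_inv = 1.0 / (h_diag + beta)                            # (H + beta I)^{-1}, diagonal
    S = sp.eye(A.shape[0]) + beta * (A @ sp.diags(d_inv) @ A.T)
    return splu(sp.csc_matrix(S)), d_inv

def solve_x_diag(schur, d_inv, A, q, beta):
    rhs = -np.sqrt(beta) * (A @ (d_inv * q))                 # eliminate x from (25)
    mu = schur.solve(rhs)
    return d_inv * (-q - np.sqrt(beta) * (A.T @ mu))         # recover x
```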

The partial Lagrangian approach to solving the \({{\,\mathrm{{\mathbf{x}}}\,}}\) blocks, described in Sect. 2.3.3, uses the same implementation as the Cholesky approach described above, with additional steps that build local constraints reflecting the free and fixed components of \({{\,\mathrm{{\mathbf{x}}}\,}}\), \({{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}\) and \({{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}\) respectively. The optimization problem of the partial Lagrangian is formulated as

$$\begin{aligned} \begin{array}{lrl} {{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}^* =&{} \hbox {arg min}&{}\frac{1}{2}{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}^T{ {{\,\mathrm{\mathbf{Q}}\,}}^L_{\omega _i,\omega _i}}{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}+({{\,\mathrm{{\mathbf{q}}}\,}}^L_{\omega _i}+\hat{{\,\mathrm{{\mathbf{q}}}\,}}^L)^T{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}\\ &{} \hbox {s.t. } &{}{{\,\mathrm{\mathbf{A}}\,}}^L_{eq,\ \omega _i}{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}={{\,\mathrm{{\mathbf{b}}}\,}}^L_{eq}-{{\,\mathrm{\mathbf{A}}\,}}^L_{eq,\ -\omega _i}{{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}\\ &{}&{}{{\,\mathrm{\mathbf{A}}\,}}^L_{ineq,\ \omega _i}{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}\le {{\,\mathrm{{\mathbf{b}}}\,}}^L_{ineq}-{{\,\mathrm{\mathbf{A}}\,}}^L_{ineq,\ -\omega _i}{{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}\\ &{}&{}{{\,\mathrm{{\mathbf{l}}}\,}}_{\omega _i}\le {{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}\le {{\,\mathrm{{\mathbf{u}}}\,}}_{\omega _i} \end{array} \end{aligned}$$
(26)

with \({{\,\mathrm{\mathbf{Q}}\,}}^L,\ {{\,\mathrm{{\mathbf{q}}}\,}}^L,\ {\hat{{\,\mathrm{{\mathbf{q}}}\,}}^L},\ {{\,\mathrm{\mathbf{A}}\,}}^L_{eq},\ {{\,\mathrm{{\mathbf{b}}}\,}}^L_{eq}\) and \({{\,\mathrm{\mathbf{A}}\,}}^L_{ineq},\ {{\,\mathrm{{\mathbf{b}}}\,}}^L_{ineq}\) describing the local objective, equality and inequality constraints, respectively. Note that the partial Lagrangian procedure is used for both continuous and mixed integer problems. In the case of the former we set \({{\,\mathrm{\mathcal {X}}\,}}={{\,\mathrm{\mathbb {R}}\,}}^n\), while for the latter we let \({{\,\mathrm{\mathcal {X}}\,}}_i\subseteq {{\,\mathrm{\mathbb {R}}\,}}\) and implicitly enforce the bounds. The blocks are solved either by an external solver (e.g. Gurobi) or by using Cholesky to solve the KKT conditions when \({{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}\) is unbounded.

Step 2 Update auxiliary variables \({\hat{\mathbf {x}}}\) With all variables but \({\hat{\mathbf {x}}}\) fixed, from augmented Lagrangian (19) we find that the optimal vector \({{\,\mathrm{{\mathbf{l}}}\,}}\le {\hat{\mathbf {x}}}\le {{\,\mathrm{{\mathbf{u}}}\,}}\) can be found by solving the optimization problem

$$\begin{aligned} {{\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{l}}}\,}}\le {\hat{\mathbf {x}}}\le {{\,\mathrm{{\mathbf{u}}}\,}}}}\ \frac{\beta }{2}{\hat{\mathbf {x}}}^T{\hat{\mathbf {x}}}+({{\,\mathrm{{\mathbf{z}}}\,}}-\beta {{\,\mathrm{{\mathbf{x}}}\,}})^T{\hat{\mathbf {x}}}.} \end{aligned}$$

The problem is separable and \({\hat{\mathbf {x}}}\) has a closed form solution given by

$$\begin{aligned} {{\hat{\mathbf {x}}}=\min \big \{\max \{{{\,\mathrm{{\mathbf{l}}}\,}},{{\,\mathrm{{\mathbf{x}}}\,}}-\frac{1}{\beta }{{\,\mathrm{{\mathbf{z}}}\,}}\},{{\,\mathrm{{\mathbf{u}}}\,}}\big \}} \end{aligned}$$

Step 3 Update slack variables \({{\,\mathrm{{\mathbf{s}}}\,}}\) Similarly to the previous step, with all variables but \({{\,\mathrm{{\mathbf{s}}}\,}}\) fixed, the optimal vector \({{\,\mathrm{{\mathbf{s}}}\,}}\) is found by solving

$$\begin{aligned} {{\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{s}}}\,}}\ge 0}}\, \frac{\beta }{2}{{\,\mathrm{{\mathbf{s}}}\,}}^T{{\,\mathrm{{\mathbf{s}}}\,}}+(-{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}+\beta ({{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}))^T{{\,\mathrm{{\mathbf{s}}}\,}}.} \end{aligned}$$

The problem is separable and \({{\,\mathrm{{\mathbf{s}}}\,}}\) has a closed form solution given by

$$\begin{aligned} {{{\,\mathrm{{\mathbf{s}}}\,}}=\max \big \{0,\frac{1}{\beta }{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}+{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}-{{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}\big \}.} \end{aligned}$$
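
Putting Steps 1–3 together, one iteration of the continuous solver can be sketched as follows. The block solve is left as a callable implementing (21)–(22), and the dual ascent step is the standard multiplier update implied by the Lagrangian (19); all names are ours and the actual implementation is Algorithm 1 [61].

```python
# Sketch: one RAC-ADMM iteration for continuous problems (illustrative only).
import numpy as np

def rac_iteration(state, data, blocks, solve_block, beta):
    x, s, x_hat, y_eq, y_ineq, z = state
    # Step 1: optimize each randomly assembled block x_{omega_i} via (21)-(22)
    for omega in blocks:
        x[omega] = solve_block(omega, x, s, x_hat, y_eq, y_ineq, z)
    # Step 2: closed-form update of x_hat (projection onto the box [l, u])
    x_hat = np.clip(x - z / beta, data["l"], data["u"])
    # Step 3: closed-form update of the slacks (projection onto s >= 0)
    s = np.maximum(0.0, y_ineq / beta + data["b_ineq"] - data["A_ineq"] @ x)
    # dual ascent on the multipliers of (19)
    y_eq = y_eq - beta * (data["A_eq"] @ x - data["b_eq"])
    y_ineq = y_ineq - beta * (data["A_ineq"] @ x + s - data["b_ineq"])
    z = z - beta * (x - x_hat)
    return x, s, x_hat, y_eq, y_ineq, z
```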

3.1.1 Termination criteria for continuous problems

Termination criteria for continuous problems include a maximum run-time limit, a maximum number of iterations and a primal-dual solution tolerance. RACQP terminates when at least one criterion is met. For the primal-dual solution criterion, RACQP uses the optimality conditions of problem (18) to define the primal and dual relative residuals at iteration k,

$$\begin{aligned} \begin{array}{ll} r_{\mathrm{prim}}^k&{}:= \max ( r_{{{\,\mathrm{\mathbf{A}}\,}}_{eq}}^k, r_{{{\,\mathrm{\mathbf{A}}\,}}_{ineq}}^k, r_{bounds}^k),\\ r_{\mathrm{dual}}^k &{}:= \frac{\displaystyle \Vert {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}^k+{{\,\mathrm{{\mathbf{c}}}\,}}-{{\,\mathrm{\mathbf{A}}\,}}_{eq}^T{{\,\mathrm{{\mathbf{y}}}\,}}_{eq}^k-{{\,\mathrm{\mathbf{A}}\,}}_{ineq}^T{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}^k-{{\,\mathrm{{\mathbf{z}}}\,}}^k\Vert _\infty }{\displaystyle 1+\max (\Vert {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}^k\Vert _\infty , \Vert {{\,\mathrm{{\mathbf{c}}}\,}}\Vert _\infty ,\ \Vert {{\,\mathrm{\mathbf{A}}\,}}_{eq}^T{{\,\mathrm{{\mathbf{y}}}\,}}_{eq}^k\Vert _\infty ,\ \Vert {{\,\mathrm{\mathbf{A}}\,}}_{ineq}^T{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}^k\Vert _\infty ,\ \Vert {{\,\mathrm{{\mathbf{z}}}\,}}^k\Vert _\infty )} \end{array} \end{aligned}$$
(27)

where

$$\begin{aligned}{ \begin{array}{ll} r_{{{\,\mathrm{\mathbf{A}}\,}}_{eq}}^k&{}= \frac{\displaystyle \Vert {{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}^k-{{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\Vert _\infty }{\displaystyle 1+\max (\Vert {{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}^k\Vert _\infty ,\ \Vert {{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\Vert _\infty )}\\ r_{{{\,\mathrm{\mathbf{A}}\,}}_{ineq}}^k&{}=\frac{\displaystyle \Vert {{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}^k+{{\,\mathrm{{\mathbf{s}}}\,}}^k-{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\Vert _\infty }{\displaystyle 1+\max (\Vert {{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}^k+{{\,\mathrm{{\mathbf{s}}}\,}}^k\Vert _\infty ,\ \Vert {{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\Vert _\infty )}\\ r_{bounds}^k&{}=\frac{\displaystyle \Vert {{\,\mathrm{{\mathbf{x}}}\,}}^k-{\hat{\mathbf {x}}^k}\Vert _\infty }{\displaystyle 1+\max (\Vert {{\,\mathrm{{\mathbf{x}}}\,}}^k\Vert _\infty ,\ \Vert {\hat{\mathbf {x}}^k}\Vert _\infty )} \end{array}} \end{aligned}$$

RACQP is set to terminate when the residuals become smaller than some tolerance level \(\epsilon >0\):

$$\begin{aligned} \max (r_{\mathrm{prim}}^k,\ r_{\mathrm{dual}}^k)<\epsilon . \end{aligned}$$
(28)

Note that the aforementioned residuals are similar to those used in [10, 65] with relative and absolute residual tolerance (\(\epsilon _{abs},\epsilon _{rel}\)) set to be equal.
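
A direct transcription of the residuals (27) and of the stopping test (28) is sketched below; helper names are ours.

```python
# Sketch: relative primal/dual residuals (27) and the stopping test (28).
import numpy as np

def inf_norm(v):
    return np.linalg.norm(v, np.inf) if v.size else 0.0

def residuals(H, c, A_eq, b_eq, A_ineq, b_ineq, x, s, x_hat, y_eq, y_ineq, z):
    r_eq = inf_norm(A_eq @ x - b_eq) / (1 + max(inf_norm(A_eq @ x), inf_norm(b_eq)))
    r_ineq = inf_norm(A_ineq @ x + s - b_ineq) / (
        1 + max(inf_norm(A_ineq @ x + s), inf_norm(b_ineq)))
    r_bnd = inf_norm(x - x_hat) / (1 + max(inf_norm(x), inf_norm(x_hat)))
    dual = H @ x + c - A_eq.T @ y_eq - A_ineq.T @ y_ineq - z
    r_dual = inf_norm(dual) / (1 + max(inf_norm(H @ x), inf_norm(c),
                                       inf_norm(A_eq.T @ y_eq),
                                       inf_norm(A_ineq.T @ y_ineq), inf_norm(z)))
    return max(r_eq, r_ineq, r_bnd), r_dual

def should_terminate(r_prim, r_dual, eps=1e-5):
    return max(r_prim, r_dual) < eps                 # stopping test (28)
```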

3.2 Mixed integer problems

Algorithm 2 RACQP-MIP solver (pseudocode)

For mixed integer problems we tackle (17) without introducing \(\hat{\mathbf {x}}\), where the augmented Lagrangian, \(L_{\beta } ({{\,\mathrm{{\mathbf{x}}}\,}};{{\,\mathrm{{\mathbf{s}}}\,}};{{\,\mathrm{{\mathbf{y}}}\,}}_{eq};{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq})\), is given by

$$\begin{aligned}{ \begin{array}{cl} L_{\beta } (\cdot ) := &{}\frac{1}{2}{{\,\mathrm{{\mathbf{x}}}\,}}^T {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}+ {{\,\mathrm{{\mathbf{c}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{y}}}\,}}_{eq}^T({{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{eq}) -{{\,\mathrm{{\mathbf{y}}}\,}}_{ineq}^T({{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}+{{\,\mathrm{{\mathbf{s}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq})\\ &{}+\, \frac{\beta }{2} (\Vert {{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\Vert ^2 + \Vert {{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}+{{\,\mathrm{{\mathbf{s}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\Vert ^2) \end{array}} \end{aligned}$$

where slack variables \({{\,\mathrm{{\mathbf{s}}}\,}}\ge 0\), and \(x_i\in {{\,\mathrm{\mathcal {X}}\,}}_i\), \({{\,\mathrm{\mathcal {X}}\,}}_i\subseteq {{\,\mathrm{\mathbb {R}}\,}}\) or \({{\,\mathrm{\mathcal {X}}\,}}_i\subseteq {{\,\mathrm{\mathbb {Z}}\,}}\), \(i=1,\dots ,n\). Mixed integer problems (MIP) are addressed by using the partial Lagrangian to solve for primal variables and a simple procedure that helps to escape local optima, as shown in Algorithm 2. Note that MIP and continuous problems share the same main algorithm (Algorithm 1), but the former ignores the update to \(\hat{\mathbf {x}}\) as the bounds on \({{\,\mathrm{{\mathbf{x}}}\,}}\) are explicitly set through \({{\,\mathrm{\mathcal {X}}\,}}\), and thus \(\hat{\mathbf {x}} = {{\,\mathrm{{\mathbf{x}}}\,}}\) always.

The RACQP-MIP solver, outlined in Algorithm 2, consists of a sequence of steps that work on improving the current (or initial) solution, which is then “destroyed” to be possibly improved again. This solve-perturb-solve sequence (lines 2–13) is repeated until a termination criterion is met. The criterion for RACQP-MIP is usually set to be a maximum run-time, a maximum number of attempts to find a better solution, or a solution quality (assuming primal feasibility is met within some \(\epsilon >0\)). The algorithm can be seen as a variant of a neighborhood search technique usually associated with meta-heuristic algorithms for combinatorial optimization.

After being stuck at a local optimum, the algorithm finds a new initial point \({{\,\mathrm{{\mathbf{x}}}\,}}_0\) by perturbing the best known solution \({{\,\mathrm{{\mathbf{x}}}\,}}_{best}\) and continues from there. The new initial point does not need to be feasible, but in some cases it may be beneficial to construct it that way. To detect a local optimum we use a simple approach that counts the number of times a “feasible” solution is found without an improvement in the objective value. A solution is considered to be feasible if

$$\begin{aligned} {\max (\Vert {{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\Vert _\infty ,\ \Vert {{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\Vert _\infty )\le \epsilon ,} \end{aligned}$$

\(\epsilon >0\). The perturbation (line 11) can be done, for example, by choosing a random number (drawn from a truncated exponential distribution) of components of \({{\,\mathrm{{\mathbf{x}}}\,}}_{best}\) and assigning them new values, or a more sophisticated approach can be used (see Sect. 4.2 for some implementation details). The parameters of the perturbation are encapsulated in a generic term \(\kappa \).
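
The solve-perturb-solve strategy of Algorithm 2 can be sketched as below; `rac_solve`, `perturb` and `objective` are placeholders for the solver's actual routines, and the feasibility test is the one stated above.

```python
# Sketch: RACQP-MIP solve-perturb-solve loop (illustrative, not the actual Algorithm 2).
import numpy as np

def racqp_mip(x0, data, rac_solve, perturb, objective, eps, max_attempts):
    x_best, f_best, stuck = None, np.inf, 0
    x_start = x0
    for _ in range(max_attempts):
        x = rac_solve(x_start, data)                 # partial-Lagrangian RAC-ADMM run
        feas = max(np.linalg.norm(data["A_eq"] @ x - data["b_eq"], np.inf),
                   np.linalg.norm(data["A_ineq"] @ x - data["b_ineq"], np.inf))
        if feas <= eps and objective(x) < f_best:    # improvement found
            x_best, f_best, stuck = x.copy(), objective(x), 0
        else:
            stuck += 1                               # counts rounds without improvement
        # escape a local optimum: perturb the best known point and restart from it
        x_start = perturb(x_best if x_best is not None else x, stuck)
    return x_best, f_best
```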

4 Computational studies

The Alternating Direction Method of Multipliers (ADMM) has nowadays gained a lot of attention for solving many problems of practical importance (e.g. large-scale machine learning and signal processing, image processing, and portfolio management, to name a few). Unfortunately, the two most popular approaches, namely the two-block classical ADMM and the variable-splitting multi-block ADMM [10], are both characterized by convergence speed and scaling issues, which has somewhat hindered a wide acceptance of ADMM as the solution method of choice for ML problems. RAC-ADMM offers a multi-block solution that may help ADMM overcome this acceptance problem.

The goal of this section is twofold: (1) to show that RAC-ADMM is a versatile algorithm that can be directly applied to a wide range of LCQP problems and can compete with commercial solvers, and (2) to gain insight into specific ML problems and devise RAC-ADMM based solutions that outperform or match the performance of the best tailored solution method(s) in both solution time and quality. To address the former, in Sects. 4.1 and 4.2 we compare RACQP with the state-of-the-art commercial solvers Gurobi [34] and Mosek [55], and with the academic solver OSQP, an ADMM-based solver developed by [65]. To address the latter, we focus on Linear Regression (Elastic-Net) and Support Vector Machine (SVM), machine learning algorithms used for classification and regression analysis, and in Sect. 4.1.7 compare RACQP with glmnet [30, 64] and LIBSVM [14].

We conduct multiple numerical tests, solving randomly constructed problems and problems from benchmark test sets. The data we collect include run-time, the number of iterations until the termination criteria are met, and the quality of a solution, defined differently for continuous, mixed-integer and machine learning problems (described in the corresponding subsections). Note that in some sections, due to space concerns, we report on a subset of instances. Experiments using larger sets are available together with the RACQP solver code online [61] in the “demo” directory.

The experiments were done on a MacBook Pro with a 2.8 GHz Intel Core i7 and 16 GB of memory running macOS High Sierra, v 10.13.2 (Sect. 4.1.7), and on a 16-core Intel Xeon CPU E5-2650 machine with 96 GB of memory running Debian Linux 3.16.0-4-amd64 (all other sections).

4.1 Continuous problems

The section starts with the analysis of the \(l_2\)-regularized Markowitz mean–variance model applied to 2018 CRSP quarterly stock data [78], followed by randomly generated convex quadratic problems (QP) with coupled blocks. Next, three sets of benchmark problems are addressed: relaxed QAPLIB [60] (binary constraints on variables removed), the Maros and Meszaros convex QP set [72], and the Mittelmann LP test set [73] expanded to QP by adding a diagonal Hessian to the problem model.

The goal of the section is to show that the multi-block ADMM approach adopted by RACQP can significantly reduce solution time compared to commercial solvers and to the two-block ADMM (used by OSQP) for most of the problems we addressed. Results reported in this section are all obtained with a single RACQP run, using a fixed random number generator seed. Performance of the solver when subjected to different seeds is described in Sect. 4.1.8. The run-time settings applied to the solvers to produce the results reported in this section, unless noted otherwise, are shown in Table 3.

Table 3 Termination criteria used in this section by all solvers

The authors are aware that either commercial solver can be tuned for maximum performance by adjusting run-time parameters to fit a specific problem structure; the same is true of RACQP and OSQP, but to a much smaller extent. In addition, the latter do not have access to the large number of real-world instances the former use to fine-tune their algorithms to exploit “known” problem structures, nor the manpower to build heuristics and/or preconditioners that boost solver performance. However, in order to create fairer working conditions, we decided to let Mosek and Gurobi use their default settings, except for disabling multi-threading support and the aforementioned optimality termination criteria (Table 3). Although allowing the solvers to execute presolve routines seems unfair to RACQP (which does not implement any presolving technique except for a very simple row scaling), disabling it would be even more unfair to the opposing solvers, as their performance heavily depends on the finesse of the presolve algorithm(s). Multi-threading is disabled for Mosek and Gurobi because both RACQP and OSQP are single-threaded, and leaving it on would be unfair. Finally, to make the RACQP and OSQP comparison fairer, and because our target is to compare two ADMM variants, RAC-ADMM and the operator-splitting two-block ADMM, rather than the solvers' implementations, the advanced option that OSQP uses to post-process results, “Polish results”, was turned off. Note that such an option is relatively easy to implement and a variant thereof will be added to a future RACQP version.

For the continuous problems described in this section, performance is measured in terms of run-time, number of iterations and quality of solution, expressed via primal and dual residuals. Terminating a run after the residual tolerance(s) have been met (Table 3, rows 2–4) is one way of ensuring the quality of a solution. However, this criterion could be misleading. To start with, some solvers use absolute residuals as termination criteria (e.g. Gurobi), some depend on relative residuals (e.g. Mosek, RACQP), and some are adjustable, like OSQP.

Next, solvers usually scale problems (e.g. row and column scaling of a constraint matrix) to avoid numerical problems and to obtain matrices with favorable condition numbers. Residuals are then calculated and checked against these scaled models, meaning that a solver may terminate prematurely unless the results are periodically re-scaled and the residuals recalculated on the actual model: a “good” scaled solution can actually have a very bad “actual” residual. As each solver performs different scaling (and the algorithms are usually not known, as is the case with Gurobi and Mosek), direct comparison of the residuals reported by the solvers is not possible.

To circumvent the issue, we re-calculate the primal and dual residuals using the solutions (primal and dual variables) returned by the solvers as follows:

$$\begin{aligned}{ \begin{array}{ll} r_{\mathrm{prim}}&{}:= \max ( r_{{{\,\mathrm{\mathbf{A}}\,}}_{eq}}, r_{{{\,\mathrm{\mathbf{A}}\,}}_{ineq}}, r_{bounds})\\ r_{\mathrm{dual}} &{}:= \frac{\displaystyle \Vert {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}^*+{{\,\mathrm{{\mathbf{c}}}\,}}-{{\,\mathrm{\mathbf{A}}\,}}^T{{\,\mathrm{{\mathbf{y}}}\,}}^*-{{\,\mathrm{{\mathbf{y}}}\,}}_{bounds}^*\Vert _\infty }{\displaystyle 1+\max (\Vert {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}^*\Vert _\infty , \Vert {{\,\mathrm{{\mathbf{c}}}\,}}\Vert _\infty ,\ \Vert {{\,\mathrm{\mathbf{A}}\,}}^T{{\,\mathrm{{\mathbf{y}}}\,}}^*\Vert _\infty ,\ \Vert {{\,\mathrm{{\mathbf{y}}}\,}}_{bounds}^*\Vert _\infty )} \end{array}} \end{aligned}$$

where \({{\,\mathrm{\mathbf{A}}\,}}=[{{\,\mathrm{\mathbf{A}}\,}}_{eq};{{\,\mathrm{\mathbf{A}}\,}}_{ineq}]\), \({{\,\mathrm{{\mathbf{y}}}\,}}^*\) is a vector of dual variables related to equality and inequality constraints, \({{\,\mathrm{{\mathbf{y}}}\,}}_{bounds}^*\) is a vector of dual variables related to primal variable bounds, and \({{\,\mathrm{{\mathbf{x}}}\,}}^*\) is a vector of primal variables. Residuals due to equality and inequality constraints and bounds are defined with

$$\begin{aligned}{ \begin{array}{ll} r_{{{\,\mathrm{\mathbf{A}}\,}}_{eq}}&{}= \frac{\displaystyle \Vert {{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}^*-{{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\Vert _\infty }{\displaystyle 1+\max (\Vert {{\,\mathrm{\mathbf{A}}\,}}_{eq}{{\,\mathrm{{\mathbf{x}}}\,}}^*\Vert _\infty ,\ \Vert {{\,\mathrm{{\mathbf{b}}}\,}}_{eq}\Vert _\infty )}\\ r_{{{\,\mathrm{\mathbf{A}}\,}}_{ineq}}&{}= \frac{\displaystyle \Vert \max (0, {{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}^*-{{\,\mathrm{{\mathbf{b}}}\,}}_{ineq})\Vert _\infty }{\displaystyle 1+\max (\Vert {{\,\mathrm{\mathbf{A}}\,}}_{ineq}{{\,\mathrm{{\mathbf{x}}}\,}}^*\Vert _\infty ,\ \Vert {{\,\mathrm{{\mathbf{b}}}\,}}_{ineq}\Vert _\infty )}\\ r_{bounds}&{}=\max (\frac{\displaystyle \Vert \max (0,{{\,\mathrm{{\mathbf{l}}}\,}}-{{\,\mathrm{{\mathbf{x}}}\,}}^*)\Vert _\infty }{\displaystyle 1+\max (\Vert {{\,\mathrm{{\mathbf{x}}}\,}}^*\Vert _\infty ,\ \Vert {\ {{\,\mathrm{{\mathbf{l}}}\,}}\ }\Vert _\infty )}, \frac{\displaystyle \Vert \max (0,{{\,\mathrm{{\mathbf{x}}}\,}}^*-{{\,\mathrm{{\mathbf{u}}}\,}})\Vert _\infty }{\displaystyle 1+\max (\Vert {{\,\mathrm{{\mathbf{x}}}\,}}^*\Vert _\infty ,\ \Vert \ {{{\,\mathrm{{\mathbf{u}}}\,}}}\ \Vert _\infty )}). \end{array}} \end{aligned}$$

Note that Gurobi does not provide dual variables for bounds (\({{\,\mathrm{{\mathbf{l}}}\,}}\le {{\,\mathrm{{\mathbf{x}}}\,}}\le {{\,\mathrm{{\mathbf{u}}}\,}}\)) directly. To get around this, we convert the bounds into inequality constraints, which makes Gurobi produce the dual variables. This introduces a negligible run-time cost, as the additional constraints are discovered as bounds during the presolve phase and consequently removed. The initial point \({{\,\mathrm{{\mathbf{x}}}\,}}^0\) for all instances addressed by RACQP is \(\max ({{\,\mathrm{{\mathbf{0}}}\,}},{{\,\mathrm{{\mathbf{l}}}\,}})\).

4.1.1 Choosing RACQP solver working mode

To address differences in problem structure, the following simple rules are used to decide on the RACQP solver mode:

  1. If \({{\,\mathrm{\mathbf{H}}\,}}\) is non-diagonal and \({{\,\mathrm{\mathbf{A}}\,}}\) is non-structural or the problem is large, use multi-block mode (Eq. 21).

  2. If \({{\,\mathrm{\mathbf{H}}\,}}\) is non-diagonal and \({{\,\mathrm{\mathbf{A}}\,}}\) is structural, which implies that \({{\,\mathrm{\mathbf{A}}\,}}\) has non-zero entries that follow some pattern and the problem structure is easy to detect, use multi-block mode with smart-grouping as described in Sect. 2.3.2.

  3. If \({{\,\mathrm{\mathbf{H}}\,}}\) is diagonal, \(m\ll n\) or \({{\,\mathrm{\mathbf{H}}\,}}\) and \({{\,\mathrm{\mathbf{A}}\,}}\) are very sparse, and the problem is of moderate size, use single-block mode (group all primal variables \({{\,\mathrm{{\mathbf{x}}}\,}}\) together in one block) with localized equality constraints for the sub-problem and apply (Eq. 25).

  4. If \({{\,\mathrm{\mathbf{H}}\,}}\) is non-diagonal, both \({{\,\mathrm{\mathbf{H}}\,}}\) and \({{\,\mathrm{\mathbf{A}}\,}}\) are very sparse, and the problem is of moderate size, use single-block ADMM. If only a subset of primal variables is bounded, solve the block using an external solver (e.g. Gurobi or Mosek) with localized bounds. Otherwise, solve the block using (Eq. 24).

4.1.2 Regularized Markowitz mean–variance model

The Markowitz mean–variance model describes N assets characterized by a random vector of returns \({{\,\mathrm{\mathbf{R}}\,}}=({{\,\mathrm{\mathbf{R}}\,}}_1,\dots ,{{\,\mathrm{\mathbf{R}}\,}}_N)\) with known expected value \({{\,\mathrm{{\mathbf{m}}}\,}}_i\) of each random variable \(R_i\) and covariance \(\sigma _{ij}\) for all pairs of random variables \({{\,\mathrm{\mathbf{R}}\,}}_i\) and \({{\,\mathrm{\mathbf{R}}\,}}_j\). Given some portfolio asset \({{\,\mathrm{{\mathbf{x}}}\,}}=(x_1,\dots ,x_N)\), where \(x_i\) is the fraction of resources invested in asset i, an investor chooses a portfolio \({{\,\mathrm{{\mathbf{x}}}\,}}\), satisfying two objectives: expected value of the portfolio return \({{\,\mathrm{{\mathbf{m}}}\,}}_{{{\,\mathrm{{\mathbf{x}}}\,}}}=E({{\,\mathrm{\mathbf{R}}\,}}_{{{\,\mathrm{{\mathbf{x}}}\,}}})=\langle {{\,\mathrm{{\mathbf{m}}}\,}},{{\,\mathrm{{\mathbf{x}}}\,}}\rangle \) is maximized and portfolio risk, measured by variance \(\sigma _{{{\,\mathrm{{\mathbf{x}}}\,}}}^2=\hbox {Var}({{\,\mathrm{\mathbf{R}}\,}}_{{{\,\mathrm{{\mathbf{x}}}\,}}})=\langle {{\,\mathrm{{\mathbf{x}}}\,}},{{\,\mathrm{\mathbf{V}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}\rangle \), \({{\,\mathrm{\mathbf{V}}\,}}=(\sigma _{ij})\) is minimized [25]. The problem of finding the optimal portfolio can be formulated as a quadratic optimization problem,

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{{\mathbf{x}}}\,}}} &{} {{\,\mathrm{{\mathbf{x}}}\,}}^T {{\,\mathrm{\mathbf{V}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}- \tau {{\,\mathrm{{\mathbf{m}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}+\kappa \Vert {{\,\mathrm{{\mathbf{x}}}\,}}\Vert ^2_2\\ \text{ s.t. }&{} {{\,\mathrm{{\mathbf{e}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}= 1\\ &{} {{\,\mathrm{{\mathbf{x}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^n_+ \end{array} \end{aligned}$$
(29)

where \(\tau \ge 0\) is the risk tolerance parameter, and \({{\,\mathrm{{\mathbf{e}}}\,}}\) is the vector of all ones. The above problem formulation includes a regularization term with parameter \(\kappa \).

The raw data was collected by the Center for Research in Security Prices (CRSP) and provided through Wharton Research Data Services [78], covering daily prices of 4628 assets from Jan 01 to Dec 31, 2018, and monthly prices for 7958 stocks from Jan 31 to Dec 31, 2018. Missing data was filled using the yearly average price. The model uses the risk tolerance parameter \(\tau =1\), and is regularized with \(\kappa =10^{-5}\). For the formulation (29), because the Hessian (\({{\,\mathrm{\mathbf{V}}\,}}\)) is dense and non-diagonal, the multi-block ADMM is used, following the rules for choosing the RACQP solver mode (rule 1, Sect. 4.1.1). The number of groups is \(p=50\), and the augmented Lagrangian penalty parameter is \(\beta =1\). Default run settings (Table 3) are used by all solvers, except for OSQP, which had its maximum iteration number set to 20,000.

Table 4 Markowitz min–variance model (29)

The performance comparison between the solvers, given in Table 4, shows that multi-block RAC finds a solution of high quality in a fraction of the time needed by the commercial solvers. In addition, the results show that OSQP requires many iterations to converge to a solution meeting the primal/dual tolerance criterion (\(\epsilon = 10^{-5}\)), confirming the slow convergence issue of the 2-block ADMM approach.

Low-rank re-formulation Noting that the number of observations k is not large and that the covariance matrix \({{\,\mathrm{\mathbf{V}}\,}}\) is of low rank and thus can be expressed as \( {{{\,\mathrm{\mathbf{V}}\,}}={{\,\mathrm{\mathbf{B}}\,}}^T{{\,\mathrm{\mathbf{B}}\,}}} \), where

$$\begin{aligned} {{\,\mathrm{\mathbf{B}}\,}}= \dfrac{1}{\sqrt{k-1}}({{\,\mathrm{\mathbf{R}}\,}}-\dfrac{1}{k}{{\,\mathrm{{\mathbf{e}}}\,}}{{\,\mathrm{{\mathbf{e}}}\,}}^T {{\,\mathrm{\mathbf{R}}\,}}) \end{aligned}$$
(30)

and \({{\,\mathrm{\mathbf{R}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{k\times N}\), with rows corresponding to time series observations, and columns corresponding to different assets, we reformulate the problem as

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{{\mathbf{x}}}\,}}} &{} \Vert {{\,\mathrm{{\mathbf{y}}}\,}}\Vert ^2_2 - \tau {{\,\mathrm{{\mathbf{m}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}+ \kappa \Vert {{\,\mathrm{{\mathbf{x}}}\,}}\Vert ^2_2 \\ \text{ s.t. }&{} {{\,\mathrm{{\mathbf{e}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}= 1\\ &{} {{\,\mathrm{\mathbf{B}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}- {{\,\mathrm{{\mathbf{y}}}\,}}= \varvec{0}\\ &{} {{\,\mathrm{{\mathbf{x}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^n_+ \end{array} \end{aligned}$$
(31)

Since the Hessian of (31) is diagonal and the number of constraints is relatively small, the problem is solved using the single-block ADMM (rule 3, Sect. 4.1.1). Run-time settings are identical to those used for the regular model described previously, with the exception of the augmented Lagrangian penalty parameter, which is set to \(\beta =0.1\). The performance comparison between the solvers, given in Table 5, shows that RACQP is also competitive on the low-rank formulation of the problem. Run-time is given in seconds.
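For completeness, a minimal sketch of computing the factor \({{\,\mathrm{\mathbf{B}}\,}}\) in (30) from a returns matrix R as defined above; the function name is ours:

```python
import numpy as np

def low_rank_factor(R):
    """B from (30): V = B'B, the sample covariance of R.

    R : (k, N) observation matrix; rows are time periods, columns are assets.
    """
    k = R.shape[0]
    # (R - (1/k) e e' R) centers each column, i.e. subtracts the column means
    return (R - R.mean(axis=0)) / np.sqrt(k - 1)
```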

Table 5 Low-rank reformulation Markowitz min–variance model (31)

4.1.3 Randomly generated linearly constrained quadratic problems (LCQP)

In this section we analyze RACQP performance for different problem structures and run-time settings (number of blocks p, penalty parameter \(\beta \), tolerance \(\epsilon \)). In order to have more control over the problem structure we generate synthetic problem instances, ranging from a simple one-row Markowitz-like problem to multi-row problems of large size. Note that although we compare RACQP with Gurobi and Mosek on randomly generated instances, which may be considered unfair to the latter, our goal is not to diminish the importance of the barrier type solution methods those solvers utilize, but to show that multi-block ADMM can be an approach to augment these methods when instances are large and/or dense. In this section we solve linearly constrained quadratic problems (LCQP), described by (17), with \({{\,\mathrm{{\mathbf{x}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^n\).

Similarly to [80] we construct a positive definite Hessian matrix \({{\,\mathrm{\mathbf{H}}\,}}\) from a random (\(\sim U(0,1)\)) matrix \({{\,\mathrm{\mathbf{U}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{n\times n}\) and a normalized diagonal matrix \({{\,\mathrm{\mathbf{V}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}_+^n\) whose elements are chosen from a log-uniform distribution so as to achieve a specific condition number:

$$\begin{aligned} \begin{array}{cl} {{\,\mathrm{\mathbf{U}}\,}}_{\eta }&{}=\eta {{\,\mathrm{\mathbf{U}}\,}}+ (1-\eta ){{\,\mathrm{\mathbf{I}}\,}}\\ {{\,\mathrm{\mathbf{H}}\,}}&{}={{\,\mathrm{\mathbf{U}}\,}}_{\eta }{{\,\mathrm{\mathbf{V}}\,}}{{\,\mathrm{\mathbf{U}}\,}}_{\eta }^T + \zeta {{\,\mathrm{{\mathbf{e}}}\,}}{{\,\mathrm{{\mathbf{e}}}\,}}^T \end{array} \end{aligned}$$
(32)

where the parameters \(\eta \in (0,1)\) and \(\zeta \ge 0\) induce different types of orientation bias. For convenience we normalize the matrix H and construct the vector \({{\,\mathrm{{\mathbf{c}}}\,}}\) as a random vector (\(\sim U(0,1)\)). The Jacobian matrices \({{\,\mathrm{\mathbf{A}}\,}}_{eq}\) and \({{\,\mathrm{\mathbf{A}}\,}}_{ineq}\) are constructed so that the desired sparsity is met and \(a_{i,j}\sim N(0,1)\) for both matrices. Our analysis of LCQP is based on extensive experimentation using different problem structures embedded in the matrix \({{\,\mathrm{\mathbf{H}}\,}}\), by varying its orientation, condition number and the random seed used to construct \({{\,\mathrm{\mathbf{H}}\,}}\) (and the vector \({{\,\mathrm{{\mathbf{c}}}\,}}\)).
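A minimal sketch of the generator (32) under our own parameter choices (the exact distribution of the diagonal of \({{\,\mathrm{\mathbf{V}}\,}}\) and the normalization of H are assumptions made for the example):

```python
import numpy as np

def synthetic_hessian(n, eta=0.5, zeta=0.0, cond=1e3, seed=0):
    """Positive definite Hessian following (32)."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(0.0, 1.0, size=(n, n))
    U_eta = eta * U + (1.0 - eta) * np.eye(n)
    # diagonal entries drawn log-uniformly from [1/cond, 1] to target a condition number
    V = np.diag(np.exp(rng.uniform(np.log(1.0 / cond), 0.0, size=n)))
    H = U_eta @ V @ U_eta.T + zeta * np.ones((n, n))
    return H / np.linalg.norm(H, 2)     # normalize H, as done in the text
```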

Markowitz-like Problem Instances The RACQP implementation allows solving optimization problems by multi-block ADMM. A question that arises is the optimal number of blocks p (i.e. sub-problems) to use. The optimal number, it turns out, is related to the structure and density of both the Hessian and the Jacobian matrices. For any \({{\,\mathrm{\mathbf{H}}\,}}\) that is not a block matrix, and a dense \({{\,\mathrm{\mathbf{A}}\,}}\), as is the case with the Markowitz model, the number of blocks is related to the problem size—having more blocks leads to more iterations before the process meets the tolerance on the residual error \(\epsilon \), and to more sub-problems to construct and solve. However, a sub-problem of a smaller size can be constructed and solved in less time than a larger one. The total time (\(t_T\)) is thus a function of opposing factors. To show this interdependence, we solve simple Markowitz-like problem instances, with randomly generated \({{\,\mathrm{\mathbf{H}}\,}}\) and \({{\,\mathrm{{\mathbf{c}}}\,}}\), and with \({{\,\mathrm{\mathbf{A}}\,}}_{eq}={{\,\mathrm{{\mathbf{e}}}\,}}^T\), \({{\,\mathrm{{\mathbf{b}}}\,}}= {{\,\mathrm{{\mathbf{1}}}\,}}\), and \({{\,\mathrm{{\mathbf{x}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^n_+\) (inequality constraints are not used). Following (29), we add a regularization term to the objective function with \(\kappa =10^{-5}\).

Table 6 RACQP performance with respect to number of blocks p for randomly generated problems of type (29)

Table 6 presents the aggregate results collected over a set of experiments (10 for each group size) using random problems constructed via (32). The reason for constructing problems in such a way is to emulate a real-world situation in which the problem model (Hessian, Jacobian, upper and lower bounds on \({{\,\mathrm{{\mathbf{x}}}\,}}\)) does not change, but the coefficients do. The results confirm that there exists a “right” number of blocks which minimizes the overall run-time. For now, choosing that number is based on experience, but we are working on formalizing the procedure. In addition to the run-time cost per iteration, Table 6 reports the number of iterations until convergence (k) for different numbers of blocks. It is interesting to observe that k is only mildly affected by the choice of p if the tolerance \(\epsilon \) is kept the same. This leads to another interesting question: how much does a change in \(\epsilon \) affect the run-time? Table 7 gives an answer to this question. The table lists RACQP performance over the same problem set, but with different residual tolerances. As expected, the results show that the number of iterations increases as the tolerance gets tighter.

Table 7 A typical RACQP performance with respect to the primal/dual residual tolerance \(\epsilon \) for randomly generated problems of type (29)

General LCQP Building on the results from the previous section, we expand the QP model to include general equality and inequality constraints with unbounded variables \({{\,\mathrm{{\mathbf{x}}}\,}}\). We analyze RACQP on sparse problems (dense problems are covered in the next section, where we address the relaxed QAP) of size \(n=6000\) and \(n=9000\). The number of rows in both constraint matrices is equal (\(m=m_{eq}=m_{ineq}\)), and set to be a function of the problem size, \(m=r\cdot n\), with \(r=\{0.1,0.5\}\). The number of blocks used by RACQP is related to the block size, \(p_n=n/b_{\mathrm{size}}\), with the optimal block size \(b_{\mathrm{size}}\) empirically determined to be 60. The penalty parameter \(\beta =1\) was found to produce the best results.

Tables 8 and 9 give a comparative analysis of the performance of the solvers with respect to run-time and primal/dual residuals. Although both OSQP and RACQP did well in terms of primal and dual residuals, the results show that multi-block RACQP converges to solutions much faster (4–10\(\times \)) than OSQP. Both solvers outperform Gurobi and Mosek in run-time, even though the tolerance on the residual error is set to the same value (\(\epsilon =10^{-5}\)). Another observation is that Mosek produces solutions of inferior quality to all aforementioned solvers—its dual residuals are at the \(10^{-3}\) and \(10^{-4}\) levels, far from the requested \(\epsilon \) threshold. Investigation of the log files produced by Mosek revealed two problems: (1) Mosek terminates as soon as the primal, dual or complementarity gap residual criterion is met (unlike the other solvers, which terminate when all the residual criteria are met); (2) residuals are not periodically checked on a re-scaled model, resulting in a large discrepancy between the internally evaluated residuals (scaled data) and the actual ones.

Table 8 Run-time comparison between the solvers for LCQP
Table 9 Primal and dual residuals comparison between the solvers for LCQP

4.1.4 Relaxed QAP

Starting with this section we continue the study of RACQP but, instead of randomly generating problems, we use benchmark test sets compiled by other authors which reflect real-world problems. We start by addressing large-scale instances from the QAPLIB benchmark library [60] compiled by [11] and hard problems of large size described in [21]. The quadratic assignment problem (QAP) is a binary problem, but for the purpose of a more realistic comparison between the solvers, we relax it to a continuous problem. Numerical tests solving the binary problem formulation will be given later in Sect. 4.2.3.

The quadratic assignment problem belongs to a class of combinatorial optimization problems that arise from problems of practical interest. The QAP objective is to assign n facilities to n locations in such a way that the assignment cost is minimized. The assignment cost is the sum, over all pairs, of a weight or flow between a pair of facilities multiplied by the distance between their assigned locations. Mathematically, the QAP can be presented as follows:

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{\mathbf{X}}\,}}} &{} {{\,\mathrm{\hbox {vec}}\,}}({{\,\mathrm{\mathbf{X}}\,}})^T{{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{\hbox {vec}}\,}}({{\,\mathrm{\mathbf{X}}\,}})\\ \text{ s.t. }&{} \sum _{i=1}^r x_{ij}=1,\ \forall j=1,\dots ,r\quad \hbox {(a)}\\ &{} \sum _{j=1}^r x_{ij}=1,\ \forall i=1,\dots ,r\quad \hbox {(b)}\\ &{} 0\le x_{ij},\ \forall i,j=1,\dots ,r\quad \hbox {(c)} \end{array} \end{aligned}$$
(33)

where \(x_{ij}\) is the entry of the permutation matrix \({{\,\mathrm{\mathbf{X}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{r\times r}\). To make the problem convex and admissible to Cholesky factorization, we make \({{\,\mathrm{\mathbf{H}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{n\times n}\) strictly diagonally dominant, \({{\,\mathrm{\mathbf{H}}\,}}=\hat{{\,\mathrm{\mathbf{H}}\,}}+d\cdot {{\,\mathrm{\mathbf{I}}\,}}\), where \(\hat{{\,\mathrm{\mathbf{H}}\,}}= ({{\,\mathrm{\mathbf{A}}\,}}\otimes {{\,\mathrm{\mathbf{B}}\,}})\) and \(d=\max (\sum _{i=1,i\not =j}^{n}\hat{h}_{i,j})+\delta \), with \(\delta \) being some small positive number and \(n=r^2\). The “flow” matrix \({{\,\mathrm{\mathbf{A}}\,}}\) and the “distance” matrix \({{\,\mathrm{\mathbf{B}}\,}}\) are both in \({{\,\mathrm{\mathbb {R}}\,}}^{r\times r}\).
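This convexification step can be sketched as follows (a NumPy illustration with our own choice of \(\delta \); the QAP data is assumed nonnegative, as in QAPLIB):

```python
import numpy as np

def qap_hessian(A, B, delta=1e-3):
    """H = (A kron B) + d*I, with d chosen so that H is strictly diagonally dominant."""
    H_hat = np.kron(A, B)                            # n x n with n = r^2
    off_diag = H_hat.sum(axis=1) - np.diag(H_hat)    # off-diagonal row sums (nonnegative data)
    d = off_diag.max() + delta
    return H_hat + d * np.eye(H_hat.shape[0])
```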

For QAP we apply the method for variance reduction described in Sect. 2.3, since the assignment constraints are highly structured and observable. We group variables following a simple reasoning—given that the permutation matrix \({{\,\mathrm{\mathbf{X}}\,}}\) is doubly stochastic, each row (or column) can be seen as a single super-variable, an integer representing a permutation order. Thus, it makes sense to make one super-variable, \({{\,\mathrm{{\mathbf{x}}}\,}}_i\), for each row i of \({{\,\mathrm{\mathbf{X}}\,}}\), so that each super-variable is of size r (see the sketch below). For each of the experiments shown we set the number of groups \(p=r\) (thus we solve for one super-variable per block), and the penalty parameter \(\beta \) to the best value we found by running multiple experiments with different parameter values; \(\beta =r\) offered the best run-time.
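A sketch of the grouping, assuming a row-major vectorization of X (the actual ordering used inside RACQP is an implementation detail we do not rely on here):

```python
def qap_super_variables(r, row_major=True):
    """One index group ("super-variable") per row of the r x r permutation matrix X."""
    if row_major:   # row i of X occupies a contiguous slice of vec(X)
        return [list(range(i * r, (i + 1) * r)) for i in range(r)]
    # column-major vec(X): row i is spread with stride r
    return [list(range(i, r * r, r)) for i in range(r)]
```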

Table 10 Relaxed QAP [11, 21] instances
Table 11 Relaxed QAP [11, 21] instances. Primal and dual residuals comparison between the solvers

The results showing the performance of the solvers on a selected set of large QAP instances are summarized in Tables 10 and 11. The instances were chosen so as to cover a variety of problem densities (Hessian) and sizes. Table 10 shows run-time and the number of iterations. Note that a direct comparison between barrier based solvers (Gurobi and Mosek) and ADMM solvers (RACQP, OSQP) is not possible, as the solution methods are completely different, but giving the number of iterations allows us to compare performance within each class of solvers.

Similarly to the results presented previously, RACQP is the fastest solver. Solution quality (primal and dual residual tolerance) is achieved in a fraction of the time required by the other solvers. The average speedup is 214\(\times \), 86\(\times \) and 83\(\times \) with respect to Gurobi, Mosek and OSQP, respectively. OSQP, although performing a similar number of iterations as RACQP, is much slower—splitting a large problem into two parts (OSQP executes 2-block ADMM) still leaves two large matrices to solve. On the positive side, OSQP finds better solutions (primal residual smaller by an order of magnitude). Mosek is the worst performing solver—run-time-wise it is close to OSQP, but only one returned solution satisfies the dual residual (tai125e01). The other instances report dual residuals as large as \(10^{-1}\). Gurobi found the best solutions, except for the tai125e01 and tho150 instances, for which the maximum run-time limit (3 h) was reached.

4.1.5 Maros and Meszaros convex QP

The Maros and Meszaros test set [72] is a collection of convex quadratic programming examples from a variety of sources [49] of the following form

$$\begin{aligned} {\begin{array}{cl} \min \limits _{{{\,\mathrm{{\mathbf{x}}}\,}}} &{} \frac{1}{2}{{\,\mathrm{{\mathbf{x}}}\,}}^T {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}+ {{\,\mathrm{{\mathbf{c}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}+c_0\\ \text{ s.t. }&{} {{\,\mathrm{\mathbf{A}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}= {{\,\mathrm{{\mathbf{b}}}\,}}\\ &{} {{\,\mathrm{{\mathbf{l}}}\,}}\le {{\,\mathrm{{\mathbf{x}}}\,}}\le {{\,\mathrm{{\mathbf{u}}}\,}}\end{array}} \end{aligned}$$

with \({{\,\mathrm{\mathbf{H}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{n\times n}\) symmetric positive semidefinite, \({{\,\mathrm{\mathbf{A}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{m\times n}\), \({{\,\mathrm{{\mathbf{b}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^m\) and \({{\,\mathrm{{\mathbf{l}}}\,}}, {{\,\mathrm{{\mathbf{u}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^n\), meaning that some of the components of \({{\,\mathrm{{\mathbf{l}}}\,}}\) and \({{\,\mathrm{{\mathbf{u}}}\,}}\) may be \(-\infty \) and \(+\infty \), respectively. The constant \(c_0\) is assumed to be finite, \(|c_0|<\infty \).

As in the previous section, only a subset of instances is used in the experiments. The instances were chosen so as to cover a variety of problem models (density, size) but also to point to strengths and weaknesses of ADMM-based algorithms. Problem sizes n range from \(4\cdot 10^3\) to almost \(10^5\), with the number of constraints m up to \(10^5\). The Hessian matrices are either diagonal, with the number of non-zero diagonal elements less than or equal to n, or symmetric with no nonzero diagonal elements. The constraint matrices \({{\,\mathrm{\mathbf{A}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{m\times n}\) are very sparse across the problems; for most of the instances the density is below \(10^{-3}\). In addition to being sparse, the Jacobian matrices are not block separable.

The RACQP mode was set to a single-block mode according to rules 3 and 4 of Sect. 4.1.1, with \(\beta =1\) for all instances except for CONT* and UBH1, which use \(\beta =350\) and \(\beta =12{,}000\) respectively. A residual tolerance of \(\epsilon =10^{-4}\) was used in producing the results, reported in Tables 12 and 13. The tolerance is looser than the default one (\(10^{-5}\)) because the ADMM methods had a hard time converging on the CONT* and CVXQP* instances for tighter residuals (the maximum number of iterations is limited to 4000). Positive-definite instances (i.e. \({{\,\mathrm{\mathbf{H}}\,}}\succ 0\)) are marked with \(^\dagger \).

Table 12 Large Maros and Meszaros [72] instances
Table 13 Large Maros and Meszaros [72]

Overall, for solving sparse and Hessian-diagonal problems, both Gurobi and Mosek seem more robust than OSQP and RACQP, probably due to the linear programming structure. The latter two are of comparable performance. The results in terms of the gap are of similar quality, and run-time is approximately the same, except for a couple of instances where the self-adjusting methodology used by OSQP for penalty parameter estimation gives OSQP a speed advantage. Some of the run-time variation can also be attributed to the different languages used to implement the solvers; OSQP is implemented in C/C++ while RACQP uses Matlab.

RACQP solved more instances than OSQP, which, in addition to not being able to meet the primal/dual residuals for 25% of the instances, also could not find a feasible solution for the HUES-MOD and HUETIS instances. The Mosek residual issue reported in the previous section persists on these problem instances. For example, the AUG2DQP solution has a dual residual of \(5.5\cdot 10^{-2}\), a value that does not meet the requested tolerance.

4.1.6 Convex QP based on the Mittelmann LP test set

In this section we report on the performance of the solvers when applied to very large quadratic problems. Instances are taken from the Mittelmann LP test set [73] augmented with a diagonal Hessian H to form a standard LCQP (17). The results are shown in Tables 14 and 15. The residual tolerance was set to \(10^{-4}\) (OSQP could not solve any instance but i_n13 when the default tolerance of \(10^{-5}\) was used, and RACQP had a hard time with nug30). Other default termination criteria apply (Table 3). For all instances the number of blocks was set to \(p=200\) and the penalty parameter to \(\beta =5\), except for nug30, which used \(\beta =50\).

Table 14 Convex QP based on the Mittelmann LP test set [73]
Table 15 Convex QP based on the Mittelmann LP test set [73]

RACQP solved very large (\(n>750{,}000\)) quadratic problems to the required accuracy very fast. The results were obtained using different solution strategies: the multi-block Cholesky factorization approach for the wide15, square15 and long15 instances, and the partial Lagrangian approach for nug30 (localized lower and upper bounds of the sub-problem primal variables). The best set of parameters was found by a brute-force approach, which implies that additional research work needs to be done to identify algebraic methods to characterize instances so that run-time parameters can be chosen automatically. RACQP was unable to find a solution satisfying both primal and dual residual tolerances for two instances (i_n13 and 16_n14), regardless of the run-time settings we used.

OSQP solved only one instance (i_n13) within the given run-time and iteration limits, while Gurobi solved all the instances to high precision, regardless of the termination criteria (Table 3) being set to \(\epsilon =10^{-5}\). Mosek did not find a single solution meeting the residual criteria, due to the aforementioned scaling and termination criteria issue.

4.1.7 Selected machine learning problems

In this section we apply the RAC method and the RP method to a few selected machine learning (ML) problems related to convex quadratic optimization, namely Linear Regression (Elastic-Net) and Support Vector Machine (SVM). To solve the former we apply a specialized implementation of RAC-ADMM (available for download at [61]), while for the latter we use the RACQP solver. RAC-ADMM is compared with the specialized methods Glmnet [30, 64] for Elastic-Net and LIBSVM [14] for SVM problems. The results show that our general-purpose solver matches and under certain circumstances exceeds the performance of those tailored methods.

Linear Regression using Elastic Net For a classical linear regression model, with observed features \( {{{\,\mathrm{\mathbf{X}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{n\times p }} \) and labels \({{\,\mathrm{{\mathbf{y}}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^n\), where n is the number of observations and p is the number of features, one solves the following unconstrained optimization problem

$$\begin{aligned} \min _{{{\,\mathrm{\varvec{\beta }}\,}}} \ \frac{1}{2n}({{\,\mathrm{{\mathbf{y}}}\,}}-{{\,\mathrm{\mathbf{X}}\,}}{{\,\mathrm{\varvec{\beta }}\,}})'({{\,\mathrm{{\mathbf{y}}}\,}}-{{\,\mathrm{\mathbf{X}}\,}}{{\,\mathrm{\varvec{\beta }}\,}}) + P_{\lambda ,\alpha }({{\,\mathrm{\varvec{\beta }}\,}}) \end{aligned}$$
(34)

with \( {P_{\lambda ,\alpha }({{\,\mathrm{\varvec{\beta }}\,}})=\lambda \{\frac{1-\alpha }{2}\Vert {{\,\mathrm{\varvec{\beta }}\,}}\Vert ^2_2+\alpha \Vert {{\,\mathrm{\varvec{\beta }}\,}}\Vert _1\}} \) used for the Elastic Net model. By adjusting \(\alpha \) and \(\lambda \), one can obtain different models: for ridge regression, \(\alpha = 0\); for lasso, \(\alpha =1\); and for classic linear regression, \(\lambda =0\). For the problem to be solved by ADMM, we use variable splitting and reformulate the problem as follows

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{\varvec{\beta }}\,}}} &{} \frac{1}{2n}({{\,\mathrm{{\mathbf{y}}}\,}}-{{\,\mathrm{\mathbf{X}}\,}}\beta )^{T}({{\,\mathrm{{\mathbf{y}}}\,}}-{{\,\mathrm{\mathbf{X}}\,}}\beta )+P_{\lambda ,\alpha }({{\,\mathrm{{\mathbf{z}}}\,}}) \\ \text{ s.t. } &{} {{\,\mathrm{\varvec{\beta }}\,}}- {{\,\mathrm{{\mathbf{z}}}\,}}\ = \ {{\,\mathrm{{\mathbf{0}}}\,}}\end{array} \end{aligned}$$
(35)

Note that in (35) we follow the standard machine learning Elastic Net notation in which \(\beta \) is the decision variable in the optimization formulation, rather than \({{\,\mathrm{{\mathbf{x}}}\,}}\).

Let \({{\,\mathrm{{\mathbf{c}}}\,}}=-\frac{1}{n}{{\,\mathrm{\mathbf{X}}\,}}^T{{\,\mathrm{{\mathbf{y}}}\,}}\), \({{\,\mathrm{\mathbf{A}}\,}}=\frac{{{\,\mathrm{\mathbf{X}}\,}}}{\sqrt{n}}\), let \(\gamma \) denote the augmented Lagrangian penalty parameter with respect to the constraint \( {{\,\mathrm{\varvec{\beta }}\,}}- {{\,\mathrm{{\mathbf{z}}}\,}}\ = \ {{\,\mathrm{{\mathbf{0}}}\,}}\), and let \({{\,\mathrm{\varvec{\xi }}\,}}\) be the corresponding dual variable. The augmented Lagrangian can then be written as

$$\begin{aligned} \begin{array}{ll}L_{\lambda }=&\frac{1}{2}{{\,\mathrm{\varvec{\beta }}\,}}^T({{\,\mathrm{\mathbf{A}}\,}}^T{{\,\mathrm{\mathbf{A}}\,}}+\gamma {{\,\mathrm{\mathbf{I}}\,}}){{\,\mathrm{\varvec{\beta }}\,}}+ ({{\,\mathrm{{\mathbf{c}}}\,}}-{{\,\mathrm{\varvec{\xi }}\,}})^T{{\,\mathrm{\varvec{\beta }}\,}}+({{\,\mathrm{\varvec{\xi }}\,}}-\gamma {{\,\mathrm{\varvec{\beta }}\,}})^T{{\,\mathrm{{\mathbf{z}}}\,}}+\frac{\gamma }{2} {{\,\mathrm{{\mathbf{z}}}\,}}^T{{\,\mathrm{{\mathbf{z}}}\,}}+P_{\lambda ,\alpha }({{\,\mathrm{{\mathbf{z}}}\,}}) \end{array} \end{aligned}$$

We apply the RAC-ADMM algorithm by partitioning \({{\,\mathrm{\varvec{\beta }}\,}}\) into multiple blocks, but solve \({{\,\mathrm{{\mathbf{z}}}\,}}\) as one block. For any given \({{\,\mathrm{\varvec{\beta }}\,}}_{k+1}\), the optimizer \({{\,\mathrm{{\mathbf{z}}}\,}}^*_{k+1}\) has the closed-form solution

$$\begin{aligned} {{{\,\mathrm{{\mathbf{z}}}\,}}^*_{k+1}(i)({{\,\mathrm{\varvec{\beta }}\,}}_{k+1}(i), {{\,\mathrm{\varvec{\xi }}\,}}_k(i))=\frac{S({{\,\mathrm{\varvec{\xi }}\,}}_k(i)-\gamma {{\,\mathrm{\varvec{\beta }}\,}}_{k+1}(i),\lambda \alpha )}{(1-\alpha )\lambda +\gamma },} \end{aligned}$$

where \({{\,\mathrm{\varvec{\xi }}\,}}_i\) is the dual variable with respect to the constraint \({{\,\mathrm{\varvec{\beta }}\,}}_i-{{\,\mathrm{{\mathbf{z}}}\,}}_i=0\), and S(a, b) is the soft-threshold operation [29],

$$\begin{aligned} {S(a,b)={\left\{ \begin{array}{ll} -(a-b), &{}\hbox {if} \ b<|a|, \ a>0 \\ -(a+b), &{}\hbox {if} \ b<|a|, \ a\le 0 \\ 0, &{} \hbox {if}\ b\ge |a| \\ \end{array}\right. }} \end{aligned}$$
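For illustration, a minimal NumPy sketch of the soft-threshold operation and the closed-form z-update above (function and variable names are ours, not those of the RAC-ADMM implementation [61]):

```python
import numpy as np

def soft_threshold(a, b):
    """S(a, b) = -sign(a) * max(|a| - b, 0), matching the case definition above."""
    return -np.sign(a) * np.maximum(np.abs(a) - b, 0.0)

def z_update(beta_new, xi, gamma, lam, alpha):
    """Closed-form minimizer of the z-block of the augmented Lagrangian."""
    return soft_threshold(xi - gamma * beta_new, lam * alpha) / ((1.0 - alpha) * lam + gamma)
```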

In order to solve the classic linear regression directly, \({{\,\mathrm{\mathbf{X}}\,}}^T{{\,\mathrm{\mathbf{X}}\,}}\) must be positive definite, which cannot be satisfied for \(p>n\). However, RAC-ADMM only requires each sub-block \({{\,\mathrm{\mathbf{X}}\,}}_{sub}^T{{\,\mathrm{\mathbf{X}}\,}}_{sub}\) to be positive definite, so, as long as the block size \(s<n\), RAC-ADMM can be used to solve the classic linear regression.

It is worth pointing out that although the objective function here is non-smooth, we could still reformulate the problem as a convex quadratic program with bound constraints. To see this, consider the case where \(\alpha =1\): the elastic net problem becomes classic lasso regression, and (35) becomes

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{\varvec{\beta }}\,}}} &{} \frac{1}{2N}({{\,\mathrm{{\mathbf{y}}}\,}}-{{\,\mathrm{\mathbf{X}}\,}}{{\,\mathrm{\varvec{\beta }}\,}})^{T}({{\,\mathrm{{\mathbf{y}}}\,}}-{{\,\mathrm{\mathbf{X}}\,}}{{\,\mathrm{\varvec{\beta }}\,}})+\lambda ||{{\,\mathrm{{\mathbf{z}}}\,}}||_1 \\ \text{ s.t. } &{} {{\,\mathrm{\varvec{\beta }}\,}}- {{\,\mathrm{{\mathbf{z}}}\,}}\ = \ {{\,\mathrm{{\mathbf{0}}}\,}}\end{array} \end{aligned}$$
(36)

Optimization problem (36) is equivalent to

$$\begin{aligned} {\begin{array}{cl} \min \limits _{{{\,\mathrm{\varvec{\beta }}\,}}, {{\,\mathrm{{\mathbf{z}}}\,}}',{{\,\mathrm{{\mathbf{z}}}\,}}''} &{} \frac{1}{2N}({{\,\mathrm{{\mathbf{y}}}\,}}-{{\,\mathrm{\mathbf{X}}\,}}{{\,\mathrm{\varvec{\beta }}\,}})^{T}({{\,\mathrm{{\mathbf{y}}}\,}}-{{\,\mathrm{\mathbf{X}}\,}}{{\,\mathrm{\varvec{\beta }}\,}})+\lambda ({{\,\mathrm{{\mathbf{z}}}\,}}'+{{\,\mathrm{{\mathbf{z}}}\,}}'') \\ \text{ s.t. } &{} {{\,\mathrm{\varvec{\beta }}\,}}- {{\,\mathrm{{\mathbf{z}}}\,}}' +{{\,\mathrm{{\mathbf{z}}}\,}}'' \ = \ {{\,\mathrm{{\mathbf{0}}}\,}}\\ &{} {{\,\mathrm{{\mathbf{z}}}\,}}',{{\,\mathrm{{\mathbf{z}}}\,}}''\ge 0 \end{array}} \end{aligned}$$

Let \({{\,\mathrm{\varvec{\xi }}\,}}\) be the dual with respect to the constraint \( {{{\,\mathrm{\varvec{\beta }}\,}}- {{\,\mathrm{{\mathbf{z}}}\,}}' +{{\,\mathrm{{\mathbf{z}}}\,}}'' = {{\,\mathrm{{\mathbf{0}}}\,}}} \). The optimal \( {({{\,\mathrm{{\mathbf{z}}}\,}}')^k} \) and \( {({{\,\mathrm{{\mathbf{z}}}\,}}'')^k} \) at each iteration satisfy \( {({{\,\mathrm{{\mathbf{z}}}\,}}')^k-({{\,\mathrm{{\mathbf{z}}}\,}}'')^k=\frac{1}{\gamma }{S({{\,\mathrm{\varvec{\xi }}\,}}_k(i)-\gamma {{\,\mathrm{\varvec{\beta }}\,}}_{k+1}(i),\lambda )}} \), which equals \({{\,\mathrm{{\mathbf{z}}}\,}}^k\) when we solve problem (36). Essentially, the non-smooth objective here can be reformulated as a smooth convex quadratic program with non-negativity constraints. Using the absolute value update is just a compact way to implement the algorithm, so that the expected convergence guarantee from our theorem still applies.

We compare our solver with Glmnet [30, 64] and the Matlab lasso implementation on synthetic data (sparse and dense problems) and on benchmark regression data from LIBSVM [14].

Synthetic Data The data set \(\mathbf {X}\) for dense problems is generated uniformly at random with \(n=10{,}000\), \(p=50{,}000\) and zero sparsity, while for the ground truth \(\beta ^{*}\) we use a standard Gaussian and set the sparsity of \(\beta ^*\) to 0.1. Due to the nature of the problem, estimation requires lower feasibility precision, so we fix the number of iterations to 10 and 20. The Glmnet solver benefits from having a diminishing sequence of \(\lambda \), but given that many applications (e.g. see [3]) require a fixed \(\lambda \) value, we decided to use a fixed \(\lambda \) for all solvers. Note that the computation time of the RAC-ADMM solver is invariant to whether \(\lambda \) is decreasing or fixed.

Table 16 reports the average cross-validation run-time and the average absolute \(l_2\) loss for all possible pairs \((\alpha , \lambda )\) with parameters chosen from \(\alpha =\{0, \ 0.1,\dots ,1\}\) and \(\lambda = \{1,\ 0.01\}\). Unless otherwise specified, the RAC-ADMM solver run-time parameters were identical across the experiments, with augmented Lagrangian penalty parameter \(\gamma = 0.1\lambda \) for sparsity \(< 0.995\), \(\gamma = \lambda \) for sparsity \(> 0.995\), and block size \(s=100\). The large-scale sparse data set \(\mathbf {X}\) is generated uniformly at random with \(n=40{,}000\), \(p=4{,}000{,}000\) and sparsity 0.998. For the ground truth \(\beta ^*\), we use a standard Gaussian with sparsity 0.5 and a fixed \(\lambda \). Noticing from the previous experiment that increasing the number of iterations from 10 to 20 did not significantly improve the prediction error, we fix the number of iterations to 10.

Table 16 Comparison on solver performance, dense elastic net model

Table 17 reports the average cross-validation run-time and the average absolute \(l_2\) loss for all possible pairs \((\alpha , \lambda )\) with parameters chosen from \(\alpha =\{0, \ 0.1,\dots ,1\}\) and \(\lambda = \{1,\ 0.01\}\). The table also shows the best \(l_2\) loss for each solver. Because it took more than 10,000 seconds for Matlab lasso to solve even one estimation, the table reports only the comparison between glmnet and RAC.

Table 17 Comparison on solver performance, elastic net model

Experimental results on synthetic data show that the RAC-ADMM solver significantly outperforms all other solvers in total time while being competitive in absolute \(l_2\) loss. Further RAC-ADMM speedups could be accomplished by fixing the block structure (RP-ADMM). In terms of run-time, for the dense problem, RAC-ADMM is 3 times faster than Matlab lasso and 7 times faster than glmnet, while RP-ADMM is 6 times faster than Matlab lasso and 14 times faster than glmnet. For the sparse problem, RAC-ADMM is more than 30 times faster than Matlab lasso and 3 times faster than glmnet, while RP-ADMM is 4 times faster than glmnet.

Following Corollary 2, RP-ADMM is slower than RAC-ADMM when convergence is measured in the number of iterations, and experimental evidence (Table 1) shows that it also suffers from slow convergence to a high precision level on the L1-norm of the equality constraints. However, the benefit of RP-ADMM is that it can store pre-factorized sub-block matrices, as the block structure is fixed across iterations, in contrast to RAC-ADMM, which requires re-forming the sub-blocks at each iteration, which in turn makes each iteration more costly in time. In many machine learning problems, including regression, the nature of the problem requires only a lower precision level. This makes RP-ADMM an attractive approach, as it could converge within fewer steps and potentially be faster than RAC-ADMM. In addition, while performing simulations we observed that increasing the number of iterations does not significantly improve prediction performance; in fact, the absolute \(l_2\) loss remains similar even when the number of iterations is increased to 100. This further gives an advantage to RP-ADMM, as it benefits the most when the number of iterations is relatively small.

LIBSVM Benchmark instances The regression data set E2006-tfidf has feature size 150,360, with 16,087 training and 3308 testing data points. The null training error of the test set is 221.8758. Following the findings from the section on synthetic problems and noticing that this data set is sparse (sparsity \(=0.991\)), this setup fixes the number of iterations to 10 and varies \(\lambda = \{1, \ 0.01\}\) and \(\alpha = \{0, \ 0.1, \ 0.2,\dots ,1\}\). The training set is used to estimate \(\beta ^*\), and the model error (ME) on the test set is compared across the different solvers.

Table 18 E2006-tfidf performance summary for Lasso problem (\(\alpha =1\), \(\lambda = 0.01\))

Table 18 shows the performance of OSQP and Matlab lasso for \(\alpha =1\) and \(\lambda = 0.01\), and Table 19 compares RAC-ADMM with glmnet. The reason for splitting the results into two tables is the inefficiency of factorizing a big matrix by the OSQP solver and the Matlab lasso implementation. Each of these solvers requires more than 1000 seconds to solve the problem for even 10 iterations, making them impractical to use. On the other hand, glmnet, which uses a cyclic coordinate descent algorithm on each variable, performs significantly faster than OSQP and Matlab lasso. However, glmnet can still be inefficient, as a complete cycle through all p variables requires O(pN) operations [30]. The results given in Table 19 are the averages over run-time and training error collected from experiments with \(\alpha =\{0,0.1,\dots ,1\}\). The results show that RAC-ADMM is faster than glmnet for all parameter choices and that it achieves the best training model error, 22.0954, among all the solvers. In terms of run-time, RAC-ADMM is 14 times faster than OSQP, 38 times faster than Matlab lasso, and 4 times faster than glmnet. RP-ADMM is 28, 18 and 8 times faster than OSQP, Matlab lasso and glmnet, respectively.

Table 19 E2006-tfidf performance summary

For the log1pE2006 benchmark, the feature size is 4,272,227, with 16,087 training and 3308 testing data points. The null training error of the test set is 221.8758 and the sparsity of the data is 0.998. Similarly to the previous benchmark, the performance results are split into two tables: Table 20 shows the performance of OSQP and Matlab lasso, while Table 21 compares RAC-ADMM and glmnet.

Table 20 log1pE2006 performance summary for Lasso problem (\(\alpha =1\), \(\lambda = 0.01\))
Table 21 log1pE2006 performance summary

The results show that RAC-ADMM and RP-ADMM are still competitive and are on the same level as glmnet with respect to model error, and all outperform OSQP and Matlab. In terms of run-time, RAC-ADMM is 12 times faster than OSQP and 5 times faster than glmnet; RP-ADMM is 16 and 7 times faster than OSQP and glmnet, respectively.

Support Vector Machine A Support Vector Machine (SVM) is a machine learning method for classification, regression, and other learning tasks. The method learns a mapping between the features \({{\,\mathrm{{\mathbf{x}}}\,}}_i\in {{\,\mathrm{\mathbb {R}}\,}}^r\), \(i=1,\dots ,n\), and the target labels \(y_i\in \{-1,1\}\) of a set of data points using a training set, and constructs a hyperplane \( {{{\,\mathrm{{\mathbf{w}}}\,}}^T\phi ({{\,\mathrm{{\mathbf{x}}}\,}})+b} \) that separates the data set. This hyperplane is then used to predict the class of further data points. The objective uses the Structural Risk Minimization principle, which aims to minimize the empirical risk (i.e. misclassification error) while maximizing the confidence interval (by maximizing the separation margin) [74, 75].

Training an SVM is a convex optimization problem, with multiple formulations, such as C-support vector classification (C-SVC), \(\upsilon \)-support vector classification (\(\upsilon \)-SVC), \(\epsilon \)-support vector regression (\(\epsilon \)-SVR), and many more. As our goal is to compare RACQP, a general QP solver, with specialized SVM software and not to compare SVM methods themselves, we decided on using C-SVC [8, 18], with the dual problem formulated as

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{{\mathbf{z}}}\,}}} &{} \frac{1}{2}{{\,\mathrm{{\mathbf{z}}}\,}}^T {{\,\mathrm{\mathbf{Q}}\,}}{{\,\mathrm{{\mathbf{z}}}\,}}\ -\ {{\,\mathrm{{\mathbf{e}}}\,}}^T{{\,\mathrm{{\mathbf{z}}}\,}}\\ \text{ s.t. } &{} {{\,\mathrm{{\mathbf{y}}}\,}}^T{{\,\mathrm{{\mathbf{z}}}\,}}\ = \ 0 \\ &{} {{\,\mathrm{{\mathbf{z}}}\,}}\in [0,C] \end{array} \end{aligned}$$
(37)

with \({{\,\mathrm{\mathbf{Q}}\,}}\in {{\,\mathrm{\mathbb {R}}\,}}^{n\times n}\), \({{\,\mathrm{\mathbf{Q}}\,}}\succeq 0\), \( {q_{i,j}=y_iy_jK({{\,\mathrm{{\mathbf{x}}}\,}}_i,{{\,\mathrm{{\mathbf{x}}}\,}}_j)} \), where \( {K({{\,\mathrm{{\mathbf{x}}}\,}}_i,{{\,\mathrm{{\mathbf{x}}}\,}}_j):=\phi ({{\,\mathrm{{\mathbf{x}}}\,}}_i)^T\phi ({{\,\mathrm{{\mathbf{x}}}\,}}_j)} \) is a kernel function, and a regularization parameter \(C >0\). The optimal \({{\,\mathrm{{\mathbf{w}}}\,}}\) satisfies \( {{{\,\mathrm{{\mathbf{w}}}\,}}=\sum _{i=1}^ny_i{{\,\mathrm{{\mathbf{z}}}\,}}_i\phi ({{\,\mathrm{{\mathbf{x}}}\,}}_i)} \), and the bias term b is calculated using the support vectors that lie on the margins (i.e. \(0<{{\,\mathrm{{\mathbf{z}}}\,}}_i<C\)) as \( {b_i= {{\,\mathrm{{\mathbf{w}}}\,}}^T\phi ({{\,\mathrm{{\mathbf{x}}}\,}}_i) - y_i } \). To avoid numerical stability issues, b is then found by averaging over \(b_i\). The decision function is defined as \( {f({{\,\mathrm{{\mathbf{x}}}\,}}) =\text{ Sign }({{\,\mathrm{{\mathbf{w}}}\,}}^T\phi ({{\,\mathrm{{\mathbf{x}}}\,}})+b)} \).
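As an illustration, a short NumPy sketch of assembling the data of the dual (37) with the Gaussian kernel used in the experiments below; this is a sketch of the model setup, not the RACQP input format:

```python
import numpy as np

def csvc_dual_matrices(X, y, sigma=1.0):
    """Q and e for the C-SVC dual (37): q_ij = y_i * y_j * K(x_i, x_j)."""
    sq = (X ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)  # squared distances
    K = np.exp(-d2 / (2.0 * sigma ** 2))                             # Gaussian kernel
    Q = (y[:, None] * y[None, :]) * K
    return Q, np.ones(len(y))
```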

We compare RACQP with LIBSVM [14], due to its popularity, and with Matlab-SVM, due to its ease of use. These methods implement specialized approaches to address the SVM problem (e.g. LIBSVM uses a Sequential Minimal Optimization, SMO, type decomposition method [9, 26]), while our approach solves the optimization problem (37) directly.

The LIBSVM benchmark library provides a large set of instances for SVM, and we selected a representative subset: training data sets with sizes ranging from 20,000 to 580,000 and numbers of features from eight to 1.3 million. We use the test data sets when provided; otherwise, we create test data by randomly choosing 30% of the training data, and report cross-validation accuracy results.

In Table 22 we report on model training run-time and accuracy, defined as (number of correctly predicted data points)/(total testing data size)\(\times \)100%. RAC-ADMM parameters were as follows: maximum block size \(s=100, 500,\) and 1000 for small, medium and large instances, respectively, and augmented Lagrangian penalty \(\beta =0.1p\), where p is the number of blocks, which in this case is \(p=\lceil n/s \rceil \) with n being the size of the training data set. In the experiments we use the Gaussian kernel, \( {K({{\,\mathrm{{\mathbf{x}}}\,}}_i,{{\,\mathrm{{\mathbf{x}}}\,}}_j) =\exp (-\frac{1}{2\sigma ^2}\Vert {{\,\mathrm{{\mathbf{x}}}\,}}_i-{{\,\mathrm{{\mathbf{x}}}\,}}_j\Vert ^2)} \). The kernel parameters \(\sigma \) and C were estimated by running a grid search with cross-validation: we tried different pairs \((C,\sigma )\) and picked those that returned the best cross-validation accuracy (done using a randomly chosen 30% of the training data) when the instances were solved using RAC-ADMM. Those pairs were then used to solve the instances with LIBSVM and Matlab. The pairs were chosen from a relatively coarse grid, \(\sigma ,C\in \{0.1, 1, 10\}\), because the goal of this experiment is to compare RAC-ADMM with heuristic implementations rather than to find the best classifier. Termination criteria were either the primal/dual residual tolerance (\(\epsilon _p = 10^{-1}\) and \(\epsilon _d = 10^{0}\)) or the maximum number of iterations, \(k=10\), whichever occurred first. The dual residual tolerance was set to such a loose value because empirical observations showed that tightening it does not significantly increase classification accuracy but affects run-time disproportionately. Maximum run-time was limited to 10 h for mid-size problems, and unlimited for the large ones. Run-time is shown in seconds, unless noted otherwise.

Table 22 Model training performance comparison for SVM

The results show that RACQP produces classification models of quality competitive with those produced by the specialized software implementations, in a much shorter time. RACQP is in general faster than LIBSVM (up to 27\(\times \)), except for instances where the ratio of the number of observations n to the number of features r is very large. It is noticeable that while producing (almost) identical results to LIBSVM, the Matlab implementation is significantly slower.

For the small and mid-size instances (training set size < 100K) we tried, the difference in prediction accuracy is less than 2%, except for problems where the test data sets are much larger than the training sets. In the case of the “rcv1.binary” instance the test data set is 5\(\times \) larger than the training set, and for the “cod_rna” instance it is 4\(\times \) larger. In both cases RACQP outperforms LIBSVM (and Matlab) in accuracy, by 20% and 9%, respectively.

All instances except for “news20.binary” have \(n\gg r\), and the choice of the Gaussian kernel is the correct one. For instances where the number of features is larger than the number of observations, a linear kernel is usually the better choice, as the separability of the model can be exploited [79] and the problem solved to similar accuracy in a fraction of the time required with the non-linear kernel. The reason we used the Gaussian kernel on the “news20.binary” instance is that we wanted to show that RACQP is only mildly affected by the feature set size. Instances of similar size but different numbers of features are all solved by RACQP in approximately the same time, in contrast to LIBSVM and Matlab, which are both affected by the feature space size. LIBSVM slows down significantly, while Matlab, in addition to slowing down, could not solve “news20.binary”—the implementation of the fitcsvm() function that invokes the Matlab-SVM algorithm requires full matrices to be provided as input, which in the case of “news20.binary” would require 141.3 GB of main memory.

The “skin_nonskin” benchmark instance marks a point where our direct approach starts showing weaknesses—LIBSVM is 5\(\times \) faster than RACQP because of its fine-tuned heuristics, which exploit the very small feature space (with respect to the number of observations). The largest instance we addressed is “covtype.binary”, with more than half a million observations and a (relatively) small feature size (\(r=54\)). For this instance, RACQP continued slowing down proportionally to the increase in problem size, while LIBSVM experienced a large hit in run-time performance, requiring almost two days to solve the full-size problem. This indicates that the algorithms employed by LIBSVM are pushed to the limit and that specialized algorithms (and implementations) are needed to handle large-scale SVM problems. RACQP accuracy is lower than that of LIBSVM, but can be improved by tightening the residual tolerances at the cost of increased run-time.

For large-size problems RACQP performance degraded, but the success with the mid-size problems suggests that a specialized “RAC-SVM” algorithm could be developed to address very large problems. Such a solution could merge the RAC-ADMM algorithm with heuristic techniques to (temporarily) reduce the size of the problem (e.g. [42]), smart kernel approximation techniques, probabilistic approaches to shrinking the support vector set (e.g. [62]), and the like.

4.1.8 Changing random seed for RACQP

When it comes to algorithms that are stochastic in nature, as RAC-ADMM is, the question that always comes to mind is the robustness of the algorithm. More precisely, how sensitive is RAC-ADMM to variations in problem data for a given problem model, and to variations arising from differences in sub-problems due to the randomness of the block building procedure (Algorithm 1, line 3)? The answer to the former question has been provided in Sect. 4.1.3, and this section tackles the latter.

To answer the question of RACQP sensitivity to sub-problem structure we subject RACQP to different random seeds—each sub-problem solves a minimization problem defined by the Lagrangian (Eq. 20), which is, in turn, a function of blocks of primal variables constructed using a stochastic process, following the procedure outlined in Sect. 3.1, Step 1. This stochastic process is guided by values drawn from a pseudo-random number generator, which is initialized using a random seed. For different seeds the generator produces different sequences of numbers, which in turn produce different sub-problems addressed by RACQP.

Table 23 shows results over a selected set of instances chosen to represent each problem type addressed so far. The table aggregates statistical data collected by solving each instance using ten different seeds per primal/dual tolerance \(\epsilon \). Note that the CUTEr instances (Sect. 4.1.5) are not included in the analysis, as all those instances are solved using a single-block approach. The results show that RACQP is a robust algorithm and that using a single run was a correct choice to make, at least when it comes to the problem instances reported in this section. Generalizing the claim about RACQP robustness with respect to the randomness of the block building scheme would require many more experiments and theoretical analysis, which we delegate to future work.

Table 23 RACQP performance—number of iterations over different random seeds

4.2 Binary and mixed integer problems

The RAC-ADMM multi-block approach can be applied directly to binary (and mixed integer) problems without any adaptation. However, when dealing with combinatorial problems, a divide-and-conquer approach does not necessarily lead to a good solution, because the solver may get stuck in a local optimum. To mitigate this problem, we introduce additional randomness into the RACQP implementation: a simple perturbation scheme, shown in Algorithm 2, that helps the solver “escape” the local optimum and continue the search for another one (and possibly find the global optimum). Thus, in addition to the run-time parameters used for continuous problems, for MIP we need to specify perturbation parameters, such as the probability distribution used to choose how many variables are perturbed (\(N_p\)) and its parameters. As a default, RACQP implements a truncated exponential distribution, \(N_p\sim \hbox {Exp}(\lambda )\) with parameter \(\lambda =0.4n\), minimum number of variables \(N_{p,\hbox {min}}=2\), and maximum number of variables \(N_{p,\hbox {max}}=n\), based on the observation that for most of the problems “good” solutions tend to be grouped. Variables are chosen at random and, in the general case, perturbation is done by assigning “new” values (within bounds) to the chosen variables. The default number of trials before perturbation is \(N_{trial}=\min (2,0.005n)\). For all binary problems presented in this section the primal residual error was zero, i.e. the problems were solved to feasibility.
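A minimal sketch of one perturbation step in the spirit of Algorithm 2; treating \(0.4n\) as the mean of the exponential distribution and drawing the new values uniformly within the bounds are our assumptions for the example:

```python
import numpy as np

def perturb(x, lower, upper, lam_frac=0.4, n_min=2, rng=None):
    """Reassign a randomly chosen subset of (integer) variables within their bounds."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    n_p = int(np.clip(rng.exponential(scale=lam_frac * n), n_min, n))  # truncated Exp draw
    idx = rng.choice(n, size=n_p, replace=False)
    x_new = x.copy()
    x_new[idx] = rng.integers(lower[idx], upper[idx] + 1)              # new values within bounds
    return x_new
```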

As the default solver for sub-problems, RACQP uses Gurobi, but any other solver that admits mixed integer quadratic problems would suffice. The results reported in this section are based on Gurobi 7.5, and may be outdated. However, since we use Gurobi as the sub-solver, we expect RACQP to implicitly gain from the improvements made to Gurobi. Gurobi was run using its default run-time settings (e.g. the presolve option was turned on).

In [66] the authors present a mixed integer quadratic solver, MIQPs, which uses the OSQP solver for solving sub-problems resulting from a branch-and-bound strategy. Since that solver is built for small and medium size problems that occur in embedded applications, we do not include it in our current study. However, given that MIQPs showed promising numerical performance (3\(\times \) faster than Gurobi) even though it is implemented in Python, it would be interesting to use it within RACQP as the external solver for MIP (Algorithm 1, line 8) instead of our default solver (Gurobi) and compare performance. We defer this comparison to future work.

To solve MIP problems RACQP uses the partial Lagrangian approach, described in Sect. 2.3.3, to handle bounds on variables \({{\,\mathrm{\mathcal {X}}\,}}_i\), \({{\,\mathrm{{\mathbf{x}}}\,}}_i\in {{\,\mathrm{\mathcal {X}}\,}}_i\). Additionally, depending on the problem structure, equality and inequality constraints can also be moved to the local constraint set. Our experiments show that moving some (as is done for QAP) or all constraints (e.g. graph cut problems) to a local set is beneficial in terms of block sizes, run-time, and overall solution quality. By using local constraints we help the sub-solver (e.g. Gurobi) reduce the size of the problem and tighten its formulation (using presolve and cutting plane algorithms).

Rather than solving the binary QP problem exactly, our goal is to find a (randomized or deterministic) algorithm that could find a better solution under a fixed solution time constraint. Our preliminary tests show that solving a large-scale problem using a RAC-ADMM based approach can lead to a solution of very good quality for an integer problem in a very limited time.

The quality of solutions is given in the form of a gap between the objective value of the optimal solution \(x_{\mathrm{opt}}^*\) and the objective value of the solution found by a solver S, \(x_S^*\):

$$\begin{aligned} gap_{S}=\frac{f(x_S^*) - f(x_{\mathrm{opt}}^*)}{1+\hbox {abs}(f(x_{\mathrm{opt}}^*))} \end{aligned}$$
(38)

For the instances for which the optimal solution remains unknown (e.g. QAPLIB and GSET instances), we use the best known results from the literature. Note that for maximization problems (e.g. Max-Cut, Max-Bisection) the gap is the negative of (38). All binary problems are solved with the primal residual equal to zero (i.e. the solutions are feasible and integer).
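A one-line helper computing (38), with the sign flipped for maximization problems (names are ours):

```python
def solution_gap(f_solver, f_opt, maximize=False):
    """Gap (38) between a solver's objective value and the optimal/best known one."""
    g = (f_solver - f_opt) / (1.0 + abs(f_opt))
    return -g if maximize else g
```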

4.2.1 Randomness helps

We start the analysis of RACQP for binary problems with a short example showing that having blocks that are randomly constructed at each iteration, as done by RAC-ADMM, is the main feature that makes RACQP work well for combinatorial problems, without a need for any special adaptation of the algorithm for the discrete domain.

RAC-ADMM can easily be adapted to execute the classical ADMM or RP-ADMM algorithms, so here we compare these three ADMM variants when applied to combinatorial problems. We use a small problem (\(n=1000\)) constructed using (32), applied to a problem of Markowitz type (39),

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{{\mathbf{x}}}\,}}} &{} {{\,\mathrm{{\mathbf{x}}}\,}}^T {{\,\mathrm{\mathbf{V}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}+ \tau {{\,\mathrm{{\mathbf{m}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}+\kappa \Vert {{\,\mathrm{{\mathbf{x}}}\,}}\Vert ^2_2\\ \text{ s.t. }&{} {{\,\mathrm{{\mathbf{e}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}= r\\ &{} {{\,\mathrm{{\mathbf{x}}}\,}}\in \{0,1\}^n \end{array} \end{aligned}$$
(39)

with \(\kappa =10^{-5}\) and a positive integer number \(r\in {{\,\mathrm{\mathbb {Z}}\,}}_+\), \(r\in (1,n)\) that defines how many stocks from a given portfolio must be chosen.

A typical run of the algorithms is shown in Fig. 3. The results show that RAC-ADMM is much better suited for binary optimization problems than cyclic ADMM, RP-ADMM or distributed-ADMM, which is not surprising since more randomness is built into the algorithm, making it more likely to escape local optima. All the algorithms are quick to find a local optimum, but all except RAC-ADMM stay at the first point found, while RAC-ADMM continues to find local optima, which may be better or worse than those previously found. Because of this behavior, one can keep track of the best solution found (\({{\,\mathrm{{\mathbf{x}}}\,}}_{best}\), Algorithm 2). The algorithms seem robust with respect to the structure of the Hessian and the choice of initial point. For completeness of the comparison, we have implemented distributed-ADMM (Eq. 4) for binary problems and ran the algorithm on the same data. Its results are omitted from Fig. 3 because the algorithm was unable to find a single feasible solution in 500 iterations.

Fig. 3 A typical evaluation of the objective function value of (39) for different ADMM algorithms. \(n=1000\), \(r=n/2\), \(p=50\), \(\beta =50\)

4.2.2 Markowitz portfolio selection

Similarly to the section on continuous problems, we compare RACQP performance with that of Gurobi on the Markowitz cardinality-constrained portfolio selection problem (39), using real data coming from CRSP 2018 [78]. In the experiments, we set \(r=n/2\), with all other settings identical to those used in Sect. 4.1.2, including \({{\,\mathrm{\mathbf{V}}\,}}\) and \({{\,\mathrm{{\mathbf{m}}}\,}}\), estimated from the CRSP 2018 data. The default RACQP perturbation settings with \(\beta =0.05\) and \(p=100\) were used in the experiments. The gap is measured from the “optimal” objective values of the solutions found by Gurobi in about 1 h of run-time after relaxing the MIPGap parameter to 0.1.

Table 24 Markowitz portfolio selection model (39)

From the results (Table 24) it is noticeable that RACQP finds relatively good solutions (gap \(10^{-2}\)–\(10^{-4}\)) in a very short time, in some cases even before Gurobi had time to finalize the root relaxation step of its binary optimization procedure. The maximal allowed run-time of 1 min was far too short for Gurobi to complete its optimization, so it returned heuristic solutions. Note that those solutions (third column of the table) are extremely weak, suggesting that a RAC-ADMM based solution could be implemented and used instead.

Low-rank Markowitz portfolio selection model

Similarly to (31), we formulate the model for a low-rank covariance matrix V as

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{{\mathbf{x}}}\,}}} &{} \Vert {{\,\mathrm{{\mathbf{y}}}\,}}\Vert ^2_2 - \tau {{\,\mathrm{{\mathbf{m}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}+ \kappa \Vert {{\,\mathrm{{\mathbf{x}}}\,}}\Vert ^2_2 \\ \text{ s.t. }&{} {{\,\mathrm{{\mathbf{e}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}= r\\ &{} {{\,\mathrm{\mathbf{B}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}- {{\,\mathrm{{\mathbf{y}}}\,}}= \varvec{0}\\ &{} {{\,\mathrm{{\mathbf{x}}}\,}}\in \{0,1\}^n \end{array} \end{aligned}$$
(40)

and solve the model for the CRSP 2018 data. We use \(\beta =0.5\) and \(p=50\). The RACQP gap was measured from the optimal solution returned by Gurobi. In Table 25 we report on the best solutions found by RACQP with the maximum run-time limited to 60 s. The results are hard to compare. When the Hessian is diagonal and the number of constraints is small, as is the case for this data, Gurobi has a very easy time solving the problems (monthly and daily data)—it finds good heuristic points to start with, and solves the problems at the root node after a couple of hundred simplex iterations. On the other hand, RACQP, which does not directly benefit from a diagonal Hessian, needs to execute multiple iterations of ADMM. Even though the problems are small and solved very quickly, the overhead of preparing the sub-problems and initializing Gurobi to solve them accumulates to the point of overwhelming the RACQP run-time. In that light, for the rest of this section we consider problems where the Hessian is a non-diagonal matrix, and address problems that are hard to solve directly by Gurobi (and possibly other MIP QP solvers).

Table 25 Low-rank reformulation Markowitz portfolio selection model (40)

4.2.3 QAPLIB

The binary quadratic assignment problem (QAP) is known to be NP-hard, and binary instances of larger sizes (dimension of the permutation matrix \(r>40\)) are considered intractable and cannot be solved exactly (though some instances of large size with special structure have been solved). Currently, the only practical approaches to solving large QAP instances are heuristic methods.

For binary QAP we apply the same method for variance reduction as we did for the relaxed QAP (Sect. 4.1.4). We group variables following the structure of the constraints, which is dictated by the permutation matrix \(X\in \{0,1\}^{r\times r}\) (see Eq. 33 for the QAP problem formulation)—we construct one super-variable, \({{\,\mathrm{{\mathbf{x}}}\,}}_i\), for each row i of X. Next we make use of the partial Lagrangian and split the constraints into the local constraint set consisting of (33) (a) and the global constraint set consisting of (33) (b), so that the partial Lagrangian is

$$\begin{aligned} {L_{\beta }({{\,\mathrm{{\mathbf{x}}}\,}},y)=\frac{1}{2}{{\,\mathrm{{\mathbf{x}}}\,}}^T{{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{y}}}\,}}^T({{\,\mathrm{\mathbf{A}}\,}}_{global}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{1}}}\,}})+\frac{ \beta }{2}||{{\,\mathrm{\mathbf{A}}\,}}_{global}{{\,\mathrm{{\mathbf{x}}}\,}}-{{\,\mathrm{{\mathbf{1}}}\,}}||^2. } \end{aligned}$$

At each iteration, we update the i-th block by solving

$$\begin{aligned} {{{\,\mathrm{{\mathbf{x}}}\,}}_i^{k+1}=\hbox {arg min} \{L_{\beta }(\cdot ) |\, {{\,\mathrm{\mathbf{A}}\,}}_{local}{{\,\mathrm{{\mathbf{x}}}\,}}_i={{\,\mathrm{{\mathbf{1}}}\,}},\ {{\,\mathrm{{\mathbf{x}}}\,}}_i\in \{0,1\}^n\}.} \end{aligned}$$

Next, continuing the discussion on perturbation from the previous section, we turn the feature on and set the parameters as follows: the number of super-variables to perturb is drawn from a truncated exponential distribution, \(N_p\sim \hbox {Exp}(\lambda )\), with parameter \(\lambda =0.4r\), minimum number of variables \(N_{p,\hbox {min}}=2\), and maximum number of variables \(N_{p,\hbox {max}}=r\). The number of trials before perturbation \(N_{trial}\) is set to its default value. Note that we do not perturb single variables (\(x_{i,j}\)), but rather super-variables chosen at random. If a super-variable \({{\,\mathrm{{\mathbf{x}}}\,}}_i\) has a value of ’1’ at one location and ’0’ at all other entries, then we randomly swap the location of the ’1’ within the super-variable (thus keeping the row-wise constraint on X for row i satisfied). If the super-variable is not feasible (the number of ’1’ entries \(\not =1\)), we flip the values of a random number of the variables that make up \({{\,\mathrm{{\mathbf{x}}}\,}}_i\). The initial point is a random feasible vector. The penalty parameter is a function of the problem size, \(\beta =n\), while the number of blocks depends on the permutation matrix size and is set to \(p=\lceil r/2\rceil \).

A summary of the QAPLIB benchmark [60] results is given in Table 26. Out of the 133 instances included in the benchmark, RACQP found the optimal solution (or the best known solution from the literature, as not all instances have a proven optimal solution) for 18 instances within 10 min of run-time. For the remaining instances, RACQP returned solutions with an average gap of \(\mu =0.07\). Gurobi solved only three instances to optimality. The average gap of its unsolved instances is \(\mu =12.15\), which includes heuristic solutions returned when the root relaxation step was not finalized (20 instances). Removing those outliers results in an average gap of \(\mu =5.57\).

Table 26 QAPLIB benchmark summary (number of instances = 133)

Table 27 gives detailed information on 21 large instances from the QAPLIB data set. The most important takeaway from the table is that Gurobi cannot even start solving very large problems, as it cannot finalize the root relaxation step within the given maximum run-time, while RACQP can.

Table 27 QAPLIB, large problems [11, 52] and RACQP/Gurobi objective values

4.2.4 Maximum cut problem

The maximum-cut (Max-Cut) problem consists of finding a partition of the nodes of a graph \(G = (V,E)\) into two disjoint sets \(V_1\) and \(V_2\) (\(V_1\cap V_2 = \emptyset \), \(V_1\cup V_2 = V\)) such that the total weight of the edges with one endpoint in \(V_1\) and the other in \(V_2\) is maximized. The problem has numerous important practical applications, and is one of Karp's 21 NP-complete problems. A standard formulation of the problem is \( { \max \limits _{{{\,\mathrm{{\mathbf{y}}}\,}}_i\in \{-1,1\}} \frac{1}{4}\sum _{i,j} w_{i,j}(1-y_iy_j) } \), which can be reformulated as the quadratic unconstrained binary problem

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{{\mathbf{x}}}\,}}} &{} {{\,\mathrm{{\mathbf{x}}}\,}}^T {{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}\\ \text{ s.t. }&{} {{\,\mathrm{{\mathbf{x}}}\,}}\in \{0,1\}^n \end{array} \end{aligned}$$
(41)

where \(h_{i,j}=w_{i,j}\) and \(h_{i,i}=-\frac{1}{2}(\sum _{j=1}^n w_{i,j} + \sum _{j=1}^n w_{j,i})\).
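As an illustration, the sketch below (assuming a symmetric weight matrix W with zero diagonal; all names are ours) builds \({{\,\mathrm{\mathbf{H}}\,}}\) as defined above and checks by enumeration on a tiny graph that minimizing \({{\,\mathrm{{\mathbf{x}}}\,}}^T{{\,\mathrm{\mathbf{H}}\,}}{{\,\mathrm{{\mathbf{x}}}\,}}\) indeed yields the maximum cut.

```python
import itertools
import numpy as np

def maxcut_hessian(W):
    """Build H with h_ij = w_ij for i != j and h_ii = -(row_i + col_i)/2."""
    H = W.astype(float).copy()
    np.fill_diagonal(H, -(W.sum(axis=1) + W.sum(axis=0)) / 2.0)
    return H

# Tiny sanity check on a 4-node graph (symmetric W, zero diagonal).
W = np.array([[0, 3, 1, 0],
              [3, 0, 2, 4],
              [1, 2, 0, 5],
              [0, 4, 5, 0]], dtype=float)
H = maxcut_hessian(W)

best_obj = min(np.array(x) @ H @ np.array(x)
               for x in itertools.product([0, 1], repeat=4))
best_cut = max(sum(W[i, j] for i in range(4) for j in range(i + 1, 4)
                   if x[i] != x[j])
               for x in itertools.product([0, 1], repeat=4))
print(-best_obj == best_cut)   # True: minimizing x^T H x maximizes the cut
```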

We use the Gset benchmark from [33], and compare our experimental results with the optimal solutions (found by Gurobi) and the best known solutions from the literature [4, 48]. For perturbation we use the default parameters and perturb by choosing a random number of variables and flipping their values, i.e. \(x_i=1-x_i\). The number of blocks is the same for all instances, \(p=4\), and the initial point is set to zero (\({{\,\mathrm{{\mathbf{x}}}\,}}_0={{\,\mathrm{{\mathbf{0}}}\,}}\)) for all the experiments. Note that since the max-cut problem is unconstrained, the penalty parameter \(\beta \) is not used (and RACQP reduces to a randomly assembled cyclic BCD).

Table 28 Max-Cut, GSET instances

In contrast to continuous sparse problems (rule 4, Sect. 4.1.1), sparse binary problems benefit from a randomized multi-block approach, as shown in Table 28. The table compares RACQP and Gurobi results collected from experiments on Gset instances for three different maximum run-time limits: 10, 30 and 60 minutes. RACQP again outperforms Gurobi overall; it finds better solutions when run-time is limited. Although Gurobi does better on a few problems, on average RACQP is better. Note that for large(r) problems (\(n\ge 5000\)) RACQP keeps improving, which can be explained by the difference in the number of perturbations—for smaller problems, good points have already been visited and the chance of finding a better one is small. Adaptively changing the perturbation parameters could help, but this topic is outside the scope of this work.

4.2.5 Maximum bisection problem

The maximum bisection problem is a variant of the Max-Cut problem in which the vertex set V of a graph \(G = (V,E)\) is partitioned into two disjoint sets \(V_1\) and \(V_2\) of equal cardinality (i.e. \(V_1\cap V_2 = \emptyset \), \(V_1\cup V_2 = V\), \(|V_1| = |V_2|\)) such that the total weight of the edges whose endpoints belong to different subsets is maximized. The problem formulation follows (41) with the addition of the constraint \({{\,\mathrm{{\mathbf{e}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}= \lfloor n/2 \rfloor \), where n is the graph size.

For Max-Bisection, at each iteration we update the \(i\)th block by solving

$$\begin{aligned} {{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}^{k+1}={\mathop {\hbox {arg min}}\limits _{{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}\in \{0,1\}^{d_i}}} \{{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}^T {{\,\mathrm{\mathbf{H}}\,}}_{\omega _i} {{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i} -y({{\,\mathrm{{\mathbf{e}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}-{{\,\mathrm{{\mathbf{b}}}\,}}_{\omega _i}) +\frac{\beta }{2}\Vert {{\,\mathrm{{\mathbf{e}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}-{{\,\mathrm{{\mathbf{b}}}\,}}_{\omega _i}\Vert ^2 \} } \end{aligned}$$

where \(d_i\) is the size of block i, \({{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}\) is the sub-vector of \({{\,\mathrm{{\mathbf{x}}}\,}}\) constructed from the components of \({{\,\mathrm{{\mathbf{x}}}\,}}\) with indices \(\omega _i\in \Omega \), and \(b_{\omega _i}=\lfloor n/2 \rfloor -{{\,\mathrm{{\mathbf{e}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}\), with \({{\,\mathrm{{\mathbf{x}}}\,}}_{-\omega _i}\) being the sub-vector of \({{\,\mathrm{{\mathbf{x}}}\,}}\) with indices not chosen by \(\omega _i\). Solving the sub-problems directly proved to be very time consuming. However, noticing that Gurobi, while solving the problem as a whole, makes good use of cuts for this type of problem (matrix Q structure), we decided to reformulate the sub-problems as follows

$$\begin{aligned} \begin{array}{cl} \min \limits _{{{\,\mathrm{{\mathbf{x}}}\,}}} &{} {{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}^T {{\,\mathrm{\mathbf{H}}\,}}_{\omega _i} {{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i} -yr +\frac{\beta }{2}r^2 \\ \text{ s.t. }&{} {{\,\mathrm{{\mathbf{e}}}\,}}^T{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}-r={{\,\mathrm{{\mathbf{b}}}\,}}_{\omega _i} \\ &{}{{\,\mathrm{{\mathbf{x}}}\,}}_{\omega _i}\in \{0,1\}^{d_i},\ r\in \{0,1\}. \end{array} \end{aligned}$$

Note that r can also be defined as a bounded continuous or integer variable, but because its optimal value is zero and because Gurobi makes good use of binary cuts, we decided to define r as binary.
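For illustration, a minimal gurobipy-style sketch of the reformulated block sub-problem is given below; the inputs (the block Hessian H_i, the scalar multiplier y, penalty beta, and right-hand side b_i) and the solver options shown are our assumptions, not the exact RACQP implementation.

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

def solve_bisection_block(H_i, y, beta, b_i, time_limit=None):
    """Sketch of the reformulated Max-Bisection block sub-problem:
        min  x^T H_i x - y*r + (beta/2) r^2
        s.t. e^T x - r = b_i,  x binary,  r binary.
    H_i is the Hessian restricted to the block; b_i is the residual
    right-hand side computed from the fixed blocks.
    """
    d = H_i.shape[0]
    m = gp.Model("maxbisection_block")
    m.Params.OutputFlag = 0
    if time_limit is not None:
        m.Params.TimeLimit = time_limit

    x = m.addVars(d, vtype=GRB.BINARY, name="x")
    r = m.addVar(vtype=GRB.BINARY, name="r")

    obj = (gp.quicksum(H_i[a, b] * x[a] * x[b]
                       for a in range(d) for b in range(d))
           - y * r + 0.5 * beta * r * r)
    m.setObjective(obj, GRB.MINIMIZE)
    m.addConstr(gp.quicksum(x[a] for a in range(d)) - r == b_i)

    m.optimize()
    return np.array([x[a].X for a in range(d)])
```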

As in the previous section, we use the Gset benchmark library and compare our experimental results with the best known solutions for max-bisection problems found in the literature [48]. The experimental setup is identical to that of the Max-Cut experiments, except that the penalty parameter is \(\beta =0.005\) and the initial point \({{\,\mathrm{{\mathbf{x}}}\,}}_0\) is a feasible random vector. Perturbation is done with a simple swap—an equal number of variables with values "1" and "0" is chosen, and each chosen variable's value is flipped, as sketched below.
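A minimal sketch of this swap perturbation, with illustrative names (the number of swapped pairs k following the perturbation parameters described earlier), is:

```python
import numpy as np

def swap_perturbation(x, k, rng=None):
    """Flip k randomly chosen '1' entries and k randomly chosen '0' entries,
    so that e^T x (the bisection constraint) is preserved."""
    rng = np.random.default_rng() if rng is None else rng
    x = x.copy()
    ones = np.flatnonzero(x == 1)
    zeros = np.flatnonzero(x == 0)
    k = min(k, len(ones), len(zeros))
    x[rng.choice(ones, size=k, replace=False)] = 0
    x[rng.choice(zeros, size=k, replace=False)] = 1
    return x
```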

Table 29 Max-Bisection, GSET instances

The results are shown in Table 29. Compared to the unconstrained max-cut problem, RACQP seems to have less trouble solving the max-bisection problem—adding a single constraint boosted its performance by up to 2\(\times \). Gurobi's performance, on the other hand, worsened. Overall, RACQP outperforms Gurobi, finding better solutions when run-time is limited. Both Gurobi and RACQP continue to gain in solution quality (the gap gets smaller) with longer time limits.

5 Summary

In this paper, we introduced a novel randomized algorithm, the randomly assembled multi-block and cyclic alternating direction method of multipliers (RAC-ADMM), for solving continuous and binary convex quadratic problems. We provided theoretical guarantees on the performance of our algorithm for solving linear-equality constrained continuous convex quadratic programming, including the expected convergence of the algorithm and a sufficient condition for its almost sure convergence. We further provided open-source code of our solver, RACQP, and numerical results demonstrating the efficiency of our algorithm.

We conducted multiple numerical tests on synthetic, real-world, and benchmark quadratic optimization problems, including both continuous and binary problems. We compared RACQP with Gurobi, Mosek and OSQP for cases that do not require high accuracy, but rather a strictly improved solution in the shortest possible run-time. Computational results show that RACQP, except for a couple of instances with a special structure, finds solutions of very good quality in a much shorter time than the compared solvers.

In addition to general linearly constrained quadratic problems, we applied RACQP to a few selected machine learning problems: Linear Regression, LASSO, Elastic-Net, and SVM. Our solver matches the performance of the best tailored methods, such as Glmnet and LIBSVM, and often gives better results than the tailored methods do. In addition, our solver uses much less memory than other ADMM-based methods do, making it suitable for real applications with big data.

The following is a quick summary of the pros and cons of RACQP, our implementation of RAC-ADMM, for solving quadratic problems, together with suggestions for future research:

  • RACQP is remarkably effective for solving continuous and binary convex QP problems when the Hessian is non-diagonal, the constraint matrix is unstructured, or the number of constraints is small. These findings are demonstrated by solving Markowitz portfolio problems with real or random data, and randomly generated sparse convex QP problems.

  • RACQP, coupled with smart-grouping and a partial augmented Lagrangian, is equally effective when the structure of the constraints is known. This finding is supported by solving continuous and binary benchmark Quadratic Assignment, Max-Cut, and Max-Bisection problems. However, efficiently deciding on a grouping strategy is itself challenging. We plan to build an "automatic smart-grouping" method as a pre-solver for problem data with unknown structure.

  • Computational studies on binary problems show that the RAC-ADMM approach offers an advantage over the traditional direct approach (solving the problem as a whole) when a good quality solution to a large-scale integer problem must be found within a very limited time. However, exact binary QP solvers, such as Gurobi, are still needed, because our binary RACQP relies on solving many small or medium-sized binary sub-problems. We plan to explore more efficient solvers for medium-sized binary problems within RACQP.

  • The ADMM-based approach, whether RACQP or OSQP, is less competitive when the Hessian of the convex quadratic objective is diagonal and the constraints are sparse but structured, such as in network-flow type problems. We believe that in this case both Gurobi and Mosek can utilize the efficient Cholesky factorization commonly used by interior-point algorithms for solving linear programs; see more details in Sect. 3.1. In contrast, RACQP has a considerable overhead cost of preparing block data and initializing the sub-problem solver, and the time spent on solving the diagonal sub-problems was an order of magnitude shorter than the time needed to prepare the data. This, together with the divergence problem of multi-block ADMM, hints that there is something connected to the problem structure that makes such instances hard for the ADMM-based approach. We plan to conduct additional research to identify problem instances that are well-suited and those that are unsuitable for ADMM.

  • There are still many other open questions regarding RAC-ADMM. For example, there is little work on how to optimally choose the run-time parameters of RAC-ADMM, including the penalty parameter \(\beta \), the number of blocks, and so forth.