COSMO: A conic operator splitting method for convex conic problems

This paper describes the Conic Operator Splitting Method (COSMO) solver, an operator splitting algorithm for convex optimisation problems with quadratic objective function and conic constraints. At each step the algorithm alternates between solving a quasi-definite linear system with a constant coefficient matrix and a projection onto convex sets. The low per-iteration computational cost makes the method particularly efficient for large problems, e.g. semidefinite programs that arise in portfolio optimisation, graph theory, and robust control. Moreover, the solver uses chordal decomposition techniques and a new clique merging algorithm to effectively exploit sparsity in large, structured semidefinite programs. A number of benchmarks against other state-of-the-art solvers for a variety of problems show the effectiveness of our approach. Our Julia implementation is open-source, designed to be extended and customised by the user, and is integrated into the Julia optimisation ecosystem.


Introduction
We consider convex optimisation problems of the form

minimize f(x)
subject to g_i(x) ≤ 0, i = 1, . . . , l,
          h_i(x) = 0, i = 1, . . . , k,   (1)

where we assume that both the objective function f : R^n → R and the inequality constraint functions g_i : R^n → R are convex, and that the equality constraints h_i(x) := a_iᵀx − b_i are affine. We will denote an optimal solution to this problem (if it exists) as x*. Convex optimisation problems feature heavily in a wide range of research areas and industries, including problems in machine learning [CV95], finance [BBD+17], optimal control [BEGFB94], and operations research [BHH01]. Concrete examples of problems fitting the general form (1) include linear programming (LP), convex quadratic programming (QP), second-order cone programming (SOCP), and semidefinite programming (SDP) problems. Methods to solve each of these standard problem classes are well known, and a number of open- and closed-source solvers are widely available. However, the trend for data and training sets of increasing size in decision making problems and machine learning poses a challenge for state-of-the-art software.
Algorithms for LPs were first used to solve military planning and allocation problems in the 1940s [Dan63]. In 1947 Dantzig developed the simplex method, which solves LPs by searching for the optimal solution along the vertices of the inequality polytope. Extensions to the method led to the general field of active-set methods [Wol59], which are able to solve both LPs and QPs and which search for an optimal point by iteratively constructing a set of active constraints. Although often efficient in practice, a major theoretical drawback is that the worst-case complexity increases exponentially with the problem size [WN99].
The most common approach taken by modern convex solvers is the interior-point method [WN99], which stems from Karmarkar's original projective algorithm [Kar84] and is able to solve both LPs and QPs in polynomial time. Interior-point methods have since been extended to problems with positive semidefinite (PSD) constraints in [HRVW96] and [AHO98]. Primal-dual interior-point methods apply variants of Newton's method to iteratively find a solution to the set of KKT optimality conditions. At each iteration the algorithm alternates between a Newton step, which involves factoring a Jacobian matrix, and a line search to determine the magnitude of the step that ensures a feasible iterate. Most notably, the Mehrotra predictor-corrector method in [Meh92] forms the basis of several implementations because of its strong practical performance [Wri97]. However, interior-point methods typically do not scale well for very large problems, since the Jacobian matrix has to be calculated and factored at each step.
Two main approaches to overcome this limitation are active research areas: first, a renewed focus on first-order methods with computationally cheaper per-iteration cost, and second, the exploitation of sparsity in the problem data. First-order methods are known to handle larger problems well, at the expense of reduced accuracy compared to interior-point methods. In the 1960s Everett [Eve63] proposed a dual decomposition method that allows one to decompose a separable objective function, making each iteration cheaper to compute. Augmented Lagrangian methods by Miele ([MCIL71, MCL71, MMLC72]), Hestenes [Hes69], and Powell [Pow69] are more robust and helped to remove strict convexity conditions on problems, while losing the decomposition property. By splitting the objective function, the alternating direction method of multipliers (ADMM), first described in [GM75], allowed the advantages of dual decomposition to be combined with the superior convergence and robustness of augmented Lagrangian methods. Subsequently, it was shown that ADMM can be analysed from the perspective of monotone operators and that it is a special case of Douglas-Rachford splitting [Eck89], as well as of the proximal point algorithm in [Roc76], which allowed further insight into the method.
ADMM methods are simple to implement and computationally cheap, even for large problems.
However, they tend to converge slowly to a high-accuracy solution, and the detection of infeasibility is more involved than for interior-point methods. They are therefore most often used in applications where a modestly accurate solution is sufficient [PB+14]. Most of the early advances in first-order methods such as ADMM happened in the 1970s and 1980s, long before the demand for large-scale optimisation, which may explain why they remained less well known and have only recently resurfaced.
A method of exploiting the sparsity pattern of PSD constraints in an interior-point algorithm was developed in [FKMN01]. This work showed that if the coefficient matrices of an SDP exhibit an aggregate sparsity pattern represented by a chordal graph, then both the primal and the dual problem can be decomposed into problems with many smaller PSD constraints on the nonzero blocks of the original matrix variable. These blocks are associated with subsets, called cliques, of graph vertices. Moreover, it can be advantageous to merge some of these blocks, for example if they overlap significantly. The optimal way to merge the blocks, or equivalently the corresponding graph cliques, after the initial decomposition depends on the solver algorithm and is still an open question. Sparsity in SDPs has also been studied for problems with underlying graph structure, e.g. optimal power-flow problems in [MHLD13] and graph optimisation problems in [Ali95].
Both Fukuda [FKMN01] and Sun [SAV14] developed interior-point solvers that exploit chordal sparsity patterns in PSD constraints. Some heuristic methods to merge cliques have been proposed for interior-point methods in [MHLD13] and been implemented in the SparseCoLO package [FKK + 09] and the CHOMPACK package [AV15].
Several solvers based on the ADMM method have been released recently. The solver OSQP [SBG+18] is implemented in C and detects infeasibility based on the differences of the iterates [BGSB17], but only solves LPs and QPs. The C-based SCS [OCPB16] implements an operator splitting method that solves the primal-dual pair of conic programs in order to provide infeasibility certificates. The underlying homogeneous self-dual embedding method has been extended by [ZFP+19] to exploit sparsity and has been implemented in the MATLAB solver CDCS. The conic solvers SCS and CDCS are unable to handle quadratic cost functions directly. Instead they are forced to reformulate problems with quadratic objective functions by adding a second-order cone constraint, which increases the problem size. Moreover, they rely on primal-dual formulations to detect infeasibility.
Outline In Section 2 we define the general conic problem format, its dual problem, as well as optimality and infeasibility conditions. Section 3 describes the ADMM algorithm that is used by COSMO. Section 4 explains how to decompose SDPs in a preprocessing step, provided the problem data has an aggregate sparsity pattern. In Section 5 we describe a new clique merging strategy and compare it to existing approaches. Implementation details and code-related design choices are discussed in Section 6. Section 7 shows benchmark results of COSMO against other state-of-the-art solvers on a number of test problems. Section 8 concludes the paper.
Contributions With the solver package described in this paper we make the following contributions: 1. We implement a first-order method for large conic problems that is able to detect infeasibility without the need for a homogeneous self-dual embedding.
2. COSMO directly supports quadratic objective functions, thus reducing overheads for applications with both quadratic objective functions and PSD constraints. This also avoids a major disadvantage of conic solvers compared to native QP solvers: no additional matrix factorisation for the conversion is needed, and favourable sparsity in the objective can be maintained.
3. Large structured positive semidefinite programs are analysed and, if possible, chordally decomposed. This typically allows one to solve very large sparse and structured problems orders of magnitude faster than competing solvers. For complex sparsity patterns, further performance improvements are achieved by recombining some of the sub-blocks of the initial decomposition in an optimal way. For this purpose, we propose a new clique-graph-based merging strategy and compare it to existing heuristic approaches.
4. The open-source solver is written in a modular way in the fast and flexible programming language Julia. The design allows users to extend the solver by specifying a specific linear system solver and by defining their own convex cones or custom projection methods.

Notation
The following notation and definitions will be used throughout this paper. Denote the space of real numbers R, the n-dimensional real space R^n, the n-dimensional zero cone {0}^n, the nonnegative orthant R^n_+, the space of symmetric matrices S^n, and the set of positive semidefinite matrices S^n_+. In some of the following sections matrix data is considered in vectorized form. Denote the vectorization of a matrix X obtained by stacking its columns as x := vec(X) and the inverse operation as vec^{-1}(x) = mat(x). For symmetric matrices it is often computationally beneficial to work only with the upper-triangular elements of the matrix. Denote the transformation of a symmetric matrix V ∈ S^n with (i,j)-th element V_ij into a vector of upper-triangular elements as

svec(V) := (V_11, √2 V_12, V_22, √2 V_13, √2 V_23, V_33, . . .)ᵀ,

where the scaling factor √2 preserves the matrix inner product, i.e. tr(AB) = svec(A)ᵀsvec(B) for symmetric matrices A, B ∈ S^n. The inverse operation is denoted by svec^{-1}(s) =: smat(s). Denote the Kronecker product of two matrices A and B as A ⊗ B. The Frobenius norm of a matrix A is given by ‖A‖_F = (Σ_{i,j} |a_ij|²)^{1/2}. Sometimes we consider positive semidefinite constraints in vector form, so we define the space of vectorized symmetric positive semidefinite matrices as

S_+ := {svec(X) | X ∈ S^n_+}.

For a convex cone K denote the polar cone by

K° := {y ∈ R^m | sup_{x ∈ K} ⟨x, y⟩ ≤ 0}

and, following [Roc70], the recession cone of K by

K^∞ := {y ∈ R^m | x + ay ∈ K for all x ∈ K and all a ≥ 0}.

The proximal operator of a convex, closed and proper function f : R^n → R is given by

prox_f(x) := argmin_y { f(y) + ½‖x − y‖₂² }.

We denote the indicator function of a nonempty, closed convex set C ⊆ R^n by I_C(x) := 0 if x ∈ C, +∞ otherwise, and the projection of x ∈ R^n onto C by

Π_C(x) := argmin_{y ∈ C} ‖x − y‖₂².

We further denote the support function of C by σ_C(x) := sup_{y ∈ C} ⟨x, y⟩.
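As an illustration, the svec/smat transformation and its inner-product-preserving √2 scaling can be sketched in a few lines (Python here purely for illustration; the solver itself is written in Julia, and the function names are hypothetical):

```python
import numpy as np

def svec(X):
    """Stack the upper triangle of a symmetric matrix column-wise,
    scaling off-diagonal entries by sqrt(2) to preserve tr(AB)."""
    n = X.shape[0]
    out = []
    for j in range(n):
        for i in range(j + 1):
            out.append(X[i, j] if i == j else np.sqrt(2) * X[i, j])
    return np.array(out)

def smat(s):
    """Inverse of svec: rebuild the symmetric matrix."""
    n = int((np.sqrt(8 * len(s) + 1) - 1) / 2)
    X = np.zeros((n, n))
    k = 0
    for j in range(n):
        for i in range(j + 1):
            X[i, j] = s[k] if i == j else s[k] / np.sqrt(2)
            X[j, i] = X[i, j]
            k += 1
    return X

A = np.array([[1.0, 2.0], [2.0, 3.0]])
B = np.array([[0.5, -1.0], [-1.0, 4.0]])
# the matrix inner product tr(AB) equals the vector inner product
print(np.trace(A @ B), svec(A) @ svec(B))  # both 8.5
```

The √2 factor is exactly what makes the Euclidean inner product of the half-vectorizations agree with the trace inner product of the full matrices.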

Conic Problems
We will address convex optimisation problems with a quadratic objective function and a number of conic constraints in the form:

minimize ½xᵀPx + qᵀx
subject to Ax + s = b, s ∈ K,   (3)

where x ∈ R^n is the primal decision variable and s ∈ R^m is the primal slack variable. The objective function is defined by the positive semidefinite matrix P ∈ S^n_+ and the vector q ∈ R^n. The constraints are defined by the matrix A ∈ R^{m×n}, the vector b ∈ R^m, and a non-empty, closed, convex cone K, which itself can be a Cartesian product of cones in the form

K = K_1^{m_1} × K_2^{m_2} × · · · × K_N^{m_N},   (4)

with cone dimensions Σ_{i=1}^N m_i = m. Note that any LP, QP, SOCP, or SDP can be written in the form (3) using an appropriate choice of cones, as well as problems involving the power or exponential cones and their duals.
The dual problem associated with (3) is given by:

maximize −½xᵀPx + bᵀy
subject to Px + q − Aᵀy = 0, y ∈ K°,   (5)

with dual variable y ∈ R^m. The conditions for optimality (assuming linear independence constraint qualification) follow from the Karush-Kuhn-Tucker (KKT) conditions:

Ax* + s* = b,   (6a)
Px* + q − Aᵀy* = 0,   (6b)
s* ∈ K, y* ∈ K°, ⟨s*, y*⟩ = 0.   (6c)

Assuming strong duality, if there exist x* ∈ R^n, s* ∈ R^m, and y* ∈ R^m that fulfil (6a)-(6c), then the pair (x*, s*) is called the primal solution and y* is called the dual solution of problem (3).

Infeasibility certificates
Primal and dual infeasibility conditions were developed for ADMM in [BGSB17]. These conditions are directly applicable to problems of the form (3). To simplify the notation of the conditions, define the cone K̃ := −K + {b}. Then, the following sets provide certificates for primal and dual infeasibility:

P := {x ∈ R^n | Px = 0, Ax ∈ K̃^∞, ⟨q, x⟩ < 0},   (7)
D := {y ∈ R^m | Aᵀy = 0, σ_K̃(y) < 0}.   (8)

The existence of some y ∈ D certifies that problem (3) is primal infeasible, while the existence of some x ∈ P certifies dual infeasibility.
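As an illustration, the primal infeasibility certificate can be verified numerically for a tiny LP. The example is hypothetical: for K = R²_+ and y ≥ 0, the support function of K̃ = −K + {b} evaluates to bᵀy, which makes the condition easy to check by hand:

```python
import numpy as np

# Primal infeasible LP: x <= 1 and x >= 2, written as Ax + s = b, s in R^2_+.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])

# Candidate certificate y: A^T y = 0 and sigma_{Ktilde}(y) < 0.
# For K = R^2_+ and y >= 0, sigma_{Ktilde}(y) = b^T y.
y = np.array([1.0, 1.0])
print(A.T @ y)   # [0.]  (A^T y = 0)
print(b @ y)     # -1.0  (support function negative => primal infeasible)
```

Intuitively, y combines the two inequalities x ≤ 1 and −x ≤ −2 into the contradiction 0 ≤ −1, which is exactly what a Farkas-type certificate encodes.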

ADMM Algorithm
We use the same splitting as in [SBG+18] to transform problem (3) into the standard ADMM format. The problem is rewritten by introducing the dummy variables x̃ = x and s̃ = s:

minimize ½x̃ᵀPx̃ + qᵀx̃ + I_{Ax+s=b}(x̃, s̃) + I_K(s)
subject to (x̃, s̃) = (x, s),   (9)

where the indicator functions of the sets {(x, s) ∈ R^n × R^m | Ax + s = b} and K were used to move the constraints of (3) into the objective function. The augmented Lagrangian of (9) is given by

L(x̃, s̃, x, s, λ, y) = ½x̃ᵀPx̃ + qᵀx̃ + I_{Ax+s=b}(x̃, s̃) + I_K(s) + (σ/2)‖x̃ − x + λ/σ‖₂² + (ρ/2)‖s̃ − s + y/ρ‖₂²,   (10)

with step size parameters ρ > 0 and σ > 0 and dual variables λ ∈ R^n and y ∈ R^m. The corresponding ADMM iteration is given by:

(x̃^{k+1}, s̃^{k+1}) = argmin_{x̃,s̃} L(x̃, s̃, x^k, s^k, λ^k, y^k),   (11a)
x^{k+1} = α x̃^{k+1} + (1 − α) x^k + λ^k/σ,   (11b)
s^{k+1} = argmin_s { I_K(s) + (ρ/2)‖α s̃^{k+1} + (1 − α) s^k − s + y^k/ρ‖₂² },   (11c)
λ^{k+1} = λ^k + σ( α x̃^{k+1} + (1 − α) x^k − x^{k+1} ),   (11d)
y^{k+1} = y^k + ρ( α s̃^{k+1} + (1 − α) s^k − s^{k+1} ),   (11e)

where we relaxed the second-stage updates and the dual variable updates with relaxation parameter α ∈ (0, 2) according to [EB92]. Notice from (11b) and (11d) that the dual variable corresponding to the constraint x = x̃ satisfies λ^k = 0 for all k.

Solution of the equality constrained QP
The minimization problem in (11a) has the form of an equality-constrained quadratic program:

minimize ½x̃ᵀPx̃ + qᵀx̃ + (σ/2)‖x̃ − x^k‖₂² + (ρ/2)‖s̃ − s^k + y^k/ρ‖₂²
subject to Ax̃ + s̃ = b.   (12)

The solution of (12) can be obtained by solving a single linear system. The corresponding Lagrangian is given by:

L(x̃, s̃, ν) = ½x̃ᵀPx̃ + qᵀx̃ + (σ/2)‖x̃ − x^k‖₂² + (ρ/2)‖s̃ − s^k + y^k/ρ‖₂² + νᵀ(Ax̃ + s̃ − b),   (13)

where the Lagrange multiplier ν ∈ R^m accounts for the equality constraint Ax̃ + s̃ = b. Thus, the KKT optimality conditions for this equality-constrained QP are given by:

Px̃^{k+1} + q + σ(x̃^{k+1} − x^k) + Aᵀν^{k+1} = 0,   (14)
ρ(s̃^{k+1} − s^k) + y^k + ν^{k+1} = 0,   (15)
Ax̃^{k+1} + s̃^{k+1} = b.   (16)

Elimination of s̃^{k+1} from these equations leads to the linear system:

[ P + σI      Aᵀ ] [ x̃^{k+1} ]   [ σx^k − q        ]
[ A      −(1/ρ)I ] [ ν^{k+1} ] = [ b − s^k + y^k/ρ ],   (17)

with s̃^{k+1} then recovered from (15) as s̃^{k+1} = s^k − (y^k + ν^{k+1})/ρ. Note that the introduction of the dummy variable x̃ led to the term σI in the upper-left corner of the coefficient matrix in (17). Consequently, the coefficient matrix in (17) is always quasidefinite [Van95], i.e. it always has a positive definite upper-left block and a negative definite lower-right block, and is therefore full rank even when P = 0 or A is rank deficient. Following [Van95], the left-hand side of (17) always has a well-defined LDLᵀ factorization with a diagonal D.
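The structure of this solve step can be illustrated by assembling the quasidefinite matrix directly. This is a dense Python sketch for illustration only (the actual solver works with sparse matrices and a cached LDLᵀ factorization, and sign conventions for ν may differ between references):

```python
import numpy as np

def build_kkt(P, A, sigma, rho):
    """Assemble the quasidefinite matrix [[P + sigma*I, A^T], [A, -(1/rho)*I]].
    It is full rank even for P = 0 or rank-deficient A, so a single
    factorization can be cached and reused with changing right-hand sides."""
    n, m = P.shape[0], A.shape[0]
    return np.block([[P + sigma * np.eye(n), A.T],
                     [A, -(1.0 / rho) * np.eye(m)]])

P = np.array([[4.0, 1.0], [1.0, 2.0]])
A = np.array([[1.0, 1.0]])
q = np.array([1.0, 1.0])
b = np.array([1.0])
sigma, rho = 1e-6, 0.1
x_k, s_k, y_k = np.zeros(2), np.zeros(1), np.zeros(1)

K = build_kkt(P, A, sigma, rho)
rhs = np.concatenate([sigma * x_k - q, b - s_k + y_k / rho])
sol = np.linalg.solve(K, rhs)
x_tilde, nu = sol[:2], sol[2:]
# verify the two block equations of the KKT system
print(np.allclose((P + sigma * np.eye(2)) @ x_tilde + A.T @ nu, sigma * x_k - q))
print(np.allclose(A @ x_tilde - nu / rho, b - s_k + y_k / rho))
```

Both checks print True; a production implementation would replace `np.linalg.solve` with a sparse LDLᵀ factorization computed once.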

Projection step
As shown in [PB+14], the minimization problem in (11c) can be interpreted as the ρ-weighted proximal operator of the indicator function I_K. It is therefore equivalent to the Euclidean projection Π_K onto the cone K, i.e.

s^{k+1} = Π_K( α s̃^{k+1} + (1 − α) s^k + y^k/ρ ).   (19)
If K is a Cartesian product of cones as in (4) this projection is equivalent to the projection of the relevant components of the argument of Π K (·) onto each cone K i . A problem with N SDP constraints therefore requires N projections, but, since each of these operates on an independent segment of the input vector, they can be performed in parallel.
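The per-cone projections and their parallel evaluation can be sketched as follows (an illustrative Python sketch; the solver's Julia implementation and its parallelism model differ, and thread pools here merely stand in for the idea):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def proj_zero(v):
    """Projection onto the zero cone {0}^n."""
    return np.zeros_like(v)

def proj_nonneg(v):
    """Projection onto the nonnegative orthant R^n_+."""
    return np.maximum(v, 0.0)

def proj_psd(V):
    """Projection onto the PSD cone: clip negative eigenvalues."""
    w, Q = np.linalg.eigh(V)
    return Q @ np.diag(np.maximum(w, 0.0)) @ Q.T

# Independent segments of the input vector can be projected in parallel:
segments = [np.array([1.0, -2.0]), np.array([-3.0, 4.0])]
with ThreadPoolExecutor() as ex:
    projected = list(ex.map(proj_nonneg, segments))
print(projected)  # [array([1., 0.]), array([0., 4.])]
```

The PSD projection dominates the cost because of its eigendecomposition, which motivates the chordal decomposition techniques of the later sections.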

Algorithm steps
The calculations performed at each iteration are summarized in Algorithm 1. Observe that the coefficient matrix of the linear system in (17) is constant, so that one can precompute and cache the LDL factorization and efficiently evaluate line 2 with changing right hand sides. A refactorisation is only necessary if the step size parameters ρ and σ are updated.
Algorithm 1: ADMM iteration
Input: initial values x^0, s^0, y^0, problem data P, q, A, b, and parameters σ > 0, ρ > 0, α ∈ (0, 2)
1 do
2   (x̃^{k+1}, ν^{k+1}) ← solve linear system (17);
3   s̃^{k+1} ← s^k − (y^k + ν^{k+1})/ρ;
4   x^{k+1} ← α x̃^{k+1} + (1 − α) x^k;
5   s^{k+1} ← Π_K( α s̃^{k+1} + (1 − α) s^k + y^k/ρ );
6   y^{k+1} ← y^k + ρ( α s̃^{k+1} + (1 − α) s^k − s^{k+1} );
7 while termination criteria not satisfied;

Lines 3, 4, and 6 are computationally inexpensive since they involve only vector addition and scalar-vector multiplication. The projection in line 5 is crucial to the performance of the algorithm, depending on the particular cones employed in the model: projections onto the zero cone or the nonnegative orthant are inexpensive, while a projection onto the positive semidefinite cone of dimension N involves an eigenvalue decomposition. Since direct methods for eigen-decompositions have a complexity of approximately O(N³), line 5 becomes the most computationally expensive operation of the algorithm for large SDPs, and improving the efficiency of this step will be the objective of much of Sections 4 and 5.
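The whole iteration can be exercised end-to-end on a toy QP. The following Python sketch is illustrative only: COSMO itself is implemented in Julia and caches a sparse LDLᵀ factorization rather than forming a dense inverse, and the sign conventions below were chosen to be self-consistent and may differ from the paper's numbered equations:

```python
import numpy as np

def admm_sketch(P, q, A, b, iters=2000, sigma=1e-6, rho=0.1, alpha=1.6):
    """ADMM sketch for min 0.5 x'Px + q'x  s.t.  Ax + s = b, s in R^m_+.
    The constant KKT matrix is formed once; the dense inverse below
    stands in for a cached factorization, purely for clarity."""
    n, m = P.shape[0], A.shape[0]
    K = np.block([[P + sigma * np.eye(n), A.T],
                  [A, -(1.0 / rho) * np.eye(m)]])
    K_inv = np.linalg.inv(K)
    x, s, y = np.zeros(n), np.zeros(m), np.zeros(m)
    for _ in range(iters):
        sol = K_inv @ np.concatenate([sigma * x - q, b - s + y / rho])
        x_t, nu = sol[:n], sol[n:]
        s_t = s - (y + nu) / rho              # recover the slack iterate
        x = alpha * x_t + (1 - alpha) * x     # relaxed x-update
        w = alpha * s_t + (1 - alpha) * s + y / rho
        s = np.maximum(w, 0.0)                # projection onto R^m_+
        y = rho * (w - s)                     # dual update stays in the polar cone
    return x, s, y

# min x^2 - 4x  s.t.  x <= 1  has solution x* = 1 (constraint active)
x, s, y = admm_sketch(np.array([[2.0]]), np.array([-4.0]),
                      np.array([[1.0]]), np.array([1.0]))
print(np.round(x, 4))  # ≈ [1.]
```

On this strongly convex 1-D problem the iteration contracts linearly, so a few hundred iterations already reach modest accuracy, which matches the typical first-order-method behaviour described above.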

Algorithm convergence
For feasible problems, Algorithm 1 produces a sequence of iterates (x k , s k , y k ) that converges to a limit satisfying the optimality conditions in (6) as k → ∞. The authors in [SBG + 18] show convergence of Algorithm 1 by applying the Douglas-Rachford splitting to a problem reformulation.
In the Douglas-Rachford formulation, lines 5 and 6 become projections onto K and K° respectively, so that the conic constraints in (6c) always hold. Furthermore, convergence of the residual iterates in (6a)-(6b) can be concluded from the convergence of the splitting variables, which generally holds for Douglas-Rachford splitting [BC11].
For infeasible problems, [BGSB17] showed that Algorithm 1 leads to convergence of the successive differences between iterates,

δx^k := x^k − x^{k−1},  δy^k := y^k − y^{k−1}.

For primal infeasible problems δy = lim_{k→∞} δy^k will satisfy condition (8), whereas for dual infeasible problems δx = lim_{k→∞} δx^k is a certificate of (7).

Scaling the problem data
The rate of convergence of ADMM and other first-order methods depends in practice on the scaling of the problem data; see [GB14]. Particularly for badly conditioned problems, this suggests a preprocessing step where the problem data is scaled in order to improve convergence. For certain problem classes an optimal scaling has been found, see [GB14,GB15,Gre97]. However, the computation of the optimal scaling is often more complicated than solving the original problem.
Consequently, most algorithms rely on heuristic methods such as matrix equilibration.
We scale the equality constraints by diagonal positive definite matrices D ∈ R^{n×n} and U ∈ R^{m×m}. The scaled form of (3) is given by:

minimize ½x̂ᵀP̂x̂ + q̂ᵀx̂
subject to Âx̂ + ŝ = b̂, ŝ ∈ K̂,   (22)

with scaled problem data P̂ = DPD, q̂ = Dq, Â = UAD, b̂ = Ub, and the scaled convex cone K̂ := UK = {Us ∈ R^m | s ∈ K}. After solving (22) the original solution is obtained by reversing the scaling:

x = Dx̂, s = U^{−1}ŝ,   (23)
y = Uŷ.   (24)

One heuristic strategy that has been shown to work well in practice is to choose the scaling matrices D and U to equilibrate, i.e. reduce the condition number of, the problem data. The Ruiz equilibration technique described in [Rui01] iteratively scales the rows and columns of a matrix to have an infinity norm of 1 and converges linearly. We apply the modified Ruiz algorithm shown in Algorithm 2 to reduce the condition number of the symmetric matrix

R = [ P  Aᵀ ]
    [ A  0  ],

which represents the problem data. Since R is symmetric it suffices to consider the columns R_{c,i} of R. At each iteration the scaling routine calculates the norm of each column. For the columns with norms higher than the tolerance τ, the scaling vector c is updated with the inverse square root of the norm. If the norm is below the tolerance, the corresponding column is scaled by 1.
Algorithm 2: Modified Ruiz equilibration

Since the matrix U scales the (possibly composite) cone constraint, the scaling must ensure that if s ∈ K then U^{−1}s ∈ K. Let K be a Cartesian product of N cones as in (4) and partition U into blocks

U = diag(U_1, . . . , U_N),

with block U_i ∈ R^{m_i×m_i} scaling the constraint corresponding to K_i. For each cone K_i ⊆ R^{m_i} that requires a scalar or symmetric scaling, e.g. a second-order cone or a positive semidefinite cone, the corresponding block U_i is replaced with

U_i ← u_i I_{m_i},  u_i = tr(U_i)/m_i,

where the mean value u_i of the diagonal entries of the original block in U was chosen as a heuristic scaling factor.
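The core Ruiz loop is short. A minimal Python sketch, assuming the target is unit infinity norm for every column of the symmetric matrix R (this is an illustration of the idea, not the solver's modified routine):

```python
import numpy as np

def ruiz_equilibrate(R, iters=10, tau=1e-8):
    """Ruiz equilibration sketch for a symmetric matrix R: repeatedly
    scale rows and columns by the inverse square root of the column
    infinity norm; columns with norm below tau are left unscaled."""
    d = np.ones(R.shape[0])                  # accumulated scaling vector
    for _ in range(iters):
        norms = np.max(np.abs(R), axis=0)
        c = np.where(norms > tau, 1.0 / np.sqrt(norms), 1.0)
        R = c[:, None] * R * c[None, :]      # symmetric two-sided scaling
        d *= c
    return R, d

# Badly scaled problem data assembled as R = [[P, A^T], [A, 0]]
P = np.array([[100.0, 0.0], [0.0, 0.01]])
A = np.array([[1.0, 1000.0]])
R = np.block([[P, A.T], [A, np.zeros((1, 1))]])
R_scaled, d = ruiz_equilibrate(R)
print(np.max(np.abs(R_scaled), axis=0))  # [1. 1. 1.]
```

After equilibration every column of the scaled matrix has unit infinity norm, which in practice significantly reduces the condition number of badly scaled data.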

Termination criteria
The termination criteria discussed in this section are based on the unscaled problem data and iterates. Thus, before checking for termination the solver first reverses the scaling according to equations (23)-(24). To measure the progress of the algorithm, we define the primal and dual residuals of the problem as:

r_p^k := Ax^k + s^k − b,  r_d^k := Px^k + q − Aᵀy^k.   (28)

According to Section 3.3 of [BPC+11], a valid termination criterion is that the norms of the residual iterates in (28) are small. Our algorithm terminates if the residual norms are below the sum of an absolute and a relative tolerance term:

‖r_p^k‖_∞ ≤ ε_abs + ε_rel max{‖Ax^k‖_∞, ‖s^k‖_∞, ‖b‖_∞},
‖r_d^k‖_∞ ≤ ε_abs + ε_rel max{‖Px^k‖_∞, ‖q‖_∞, ‖Aᵀy^k‖_∞},   (29)

where ε_abs and ε_rel are user-defined tolerances.
Following [BGSB17], the algorithm determines whether the one-step differences δx^k and δy^k of the primal and dual variables fulfil the normalized infeasibility conditions (7)-(8) up to tolerances ε_p,inf and ε_d,inf. The solver returns a primal infeasibility certificate if

‖Aᵀδy^k‖_∞ ≤ ε_p,inf ‖δy^k‖_∞  and  σ_K̃(δy^k) ≤ ε_p,inf ‖δy^k‖_∞

hold, and a dual infeasibility certificate if

‖Pδx^k‖_∞ ≤ ε_d,inf ‖δx^k‖_∞,  qᵀδx^k ≤ −ε_d,inf ‖δx^k‖_∞,  and  Aδx^k + v ∈ K̃^∞ for some v with ‖v‖_∞ ≤ ε_d,inf ‖δx^k‖_∞

hold.

Chordal Decomposition
As noted in Section 3.3, for large SDPs the eigen-decomposition in the projection step (19) is the principal performance bottleneck for the algorithm. However, since large-scale SDPs often exhibit a certain structure or sparsity pattern, a sensible strategy is to exploit any such structure to alleviate this bottleneck. If the aggregated sparsity pattern is chordal, Agler's [AHMR88] and Grone's [GJSW84] theorems can be used to decompose a large PSD constraint into a collection of smaller PSD constraints and additional coupling constraints. The projection step applied to the set of smaller PSD constraints is usually significantly faster than when applied to the original constraint. Since the projections are independent of each other, further performance improvement can be achieved by carrying them out in parallel. Our approach is to apply chordal decomposition to a standard form SDP in the form (3) to produce a decomposed problem, and then transform the resulting decomposed problem back to a problem in the form (3) but with more variables and a collection of smaller PSD cone constraints. This process allows us to apply chordal decomposition as a preprocessing step before the problem is handed to the solver. As discussed in Section 4.2, a chordal sparsity pattern can be imposed on any PSD constraint.

Graph preliminaries
In the following we define graph-related concepts that are useful to describe the sparsity structure of a problem. A good overview of this topic is given in the survey paper [VA+15]. Consider the undirected graph G(V, E) with vertex set V = {1, . . . , n} and edge set E ⊆ V × V. If all vertices are pairwise adjacent, i.e. E = {{v, u} | v, u ∈ V, v ≠ u}, the graph is called complete. We follow the convention of [VA+15] by defining a clique as a subset of vertices C ⊆ V that induces a maximal complete subgraph of G. The number of vertices in a clique is given by its cardinality |C|. A cycle is a path of edges (i.e. a sequence of distinct edges) joining a sequence of vertices in which only the first and last vertices are repeated.
The following decomposition theory relies on a subset of graphs that exhibit the important property of chordality. A graph G is chordal (or triangulated) if every cycle of length greater than three has a chord, which is an edge between nonconsecutive vertices of the cycle. A non-chordal graph can always be made chordal by adding extra edges.
An undirected graph with n vertices can be used to represent the sparsity pattern of a symmetric matrix S ∈ S^n. Every nonzero entry S_ij ≠ 0 with i ≠ j in the lower-triangular part of the matrix introduces an edge (i, j) ∈ E. An example of a sparsity pattern and the associated graph is shown in Figure 1. For a given sparsity pattern G(V, E), we define the following symmetric sparse matrix cones:

S^n(E, 0) := {S ∈ S^n | S_ij = S_ji = 0 if i ≠ j and (i, j) ∉ E},   (32)
S^n_+(E, 0) := {S ∈ S^n(E, 0) | S ⪰ 0}.

Given the definition in (32) and a matrix S ∈ S^n(E, 0), note that the diagonal entries S_ii and the off-diagonal entries S_ij with (i, j) ∈ E may be zero or nonzero. Moreover, we define the cone of positive semidefinite completable matrices as

S^n_+(E, ?) := {Y ∈ S^n(E, 0) | ∃ Ŷ ∈ S^n_+ with Ŷ_ij = Y_ij for all (i, j) ∈ E and for i = j}.   (33)

For a matrix Y ∈ S^n_+(E, ?) we can find a positive semidefinite completion by choosing appropriate values for all entries (i, j) ∉ E. An algorithm to find this completion is described in [VA+15]. An important structure that conveys a lot of information about the nonzero blocks of a matrix, or equivalently the cliques of a chordal graph, is the clique tree (or junction tree). For a chordal graph G, let B = {C_1, . . . , C_p} be the set of cliques. The clique tree T(B, E_T) is formed by taking the cliques as vertices and by choosing edges E_T ⊆ B × B such that the tree satisfies the running-intersection property: Definition 4.1 (Running intersection property). For each pair of cliques C_i, C_j ∈ B, the intersection C_i ∩ C_j is contained in all the cliques on the path in the clique tree connecting C_i and C_j.
This property is also referred to as the clique-intersection property in [NFF + 03] and the induced subtree property in [VA + 15]. For a given chordal graph, a clique tree can be computed using the algorithm described in [PS90]. The clique tree for an example sparsity pattern is shown in Figure 1(c).
In a post-ordered clique tree the descendants of a node are given consecutive numbers, and a suitable post-ordering can be found via depth-first search. For a clique C we refer to the first clique encountered on the path to the root as its parent clique C par . Conversely C is called the child of C par . If two cliques have the same parent clique we refer to them as siblings.
We define the function par : 2^V → 2^V and the multivalued function ch : 2^V ⇒ 2^V such that par(C_ℓ) and ch(C_ℓ) return the parent clique and the set of child cliques of C_ℓ, respectively, where 2^V is the power set (set of all subsets) of V.
Note that each clique in Figure 1(c) has been partitioned into two sets. The upper row represents the separator η_ℓ = C_ℓ ∩ par(C_ℓ), i.e. all clique elements that are also contained in the parent clique. We call the set of remaining vertices shown in the lower row the clique residual or supernode ν_ℓ = C_ℓ \ η_ℓ. Keeping track of which vertices in a clique belong to the supernode and the separator is useful, as this information is needed to perform a positive semidefinite completion. Following the authors in [HS12], we say that two cliques C_i, C_j form a separating pair P_ij = {C_i, C_j} if their intersection is non-empty and if every path in the underlying graph G from a vertex in C_i \ C_j to a vertex in C_j \ C_i contains a vertex of the intersection C_i ∩ C_j.
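The separator/supernode split follows directly from the parent pointers of a post-ordered clique tree. The cliques below form a small hypothetical example (not the pattern of Figure 1), sketched in Python for illustration:

```python
# Cliques of a small chordal graph and parent pointers of a post-ordered
# clique tree; the root clique has parent None.
cliques = {1: {1, 2}, 2: {2, 3, 4}, 3: {4, 5}}
parent = {1: 2, 3: 2, 2: None}

def separators_and_supernodes(cliques, parent):
    """Split each clique C into the separator eta = C ∩ par(C) and the
    supernode nu = C minus eta; the root's separator is empty."""
    eta, nu = {}, {}
    for c, verts in cliques.items():
        p = parent[c]
        eta[c] = verts & cliques[p] if p is not None else set()
        nu[c] = verts - eta[c]
    return eta, nu

eta, nu = separators_and_supernodes(cliques, parent)
print(eta)  # {1: {2}, 2: set(), 3: {4}}
print(nu)   # {1: {1}, 2: {2, 3, 4}, 3: {5}}
```

Every vertex appears in exactly one supernode, which is what makes the supernode/separator bookkeeping suitable for driving a positive semidefinite completion.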

Agler's and Grone's theorems
To explain how the concepts in the previous section can be used to decompose a positive semidefinite constraint, we first consider an SDP in standard primal form:

minimize ⟨C, X⟩
subject to ⟨A_k, X⟩ = b_k, k = 1, . . . , m,
X ∈ S^n_+,   (35)

with variable X and coefficient matrices A_k, C ∈ S^n. The corresponding dual problem is

maximize ⟨b, y⟩
subject to Σ_{k=1}^m y_k A_k + S = C, S ∈ S^n_+,   (36)

with dual variable y ∈ R^m and slack variable S. Let us assume that the problem matrices in (35) and (36) each have their own sparsity pattern, A_k ∈ S^n(E_{A_k}, 0) and C ∈ S^n(E_C, 0). The aggregate sparsity of the problem is given by the graph G(V, E) with edge set

E = E_{A_1} ∪ · · · ∪ E_{A_m} ∪ E_C.

In general G(V, E) will not be chordal, but a chordal extension can be found by adding edges to the graph. We denote the extended graph as G(V, Ē), where E ⊆ Ē. Finding the minimum number of additional edges required to make the graph chordal is an NP-complete problem [Yan81]. Consider a 0-1 matrix M with edge set E. A commonly used heuristic method to compute the chordal extension is to perform a symbolic Cholesky factorisation of M [FKMN01], with the edge set of the Cholesky factor then providing the chordal extension Ē. To simplify the notation in the remainder of the article, we henceforward assume that E represents a chordal graph or has been appropriately extended.
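The symbolic-factorisation heuristic can be sketched as an "elimination game": processing vertices in a fixed order and connecting each vertex's higher-ordered neighbours reproduces the fill-in of a symbolic Cholesky factor. A minimal Python sketch, assuming 0-based vertices and a given elimination order:

```python
def chordal_extension(n, edges, order=None):
    """Elimination-game chordal extension: for each vertex v in order,
    pairwise connect v's neighbours that come later in the order.
    The added edges (the fill-in) make the graph chordal."""
    order = order if order is not None else list(range(n))
    pos = {v: i for i, v in enumerate(order)}
    adj = {v: set() for v in range(n)}
    for (i, j) in edges:
        adj[i].add(j)
        adj[j].add(i)
    fill = set()
    for v in order:
        higher = {u for u in adj[v] if pos[u] > pos[v]}
        for a in higher:
            for b in higher:
                if a < b and b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill.add((a, b))
    return fill

# A 4-cycle is not chordal; one chord is added.
fill = chordal_extension(4, [(0, 1), (1, 2), (2, 3), (0, 3)])
print(fill)  # {(1, 3)}
```

The quality of the extension depends strongly on the elimination order; fill-reducing orderings such as AMD are used in practice to keep the added edges few.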
Using the aggregate sparsity of the problem we can modify the constraints on the matrix variables in (35) and (36) to be in the respective sparse positive semidefinite matrix spaces:

X ∈ S^n_+(E, ?),  S ∈ S^n_+(E, 0).   (37)

We further define the entry-selector matrices T_ℓ ∈ R^{|C_ℓ|×n} for a clique C_ℓ as

(T_ℓ)_{ij} := 1 if C_ℓ(i) = j, 0 otherwise,

where C_ℓ(i) is the i-th vertex of C_ℓ. We can express the constraints in (37) in terms of multiple smaller coupled constraints using Grone's and Agler's theorems.

Theorem 1 (Grone's theorem [GJSW84]). Let G(V, E) be a chordal graph with a set of maximal cliques {C_1, . . . , C_p}. Then X ∈ S^n_+(E, ?) if and only if

X_ℓ := T_ℓ X T_ℓᵀ ∈ S^{|C_ℓ|}_+  for all ℓ = 1, . . . , p.
Applying this theorem to (35) while restricting X to the positive semidefinite completable matrix cone as in (37) yields the decomposed problem:

minimize ⟨C, X⟩
subject to ⟨A_k, X⟩ = b_k, k = 1, . . . , m,
T_ℓ X T_ℓᵀ ∈ S^{|C_ℓ|}_+, ℓ = 1, . . . , p.   (40)

For the dual problem we utilise Agler's theorem, which is the dual to Theorem 1:

Theorem 2 (Agler's theorem [AHMR88]). Let G(V, E) be a chordal graph with a set of maximal cliques {C_1, . . . , C_p}. Then S ∈ S^n_+(E, 0) if and only if there exist matrices S_ℓ ∈ S^{|C_ℓ|}_+ for ℓ = 1, . . . , p such that

S = Σ_{ℓ=1}^p T_ℓᵀ S_ℓ T_ℓ.

Note that the matrices T_ℓ serve to extract the submatrix S_ℓ = T_ℓ S T_ℓᵀ, whose rows and columns correspond to the vertices of the clique C_ℓ. With this theorem, we transform the dual form SDP in (36) with the restriction on S in (37) to arrive at:

maximize ⟨b, y⟩
subject to Σ_{k=1}^m y_k A_k + Σ_{ℓ=1}^p T_ℓᵀ S_ℓ T_ℓ = C,
S_ℓ ∈ S^{|C_ℓ|}_+, ℓ = 1, . . . , p.   (42)
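The easy direction of Agler's theorem can be checked numerically: summing clique-lifted PSD blocks T_ℓᵀ S_ℓ T_ℓ always produces a PSD matrix with the prescribed sparsity. A small Python sketch with a hypothetical two-clique chain pattern:

```python
import numpy as np

def T_matrix(clique, n):
    """Entry-selector matrix for a clique given as a 0-based vertex list:
    (T)_{ij} = 1 if the i-th clique vertex is j, 0 otherwise."""
    T = np.zeros((len(clique), n))
    for i, v in enumerate(clique):
        T[i, v] = 1.0
    return T

n = 3
cliques = [[0, 1], [1, 2]]  # chordal pattern with edges (0,1) and (1,2)
blocks = [np.array([[2.0, 1.0], [1.0, 2.0]]),     # PSD block on clique 1
          np.array([[3.0, -1.0], [-1.0, 1.0]])]   # PSD block on clique 2

S = sum(T_matrix(c, n).T @ B @ T_matrix(c, n) for c, B in zip(cliques, blocks))
print(S)
print(np.min(np.linalg.eigvalsh(S)) >= -1e-12)  # True: S is PSD
```

Note how the overlapping vertex 1 receives contributions from both blocks; the nontrivial direction of the theorem is that, on a chordal pattern, every sparse PSD matrix admits such a splitting.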

Returning a decomposed problem into standard SDP form
After the decomposition results of Section 4.2 have been applied, the SDP problem (42) has to be transformed back into the standard form (3). For the undecomposed problem in (36), this is achieved by first relabeling y as x and then choosing (with a slight abuse of notation) P = 0, q = −b, A = [svec(A_1), . . . , svec(A_m)], and b = svec(C), so that the slack s = svec(S) is constrained to the vectorized positive semidefinite cone. To transform the decomposed dual problem (42), we will make use of the fact that the decision variable S ∈ S^n and all the submatrices S_ℓ are symmetric, and consider instead s_ℓ = svec(S_ℓ).
The main challenge is to keep track of the overlapping entries in the individual blocks and ensure they sum to the original entry in S. Different possible transformations achieving this are described in [KKMY11].
All the necessary information about the overlapping entries is stored in the clique tree T(B, E_T) that represents the sparsity pattern of S. We assume that the clique tree is post-ordered with cliques C_1, . . . , C_p. Define a vector θ ∈ R^{n_o} to represent slack variables for the overlapping entries, where n_o is the total number of overlapping entries in the upper triangle of the sparsity pattern. The s vector of the decomposed problem is created by stacking the vectorized subblocks s_ℓ = svec(S_ℓ) according to the post-ordering of the clique tree. Define ω(i, j, ℓ) as the index of s corresponding to the (i, j)-th element of block S_ℓ. The equality constraint in problem (42) can then be represented in the equivalent standard form (3) using modified constraint data Ã_k and B̃ together with a linking matrix L. The matrices Ã_k and B̃ take on the values of A_k and B in the locations corresponding to the elements of the submatrices S_ℓ. If a matrix entry of clique C_ℓ in Ã_{k,ij} and B̃_{ij} is overlapped by an entry of the parent clique, i.e. (i, j) is included in sep(C_ℓ), it is set to zero. Each column of the linking matrix L ∈ R^{m_d×n_o} links one overlapping entry in the clique tree. L is generated by first collecting all the matrix indices (i, j) of the separators sep(C_ℓ) in the clique tree. Then L is constructed column-by-column, each column representing one overlapping entry. The column vector l_c is equal to 1 in the row r corresponding to the (i, j)-th entry of S_ℓ and −1 in the row corresponding to the same entry of the parent block S_k, where C_k = par(C_ℓ). Thus, each element of l_c is defined by:

(l_c)_r := 1 if r = ω(i, j, ℓ), −1 if r = ω(i, j, k), 0 otherwise.

As an example, consider a problem with m = 1 and p = 3 and a simple sparsity pattern whose clique tree has three cliques. Using this transformation the constraint can be represented by three constraints on the submatrices S_1, S_2 and S_3 and six overlap variables θ_1, . . . , θ_6. Notice how the overlap variables drop out if the original matrix S is reassembled according to Theorem 2 by adding the entries of each subblock.

Clique Merging
Given an initial decomposition with (chordally completed) edge set E and a set of cliques {C 1 , . . . , C p }, we are free to re-merge any number of cliques back into larger blocks. This is equivalent to treating structural zeros in the problem as numerical zeros, leading to additional edges in the graph.
Looking at the decomposed problem in (40) and (42), the effects of merging two cliques C i and C j are twofold: 1. We replace two positive semidefinite matrix constraints of dimensions |C i | and |C j | with one constraint on a larger clique with dimension |C i ∪ C j |, where the increase in dimension depends on the size of the overlap.
2. We remove consistency constraints for the overlapping entries between C i and C j , thus reducing the size of the linear system of equality constraints.
When merging cliques these two factors have to be balanced, and the optimal merging strategy depends on the particular SDP solution algorithm employed. In [NFF + 03] and [SAV14] a clique tree is used to search for favourable merge candidates; we will refer to these two approaches as SparseCoLO and the parent-child strategy, respectively, in the following sections. We then propose a new merging method in Section 5.2 whose performance is superior to both when used with ADMM. Given a merging strategy, Algorithm 3 describes how to merge a set of cliques within the set B and update the edge set E accordingly.
Algorithm 3: Function mergeCliques(B, E, B_m). Input: A set of cliques B with edge set E, and a subset of cliques B_m = {C_m,1, C_m,2, . . . , C_m,r} ⊆ B to be merged. Output: A reduced set of cliques B̂ with edge set Ê and the merged clique C_m.
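A minimal sketch of this merge-and-rewire step, with cliques represented as frozensets and the edge set as a set of unordered pairs (a hedged reconstruction, not COSMO's internals):

```python
def merge_cliques(B, E, merge_set):
    """Replace the cliques in `merge_set` by their union C_m and rewire every
    edge that touched a merged clique to point at C_m instead.
    B: list of cliques (frozensets); E: set of frozenset({Ci, Cj}) edges."""
    C_m = frozenset().union(*merge_set)
    B_hat = [C for C in B if C not in merge_set] + [C_m]
    E_hat = set()
    for e in E:
        a, b = tuple(e)
        a2 = C_m if a in merge_set else a
        b2 = C_m if b in merge_set else b
        if a2 != b2:                      # drop edges internal to the merge
            E_hat.add(frozenset((a2, b2)))
    return B_hat, E_hat, C_m
```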

Existing clique tree-based strategies
The parent-child strategy described in [SAV14] traverses the clique tree in a depth-first order and merges a clique C_ℓ with its parent clique C_par(ℓ) := par(C_ℓ) if at least one of two conditions is met; these conditions, governed by heuristic parameters t_fill and t_size, keep the amount of extra fill-in and the supernode cardinalities below the specified thresholds.

The SparseCoLO strategy described in [NFF + 03] and [FFN06] considers parent-child as well as sibling relationships. Given a parameter σ > 0, two cliques C_i, C_j are merged if the merge criterion (49) holds. The approach traverses the clique tree depth-first and, for each clique C_ℓ and each pair of its children C_i, C_j for which (49) holds, either
• C_i and C_j are merged, or
• if (C_i ∪ C_j) ⊇ C_ℓ, then C_i, C_j, and C_ℓ are merged.

We note that the open-source implementation of the SparseCoLO algorithm described in [NFF + 03] follows the algorithm outlined here, but also employs a few additional heuristics.
An advantage of these two approaches is that the clique tree can be computed easily and the conditions are inexpensive to evaluate. However, a disadvantage is that choosing parameters that work well on a variety of problems and solver algorithms is difficult. Secondly, clique trees are not unique, and in some cases it is beneficial to merge cliques that are not directly related on the particular clique tree that was computed. To see this, consider a chordal graph G(V, E) consisting of three complete subgraphs V_a, V_b, V_c and some additional vertices {1, 2, m_a + 1}. The graph is connected as shown in Figure 2(a), where the complete subgraphs are represented as nodes V_a, V_b, V_c. A corresponding clique tree is shown in Figure 2(b). By choosing the cardinality |V_c|, the overlap between the cliques C_1 = {1, 2} ∪ V_c and C_3 = {m_a + 1} ∪ V_c can be made arbitrarily large, while |V_a| and |V_b| can be chosen so that any other merge is disadvantageous. However, neither the parent-child strategy nor SparseCoLO would consider merging C_1 and C_3, since they are in a "nephew-uncle" relationship. In fact, for the particular sparsity graph in Figure 2(a), eight different clique trees can be computed, and only in half of them do the cliques C_1 and C_3 appear in a parent-child relationship. Therefore, a merge strategy that only considers parent-child relationships would miss this favourable merge in half the cases.
Figure 2: Sparsity graph (a) that can lead to clique tree (b) with an advantageous "nephew-uncle" merge between C_1 and C_3.

A new clique graph-based strategy
To overcome the limitations of existing strategies we propose a merging strategy based on the reduced clique graph G(B, ξ), which is defined as the union of all possible clique trees of G; see [HS12] for a detailed discussion. The set of vertices of this graph is given by the maximal cliques of the sparsity pattern. We then create the edge set ξ by introducing edges between pairs of cliques (C i , C j ) if they form a separating pair P ij . We remark that ξ is a subset of the edges present in the clique intersection graph which is obtained by introducing edges for every two cliques that intersect. However, the reduced clique graph has the property that it remains a valid reduced clique graph of the altered sparsity pattern after performing a permissible merge between two cliques. This is not always the case for the clique intersection graph. For convenience, we will refer to the reduced clique graph simply as the clique graph in the following sections. Based on the permissibility condition for edge reduction in [HS12] we define a permissibility condition for clique merges: Definition 5.1 (Permissible merge). Given a reduced clique graph G(B, ξ), a merge between two cliques (C i , C j ) ∈ ξ is permissible if for every common neighbour C k it holds that C i ∩ C k = C j ∩ C k .
We further define a monotone edge weighting function e : 2^V × 2^V → R that assigns a weight w_ij to each edge (C_i, C_j) ∈ ξ. This function is used to estimate the per-iteration computational savings of merging a pair of cliques, depending on the targeted algorithm and hardware. It evaluates to a positive number if a merge reduces the per-iteration time and to a negative number otherwise. For a first-order method, whose per-iteration cost is dominated by an eigenvalue factorisation with complexity O(|C|^3), a simple choice would be

e(C_i, C_j) = |C_i|^3 + |C_j|^3 − |C_i ∪ C_j|^3. (50)

More sophisticated weighting functions can be determined empirically; see Section 7.5. After a weight has been computed for each edge (C_i, C_j) in the clique graph, we merge cliques as outlined in Algorithm 4 (Input: a weighted clique graph G(B, ξ); Output: a merged clique graph G(B̂, ξ̂)). This strategy considers the edges in order of their weights, starting with the permissible clique pair (C_i, C_j) with the highest weight w_ij. If the weight is positive, the two cliques are merged and the edge weights of all edges connected to the merged clique C_m = C_i ∪ C_j are updated. This process continues until no edges with positive weights remain.
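The greedy loop of Algorithm 4 with the nominal cubic weighting can be sketched as follows; the data structures (cliques as frozensets, edges as unordered pairs) are illustrative choices, not COSMO's internals:

```python
def is_permissible(Ci, Cj, edges):
    """Definition 5.1: every common neighbour Ck must satisfy Ci∩Ck == Cj∩Ck."""
    nbrs = lambda C: {next(iter(e - {C})) for e in edges if C in e}
    return all(Ci & Ck == Cj & Ck for Ck in nbrs(Ci) & nbrs(Cj))

def nominal_weight(Ci, Cj):
    # Estimated saving of replacing two O(|C|^3) eigendecompositions by one
    return len(Ci)**3 + len(Cj)**3 - len(Ci | Cj)**3

def merge_greedy(cliques, edges, weight=nominal_weight):
    """Repeatedly merge the permissible edge with the largest positive weight,
    rewiring (and implicitly re-weighting) edges that touch the merged clique."""
    cliques, edges = set(cliques), set(edges)
    while True:
        best, w_best = None, 0
        for e in edges:
            Ci, Cj = tuple(e)
            w = weight(Ci, Cj)
            if w > w_best and is_permissible(Ci, Cj, edges):
                best, w_best = (Ci, Cj), w
        if best is None:
            return cliques, edges
        Ci, Cj = best
        Cm = Ci | Cj
        cliques -= {Ci, Cj}
        cliques.add(Cm)
        rewired = set()
        for e in edges:
            if e == frozenset(best):
                continue
            a, b = tuple(e)
            a = Cm if a in (Ci, Cj) else a
            b = Cm if b in (Ci, Cj) else b
            if a != b:
                rewired.add(frozenset((a, b)))
        edges = rewired
```

On the example of Figure 3, the pair {3,6,7,8} and {6,7,8,9} has weight 4^3 + 4^3 − 5^3 = 3 > 0 and is merged; the remaining edge has negative weight, so the loop stops.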
The clique graph for the clique tree in Figure 1(c) is shown in Figure 3(a) with the edge weighting function in (50). Following Algorithm 4 the edge with the largest weight is considered first and the corresponding cliques are merged, i.e. {3, 6, 7, 8} and {6, 7, 8, 9}. Note that the merge is permissible because both cliques intersect with the only common neighbour {4, 5, 8} in the same way. The revised clique graph G(B,ξ) is shown in Figure 3(b). Since no edges with positive weights remain, the algorithm stops.
After Algorithm 4 has terminated, it is possible to recompute a valid clique tree from the revised clique graph. This can be done in two steps. First, the edge weights in G(B̂, ξ̂) are replaced with the cardinalities of the corresponding clique intersections, i.e. w_ij = |C_i ∩ C_j|. Second, a clique tree is then given by any maximum weight spanning tree of the newly weighted clique graph, computed e.g. with Kruskal's algorithm [Kru56].
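These two recovery steps can be sketched with a small union-find and a Kruskal-style sweep over the edges, sorted by intersection cardinality in descending order:

```python
def clique_tree_from_graph(cliques, edges):
    """Recover a clique tree from a merged clique graph: weight each edge by
    |Ci ∩ Cj| and return a maximum-weight spanning tree (Kruskal, descending)."""
    parent = {C: C for C in cliques}
    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for e in sorted(edges, key=lambda e: -len(set.intersection(*map(set, e)))):
        Ci, Cj = tuple(e)
        ri, rj = find(Ci), find(Cj)
        if ri != rj:                   # keep edge only if it joins two components
            parent[ri] = rj
            tree.append(e)
    return tree
```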
Our merging strategy has some clear advantages over competing approaches. Since the clique graph covers a wider range of merge candidates, it will consider merges that do not appear in clique tree-based approaches, such as the "nephew-uncle" example in Figure 2. Moreover, the freedom in choosing the edge weighting function allows one to make merge decisions based on the particular solver algorithm and hardware used. One downside is that this approach is more computationally expensive than the other methods. However, our numerical experiments show that the time spent finding the clique graph, merging the cliques, and recomputing the clique tree is only a very small fraction of the computational savings achieved relative to other merging methods when solving SDPs.
Open-Source Implementation
The second important part of the algorithm is the projection step onto a Cartesian product of convex sets. By default COSMO supports the zero cone, the nonnegative orthant, the hyperbox, the second-order cone, the PSD cone, the exponential cone and its dual, and the power cone and its dual. Our Julia implementation also allows the user to define their own convex cones and custom projection functions. To implement a custom cone K_c the user has to provide:
• a projection function that projects an input vector onto the cone
• a function that determines whether a vector is inside the dual cone K_c^*
• a function that determines whether a vector is inside the recession cone of −K_c
The latter two functions are required for our solver to implement the infeasibility checks described in Sections 2.1 and 3.4. An example that shows the advantages of defining a custom cone is provided in Section 7.2.
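As an illustration of the three required functions, here is a toy cone written as a Python class (the real interface is in Julia; the method names are purely illustrative), using the self-dual nonnegative orthant:

```python
import numpy as np

class NonnegativeCone:
    """Toy example of the three functions a user-defined cone must provide.
    For the orthant K = R^n_+ the dual cone K* equals K, and the recession
    cone of -K is -K itself."""
    def project(self, x):
        return np.maximum(x, 0.0)

    def in_dual(self, y, tol=1e-9):
        return bool(np.all(y >= -tol))

    def in_recession_of_neg(self, x, tol=1e-9):
        return bool(np.all(x <= tol))
```

The last two predicates are exactly what drives the primal and dual infeasibility certificates.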
The authors in [RGN19] used COSMO's algorithm with a specialized implementation of the projection function for positive semidefinite constraints. Their projection method uses approximate matrix eigendecompositions to significantly reduce the projection time while maintaining all the features of COSMO, such as scaling, infeasibility detection, and interfaces to linear system solvers. It was demonstrated that this can reduce solve time significantly, by up to 20x.
Given a problem with multiple constraints, the projection step (19) can be carried out in parallel. This is particularly advantageous when used in combination with chordal decomposition, which typically yields a large number of smaller PSD constraints. For the eigendecomposition involved in the projection step of a PSD constraint, the LAPACK [ABB + 99] function syevr is used, which can also utilise multiple threads. Consequently, this leads to two-level parallelism in the computation, i.e. on the higher level the projection functions are carried out in parallel, and each projection function independently calls syevr. Determining the optimal allocation of CPU cores to each of these tasks depends on the number of PSD constraints and their dimensions, and is a difficult problem. For the problem sets considered in Section 7 we achieved the best performance by running syevr single-threaded and using all physical cores to carry out the projection functions in parallel.
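The two-level scheme (parallel block projections, each calling a single-threaded eigendecomposition) can be sketched as follows; this is an illustrative Python analogue, not the solver's Julia implementation:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def project_psd(S):
    """Project one symmetric block onto the PSD cone by clipping negative
    eigenvalues (the operation syevr performs inside the solver)."""
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0.0, None)) @ V.T

def project_blocks(blocks, workers=4):
    # Higher-level parallelism: one single-threaded projection per clique block
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(project_psd, blocks))
```

Threads suffice here because NumPy releases the GIL inside `eigh`, mirroring the observation that single-threaded syevr calls running in parallel worked best.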
Moreover, Julia's type abstraction features enable the solver to solve problems in arbitrary floating-point precision. This allows, for example, reducing the memory usage of the solver for very large problems by switching to the 32-bit single-precision floating-point format.

Numerical Results
This section presents benchmark results of COSMO against the interior-point solver MOSEK v9.0 and the accelerated first-order ADMM solver SCS v2.1.1. When applied to a quadratic program, COSMO's main algorithm becomes very similar to the first-order QP solver OSQP. To test the performance penalty of using a pure Julia implementation against a similar C implementation we also compare our solver against OSQP v0.6.0 on QP problems.
We selected a number of problem sets to test different aspects of COSMO. The advantage of supporting a quadratic cost function in a conic solver is shown by solving QPs from the Maros and Mészáros QP repository [MM99] in Section 7.1 and SDPs with quadratic objectives in the form of nearest correlation matrix problems in Section 7.3.
To highlight the advantages of implementing custom constraints, we consider a problem set with doubly-stochastic matrices in Section 7.2. We then show how chordal decomposition can speed up the solver for SDPs that exhibit a block-arrow sparsity pattern in Section 7.4.
The performance of chordal decomposition is further explored in Section 7.5 by solving large structured problems from the SDPLib benchmark set [Bor99] as well as some non-chordal SDPs generated with sparsity patterns from the SuiteSparse Matrix Collections [DH11]. Using the same problems we additionally evaluate the performance of different clique merging strategies.
All the experiments were carried out on a computing node of the University of Oxford ARC-HTC cluster with 16 logical Intel Xeon E5-2560 cores and 64 GB of DDR3 RAM. All the problems were run using Julia v1.3 and the problems were passed to the solvers via MathOptInterface [LDGL20].
To evaluate the accuracy of the returned solution we compute three errors adapted from the DIMACS error measures for SDPs [JPA00], in which A_a, b_a and y_a denote the rows of A, b and y that correspond to active constraints. This ensures meaningful values even if the problem contains inactive constraints with very large, or possibly infinite, values b_i. The maximum of the three errors for each problem and solver is reported in the results below.
We configured COSMO, MOSEK, SCS and OSQP to achieve an accuracy of ϵ = 10^{-3}. We set the maximum allowable solve time to 5 min for the Maros and Mészáros problems and to 30 min for the other problem sets. All other parameters were left at the solvers' standard configurations.
COSMO uses a Julia implementation of the QDLDL solver to factor the quasi-definite linear system. Similarly, we configured SCS to use a direct solver for the linear system.

Maros and Mészáros QP test set
The Maros and Mészáros test problem set [MM99] is a repository of challenging convex QP problems that is widely used to compare the performance of QP solvers. For comparison metrics we compute the failure rate, the number of fastest solve times, and the normalized shifted geometric mean of solve times for each solver. The shifted geometric mean is more robust against large outliers (compared to the arithmetic mean) and against small outliers (compared to the geometric mean) and is commonly used in optimisation benchmarks; see [SBG + 18, Mit]. The shifted geometric mean µ_{g,s} is defined as

µ_{g,s} = ( ∏_{p=1}^{n} (t_{p,s} + sh) )^{1/n} − sh,

with total solver time t_{p,s} of solver s on problem p, shifting factor sh, and size of the problem set n. In the reported results a shifting factor of sh = 10 was chosen, and the maximum allowable time t_{p,s} = 300 s was used if solver s failed on problem p. Lastly, we normalize the shifted geometric mean of solver s by dividing by that of the fastest solver. The failure rate f_{r,s} is the ratio of unsolved problems to the total number of problems in the set. As unsolved we count instances where the algorithm does not converge within the allowable time or fails during the setup or solve phase. Table 1 shows the normalized shifted geometric mean and the failure rate for each solver. Additionally, the number of cases where solver s was the fastest solver is shown. OSQP shows the best performance in terms of lowest failure rate, number of fastest solves, and the shifted geometric mean of solve times. COSMO follows very closely. The shifted geometric mean of MOSEK seems to suffer from a higher failure rate compared to OSQP/COSMO, and SCS fails on a large number of problems. Its higher failure rate could be due to the necessary transformation into a second-order-cone problem.
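The shifted geometric mean defined above is straightforward to compute in log space, which avoids overflow of the product:

```python
import math

def shifted_geometric_mean(times, sh=10.0):
    """mu_{g,s} = (prod_p (t_{p,s} + sh))^(1/n) - sh, evaluated via logs.
    `times` are the total solve times of one solver across the problem set."""
    n = len(times)
    return math.exp(sum(math.log(t + sh) for t in times) / n) - sh
```

If all times equal t the measure returns t itself, which is what makes dividing by the fastest solver's value a meaningful normalization.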
For this problem set of QPs, COSMO's algorithm reduces, with some minor differences, to the algorithm of OSQP. Consequently, this benchmark is useful to evaluate the performance penalty that COSMO pays for its implementation in the high-level language Julia. The results in Table 1 show that the performance difference is very small. This can also be seen by looking at the solve times of each solver for increasing problem dimension, as shown in Figure 4. The small remaining overhead of COSMO stems from features in our Julia implementation that support more than one constraint type during problem setup. The marginally better resulting performance of OSQP for the smallest problems in the test set is the reason that OSQP is the faster solver in a larger number of cases in Table 1.

Custom convex cones
In many cases writing a custom solver algorithm for a particular problem can be faster than using available solver packages if a particular aspect of the problem structure can be exploited to speed up parts of the computations. As mentioned earlier, COSMO supports user customisation by allowing the definition of new convex cones. This is useful if constraints of the problem can be expressed using this new convex cone and a fast projection method onto the cone exists. A fast specialized projection method in an ADMM framework has for example been used by the authors in [BLDR13] to solve the error-correcting code decoding problem.
To demonstrate the advantage of custom convex cones, consider the problem of finding the doubly stochastic matrix that is nearest, in the Frobenius norm, to a given symmetric matrix C ∈ S^n. Doubly stochastic matrices are used for instance in spectral clustering [ZS07] and matrix balancing [RHD + 14]. A specialized algorithm for this problem type has recently been discussed in [RG19]. Doubly stochastic matrices have the property that all rows and columns each sum to one and all entries are nonnegative. The nearest doubly stochastic matrix X can be found by solving the following optimisation problem: with symmetric real matrix C ∈ S^n and decision variable X ∈ R^{n×n}. This problem can be solved as a QP in the following form using equality and inequality constraints: with x = vec(X) and c = vec(C). However, the problem can be written in a more compact form by using a custom projection function to project the matrix iterate onto the affine set C_□ of matrices whose rows and columns each sum to one. In general, the projection of a vector s ∈ R^n onto the affine set C_a = {s ∈ R^n | As = b} is given by

Π_{C_a}(s) = s − A^⊤(AA^⊤)^{−1}(As − b),

where A is assumed to have full row rank. In the case C_a = C_□ we can exploit the fact that the inverse of AA^⊤ can be computed efficiently. The projection can be carried out as described in Algorithm 5; see Appendix B.1 for a derivation.

Algorithm 5: Projection of s onto C_□
1: A_r = 1_n^⊤ ⊗ I_n, A_c = [I_{n−1} ⊗ 1_n^⊤, 0_{(n−1)×n}]
2: A = [A_r; A_c]
3: r = [r_1; r_2] = As − 1_{2n−1}
4: η_2 = (1/n)(I_{n−1} + 1_{n−1}1_{n−1}^⊤)(r_2 − (1/n) 1_{n−1}1_n^⊤ r_1)
5: η_1 = (1/n)(r_1 − 1_n 1_{n−1}^⊤ η_2)
6: η = [η_1; η_2]
7: Π_{C_□}(s) = s − A^⊤η

Notice that Algorithm 5 can be implemented efficiently without ever assembling and storing A or AA^⊤. By using the custom convex set C_□ and the corresponding projection function, (53) can now be rewritten as (56). The sparsity pattern of the new constraint matrix then consists of only two diagonals, and the number of non-zeros reduces from 3n^2 to 2n^2.
We expect this to reduce the initial factorisation time of the linear system in (17) as well as the forward- and back-substitution steps. Figure 5 shows the total solve time of all the solvers for problem (53) with a randomly generated dense matrix C with C_ij ∼ U(0, 1) and increasing matrix dimension. Additionally, we show the solve time for COSMO in the problem form (53) and with a specialized custom set and projection function as in (56). It is not surprising that COSMO and OSQP scale in the same way for this problem type. MOSEK is slightly slower for smaller problem dimensions and overtakes COSMO/OSQP for problems of dimension n ≥ 500. This might be due to the fact that MOSEK uses a faster multithreaded linear system solver while OSQP/COSMO rely on the single-threaded solver QDLDL. The longer solve time of SCS is due to slow convergence of the algorithm for this problem type. Furthermore, when the problem is solved with a custom convex set as in (56), COSMO(CS) is able to outperform all other solvers. Table 2 shows the total solve time and the factorisation time of the two versions of COSMO for small, medium and large problems. As predicted, the lower solve time can mainly be attributed to the faster factorisation time.

Nearest correlation matrix
Consider the problem of projecting a matrix C onto the set of correlation matrices, i.e. real symmetric positive semidefinite matrices with diagonal elements equal to 1. This problem is relevant, for example, in portfolio optimisation [Hig02]: the correlation matrix of a stock portfolio might lose its positive semidefiniteness due to noise and rounding errors of previous data manipulations. Consequently, it is of interest to find the nearest correlation matrix X to a given data matrix C ∈ R^{n×n}. The problem is given by: In order to transform the problem into the standard form (3) used by COSMO, C and X are vectorized and the objective function is expanded: with c = vec(C) ∈ R^{n^2} and x = vec(X) ∈ R^{n^2}. Here E ∈ R^{n×n^2} is a matrix that extracts the n diagonal entries X_ii from the vectorized form x.
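As a point of comparison, this specific problem can also be attacked with the alternating-projections method of [Hig02]; the sketch below is that baseline, not COSMO's SDP formulation with quadratic objective:

```python
import numpy as np

def nearest_correlation(C, iters=300):
    """Higham-style alternating projections between the PSD cone and the set
    of unit-diagonal matrices, with a Dykstra correction for the (non-affine)
    PSD projection. Illustrative baseline only."""
    X = (C + C.T) / 2
    dS = np.zeros_like(X)
    for _ in range(iters):
        R = X - dS
        w, V = np.linalg.eigh(R)                  # project onto PSD cone
        Xp = (V * np.clip(w, 0.0, None)) @ V.T
        dS = Xp - R                               # Dykstra correction
        X = Xp.copy()
        np.fill_diagonal(X, 1.0)                  # project onto unit diagonal
    return X
```

No correction term is needed for the diagonal step because that set is affine.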
For the benchmark problems we randomly sample the data matrix C with entries C i,j ∼ U(−1, 1) from a uniform distribution. Figure 6 shows the benchmark results for increasing matrix dimension n. Unsurprisingly, the first-order methods SCS and COSMO outperform the interior-point solver MOSEK for these large SDPs. Furthermore, for larger problems the solve times of COSMO and SCS scale in a similar way. COSMO seems to benefit from directly supporting the quadratic objective term in the problem while SCS has to transform it into an additional second-order-cone constraint. This increases the factorisation time and the projection time; see Table 5.

Block-arrow sparse SDPs
To demonstrate the benefits of the chordal decomposition discussed in Section 4, we consider randomly generated SDPs of the form (36) with a block-arrow aggregate sparsity pattern, similar to test problems in [ZFP + 19, ADV10]. Figure 7 shows the sparsity pattern of the PSD constraint. The pattern is generated based on the following parameters: block size d, number of blocks N_b, and width of the arrow head w. Note that the graph corresponding to this sparsity pattern is always chordal and that, for this pattern, clique merging yields no benefit. In our experiments, COSMO with chordal decomposition, COSMO(CD), was consistently faster than MOSEK and SCS. The reason why COSMO performs better than the other first-order solver SCS is the significantly lower number of iterations (Table 6). Furthermore, one can see that in both cases the solver time of each solver rises when the number of blocks and the block sizes are increased. The increase is smaller for COSMO(CD), which is affected more by the number of iterations than by the problem dimension.
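A generator for such a block-arrow pattern might look as follows; the parameter names match the text, while the layout (diagonal blocks first, arrow head in the last w rows and columns) is our assumption about Figure 7:

```python
import numpy as np

def block_arrow_pattern(d, N_b, w):
    """Boolean aggregate sparsity pattern with N_b diagonal blocks of size d
    and an arrow head of width w; the resulting graph is chordal."""
    n = d * N_b + w
    P = np.zeros((n, n), dtype=bool)
    for k in range(N_b):
        P[k*d:(k+1)*d, k*d:(k+1)*d] = True   # dense diagonal blocks
    P[:, -w:] = True                          # arrow head: last w columns
    P[-w:, :] = True                          # and last w rows
    return P
```

Each maximal clique is one diagonal block plus the arrow head, so the cliques only overlap in the arrow head, which is why merging brings no benefit here.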

Non-chordal problems with clique merging
To compare our proposed clique graph-based merge approach with the clique tree-based strategies of [NFF + 03] and [SAV14], all three methods discussed in Section 5 were used to preprocess large sparse SDPs from SDPLib, a collection of SDP benchmark problems [Bor99]. This problem set contains maximum cut problems, SDP relaxations of quadratic programs and Lovász theta problems. Moreover, we consider a set of test SDPs generated from (non-chordal) sparsity patterns of matrices from the SuiteSparse Matrix Collections [DH11]. The sparsity patterns for these problems are shown in Figure 10. Both problem sets were used in the past to benchmark structured SDPs [ZFP + 19, ADV10]. This section discusses how the different decompositions affect the per-iteration computation times of the solver. In a second step we compare the solver time of COSMO with our clique graph merging strategy to those of MOSEK and SCS.
For the strategy described in [NFF + 03] we used the SparseCoLO package to decompose the problem. The parent-child method discussed in [SAV14] and the clique graph-based method described in Section 5.2 are available as options in COSMO. We further investigate the effect of using different edge weighting functions. The major operation affecting the per-iteration time is the projection step, which involves an eigenvalue decomposition of the matrices corresponding to the cliques. Since the eigenvalue decomposition of a symmetric matrix of dimension N has a complexity of O(N^3), we define a nominal edge weighting function as in (50). However, the exact relationship will differ because the projection function involves copying of data and is affected by hardware properties such as cache size. We therefore also consider an empirically estimated edge weighting function. To determine the relationship between matrix size and projection time, the execution time of the relevant function inside COSMO was measured for different matrix sizes. We then approximated the relationship between projection time t_proj and matrix size N as a polynomial whose coefficients a, b were estimated using least squares (Figure 11); the estimated weighting function is then defined analogously to (50) using this fitted model.

Six different cases were considered: no decomposition (NoDe), no clique merging (NoMer), decomposition using SparseCoLO (SpCo), parent-child merging (ParCh), and the clique graph-based method with nominal edge weighting (CG1) and estimated edge weighting (CG2). SparseCoLO was used with default parameters. All cases were run single-threaded. Since the per-iteration projection times for some problems lie in the millisecond range, every problem was benchmarked ten times and the median values are reported. Table 3 shows the solve time, the mean projection time, the number of iterations, the number of cliques after merging, and the maximum clique size of the sparsity pattern for each problem and strategy.
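The least-squares fit can be reproduced as below; the two-term model form t_proj(N) ≈ a·N^3 + b·N^2 is our assumption for the polynomial mentioned in the text:

```python
import numpy as np

def fit_projection_model(sizes, times):
    """Fit t_proj(N) ~ a*N^3 + b*N^2 by linear least squares over measured
    (size, time) pairs; the model form is an assumption."""
    N = np.asarray(sizes, dtype=float)
    M = np.column_stack([N**3, N**2])       # design matrix, one row per sample
    (a, b), *_ = np.linalg.lstsq(M, np.asarray(times, dtype=float), rcond=None)
    return a, b
```

Because the model is linear in (a, b), no nonlinear solver is needed, and the fitted curve can directly replace the cubes in the nominal weighting.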
The solve time includes the time spent on decomposition and clique merging. We do not report the total solver time when SparseCoLO was used for the decomposition because this has to be done in a separate preprocessing step in MATLAB which was orders of magnitude slower than the other methods.
Our clique graph-based methods lead to a reduction in overall solver time. The method with estimated edge weighting function CG2 achieves the lowest average projection times for the majority of problems. In four cases ParCh has a narrow advantage. The geometric mean of the ratios of projection time of CG2 compared to the best non-graph method is 0.701, with a minimum ratio of 0.407 for problem mcp500-2. There does not seem to be a clear pattern that relates the projection time to the number of cliques or the maximum clique size of the decomposition. This is expected as the optimal merging strategy depends on the properties of the initial decomposition such as the overlap between the cliques. The merging strategies ParCh, CG1 and CG2 generally result in similar maximum clique sizes compared to SparseCoLO, with CG1 being the most conservative in the number of merges. Table 4 shows the benchmark results of COSMO with merging strategy CG2, MOSEK, and SCS. The decomposition helps COSMO to solve most problems faster than MOSEK and SCS. This is even more significant for the larger problems that were generated from the SuiteSparse Matrix Collection. The decomposition does not seem to provide a major benefit for the slightly denser problems mcp500-3 and mcp500-4. Furthermore, COSMO seems to converge slowly for qpG51 and thetaG51. Similar observations for mcp500-3, mcp500-4 and thetaG51 have been made by the authors in [ADV10]. Finally, many of the larger problems were not solvable within the time limit or caused out-of-memory problems if no decomposition was used in MOSEK and SCS.

Conclusions
This paper describes the first-order solver COSMO and the ADMM algorithm on which it is based. The solver combines direct support of quadratic objectives, infeasibility detection, custom constraints, chordal decomposition of PSD constraints and automatic clique merging. The performance of the solver is illustrated on a number of benchmark problems that challenge different aspects of modern solvers.
Having computed the elements of the dual variable η, the projected vector s p is obtained by solving (64).