Efficient Semidefinite Programming with Approximate ADMM

Tenfold improvements in computation speed can be brought to the alternating direction method of multipliers (ADMM) for Semidefinite Programming with virtually no decrease in robustness and provable convergence simply by projecting approximately to the Semidefinite cone. Instead of computing the projections via “exact” eigendecompositions that scale cubically with the matrix size and cannot be warm-started, we suggest using state-of-the-art factorization-free, approximate eigensolvers, thus achieving almost quadratic scaling and the crucial ability of warm-starting. Using a recent result from Goulart et al. (Linear Algebra Appl 594:177–192, 2020. https://doi.org/10.1016/j.laa.2020.02.014), we are able to circumvent the numerical instability of the eigendecomposition and thus maintain tight control on the projection accuracy. This in turn guarantees convergence, either to a solution or a certificate of infeasibility, of the ADMM algorithm. To achieve this, we extend recent results from Banjac et al. (J Optim Theory Appl 183(2):490–519, 2019. https://doi.org/10.1007/s10957-019-01575-y) to prove that reliable infeasibility detection can be performed with ADMM even in the presence of approximation errors. In all of the considered problems of SDPLIB that “exact” ADMM can solve in a few thousand iterations, our approach brings a significant, up to 20x, speedup without a noticeable increase in ADMM’s iterations.


Introduction
Semidefinite Programming is of central importance in many scientific fields.
Areas as diverse as kernel-based learning [20], dimensionality reduction [9], analysis and synthesis of state feedback policies for linear dynamical systems [6], sum-of-squares programming [33], optimal power flow problems [21] and fluid mechanics [15] rely on Semidefinite Programming as a crucial enabling technology.
The wide adoption of Semidefinite Programming was facilitated by reliable algorithms that can solve semidefinite problems with polynomial worst-case complexity [6]. For small- to medium-sized problems, it is widely accepted that primal-dual Interior Point methods are efficient and robust, and they are therefore often the method of choice. Several open-source solvers, like SDPT3 [40] and SDPA [42], as well as the commercial solver MOSEK [25], follow this approach. However, the limitations of Interior Point methods become evident in large-scale problems, since each iteration requires factorizations of large Hessian matrices. First-order methods avoid this bottleneck and thereby scale better to large problems, with the ability to provide modest-accuracy solutions for many large-scale problems of practical interest.
We will focus on the Alternating Direction Method of Multipliers (ADMM), a popular first-order algorithm that has been the method of choice for several popular optimization solvers, both for Semidefinite Programming [28], [43], [12] and for other types of convex optimization problems such as Quadratic Programming (QP) [38]. Following an initial factorization of an m × m matrix, every iteration of ADMM entails the solution of a linear system via forward/backward substitution and a projection onto the Semidefinite Cone. For SDPs, this projection operation typically takes the majority of the solution time, sometimes 90% or more. Thus, reducing the per-iteration time of ADMM is directly linked to computing conic projections in a time-efficient manner.
The projection of a symmetric n × n matrix A onto the Semidefinite Cone is defined as

Π_{S+}(A) := argmin_{X ∈ S_+^n} ‖X − A‖_F,

and can be computed in "closed form" as a function of the eigendecomposition of A. Indeed, assuming

A = V_+ Λ_+ V_+^T + V_− Λ_− V_−^T,    (1)

where V_+ (respectively V_−) is an orthonormal matrix containing the positive (nonpositive) eigenvectors, and Λ_+ (Λ_−) is a diagonal matrix that contains the respective positive (nonpositive) eigenvalues of A, then

Π_{S+}(A) = V_+ Λ_+ V_+^T = A − V_− Λ_− V_−^T.    (2)

The computation of Π_{S+} therefore entails the (partial) eigendecomposition of A followed by a scaled matrix-matrix product.
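For concreteness, this "closed form" projection can be sketched in a few lines of Python/NumPy (an illustrative stand-in for the LAPACK-based routines used by actual solvers):

```python
import numpy as np

def project_psd(A):
    """Project a symmetric matrix A onto the positive semidefinite cone
    by zeroing out its negative eigenvalues."""
    # eigh returns the eigendecomposition of a symmetric matrix,
    # with eigenvalues in ascending order
    lam, V = np.linalg.eigh(A)
    # keep only the positive part of the spectrum: V_+ Lambda_+ V_+^T
    return (V * np.maximum(lam, 0.0)) @ V.T
```

For example, projecting diag(−1, 2) yields diag(0, 2): the negative eigenvalue is clipped to zero while the positive one is untouched.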
The majority of optimization solvers, e.g. SCS [28] and COSMO.jl [12], calculate Π_{S+} by computing the full eigendecomposition using LAPACK's syevr routine (detailed in https://software.intel.com/mkl-developer-reference-c-syevr). There are two important limitations associated with computing full eigendecompositions: eigendecomposition has cubic complexity with respect to the matrix size n [14, §8], and it cannot be warm-started. This has prompted research on methods for the approximate computation of a few eigenpairs in an iterative fashion [35], [10], [31], and the associated development of relevant software tools such as the widely used ARPACK [22] and the more recent BLOPEX [19] and PRIMME [37]. The reader can find surveys of relevant software in [17] and [37, §2]. However, the use of iterative eigensolvers in the Semidefinite optimization community has been very limited; to the best of our knowledge, approximate eigensolvers have not been adopted in widely-available ADMM implementations. In a related work, [24] considered the use of polynomial subspace extraction to avoid the expensive eigenvalue decompositions required by first-order methods (including ADMM) and showed improved performance in problems of low-rank structure, while still maintaining convergence guarantees. In the wider area of first-order methods, [41, §3.1] considered ARPACK but disregarded it on the basis that it does not allow efficient warm starting, suggesting that it should only be used when the problem is known a priori to have low rank.
Wen's suggestion of using ARPACK for SDPs whose solutions are expected to be low rank has been demonstrated recently by [36]. At every iteration, [36] uses ARPACK to compute the r largest eigenvalues/eigenvectors and then uses the approximate projection

Π(A) = Σ_{i=1}^r max(λ_i, 0) v_i v_i^T.

The projection error can then be bounded in terms of the neglected eigenvalues. The parameter r is chosen such that it decreases with increasing iteration count so that the projection errors are summable. The summability of the projection errors is important, as it has been shown to ensure convergence of averaged nonexpansive operators [4, Proposition 5.34] and for ADMM in particular [11, Theorem 8].
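The rank-r scheme described above can be sketched as follows, using SciPy's ARPACK wrapper eigsh for illustration (the schedule that shrinks r across iterations is omitted):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def project_psd_lowrank(A, r):
    """Approximate PSD projection built from only the r largest (algebraic)
    eigenpairs, computed iteratively with ARPACK instead of a full
    eigendecomposition."""
    lam, V = eigsh(A, k=r, which='LA')  # r largest algebraic eigenvalues
    return (V * np.maximum(lam, 0.0)) @ V.T
```

If A has at most r positive eigenvalues, this coincides with the exact projection; otherwise the discarded positive eigenpairs contribute the projection error.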
However, the analysis of [36] depends on the assumption that the iterative eigensolver will indeed compute the r largest eigenpairs "exactly". This is both practically and theoretically problematic: the computation of eigenvectors is numerically unstable, since its accuracy depends inverse-proportionally on the spectral gap (defined as the distance between the corresponding eigenvalue and its nearest neighboring eigenvalue; refer to §5.2 for a concise definition), and therefore no useful bounds can be given when repeated eigenvalues exist.
In contrast, our approach relies on a novel bound that characterizes the projection accuracy independently of the spectral gaps, depending only on the residual norms.The derived bounds do not require that the eigenpairs have been computed "exactly", but hold for any set of approximate eigenpairs obtained via the Rayleigh-Ritz process.This allows us to compute the eigenpairs with a relatively loose tolerance while still retaining convergence guarantees.
Furthermore, unlike [36], our approach allows warm-starting of the eigensolver, which typically results in improved computational efficiency.
On the theoretical side, we extend recent results regarding the detection of primal or dual infeasibility. It is well known that if an SDP problem is infeasible then the iterates of ADMM will diverge [11]. This is true even when the iterates of ADMM are computed approximately with summable approximation errors. Hence, infeasibility can in principle be detected by stopping the ADMM algorithm when the iterates exceed a certain bound. However, this is unreliable both in practice, because it depends on the choice of the bound, and in theory, because it does not provide certificates of infeasibility [8]. Recently, [3] has shown that the successive differences of ADMM's iterates, which always converge regardless of feasibility, can be used to reliably detect infeasibility and construct infeasibility certificates. This approach has been used successfully in the optimization solver OSQP [38]. We extend Banjac's results to show that they hold even when ADMM's iterates are computed approximately, under the assumption that the approximation errors are summable.

Approximate ADMM
Although the focus of this paper is on Semidefinite Programming, our analysis holds for more general convex optimization problems that allow for combinations of Semidefinite Programs, Linear Programs (LPs), Quadratic Programs (QPs) and Second Order Cone Programs (SOCPs), among others. In particular, the problem form we consider is

minimize   (1/2) x^T P x + q^T x
subject to Ax = z,  z ∈ C,    (P)

where x ∈ R^n and z ∈ R^m are the decision variables, P ∈ S_+^n, q ∈ R^n, A ∈ R^{m×n} and C is a translated composition of the positive orthant, second order and/or semidefinite cones.
We suggest solving (P), i.e. finding a solution (x̄, z̄, ȳ), where ȳ is a Lagrange multiplier for the equality constraint of (P), with the approximate version of ADMM described in Algorithm 1.

Algorithm 1: Solving (P) with approximate ADMM
1  given initial values x^0, y^0, z^0, parameters ρ > 0, σ > 0, α ∈ (0, 2), the summable sequences (μ^k)_{k∈N} and (ν^k)_{k∈N}, and with x ≈_ε y denoting that the vectors x, y satisfy ‖x − y‖ ≤ ε;
   for k = 0, . . . until convergence do

As is common in ADMM methods, our Algorithm consists of repeated solutions of linear systems (line 2) and projections onto C (line 4). These steps are the primary drivers of ADMM's efficiency and are typically computed to machine precision via matrix factorizations. Indeed, Algorithm 1 was first introduced by [38] and [3] in the absence of approximation errors. However, "exact" computations can be prohibitively expensive for large problems (and are indeed impossible in finite-precision arithmetic), and the practitioner may have to rely on approximate methods for their computation. For example, [7, §4.3] suggests using the Conjugate Gradient method for approximately solving the linear systems embedded in ADMM.
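To fix ideas, the overall structure of such an ADMM iteration for (P) can be sketched in Python as follows. This is an illustrative sketch of the splitting of [38]/[3] with the linear system and projection computed exactly (i.e. μ^k = ν^k = 0) and with arbitrary parameter defaults chosen here for the example; it is not the paper's implementation.

```python
import numpy as np

def admm(P, q, A, proj_C, rho=1.0, sigma=1.0, alpha=1.6, iters=1000):
    """Sketch of ADMM for: min 0.5 x'Px + q'x  s.t.  Ax = z, z in C."""
    n, m = P.shape[0], A.shape[0]
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    # The KKT matrix is fixed across iterations, so it is factorized
    # (here, for simplicity, inverted) only once
    K = np.block([[P + sigma * np.eye(n), A.T],
                  [A, -np.eye(m) / rho]])
    K_inv = np.linalg.inv(K)
    for _ in range(iters):
        rhs = np.concatenate([sigma * x - q, z - y / rho])
        sol = K_inv @ rhs                  # linear-system step (line 2)
        x_t, nu = sol[:n], sol[n:]
        z_t = z + (nu - y) / rho
        x = alpha * x_t + (1 - alpha) * x  # over-relaxation with parameter alpha
        w = alpha * z_t + (1 - alpha) * z
        z = proj_C(w + y / rho)            # projection onto C (line 4)
        y = y + rho * (w - z)              # dual update
    return x, z, y
```

For example, with P = I, q = −1, A = I and C the nonnegative orthant (a tiny QP), the iteration converges to x = (1, 1).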
In Section 5 we suggest specific methods for the approximate computation of ADMM's steps, with a focus on the operation of line 4. Before moving on to particular methods, we first discuss the convergence properties of Algorithm 1.
Our analysis explicitly accounts for approximation errors and provides convergence guarantees, either to solutions or to certificates of infeasibility, in their presence. In general, when ADMM's steps are computed approximately, ADMM might lose its convergence properties. Indeed, when the approximation errors are not controlled appropriately, the Fejér monotonicity [4] of the iterates and any convergence rates of ADMM can be lost; in the worst case, the iterates could diverge. However, the following Theorem, which constitutes the main theoretical result of this paper, shows that, due to the requirement that the approximation errors are summable across the Algorithm's iterations, Algorithm 1 converges either to a solution or to a certificate of infeasibility of (P). In particular, the limits δx := lim_{k→∞} (x^{k+1} − x^k) and δy := lim_{k→∞} (y^{k+1} − y^k) still exist and can be used to detect infeasibility as follows:

(i) If δy ≠ 0 then (P) is primal infeasible and δy is a certificate of primal infeasibility [3, Proposition 3.1] in that it satisfies A^T δy = 0 and S_C(δy) < 0.

(ii) If δx ≠ 0 then (P) is dual infeasible and δx is a certificate of dual infeasibility [3] in that it satisfies Pδx = 0, Aδx ∈ C^∞ and q^T δx < 0.

(iii) If both δx ≠ 0 and δy ≠ 0 then (P) is both primal and dual infeasible and (δx, δy) are certificates of primal and dual infeasibility as above.
In order to prove Theorem 2.1 we must first discuss some key properties of ADMM. This will provide the theoretical background that will allow us to present the proof in Section 4. Then, in Section 5, we will discuss particular methods for the approximate computation of ADMM's steps that can lead to significant speedups.

The asymptotic behaviour of approximate ADMM
In this section we present ADMM in a general setting, express it as an iteration over an averaged operator, and then consider its convergence when this operator is computed only approximately.
Although (admm1)–(admm3) are useful for implementing ADMM, theoretical analyses of the algorithm typically consider ADMM as an iteration over an averaged operator. To express ADMM in operator form, note that (admm1) and (admm2) can be expressed in terms of the proximity operator

prox_f(x) := argmin_y f(y) + (1/2)‖y − x‖²

[4, §24] and the similarly defined prox_g, respectively. Now, using the reflections of prox_f and prox_g, i.e. R_f := 2prox_f − Id and R_g := 2prox_g − Id, we can express ADMM as the iteration

φ^{k+1} = Tφ^k,   T := (1 − α/2) Id + (α/2) R_g R_f,

over the (α/2)-averaged operator T, on the variable φ^k := χ^k + ω^{k−1} (see [34, §3.A] or [13, Appendix B] for details).
The variables ψ, χ, ω of (admm1)–(admm3) can then be obtained from φ as in (8). We are interested in the convergence properties of ADMM when the operators prox_f, prox_g, and thus T, are computed inexactly. In particular, we suppose that prox_f and prox_g are evaluated to within accuracies ε_f^k and ε_g^k respectively, for some error sequences (ε_f^k), (ε_g^k). Our convergence results will depend on the assumption that (ε_f^k) and (ε_g^k) are summable. This implies that φ^k can be considered as an approximate iteration over T, i.e.

‖φ^{k+1} − Tφ^k‖ ≤ ε^k,    (9)
for some summable error sequence (ε^k). Indeed, since R_g and R_f are nonexpansive, we have ε^k ≤ α(ε_f^k + ε_g^k), which is summable whenever (ε_f^k) and (ε_g^k) are. It is well known that, when (ε_f^k), (ε_g^k) are summable, (9) converges to a solution of (S), obtained by φ according to (8), provided that (S) has a KKT point [11, Theorem 8]. We will show that, under the summability assumption, δφ := lim_{k→∞} (φ^{k+1} − φ^k) always exists, regardless of whether (S) has a KKT point:

Theorem 3.1 Suppose that (ε^k) is summable. Then the limit δφ := lim_{k→∞} (φ^{k+1} − φ^k) exists.

Proof This is a special case of Proposition A.1 of Appendix A.
Theorem 3.1 will prove useful in detecting infeasibility, as we will show in the following section.

Proof of Theorem 2.1
We now turn our attention to proving Theorem 2.1. To this end, note that (P) can be regarded as a special case of (S) [3, 38]. This becomes clear if we set χ = (x, z), ψ = (x̃, z̃), and define

f(x, z) := (1/2) x^T P x + q^T x + I_{Ax=z}(x, z),   g(x, z) := I_C(z),

where I_C(z) denotes the indicator function of C. Furthermore, using the analysis of the previous section and defining the norm

‖(x, z)‖² := σ‖x‖² + ρ‖z‖²,

we find that Algorithm 1 is equivalent to iteration (9).
Due to Theorem 3.1 and [3, Lemma 5.1], we conclude that lim_{k→∞} δx^k and lim_{k→∞} δυ^k, defined by the iterates of Algorithm 1, converge to the respective limits defined by the iterates of Algorithm 1 with μ^k = ν^k = 0 for all k ∈ N.
To show the same result for δy, first recall that y^k = ρ(υ^k − z^k). It then suffices to show the desired result for lim_{k→∞} δz^k. We show this using arguments similar to [3, Proposition 5.1 (iv)]. Indeed, due to (15c)–(15d), we have lim_{k→∞} δx^k = lim_{k→∞} δx̃^k and lim_{k→∞} δz^k = lim_{k→∞} δz̃^k. Furthermore, due to (11), we have Ax̃^{k+1} − z̃^{k+1} = e^k for some sequence (e^k) with summable norms, thus lim_{k→∞} (Ax̃^{k+1} − z̃^{k+1}) = 0, and the claim follows due to [3, Proposition 5.1 (i) and (iv)].

Krylov-Subspace Methods for ADMM
In this section, we suggest suitable methods for calculating the individual steps of Algorithm 1. We will focus on Semidefinite Programming, i.e., the case where C is the semidefinite cone. After an initial presentation of state-of-the-art methods used for solving linear systems approximately, we will describe (in §5.1) LOBPCG, the suggested method for projecting onto the semidefinite cone. Note that some of our presentation recalls established linear algebra techniques, which we include for the sake of completeness.
We begin with a discussion of the Conjugate Gradient method, a widely used method for the solution of the linear systems embedded in Algorithm 1.
Through CG's presentation we will introduce the Krylov subspace, which is a critical component of LOBPCG. Finally, we will show how we can ensure that the approximation errors are summable across ADMM iterations, thus guaranteeing convergence of the algorithm.
The linear systems embedded in Algorithm 1 are of the form

[ P + σI      A^T  ] [ x̃^{k+1} ]   [ σx^k − q       ]
[ A       −(1/ρ)I ] [ υ^{k+1} ] = [ z^k − (1/ρ)y^k ].    (16)

The linear system (16) belongs to the widely studied class of symmetric quasidefinite systems [5], [29]. Standard scientific software packages, such as the Intel Math Kernel Library and the Pardiso Linear Solver, implement methods that can solve (16) approximately. Since the approximate solution of (16) can be considered standard in the Linear Algebra community, we will only discuss the popular class of Krylov subspace methods, which includes the celebrated Conjugate Gradient (CG) method. Although CG has been used in ADMM extensively [7, §4.3], [28], its presentation will be useful for introducing some basic concepts that are shared with the main focus of this section, i.e. the approximate projection onto the semidefinite cone.
From an optimization perspective, Krylov subspace algorithms for solving linear systems can be considered an improvement over gradient methods. Indeed, solving Ax = b, where A ∈ S_{++}^n, is equivalent to minimizing the objective function

f(x) := (1/2) x^T A x − b^T x,

e.g. via the gradient descent iteration

x^{k+1} = x^k + β_k r^k,   r^k := b − Ax^k = −∇f(x^k),

where β_k is the step size at iteration k. Note that x^k ∈ x^0 + K_k(A, r^0), where

K_k(A, r^0) := span(r^0, Ar^0, . . . , A^{k−1} r^0)

is known as the Krylov subspace. As a result, the following algorithm

x^{k+1} = argmin { f(x) : x ∈ x^0 + K_{k+1}(A, r^0) }    (CG)

is guaranteed to yield results that are no worse than gradient descent. What is remarkable is that (CG) can be implemented efficiently in the form of two-term recurrences, resulting in the Conjugate Gradient (CG) Algorithm [14, §11.3].
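For completeness, the resulting two-term recurrence can be sketched in a few lines of Python, with a residual-based stopping rule of the kind used later in §5.2 (an illustrative sketch, not the solver's implementation):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Textbook Conjugate Gradient for Ax = b with A symmetric positive
    definite.  The k-th iterate minimizes f over x0 + K_k(A, r0)."""
    n = len(b)
    max_iter = max_iter or 10 * n
    x = np.zeros(n)
    r = b - A @ x          # residual, equal to -grad f(x)
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:      # residual-norm stopping rule
            break
        Ap = A @ p
        beta = rs / (p @ Ap)       # exact line search along p
        x += beta * p
        r -= beta * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p  # A-conjugate update of the direction
        rs = rs_new
    return x
```

Only one matrix-vector product with A is needed per iteration, which is what makes the method attractive for large sparse systems such as (16).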
We now turn our attention to the projection onto the Semidefinite cone, which we have already defined in the introduction. Recalling (2), we note that the projection onto the semidefinite cone can be computed via either the positive or the negative eigenpairs of A. As we will see, the cost of approximating eigenpairs of a matrix depends on their cardinality; thus computing Π_{S+}(A) with the positive eigenpairs of A is preferable when A has mostly nonpositive eigenvalues, and vice versa. In the following discussion we will focus on methods that compute the positive eigenpairs of A, thus assuming that A has mostly nonpositive eigenvalues. The opposite case can easily be handled by considering −A.
Similarly to CG, the class of Krylov subspace methods is very popular for the computation of "extreme" eigenvectors of an n × n symmetric matrix A, and can be considered as an improvement over gradient methods. In the subsequent analysis we will make frequent use of the real eigenvalues of A, which we denote by λ_1 ≥ · · · ≥ λ_n, and a set of corresponding orthogonal eigenvectors υ_1, . . . , υ_n. The objective to be maximized in this case is the Rayleigh quotient

r(x) := (x^T A x)/(x^T x),

due to the fact that the maximum and minimum values of r(x) are λ_1 and λ_n respectively, with υ_1 and υ_n as the corresponding extremizers [14, Theorem 8.1.2]. Thus, we end up with the following gradient ascent iteration

x^{k+1} = α_k x^k + β_k ∇r(x^k),    (19)

where the "stepsizes" α_k and β_k and the initial point x^0 are chosen so that all the iterates lie on the unit sphere. Although r(x) is nonconvex, (19) can be shown to converge when appropriate stepsizes are used. Similarly to the gradient descent method for linear systems, the iterates of (19) lie in the Krylov subspace K_k(A, x^0). As a result, the following algorithm

x^{k+1} = argmax { r(x) : x ∈ K_{k+1}(A, x^0) }    (20)

is guaranteed to yield no worse results than any variant of (19) in finding an eigenvector associated with max_i λ_i, and in practice the difference is often remarkable. But how can the Rayleigh quotient be maximized over a subspace?
This can be achieved with the Rayleigh–Ritz procedure (21): given a matrix whose columns form an orthonormal basis of the trial subspace, one computes the eigendecomposition of the small matrix obtained by compressing A onto that basis; the resulting eigenvalues (Ritz values) and the correspondingly rotated basis vectors (Ritz vectors) approximate eigenpairs of A. Note that, unlike (20), the Rayleigh–Ritz procedure (21) provides approximations not only to one, but to k eigenpairs, with the extreme ones exhibiting a faster rate of convergence.
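In its basic form, the Rayleigh–Ritz procedure reduces to a small dense eigenproblem (a Python sketch; in LOBPCG the basis S below is the 3m-dimensional trial block):

```python
import numpy as np

def rayleigh_ritz(A, S):
    """Rayleigh-Ritz: approximate eigenpairs of the symmetric matrix A
    from the trial subspace spanned by the columns of S."""
    Q, _ = np.linalg.qr(S)                  # orthonormal basis of span(S)
    theta, W = np.linalg.eigh(Q.T @ A @ Q)  # small k x k eigenproblem
    return theta, Q @ W                     # Ritz values and Ritz vectors
```

When S happens to span an invariant subspace of A, the Ritz pairs are exact eigenpairs; otherwise they are the best approximations available from the subspace in the Rayleigh–Ritz sense.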
Remarkably, similarly to the Conjugate Gradient algorithm, (21) and (20) also admit an efficient implementation, in the form of three-term recurrences known as the Lanczos Algorithm [14, §10.1]. In fact, the Lanczos Algorithm produces a sequence of orthonormal vectors that tridiagonalize A. Given this sequence of vectors, the computation of the associated Ritz pairs is inexpensive [14, §8.4]. The Lanczos Algorithm is usually the method of choice for computing a few extreme eigenpairs of a symmetric matrix. However, although the Lanczos Algorithm is computationally efficient, the Lanczos process can suffer from loss of orthogonality, with the issue becoming particularly pronounced when a Ritz pair is close to converging to some (usually extremal) eigenpair [30]. Occasional re-orthogonalizations, with a cost of O(n² l_k) where l_k is the dimension of the k-th trial subspace, are required to mitigate the effects of this numerical instability. To avoid such a computational cost, the Krylov subspace is restarted or shrunk so that l_k, and thus the computational cost of re-orthogonalizations, is bounded by an acceptable amount. The Lanczos Algorithm with occasional restarts is the approach employed by the popular eigensolver ARPACK [22] for symmetric matrices.
However, the Lanczos Algorithm has two limitations: it does not allow for efficient warm starting with multiple eigenvector estimates, since its starting point is a single vector, and it cannot detect the multiplicity of the approximated eigenvalues, as it normally provides a single approximate eigenvector for every invariant subspace of A.
Block Lanczos addresses both of these issues. Similarly to the standard Lanczos Algorithm, Block Lanczos computes Ritz pairs on the trial block Krylov subspace K_k(A, X_0) := span(X_0, AX_0, . . . , A^k X_0), where X_0 is an n × m matrix that contains a set of initial eigenvector guesses. Thus, Block Lanczos readily allows for the warm starting of multiple Ritz pairs. Furthermore, block methods handle clustered and repeated eigenvalues (of multiplicity up to m) well. However, these benefits come at the price of higher computational costs, as the dimension of the associated subspace is increased by m at every iteration. This, in turn, requires more frequent restarts, particularly when m is comparable to n.
In our experiments we observed that a single block iteration often provides Ritz pairs that give good enough projections for Algorithm 1.This remarkably good performance motivated us to use the Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG), presented in the following subsection.
5.1 LOBPCG: The suggested eigensolver

LOBPCG [19] is a block Krylov method that, after the first iteration, uses the trial subspace span(X_k, AX_k, ΔX_k), where ΔX_k := X_k − X_{k−1}, and sets X_{k+1} to the Ritz vectors corresponding to the m largest eigenvalues. Thus, the size of the trial subspace is fixed to 3m. As a result, LOBPCG keeps its computational costs bounded and is particularly suitable for obtaining Ritz pairs of modest accuracy, as it is not guaranteed to exhibit the super-linear convergence of Block Lanczos [10], which might only be observed after a large number of iterations. Algorithm 3 presents LOBPCG for computing the positive eigenpairs of a symmetric matrix. Note that the original LOBPCG Algorithm [19, Algorithm 5.1] is more general, in the sense that it allows for the solution of generalized eigenproblems and supports preconditioning. We do not discuss these features of LOBPCG as they are not directly relevant to Algorithm 1.
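SciPy ships an implementation of the same method; the following sketch illustrates both a cold start from random guesses and the warm start across consecutive, slightly perturbed matrices that Algorithm 1 exploits. (This uses scipy.sparse.linalg.lobpcg purely for illustration, not the Julia reimplementation used in our experiments; the test matrix and perturbation are arbitrary.)

```python
import numpy as np
from scipy.sparse.linalg import lobpcg

rng = np.random.default_rng(0)
n, m = 200, 5
A = np.diag(np.arange(n, dtype=float))  # test matrix with known spectrum 0..n-1

# Cold start: a random n x m block of initial eigenvector guesses
X0 = rng.standard_normal((n, m))
w, V = lobpcg(A, X0, tol=1e-8, maxiter=500, largest=True)

# Warm start: reuse the previous Ritz vectors for a nearby matrix, as would
# happen across consecutive ADMM iterations
A2 = A + 1e-3 * np.eye(n)
w2, V2 = lobpcg(A2, V, tol=1e-8, maxiter=500, largest=True)
```

Because the Ritz vectors of A are already excellent approximations for the eigenvectors of A2, the warm-started call typically converges in only a handful of iterations.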
On the other hand, [19] assumes that the number of desired eigenpairs is known a priori. However, this is not the case for Π_{S+}, where the computation of all positive eigenpairs is required.
In order to allow the computation of all the positive eigenpairs, X_k is expanded whenever more than m positive eigenpairs are detected in the Rayleigh–Ritz procedure. It might appear compelling to expand the subspace to include all the positive Ritz pairs computed by Rayleigh–Ritz. However, this can lead to ill-conditioning, as we proceed to show. Indeed, consider the case where we perform LOBPCG starting from an initial matrix X_0. In the first iteration, Rayleigh–Ritz is performed on span(X_0, AX_0). Suppose that all the Ritz values are positive and we thus decide to include all of the Ritz vectors in X_1, setting X_1 = [X_0 AX_0]W for some nonsingular W. In the next iteration we perform Rayleigh–Ritz on the subspace spanned by

[X_1  AX_1  ΔX_1] = [[X_0 AX_0]W  [AX_0 A²X_0]W  ΔX_1].

The problem is that the above matrix is rank deficient, since the columns of AX_0 appear (up to the transformation W) in both X_1 and AX_1. Thus one has to rely on a numerically stable algorithm, like Householder QR, for its orthonormalization (required by the Rayleigh–Ritz procedure), instead of the more efficient Cholesky QR algorithm [39, page 251]. Although, for this example, one can easily remove columns from the matrix so that it becomes full column rank, the situation becomes more complicated when not all of the Ritz values are positive. In order to avoid this numerical instability, and thus be able to use Cholesky QR for orthonormalizations, we expand X_k whenever necessary by a fixed size (equal to a small percentage of n, the size of A, e.g. n/50), with a set of randomly generated vectors.
Algorithm 3: The LOBPCG Algorithm for Computing the Positive Eigenpairs of a Symmetric Matrix
1  given A ∈ S^n and the n × m thin matrix X_0 that spans the initial trial subspace;
2  (Λ_0, X_0) ← Rayleigh–Ritz for A on the trial subspace span(X_0);
3  ΔX_0 ← empty n × 0 matrix;
4  for k = 0, . . . until convergence do
5      R_k ← AX_k − X_k Λ_k;
6      (Λ_{k+1}, X_{k+1}) ← Rayleigh–Ritz for A on the trial subspace span(X_k, R_k, ΔX_k), returning the m largest eigenpairs;
7      ΔX_{k+1} ← X_{k+1} − X_k;
8      Expand Λ_{k+1}, X_{k+1} with randomly generated elements, and set m = size(X_{k+1}, 2) = size(Λ_{k+1}, 2), if the positive Ritz values of line 6 were more than m;
9  end
10 return X_k, Λ_k containing m Ritz pairs that approximate the positive eigenpairs of A

When is projecting onto S_+ with LOBPCG most efficient? Recall that there exist two ways to project a matrix A onto the semidefinite cone. The first is to compute all the positive eigenpairs Λ_+, V_+ of A and set Π_{S+}(A) = V_+ Λ_+ V_+^T. The opposite approach is to compute all the negative eigenpairs Λ_−, V_− of A and set Π_{S+}(A) = A − V_− Λ_− V_−^T. In both cases the dominant cost scales with the number m of computed eigenpairs. Thus, when most of the eigenvalues are nonpositive, the positive eigenpairs should be approximated, and vice versa.
As a result, LOBPCG is most efficient when the eigenvalues of the matrix under projection are either almost all nonnegative or almost all nonpositive, in which case LOBPCG exhibits almost quadratic complexity, instead of the cubic complexity of the full eigendecomposition. This is the case when ADMM converges to a low-rank primal or dual solution of (P). Fortunately, low-rank solutions are often present or desirable in practical problems [23]. On the other hand, the worst-case scenario is when half of the eigenpairs are nonpositive and half nonnegative, in which case LOBPCG exhibits worse complexity than the full eigendecomposition, and thus the latter should be preferred.
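The two alternative forms of the projection can be made concrete as follows (a NumPy sketch; a full eigendecomposition stands in for the iterative computation of the relevant eigenpairs):

```python
import numpy as np

def project_psd_via_positive(A):
    """Pi(A) = V+ L+ V+^T : build the projection from the positive
    eigenpairs, preferable when A has few positive eigenvalues."""
    lam, V = np.linalg.eigh(A)
    pos = lam > 0
    return (V[:, pos] * lam[pos]) @ V[:, pos].T

def project_psd_via_negative(A):
    """Pi(A) = A - V- L- V-^T : subtract the negative part instead,
    preferable when A has few negative eigenvalues."""
    lam, V = np.linalg.eigh(A)
    neg = lam < 0
    return A - (V[:, neg] * lam[neg]) @ V[:, neg].T
```

Both return the same matrix; the point is that an iterative eigensolver only needs to approximate the smaller of the two sets of eigenpairs.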

Error Analysis & Stopping Criteria
Algorithm 1 requires that the approximation errors in lines 3 and 5 are bounded by a summable sequence. As a result, bounds on the accuracy of the computed solutions are necessary to assess when the approximate algorithms (CG and LOBPCG) can be stopped.
For the approximate solution of the linear system (16) one can easily devise such bounds. Indeed, note that the left-hand matrix of (16), which we denote by Q, is fixed across iterations and is full rank. We can easily check if an approximate solution [x̃^{k+1}; z̃^{k+1}] satisfies the condition of Algorithm 1, since

‖[x̃^{k+1}; z̃^{k+1}] − Q^{−1}b^k‖ = ‖Q^{−1} r^k‖ ≤ ‖Q^{−1}‖ ‖r^k‖,    (22)

where b^k denotes the right-hand side of (16) and r^k := b^k − Q[x̃^{k+1}; z̃^{k+1}] is the (computable) residual. Since Q^{−1} is constant across iterations, its norm can be ignored when considering the summability of the approximation errors in (22). Thus, we can terminate CG (or any other iterative linear system solver employed) when the residual r^k of the approximate solution [x̃^{k+1}; z̃^{k+1}] has norm less than a summable sequence, e.g. 1/k².
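The reasoning above is easy to verify numerically; a small sketch with a random positive definite matrix standing in for the fixed KKT matrix (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
Q = M @ M.T + np.eye(n)       # stand-in for the fixed, full-rank KKT matrix
b = rng.standard_normal(n)

x_exact = np.linalg.solve(Q, b)
x_approx = x_exact + 1e-6 * rng.standard_normal(n)   # an inexact solution

r = b - Q @ x_approx                                 # computable residual
bound = np.linalg.norm(np.linalg.inv(Q), 2) * np.linalg.norm(r)
error = np.linalg.norm(x_approx - x_exact)
assert error <= bound    # ||x - x*|| <= ||Q^{-1}|| ||r||
```

Since the factor ‖Q^{−1}‖ is a fixed constant, driving the residual norm below a summable sequence drives the solution error below a (scaled) summable sequence as well.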
On the other hand, controlling the accuracy of the projection onto the Semidefinite Cone requires closer examination. Recall that, given a symmetric matrix A that is to be projected, our approach uses LOBPCG to compute a set of positive Ritz pairs (Ṽ, Λ̃) approximating the (V_+, Λ_+) of (1), which we then use to approximate the projection as Π_{S+}(A) ≈ Ṽ Λ̃ Ṽ^T. A straightforward approach would be to quantify the projection's accuracy with respect to the accuracy of the Ritz pairs. Indeed, if we assume that our approximate positive eigenspace is "sufficiently rich", in the sense that λ_max(Ṽ_⊥^T A Ṽ_⊥) ≤ 0, then we get m̃ = m [31, Theorem 10.1.1]; thus we can define ΔΛ := Λ_+ − Λ̃ and ΔV := V_+ − Ṽ, which gives a bound on the projection error that scales with Δ := max(‖ΔV‖, ‖ΔΛ‖). Standard results of eigenvalue perturbation theory can be used to bound the error in the computation of the eigenvalues in terms of the residual norm ‖R‖ := ‖AṼ − ṼΛ̃‖, but the corresponding bounds for the eigenvectors scale with ‖R‖/gap, where "gap" denotes the separation of the relevant eigenvalues. This implies that the projection accuracy depends on the separation of the spectrum and can be very poor in the presence of small spectral gaps. Note that, unlike R, which is readily computable from (Ṽ, Λ̃), "gap" is in general unknown and nontrivial to compute, thus further complicating the analysis.
To overcome these issues, we employ a novel bound showing that, although the accuracy of the Ritz pairs depends on the separation of the eigenvalues, the approximate projection does not:

Theorem 5.1 Assume that Ṽ has orthonormal columns and that Λ̃ = Ṽ^T A Ṽ. Then the error of the approximate projection is bounded by the sum of a term involving only the residual norm ‖AṼ − ṼΛ̃‖ and a term involving only max(λ_max(Ṽ_⊥^T A Ṽ_⊥), 0), i.e. independently of any spectral gaps.

Proof This is a restatement of [16, Corollary 2.1].
Note that the above result does not depend on the assumption that λ_max(Ṽ_⊥^T A Ṽ_⊥) is nonpositive, or that m̃ = m. Nevertheless, with a block Krylov subspace method it is often expected that λ_max(Ṽ_⊥^T A Ṽ_⊥) will be either small or negative, and thus that the bound of Theorem 5.1 will be tight.


Numerical Experiments

Our implementation is based on the solver COSMO.jl. COSMO.jl computes ADMM's steps to machine precision and supports any cone K_i for which a method to calculate its projection is provided. (Operations for testing whether a vector belongs to K_i, to its polar and to its recession cone must also be provided; these operations may be used to check for termination of the Algorithm, which, by default, is checked every 40 iterations. For the Semidefinite Cone, both of these tests can be implemented via the Cholesky factorization.) COSMO.jl provides default implementations for various cones, including the Semidefinite Cone, where LAPACK's syevr function is used for its projection. The modified solver used in the experiments of this section can be found online at https://github.com/nrontsis/ApproximateCOSMO.jl, and code reproducing the results of this section is also publicly available (for subsections 6.1 and 6.2, at https://github.com/nrontsis/SDPExamples.jl).

We compared the default version of COSMO.jl with a version where the syevr operation for the Semidefinite Cone is replaced with Algorithm 3. We have reimplemented BLOPEX, the original MATLAB implementation of LOBPCG [19], in Julia. For the purposes of simplicity, our implementation supports only standard symmetric eigenproblems without preconditioning. For these problems, our implementation was tested against BLOPEX to ensure that exactly the same results (up to machine precision) are returned for identical problems. Furthermore, following §5.1, we provide the option to compute all eigenvalues that are larger or smaller than a given bound.
At every iteration k of Algorithm 1, we compute approximate eigenpairs of every matrix that is to be projected onto the semidefinite cone. If, at the previous iteration of ADMM, a given matrix was estimated to have fewer than a third of its eigenvalues positive, then LOBPCG is used to compute its positive eigenpairs, according to (2) (middle). If it had fewer than a third of its eigenvalues negative, then LOBPCG computes its negative eigenpairs, according to (2) (right). Otherwise, a full eigendecomposition is used.
In every case, LOBPCG is terminated when all of the Ritz pairs have a residual with norm less than 10/k^{1.01}. According to §5.2, this implies that the projection errors are summable across ADMM's iterations, assuming that the rightmost term of Theorem 5.1 is negligible. Indeed, in our experiments these terms were found to converge to zero very quickly, and we therefore ignored them. A more theoretically rigorous approach would require the consideration of these terms, a bound on which can be obtained using e.g. a projected Lanczos algorithm, as discussed in §5.2.
The linear systems of Algorithm 1 are solved to machine precision via an LDL factorization [27, §16.2]. We did not rely on an approximate method for the solution of the linear systems because, in the problems that we considered, the projection onto the Semidefinite Cone required the majority of the total time of Algorithm 1. Nevertheless, the analysis presented in Sections 2–4 allows for the presence of approximation errors in the solution of the linear systems.

Results for the SDPLIB collection
We first consider problems of the SDPLIB collection in their dual form (25), i.e. maximizing ⟨F_0, Y⟩. The problems are stored in the sparse SDPA format, which was designed to efficiently represent SDP problems in which the matrices F_i, i = 0, …, m, are block diagonal with sparse blocks. If the matrices F_i consist of diagonal blocks, then the solution of (25) can be obtained by solving (26), where F_{i,j} denotes the j-th diagonal block of F_i and Y_j the respective block of Y. Note that (26) has more, but smaller, semidefinite variables than (25); it is therefore typically solved more efficiently than (25) by solvers like COSMO.jl.
As a result, our results refer to the solution of problems in the form (26).
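The advantage of form (26) stems from a basic fact: projecting a block-diagonal matrix onto the PSD cone decomposes into independent projections of its blocks, each of which costs cubically in the (much smaller) block size. A small NumPy check of this fact:

```python
import numpy as np

def proj_psd(A):
    """Exact projection of a symmetric matrix onto the PSD cone."""
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None)) @ V.T

rng = np.random.default_rng(0)
B1 = rng.standard_normal((4, 4)); B1 = (B1 + B1.T) / 2
B2 = rng.standard_normal((3, 3)); B2 = (B2 + B2.T) / 2

# Assemble the block-diagonal matrix and project it as a whole ...
full = np.zeros((7, 7))
full[:4, :4], full[4:, 4:] = B1, B2
P_full = proj_psd(full)

# ... versus projecting each block independently, as in form (26)
P_blocks = np.zeros((7, 7))
P_blocks[:4, :4], P_blocks[4:, 4:] = proj_psd(B1), proj_psd(B2)
```

Both routes give the same projection, but the blockwise route replaces one n x n eigendecomposition with several much cheaper ones.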
Table 1, presented in the Appendix, shows results on all problems of the SDPLIB collection for which the largest semidefinite variable is of size at least 50. We observe that our approach can lead to a significant speedup of up to 20x. At the same time, the robustness of the solver is not affected, in the sense that the number of iterations required to reach convergence is not, on average, increased by using approximate projections. Remarkably, for every problem that the original COSMO.jl implementation solves within 2500 iterations (the default maximum iteration limit), our approach also converges, with a faster overall solution time.

Infeasible Problems
Next, we demonstrate the asymptotic behavior of Algorithm 1 on problem infd1 of the SDPLIB collection. This problem can be expressed in the form (P) with C = {vec_u(X) : X ∈ S_+^30} (the set of vectorized 30 × 30 positive semidefinite matrices) and x ∈ R^10.
As its name suggests, infd1 is dual infeasible. Following [3, §5.2], COSMO detects dual infeasibility in conic problems when the certificate (4) holds approximately, that is, when δx^k ≠ 0, dist_{C^∞}(A x̄^k) < ε_dinf, and q^T x̄^k < ε_dinf, where x̄^k := δx^k/‖δx^k‖, for a positive tolerance ε_dinf. Figure 1 depicts the convergence of these quantities both when the projections onto the semidefinite cone are computed exactly and when they are computed approximately with LOBPCG.
The convergence of the successive differences to a certificate of dual infeasibility is practically identical in the two cases. To demonstrate the detection of primal infeasibility, we consider the dual of infd1. Following [3, §5.2], COSMO detects primal infeasibility in conic problems when the certificate (3) is satisfied approximately, that is, when δy^k ≠ 0 and the conditions of (3) hold with tolerance ε_pinf, where ȳ^k := δy^k/‖δy^k‖. Note that, for the dual of infd1, the first condition is trivial since P = 0. Figure 2 compares the convergence of our approach, against standard COSMO, to a certificate of infeasibility. LOBPCG yields practically identical convergence to the exact projection for all quantities except ‖A^T ȳ^k‖, where slower convergence is observed.
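The dual-infeasibility test can be sketched as follows. This is an illustrative stand-in, not COSMO's code: the helper dist_to_recession_cone and the tolerance name eps_dinf are our own names for the quantities dist_{C^∞}(·) and ε_dinf above:

```python
import numpy as np

def is_dual_infeasible(dx, A, q, dist_to_recession_cone, eps_dinf=1e-5):
    """Approximate dual-infeasibility test in the spirit of [3, §5.2]:
    normalize the successive difference dx and check certificate (4)
    approximately. Names here are illustrative stand-ins."""
    nrm = np.linalg.norm(dx)
    if nrm == 0.0:
        return False
    xbar = dx / nrm
    return dist_to_recession_cone(A @ xbar) < eps_dinf and q @ xbar < eps_dinf

# Example with C the nonnegative orthant, whose recession cone is itself:
# the distance is the norm of the negative part of the vector
dist_orthant = lambda v: np.linalg.norm(np.minimum(v, 0.0))
A = np.eye(3)
```

With q = -1 (a direction of unbounded dual descent) the test fires; with q = +1 it does not.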
Note that SDPLIB also contains two primal infeasible problems: infp1 and infp2. However, in these problems there is a single positive semidefinite variable of size 30 and, in ADMM, the matrices projected onto the semidefinite cone have rank 15 = 30/2 across all iterations (except the very first few). Thus, according to §5.1, LOBPCG yields identical results to the exact projection, and a comparison would therefore be of little value.

Conclusions
We have shown that state-of-the-art approximate eigensolvers can bring significant speedups to ADMM for Semidefinite Programming. We have extended the results of [3] to show that infeasibility can be detected even in the presence of appropriately controlled projection errors, thus ensuring the same overall asymptotic behavior as an exact ADMM method. Future research directions include exploring the performance of other state-of-the-art eigensolvers from the linear algebra community [37].

A Convergence of approximate iterations of nonexpansive operators
In this section we provide a proof of Theorem 3.1. We achieve this by generalizing some of the results of [32], [2] and [18] to account for sequences generated by approximate evaluations of averaged operators T for which cl R(Id − T) has the minimum property defined below. Note that cl R(Id − T) has the minimum property when T is defined as in (7), because the domain of (7) is convex [32, Lemma 5]. Thus Theorem 3.1 follows from the following result:

Proposition A.1 Consider some closed D ⊆ H, an averaged T : D → D, and assume that cl R(Id − T) has the minimum property. For any sequence defined as above for some x^0 ∈ D and a summable nonnegative sequence (ε_k)_{k∈N}, we have (27), because δ_k → 0. Note that both limits in the above equation exist and are bounded, as discussed above.
We will show (28) by showing that the following upper and lower bounds hold, where r := x^k − T x^k.
The proof of both bounds depends on the following equality. The upper bound follows easily from the triangle inequality, since δ_{k+i} → 0.
To get the lower bound, define u_i := T x^i − x^i and s := 1 − t, and note that for all i ∈ N, using (29) and the fact that δ_{k+i} → 0, we obtain the chain of inequalities in which [18, Lemma 1] was used, because of (32). Thus, since s > 0, we get the desired lower bound, which concludes the proof.
We can now proceed with the proof of Proposition A.1. We first show lim_{k→∞} x^k/k = −v, where v is the minimum-norm element of Definition A.1. The nonexpansiveness of T gives:

Notation: Let H denote a real Hilbert space equipped with the inner-product induced norm ‖·‖ = √⟨·,·⟩, and Cont(D) the set of nonexpansive operators on D ⊆ H. cl D denotes the closure of D, conv D the convex hull of D, and R(T) the range of T. Id denotes the identity operator on H, while I denotes an identity matrix of appropriate dimensions. For any nonnegative scalar ε, let x ≈_ε y denote the relation ‖x − y‖ ≤ ε. S_+ denotes the set of positive semidefinite matrices, with dimension obvious from the context. Finally, Π_C denotes the projection onto, C^∞ the recession cone of, and S_C the support function of a set C.

Theorem 2.1
Consider the iterates x^k, z^k, and y^k of Algorithm 1. If a KKT point exists for (P), then (x^k, z^k, y^k) converges to a KKT point, i.e. a solution of (P), as k → ∞. Otherwise, the successive differences δx := lim_{k→∞} x^{k+1} − x^k and δy := lim_{k→∞} y^{k+1} − y^k provide certificates of infeasibility, as detailed in points (i)-(iii).

exists. It remains to show points (i)-(iii) of Theorem 2.1. These are a direct consequence of [3, Theorem 5.1] and the following proposition:

Proposition 4.1
The limits δx := lim_{k→∞} x^{k+1} − x^k and δy := lim_{k→∞} y^{k+1} − y^k defined by the iterates of Algorithm 1 converge to the respective limits defined by the iterates of Algorithm 1 with µ_k = ν_k = 0 for all k ∈ N.

Algorithm 2: The Rayleigh-Ritz Procedure (used in Line 6 of Algorithm 3)
1 given A ∈ S^n and an n × m thin matrix S that spans the trial subspace;
2 orthonormalize S;
3 (Λ, W) ← eigendecomposition of S^T A S with Λ_(1,1) ≤ ··· ≤ Λ_(m,m);
4 return the Ritz vectors SW and Ritz values Λ of A on span(S);

(19) is simply the power method, which is known to converge linearly to an eigenvector associated with max |λ_i|. Other stepsize choices can also assure convergence to an eigenvector associated with max λ_i [10, 11.3.4], [1, Theorem 3]. Note that the Rayleigh-Ritz Procedure produces 3m Ritz pairs (or 2m in the first iteration of LOBPCG), of which usually m are approximate eigenpairs, and the number of positive Ritz values is always no more than the number of positive eigenvalues of A [31, 10.1.1]; thus the subspace X^k must be expanded when more than m positive Ritz values are found.
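A direct NumPy transcription of the four lines of Algorithm 2 is the following sketch:

```python
import numpy as np

def rayleigh_ritz(A, S):
    """Rayleigh-Ritz procedure: orthonormalize the trial basis S, solve the
    small projected eigenproblem, and return the Ritz pairs of A on span(S)."""
    Q, _ = np.linalg.qr(S)                 # Line 2: orthonormalize S
    lam, W = np.linalg.eigh(Q.T @ A @ Q)   # Line 3: m x m eigenproblem, ascending
    return lam, Q @ W                      # Line 4: Ritz values and Ritz vectors

# If S spans an invariant subspace, the Ritz pairs are exact eigenpairs
A = np.diag([1.0, 2.0, 3.0, 4.0])
S = np.array([[1.0, 1.0],
              [0.0, 0.0],
              [1.0, -1.0],
              [0.0, 0.0]])                 # spans {e1, e3}
lam, V = rayleigh_ritz(A, S)
```

The expensive operation is the m x m eigendecomposition, which is cheap when m is much smaller than n; this is why Rayleigh-Ritz is the workhorse inside LOBPCG.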

…of Theorem 5.1 will be dominated by ‖R‖_F. The assumption Λ̃ = Ṽ^T A Ṽ is satisfied when Ṽ and Λ̃ are generated with the Rayleigh-Ritz Procedure, and thus holds for Algorithm 3. In fact, the use of Rayleigh-Ritz, which is employed by Algorithm 3, is strongly suggested by Theorem 5.1, as it minimizes ‖R‖_F [31, Theorem 11.4.2]. We suggest terminating Algorithm 3 when every positive Ritz pair has a residual with norm bounded by a sequence that is summable across ADMM's iterations. Then, excluding the effect of ‖Π_{S+}(Ṽ_⊥^T A Ṽ_⊥)‖, which appears to be negligible according to the results of the next section, Theorem 5.1 implies that the summability requirements of Algorithm 3 will be satisfied. Either way, the term ‖Π_{S+}(Ṽ_⊥^T A Ṽ_⊥)‖ can be bounded by (n − m)λ_max^2(Ṽ_⊥^T A Ṽ_⊥), where λ_max(Ṽ_⊥^T A Ṽ_⊥) can be estimated with a projected Lanczos method.

In this section we provide numerical results for Semidefinite Programming with Algorithm 1, where the projection onto the Semidefinite Cone is performed with Algorithm 3. Our implementation is essentially a modification of the optimization solver COSMO.jl, an open-source Julia implementation of Algorithm 1 that allows for the solution of problems in the form (P) for which C is a composition of translated cones {K_i + b_i}. Normally,

Fig. 1: Convergence of (δx^k/‖δx^k‖)_{k∈N} to a certificate of dual infeasibility for problem infd1 from SDPLIB. A fixed value of ρ = 10^-3 is used in Algorithm 1.

Fig. 2: Convergence of (δy^k/‖δy^k‖)_{k∈N} to a certificate of primal infeasibility for the dual of problem infd1 from SDPLIB. A fixed value of ρ = 10^3 is used in Algorithm 1.

Definition A.1 (Minimum Property) Let K ⊆ H be closed and let v be the minimum-norm element of cl conv K. The set K has the minimum property if v ∈ K.
[31, …], which computes approximate eigenvalues/eigenvectors (called Ritz values/vectors) that are restricted to lie in a certain subspace and are optimal under several notions [31, 11.4] (see the discussion after Theorem 5.1). Indeed, every iterate x^{k+1} of (20) coincides with the last column of X^{k+1}, i.e. the largest Ritz vector, of the following Algorithm [31, Theorem 11.4.1].

Table 1: Results for the SDPLIB collection (§6.1). n_max denotes the dimension of the largest semidefinite variable in each problem, and "rank" the maximum number of Ritz pairs computed by LOBPCG in the last iteration of Algorithm 3. t_exact, t_proj_exact, Iter_exact, and f_exact denote the solution time (in seconds), the time spent projecting onto the semidefinite cone, the number of ADMM iterations, and the resulting objective when computing the projections exactly; Iter and f denote the same metrics when computing the projections approximately. "Speedup" and "Speedup_proj" denote the speedups achieved for the full ADMM algorithm and for the projection step alone when using LOBPCG, and f* denotes the optimal objective of each problem. Hardware used: Intel Gold 5120 with 192 GB of memory.