An Interior Point-Proximal Method of Multipliers for Linear Positive Semi-Definite Programming

In this paper we generalize the Interior Point-Proximal Method of Multipliers (IP-PMM) presented in Pougkakiotis and Gondzio (Comput Optim Appl 78:307–351, 2021. https://doi.org/10.1007/s10589-020-00240-9) for the solution of linear positive Semi-Definite Programming (SDP) problems, allowing inexactness in the solution of the associated Newton systems. In particular, we combine an infeasible Interior Point Method (IPM) with the Proximal Method of Multipliers (PMM) and interpret the algorithm (IP-PMM) as a primal-dual regularized IPM, suitable for solving SDP problems. We apply some iterations of an IPM to each sub-problem of the PMM until a satisfactory solution is found. We then update the PMM parameters, form a new IPM neighbourhood, and repeat this process. Given this framework, we prove polynomial complexity of the algorithm, under mild assumptions, and without requiring exact computations for the Newton directions. We furthermore provide a necessary condition for lack of strong duality, which can be used as a basis for constructing detection mechanisms for identifying pathological cases within IP-PMM.


Introduction
Positive Semidefinite Programming (SDP) problems have attracted a lot of attention in the literature for more than two decades, and have been used to model a plethora of different problems arising from control theory [3, Chapter 14], power systems [13], stochastic optimization [6], truss optimization [28], and many other application areas (e.g. see [3,27]). More recently, SDP has been extensively used for building tight convex relaxations of NP-hard combinatorial optimization problems (see [3, Chapter 12], and the references therein).
As a result of the seemingly unlimited applicability of SDP, numerous contributions have been made to optimization techniques suitable for solving such problems. The most remarkable milestone was achieved by Nesterov and Nemirovskii [17], who designed a polynomially convergent Interior Point Method (IPM) for the class of SDP problems. This led to the development of numerous successful IPM variants for SDP; some of theoretical (e.g. [15,29,31]) and others of practical nature (e.g. [4,5,16]). While IPMs enjoy fast convergence, in theory and in practice, each IPM iteration requires the solution of a very large-scale linear system, even for small-scale SDP problems. What is worse, such linear systems are inherently ill-conditioned. A viable and successful alternative to IPMs for SDP problems (e.g. see [30]), which circumvents the issue of ill-conditioning without significantly compromising convergence speed, is based on the so-called Augmented Lagrangian Method (ALM), which can be seen as the dual application of the proximal point method (as shown in [20]). The issue with ALMs is that, unlike IPMs, a consistent strategy for tuning the algorithm parameters is not known. Furthermore, polynomial complexity is lost, and is replaced with merely a finite termination. An IPM scheme combined with the Proximal Method of Multipliers (PMM) for solving SDP problems was proposed in [9], and was interpreted as a primal-dual regularized IPM. The authors established global convergence, and numerically demonstrated the efficiency of the method. However, the latter is not guaranteed to converge to an ε-optimal solution in a polynomial number of iterations, or even to find a global optimum in a finite number of steps. Finally, viable alternatives based on proximal splitting methods have been studied in [12,25]. Such methods are very efficient and require significantly fewer computations and less memory per iteration, as compared to IPM or ALM. However, as they are first-order methods, their convergence to high accuracy might be slow. Hence, such methodologies are only suitable for finding approximate, low-accuracy solutions.
In this paper, we are extending the Interior Point-Proximal Method of Multipliers (IP-PMM) presented in [19]. In particular, the algorithm in [19] was developed for convex quadratic programming problems and assumed that the resulting linear systems are solved exactly. Under this framework, it was proved that IP-PMM converges in a polynomial number of iterations, under mild assumptions, and an infeasibility detection mechanism was established. An important feature of this method is that it provides a reliable tuning for the penalty parameters of the PMM; indeed, the reliability of the algorithm is established numerically on a wide variety of convex problems in [7,8,10,19]. In particular, the IP-PMMs proposed in [7,8,10] use preconditioned iterative methods for the solution of the resulting linear systems, and are very robust despite the use of inexact Newton directions. In what follows, we develop and analyze an IP-PMM for linear SDP problems, which furthermore allows for inexactness in the solution of the linear systems that have to be solved at every iteration. We show that the method converges polynomially under standard assumptions. Subsequently, we provide a necessary condition for lack of strong duality, which can serve as a basis for constructing implementable detection mechanisms for pathological cases (following the developments in [19]). As is verified in [19], IP-PMM is competitive with standard non-regularized IPM schemes, and is significantly more robust. This is because the introduction of regularization prevents severe ill-conditioning and rank deficiency of the associated linear systems solved within standard IPMs, which can hinder their convergence and numerical stability. For detailed discussions on the effectiveness of regularization within IPMs, the reader is referred to [1,2,18], and the references therein. A particularly important benefit of using regularization is that the resulting Newton systems can be preconditioned effectively (e.g. see the developments in [7,10]), allowing for more efficient implementations, with significantly lowered memory requirements. We note that the paper is focused on the theoretical aspects of the method, and an efficient, scalable, and reliable implementation would require a separate study. Nevertheless, the practical effectiveness of IP-PMM (in terms of efficiency, scalability, and robustness) has already been demonstrated for linear, convex quadratic [7,10,19], and non-linear convex problems [8].
The rest of the paper is organized as follows. In Section 2, we provide some preliminary background and introduce our notation. Then, in Section 3, we provide the algorithmic framework of the method. In Section 4, we prove polynomial complexity of the algorithm, and establish its global convergence. In Section 5, a necessary condition for lack of strong duality is derived, and we discuss how it can be used to construct an implementable detection mechanism for pathological cases. Finally, we derive some conclusions in Section 6.

Primal-Dual Pair of SDP Problems
Let the vector space S^n := {B ∈ R^{n×n} : B = B^⊤} be given, endowed with the inner product ⟨A, B⟩ = Tr(AB), where Tr(·) denotes the trace of a matrix. In this paper, we consider the following primal-dual pair of linear positive semi-definite programming problems, in the standard form: where A is a linear operator on S^n, A^* is the adjoint of A, and X ⪰ 0 denotes that X is positive semi-definite. We note that the norm induced by the inner product ⟨A, B⟩ = Tr(AB) is in fact the Frobenius norm, denoted by ‖·‖_F. Furthermore, the adjoint A^* satisfies ⟨AX, y⟩ = ⟨X, A^* y⟩ for all X ∈ S^n and y ∈ R^m. For the rest of this paper, except for Section 5, we will assume that the linear operator A is onto and that problems (P) and (D) are both strictly feasible (that is, Slater's constraint qualification holds for both problems). It is well-known that, under the previous assumptions, the primal-dual pair (P)-(D) is guaranteed to have an optimal solution for which strong duality holds (see [17]). Such a solution can be found by solving the Karush-Kuhn-Tucker (KKT) optimality conditions for (P)-(D), which read as follows:
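For concreteness, the standard-form pair and its KKT conditions can be sketched as follows; this is a reconstruction from the surrounding text (the labels (P), (D), and (2.1) follow the paper's numbering, and the exact display in the original may differ):

```latex
\begin{aligned}
\text{(P)}\quad & \min_{X \in \mathcal{S}^n}\ \langle C, X\rangle
  \quad \text{s.t.}\quad \mathcal{A}X = b,\ \ X \succeq 0,\\
\text{(D)}\quad & \max_{y \in \mathbb{R}^m,\ Z \in \mathcal{S}^n}\ b^{\top}y
  \quad \text{s.t.}\quad \mathcal{A}^{*}y + Z = C,\ \ Z \succeq 0,
\end{aligned}
\qquad\qquad
\underbrace{\mathcal{A}^{*}y + Z = C,\quad \mathcal{A}X = b,\quad XZ = 0,\quad X \succeq 0,\ Z \succeq 0.}_{\text{KKT conditions (2.1)}}
```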

A Proximal Method of Multipliers
The author in [20] presented for the first time the Proximal Method of Multipliers (PMM), in order to solve general convex programming problems. Let us derive this method for the pair (P)-(D). Given an arbitrary starting point (X_0, y_0) ∈ S^n_+ × R^m, the PMM can be summarized by the following iteration: where μ_k is a positive penalty parameter. The previous iteration admits a unique solution, for all k.
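The PMM iteration (2.2) can be sketched as the following regularized step (a reconstruction, written so as to be consistent with the proximal penalty function (3.1) introduced later; the original display may be stated as a saddle-point problem instead):

```latex
\begin{aligned}
X_{k+1} &= \operatorname*{arg\,min}_{X \in \mathcal{S}^n_+}
  \Big\{ \langle C, X\rangle - y_k^{\top}(\mathcal{A}X - b)
  + \tfrac{1}{2\mu_k}\,\|\mathcal{A}X - b\|_2^2
  + \tfrac{\mu_k}{2}\,\|X - X_k\|_F^2 \Big\},\\
y_{k+1} &= y_k - \tfrac{1}{\mu_k}\big(\mathcal{A}X_{k+1} - b\big).
\end{aligned}
\tag{2.2}
```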
We can write (2.2) equivalently by making use of the maximal monotone operator T_L : R^m × S^n ⇒ R^m × S^n (see [20,21]), whose graph is defined as: where δ_{S^n_+}(·) is an indicator function defined as: and ∂(·) denotes the sub-differential of a function, hence (from [22, Corollary 23.5.4]): By convention, we have that ∂δ_{S^n_+}(X) = ∅ whenever X ∉ S^n_+. Given a bounded pair (X^*, y^*) such that (0, 0) ∈ T_L(X^*, y^*), we can retrieve a matrix Z^* ∈ ∂δ_{S^n_+}(X^*), using which (X^*, y^*, −Z^*) is an optimal solution for (P)-(D). By defining the proximal operator: where I_{n+m} is the identity operator of size n + m, and describes the direct sum of the identity operators of S^n and R^m, we can express (2.2) as: and it can be shown that P_k is single-valued and firmly non-expansive (see [21]).

An Infeasible Interior Point Method
In what follows we present a basic infeasible IPM suitable for solving the primal-dual pair (P)-(D). Such methods handle the conic constraints by introducing a suitable logarithmic barrier in the objective (for an extensive study of logarithmic barriers, the reader is referred to [17]). At each iteration, we choose a barrier parameter μ > 0 and form the logarithmic barrier primal-dual pair (2.7)-(2.8). The first-order (barrier) optimality conditions of (2.7)-(2.8) read as follows: where S^n_{++} := {B ∈ S^n : B ≻ 0}. For every chosen value of μ, we want to approximately solve the following non-linear system of equations: where, with a slight abuse of notation, we set w = (X, y, Z). Notice that F^{IPM}_{σ,μ}(w) = 0 is a perturbed form of the barrier optimality conditions. In particular, σ ∈ (0, 1) is a centering parameter which determines how fast μ will be forced to decrease at the next IPM iteration. For σ = 1 we recover the barrier optimality conditions in (2.9), while for σ = 0 we recover the optimality conditions in (2.1).
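The barrier conditions (2.9) and the perturbed system can be sketched as follows (a reconstruction, up to sign conventions; note that σ = 1 recovers (2.9) and σ = 0 recovers (2.1), as stated above):

```latex
\text{(2.9):}\quad
\mathcal{A}^{*}y + Z = C,\qquad \mathcal{A}X = b,\qquad XZ = \mu I,\qquad X,\, Z \in \mathcal{S}^n_{++},
\qquad\quad
F^{IPM}_{\sigma,\mu}(w) :=
\begin{pmatrix}
 C - \mathcal{A}^{*}y - Z\\[2pt]
 \mathcal{A}X - b\\[2pt]
 XZ - \sigma\mu I
\end{pmatrix} = 0 .
```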
In the IPM literature it is common to apply Newton's method to approximately solve the system of non-linear equations F^{IPM}_{σ,μ}(w) = 0. Newton's method is favored for systems of this form due to the self-concordance of the logarithmic barrier (see [17]). However, a well-known issue in the literature is that the matrix XZ is not necessarily symmetric. A common approach to tackle this issue is to employ a symmetrization operator H_P : R^{n×n} → S^n, such that H_P(XZ) = μI if and only if XZ = μI, given that X, Z ∈ S^n_+. Following Zhang ([29]), we employ such an operator H_P, parametrized by a non-singular matrix P. It can be shown that the central path (a key notion used in IPMs; see [17]) can be equivalently defined by H_P(XZ) = μI, for any non-singular P. In this paper, we will make use of a fixed choice of P_k at each iteration k; for a plethora of alternative choices, the reader is referred to [26]. We should note that the analysis in this paper can be tailored to different symmetrization strategies, and this choice is made for simplicity of exposition.
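Zhang's symmetrization operator [29], referenced above, takes the form:

```latex
H_P(M) := \tfrac{1}{2}\Big[ P M P^{-1} + \big( P M P^{-1} \big)^{\top} \Big],
\qquad M \in \mathbb{R}^{n\times n},
```

for a given non-singular matrix P; indeed, H_P(XZ) = μI if and only if XZ = μI, whenever X, Z ∈ S^n_+.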
At the beginning of the k-th iteration, we have w_k = (X_k, y_k, Z_k) and μ_k available, the latter defined as μ_k = ⟨X_k, Z_k⟩/n. By substituting the symmetrized complementarity into the last block equation and applying Newton's method, we obtain the following system of equations: where E_k := ∇_X H_{P_k}(X_k Z_k), and F_k := ∇_Z H_{P_k}(X_k Z_k).
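The resulting Newton system can be sketched as follows (a reconstruction, up to sign conventions; E_k and F_k are the derivatives of the symmetrized complementarity with respect to X and Z, respectively):

```latex
\begin{aligned}
\mathcal{A}^{*}\,\Delta y_k + \Delta Z_k &= C - \mathcal{A}^{*}y_k - Z_k,\\
\mathcal{A}\,\Delta X_k &= b - \mathcal{A}X_k,\\
E_k\,\Delta X_k + F_k\,\Delta Z_k &= \sigma_k\mu_k I - H_{P_k}(X_k Z_k).
\end{aligned}
```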

Vectorized Format
In what follows we vectorize the associated operators, in order to work with matrices. In particular, given any matrix B ∈ R^{m×n}, we denote its vectorized form by **B**, which is a vector of size mn, obtained by stacking the columns of B, from the first to the last. For the rest of this manuscript, any boldface letter denotes a vectorized matrix. Furthermore, if A : S^n → R^m is a linear operator, we can define it component-wise as (AX)_i := ⟨A_i, X⟩, for i = 1, . . ., m and any X ∈ S^n, where A_i ∈ S^n. The adjoint of this operator, that is A^* : R^m → S^n, is defined as A^* y := Σ_{i=1}^m y_i A_i, for all y ∈ R^m. Using this notation, we can equivalently write (P)-(D) in the following vectorized form: The first-order optimality conditions can be re-written accordingly. Below we summarize any additional notation that is used later in the paper. An iteration of the algorithm is denoted by k ∈ N. Given an arbitrary matrix A (resp., vector x), A_k (resp., x_k) denotes that the matrix (resp., vector) depends on the iteration k. An optimal solution to the pair (P)-(D) will be denoted as (X^*, y^*, Z^*). Optimal solutions of different primal-dual pairs will be denoted using an appropriate subscript, in order to distinguish them (e.g. (X^*_r, y^*_r, Z^*_r) will denote an optimal solution of a PMM sub-problem). Any norm (resp., semi-norm) is denoted by ‖·‖_χ, where χ is used to distinguish between different norms (e.g. ‖·‖_2 denotes the Euclidean norm). Given two matrices X, Y ∈ S^n_+, we write X ⪰ Y when X is larger than Y with respect to the Loewner ordering. Given two logical statements T_1, T_2, the condition T_1 ∧ T_2 is true only when both T_1 and T_2 are true. Given two real-valued positive increasing functions T(·) and f(·), we write T(x) = O(f(x)) (resp., T(x) = Ω(f(x))) if there exist constants c > 0 and x_0 such that T(x) ≤ c f(x) (resp., T(x) ≥ c f(x)), for all x ≥ x_0. We write T(x) = Θ(f(x)) if and only if T(x) = O(f(x)) and T(x) = Ω(f(x)). Finally, let an arbitrary matrix A be given. The maximum (resp., minimum) singular value of A is denoted by η_max(A) (resp., η_min(A)). Similarly, the maximum (resp., minimum) eigenvalue of a square matrix A is denoted by ν_max(A) (resp., ν_min(A)).
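To make the vectorized form concrete, the following NumPy sketch builds the matrix whose rows are the vectorized data matrices A_i, so that AX corresponds to a matrix-vector product and vec(A* y) to a product with the transpose. The helper names `vec`, `mat`, `sym`, and all data are ours, introduced purely for illustration; they are not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3  # illustrative sizes only

def vec(B):
    # Stack the columns of B into a single vector (column-major order).
    return B.reshape(-1, order="F")

def mat(v, n):
    # Inverse of vec: reshape a length-n^2 vector back into an n-by-n matrix.
    return v.reshape(n, n, order="F")

def sym(G):
    # Symmetrize an arbitrary square matrix, producing an element of S^n.
    return (G + G.T) / 2

# Symmetric data matrices A_1, ..., A_m defining the operator (AX)_i = <A_i, X>.
A_list = [sym(rng.standard_normal((n, n))) for _ in range(m)]
X = sym(rng.standard_normal((n, n)))
y = rng.standard_normal(m)

# Operator form: (AX)_i = Tr(A_i X), and the adjoint A* y = sum_i y_i A_i.
AX_op = np.array([np.trace(Ai @ X) for Ai in A_list])
Astar_y_op = sum(yi * Ai for yi, Ai in zip(y, A_list))

# Vectorized form: the rows of A_mat are vec(A_i), hence
# AX = A_mat @ vec(X)  and  vec(A* y) = A_mat.T @ y.
A_mat = np.vstack([vec(Ai) for Ai in A_list])   # shape (m, n^2)
AX_vec = A_mat @ vec(X)
Astar_y_vec = mat(A_mat.T @ y, n)

assert np.allclose(AX_op, AX_vec)
assert np.allclose(Astar_y_op, Astar_y_vec)
```

Note that ⟨A_i, X⟩ = vec(A_i)^⊤ vec(X) holds here because each A_i is symmetric, which is exactly why the vectorized pair carries the same information as the operator form.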

An Interior Point-Proximal Method of Multipliers for SDP
In this section we present an inexact extension of the IP-PMM presented in [19], suitable for solving problems of the form of (P)-(D). Assume that we have available an estimate λ_k for a Lagrange multiplier vector at iteration k. Similarly, denote by Ξ_k ∈ S^n_+ an estimate of a primal solution. As we discuss later, these estimate sequences (i.e. {λ_k}, {Ξ_k}) are produced by the algorithm, and represent the dual and primal proximal estimates, respectively. During the k-th iteration of the PMM, applied to (P), the following proximal penalty function has to be minimized: with μ_k > 0 being some non-increasing penalty parameter. Notice that this is equivalent to the iteration (2.2). We approximately minimize (3.1) by applying one (or a few) iterations of the previously presented infeasible IPM. We alter (3.1) by adding a logarithmic barrier: and we treat μ_k as the barrier parameter. In order to form the optimality conditions of this sub-problem, we equate the gradient of L^{IP-PMM}_{μ_k}(·; Ξ_k, λ_k) to the zero vector, i.e.: where the second system is obtained by introducing the symmetrization in (2.10), and by vectorizing the associated matrices and operators.
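In line with the PMM iteration (2.2), the proximal penalty (3.1) and its barrier counterpart (3.2) can be sketched as follows (a reconstruction; the barrier term uses ln det, as is standard for SDP):

```latex
\mathcal{L}^{PMM}_{\mu_k}(X;\,\Xi_k,\lambda_k) :=
 \langle C, X\rangle - \lambda_k^{\top}(\mathcal{A}X - b)
 + \tfrac{1}{2\mu_k}\,\|\mathcal{A}X - b\|_2^2
 + \tfrac{\mu_k}{2}\,\|X - \Xi_k\|_F^2,
\tag{3.1}
```

```latex
\mathcal{L}^{IP\text{-}PMM}_{\mu_k}(X;\,\Xi_k,\lambda_k) :=
 \mathcal{L}^{PMM}_{\mu_k}(X;\,\Xi_k,\lambda_k) - \mu_k \ln\det(X).
\tag{3.2}
```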
Given an arbitrary vector b ∈ R^m and matrix C ∈ R^{n×n}, we define the semi-norm ‖(b, C)‖_S in (3.4). A similar semi-norm was used before in [15], as a way to measure infeasibility in the case of linear programming problems. For a discussion of the properties of the aforementioned semi-norm, as well as how to evaluate it (using an appropriate QR factorization, which can be computed in polynomial time), the reader is referred to [15, Section 4].
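Following the linear-programming analogue in [15], the semi-norm (3.4) can be sketched as a least-norm measure of infeasibility (a reconstruction; it is consistent with the bound ‖(X̂, Ẑ)‖_2 ≤ γ_S ρ derived from it in the proof of Lemma 4.3):

```latex
\|(b, C)\|_{S} := \min_{X,\,y,\,Z}\
\Big\{ \big\|(X, Z)\big\|_2 \ :\ \mathcal{A}X = b,\ \ \mathcal{A}^{*}y + Z = C \Big\}.
\tag{3.4}
```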
Starting Point. Let us define the starting point for IP-PMM. For that, we set (X_0, Z_0) = ρ(I_n, I_n), for some ρ > 0. We also set y_0 to some arbitrary value (e.g. y_0 = 0), and let μ_0 = ⟨X_0, Z_0⟩/n. Using the aforementioned triple, we have: for some b̄ ∈ R^m and C̄ ∈ S^n.
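The relations (3.5) can be sketched as follows (a reconstruction; b̄ and C̄ collect the initial primal and dual infeasibilities of the chosen triple):

```latex
\mathcal{A}X_0 = b + \bar{b}, \qquad
\mathcal{A}^{*}y_0 + Z_0 = C + \bar{C},
\tag{3.5}
```

for some b̄ ∈ R^m and C̄ ∈ S^n.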
Neighbourhood. Below, we describe a neighbourhood in which the iterates of the method should lie. Unlike most path-following methods, we have to define a family of neighbourhoods that depend on the PMM sub-problem parameters. Given (3.5), some μ_k, λ_k, and Ξ_k, we define the regularized set of centers: where b̄, C̄ are as in (3.5). The term set of centers originates from [15].
We enlarge the previous set by defining the following set: where K_N > 0 is a constant, γ_S ∈ (0, 1), and ρ > 0 is as defined in the starting point. The vector b̃_k and the matrix C̃_k represent the current infeasibilities, scaled by μ_0/μ_k, and will vary depending on the iteration k. While these can be defined recursively, this is not necessary; it suffices to know that they satisfy the bounds given in the definition of the previous set. In essence, the previous set requires these scaled infeasibilities to be bounded above by some constants, with respect to the 2-norm as well as the semi-norm defined in (3.4). We can now define a family of neighbourhoods: where γ_μ ∈ (0, 1) is a constant restricting the symmetrized complementarity products. Obviously, the starting point defined in (3.5) belongs to the neighbourhood N_{μ_0}(Ξ_0, λ_0), with (b̃_0, C̃_0) = (0, 0). Notice that the neighbourhood depends on the choice of the constants K_N, γ_S, γ_μ. However, as the neighbourhood also depends on the parameters μ_k, λ_k, Ξ_k, the dependence on the constants is omitted for simplicity of notation.
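Schematically, and modeled on the LP/QP neighbourhood of [19] (a rough sketch only; the precise residual scaling and centrality condition are as in the paper), the enlarged set and the neighbourhood take the form:

```latex
\Big\{ (X,y,Z) :\
 \mathcal{A}X + \mu_k (y - \lambda_k) = b + \tfrac{\mu_k}{\mu_0}\big(\bar{b} + \tilde{b}_k\big),\ \
 \mathcal{A}^{*}y + Z - \mu_k (X - \Xi_k) = C + \tfrac{\mu_k}{\mu_0}\big(\bar{C} + \tilde{C}_k\big),
```

```latex
 \quad \|(\tilde{b}_k, \tilde{C}_k)\|_2 \le K_N,\ \ \|(\tilde{b}_k, \tilde{C}_k)\|_S \le \gamma_S\,\rho \Big\},
\qquad
\mathcal{N}_{\mu_k}(\Xi_k,\lambda_k) := \big\{ (X,y,Z)\ \text{in the set above} :\
 X,\,Z \succ 0,\ \ \nu_{\min}\!\big(H_{P_k}(XZ)\big) \ge \gamma_{\mu}\,\mu_k \big\}.
```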
Newton System. As discussed earlier, we employ the Newton method for approximately solving a perturbed form of system (3.3), for all k. In particular, we perturb (3.3) in order to take into consideration the target reduction of the barrier parameter μ_k (by introducing the centering parameter σ_k), as well as to incorporate the initial infeasibility, given our starting point in (3.5). In particular, we would like to approximately solve the following system: where b̄, C̄ are as in (3.5). We note that we could either first linearize the last block equation of (3.3) and then apply the symmetrization defined in (2.10), or first apply the symmetrization directly to the last block equation of (3.3) and then linearize it. Both approaches are equivalent. Hence, following the former approach, we obtain the vectorized Newton system that has to be solved at every iteration of IP-PMM: where (E_{d,k}, ε_{p,k}, E_{μ,k}) models potential errors, occurring by solving the symmetrized version of system (3.7) inexactly (e.g. by using a Krylov subspace method). In order to make sure that the computed direction is accurate enough, we impose the following accuracy conditions: where σ_min is the minimum allowed value for σ_k, K_N, γ_S are constants defined by the neighbourhood in (3.6), and ρ is defined in the starting point in (3.5). Notice that the condition ‖E_{μ,k}‖_2 = 0 is imposed without loss of generality, since it can be easily satisfied in practice. For more on this, see the discussion in [31, Section 3] and [11, Lemma 4.1]. Furthermore, as we will observe in Section 4, the bound on the error with respect to the semi-norm defined in (3.4) is required to ensure polynomial complexity of the method. While evaluating this semi-norm is not particularly practical (and it is never evaluated in practice, e.g. see [7,8,19]), it can be done in polynomial time (see [15, Section 4]), and hence does not affect the polynomial nature of the algorithm. The algorithmic scheme of the method is summarized in Algorithm IP-PMM. Algorithm IP-PMM deviates from standard IPM schemes due to the solution of a different Newton system, as well as due to the possible updates of the proximal estimates, i.e. Ξ_k and λ_k. Notice that when these estimates are updated, the neighbourhood in (3.6) changes as well, since it is parametrized by them. Intuitively, when this happens, the algorithm accepts the current iterate as a sufficiently accurate solution to the associated PMM sub-problem. However, as we will see in Section 4, it is not necessary for these estimates to converge to a primal-dual solution for Algorithm IP-PMM to converge.

Algorithm IP-PMM Interior Point-Proximal Method of Multipliers
Starting point: Set as in (3.5).
for k = 0, 1, 2, . . . do
Choose σ_k ∈ [σ_min, σ_max] and solve (3.8) so that (3.9) holds. Choose α_k as the largest α ∈ (0, 1] such that μ_k(α) ≤ (1 − 0.01α)μ_k and: where μ_k(α) := ⟨X_k + αΔX_k, Z_k + αΔZ_k⟩/n.

Instead, it suffices to ensure that these estimates will remain bounded. In light of this, Algorithm IP-PMM is not studied as an inner-outer scheme, but rather as a standard IPM scheme. We will return to this point at the end of Section 4.

Convergence Analysis
In this section we prove polynomial complexity of Algorithm IP-PMM, and establish its global convergence. The analysis is modeled after that in [19]. We make use of the following two standard assumptions, generalizing those employed in [19] to the SDP case.
Assumption 1. The problems (P) and (D) are strictly feasible, that is, Slater's constraint qualification holds for both problems. Furthermore, there exists an optimal solution (X^*, y^*, Z^*) and a constant K^* > 0, independent of n and m, such that:
Assumption 2. The vectorized constraint matrix A of (P) has full row rank, that is, rank(A) = m. Moreover, there exist constants K_{A,1} > 0, K_{A,2} > 0, K_{r,1} > 0, and K_{r,2} > 0, independent of n and m, such that:
Remark 1. Note that the independence of the previous constants from n and m is assumed for simplicity of exposition. In particular, as long as these constants depend polynomially on n (or m), the analysis still holds, simply by altering the worst-case polynomial bound on the number of iterations of the algorithm (given later in Theorem 4.12).
Remark 2. Assumption 1 is a direct extension of [19, Assumption 1]. Given the positive semi-definiteness of X^* and Z^*, it implies that Tr(X^*) + Tr(Z^*) ≤ 2K^* n (from the equivalence of the norms ‖·‖_1 and ‖·‖_2), which is one of the assumptions employed in [29,31]. Notice that we assume n > m, without loss of generality. The theory in this section would still hold if m > n, simply by replacing n by m in the upper bound on the norm of the optimal solution, as well as of the problem data.
Before proceeding with the convergence analysis, we briefly provide an outline of it, for the convenience of the reader. Firstly, it should be noted that polynomial complexity, as well as global convergence, of Algorithm IP-PMM is proven by induction on the iterations k of the method. To that end, we provide some necessary technical results in Lemmas 4.1-4.3. Then, in Lemma 4.4 we show that the iterates (X_k, y_k, Z_k) of Algorithm IP-PMM remain bounded for all k. Subsequently, we provide some additional technical results in Lemmas 4.5-4.7, which are then used in Lemma 4.8, where we show that the Newton direction computed at every iteration k is also bounded. All the previous are utilized in Lemmas 4.9-4.10, where we provide a lower bound for the step-length α_k chosen by Algorithm IP-PMM at every iteration k. Then, Q-linear convergence of μ_k (with R-linear convergence of the regularized residuals) is shown in Theorem 4.11. Polynomial complexity is proven in Theorem 4.12, and finally, global convergence is established in Theorem 4.13.
Let us now use the properties of the proximal operator defined in (2.5).
Proof. The thesis follows from the developments in [21, Proposition 1].
In the following lemma, we bound the solution of every PMM sub-problem encountered by Algorithm IP-PMM, while establishing bounds for the proximal estimates Ξ_k and λ_k.
Lemma 4.2. Given Assumptions 1 and 2, there exists a triple (X^*_{r_k}, y^*_{r_k}, Z^*_{r_k}) satisfying:
Proof. We prove the claim by induction on the iterates, k ≥ 0, of Algorithm IP-PMM. At iteration k = 0, we have that λ_0 = y_0 and Ξ_0 = X_0. But from the construction of the starting point in (3.5), we know that ‖(X_0, y_0)‖_2 = O(√n) (assuming n > m). Invoking Lemma 4.1, there exists a unique pair (X^*_{r_0}, y^*_{r_0}) such that: where (X^*, y^*, Z^*) solves (P)-(D) and, from Assumption 1, is such that ‖(X^*, y^*, Z^*)‖_2 = O(√n). Using the triangle inequality, and combining the latter inequality with our previous observations, yields that ‖(X^*_{r_0}, y^*_{r_0})‖_2 = O(√n). From the definition of the operator in (2.6), we know that: where ∂δ_{S^n_+}(·) is the sub-differential of the indicator function defined in (2.4). Hence, there must exist −Z^*_{r_0} ∈ ∂δ_{S^n_+}(X^*_{r_0}) (and thus, Z^*_{r_0} ∈ S^n_+ and ⟨X^*_{r_0}, Z^*_{r_0}⟩ = 0), such that: Let us now assume that the claim holds at some iteration k of Algorithm IP-PMM. There are two cases for the subsequent iterations: 1. the proximal estimates are updated, that is, (Ξ_{k+1}, λ_{k+1}) = (X_{k+1}, y_{k+1}), or 2. the proximal estimates stay the same, i.e. (Ξ_{k+1}, λ_{k+1}) = (Ξ_k, λ_k).
Case 1. We know by construction that this occurs only if the following is satisfied: where r_p, R_d are defined in Algorithm IP-PMM. However, from the neighbourhood conditions in (3.6), we know that: Combining the last two inequalities by applying the triangle inequality, and using the inductive hypothesis, yields ‖(Ξ_{k+1}, λ_{k+1})‖_2 = O(√n). Then, we can invoke Lemma 4.1, with λ = λ_{k+1}, Ξ = Ξ_{k+1}, and any μ ≥ 0. A simple manipulation then shows that ‖(X^*_{r_{k+1}}, y^*_{r_{k+1}})‖_2 = O(√n).
In the next lemma we define and bound a triple solving a particular parametrized nonlinear system of equations, which is then used in Lemma 4.4 in order to prove boundedness of the iterates of Algorithm IP-PMM.
Lemma 4.3. Given Assumptions 1 and 2, and (Ξ_k, λ_k), produced at an arbitrary iteration k ≥ 0 of Algorithm IP-PMM, and any μ ∈ [0, ∞), there exists a triple (X̃, ỹ, Z̃) which satisfies the following system of equations: for some arbitrary θ > 0, where b̃_k, C̃_k are defined in (3.6), while b̄, C̄ are defined with the starting point in (3.5). Furthermore, ν_min(X̃) ≥ ξ and ν_min(Z̃) ≥ ξ, for some positive ξ = Θ(1).
Proof. Let k ≥ 0 denote an arbitrary iteration of Algorithm IP-PMM. Let also b̄, C̄ be as defined in (3.5), and b̃_k, C̃_k as defined in the neighbourhood conditions in (3.6). Given an arbitrary positive constant θ > 0, we consider the following barrier primal-dual pair: Let us now define the triple (X̂, ŷ, Ẑ) as the minimizer associated with the semi-norm ‖(b̃_k, C̃_k)‖_S in (3.4). From the neighbourhood conditions (3.6), we know that ‖(b̃_k, C̃_k)‖_S ≤ γ_S ρ, and thus, from the definition of the semi-norm in (3.4), we have that ‖(X̂, Ẑ)‖_2 ≤ γ_S ρ. Using (3.4) alongside Assumption 2, we can also show that ‖ŷ‖_2 = Θ(‖(X̂, Ẑ)‖_2). On the other hand, from the definition of the starting point, we have that (X_0, Z_0) = ρ(I_n, I_n). We define the following auxiliary point: (X̄, ȳ, Z̄) = (X_0, y_0, Z_0) + (X̂, ŷ, Ẑ). Since ‖(X̂, Ẑ)‖_2 ≤ γ_S ρ with γ_S ∈ (0, 1), the eigenvalues of X̄ and Z̄ lie in [(1 − γ_S)ρ, (1 + γ_S)ρ], that is, the eigenvalues of these matrices are bounded by constants that are independent of the problem under consideration. By construction, the triple (X̄, ȳ, Z̄) is a feasible solution for the primal-dual pair in (4.4)-(4.5), giving bounded primal and dual objective values, respectively. This, alongside Weierstrass's theorem applied to a potential function, can be used to show that the solution of problem (4.4)-(4.5) is bounded. In other words, for any choice of θ > 0, there must exist a bounded triple (X^*_s, y^*_s, Z^*_s) solving (4.4)-(4.5), i.e.: In turn, combining this with Assumption 2 implies that ‖(X^*_s, y^*_s, Z^*_s)‖_2 is bounded accordingly. Let us now apply the PMM to (4.4)-(4.5), given the estimates Ξ_k, λ_k. We should note at this point that the proximal operator used here is different from that in (2.5), since it is based on a different maximal monotone operator to that in (2.3). In particular, we associate a single-valued maximal monotone operator to (4.4)-(4.5), with graph: As before, the proximal operator is defined as P̃ := (I_{n+m} + T̃_L)^{−1}, and is single-valued and non-expansive. We let any μ ∈ [0, ∞) and define the following penalty function: By defining the variables y = λ_k − (1/μ)(AX − (b + b̄ + b̃_k)) and Z = θX^{−1}, we can see that the optimality conditions of this PMM sub-problem are exactly those stated in (4.3). Equivalently, we can find a pair (X̃, ỹ) such that (X̃, ỹ) = P̃(Ξ_k, λ_k), and set Z̃ = θX̃^{−1}. We can now use the non-expansiveness of P̃, as in Lemma 4.1, to obtain: But we know, from Lemma 4.2, that ‖(Ξ_k, λ_k)‖_2 = O(√n). Combining this with our previous observations yields that ‖(X̃, ỹ)‖_2 = O(√n). To conclude the proof, notice that the value of L̃_{μ,θ}(X; Ξ_k, λ_k) grows unbounded as ν_min(X) → 0 or ν_max(X) → ∞. Hence, there must exist a constant K > 0 such that the minimizer of this function has its eigenvalues bounded within [K^{−1}, K]. Hence, there exists some ξ = Θ(1) such that ν_min(X̃) ≥ ξ and ν_min(Z̃) ≥ ξ.
In the following lemma, we derive boundedness of the iterates of Algorithm IP-PMM.
Lemma 4.4. Given Assumptions 1 and 2, the iterates (X_k, y_k, Z_k) produced by Algorithm IP-PMM, for all k ≥ 0, are such that:
Proof. Let the triple (X_k, y_k, Z_k), produced by Algorithm IP-PMM during an arbitrary iteration k ≥ 0, be given. Firstly, we invoke Lemma 4.3, from which we have a triple (X̃, ỹ, Z̃) satisfying (4.3), for μ = μ_k. Similarly, by invoking Lemma 4.2, we know that there exists a triple (X^*_{r_k}, y^*_{r_k}, Z^*_{r_k}) satisfying (4.2), with μ = μ_k. Consider the following auxiliary point: Using (4.6) and (4.2)-(4.3) (for μ = μ_k), one can observe that: where the last equality follows from the definition of the neighbourhood N_{μ_k}(Ξ_k, λ_k). Similarly, one can show that: By combining the previous two relations, we have: Observe that (4.7) can equivalently be written as: However, from Lemmas 4.2 and 4.3, we have that X̃ ⪰ ξI_n and Z̃ ⪰ ξI_n, for some positive constant ξ = Θ(1), and ‖(X̃, Z̃)‖_F = O(√n). Furthermore, by definition we have that nμ_k = ⟨X_k, Z_k⟩. By combining all the previous, we obtain: where the first inequality follows since X^*_{r_k}, Z^*_{r_k}, X̃, Z̃ ∈ S^n_+ and (X̃, Z̃) ⪰ ξ(I_n, I_n). In the penultimate equality we used (4.2) (i.e. ⟨X^*_{r_k}, Z^*_{r_k}⟩ = 0). Hence, (4.8) implies that: From positive definiteness, we have that ‖(X_k, Z_k)‖_F = O(n). Finally, from the neighbourhood conditions we know that: All terms above (except for y_k) have a 2-norm that is bounded by some quantity that is O(n) (note that ‖(C̄, b̄)‖_2 = O(√n), using Assumption 2 and the definition in (3.5)). Hence, using again Assumption 2 (i.e. A is full rank, with singular values independent of n and m) yields that ‖y_k‖_2 = O(n), and completes the proof.
In what follows, we provide Lemmas 4.5-4.7, which we use to prove boundedness of the Newton direction computed at every iteration of Algorithm IP-PMM, in Lemma 4.8.
Lemma 4.5. Let D_k be given, where S_k = E_k F_k, and E_k, F_k are defined as in the Newton system in (3.8). Then, for any M ∈ R^{n×n}: where γ_μ is defined in (3.6). Moreover, we have that:
Proof. The proof of the first two inequalities follows exactly the developments in [31, Lemma 5]. The bound on the 2-norm of the matrix D_k^{−⊤} follows by choosing M such that its vectorization is a unit eigenvector corresponding to the largest eigenvalue of D_k^{−⊤}. But, we have that: where we used the cyclic property of the trace, as well as Lemma 4.4. The same reasoning applies to deriving the bound for ‖D_k‖_2^2.
Lemma 4.6. Let D_k and S_k be defined as in Lemma 4.5. Then, we have that: where R_{μ,k} denotes the residual associated with the third block equation of (3.8). Furthermore: where γ_μ is defined in (3.6).
Proof. The equality follows directly by pre-multiplying both sides of the third block equation of the Newton system in (3.8) by S_k^{−1/2}, and by then taking the 2-norm (see [29, Lemma 3.1]). For a proof of the inequality, we refer the reader to [29, Lemma 3.3].
Lemma 4.7. Let S_k be as defined in Lemma 4.5, and R_{μ,k} as defined in Lemma 4.6. Then:
Proof. The proof is omitted, since it follows exactly the developments in [31, Lemma 7].
We are now ready to derive bounds for the Newton direction computed at every iteration of Algorithm IP-PMM.Lemma 4.8.Given Assumptions 1 and 2, and the Newton direction (∆X k , ∆y k , ∆Z k ) obtained by solving system (3.8)during an arbitrary iteration k ≥ 0 of Algorithm IP-PMM, we have that: Proof.Consider an arbitrary iteration k of Algorithm IP-PMM.We invoke Lemmas 4.2, 4.3, for µ = σ k µ k .That is, there exists a triple (X * r k , y * r k , Z * r k ) satisfying (4.2), and a triple ( X, ỹ, Z) satisfying (4.3), for µ = σ k µ k .Using the centering parameter σ k , define: where b, C, µ 0 are given by (3.5) and ǫ p,k , E d,k model the errors which occur when system (3.7) is solved inexactly.Notice that these errors are required to satisfy (3.9) at every iteration k.Using Lemmas 4.2, 4.3, 4.4, relation (3.9), and Assumption 2, we know that ( Ĉ, b) 2 = O(n).Then, by applying again Assumption 2, we know that there must exist a matrix X ∈ R n×n such that A X = b, X F = O(n), and by setting Ẑ = Ĉ + µ X, we have that Ẑ F = O(n) and: Using (X * r k , y * r k , Z * r k ), ( X, ỹ, Z), as well as the triple ( X, 0, Ẑ), where ( X, Ẑ) is defined in (4.10), we define the following auxiliary triple: Using (4.11), (4.9), and the second block equation of (3.8): Then, by deleting opposite terms in the right-hand side, and employing (4.2)-(4.3)(evaluated at µ = σ k µ k from the definition of (X * r k , y * r k , Z * r k ) and ( X, ỹ, Z)), we have where the last equation follows from the neighbourhood conditions (i.e.
. Similarly, we can show that: The previous two equalities imply that: On the other hand, using the last block equation of the Newton system (3.8), we have: where R_{μ,k} is defined as in Lemma 4.6. Let S_k be defined as in Lemma 4.5. By multiplying both sides of the previous equation by S_k, we get: But from (4.12) we know that ⟨X̄, Z̄⟩ ≥ 0, and hence: Combining (4.13) with the previous inequality gives: We take square roots, use (4.11), and apply the triangle inequality, to get: We now proceed to bounding the terms on the right-hand side of (4.14). A bound for the first term of the right-hand side is given by Lemma 4.7, that is: On the other hand, we have (from Lemma 4.5) that: Hence, using the previous bounds, as well as Lemmas 4.2, 4.3, and (4.10), we obtain: Combining all the previous bounds yields the desired bound on ‖D_k^(−1) ΔX_k‖_2. One can bound ‖D_k ΔZ_k‖_2 in the same way; the derivation is omitted for ease of presentation.
Furthermore, we have that ‖ΔX_k‖_2 = O(n^3). Similarly, we can show that ‖ΔZ_k‖_2 = O(n^3). From the first block equation of the Newton system in (3.8), alongside Assumption 2, we can show that ‖Δy_k‖_2 = O(n^3).
Finally, using the previous bounds, as well as Lemma 4.6, we obtain the desired bound on ‖H_{P_k}(ΔX_k ΔZ_k)‖_F, that is: which completes the proof.
We can now prove (Lemmas 4.9-4.10) that at every iteration of Algorithm IP-PMM there exists a step-length α_k > 0, using which the new iterate satisfies the conditions required by the algorithm. The lower bound on any such step-length will later determine the polynomial complexity of the method. To that end, we adopt the following notation:

Lemma 4.9. Given Assumptions 1 and 2, there exists a step-length α* ∈ (0, 1), such that for all α ∈ [0, α*] and for all iterations k ≥ 0 of Algorithm IP-PMM, the following relations hold: where, without loss of generality, β_1 = σ_min/2 and β_2 = 0.99. Moreover, α* ≥ κ*/n^4 for all k ≥ 0, where κ* > 0 is independent of n and m.
Proof. From Lemma 4.8, there exist constants K_Δ > 0 and K_{HΔ} > 0, independent of n and m, such that: From the last block equation of the Newton system (3.7), we can show that: The latter can also be obtained from (3.8), since we require E_{μ,k} = 0. Furthermore: where (X_{k+1}, y_{k+1}, Z_{k+1}) = (X_k + αΔX_k, y_k + αΔy_k, Z_k + αΔZ_k). We proceed by proving (4.15). Using (4.18), we have: where we set (without loss of generality) β_1 = σ_min/2. The right-most side of the previous inequality is non-negative for every α satisfying: In order to prove (4.16), we use (4.19) and the fact that, from the neighbourhood conditions, For that, we use the result in [29, Lemma 4.2], stating that: By combining all the previous, we have: where we used the neighbourhood conditions in (3.6), the equality which can be derived from (4.18), and the third block equation of the Newton system (3.8). The right-most side of the previous is non-positive for every α satisfying: Finally, to prove (4.17), we set (without loss of generality) β_2 = 0.99. We know, from Algorithm IP-PMM, that σ_max ≤ 0.5. With the previous two remarks in mind, we have: The last term is non-positive for every α satisfying: By combining all the previous bounds on the step-length, we have that (4.15)-(4.17) hold for every α ∈ (0, α*), where: Since α* = Ω(1/n^4), we know that there must exist a constant κ* > 0, independent of n, m, and of the iteration k, such that α* ≥ κ*/n^4 for all k ≥ 0, and this completes the proof.
Lemma 4.10. Given Assumptions 1 and 2, at every iteration k ≥ 0 of Algorithm IP-PMM there exists a step-length ᾱ ∈ (0, α*], with ᾱ ≥ κ̄/n^4 for some constant κ̄ > 0 independent of n and m, such that the new iterate lies in the updated neighbourhood for all α ∈ (0, ᾱ).

Proof. Let α* be given as in Lemma 4.9 (i.e. in (4.20)), such that (4.15)-(4.17) are satisfied. We would like to find the maximum ᾱ ∈ (0, α*), such that: where and In other words, we need to find the maximum ᾱ ∈ (0, α*), such that: for all α ∈ (0, ᾱ). (4.23) If the latter two conditions hold, then (X_k(α), y_k(α), Z_k(α)) belongs to the updated neighbourhood for all α ∈ (0, ᾱ). Then, if Algorithm IP-PMM updates Ξ_k and λ_k, it does so only when similar conditions (as in (4.23)) hold for the new parameters. Indeed, notice that the estimates Ξ_k and λ_k are only updated if the last conditional of Algorithm IP-PMM is satisfied. But this is equivalent to saying that (4.23) is satisfied after setting Ξ_k = X_k(α) and λ_k = y_k(α). On the other hand, if the parameters are not updated, the new iterate lies in the desired neighbourhood because of (4.23), alongside (4.15)-(4.17). We start by rearranging r_p(α). Specifically, we have that: where we used the definition of b_k in the neighbourhood conditions in (3.6), and the second block equation in (3.8). By using the neighbourhood conditions again, and then deleting the opposite terms in the previous equation, we obtain: Similarly, we can show that: Define the following quantities: where α* is given by (4.20). Using the definition of the starting point in (3.5), as well as the results in Lemmas 4.4 and 4.8, we can observe that ξ_2 = O(n^4 μ_k). On the other hand, using Assumption 2, we know that for every pair (r
Hence, we have that ξ_S = O(n^4 μ_k). Using the quantities in (4.26), equations (4.24), (4.25), as well as the neighbourhood conditions, we have that: for all α ∈ (0, α*), where α* is given by (4.20) and the error occurring from the inexact solution of (3.7), (ε_{p,k}, E_{d,k}), satisfies (3.9). From (4.15), we know that: By combining the last three inequalities, using (3.9), and setting β_1 = σ_min/2, we obtain that: Hence, we have that: Since ᾱ = Ω(1/n^4), we know that there must exist a constant κ̄ > 0, independent of n, m, and of the iteration k, such that ᾱ ≥ κ̄/n^4 for all k ≥ 0, and this completes the proof.

The following theorem summarizes our results.

Theorem 4.11. Given Assumptions 1 and 2, the sequence {μ_k} generated by Algorithm IP-PMM converges Q-linearly to zero, and the sequences of regularized residual norms converge to zero.

Proof. From (4.17), we have that: while, from (4.27), we know that for all k ≥ 0 there exists ᾱ ≥ κ̄/n^4 such that α_k ≥ ᾱ. Hence, we can easily see that μ_k → 0. On the other hand, from the neighbourhood conditions, we know that for all k ≥ 0: and This completes the proof.
The polynomial complexity of Algorithm IP-PMM is established in the following theorem.
Theorem 4.12. Let ε ∈ (0, 1) be a given error tolerance. Choose a starting point for Algorithm IP-PMM as in (3.5), such that μ_0 ≤ K/ε^ω for some positive constants K, ω. Given Assumptions 1 and 2, there exists an index k_0 ≥ 0, with k_0 = O(n^4 |log ε|), such that the iterates {(X_k, y_k, Z_k)} generated by Algorithm IP-PMM satisfy μ_k ≤ ε for all k ≥ k_0.

Proof. The proof can be found in [19, Theorem 3.8].
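Since the cited proof is not reproduced here, it may help to sketch schematically how the step-length bound of Lemma 4.9 translates into an iteration bound of this order; the constants c and κ below are generic placeholders, not taken from the source.

```latex
% From the contraction implied by (4.17) together with alpha_k >= kappa/n^4:
\mu_{k+1} \le \Bigl(1 - c\,\tfrac{\kappa}{n^4}\Bigr)\mu_k
\quad\Longrightarrow\quad
\mu_k \le \Bigl(1 - \tfrac{c\kappa}{n^4}\Bigr)^{k}\mu_0 .
% Using log(1 - t) <= -t, the condition mu_k <= epsilon is guaranteed once
k \;\ge\; \frac{n^4}{c\kappa}\,\log\frac{\mu_0}{\varepsilon}
\;=\; O\!\bigl(n^4\,\lvert\log\varepsilon\rvert\bigr),
% where the last step uses mu_0 <= K/varepsilon^{\omega}, so that
% log(mu_0/varepsilon) <= log K + (omega+1)|log varepsilon| = O(|log varepsilon|).
```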
Finally, we present the global convergence guarantee of Algorithm IP-PMM.
Theorem 4.13. Suppose that Algorithm IP-PMM terminates when a limit point is reached. Then, if Assumptions 1 and 2 hold, every limit point of {(X_k, y_k, Z_k)} determines a primal-dual solution of the non-regularized pair (P)-(D).
Proof. From Theorem 4.11, we know that {μ_k} → 0, and hence there exists a sub-sequence K ⊆ N such that: However, since Assumptions 1 and 2 hold, we know from Lemma 4.4 that {(X_k, y_k, Z_k)} is a bounded sequence. Hence, we obtain that: One can readily observe that the limit point of the algorithm satisfies the optimality conditions of (P)-(D), since ⟨X_k, Z_k⟩ → 0 and X_k, Z_k ∈ S^n_+.
Remark 1. As mentioned at the end of Section 3, we do not study the conditions under which one can guarantee that X_k − Ξ_k → 0 and y_k − λ_k → 0, although this could be possible. This is because the method is shown to converge globally even if this is not the case. Indeed, notice that if one were to choose Ξ_0 = 0 and λ_0 = 0, and simply ignore the last conditional statement of Algorithm IP-PMM, the convergence analysis established in this section would still hold. In this case, the method would be interpreted as an interior point-quadratic penalty method, and we could consider the regularization as a diminishing primal-dual Tikhonov regularizer (i.e. a variant of the regularization proposed in [23]).

A Sufficient Condition for Strong Duality
We now drop Assumptions 1 and 2, in order to analyze the behaviour of the algorithm when solving problems that are strongly (or weakly) infeasible, problems for which strong duality does not hold (weakly feasible problems), or problems for which the primal or the dual solution is not attained. For a formal definition and a comprehensive study of the previous types of problems, we refer the reader to [14] and the references therein. Below we provide a well-known result, stating that strong duality holds if and only if there exists a KKT point.

Proof. This is a well-known fact, the proof of which can be found in [24, Proposition …].

The following analysis extends the result presented in [19, Section 4], and is based on the developments in [9, Sections 10 & 11]. In what follows, we show that Premises 1 and 2 are contradictory. In other words, if Premise 2 holds (which means that strong duality does not hold for the problem under consideration), then Premise 1 cannot hold; hence Premise 1 is a sufficient condition for strong duality (and its negation is a necessary condition for Premise 2). We show that if Premise 1 holds, then the algorithm converges to an optimal solution. If it does not hold, however, this does not necessarily mean that the problem under consideration is infeasible. For example, this could happen if either (P) or (D) is strongly infeasible, weakly infeasible, and in some cases even if either of the problems is weakly feasible (e.g. see [14,24]). As we discuss later, the knowledge that Premise 1 does not hold could be useful in detecting pathological problems.
Lemma 5.1. Given Premise 1, and by assuming that ⟨X_k, Z_k⟩ > ε, for some ε > 0 and for all iterations k of Algorithm IP-PMM, the Newton direction produced by (3.8) is uniformly bounded by a constant dependent only on n and/or m.
Proof. The proof is omitted, since it follows exactly the developments in [9, Lemma 10.1]. We notice that the regularization terms (blocks (1,1) and (2,2) of the Jacobian matrix in (3.8)) depend on μ_k, which by assumption is always bounded away from zero: μ_k ≥ ε/n.
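Indeed, assuming the standard SDP barrier-parameter definition μ_k = ⟨X_k, Z_k⟩/n (as is customary in IPM analyses), this lower bound follows directly from the assumption of Lemma 5.1:

```latex
\langle X_k, Z_k\rangle > \varepsilon
\quad\Longrightarrow\quad
\mu_k \;=\; \frac{\langle X_k, Z_k\rangle}{n} \;>\; \frac{\varepsilon}{n}.
```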
In the following lemma, we prove by contradiction that the parameter μ_k of Algorithm IP-PMM converges to zero, given that Premise 1 holds.

Lemma 5.2. Given Premise 1, and a sequence (X_k, y_k, Z_k) ∈ N_{μ_k}(Ξ_k, λ_k) produced by Algorithm IP-PMM, the sequence {μ_k} converges to zero.
Proof. Assume, by virtue of contradiction, that μ_k > ε > 0 for all k ≥ 0. Then, we know from Lemma 5.1 that the Newton direction obtained by the algorithm at every iteration, after solving (3.8), is uniformly bounded by a constant dependent only on n; that is, there exists a positive constant K† such that ‖(ΔX_k, Δy_k, ΔZ_k)‖_2 ≤ K†. We define r_p(α) and R_d(α) as in (4.21) and (4.22), respectively, for which we know that equalities (4.24) and (4.25) hold. Take any k ≥ 0 and define the following functions: We would like to show that there exists α* > 0 such that: These conditions model the requirement that the next iterate of Algorithm IP-PMM must lie in the updated neighbourhood N_{μ_{k+1}}(Ξ_k, λ_k) (notice, however, that the restriction with respect to the semi-norm defined in (3.4) is not required here, and indeed it cannot be incorporated unless rank(A) = m). Since Algorithm IP-PMM updates the parameters λ_k, Ξ_k only if the selected new iterate belongs to the new neighbourhood, defined using the updated parameters (again, ignoring the restrictions with respect to the semi-norm), it suffices to show that (X_{k+1}, y_{k+1}, Z_{k+1}) ∈ N_{μ_{k+1}}(Ξ_k, λ_k).
Proving the existence of α* > 0 such that each of the aforementioned functions is positive follows exactly the developments in Lemmas 4.9-4.10, with the only difference being that the bounds on the directions are not explicitly specified in this case. Using the same methodology as in the aforementioned lemmas, while keeping in mind our assumption that μ_k > ε, we can show that: where ξ_2 is a bounded constant, defined as in (4.26), and dependent on K†. However, using the inequality: we get that μ_k → 0, which contradicts our assumption that μ_k > ε for all k ≥ 0, and completes the proof.
Finally, in the following theorem, we derive a necessary condition for lack of strong duality.
Theorem 5.3. Given Premise 2 (i.e. there does not exist a KKT triple for the pair (P)-(D)), Premise 1 fails to hold.
Proof. By virtue of contradiction, let Premise 1 hold. In Lemma 5.2, we proved that, given Premise 1, Algorithm IP-PMM produces iterates that belong to the neighbourhood (3.6), and μ_k → 0. But from the neighbourhood conditions we can observe that: and Hence, we can choose a sub-sequence K ⊆ N for which: But since ‖y_k − λ_k‖_2 and ‖X_k − Ξ_k‖_F are bounded, while μ_k → 0, we have that: This contradicts Premise 2, i.e. that the pair (P)-(D) does not have a KKT triple, and completes the proof.
In the previous theorem, we proved that the negation of Premise 1 is a necessary condition for Premise 2. Nevertheless, this does not mean that the condition is also sufficient. In order to obtain a more reliable algorithmic test for lack of strong duality, we have to use the properties of Algorithm IP-PMM. In particular, we can notice that if there does not exist a KKT point, then the PMM sub-problems will stop being updated after a finite number of iterations. In that case, we know from Theorem 5.3 that the sequence {‖(X_k − Ξ_k, y_k − λ_k)‖_2} will grow unbounded. Hence, we can define a maximum number of iterations per PMM sub-problem, say k† > 0, as well as a very large constant K†. Then, if ‖(X_k − Ξ_k, y_k − λ_k)‖_2 > K† and k_in ≥ k† (where k_in counts the number of IPM iterations per PMM sub-problem), the algorithm is terminated with a guess that there does not exist a KKT point for (P)-(D).
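As an illustration, the detection test described above could be sketched as follows. This is a hypothetical sketch: the thresholds K_DAGGER and K_IN_MAX, and the function name, are illustrative assumptions rather than part of the authors' algorithm.

```python
# Hypothetical sketch of the no-KKT-point detection test described above.
# K_DAGGER plays the role of the "very large" constant K†, and K_IN_MAX the
# maximum number of IPM iterations per PMM sub-problem, k†.
import numpy as np

K_DAGGER = 1e10   # illustrative choice of K†
K_IN_MAX = 200    # illustrative choice of k†

def suspect_no_kkt_point(X, Xi, y, lam, k_in):
    """Return True if the iterate suggests no KKT point exists for (P)-(D).

    The test fires when the proximal displacement ||(X - Xi, y - lam)||_2
    exceeds K_DAGGER while the PMM estimates (Xi, lam) have not been updated
    for at least K_IN_MAX inner IPM iterations.
    """
    displacement = np.sqrt(np.linalg.norm(X - Xi, "fro") ** 2
                           + np.linalg.norm(y - lam) ** 2)
    return bool(displacement > K_DAGGER and k_in >= K_IN_MAX)
```

In an implementation, this check would sit at the end of each IPM iteration, so that the algorithm terminates with a "no KKT point" guess instead of stalling on a pathological instance.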
Remark 2. Let us notice that the analysis in Section 4 employs the standard assumptions used when analyzing a non-regularized IPM. However, the method could still be useful if these assumptions were not met. Indeed, if for example the constraint matrix were not of full row rank, one could still prove global convergence of the method, using the methodology employed in this section, by assuming that Premise 1 holds and Premise 2 does not (or by following the developments in [9]). Furthermore, in practice the method would not encounter any numerical issues with the inversion of the Newton system (see [2]). Nevertheless, showing polynomial complexity in this case is still an open problem. The aim of this work is to show that, under the standard Assumptions 1 and 2, Algorithm IP-PMM enjoys polynomial complexity, while having to solve better-conditioned systems than those solved by standard IPMs at each iteration, thus ensuring better stability (and, as a result, better robustness and potentially better efficiency).

Conclusions
In this paper we developed and analyzed an interior point-proximal method of multipliers, suitable for solving linear positive semi-definite programs, without requiring the exact solution of the associated Newton systems. By appropriately generalizing some previous results on convex quadratic programming, we showed that IP-PMM inherits the polynomial complexity of standard non-regularized IPM schemes when applied to SDP problems, under standard assumptions, while having to approximately solve better-conditioned Newton systems, compared to their non-regularized counterparts. Furthermore, we provided a tuning of the proximal penalty parameters based on the well-studied barrier parameter, which can be used to guide any subsequent implementation of the method. Finally, we studied the behaviour of the algorithm when applied to problems for which no KKT point exists, and gave a necessary condition which can be used to construct detection mechanisms for identifying such pathological cases.
A future research direction would be to construct a robust and efficient implementation of the method, which should utilize some Krylov subspace solver alongside an appropriate preconditioner for the solution of the associated linear systems.Given previous implementations of IP-PMM for other classes of problems appearing in the literature, we expect that the theory can successfully guide the implementation, yielding a competitive, robust, and efficient method.

Let us employ the following two premises:

Premise 1. During the iterations of Algorithm IP-PMM, the sequences {‖y_k − λ_k‖_2} and {‖X_k − Ξ_k‖_F} remain bounded.

Premise 2. There does not exist a primal-dual triple satisfying the KKT conditions in (2.1) associated with the primal-dual pair (P)-(D).