Interior point method on semi-definite linear complementarity problems using the Nesterov–Todd (NT) search direction: polynomial complexity and local convergence

We consider in this paper an infeasible predictor–corrector primal–dual path following interior point algorithm that uses the Nesterov–Todd search direction to solve semi-definite linear complementarity problems. Global convergence and polynomial iteration complexity of the algorithm are established. Two sufficient conditions are also given for superlinear convergence of the iterates generated by the algorithm. Finally, preliminary numerical results are provided for the algorithm applied to semi-definite linear complementarity problems.


Introduction
The class of semi-definite linear complementarity problems (SDLCPs), which contains the class of semi-definite programs (SDPs) as an important subclass, has many real-life applications, for example in optimal control, estimation and signal processing, communications and networks, statistics, and finance [3]. Semi-definite programming also has wide applications to NP-hard combinatorial problems [8] and global optimization, where it is used to find bounds on optimal values and to compute approximate solutions. Interior point methods have proven successful in solving linear programs, with many works in the literature devoted to their study since the 1980s. Semi-definite programs are extensions of linear programs to the space of symmetric matrices. Among the local convergence results in this paper, we show that for the important class of linear semi-definite feasibility problems, only a suitably chosen initial iterate is needed for superlinear convergence. This is unlike what is generally believed to be needed to achieve superlinear convergence, namely for iterates to get close to the central path by repeatedly solving the corrector-step linear system in an iteration (see for example [13,22]). We should also mention that although local convergence results using the NT search direction have been established in [16] by "narrowing" the central path neighborhood, the algorithm considered there generates feasible iterates, whereas here we consider an infeasible algorithm, which is more practical; the analysis for the infeasible case is also more complicated than that for the feasible case. Finally, our results in this paper indicate that using the NT search direction in an interior point algorithm to solve semi-definite linear complementarity problems is as good as using the HKM search direction from both the "polynomial complexity" and the "local convergence" points of view.

Facts, notations and terminology
The space of symmetric n × n matrices is denoted by S^n. The cone of positive semi-definite (resp., positive definite) symmetric matrices is denoted by S^n_+ (resp., S^n_++). The identity matrix is denoted by I_{n×n}, where n stands for the size of the matrix. We omit the subscript when the size of the identity matrix is clear from the context. Given a symmetric matrix G, λ_min(G) and λ_max(G) denote the minimum and maximum eigenvalues of G, respectively.
Given matrices G and K in ℝ^{n_1×n_2}, the inner product G • K between the two matrices is defined to be G • K := Tr(G^T K) = Tr(G K^T), where Tr(•) is the trace of a square matrix. ‖•‖_2 for a vector in ℝ^n refers to its Euclidean norm, and for a matrix in ℝ^{n_1×n_2}, it refers to its operator norm. On the other hand, ‖G‖_F := √(G • G), for G ∈ ℝ^{n_1×n_2}, refers to the Frobenius norm of G.
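As a quick numerical sanity check of these definitions, the following sketch (using NumPy; the matrices here are arbitrary illustrations, not from the paper) verifies the trace identities and compares the operator and Frobenius norms:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 3))
K = rng.standard_normal((4, 3))

inner = np.trace(G.T @ K)                  # G . K = Tr(G^T K)
assert np.isclose(inner, np.trace(G @ K.T))       # = Tr(G K^T)
assert np.isclose(inner, np.sum(G * K))           # entrywise-sum form

op_norm = np.linalg.norm(G, 2)             # operator (spectral) norm
fro_norm = np.linalg.norm(G, 'fro')        # Frobenius norm
assert np.isclose(fro_norm, np.sqrt(np.trace(G.T @ G)))   # ||G||_F = sqrt(G . G)
assert op_norm <= fro_norm + 1e-12         # spectral norm never exceeds Frobenius
```

The last inequality always holds, since the Frobenius norm is the ℓ_2 norm of all singular values while the operator norm is the largest one.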
For a matrix G ∈ ℝ^{n_1×n_2}, we denote its component in the ith row and the jth column by G_{ij}. G_{i•} denotes the ith row of G and G_{•j} the jth column of G. When G is partitioned into blocks of submatrices, G_{ij} refers to the submatrix in the corresponding (i, j) position.
Given square matrices G_i ∈ ℝ^{n_i×n_i}, i = 1, …, N, Diag(G_1, …, G_N) is a square matrix with G_i, i = 1, …, N, as its main diagonal blocks, arranged in the order in which they are listed in Diag(G_1, …, G_N). All other entries of Diag(G_1, …, G_N) are zero.
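The block-diagonal construction can be sketched as follows (a minimal illustration; `diag_blocks` is our own helper name, equivalent to `scipy.linalg.block_diag`):

```python
import numpy as np

def diag_blocks(*Gs):
    """Diag(G_1, ..., G_N): square matrix with the G_i as main diagonal
    blocks, in the order listed, and zeros everywhere else."""
    sizes = [G.shape[0] for G in Gs]
    n = sum(sizes)
    D = np.zeros((n, n))
    pos = 0
    for G, s in zip(Gs, sizes):
        D[pos:pos + s, pos:pos + s] = G
        pos += s
    return D

G1 = np.array([[1.0, 2.0], [2.0, 3.0]])
G2 = np.array([[5.0]])
D = diag_blocks(G1, G2)
assert D.shape == (3, 3)
assert np.allclose(D[:2, :2], G1) and D[2, 2] == 5.0
assert D[0, 2] == 0.0 and D[2, 0] == 0.0   # off-block entries are zero
```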
Note that for all X, Y ∈ S^n, X • Y = svec(X)^T svec(Y). Hence, ‖X‖_F = ‖svec(X)‖_2 for X ∈ S^n.
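The identity X • Y = svec(X)^T svec(Y) pins down the scaling in svec: off-diagonal entries must be multiplied by √2. A sketch, assuming the standard column-stacked upper-triangular convention:

```python
import numpy as np

def svec(X):
    """Stack the upper triangle of symmetric X column by column,
    scaling off-diagonal entries by sqrt(2); length n(n+1)/2."""
    n = X.shape[0]
    r = np.sqrt(2.0)
    return np.array([X[i, j] * (1.0 if i == j else r)
                     for j in range(n) for i in range(j + 1)])

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); X = A + A.T   # random symmetric matrices
B = rng.standard_normal((5, 5)); Y = B + B.T

# X . Y = svec(X)^T svec(Y), hence ||X||_F = ||svec(X)||_2
assert np.isclose(np.trace(X @ Y), svec(X) @ svec(Y))
assert np.isclose(np.linalg.norm(X, 'fro'), np.linalg.norm(svec(X)))
```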
Given G, K ∈ ℝ^{n×n}, G ⊗_s K is the square matrix of size ñ defined by (G ⊗_s K) svec(H) = (1/2) svec(K H G^T + G H K^T) for all H ∈ S^n.

Fact 1 (Appendix of [39]) Let G, K, L ∈ ℝ^{n×n}.
(c) Suppose G and K are commuting symmetric matrices, and let {x_i} be their common basis of eigenvectors with corresponding eigenvalues λ_i^G and λ_i^K. Then G ⊗_s K is symmetric and has the set of eigenvalues given by (1/2)(λ_i^G λ_j^K + λ_j^G λ_i^K). Also, svec((1/2)(x_i x_j^T + x_j x_i^T)) is an eigenvector corresponding to the eigenvalue (1/2)(λ_i^G λ_j^K + λ_j^G λ_i^K).

Fact 3 For x ∈ ℝ, x ≥ 0, we have

Given functions f : Ω → E and g : Ω → ℝ_++, where Ω is an arbitrary set and E is a normed vector space with norm ‖•‖. For a subset Ω̂ ⊆ Ω, we write f(w) = O(g(w)) for all w ∈ Ω̂ to mean that ‖f(w)‖ ≤ M g(w) for all w ∈ Ω̂, where M > 0 is a positive constant. Suppose E = S^n. Then we write f(w) = Θ(g(w)) if for all w ∈ Ω̂, f(w) ∈ S^n_++, f(w) = O(g(w)) and f(w)^{-1} = O(1/g(w)). The subset Ω̂ should be clear from the context; for example, Ω̂ = (0, ŵ) for some ŵ > 0 or Ω̂ = {w_k ; k ≥ 0}, where w_k → 0 as k → ∞. In the latter case, we write
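Fact 1(c) can be checked numerically. The sketch below builds G ⊗_s K column by column from its action (G ⊗_s K) svec(H) = (1/2) svec(K H G^T + G H K^T) (the standard definition of the symmetrized Kronecker product, which the display above elides), using commuting symmetric G, K constructed as polynomials in one symmetric matrix S so that their eigenvalues are correctly paired via common eigenvectors:

```python
import numpy as np

def svec(X):
    n = X.shape[0]; r = np.sqrt(2.0)
    return np.array([X[i, j] * (1.0 if i == j else r)
                     for j in range(n) for i in range(j + 1)])

def smat(v, n):
    # Inverse of svec
    X = np.zeros((n, n)); k = 0; r = np.sqrt(2.0)
    for j in range(n):
        for i in range(j + 1):
            X[i, j] = v[k] if i == j else v[k] / r
            X[j, i] = X[i, j]
            k += 1
    return X

def kron_s(G, K):
    # Build (G (x)_s K) columnwise from its action on svec basis vectors
    n = G.shape[0]; nt = n * (n + 1) // 2
    M = np.zeros((nt, nt))
    for k in range(nt):
        e = np.zeros(nt); e[k] = 1.0
        H = smat(e, n)
        M[:, k] = 0.5 * svec(K @ H @ G.T + G @ H @ K.T)
    return M

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4)); S = S + S.T
G = S @ S + S                   # G and K commute: both polynomials in S
K = 2.0 * S + np.eye(4)
M = kron_s(G, K)
assert np.allclose(M, M.T)      # symmetric, as Fact 1(c) asserts

lamS, _ = np.linalg.eigh(S)
lamG, lamK = lamS**2 + lamS, 2.0 * lamS + 1.0   # paired via common eigenvectors
predicted = sorted(0.5 * (lamG[i] * lamK[j] + lamG[j] * lamK[i])
                   for i in range(4) for j in range(i, 4))
assert np.allclose(sorted(np.linalg.eigvalsh(M)), predicted)
```

The eigenvalue multiset of G ⊗_s K over unordered pairs (i, j) matches the prediction (1/2)(λ_i^G λ_j^K + λ_j^G λ_i^K) of Fact 1(c).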

A primal-dual path following interior point algorithm on an SDLCP
We consider a semi-definite linear complementarity problem (SDLCP), which is the problem of finding a solution (X, Y) to the following system: where q ∈ ℝ^ñ and A, B : S^n → ℝ^ñ are linear operators mapping S^n to the space ℝ^ñ, ñ := n(n + 1)/2. A and B take the form where A_i, B_i ∈ S^n for all i = 1, …, ñ. We also call the system (1)–(3) an SDLCP.
The following assumptions are assumed to hold for the system (1)–(3) in this and the next section, while we replace Assumption 1(b) by Assumption 2 in Sect. 4, although we still assume Assumptions 1(a), (c) in that section.
The first assumption [Assumption 1(a)] is satisfied for the class of semi-definite programs (SDPs), with equality for X • Y instead of inequality. The second assumption ensures that (2) is satisfied for some positive definite matrix pair, while the last assumption is a technical assumption that can be satisfied for any SDP. Note that Assumption 1(b) is only used in this paper to ensure the existence of a solution to the SDLCP (1)–(3).
An SDP in its primal and dual form is given by In the above formulation of an SDP, it is without loss of generality to assume that A_i, i = 1, …, m, are linearly independent.
An SDP is a special case of an SDLCP, obtained by letting A_i = 0 for i = m + 1, …, ñ, and B_i = 0 for i = 1, …, m, in (4). B_i, i = m + 1, …, ñ, in (4) are chosen to be linearly independent and to belong to the subspace of S^n orthogonal to the space spanned by A_i, i = 1, …, m.
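The embedding above requires B_{m+1}, …, B_ñ spanning the orthogonal complement of span{A_1, …, A_m} in S^n. Under svec, this becomes an ordinary null-space computation; a sketch (random symmetric A_i for illustration, SVD used for the null space):

```python
import numpy as np

def svec(X):
    n = X.shape[0]; r = np.sqrt(2.0)
    return np.array([X[i, j] * (1.0 if i == j else r)
                     for j in range(n) for i in range(j + 1)])

n, m = 3, 2
nt = n * (n + 1) // 2
rng = np.random.default_rng(0)
A_mats = []
for _ in range(m):
    M = rng.standard_normal((n, n))
    A_mats.append(M + M.T)                    # random symmetric A_1, ..., A_m

As = np.vstack([svec(A) for A in A_mats])     # m x n-tilde matrix of svec rows
_, _, Vt = np.linalg.svd(As)
B_svecs = Vt[m:]                              # orthonormal basis of the null space:
                                              # svec's of B_{m+1}, ..., B_ntilde
assert np.allclose(As @ B_svecs.T, 0.0)       # A_i . B_j = 0 for all pairs
assert np.linalg.matrix_rank(np.vstack([As, B_svecs])) == nt
```

Together the rows span all of ℝ^ñ, so the resulting pair (A, B) satisfies the linear-independence requirement of the embedding.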
Primal–dual path following interior point algorithms can be used to solve an SDLCP. In this paper, we consider an infeasible predictor–corrector primal–dual path following interior point algorithm, as found in [25,34]. In [25,34], the search direction used is the Helmberg–Kojima–Monteiro (HKM) search direction [11,14,17], while in this paper we consider the algorithm using the Nesterov–Todd (NT) search direction [20,21]. The difference between the two search directions is the way "symmetrization" is done on (1). For an invertible matrix P ∈ ℝ^{n×n}, the similarly transformed symmetrization operator H_P(•), introduced in [45], is given by where U ∈ ℝ^{n×n}. Hence, H_P(•) is a map from ℝ^{n×n} to S^n. Different search directions correspond to different choices of P, as will be explained later. An infeasible primal–dual path following interior point algorithm works on the principle that the iterates generated by the algorithm "follow" an (infeasible) central path (X(μ), Y(μ)), μ > 0, which is the unique solution to where for some X_0, Y_0 ∈ S^n_++ and μ_0 > 0. Here, X_0, Y_0 are such that X_0 Y_0 = μ_0 I. The existence and uniqueness of this central path follow from Theorem 2.3 in [35]. It also follows from Theorem 2.4 in [35] that there exists a solution (X^*, Y^*) to the SDLCP (1)–(3). Although these theorems apply in [35] to the feasible central path, where r_0 = 0, they can easily be shown to hold for an infeasible central path with r_0 ≠ 0 under Assumptions 1(a)–(c). We leave their proofs as exercises for the reader.
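The displayed formula for H_P is elided above; the standard definition from the literature, H_P(U) = (1/2)(P U P^{-1} + (P U P^{-1})^T), is assumed in the following sketch, which also checks the central-path characterization H_P(XY) = μI (which holds for any invertible P once XY = μI):

```python
import numpy as np

def H(P, U):
    """Similarly transformed symmetrization (assumed standard form):
    H_P(U) = 0.5 * (P U P^{-1} + (P U P^{-1})^T)."""
    V = P @ U @ np.linalg.inv(P)
    return 0.5 * (V + V.T)

rng = np.random.default_rng(0)
P = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # some invertible P
U = rng.standard_normal((4, 4))
assert np.allclose(H(P, U), H(P, U).T)              # H_P maps into S^n

# On the central path, X Y = mu I, hence H_P(X Y) = mu I for every invertible P
A = rng.standard_normal((4, 4)); X = A @ A.T + np.eye(4)
mu = 0.7
Y = mu * np.linalg.inv(X)                           # then X Y = mu I exactly
assert np.allclose(H(P, X @ Y), mu * np.eye(4))
```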
From now onwards, (X^*, Y^*) denotes a solution to the SDLCP (1)–(3). Using H_P(•), the above central path (X(μ), Y(μ)), μ > 0, is also the unique solution to since we have, for X, Y ∈ S^n_++ and μ > 0, Below we describe the infeasible predictor–corrector primal–dual path following interior point algorithm considered in this paper. This algorithm is the same as that in [25], although there is a wider choice for β_1, β_2 here. The key to the algorithm is solving the following system of linear equations: for ΔX, ΔY ∈ S^n, where τ > 0, 0 ≤ σ ≤ 1 and r ∈ ℝ^ñ. Different choices of τ, σ and r result in different steps in Algorithm 1, described below.
The following (narrow) neighborhood of the central path is used in this paper: where τ > 0 and 0 < β < 1.
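The displayed definition of the neighborhood is elided in this extraction; a standard narrow neighborhood in this literature is {(X, Y) ∈ S^n_++ × S^n_++ : ‖X^{1/2} Y X^{1/2} − τ I‖_F ≤ βτ}, and a membership test under that assumption can be sketched as:

```python
import numpy as np

def msqrt(M):
    # Symmetric PSD square root via eigendecomposition
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ Q.T

def in_narrow_neighborhood(X, Y, tau, beta):
    """Membership test for a narrow central-path neighborhood; the paper's
    exact definition of N_1(beta, tau) is elided here, so the standard
    ||X^{1/2} Y X^{1/2} - tau I||_F <= beta * tau is used as a stand-in."""
    n = X.shape[0]
    Xh = msqrt(X)
    return np.linalg.norm(Xh @ Y @ Xh - tau * np.eye(n)) <= beta * tau

n, tau, beta = 4, 1.0, 0.3
X = np.eye(n); Y = tau * np.eye(n)          # exactly on the central path
assert in_narrow_neighborhood(X, Y, tau, beta)
assert not in_narrow_neighborhood(X, 3.0 * Y, tau, beta)   # far from the path
```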
Note that the P that appears in (5) and (7) is not chosen arbitrarily, but is related to X, Y ∈ S^n_++, as we will see after describing the algorithm analyzed in this paper.
as an approximate solution to the system (1)–(3), and terminate.

and go to Step (a1).
The above algorithm is an infeasible predictor–corrector primal–dual path following interior point algorithm. P_k in the algorithm is chosen such that Examples of P_k that satisfy this are P_k = Y_k^{1/2}, and P_k such that The former corresponds to the dual HKM search direction, while the latter corresponds to the NT search direction. P̂_k in the algorithm is also chosen to satisfy P̂_k X̂_k Ŷ_k P̂_k^{-1} ∈ S^n.
We remark that the search directions in Algorithm 1 exist and are unique, by observing that the left hand sides of (5) and (6), when written together as a matrix–vector product, have an invertible matrix, which holds because (X_k, Y_k) ∈ N_1(β_1, τ_k) (by Proposition 4) and (X̂_k, Ŷ_k) ∈ N_1(β_2, (1 − α̂_k)τ_k), respectively. (We leave the details of showing the existence and uniqueness of the predictor step and of (X^c_k, Y^c_k) to the reader.) Furthermore, in the above algorithm, we note that there is a wider range of choice for β_1 and β_2 compared with the algorithm in [25].
Let us make an observation on our choice of P_k, P̂_k in the following proposition.

Proposition 1 Suppose P ∈ ℝ^{n×n} is an invertible matrix with P X Y P^{-1} ∈ S^n, where X, Y ∈ S^n_++. Then P X P^T and P^{-T} Y P^{-1} have a common set of eigenvectors with corresponding real positive eigenvalues λ_i^X and λ_i^Y, i = 1, …, n, respectively. Also, P X Y P^{-1} has the same set of eigenvectors, with corresponding eigenvalues λ_i^X λ_i^Y, i = 1, …, n.
Proof Since P X Y P^{-1} ∈ S^n, the matrices P X P^T, P^{-T} Y P^{-1} ∈ S^n_++ commute. Hence they share a common set of eigenvectors with corresponding real positive eigenvalues λ_i^X, λ_i^Y, i = 1, …, n. Furthermore, it is easy to see that P X Y P^{-1} = (P X P^T)(P^{-T} Y P^{-1}) has the same set of eigenvectors, with corresponding eigenvalues λ_i^X λ_i^Y, i = 1, …, n.

From now onwards, we consider Algorithm 1 using the NT search direction in Steps (a2) and (a3) of the algorithm, which means that P_k and P̂_k in these steps satisfy with Ŵ_k Ŷ_k Ŵ_k = X̂_k, respectively. Proposition 1 then applies to P_k and P̂_k given in this way, since they satisfy P_k X_k Y_k P_k^{-1} ∈ S^n and P̂_k X̂_k Ŷ_k P̂_k^{-1} ∈ S^n, respectively. Furthermore, there are different ways in which P_k and P̂_k can be chosen to form the NT search direction in these steps of Algorithm 1. In Sect. 4, using a particular choice of P_k, we establish two sufficient conditions for superlinear convergence using the algorithm on SDLCPs that satisfy the strict complementarity assumption.
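The NT scaling point W of (X, Y) solves W Y W = X and has the well-known closed form W = Y^{-1/2}(Y^{1/2} X Y^{1/2})^{1/2} Y^{-1/2}; taking P = W^{-1/2} then gives P^T P = W^{-1} and P X Y P^{-1} ∈ S^n, as in Proposition 1. A numerical sketch (illustrative data only):

```python
import numpy as np

def msqrt(M):
    # Symmetric PSD square root via eigendecomposition
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ Q.T

def nt_scaling(X, Y):
    # W = Y^{-1/2} (Y^{1/2} X Y^{1/2})^{1/2} Y^{-1/2}, so that W Y W = X
    Yh = msqrt(Y)
    Yh_inv = np.linalg.inv(Yh)
    return Yh_inv @ msqrt(Yh @ X @ Yh) @ Yh_inv

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); X = A @ A.T + np.eye(4)   # X, Y in S^n_++
B = rng.standard_normal((4, 4)); Y = B @ B.T + np.eye(4)

W = nt_scaling(X, Y)
assert np.allclose(W, W.T) and np.all(np.linalg.eigvalsh(W) > 0)
assert np.allclose(W @ Y @ W, X)                    # defining property of W

P = np.linalg.inv(msqrt(W))                         # P = W^{-1/2}
assert np.allclose(P.T @ P, np.linalg.inv(W))       # P^T P = W^{-1}
M = P @ X @ Y @ np.linalg.inv(P)
assert np.allclose(M, M.T)                          # P X Y P^{-1} lands in S^n
```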
We also require the P that appears in (7) to be related to X, Y in (7) by P^T P = W^{-1}, where W Y W = X. The following are properties satisfied by (X_k, Y_k) and (X̂_k, Ŷ_k) in Algorithm 1, which are useful in the analysis given in this paper of the convergence behavior of iterates generated by the algorithm.
Suppose (X, Y) ∈ N_1(β, τ), where τ > 0, 0 < β < 1, and P satisfies P^T P = W^{-1} with W Y W = X. Then

Proof The relation (12) follows immediately from P^T P = W^{-1}, W Y W = X and then taking inverses.
The following technical result leads to Proposition 4, which ensures that the set in (11) is nonempty.
Suppose (X, Y) ∈ N_1(β, τ), where τ > 0, 0 < β < 1, and P satisfies P^T P = W^{-1} with W Y W = X. Then

Proof Relation (20) can be written as The latter can in turn be expressed as by Fact 1(b). Since P satisfies P^T P = W^{-1} with W Y W = X, by (12) we have P X P^T = P^{-T} Y P^{-1}. Hence, from (21), taking the inverse of I ⊗_s (P X P^T), we get Hence, where the first inequality follows from (19) and Assumption 1(a), the first equality follows from (22), and the last inequality follows from Proposition 2.
The following result shows that the iterates (X_k, Y_k) generated by Algorithm 1 always belong to the narrow neighborhood (7) of the central path.
Proof We prove the proposition by induction. It is easy to see that (X_0, Y_0) ∈ N_1(β_1, τ_0). Hence, the proposition holds for k = 0. Suppose the proposition holds for k = k_0. We wish to show that it holds for k = k_0 + 1, which then proves the proposition by induction. First observe that where the first equality follows since , and the inequality follows from Lemma 2.2 in [25], again using where the third equality holds since (X^c_{k_0}, Y^c_{k_0}) is the solution to the linear system (5), (6), in which X = X̂_{k_0}, Y = Ŷ_{k_0}, P = P̂_{k_0}, σ = 1, τ = τ_{k_0+1} and r = 0. Hence, where the second inequality follows from Fact 2 and is the solution to the linear system (5), (6), in which Therefore, the above inequality, (23) and (24) lead to where the last inequality holds since . By induction, the proposition is proved.
As mentioned earlier, the above proposition ensures that α_{k,2} given by (11) is meaningful. The following result allows us to say more about α_{k,2} and also shows that we can always find α̂_k satisfying (8) in Algorithm 1.
Proof The proposition is proved by showing that for all 0 ≤ α ≤ α_{k,1}, We have for all 0 ≤ α ≤ α_{k,1} where the second equality holds as and the last inequality follows from (9), (10). For matrices in S^n_++, the following holds.
so that the above inequality holds by Lemma 2.2 in [25]. Putting everything together, if 0 ≤ α ≤ α_{k,1}, then We remark that the above proposition implies that α_{k,2} given by (11) is always positive.

Global convergence and polynomial complexity of interior point algorithm
In this section, we show global convergence and iteration complexity results for the iterates {(X_k, Y_k)} generated by Algorithm 1, by considering the "duality gap", The following proposition relates α̂_k, τ_k and (X_k, Y_k) generated by Algorithm 1 with μ_k and r_k. First, we have the following definition. Hence, It then follows from Proposition 1 that The first result in the proposition then follows by noting that by induction on k ≥ 0.
Equality in (25) holds for k = 0. Suppose (25) holds for k = k_0 for some k_0 ≥ 0. Then where the first equality follows from (6) with r = 0, the third equality follows from (6) with r = r_{k_0}, and the fifth equality follows by the induction hypothesis. Hence, (25) holds for k = k_0 + 1, and by induction, (25) holds for all k ≥ 0.
To show global convergence and iteration complexity results using μ_k and r_k, we only need to investigate the behavior of ψ_{k-1}, since τ_k = ψ_{k-1} τ_0, μ_k ≤ (1 + β_1)τ_k, and r_k = ψ_{k-1} r_0. By the definition of ψ_k in Definition 1, this is achieved by analyzing α_j. We consider α_{j,1} instead, which is given by (9), since it is a lower bound on α_j. Since α_{j,1} given by (9) is expressed in terms of δ_j, to analyze α_{j,1} we only need to analyze δ_j. We have the following upper bound on δ_j.
Lemma 1 For all k ≥ 0, we have Here, We prove the above lemma in Sect. 3.1, as its proof is quite involved.
Based on Lemma 1, we have the following global convergence theorem using Algorithm 1.
Proof Since (X_0, Y_0) ∈ S^n_++ × S^n_++ and for any solution (X^*, Y^*) to the SDLCP (1)–(3), it is easy to see that L_x and L_y given by (26), (27) respectively are positive constants, since ς, ς_x and ς_y are constants. Therefore, by the relation between δ_j and L_x, L_y in Lemma 1, δ_j is bounded above by a positive constant (that depends on X_0, Y_0, X^*, Y^*), say L, independent of j ≥ 0. From (9), we therefore have for all j ≥ 0. Since Therefore, the theorem is proved by applying Proposition 6.
Let us now state an iteration complexity result using Algorithm 1.

Proof We are given
Observe that where the second inequality holds by (14) (since using (16) and max{‖P_0 Therefore, L_x, L_y given by (26), (27) respectively are less than or equal to L_0 n, where L_0 is a large enough number that depends only on β_1. Hence, by Lemma 1, (8) and (9), for all j ≥ 0. Therefore, where the last inequality holds by Fact 3.
and the result then follows.

Proof of Lemma 1
Note that where the first inequality holds by Fact 2. To show Lemma 1, we analyze (28) further by bounding the terms from above, as in the following proposition. where Now, by Proposition 6, and with A(X^*) + B(Y^*) = q, from (31) we obtain That is, On the other hand, where the last equality holds since , Proposition 3 can be applied to (32), (33). We have from the proposition The proposition then follows by applying the triangle inequality to (34), (35), and upon algebraic manipulations.
Our objective of proving Lemma 1 is achieved by appropriately bounding t_x, t_y and t, which appear in the upper bounds of the above proposition. We need the following results to achieve this.
The result then follows from the definition of ς and (13) (since

Remark 1 Since (X_0, Y_0) ∈ S^n_++ × S^n_++, we see easily from the above proposition that {(X_k, Y_k) ; k ≥ 0} is bounded.
Using Proposition 8, the following holds.
Proposition 9 For all k ≥ 0,

Proof It suffices to prove the first inequality, since the proofs of the last three inequalities are similar. We have where the first inequality holds since P_k X_0 P_k^T ∈ S^n_++, the second inequality holds by Fact 2, and the last inequality holds by Proposition 8.
With the above, we are ready to prove Lemma 1 by providing suitable upper bounds for t_x, t_y and t, as given below.
Proof We first prove the upper bound on t. We have Hence, On the other hand, where the third inequality follows by Proposition 9, and the last inequality follows from (15), (16). Similarly, Putting everything together, we have (36).
In a similar way, we can show (37) and (38).

Local convergence study of interior point algorithm
In this section, we investigate the local convergence behavior of iterates {(X k , Y k )} generated by Algorithm 1.
We first need an assumption as follows.
Assumption 2 There exists a strictly complementary solution (X^*, Y^*) to the SDLCP (1)–(3). Assumption 2 is usually imposed when studying the local convergence behavior of interior point algorithms on semi-definite linear complementarity problems [13,16,24,25,34]. Paper [5] considers the asymptotic behavior of the central path for an SDP when this assumption is relaxed.
(X^*, Y^*) now denotes a strictly complementary solution to the SDLCP (1)–(3). Since X^*, Y^* commute, they are jointly diagonalizable by some orthogonal matrix Q. Applying this orthogonal similarity transformation to the matrices in the SDLCP (1)–(3), we may assume without loss of generality that where Henceforth, whenever we partition a matrix U ∈ S^n, it is always understood to be partitioned as , where We study local superlinear convergence using Algorithm 1 in the sense of This is equivalent to by Proposition 6. Note that we have τ_k → 0 as k → ∞, by Theorem 1 and Proposition 6. Superlinear convergence in the sense of (40) is intimately related to the local convergence behavior of iterates, as investigated for example in [23]. The following can be verified easily.

Proposition 11 A sufficient condition for (41) to hold is (42).
Proof Note that (41) holds if α_{k,1} → 1 as k → ∞. By (9), this sufficient condition is equivalent to δ_k → 0 as k → ∞, where δ_k is given by (10). Hence, the proposition is proved.
In the rest of this section, we show that if a certain condition holds for (X_k, Y_k), and for a certain subclass of SDLCPs, superlinear convergence using Algorithm 1 can be achieved. This is done by showing that the above sufficient condition (42) holds. Towards this end, we transform the system of equations (5), (6) into an equivalent system of equations, and then analyze the resulting system.
The task now is to transform (45) into an equivalent system of equations that allows us to show that if a certain condition on the iterates (X_k, Y_k), as given in Theorem 3, is satisfied, and for a certain subclass of SDLCPs, as given in Theorem 4, superlinear convergence using Algorithm 1 can be ensured. First, we observe the following.
Since the proof of the above proposition follows from proofs of similar results in [26,36] and the relation between μ_k and τ_k stated in Proposition 6, it is not shown here.
The new system of equations that we are going to derive involves an "iterate" (X̄_k, Ȳ_k) corresponding to (X_k, Y_k), W̄_k corresponding to W_k, and a corresponding "predicted step" (X̄^p_k, Ȳ^p_k).

Proposition 13
By the definition of W̄_k in (46), we have W̄_k Ȳ_k W̄_k = X̄_k, from which we see that {W̄_k} is a sequence of symmetric, positive definite matrices whose accumulation points as k tends to infinity are symmetric and positive definite, since the same holds for {X̄_k} and {Ȳ_k}. Therefore, (47) holds.
By partitioning the matrices A_i and B_i in A, B, as they appear in (4), into the 4-block format discussed near the beginning of this section, we perform block Gaussian elimination on (A B) so that A, B can be rewritten as respectively. This technique has been used for example in [33,36]; see also [15,26]. We will take A, B to be expressed as in (48) from now onwards. Note that (A B) ∈ ℝ^{ñ×2ñ} has full row rank by Assumption 1(c), and A, B also satisfy The implication in (49) holds by Assumption 1(a).

Remark 2
In the case of an SDP, A and B are written as where A_1 consists of m rows and B_1 consists of ñ − m rows. As discussed in [34], by performing block Gaussian elimination, A_1 and B_1 are given by , respectively. Note that the way A, B for an SDP are written in (50) differs from that for an SDLCP; see (48). We can however write them in the form of (48) by appropriately interchanging rows in (A B) for the SDP. Now, in order to transform the equation system (45) into an equivalent system, let us define Ā(τ) ∈ ℝ^{ñ×ñ} and B̄(τ) ∈ ℝ^{ñ×ñ} to be respectively, for τ ≥ 0.
The following proposition, whose proof can be found for example in [32,33,36], relates A with Ā(τ) and B with B̄(τ): and An important property of Ā(τ), B̄(τ) is given below.
Proposition 15 For τ ≥ 0, (Ā(τ) B̄(τ)) has full row rank, and

Proof That (Ā(τ) B̄(τ)) has full row rank for all τ ≥ 0 follows from Assumption 1(c) and the way block Gaussian elimination is performed on (A B) to obtain A, B in the form (48); see the explanations for example in [36].
Because of the way A, B are now structured, we have

Proposition 16 q has the following form where q_1 ∈ ℝ^{i_1}.
Proof By Proposition 6, the following equation holds: This equation is equivalent to Since the left hand side of the above equation and r_0 are bounded as k tends to infinity, we conclude that q must take the form given in the proposition.

Remark 3
From the proof of the above proposition, we see that (X̄_k, Ȳ_k) satisfies Finally, by defining X̄^p_k and Ȳ^p_k to be we obtain from (45) the following new system of equations derived from the original system (45): Let us take the inverse of the matrix on the left hand side of the above equation, which can be shown to exist for τ_k > 0. Define and Note that G_k and Ḡ_k are related by We have from (58), by taking the inverse of the matrix on the left hand side in (58), the following svec( where to obtain the second equality, we use the identity Given that we are using (59), which involves X̄^p_k, Ȳ^p_k, to derive meaningful conditions for superlinear convergence using Algorithm 1, let us now express (42) in terms of them. We know that P_k^T P_k = W_k^{-1}, and by (46) we can let In the following lemma, we provide a sufficient condition for superlinear convergence using Algorithm 1 in terms of X̄^p_k, Ȳ^p_k: if then superlinear convergence in the sense of (40) using Algorithm 1 follows.
Proof A sufficient condition for superlinear convergence in the sense of (40) is (42). Now observe that for (42) to hold, it is sufficient to have By (47), P_k given by (60) satisfies By (63), (62) holds if and only if Hence, since (62) is sufficient for (40) to hold, the lemma follows by applying (56) and (57) to (64).
The system of equations in (59) relates (X_k, Y_k), through (X̄_k, Ȳ_k), to (X̄^p_k, Ȳ^p_k), and gives us a way to validate conditions on (X_k, Y_k) for superlinear convergence using Algorithm 1 by showing that (61) holds. Before we provide such a condition in Theorem 3 below, let us observe the following.

Proposition 17
where the first equality holds by the definition of r̄_{0,k} in (55), and the second equality holds by (52), (53) and the structure of q in Proposition 16.
then Algorithm 1 is superlinearly convergent in the sense of (40).
Since (61) holds in this case, the theorem is proved.
Theorem 3 provides a sufficient condition for superlinear convergence using Algorithm 1 on any SDLCP that satisfies Assumptions 1(a), (c) and Assumption 2. This sufficient condition is similar to that found in [24,34] for Algorithm 1 with the HKM search direction. We have shown in the above theorem that the same condition is also sufficient for superlinear convergence using Algorithm 1 with the NT search direction. A superlinear convergence result has been established in [16] using the NT search direction by "narrowing" the neighborhood of the central path, although [16] considers a feasible algorithm, while here we consider an infeasible algorithm, with more involved analysis.
In the following theorem, we give another sufficient condition for superlinear convergence using Algorithm 1, on SDLCPs that have a certain structure.

Theorem 4 Let A, B be such that for all . Furthermore, let q satisfy either one of the following two conditions: if q satisfies the first condition, and if q satisfies the second condition, then the iterates generated by Algorithm 1 converge superlinearly in the sense of (40).
Proof We only need to show the theorem when the first condition on q is satisfied; the proof when the second condition holds is similar. By Remark 3 and W̄_k Ȳ_k W̄_k = X̄_k, we have Let (X̄^*, Ȳ^*) be any accumulation point of {(X̄_k, Ȳ_k)} as k tends to infinity, with W̄^* the corresponding accumulation point of {W̄_k}. Then superlinear convergence using Algorithm 1 follows. Only the strict complementarity assumption and a suitable initial iterate are needed. In fact, we see from Corollary 1 that if the linear semi-definite feasibility problem has a primal feasible region with nonempty interior in the case when C = 0 in (P), and has a dual feasible region with nonempty interior in the case when b_j = 0, j = 1, …, m, in (D), then Algorithm 1 is always superlinearly convergent, irrespective of the starting point (X_0, Y_0). This is so because the conditions in the corollary are satisfied trivially, as k_0 = n in the former case and k_0 = 0 in the latter case. That is, in these cases, the matrices that we consider are no longer partitioned into 4 blocks, and we have i_1 = ñ and i_2 = 0 in both cases.

Numerical study
In this section, we report the numerical results obtained upon applying Algorithm 1 to solve instances of the SDLCP (1)–(3). The algorithm is implemented in Matlab R2018a scripts and run on a personal computer with an Intel(R) Core(TM) i5-4460 CPU and 8 GB of memory. In existing SDP solvers, such as SeDuMi [37] and SDPT3 [40], there is an option for the equation system that finds the corrector step in the interior point algorithm to have a second order term. Having the second order term tends to enhance the practical efficiency of the algorithm. However, we decided to perform our numerical experiments without introducing a second order term in the equation system that finds the corrector step in Algorithm 1. This is mainly because we observed more numerical warnings, when running our Matlab programs with the second order term introduced, as iterates get close to an optimal solution for the instances of the SDLCP (1)–(3) we tested. It is also worth noting that, with or without the second order term in the equation system for the corrector step, the number of iterations needed to solve an instance of the SDLCP (1)–(3), namely a linear semi-definite feasibility problem (LSDFP) with using our implemented algorithm, differs only by 1 iteration when the tolerance is set to 10^{-10}. In all our numerical experiments, we set β_1 = 0.3, β_2 = 0.45 and the tolerance to 10^{-10} in Algorithm 1.
We generate random instances of the SDLCP (1)–(3) by first generating diagonal matrices D_Â, D_B̂ ∈ ℝ^{ñ×ñ}, where ñ = n(n + 1)/2, such that the main diagonal entries of D_B̂ are randomly drawn from the uniform distribution between −5 and −1, and the main diagonal entries of D_Â are zero or nonzero with equal probability. If a main diagonal entry of D_Â is nonzero, then it is randomly assigned a value from the uniform distribution between 0 and 4. We obtain the matrices Â, B̂ ∈ ℝ^{ñ×ñ} by where U, V ∈ ℝ^{ñ×ñ} are randomly generated orthogonal matrices; see [31]. A, B ∈ ℝ^{ñ×ñ} are then obtained from Â, B̂ by interchanging corresponding columns in the latter matrices when a random number generated from the uniform (0, 1) distribution is less than 0.5, and keeping these columns when the random number is greater than or equal to 0.5. A, B thus obtained satisfy Assumption 1(a), (c). On the other hand, Assumption 1(b) is satisfied by setting q to be A(svec(I_{n×n})) + B(svec(I_{n×n})). This choice of initial iterate (X_0, Y_0) is motivated by [40].
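The instance generator described above can be sketched as follows. The displayed formula combining U, V with the diagonal factors is elided in the text, so Â = U D_Â V^T, B̂ = U D_B̂ V^T is an assumption here, as is the use of QR factorization for generating random orthogonal matrices:

```python
import numpy as np

def svec(X):
    n = X.shape[0]; r = np.sqrt(2.0)
    return np.array([X[i, j] * (1.0 if i == j else r)
                     for j in range(n) for i in range(j + 1)])

def random_sdlcp_data(n, rng):
    nt = n * (n + 1) // 2
    d_B = rng.uniform(-5.0, -1.0, nt)                   # diag of D_B-hat in [-5, -1]
    nonzero = rng.random(nt) < 0.5                      # zero/nonzero with prob 1/2
    d_A = np.where(nonzero, rng.uniform(0.0, 4.0, nt), 0.0)
    U, _ = np.linalg.qr(rng.standard_normal((nt, nt)))  # random orthogonal U
    V, _ = np.linalg.qr(rng.standard_normal((nt, nt)))  # random orthogonal V
    A_hat = U @ np.diag(d_A) @ V.T                      # assumed combination (elided)
    B_hat = U @ np.diag(d_B) @ V.T
    swap = rng.random(nt) < 0.5                         # interchange matching columns
    A, B = A_hat.copy(), B_hat.copy()
    A[:, swap], B[:, swap] = B_hat[:, swap], A_hat[:, swap]
    q = (A + B) @ svec(np.eye(n))                       # q = A(svec(I)) + B(svec(I))
    return A, B, q

rng = np.random.default_rng(0)
A, B, q = random_sdlcp_data(4, rng)
assert A.shape == (10, 10) and B.shape == (10, 10) and q.shape == (10,)
```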
For each n from 5 to 15, we attempt to solve, using Algorithm 1, 100 randomly generated instances of the SDLCP (1)–(3) of size n, with the initial iterate (X_0, Y_0) as described in the last paragraph. For each n, we compute the average number of iterations taken and the average runtime for the algorithm to terminate over those instances that give real-valued (X_k, Y_k) upon termination. We denote by I_n the number of these instances. Our results are given in Table 1.
Comparing our implemented algorithm with existing solvers on the LSDFP with data given by (71): our implemented algorithm needs 12 iterations and a runtime of 0.02 s before termination, while SDPT3 needs 7 iterations with a runtime of 0.13 s as reported by the software (OPTIONS.gaptol in SDPT3 is set to 1e−10), and SeDuMi needs 5 iterations with a runtime of 1.28 s as reported by the software (pars.eps in SeDuMi is set to 1e−14). The initial iterate (X_0, Y_0) used for our implemented algorithm on this LSDFP is (72). The same initial iterate is also used when solving this problem using SDPT3. We now report on our numerical investigation of the local convergence of Algorithm 1 when it is used to solve the LSDFP with data given by (71). It is easy to check that the LSDFP satisfies Assumption 2 with where x_1, x_2, y_1, y_2, y_3 are constrained so that these matrices are positive semi-definite. It is also easy to check that when the initial iterate (X_0, Y_0) in Algorithm 1 is chosen to be given by (72), the condition in Corollary 1 is satisfied. As predicted by our theoretical result in the corollary, the numerical results given in Table 2 show that superlinear convergence of the iterates generated by the implemented algorithm takes place when solving this LSDFP. It is interesting to note that with other choices of initial iterate (X_0, Y_0), we also observe superlinear convergence of the iterates generated by the implemented algorithm.

Conclusion
In this paper, we consider an infeasible predictor–corrector primal–dual path following interior point algorithm, using the Nesterov–Todd (NT) search direction, to solve a semi-definite linear complementarity problem (SDLCP). Global convergence of the algorithm on an SDLCP is shown, and an iteration complexity bound, polynomial in n, the size of the matrices involved, is also provided. This complexity bound is the best known so far for infeasible interior point algorithms using the "narrow" neighborhood for solving SDPs. Furthermore, we study superlinear convergence of the algorithm under the strict complementarity assumption. Two sufficient conditions are provided for this to occur. The first is a condition on the behavior of iterates generated by the algorithm, while the second is a condition on the structure of the SDLCP. We finally report preliminary numerical results obtained upon implementing the interior point algorithm and using it to solve SDLCPs that are not necessarily SDPs.

Table 2
Convergence behavior of iterates generated by the implemented interior point algorithm on the LSDFP with data given by (71): k, X_k • Y_k / X_{k−1} • Y_{k−1}