Interior-point algorithm for sufficient LCPs based on the technique of algebraically equivalent transformation

We present a short-step interior-point algorithm (IPA) for sufficient linear complementarity problems (LCPs) based on a new search direction. An algebraic equivalent transformation (AET) is applied to the centrality equation of the central path system, and Newton's method is applied to the modified system. This technique was introduced by Zsolt Darvay for linear optimization in 2002. Darvay first used the square root function as the AET, and in 2012 Darvay et al. applied the technique with another transformation, formed by the difference of the identity map and the square root function. We apply the AET technique with this new function to transform the central path equation of sufficient LCPs, which leads to new search directions for the problem. We introduce an IPA with full Newton steps and prove that its iteration bound coincides with the best known one for sufficient LCPs. We present numerical results to illustrate the performance of the proposed IPA on two significantly different sets of test problems and compare it with related variants of IPAs.


Introduction
The standard LCP is a classical problem of optimization theory. Given a matrix M ∈ R^{n×n} and a vector q ∈ R^n, our task is to find a vector pair (x, s) ∈ R^n × R^n satisfying

−Mx + s = q,  xs = 0,  x, s ≥ 0,

where xs stands for the Hadamard (componentwise) product of the two vectors.
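As a small illustration, the defining conditions of the standard LCP can be checked numerically. The sketch below uses a made-up 2×2 instance (not from the paper) and the rearrangement s = Mx + q of the linear equation.

```python
import numpy as np

# Hedged sketch: verify the standard LCP conditions -Mx + s = q, xs = 0, x, s >= 0
# for a candidate pair (x, s); the instance below is a hypothetical illustration.
def is_lcp_solution(M, q, x, s, tol=1e-9):
    feasible = np.allclose(-M @ x + s, q, atol=tol)   # linear constraint
    nonneg = np.all(x >= -tol) and np.all(s >= -tol)  # nonnegativity
    complementary = np.max(np.abs(x * s)) <= tol      # Hadamard product is zero
    return feasible and nonneg and complementary

M = np.array([[2.0, 1.0], [1.0, 2.0]])  # positive definite, hence sufficient
q = np.array([-1.0, 1.0])
x = np.array([0.5, 0.0])
s = M @ x + q                           # s = Mx + q, rearranged from -Mx + s = q
print(is_lcp_solution(M, q, x, s))      # True
```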
There are different versions of LCPs, such as horizontal-LCP, mixed LCP, geometric LCP. Anitescu et al. [4] proved that the above versions of LCP can be transformed into the standard LCP. The primal-dual pair of linear or convex quadratic optimization problems can be reformulated as an LCP. Furthermore, several important practical problems lead to LCP. For a comprehensive treatment of LCPs theory and applications, we refer the reader to the monographs of Cottle et al. [8] and Kojima et al. [35].
Cottle et al. [9] introduced the concept of sufficient matrices. A matrix M ∈ R^{n×n} is called column sufficient if for all x ∈ R^n we have

x_i(Mx)_i ≤ 0 for all i ∈ I  ⟹  x_i(Mx)_i = 0 for all i ∈ I,

where I = {1, 2, ..., n} is an index set. Moreover, M is a row sufficient matrix if M^T is column sufficient. A matrix is called sufficient if it is both row and column sufficient.
The sufficient matrix class can be defined in a different way, through the P*-matrices. The concept of P*-matrices was introduced by Kojima et al. [35]. Let κ ≥ 0 be a nonnegative number. We call the matrix M ∈ R^{n×n} a P*(κ)-matrix if

(1 + 4κ) Σ_{i ∈ I+(x)} x_i(Mx)_i + Σ_{i ∈ I−(x)} x_i(Mx)_i ≥ 0

holds for any x ∈ R^n, where I+(x) = {i ∈ I | x_i(Mx)_i > 0} and I−(x) = {i ∈ I | x_i(Mx)_i < 0}. We may define the set of P*-matrices as the union of the sets of P*(κ)-matrices:

P* = ∪_{κ ≥ 0} P*(κ).

Kojima et al. [35] proved that a P*-matrix is also column sufficient. Subsequently, Guu and Cottle [28] proved that a P*-matrix is also row sufficient. Väliaho [42] proved the reverse inclusion; therefore the class of sufficient matrices and that of P*-matrices coincide. By the definition of the P*-matrix class, a matrix is sufficient if and only if it is a P*(κ)-matrix for some κ ≥ 0. The smallest κ with this property is called the handicap of the matrix [43].
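The defining inequality also suggests a simple numerical experiment: for a fixed vector y with y^T My < 0 one can compute the smallest κ(y) that balances the positive and negative parts. The helper below sketches this computation (kappa_of is our hypothetical name, not from the paper); note that the handicap is the supremum of κ(y) over all such y, so a single vector only yields a lower bound.

```python
import numpy as np

# For a fixed vector y, find the smallest kappa >= 0 with
# (1 + 4*kappa) * sum_{I+} y_i (My)_i + sum_{I-} y_i (My)_i >= 0.
def kappa_of(M, y):
    t = y * (M @ y)                       # componentwise products y_i (My)_i
    pos, neg = t[t > 0].sum(), t[t < 0].sum()
    if pos + neg >= 0:
        return 0.0                        # inequality already holds with kappa = 0
    if pos == 0:
        return np.inf                     # nothing to compensate with: M is not P*
    return -(pos + neg) / (4.0 * pos)

M = np.array([[1.0, 4.0], [0.0, 1.0]])    # small illustrative matrix
y = np.array([1.0, -1.0])                 # here y^T M y = -2 < 0
print(kappa_of(M, y))                     # 0.5
```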
If the matrix of the LCP has the P*(κ) property, the LCP is called a P*(κ)-LCP. There are several approaches to solving P*(κ)-LCPs. Among them, IPAs have received much more attention than pivot algorithms [10-12, 22, 26, 27]. Since the primal-dual linear optimization (LO) problem pair can be formulated as a special class of LCPs, it is still an interesting question whether any IPA can be extended to solve P*(κ)-LCPs efficiently. Kojima et al. [35] were the first to follow this path and generalized a primal-dual path-following IPA to P*(κ)-LCPs. Their algorithm has O((1 + κ)√n L) iteration complexity, where L is the input size of the problem with integer data. This complexity is still the best one for solving P*(κ)-LCPs. Since then, many IPAs introduced for solving LO problems have been extended to P*(κ)-LCPs [32].
Let us define the transformation that Darvay introduced and analysed for the central path of the LO problem: let ϕ : (ξ, ∞) → R be a continuously differentiable, monotone increasing function, where 0 ≤ ξ < 1 is a fixed value depending on the function ϕ. In view of [13,18], we may say that the first interesting transformation of the nonlinear equation of the central path is the identity map ϕ(t) = t, which has been used widely in the area of LO (for details see Roos et al. [40]). The first monotone function analysed by Darvay [13] was ϕ(t) = √t. Recently, Darvay et al. [18] used ϕ(t) = t − √t. Naturally, for the functions used in [13,18], the transformed central path equation, and hence the right-hand side of the Newton system, becomes more complicated, with considerable effects on the computations and the complexity analysis of the algorithm. Kheirfam [33] and Pirhaji et al. [39] investigated the function ϕ(t) = √t / (2(1 + √t)) for P*(κ)-LCPs and circular cone optimization, respectively.
Since the function ϕ used by Darvay and others needs to be a continuously differentiable, monotone increasing function, we refer to this transformation technique as the algebraic equivalent transformation of the central path equation.
Motivated by the mentioned works we propose a full-Newton step IPA for solving the P * (κ)-LCP based on the direction introduced by Darvay et al. [18]. We analyze the algorithm and obtain the complexity bound, which coincides with the best known result for P * (κ)-LCP, in this way showing that the direction presented in [18] can be used in IPAs for P * (κ)-LCPs. We mention that our approach is significantly different from [12], where criss-cross algorithms are analyzed. There are also important differences between our algorithm and the one presented in [32], which focuses on general LCPs, because we apply the AET technique, which is not considered in [32].
The outline of the rest of the paper is as follows. In Sect. 2 we recall some basic concepts and describe Darvay's method for LCPs. In Sect. 3, we present a full-Newton step feasible IPA for P*(κ)-LCPs using the direction induced by the function given in (5) and state two theorems about the convergence and complexity of the algorithm. In Sect. 4 we provide some numerical results to illustrate the performance of the proposed IPA and compare it with other variants. Finally, we give some conclusions in Sect. 5.
The notations used in this paper are presented in "Appendix A.1". Moreover, some important well-known results used in this paper are summarized in "Appendix A.2".
In the next section we consider the AET technique presented by Darvay [13,15] in order to obtain new search directions.

Newton system for the transformed central path equation
Let 0 ≤ ξ < 1 and ϕ : (ξ, ∞) → R be a continuously differentiable and invertible function. For a vector v ∈ R^n we define ϕ(v) componentwise:

ϕ(v) = [ϕ(v_1), ϕ(v_2), ..., ϕ(v_n)]^T.

We use an AET on the centrality condition of (26), obtaining the following equivalent system

−Mx + s = q,  ϕ(xs/μ) = ϕ(e),  x, s > 0,  (6)

where μ > 0. Furthermore, we apply Newton's method in order to approximate the solutions of (6). A simple calculation yields the following system

−MΔx + Δs = 0,  sΔx + xΔs = μ (ϕ(e) − ϕ(xs/μ)) / ϕ'(xs/μ),  (7)

where the division is understood componentwise. From Lemma 3 the above transformed Newton system has a unique solution if M is a P*(κ)-matrix.
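A numerical sketch of one Newton direction may be helpful. It assembles the 2n×2n system above for ϕ(t) = t − √t, with all divisions componentwise; the right-hand side formula follows our reconstruction of system (7), so treat it as an assumption rather than a transcript of the paper.

```python
import numpy as np

# One Newton direction for the AET with phi(t) = t - sqrt(t):
#   -M dx + ds = 0,
#    s*dx + x*ds = mu * (phi(e) - phi(xs/mu)) / phi'(xs/mu),
# where phi(1) = 0 and phi'(t) = 1 - 1/(2*sqrt(t)); requires xs/mu > e/4.
def newton_direction(M, x, s, mu):
    n = len(x)
    w = x * s / mu                                   # the vector v^2 = xs/mu
    rhs = -mu * (w - np.sqrt(w)) / (1.0 - 1.0 / (2.0 * np.sqrt(w)))
    A = np.block([[-M, np.eye(n)], [np.diag(s), np.diag(x)]])
    sol = np.linalg.solve(A, np.concatenate([np.zeros(n), rhs]))
    return sol[:n], sol[n:]

# On the central path (xs = mu*e) the right-hand side vanishes, so dx = ds = 0.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
dx, ds = newton_direction(M, np.ones(2), np.ones(2), 1.0)
print(dx, ds)   # [0. 0.] [0. 0.]
```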
This type of search direction was introduced by Zsolt Darvay for linear optimization. In [15] he used the function ϕ(t) = √t and later in [18] the function ϕ(t) = t − √t. There are several generalizations of Darvay's method with the square root function [34,46,47].
In this paper we use the function ϕ(t) = t − √ t to obtain a new search direction for short step IPA that solves P * (κ)-LCPs. We mention that this direction has been applied for the first time for LCPs in [17] and for horizontal LCPs in [19].

Scaling the Newton system
Scaling is a usual technique in the area of IPAs (see [40]). Its aim is to simplify the analysis of IPAs. We use the following notations

v = √(xs/μ),  d = √(x/s),  d_x = vΔx/x,  d_s = vΔs/s.  (8)

Using (8), a simple computation yields the following scaled Newton system

−M̄ d_x + d_s = 0,  d_x + d_s = p_v,  (9)

where M̄ = DMD with D = diag(d), and

p_v = (ϕ(e) − ϕ(v²)) / (v ϕ'(v²)).

For more details on obtaining the vector p_v we refer to [13].

Centrality measure and some bounds
We use the following centrality measure

δ(x, s; μ) = ||p_v|| / 2.  (10)

This measure meets our expectations: it is nonnegative, and it is zero if and only if the vector pair (x, s) lies on the central path C. In this paper we use the function given by (5), which was investigated for the first time in [16]. This function is not invertible on R+, but it is invertible on the interval [1/4, ∞). Thus, we can apply ϕ to v² = xs/μ if all coordinates of this vector are greater than or equal to 1/4. For the function (5) we get

p_v = 2(v − v²) / (2v − e).

Using Lemma 3 for the system (7) we can easily prove the following inequalities.
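Under this form of p_v, the measure can be evaluated directly; the snippet below is a sketch under that reconstruction, valid when every coordinate of v² = xs/μ exceeds 1/4.

```python
import numpy as np

# delta(x, s; mu) = ||p_v|| / 2 = || (v - v^2) / (2v - e) ||  with v = sqrt(xs/mu),
# assuming every coordinate of v^2 = xs/mu is above 1/4.
def delta(x, s, mu):
    v = np.sqrt(x * s / mu)
    return np.linalg.norm((v - v * v) / (2.0 * v - 1.0))

# On the central path xs = mu*e, hence v = e and the measure is zero.
x = np.array([2.0, 0.5]); s = np.array([0.5, 2.0]); mu = 1.0
print(delta(x, s, mu))   # 0.0
```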

The IPA and its analysis
In this section, we present a full-Newton step IPA for the P*(κ)-LCP. It is based on a new search direction coming from the system (7) using the function ϕ(t) = t − √t.

Algorithm 1:
The IPA for sufficient LCP
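Since the algorithm listing is compact, a runnable sketch may be useful. It follows the scheme described above: update μ by the factor (1 − θ) with the theoretical θ = 1/((9κ + 8)√n), take a full Newton step for ϕ(t) = t − √t, and stop once x^T s ≤ ε. The exact ordering of steps and the stopping rule are our assumptions, not a transcript of Algorithm 1.

```python
import numpy as np

# Hedged sketch of a full-Newton-step IPA for a P*(kappa)-LCP with the AET
# phi(t) = t - sqrt(t); (x, s) must be a strictly feasible starting pair.
def ipa(M, q, x, s, kappa=0.0, eps=1e-8, max_iter=10_000):
    n = len(x)
    mu = x @ s / n
    theta = 1.0 / ((9.0 * kappa + 8.0) * np.sqrt(n))   # theoretical barrier update
    for _ in range(max_iter):
        if x @ s <= eps:
            break
        mu *= 1.0 - theta                              # move the target mu-center
        w = x * s / mu                                 # the vector v^2 = xs/mu
        rhs = -mu * (w - np.sqrt(w)) / (1.0 - 1.0 / (2.0 * np.sqrt(w)))
        A = np.block([[-M, np.eye(n)], [np.diag(s), np.diag(x)]])
        d = np.linalg.solve(A, np.concatenate([np.zeros(n), rhs]))
        x, s = x + d[:n], s + d[n:]                    # full Newton step
    return x, s

# Example with q := -Me + e, so x0 = s0 = e is strictly feasible.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = -M @ np.ones(2) + np.ones(2)
x, s = ipa(M, q, np.ones(2), np.ones(2))
print(x @ s <= 1e-8)   # True: the pair is an eps-solution
```

In a practical implementation a damped step obtained from a ratio test would replace the full Newton step whenever feasibility is at risk.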

Analysis of the short step IPA
The next theorem states that Algorithm 1 is well-defined. This means that the full Newton step is feasible in each iteration and the initial conditions hold for the new iterate (x, s) ∈ F+ and μ, i.e. δ := δ(x, s; μ) < τ and v² > (1/4)e.
Taking square roots we obtain (11), thus the inequality v_+ > (1/2)e follows. Now we need to show that δ(x_+, s_+; μ_+) < τ. Based on the first term of the previous expression, we introduce the function f.
Using f we can write (14). The function f is continuous, monotonically decreasing and positive on the interval (1/2, ∞). Combining the properties of the function f with the inequality (11) we obtain (15) for all indices i ∈ I. Using the inequalities (14) and (15) we get (16). Bounding the last term yields an estimate where, in the last inequality, we used Corollary 1, the inequalities v² ≥ 2v − e ≥ 0 and the definition of δ given in (10). Using (16) and the previous estimation we are ready to bound δ(x_+, s_+; μ_+). Let us bound θ from above. Thus, using that the function g is decreasing, and applying the inequality (17), we obtain the following bound. Substituting the values of the parameters θ and τ and using the fact that c(κ) ≤ 3/2 + 4κ, we have δ(x_+, s_+; μ_+) ≤ 2/3. So far we have proved that Algorithm 1 is well defined. The iterations can be repeated because the initial conditions are inherited.

Complexity analysis of the short step IPA
In this subsection we prove a theorem dealing with the complexity of Algorithm 1. We show that this algorithm enjoys the best known theoretical polynomial complexity.
First we examine the duality gap. If the Newton step took us exactly to the μ-center, then (x_+)^T s_+ / n = μ would hold. The next lemma examines the relationship between the quantities (x_+)^T s_+ / n and μ.
Proof Let us compute x_+ s_+. We have (19). The last inequality follows from v ≥ (1/2)e, which implies (2v − e)/v² ∈ [0, 1]^n. From (19) we get (20). We use the simple estimation (21). From (20) and (21) we obtain the statement of the lemma.
Based on the previous lemma it is easy to derive the complexity result of the algorithm.

Theorem 2 Suppose that x⁰, s⁰ are strictly feasible, δ(x⁰, s⁰; μ⁰) < τ and (v⁰)² > (1/4)e. Then Algorithm 1 provides an ε-solution of the LCP after at most

⌈(9κ + 8)√n log(2nμ⁰/ε)⌉

iterations.
Proof Let us assume that we are in the (k + 1)-th iteration of our algorithm. At the beginning of the iteration we have x = x^k, s = s^k, μ = μ_k and δ_k = δ(x^k, s^k; μ_k).
Using Lemma 1 we have (22). Using the equality μ_k = (1 − θ)^k μ⁰ and the stopping criterion of our IPA, we conclude that the algorithm will surely stop if the inequality (23) holds. Taking the logarithm of (23) we get (24). Using the inequality log(1 − θ) ≤ −θ and the fact that θ = 1/((9κ + 8)√n), we get that the algorithm stops if k ≥ (9κ + 8)√n log(2nμ⁰/ε). A simple computation now proves the theorem.
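Under the stopping rule derived in this proof, the worst-case iteration count can be tabulated; the helper below is a sketch based on the bound k ≥ (9κ + 8)√n log(2nμ⁰/ε) as reconstructed above.

```python
import math

# Worst-case iteration count with theta = 1/((9*kappa + 8)*sqrt(n)):
# the IPA surely stops once k >= (9*kappa + 8)*sqrt(n)*log(2*n*mu0/eps).
def iteration_bound(n, kappa, mu0, eps):
    return math.ceil((9 * kappa + 8) * math.sqrt(n) * math.log(2 * n * mu0 / eps))

print(iteration_bound(100, 0.0, 1.0, 1e-8))   # 1898
```

The bound grows linearly in κ, which is why an exponentially large handicap (as for the Csizmadia matrix below) makes the theoretical θ impractically small.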

Corollary 2 Suppose that n ≥ 2, μ⁰ ≥ ε and the conditions of Theorem 2 hold. Then Algorithm 1 provides an ε-solution of the LCP after at most

⌈2(9κ + 8)√n log(nμ⁰/ε)⌉ = O((1 + κ)√n log(nμ⁰/ε))

iterations.
Proof Using n ≥ 2 and μ⁰ ≥ ε we obtain 2nμ⁰/ε ≤ n²(μ⁰)²/ε². Therefore, we have log(2nμ⁰/ε) ≤ 2 log(nμ⁰/ε), which implies the corollary.
Using the ε-solution provided by Corollary 2 we can apply the rounding procedure presented in Illés et al. [30] to obtain a maximally complementary solution in strongly polynomial time.

Numerical results
We implemented our algorithm in the C++ programming language using the object-oriented code developed by Darvay and Takó [20]. The main difference between our implementation and the analysed theoretical version of the algorithm is that in the implemented version of the PD IPA, the barrier update parameter θ is a given constant between 0 and 1, while in the theoretical version θ = 1/((9κ + 8)√n). The reason for this modification lies in the fact that usually the handicap is not known exactly or might be very (exponentially) large. In the latter case the barrier update parameter θ is much smaller than necessary for a reliable implementation.
However, the price of this deviation from the theoretical version of the algorithm is that there is no guarantee that the full step can be performed. In each iteration we calculated the maximum step length α_max using the classical ratio test, which guarantees that the new iterates x_+ = x + αΔx, s_+ = s + αΔs remain nonnegative. To ensure the strict feasibility of the new iterates we used a factor ρ = 0.95 to shorten the step length; thus the step length used is α_+ = ρ α_max.
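The ratio test and the safeguard factor can be sketched as follows; capping the step length at the full Newton step α = 1 is our assumption.

```python
import numpy as np

# Classical ratio test: the largest alpha in (0, 1] keeping x + alpha*dx and
# s + alpha*ds nonnegative, shortened by the safeguard factor rho = 0.95.
def step_length(x, s, dx, ds, rho=0.95):
    alpha_max = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if np.any(neg):
            alpha_max = min(alpha_max, np.min(-v[neg] / dv[neg]))
    return rho * alpha_max

x = np.array([1.0, 2.0]); dx = np.array([-2.0, 1.0])
s = np.array([1.0, 1.0]); ds = np.array([-0.5, -1.0])
print(step_length(x, s, dx, ds))   # 0.475, i.e. 0.95 * 0.5 (binding ratio at x_1)
```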
Another problem caused by choosing an arbitrary θ ∈ (0, 1) is related to the selection of the new value of μ. Since δ(x_+, s_+; μ) < τ may not hold for any 0 < τ < 1, the selection of the target μ value needs to be modified compared to the theoretical version of the PD IPA. Namely, the new target value of μ is computed as

μ_+ = (1 − θ) (x_+)^T s_+ / n,

where θ ∈ {0.999, 0.1} in the computations presented in this paper.
We compared three primal-dual algorithms based on the AET technique. The only difference between these methods was in the selection of the function ϕ. We considered the identity map, the square root function, and ours, given in (5).
We tested the above-mentioned algorithms on two different types of test problems. The first set of problems used sufficient matrices with positive handicap provided by Illés and Morapitiye [29]. For these problems we set the initial interior point as x⁰ = s⁰ = e, because the right-hand side vector was computed as q := −Me + e.
The second test set is based on the P-matrix proposed by Zsolt Csizmadia and presented in [23]: A_n = [a_ij]_{1≤i,j≤n}, where a_ii = 1, a_ij = −1 for j < i, and a_ij = 0 for j > i. The handicap satisfies κ(A_n) ≥ 2^{2n−8} − 1/4, as de Klerk and E.-Nagy [23] proved. The right-hand side vectors of the sufficient LCPs using the matrix A_n were computed as q := −A_n e + e; thus x⁰ = s⁰ = e are ideal initial interior points for starting the analysed IPAs.
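The construction can be sketched as follows; the entry pattern (unit diagonal, −1 strictly below it, 0 above) follows our reconstruction of the definition from [23], so treat it as an assumption.

```python
import numpy as np

# The Csizmadia matrix A_n as reconstructed here: a_ii = 1, a_ij = -1 for j < i,
# a_ij = 0 for j > i; with q := -A_n e + e the pair x0 = s0 = e is feasible.
def csizmadia(n):
    A = np.eye(n)
    A[np.tril_indices(n, k=-1)] = -1.0   # fill the strictly lower triangle
    return A

n = 4
A = csizmadia(n)
e = np.ones(n)
q = -A @ e + e                           # right-hand side used in the experiments
print(A[2])                              # [-1. -1.  1.  0.]
print(q)                                 # [0. 1. 2. 3.]
```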
Based on the definition (2) of P*(κ)-matrices, to a given vector y ∈ R^n and P*(κ)-matrix M with the property y^T My < 0 there corresponds a parameter κ(y) > 0 such that (2) holds with equality for the given vector y. Interestingly, κ(Δx) may take large values when the handicap of the given P*(κ)-matrix M is large, as for the Csizmadia matrix A_n.
Before presenting and analysing our computational results, let us briefly discuss, from the complexity point of view, the three implemented algorithms that differ only in the AET function.
Observe that, independently of the AET function used for transforming the central path equation (for ϕ(t) = t see [35]; for ϕ(t) = √t see [5]; and for ϕ(t) = t − √t see Corollary 2 of this paper), the best complexity results for primal-dual IPAs for sufficient LCPs are the same, namely O((1 + κ)√n log(n/ε)). Let us discuss the computational performance of the PD IPAs, differing only in the ϕ function used, on the two sufficient LCP test sets described earlier.

Computational results on LCPs with positive handicap
The sufficient matrices generated by Illés and Morapitiye [29] have positive, but not very large, handicap. Furthermore, these matrices were generated using the same rules and similar starting matrices, therefore they do not differ significantly from each other. This observation was also confirmed by the behaviour of the different versions of the PD IPAs on the first test set of problems. We tested the PD IPAs on sufficient LCPs of sizes n = 10, 20, 50, 100, 200, 500. For each of these sizes the test set contains 10 problems. Since for a given size the iteration numbers and CPU times were very similar, instead of giving results for separate problems we present a table containing the average number of iterations for the different sizes (Table 1). Table 1 reports the average number of iterations for LCPs based on the sufficient matrices with positive handicap presented in [29]; the barrier parameter update was θ = 0.999.

Computational results on LCPs with Csizmadia-matrix
The handicap of the Csizmadia matrix satisfies κ(A_n) ≥ 2^{2n−8} − 1/4 (see [23]). The worst-case complexity result with the default value of θ (see Algorithm 1) ensures that the IPA has polynomial iteration complexity depending on κ. Since we use much larger values of the update parameter θ in our computations, Theorem 2 gives no theoretical guarantee of a polynomial number of iterations for the IPA.
On the other hand, the implemented variants of the theoretical IPAs for the different AETs, with several safeguard modifications, performed very well, as we expected (see Tables 2 and 3; Table 3 contains the number of iterations for these LCPs). For instance, for A_100 and θ = 0.999, κ(Δx_k) decreased continuously from the very large number 1.88905 · 10^34 to 0, while in the meantime the step length α increased continuously from the very small number 7.49944 · 10^−18 to 1, that is, to a full Newton step (see [21]). Naturally, the gap decreases more when the step length is larger.
We tested the effect of the barrier parameter update θ by using different values of it. The results with θ = 0.1 (see Table 3) show that the selection of the AET function can have a minor effect on the iteration number of the PD IPA, depending on the value of θ.
However, in the case of θ = 0.1, the influence of the handicap of the given matrix on the increase of the iteration numbers can be observed as well.

Conclusions
In this article we have developed a new short-step path-following algorithm for solving LCPs. Our approach is a generalization of the results presented in [15]. We dealt with P*(κ)-LCPs and derived an algorithm with the currently best known complexity. We also provided numerical results that demonstrate the practical efficiency of the method.
Darvay and his coauthors, together with their followers, demonstrated that this approach can be used widely. Moreover, some other interesting questions can be raised which need further investigation. Can any Darvay-type AET be used to derive search directions for IPAs for different optimization problem classes? Is it possible to characterize those one-to-one transformations that are applicable to obtain new search directions for IPAs of different problem classes? Can we rank different (Darvay-type) search directions, in the sense that one generates significantly fewer iterations than another in a practical implementation? We think that there are possibilities to modify the complexity analysis of Darvay's IPAs for LO [13,18] with careful applications of ideas presented in the book of Roos et al. [40]. As further research we would like to extend the AET approach to the case of general LCPs analysed by Illés, Nagy and Terlaky [32]. Another question of interest is the following: is there a short-step IPA for P*(κ)-LCPs (or for a well-defined subclass of them) where the centrality parameter τ does not depend on κ? We suspect that there is not. Understanding this phenomenon requires further research as well.
Statement (iii) of the previous theorem says that the central path of a P*(κ)-LCP is unique.
Illés et al. [31] (see M. Nagy [38]) proved that under the condition F+ ≠ ∅ the set F* is compact as well. Finally, we state two important lemmas, since we use them later. Recall that P_0 denotes the class of matrices with all principal minors nonnegative [24,25].

Lemma 2 The matrix

M̃ = [ −M  I ; S  X ]

is nonsingular for any positive diagonal matrices X, S ∈ R^{n×n}, where I is the n-dimensional identity matrix, if and only if M is a P_0-matrix.
Lemma 2 was proved by Kojima et al. [35]. The next lemma states some well-known results regarding the Newton system (see for example Corollary 7 and Lemma 9 in [32]).