
A corrector–predictor interior-point method with new search direction for linear optimization

  • Zs. Darvay
  • T. Illés
  • B. Kheirfam
  • P. R. Rigó
Open Access

Abstract

We introduce a feasible corrector–predictor interior-point algorithm (CP IPA) for solving linear optimization problems which is based on a new search direction. The search directions are obtained by using the algebraic equivalent transformation (AET) of the Newton system which defines the central path. The AET of the Newton system is based on the map that is the difference of the identity function and the square root function. We prove global convergence of the method and derive an iteration bound that matches the best iteration bounds known for these types of methods. Furthermore, we demonstrate the practical efficiency of the new algorithm by presenting numerical results. This is the first CP IPA which is based on the above-mentioned search direction.

Keywords

Linear optimization · Corrector–predictor methods · New search directions · Polynomial complexity

Mathematics Subject Classification

90C05 · 90C51

1 Introduction

Karmarkar (1984) presented the first projective IPA with polynomial-time complexity for solving LO problems. Since then, numerous results related to this theory have been published. The theory and practice of IPAs can be found in the monographs of Roos et al. (1997), Wright (1997), Ye (1997) and Nesterov and Nemirovski (1994). IPAs for LO can be classified in several ways. One classification is based on the step length, distinguishing short- and long-step IPAs. In theory, short-update algorithms usually yield better complexity results with simpler analysis, while in practice the large-step versions generally perform better. Another classification is based on the feasibility of the iterates; hence, we can distinguish feasible and infeasible IPAs. A further class of IPAs that has proven efficient in practice is that of predictor–corrector IPAs. These algorithms consist of iterations using two types of steps: one predictor step and one or more corrector steps. For further details on the classification of IPAs see Illés and Terlaky (2002), Roos et al. (1997), Wright (1997) and Ye (1997).

The determination of the search directions plays a key role in the theory of IPAs. The most widely used technique for obtaining search directions is based on barrier functions. By considering self-regular functions, Peng et al. (2002) reduced the theoretical complexity of large-step IPAs. Darvay (2002) introduced a new technique for finding search directions for these algorithms, namely the algebraic equivalent transformation of the system which defines the central path. The central path was introduced independently by Sonnevend (1985, 1986) and by Megiddo (1989). Its importance in the literature of IPAs is highlighted by the fact that it is unique, see Roos et al. (1997), Terlaky (2001), Wright (1997) and Ye (1997). Sonnevend (1985, 1986) proved that the central path leads to a unique optimal solution called the analytic center. The general idea of IPAs is to trace the central path approximately and to compute an interior point, called an \(\varepsilon \)-optimal solution, that well approximates the analytic center. From an \(\varepsilon \)-optimal solution with small enough \(\varepsilon > 0\), an optimal solution can be computed in strongly polynomial time using the so-called rounding procedure (Illés and Terlaky 2002; Roos et al. 1997).

In the AET approach, a suitable function is applied to both sides of the nonlinear equation of the system that defines the unique central path. After the transformation the central path remains unique, and Newton's method is applied to the transformed system in order to determine the displacements. In the literature, the most widely used function for the AET is the identity map, that is, in most IPAs the central path is not transformed. In the papers of Darvay (2002, 2003), \(\psi (t)=\sqrt{t}\) is used, while in Darvay et al. (2016) the authors introduced an IPA for LO based on the direction obtained from a new function, namely \(\psi (t)=t-\sqrt{t}\), with domain \(D_{\psi }=\left( \frac{1}{4},\infty \right) .\) IPAs based on the AET (Darvay 2002, 2003; Darvay et al. 2016) usually achieve the best known iteration bounds, alongside many other IPAs (Roos et al. 1997; Wright 1997; Ye 1997). Further research is needed to investigate whether IPAs that use AETs are more efficient than IPAs that do not. It would also be interesting to identify classes of LO problems for which the application of AET-based IPAs may be beneficial.
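The admissibility of the map \(\psi (t)=t-\sqrt{t}\) hinges on its strict monotonicity, which guarantees invertibility on \(D_{\psi }\). A quick numerical check (an illustrative Python sketch, not part of the paper) confirms that \(\psi '(t)=1-\frac{1}{2\sqrt{t}}\) vanishes at \(t=\frac{1}{4}\) and is positive beyond it:

```python
import math

def psi(t):
    """AET map psi(t) = t - sqrt(t) of Darvay et al. (2016)."""
    return t - math.sqrt(t)

def psi_prime(t):
    """Derivative psi'(t) = 1 - 1/(2 sqrt(t))."""
    return 1.0 - 1.0 / (2.0 * math.sqrt(t))

# psi' vanishes at t = 1/4 and is positive for t > 1/4, so psi is
# strictly increasing, hence invertible, exactly on (1/4, infinity).
assert abs(psi_prime(0.25)) < 1e-12
assert all(psi_prime(0.25 + 10.0**(-k)) > 0.0 for k in range(1, 8))
assert psi(1.0) == 0.0               # the central-path target psi(e) = 0
```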

The first PC IPA was developed independently by Mehrotra (1992) and Sonnevend et al. (1990). PC IPAs consist of a predictor and several corrector steps in each main iteration. The aim of the predictor step is to approach the optimal solution of the problem in a greedy way. The usual consequence of the greedy predictor step is that the obtained strictly feasible solution no longer belongs to the given neighborhood of the central path. The goal of the corrector steps is to return the iterate to the designated neighborhood. Mizuno et al. (1993) proposed the first PC IPA which uses only one corrector step in a main iteration. Darvay (2005, 2009) introduced PC IPAs for LO that are based on the AET technique, using the function \(\psi (t)=\sqrt{t}\) with domain \(D_{\psi }=\left( 0, \infty \right) \) to determine the transformed central path and the modified Newton system. The unique solution of this system led to a new search direction. Kheirfam (2016, 2015) proposed CP IPAs for convex quadratic symmetric cone optimization and second-order cone optimization, respectively.

Before summarizing the structure and results of this paper, it is worthwhile to mention that IPAs for LO have been extensively generalized to linear complementarity problems (LCPs) (Cottle et al. 1992; Illés et al. 2010a, b; Kojima et al. 1991; Lešaja and Roos 2010; Potra and Sheng 1996; Yoshise 1996). There are several generalizations of the Mizuno–Todd–Ye PC IPA (Mizuno et al. 1993) from LO to sufficient LCPs, such as those of Potra (2002) and Illés and Nagy (2007). Recently, Potra (2014) published a new PC IPA for sufficient LCPs using a wide neighborhood with optimal iteration complexity. The AET method for determining search directions for IPAs has also been extended to LCPs (Achache 2010; Asadi and Mansouri 2013; Kheirfam 2014; Wang et al. 2009) and to LCPs over symmetric cones (Asadi et al. 2017a, b; Mohammadi et al. 2015; Wang 2012).

In this paper, a new CP IPA for LO is introduced. We use the AET method for the system which defines the central path, based on the function \(\psi (t)=t-\sqrt{t}\). Newton's method is then applied to the transformed system in order to find the search directions. The analysis of the algorithm is more complicated with this function. Nevertheless, we were able to prove global convergence of the method and derive an iteration bound that matches the best-known iteration bound for these types of methods. We also present some numerical results and compare our CP IPA with the classical primal-dual method, which is based on the same search direction and uses only one step in each iteration.

The paper is organized as follows. In Sect. 2, the primal-dual LO problem and the main concepts of the AET of the system defining the central path are given. In the following section, the new CP IPA is presented. Section 4 contains the analysis of the proposed algorithm, while in Sect. 5 the iteration bound for the algorithm is derived. In Sect. 6, we provide numerical results that demonstrate the efficiency of the algorithm. In the last section, some concluding remarks are provided.

2 Preliminaries

Consider the LO problem in the standard form
$$\begin{aligned} (P)~~~~~~~~~\min ~\{c^Tx:~Ax=b, ~x\ge 0\},~~~~~ \end{aligned}$$
and its dual problem
$$\begin{aligned} (D)~~~~~~~~~\max ~\{b^Ty:~A^Ty+s=c, ~ s\ge 0\}, \end{aligned}$$
where \(A\in \mathbb {R}^{m\times n}\) with \(\mathrm{rank}(A)=m\), \(b\in \mathbb {R}^m\) and \(c\in \mathbb {R}^n\). We assume that the interior-point condition (IPC) holds for both problems; that is, there exists \((x^0, y^0, s^0)\) such that
$$\begin{aligned} Ax^0=b,~A^Ty^0+s^0=c,~x^0>0,~s^0>0. \end{aligned}$$
Using the self-dual embedding model presented by Ye et al. (1994), Roos et al. (1997) and Terlaky (2001), we conclude that the IPC can be assumed without loss of generality. In this case, the all-one vector can be taken as a starting point.
Under the IPC, finding an optimal solution of the primal-dual pair is equivalent to solving the following system
$$\begin{aligned} \begin{array}{ccccccc} Ax=b,~~ x\ge 0,\\ A^Ty+s=c,~~ s\ge 0,~~~~~~\\ xs=0.~~~~~~~~~ \end{array} \end{aligned}$$
(1)
The main idea of primal-dual IPAs is to replace the third equation in (1), the so-called complementarity condition for (P) and (D), by the perturbed equation \(xs=\mu e\) with \(\mu >0\). Hence, we obtain the following system of equations:
$$\begin{aligned} \begin{array}{ccccccc} Ax=b,~~ x\ge 0,\\ A^Ty+s=c,~~ s\ge 0,~~~~~~\\ xs=\mu e.~~~~~~~ \end{array} \end{aligned}$$
(2)
It is proved in Roos et al. (1997) that system (2) has a unique solution \((x(\mu ), y(\mu ), s(\mu ))\) for any \(\mu >0\), provided that the IPC holds. The set of all such solutions forms a homotopy path, which is called the central path (see Megiddo 1989; Sonnevend 1986). As \(\mu \) tends to zero, the central path converges to an optimal solution of the problem.
In what follows, we recall the AET introduced by Darvay et al. (2016) for LO, which leads to the calculation of new search directions for IPAs. For this purpose, we consider the continuously differentiable function \(\psi : \mathbb {R}_+\rightarrow \mathbb {R}_+\), and assume that its inverse \(\psi ^{-1}\) exists. Note that the system (2) can be rewritten in the following form:
$$\begin{aligned} \begin{array}{ccccccc} Ax=b,~~ x\ge 0,\\ A^Ty+s=c,~~ s\ge 0,~~~~~~\\ \psi \left( \frac{xs}{\mu }\right) =\psi (e),~~~~~~~~~ \end{array} \end{aligned}$$
(3)
where \(\psi \) is applied componentwise. Applying Newton's method to system (3) at a strictly feasible solution \((x, y, s)\) produces the following system for the search direction \((\varDelta x, \varDelta y, \varDelta s)\):
$$\begin{aligned} \begin{array}{ccccccc} A\varDelta x=0,~\\ A^T\varDelta y+\varDelta s=0,~~~~~~~~~~\\ \frac{s}{\mu }\psi ^{'}\big (\frac{xs}{\mu }\big )\varDelta x+\frac{x}{\mu }\psi ^{'}\big (\frac{xs}{\mu }\big )\varDelta s=\psi (e)-\psi \big (\frac{xs}{\mu }\big ).~~~~~~~~~~~ \end{array} \end{aligned}$$
(4)
Let
$$\begin{aligned} v=\sqrt{\frac{xs}{\mu }}. \end{aligned}$$
Defining the scaled search directions as
$$\begin{aligned} d_x:=\frac{v\varDelta x}{x},~~~~~~~ d_s:=\frac{v\varDelta s}{s}, \end{aligned}$$
(5)
one easily verifies that system (4) can be written in the form
$$\begin{aligned} \begin{array}{ccccccc} {{\bar{A}}}d_x=0,\\ {{\bar{A}}}^T\frac{\varDelta y}{\mu }+d_s=0,~~~~~~~~\\ d_x+d_s=p_{v},~~ \end{array} \end{aligned}$$
(6)
where \({\bar{A}}:=A \, \mathrm{diag}(\frac{x}{v})\) and \(p_v:=\frac{\psi (e)-\psi (v^2)}{v\psi ^{'}(v^2)}.\) For different \(\psi \) functions (see Darvay 2002, 2003; Darvay et al. 2016; Roos et al. 1997), one gets different values of the vector \(p_v\), which lead to different search directions. Based on the idea of Darvay et al. (2016), we take \(\psi (t)=t-\sqrt{t}\), which gives
$$\begin{aligned} p_v=\frac{2(v-v^2)}{2v-e}. \end{aligned}$$
(7)
For analysis of our algorithm, we define a norm-based proximity measure \(\delta (x, s; \mu )\) as follows:
$$\begin{aligned} \delta (v):=\delta (x, s; \mu )=\frac{\Vert p_v\Vert }{2}=\Big \Vert \frac{v-v^2}{2v-e}\Big \Vert , \end{aligned}$$
(8)
which was considered for feasible IPAs for the first time in Darvay et al. (2016). From (6) we have \( d_x^T d_s=0\); thus, the vectors \( d_x\) and \( d_s\) are orthogonal. Using (8) and \(v>0\), one can easily verify that
$$\begin{aligned} \delta (v)=0 \Leftrightarrow v=e \Leftrightarrow xs=\mu e. \end{aligned}$$
Hence, the value of \(\delta (v)\) can be considered as an appropriate measure of the distance between the given triple \((x, y, s)\) and \((x(\mu ), y(\mu ), s(\mu ))\). Moreover, note that if \(\psi (t)=t\), then \(p_v=v^{-1}-v\) and we obtain the standard proximity measure \(\delta (v)=\frac{1}{2}\Vert v-v^{-1}\Vert \) given in Roos et al. (1997). From \(\psi (t)=\sqrt{t}\) it follows that \(p_v=2(e-v)\), thus \(\delta (v)=\Vert e-v\Vert \), which was discussed in Darvay (2002, 2003). Let
$$\begin{aligned} {} q_v= d_x - d_s. \end{aligned}$$
(9)
Then, the orthogonality of the vectors \( d_x\) and \( d_s\) implies
$$\begin{aligned} \Vert p_v\Vert =\Vert q_v\Vert . \end{aligned}$$
As a consequence of this relation, we can also express the proximity measure using \(q_v\), thus
$$\begin{aligned} \delta ( x, s; \mu ) = \frac{\Vert q_v\Vert }{2}. \end{aligned}$$
Furthermore,
$$\begin{aligned} d_x=\frac{ p_v+ q_v}{2}\quad \text{ and }\quad d_s=\frac{ p_v- q_v}{2}, \end{aligned}$$
thus
$$\begin{aligned} d_x d_s=\frac{ p_v^2- q_v^2}{4}, \end{aligned}$$
(10)
holds.
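To make the construction concrete, the scaled system (6) can be solved by eliminating \(d_s\) and forming the normal equations. The following Python sketch (illustrative only; the data are toy values, not taken from the paper) computes \(d_x\), \(d_s\) and the proximity measure, and verifies the orthogonality \(d_x^Td_s=0\):

```python
import numpy as np

# Toy strictly positive data (illustrative): A of full row rank,
# x > 0, s > 0, barrier parameter mu chosen so that v_i > 1/2.
A  = np.array([[1.0, 1.0, 1.0]])
x  = np.array([1.0, 2.0, 1.0])
s  = np.array([2.0, 1.0, 2.0])
mu = 1.5

v    = np.sqrt(x * s / mu)                 # scaled point
Abar = A * (x / v)                         # bar(A) = A diag(x/v)
p_v  = 2.0 * (v - v**2) / (2.0 * v - 1.0)  # right-hand side (7)

# Eliminate d_s = -Abar^T w (with w = Delta y / mu) from (6); the
# condition Abar d_x = 0 gives the normal equations for w.
w   = np.linalg.solve(Abar @ Abar.T, -Abar @ p_v)
d_s = -Abar.T @ w
d_x = p_v - d_s

delta = np.linalg.norm(p_v) / 2.0          # proximity measure (8)

assert np.allclose(Abar @ d_x, 0.0)        # d_x in the null space of bar(A)
assert np.allclose(d_x + d_s, p_v)         # third equation of (6)
assert abs(float(d_x @ d_s)) < 1e-10       # orthogonality of d_x and d_s
```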

The lower and upper bounds on the components of the vector v are given in the following lemma.

Lemma 1

[cf. Lemma 2 in Kheirfam (2018)] If \(\delta :=\delta (v)\), then
$$\begin{aligned} \frac{1}{2}+\frac{1}{4\rho (\delta )}\le v_i\le \frac{1}{2}+\rho (\delta ),\quad ~i=1, \ldots , n, \end{aligned}$$
where \(\rho (\delta )=\delta +\sqrt{\frac{1}{4}+\delta ^2}\).
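These bounds are tight in the scalar case: \(v=2\) gives \(\delta =\frac{2}{3}\), \(\rho (\delta )=\frac{3}{2}\) and an upper bound of exactly 2. A randomized spot-check of the lemma (illustrative Python, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def delta(v):
    """Proximity measure (8) for a scaled point v with v_i > 1/2."""
    return np.linalg.norm((v - v**2) / (2.0 * v - 1.0))

def rho(d):
    return d + np.sqrt(0.25 + d**2)

# Scalar sanity check: v = 2 gives delta = 2/3 and rho = 3/2.
assert np.isclose(rho(delta(np.array([2.0]))), 1.5)

# Randomized spot-check of the componentwise bounds of Lemma 1.
for _ in range(1000):
    v = rng.uniform(0.55, 3.0, size=5)
    r = rho(delta(v))
    assert np.all(v >= 0.5 + 1.0 / (4.0 * r) - 1e-12)
    assert np.all(v <= 0.5 + r + 1e-12)
```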

3 Corrector–predictor algorithm

In this section, we present a CP path-following algorithm for LO problems based on the idea of Darvay et al. (2016). For this purpose, we define a \(\tau \)-neighborhood of the central path as follows:
$$\begin{aligned} {{\mathcal {N}}}(\tau ):=\{(x, y, s): Ax=b, ~A^Ty+s=c,~ x>0,~ s>0, \delta (x, s; \mu )\le \tau \}, \end{aligned}$$
where \(0< \tau < 1\). The algorithm begins with a given strictly feasible primal-dual solution \((x^0, y^0,s^0) \in {{\mathcal {N}}}(\tau )\). If for the current iterate \((x, y, s)\) we have \(n\mu >\epsilon \), then the algorithm calculates a new iterate by performing corrector and predictor steps. In the corrector step, we define \(v=\sqrt{\frac{xs}{\mu }}\), \({{\bar{A}}}=A \, \mathrm{diag}(\frac{x}{v})\) and we obtain the scaled search directions \(d_x\) and \(d_s\) by solving (6) with \(p_v\) given in (7), namely
$$\begin{aligned} \begin{array}{ccccccc} {{\bar{A}}}d_x=0,~~\\ {{\bar{A}}}^T\frac{\varDelta y}{\mu }+d_s=0,~~~~~~~~~~~\\ d_x+d_s=\frac{2(v-v^2)}{2v-e}. \end{array} \end{aligned}$$
(11)
The Newton directions of the original system (4), i.e., \(\varDelta x=\frac{x}{v}d_x\) and \(\varDelta s=\frac{s}{v}d_s\), can easily be recovered, and the corrector iterate is obtained by a full-Newton step as follows:
$$\begin{aligned} (x^+, y^+, s^+):=(x, y, s)+(\varDelta x, \varDelta y, \varDelta s). \end{aligned}$$
In the predictor step, we define
$$\begin{aligned} v^+=\sqrt{\frac{x^+s^+}{\mu }}, \quad {{\bar{A}}}_+=A\mathrm{diag} \, \Big (\frac{x^+}{v^+}\Big ), \end{aligned}$$
and we obtain the search directions \(d^p_x\) and \(d^p_s\) by solving the following system:
$$\begin{aligned} \begin{array}{ccccccc} {{\bar{A}}}_+d^p_x=0,~~\\ {{\bar{A}}}_+^T\frac{{\varDelta ^p y}}{\mu }+d^p_s=0,~~~~~~~~\\ d^p_x+d^p_s=-2v^+. \end{array} \end{aligned}$$
(12)
Note that the right-hand side of the system (12) is inspired by the predictor step proposed in Darvay (2005). Similarly to \(\varDelta x\) and \(\varDelta s\), we define \(\varDelta ^px=\frac{x^+}{v^+}d^p_x, \varDelta ^ps=\frac{s^+}{v^+}d^p_s\) and the predictor iterate is obtained by
$$\begin{aligned} (x^p, y^p, s^p):=(x^+, y^+, s^+)+\theta (\varDelta ^px, \varDelta ^py, \varDelta ^ps), \end{aligned}$$
where \(\theta \in (0, \frac{1}{2})\) and \(\mu ^p=(1-2\theta )\mu \). At the beginning of the algorithm, we assume that \((x^0, y^0,s^0) \in {{\mathcal {N}}}(\tau )\). We would like to determine the values of \(\tau \) and \(\theta \) in such a way that after a corrector step \((x^+,y^+,s^+) \in {{\mathcal {N}}}(\omega (\tau ))\) (where \(\omega (\tau ) < \tau \) will be defined later) and after a predictor step \((x^p,y^p,s^p) \in {{\mathcal {N}}}(\tau )\). The algorithm alternates corrector and predictor steps until \(x^Ts\le \epsilon \) is satisfied. A formal description of the algorithm is given in Fig. 1.
Fig. 1

The algorithm
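The main loop admits a compact implementation. The sketch below (illustrative Python, not the authors' C++ code) performs the corrector step with right-hand side \(p_v\) from (7) and the predictor step with right-hand side \(-2v^+\), on a tiny instance whose perfectly centered starting point \((e, 0, e)\) lies in \({{\mathcal {N}}}(\tau )\); neighborhood checks and other safeguards are omitted:

```python
import numpy as np

def solve_scaled(Abar, rhs):
    """Solve  Abar d_x = 0,  Abar^T w + d_s = 0,  d_x + d_s = rhs."""
    w   = np.linalg.solve(Abar @ Abar.T, -Abar @ rhs)
    d_s = -Abar.T @ w
    return rhs - d_s, d_s, w                 # d_x, d_s, w = Delta y / mu

def cp_ipa(A, x, y, s, eps=1e-6):
    """Corrector-predictor IPA sketch with psi(t) = t - sqrt(t).
    Assumes a strictly feasible start inside N(tau)."""
    n     = len(x)
    theta = 1.0 / (5.0 * np.sqrt(n))         # Theorem 1
    mu    = x @ s / n
    while x @ s > eps:
        # corrector: full Newton step for the transformed system
        v = np.sqrt(x * s / mu)
        p_v = 2.0 * (v - v**2) / (2.0 * v - 1.0)
        d_x, d_s, w = solve_scaled(A * (x / v), p_v)
        x, y, s = x + (x / v) * d_x, y + mu * w, s + (s / v) * d_s
        # predictor: greedy step with right-hand side -2 v^+
        v = np.sqrt(x * s / mu)
        d_x, d_s, w = solve_scaled(A * (x / v), -2.0 * v)
        x = x + theta * (x / v) * d_x
        y = y + theta * mu * w
        s = s + theta * (s / v) * d_s
        mu = (1.0 - 2.0 * theta) * mu
    return x, y, s

# Tiny illustrative instance: min x1 + x2  s.t.  x1 + x2 = 2, x >= 0;
# (x, y, s) = (e, 0, e) is perfectly centered, hence inside N(tau).
A = np.array([[1.0, 1.0]])
x, y, s = cp_ipa(A, np.ones(2), np.zeros(1), np.ones(2))
assert x @ s <= 1e-6 and np.all(x > 0) and np.all(s > 0)
assert np.allclose(A @ x, 2.0)               # primal feasibility kept
```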

4 Analysis of the algorithm

The following technical lemma, introduced by Wright (1997), is a generalization of Lemma C.4 (the first \(u{-}v\) lemma) in Roos et al. (1997). We will use it to estimate the norm of the product of the scaled search directions.

Lemma 2

[Lemma 5.3 in Wright (1997)] Let u and v be two arbitrary vectors in \(\mathbb {R}^n\) with \(u^Tv\ge 0\). Then
$$\begin{aligned} \Vert uv\Vert \le \frac{1}{2\sqrt{2}}\Vert u+v\Vert ^2. \end{aligned}$$
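Since the scaled directions satisfy \(d_x^Td_s=0\), the lemma applies with \(u=d_x\) and \(v=d_s\), bounding \(\Vert d_xd_s\Vert \) by \(\frac{1}{2\sqrt{2}}\Vert d_x+d_s\Vert ^2\). A randomized spot-check of the inequality (illustrative Python, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Randomized spot-check of Lemma 2 (Wright's Lemma 5.3): for u^T v >= 0,
# the componentwise product satisfies ||u v|| <= ||u + v||^2 / (2 sqrt(2)).
for _ in range(1000):
    u, v = rng.normal(size=8), rng.normal(size=8)
    if u @ v < 0:
        v = -v                       # enforce the hypothesis u^T v >= 0
    lhs = np.linalg.norm(u * v)
    rhs = np.linalg.norm(u + v)**2 / (2.0 * np.sqrt(2.0))
    assert lhs <= rhs + 1e-12
```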

In the following two sections, we will analyse the predictor and the corrector steps in detail, respectively. Note that the first step performed by the algorithm is in fact a corrector step.

4.1 The predictor step

The next lemma provides a sufficient condition for strict feasibility after a predictor step.

Lemma 3

Let \((x^+, y^+, s^+)\) be a strictly feasible primal-dual solution obtained after a corrector step and \(\mu >0\). Furthermore, let \(0<\theta <\frac{1}{2}\), and
$$\begin{aligned} x^p=x^++\theta \varDelta ^px,\quad y^p=y^++\theta \varDelta ^py, \quad s^p=s^++\theta \varDelta ^ps, \end{aligned}$$
denote the iterates after a predictor step. Then \((x^p, y^p, s^p)\) is a strictly feasible primal-dual solution if
$$\begin{aligned} h(\delta _+, \theta , n):=\left[ \frac{1}{2}+\frac{1}{4\rho (\delta _+)}\right] ^2-\frac{\sqrt{2}\theta ^2 n}{1-2\theta }\left[ \frac{1}{2}+\rho (\delta _+)\right] ^2 > 0, \end{aligned}$$
where \(\delta _+:=\delta (x^+, s^+; \mu ).\)

Proof

For each \(0\le \alpha \le 1\), denote \(x^p(\alpha )=x^++\alpha \theta \varDelta ^px\) and \(s^p(\alpha )=s^++\alpha \theta \varDelta ^ps.\) Therefore, using the third equation in (12), we obtain
$$\begin{aligned} {x^p(\alpha ) \, s^p(\alpha )}= & {} \frac{x^+s^+}{(v^+)^2} \, \left( v^+ + \alpha \theta d^p_x\right) \, \left( v^+ + \alpha \theta d^p_s\right) ~~~~~~~~~~ \nonumber \\= & {} \mu \, \left[ (v^+)^2+\alpha \theta v^+\left( d^p_x+d^p_s\right) +\alpha ^2\theta ^2 d^p_xd^p_s \right] \nonumber \\= & {} \mu \, \left[ (1-2\alpha \theta )(v^+)^2+\alpha ^2\theta ^2d^p_xd^p_s\right] .~~~~~~~~~ \end{aligned}$$
(13)
From (13), it follows that
$$\begin{aligned} \min \left[ \frac{x^p(\alpha )s^p(\alpha )}{(1-2\alpha \theta )\mu }\right]= & {} \min \left[ (v^+)^2+\frac{\alpha ^2\theta ^2}{1-2\alpha \theta }d^p_xd^p_s\right] ~~~~~~~~~~~~~~~~~~\\\ge & {} \min \big ((v^+)^2\big )-\frac{\alpha ^2\theta ^2}{1-2\alpha \theta }\big \Vert d^p_xd^p_s\big \Vert _{\infty }~~~~~~~~~~~~\\\ge & {} \min \big ((v^+)^2\big )-\frac{\theta ^2}{1-2\theta }\big \Vert d^p_xd^p_s\big \Vert _{\infty }~~~~~~~~~~~~~~\\\ge & {} \left[ \frac{1}{2}+\frac{1}{4\rho (\delta _+)}\right] ^2-\frac{\theta ^2}{2\sqrt{2}(1-2\theta )}\big \Vert d^p_x+d^p_s\big \Vert ^2\\= & {} \left[ \frac{1}{2}+\frac{1}{4\rho (\delta _+)}\right] ^2-\frac{4\theta ^2}{2\sqrt{2}(1-2\theta )}\big \Vert v^+\big \Vert ^2~~~~~~\\= & {} \left[ \frac{1}{2}+\frac{1}{4\rho (\delta _+)}\right] ^2-\frac{4\theta ^2}{2\sqrt{2}(1-2\theta )}\sum _{i=1}^n(v^+)^2_i~~\\\ge & {} \left[ \frac{1}{2}+\frac{1}{4\rho (\delta _+)}\right] ^2-\frac{\sqrt{2}\theta ^2 n}{1-2\theta }\left[ \frac{1}{2}+\rho (\delta _+)\right] ^2~~~~\\= & {} h(\delta _+, \theta , n)>0.~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{aligned}$$
The second inequality is due to the fact that \(f(\alpha ):=\frac{\alpha ^2\theta ^2}{1-2\alpha \theta }\) is monotonically increasing with respect to \(\alpha \); that is, \(f(\alpha )\le f(1)\). The third inequality follows from Lemmas 1 and 2. The second equality can be derived from the third equation of (12). The inequality before the last line follows from the upper bound given in Lemma 1.

The above inequality implies that \(x^p(\alpha )s^p(\alpha )>0\) for all \(0\le \alpha \le 1\). Therefore, \(x^p(\alpha )\) and \(s^p(\alpha )\) do not change sign on \(0\le \alpha \le 1\). Since \(x^p(0)=x^+>0\) and \(s^p(0)=s^+>0\), we conclude that \(x^p(1)=x^++\theta \varDelta ^px=x^p>0\) and \(s^p(1)=s^++\theta \varDelta ^ps=s^p>0\), and the proof is complete. \(\square \)

We define
$$\begin{aligned} v^p=\sqrt{\frac{x^ps^p}{\mu ^p}}. \end{aligned}$$
(14)
It follows from (13), with \(\alpha =1\), that
$$\begin{aligned} \big (v^p\big )^2=\big (v^+\big )^2+\frac{\theta ^2}{1-2\theta }d^p_xd^p_s, \end{aligned}$$
(15)
and
$$\begin{aligned} \min \big (v^p\big )^2\ge h(\delta _+, \theta , n). \end{aligned}$$
(16)

Lemma 4

Let \((x^+, y^+, s^+)\) be a strictly feasible primal-dual solution and \(\mu ^p=(1-2\theta )\mu \) with \(0<\theta <\frac{1}{2}\). Moreover, let \(h(\delta _+, \theta , n)> \frac{1}{4}\) and assume that \((x^p, y^p, s^p)\) denotes the iterate after a predictor step. Then \(v^p > \frac{1}{2} e\) and
$$\begin{aligned} \delta ^p:=\delta (x^p, s^p; \mu ^p)\le \frac{\sqrt{h(\delta _+, \theta , n)} \Big [3\delta ^2+ \frac{\sqrt{2}\theta ^2n}{1-2\theta }\big [\frac{1}{2}+\rho (\delta _+)\big ]^2\Big ]}{2h(\delta _+, \theta , n)+\sqrt{h(\delta _+, \theta , n)}-1}. \end{aligned}$$

Proof

Since \(h(\delta _+, \theta , n)>\frac{1}{4}\), from (16) we have \(\min \big (v^p\big )^2\ge \frac{1}{4}\), which yields \(v^p>\frac{1}{2}e\). Moreover, from Lemma 3 we deduce that the predictor step is strictly feasible; \(x^p>0\) and \(s^p>0\). Now, by the definition of proximity measure, we have
$$\begin{aligned} \delta ^p:= & {} \delta (x^p, s^p; \mu ^p)=\Big \Vert \frac{v^p-(v^p)^2}{2v^p-e}\Big \Vert =\Big \Vert \frac{v^p}{(2v^p-e)(e+v^p)}\big (e-(v^p)^2\big )\Big \Vert ~\nonumber \\\le & {} \frac{v^p_{\min }}{(2v^p_{\min }-1)(v^p_{\min }+1)}\big \Vert e-(v^p)^2\big \Vert ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~\nonumber \\\le & {} \frac{\sqrt{h(\delta _+, \theta , n)}}{2h(\delta _+, \theta , n)+\sqrt{h(\delta _+, \theta , n)}-1}\big \Vert e-(v^p)^2\big \Vert ~~~~~~~~~~~~~~~~~~~~~ ~~~~~\nonumber \\= & {} \frac{\sqrt{h(\delta _+, \theta , n)}}{2h(\delta _+, \theta , n)+\sqrt{h(\delta _+, \theta , n)}-1}\Big \Vert e-(v^+)^2-\frac{\theta ^2}{1-2\theta }d^p_xd^p_s\Big \Vert ~~~~~~~\nonumber \\\le & {} \frac{\sqrt{h(\delta _+, \theta , n)}}{2h(\delta _+, \theta , n)+\sqrt{h(\delta _+, \theta , n)}-1}\Big [\big \Vert e-(v^+)^2\big \Vert +\frac{\theta ^2}{1-2\theta }\big \Vert d^p_xd^p_s\big \Vert \Big ], \end{aligned}$$
(17)
where the first two inequalities are due to Lemma 5.2 in Darvay et al. (2016) and (16), respectively. The last equality follows from (15) and the last inequality is due to the triangle inequality.
We will give an upper bound for \(\big \Vert e-\left( v^+\right) ^2 \big \Vert \). Using the definition of \(v^{+}=\sqrt{\frac{x^+ s^+}{\mu }}\) and Eq. (10), we have
$$\begin{aligned} {} \left\| e- \left( v^+\right) ^2 \right\|= & {} \Vert (v+d_x) (v+d_s)-e\Vert \nonumber \\= & {} \Vert v^2 + v \left( d_x + d_s\right) - e + d_x d_s \Vert \nonumber \\\le & {} \Vert v^2 + v p_v - e\Vert + \left\| \frac{p_v^2 - q_v^2}{4}\right\| . \end{aligned}$$
(18)
Moreover, the third equation of system (11) yields
$$\begin{aligned} {} v^2 + v p_v - e= & {} v^2 + \frac{2v^2(e-v)}{2v-e} - e = \frac{(v-e)^2}{2v-e} \nonumber \\\le & {} \frac{(v-e)^2 v^2}{(2v-e)^2} = \frac{p_v^2}{4}. \end{aligned}$$
(19)
Using (18), (19) and the fact that \(\Vert x^2\Vert \le \Vert x\Vert ^2\), we have
$$\begin{aligned} {} \left\| e- \left( v^+\right) ^2 \right\|\le & {} \frac{\Vert p_v\Vert ^2}{4}+ \frac{\Vert p_v\Vert ^2}{4}+\frac{\Vert q_v\Vert ^2}{4} \le 3 \delta ^2. \end{aligned}$$
(20)
Thus, using (20) and Lemmas 1 and 2, we obtain
$$\begin{aligned} \big \Vert e-(v^+)^2\big \Vert +\frac{\theta ^2}{1-2\theta }\big \Vert d^p_xd^p_s\big \Vert\le & {} 3\delta ^2+\frac{\theta ^2}{2\sqrt{2}(1-2\theta )}\Vert d_x^p+d^p_s\Vert ^2 \nonumber \\= & {} 3\delta ^2+\frac{\sqrt{2}\theta ^2}{1-2\theta }\big \Vert v^+\big \Vert ^2~~~~~~~~~~~~~\\\le & {} 3\delta ^2+ \frac{\sqrt{2}\theta ^2n}{1-2\theta }\Big [\frac{1}{2}+\rho (\delta _+)\Big ]^2.~~~ \end{aligned}$$
Substitution of this bound into (17) yields the desired inequality. \(\square \)

4.2 The corrector step

In this subsection, we deal with the corrector step. One can observe that the algorithm presented in Fig. 1 performs a full-Newton step as a corrector step, which can be obtained in the same way as the one presented in the primal-dual algorithm of Darvay et al. (2016). Thus, for the analysis of this case the lemmas proved in Darvay et al. (2016) can be applied. The next lemma gives a condition for strict feasibility of full-Newton step.

Lemma 5

[Lemma 5.1 in Darvay et al. (2016)] Let \(\delta :=\delta (x, s, \mu )<1\) and assume that \(v\ge \frac{1}{2}e\). Then, \(x^+>0\) and \(s^+>0\).

In the next lemma we show local quadratic convergence of the Newton process.

Lemma 6

[Lemma 5.3 in Darvay et al. (2016)] Suppose that \(\delta :=\delta (x, s, \mu )<\frac{1}{2}\) and \(v\ge \frac{1}{2}e\). Then, \(v^+>\frac{1}{2}e\) and
$$\begin{aligned} \delta _+:=\delta (x^+, s^+; \mu )\le \frac{9-3\sqrt{3}}{2}\delta ^2. \end{aligned}$$

The following lemma gives an upper bound of the duality gap after a corrector step.

Lemma 7

[Lemma 5.4 in Darvay et al. (2016)] Let \(\delta :=\delta (x, s, \mu )<\frac{1}{2}\) and \(v\ge \frac{1}{2}e\). Then
$$\begin{aligned} (x^+)^Ts^+\le \mu \left( n+\frac{1}{4}\right) . \end{aligned}$$

In the next subsection, we analyse the update of the duality gap after a main iteration of the algorithm.

4.3 The effect on duality gap after a main iteration

The purpose of the algorithm is to reduce the produced duality gap of the primal-dual pair. Therefore, in order to measure this reduction, in the next lemma we give an upper bound for the gap obtained after performing an iteration of the algorithm.

Lemma 8

Suppose that \(\delta :=\delta (x, s, \mu )<\frac{1}{2}\) and \(v\ge \frac{1}{2}e\). Moreover, let \(0<\theta <\frac{1}{2}\). Then
$$\begin{aligned} (x^p)^Ts^p\le \mu ^p\left( n+\frac{1}{4}\right) . \end{aligned}$$

Proof

Using (14) and (15), we obtain
$$\begin{aligned} (x^p)^Ts^p=\mu ^p e^T(v^p)^2= & {} (1-2\theta )\mu e^T\left[ (v^+)^2+\frac{\theta ^2}{1-2\theta }d^p_xd^p_s\right] \nonumber \\= & {} (1-2\theta )(x^+)^Ts^++\mu \theta ^2(d^p_x)^Td^p_s~~~~\nonumber \\\le & {} \mu (1-2\theta )\big (n+\frac{1}{4}\big )=\mu ^p\left( n+\frac{1}{4}\right) ,~~ \end{aligned}$$
(21)
where the inequality is due to Lemma 7 and \((d^p_x)^Td^p_s=0\). This completes the proof. \(\square \)

4.4 Determining appropriate values of parameters

In this section, we want to fix the parameters \(\tau \) and \(\theta \) with suitable values, which guarantee that after a main iteration, the proximity measure will not exceed \(\tau \).

Let \((x, y, s) \in {{\mathcal {N}}}(\tau )\) be the iterate at the start of a main iteration with \(x>0\) and \(s>0\) such that \(\delta =\delta (x, s; \mu )\le \tau <\frac{1}{2}\). After a corrector step, by Lemma 6, we have
$$\begin{aligned} \delta _+\le \frac{9-3\sqrt{3}}{2} \, \delta ^2. \end{aligned}$$
It is obvious that the right-hand side of the above inequality is monotonically increasing with respect to \(\delta \), and this implies that
$$\begin{aligned} \delta _+\le \frac{9-3\sqrt{3}}{2} \, \tau ^2=:\omega (\tau ). \end{aligned}$$
(22)
Following the predictor step and the \(\mu \)-update, by Lemma 4, we have
$$\begin{aligned} \delta ^p\le \frac{\sqrt{h(\delta _+, \theta , n)} \Big [3\delta ^2+ \frac{\sqrt{2}n\theta ^2}{1-2\theta }\big [\frac{1}{2}+\rho (\delta _+)\big ]^2\Big ]}{2h(\delta _+, \theta , n)+\sqrt{h(\delta _+, \theta , n)}-1}, \end{aligned}$$
(23)
where \(\delta _+\) and \(h(\delta _+, \theta , n)\) are defined as in Lemma 3. It can be easily verified that \(h(\delta _+, \theta , n)\) is decreasing with respect to \(\delta _+\), so \(h(\delta _+, \theta , n)\ge h(\omega (\tau ), \theta , n)\). Let us consider the function \(f(t)=\frac{\sqrt{t}}{2t+\sqrt{t}-1}\), for \(t>\frac{1}{4}\). From \(f^{'}(t)<0\), it follows that f is decreasing, therefore
$$\begin{aligned} f\big (h(\delta _+, \theta , n)\big )\le f\big (h(\omega (\tau ), \theta , n)\big ). \end{aligned}$$
(24)
Using (22), (23), (24) and the fact that \(\rho \) is increasing with respect to \(\delta _+\), we obtain
$$\begin{aligned} \delta ^p\le \frac{\sqrt{h(\omega (\tau ), \theta , n)} \Big [3\tau ^2+ \frac{\sqrt{2}n\theta ^2}{1-2\theta }\big [\frac{1}{2}+\rho (\omega (\tau ))\big ]^2\Big ]}{2h(\omega (\tau ), \theta , n)+\sqrt{h(\omega (\tau ), \theta , n)}-1}, \end{aligned}$$
(25)
when \(h(\omega (\tau ), \theta , n)>\frac{1}{4}.\) If we take \(\tau =\frac{1}{4}\) and \(\theta = \frac{1}{5\sqrt{n}}\), then \(\delta ^p < \tau \) and \(h(\delta _+,\theta ,n) > \frac{1}{4}\). It should be mentioned that the iterates after the corrector steps are in the \(\mathcal {N}(\omega (\tau ))\) neighbourhood, while the iterates after the predictor steps are in the \(\mathcal {N}(\tau )\) neighbourhood.
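The claim that \(\tau =\frac{1}{4}\) and \(\theta =\frac{1}{5\sqrt{n}}\) satisfy \(h(\omega (\tau ),\theta ,n)>\frac{1}{4}\) and \(\delta ^p<\tau \) can be verified numerically; the worst case is \(n=1\), where \(h\approx 0.68\) and the bound (25) evaluates to roughly \(0.21<\frac{1}{4}\). An illustrative Python check (not from the paper):

```python
import math

omega = lambda tau: (9.0 - 3.0 * math.sqrt(3.0)) / 2.0 * tau**2     # (22)
rho   = lambda d: d + math.sqrt(0.25 + d**2)

def h(d, theta, n):
    """h(delta_+, theta, n) from Lemma 3."""
    return (0.5 + 1.0 / (4.0 * rho(d)))**2 \
        - math.sqrt(2.0) * theta**2 * n / (1.0 - 2.0 * theta) * (0.5 + rho(d))**2

def delta_p_bound(tau, theta, n):
    """Right-hand side of (25), the bound on delta^p after a main iteration."""
    hv  = h(omega(tau), theta, n)
    num = math.sqrt(hv) * (3.0 * tau**2
          + math.sqrt(2.0) * n * theta**2 / (1.0 - 2.0 * theta)
            * (0.5 + rho(omega(tau)))**2)
    return num / (2.0 * hv + math.sqrt(hv) - 1.0)

tau = 0.25
for n in (1, 2, 10, 100, 10**4, 10**6):
    theta = 1.0 / (5.0 * math.sqrt(n))
    assert h(omega(tau), theta, n) > 0.25        # Lemma 4 applies
    assert delta_p_bound(tau, theta, n) < tau    # iterate returns to N(tau)
```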

5 Iteration bound

The next lemma gives an upper bound for the number of iterations produced by the algorithm.

Lemma 9

Let \((x^0, y^0, s^0)\) be a strictly feasible primal-dual solution, \(\mu ^0 = \frac{\left( x^0\right) ^Ts^0}{n}\) and \(\delta (x^0, s^0, \mu ^0 )\le \tau \). Moreover, let \(x^k\) and \(s^k\) be the iterates obtained after k iterations. Then, \(\left( x^k\right) ^T s^k \le \epsilon \) for
$$\begin{aligned} k\ge 1+\left\lceil {\frac{1}{2\theta } \log \frac{5\left( x^0\right) ^T s^0}{4\epsilon }}\right\rceil . \end{aligned}$$

Proof

From Lemma 8, it follows that
$$\begin{aligned} (x^k)^Ts^k\le \mu ^k\left( n+\frac{1}{4}\right) <\frac{5}{4}(1-2\theta )^{k-1}\mu ^0 n=\frac{5}{4}(1-2\theta )^{k-1}(x^0)^Ts^0. \end{aligned}$$
This means that \((x^k)^Ts^k\le \epsilon \) holds if
$$\begin{aligned} \frac{5}{4} (1-2\theta )^{k-1} \left( x^0\right) ^Ts^0\le \epsilon . \end{aligned}$$
If we take logarithms, we get
$$\begin{aligned} (k-1) \log (1-2\theta ) + \log \frac{5\left( x^0\right) ^Ts^0}{4} \le \log \epsilon . \end{aligned}$$
Since \(\log (1+\xi )\le \xi \), \(\xi >-1\), using \(\xi =-2\theta \), we obtain that the above inequality holds if
$$\begin{aligned} -2\theta (k-1)+\log \frac{5\left( x^0\right) ^Ts^0}{4} \le \log \epsilon . \end{aligned}$$
This proves the lemma. \(\square \)

The main result of our paper follows from the above lemma.

Theorem 1

Let \(\tau =\frac{1}{4}\) and \(\theta =\frac{1}{5\sqrt{n}}\). Then, the corrector–predictor interior point algorithm given in Fig. 1 is well defined and the algorithm requires at most
$$\begin{aligned} O\left( 5\sqrt{n} \log \frac{5(x^0)^Ts^0}{4\epsilon } \right) \end{aligned}$$
iterations. The output is a strictly feasible primal-dual solution (xys) satisfying \(x^Ts\le \epsilon \).
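For example, with the all-one starting point (so that \((x^0)^Ts^0=n\)), \(n=100\) and \(\epsilon =10^{-5}\), Lemma 9 gives at most 410 main iterations. A small Python helper (illustrative, not from the paper) evaluating the bound:

```python
import math

def iteration_bound(n, x0Ts0, eps):
    """Iteration bound of Lemma 9 with theta = 1/(5 sqrt(n)) (Theorem 1)."""
    theta = 1.0 / (5.0 * math.sqrt(n))
    return 1 + math.ceil(math.log(5.0 * x0Ts0 / (4.0 * eps)) / (2.0 * theta))

# All-one starting point: (x^0)^T s^0 = n.
assert iteration_bound(100, 100.0, 1e-5) == 410
# The bound grows like sqrt(n) log(n / eps).
assert iteration_bound(100, 100.0, 1e-5) < iteration_bound(400, 400.0, 1e-5)
```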

It is worth mentioning that this CP algorithm has an advantage over the one-step method presented in Darvay et al. (2016): in the case of the one-step algorithm \(\theta =\frac{1}{27\sqrt{n}}\), which is smaller than the value of \(\theta \) used here. Hence, this paper achieves a slightly better complexity constant.

6 Numerical results

In order to demonstrate the efficiency of our CP algorithm, we implemented it in the C++ programming language (Darvay and Takó 2012) in such a way that it can be compared with the primal-dual (PD) method proposed in Darvay et al. (2016). To do so, we made some changes in the algorithm. For both algorithms, we used the normalized duality gap \(\frac{x^T s}{n}\) to obtain the value of the barrier parameter \(\mu \) in the next iterate. In the implementation of the PD algorithm, we multiplied the normalized duality gap by \(\sigma = 0.95\) in order to reduce the value of \(\mu \) in each iterate and to maintain the short-step strategy of the method. As is usual in implementations of predictor–corrector algorithms, for our CP variant we applied Mehrotra's (1992) heuristic to obtain the value of \(\mu \) for the corrector step. Our CP algorithm performed one corrector and one predictor step in each iteration. After calculating the search direction, we determined the maximal step size to the boundary of the feasible region and reduced it by multiplying it by the parameter \(\rho = 0.5\). For both algorithms, we set the accuracy parameter \(\epsilon \) to \(10^{-5}\). We tested both algorithms on two sets of problems. The first set contains randomly generated problems with maximum size \(50\times 50\): ten problems each of maximum size \(10\times 10\), \(20\times 20\) and \(50\times 50\). In each case, we calculated the average number of iterations (Avg. It.) and average CPU times (Avg. CPU) in seconds. The obtained results are presented in Table 1.
Table 1 Average number of iterations and CPU times (in seconds) of the algorithms for the randomly generated problems

| Maximum size | Avg. It. CP | Avg. CPU CP | Avg. It. PD | Avg. CPU PD |
| --- | --- | --- | --- | --- |
| \(10\times 10\) | 23.2 | 0.1649 | 269.8 | 1.6773 |
| \(20\times 20\) | 24.8 | 0.4636 | 294.5 | 4.253 |
| \(50\times 50\) | 28.7 | 2.824 | 317.2 | 28.1405 |
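The \(\mu \)-update and damped step-size rules described above can be sketched as follows. This is a minimal illustration under assumptions: the vector `x` stands for either the primal or dual iterate, and capping the step size at 1 is our choice, not stated in the paper.

```python
import numpy as np

def mu_update(x, s, sigma=0.95):
    """Barrier parameter for the next iterate: the normalized duality
    gap x^T s / n, multiplied by sigma = 0.95 (as in the PD variant)."""
    return sigma * float(x @ s) / x.size

def damped_step(x, dx, rho=0.5):
    """Maximal step size keeping x + alpha*dx strictly positive,
    damped by rho = 0.5 as in the CP implementation. Capping the
    step at 1 is an assumption for this sketch."""
    neg = dx < 0
    alpha_max = np.min(-x[neg] / dx[neg]) if neg.any() else 1.0
    return rho * min(alpha_max, 1.0)
```

For example, with `x = (1, 2)` and `dx = (-0.5, 1)` the positivity boundary is reached at step 2, so the capped and damped step is `0.5 * 1 = 0.5`.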

The second set of problems on which we tested the algorithms was chosen from the Netlib test collection (Gay 1985). The number of iterations (Nr. It.) and CPU times (CPU) in seconds are summarized in Table 2.
Table 2 Number of iterations and CPU times (in seconds) of the algorithms for problems from Netlib

| Problem | Nr. It. CP | CPU CP | Nr. It. PD | CPU PD |
| --- | --- | --- | --- | --- |
| afiro | 53 | 3.618 | 646 | 39.208 |
| adlittle | 86 | 51.362 | >1000 | >564.819 |
| blend | 72 | 58.961 | 571 | 442.728 |
| sc50a | 56 | 16.006 | 529 | 134.543 |
| sc50b | 56 | 15.785 | 491 | 125.298 |
| sc105 | 63 | 138.865 | 555 | 1175.96 |
| sc205 | 80 | 1161.56 | 507 | 7413.31 |
| scagr7 | 88 | 344.113 | 640 | 2170.95 |
| recipe | 92 | 1352.36 | >1000 | >14,863.5 |

Based on the results obtained for the two sets of problems, we conclude that our CP algorithm outperforms the classical primal-dual method which uses the same search direction.

7 Conclusions and future research

We proposed a CP IPA for LO. We applied the AET, based on the function \(\psi (t)=t-\sqrt{t}\) (Darvay et al. 2016), to the system which defines the central path. We then applied Newton's method to the transformed system in order to obtain the new search directions. Furthermore, we presented the analysis of the proposed algorithm and proved that the method finds an \(\varepsilon \)-optimal solution in polynomial time. To the best of our knowledge, this is the first CP IPA in which the function \(\psi (t)=t-\sqrt{t}\) is used to derive the search directions. The novelty of this paper lies in the techniques used in the analysis of the algorithm: we had to ensure that the components of the v-vectors of the scaled space remain greater than \(\frac{1}{2}\). We highlighted the practical efficiency of the method by providing numerical results on the selected set of test problems from Netlib. We compared the number of iterations performed by our algorithm with that of the classical one-step method and concluded that our algorithm is more efficient.
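For reference, the AET at the core of the method can be sketched as follows; this is a summary consistent with the description above (see Darvay et al. 2016 for the precise derivation).

```latex
% Central path of the primal-dual LO pair, with x, s > 0 and mu > 0:
%   Ax = b, \qquad A^T y + s = c, \qquad xs = \mu e.
% Rewriting the centering equation in the equivalent form
% \psi(xs/\mu) = \psi(e) with \psi(t) = t - \sqrt{t}
% (note that \psi(e) = e - \sqrt{e} = 0 componentwise) gives
\frac{xs}{\mu} - \sqrt{\frac{xs}{\mu}} = 0,
% and Newton's method applied to this transformed system yields
% the new search directions. In the scaled space, with
% v = \sqrt{xs/\mu}, the analysis requires v_i > 1/2 for all i.
```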

Future theoretical research includes extending our CP IPA to more general problems. Further computational research includes implementations with different functions \(\psi \) used for the AET. We would like to compare the effect of these functions on the computational performance of the algorithms on larger, well-selected LO test problems. Moreover, this kind of computational study might give some insight into a good strategy for selecting the target point of the search direction for different LO problems. Ideally, the algorithm would be able to choose automatically among different \(\psi \) functions depending on the structure of the problem. Furthermore, such a computational study on the effect of different AETs could be based on an LO test set similar to the one used in the study of Illés and Nagy (2014) for pivot algorithms.

Acknowledgements

Open access funding provided by Budapest University of Technology and Economics (BME). This research has been partially supported by the Hungarian Research Fund, OTKA (Grant No. NKFIH 125700). The research of T. Illés and P.R. Rigó has been partially supported by the Higher Education Excellence Program of the Ministry of Human Capacities in the frame of the Artificial Intelligence research area of Budapest University of Technology and Economics (BME FIKP-MI/FM). The research of Zs. Darvay and P.R. Rigó was supported by a grant of the Romanian Ministry of Research and Innovation, CNCS - UEFISCDI, Project Number PN-III-P4-ID-PCE-2016-0190, within PNCDI III.

References

1. Achache M (2010) Complexity analysis and numerical implementation of a short-step primal-dual algorithm for linear complementarity problems. Appl Math Comput 216(7):1889–1895
2. Asadi S, Mansouri H (2013) Polynomial interior-point algorithm for \({P}_*(\kappa )\) horizontal linear complementarity problems. Numer Algorithms 63(2):385–398
3. Asadi S, Mansouri H, Darvay Zs (2017a) An infeasible full-NT step IPM for horizontal linear complementarity problem over Cartesian product of symmetric cones. Optimization 66(2):225–250
4. Asadi S, Zangiabadi M, Mansouri H (2017b) A predictor–corrector interior-point algorithm for \(P_*(\kappa )\)-HLCPs over Cartesian product of symmetric cones. Numer Funct Anal Optim 38:20–38
5. Cottle RW, Pang J-S, Stone RE (1992) The linear complementarity problem, computer science and scientific computing. Academic Press Inc., Boston
6. Darvay Zs (2002) A new algorithm for solving self-dual linear optimization problems. Studia Univ Babeş-Bolyai Ser Inform 47(1):15–26
7. Darvay Zs (2003) New interior-point algorithms in linear programming. Adv Model Optim 5(1):51–92
8. Darvay Zs (2005) A new predictor–corrector algorithm for linear programming. Alkalm Mat Lapok 22:135–161 (in Hungarian)
9. Darvay Zs (2009) A predictor–corrector algorithm for linearly constrained convex optimization. Studia Univ Babeş-Bolyai Ser Inform 54(2):121–138
10. Darvay Zs, Takó I (2012) Computational comparison of primal-dual algorithms based on a new software. Unpublished manuscript
11. Darvay Zs, Papp IM, Takács PR (2016) Complexity analysis of a full-Newton step interior-point method for linear optimization. Period Math Hung 73(1):27–42
12. Gay D (1985) Electronic mail distribution of linear programming test problems. Math Program Soc COAL Newsl 3:10–12
13. Illés T, Nagy M (2007) A new variant of the Mizuno–Todd–Ye predictor–corrector algorithm for sufficient matrix linear complementarity problem. Eur J Oper Res 181(3):1097–1111
14. Illés T, Nagy A (2014) Computational aspects of simplex and MBU-simplex algorithms using different anti-cycling pivot rules. Optimization 63(1):49–66
15. Illés T, Terlaky T (2002) Pivot versus interior point methods: pros and cons. Eur J Oper Res 140:6–26
16. Illés T, Nagy M, Terlaky T (2010a) A polynomial path-following interior point algorithm for general linear complementarity problems. J Glob Optim 47(3):329–342
17. Illés T, Nagy M, Terlaky T (2010b) Polynomial interior point algorithms for general linear complementarity problems. Algorithmic Oper Res 5:1–12
18. Karmarkar NK (1984) A new polynomial-time algorithm for linear programming. Combinatorica 4(4):373–395
19. Kheirfam B (2014) A predictor–corrector interior-point algorithm for \(P_{*}(\kappa )\)-horizontal linear complementarity problem. Numer Algorithms 66(2):349–361
20. Kheirfam B (2015) A corrector–predictor path-following method for convex quadratic symmetric cone optimization. J Optim Theory Appl 164(1):246–260
21. Kheirfam B (2016) A corrector–predictor path-following method for second-order cone optimization. Int J Comput Math 93(12):2064–2078
22. Kheirfam B (2018) An infeasible interior point method for the monotone SDLCP based on a transformation of the central path. J Appl Math Comput 57(1):685–702
23. Kojima M, Megiddo N, Noma T, Yoshise A (1991) A unified approach to interior point algorithms for linear complementarity problems, vol 538. Lecture notes in computer science. Springer, Berlin
24. Lešaja G, Roos C (2010) Unified analysis of kernel-based interior-point methods for \(P_*(\kappa )\)-linear complementarity problems. SIAM J Optim 20(6):3014–3039
25. Mohammadi N, Mansouri H, Zangiabadi M, Asadi S (2015) A full Nesterov–Todd step infeasible-interior-point algorithm for Cartesian \(P_*(\kappa )\) horizontal linear complementarity problem over symmetric cones. Optimization 65:539–565
26. Megiddo N (1989) Pathways to the optimal set in linear programming. In: Megiddo N (ed) Progress in mathematical programming. Interior-point and related methods. Springer, New York, pp 131–158
27. Mehrotra S (1992) On the implementation of a primal-dual interior point method. SIAM J Optim 2(4):575–601
28. Mizuno S, Todd MJ, Ye Y (1993) On adaptive-step primal-dual interior-point algorithms for linear programming. Math Oper Res 18:964–981
29. Nesterov YE, Nemirovski A (1994) Interior point polynomial methods in convex programming. SIAM studies in applied mathematics. SIAM Publications, Philadelphia
30. Peng J, Roos C, Terlaky T (2002) Self-regular functions: a new paradigm for primal-dual interior-point methods. Princeton University Press, Princeton
31. Potra FA (2002) The Mizuno–Todd–Ye algorithm in a larger neighborhood of the central path. Eur J Oper Res 143:257–267
32. Potra FA (2014) Interior point methods for sufficient LCP in a wide neighborhood of the central path with optimal iteration complexity. SIAM J Optim 24(1):1–28
33. Potra FA, Sheng R (1996) Predictor–corrector algorithm for solving \(P_*(\kappa )\)-matrix LCP from arbitrary positive starting points. Math Program 76(1):223–244
34. Roos C, Terlaky T, Vial J-Ph (1997) Theory and algorithms for linear optimization, an interior-point approach. Wiley, Chichester
35. Sonnevend Gy (1985) A new method for solving a set of linear (convex) inequalities and its applications. Technical report, Department of Numerical Analysis, Institute of Mathematics, Eötvös Loránd University, Budapest
36. Sonnevend Gy (1986) An "analytic center" for polyhedrons and new classes of global algorithms for linear (smooth, convex) programming. In: Prékopa A, Szelezsán J, Strazicky B (eds) System modelling and optimization: proceedings of the 12th IFIP-conference held in Budapest, Hungary, Sept 1985. Lecture notes in control and information sciences, vol 84. Springer, Berlin, pp 866–876
37. Sonnevend Gy, Stoer J, Zhao G (1990) On the complexity of following the central path by linear extrapolation in linear programming. Methods Oper Res 62:19–31
38. Terlaky T (2001) An easy way to teach interior-point methods. Eur J Oper Res 130(1):1–19
39. Wang GQ (2012) A new polynomial interior-point algorithm for the monotone linear complementarity problem over symmetric cones with full NT-steps. Asia-Pac J Oper Res 29(2)
40. Wang GQ, Yue YJ, Cai XZ (2009) Weighted-path-following interior-point algorithm to monotone mixed linear complementarity problem. Fuzzy Inf Eng 1(4):435–445
41. Wright SJ (1997) Primal-dual interior-point methods. SIAM, Philadelphia
42. Ye Y (1997) Interior point algorithms, Wiley-interscience series in discrete mathematics and optimization. Theory and analysis. Wiley, New York
43. Ye Y, Todd M, Mizuno S (1994) An \(O(\sqrt{n} L)\)-iteration homogeneous and self-dual linear programming algorithm. Math Oper Res 19(1):53–67
44. Yoshise A (1996) Complementarity problems. In: Terlaky T (ed) Interior point methods of mathematical programming. Kluwer Academic Publishers, Dordrecht, pp 297–367

Copyright information

© The Author(s) 2019

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

1. Faculty of Mathematics and Computer Science, Babeş-Bolyai University, Cluj-Napoca, Romania
2. Department of Differential Equations, Budapest University of Technology and Economics, Budapest, Hungary
3. Department of Applied Mathematics, Azarbaijan Shahid Madani University, Tabriz, Iran