Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood
Abstract
In this paper, we design a class of infeasible interior-point methods for linear optimization based on a large neighborhood. The algorithm is inspired by a full-Newton step infeasible algorithm, with a convergence rate linear in the problem dimension, that was recently proposed by the second author. Unfortunately, despite its good numerical behavior, the theoretical convergence rate of our algorithm is worse, by up to a factor of the square root of the problem dimension.
Keywords
Linear optimization · Primal-dual infeasible interior-point methods · Polynomial algorithms

Mathematics Subject Classification
90C05 · 90C51

1 Introduction
We consider the linear optimization (LO) problem in standard format, given by (P), and its dual problem given by (D) (see Sect. 3). We assume that the problems (P) and (D) are feasible. As is well known, this implies that both problems have optimal solutions and the same optimal value. In Sect. 8, we discuss how infeasibility and/or unboundedness of (P) and (D) can be detected.
Interior-point methods (IPMs) for LO are divided into two classes: feasible interior-point methods (FIPMs) and infeasible interior-point methods (IIPMs). FIPMs assume that a primal-dual strictly feasible point is available from which the algorithm can start immediately. Several initialization methods have been presented to obtain such a solution, e.g., by Megiddo [1] and Anstreicher [2]. The so-called big-M method of Megiddo has not been well received, due to the numerical instabilities caused by the use of huge coefficients. A disadvantage of the so-called combined phase-I and phase-II method of Anstreicher is that the feasible region must be nonempty and a lower bound on the optimal objective value must be known.
IIPMs have the advantage that they can start from an arbitrary point and try to achieve feasibility and optimality simultaneously. It is worth mentioning that the best-known iteration bound for IIPMs is due to Mizuno [3]. We refer to, e.g., [4] for a survey of IIPMs.
Recently, the second author proposed an IIPM with full-Newton steps [4]. Each main iteration of the algorithm improves optimality and feasibility simultaneously, by a constant factor, until a solution with a given accuracy is obtained. The amount of reduction per iteration is too small for practical purposes. The aim of this paper is to design a more aggressive variant of the algorithm, one that improves optimality and feasibility faster; this is indeed what happens in practice. As we will see, however, our algorithm suffers from the same irony as FIPMs, namely that the theoretical convergence rate of large-update methods is worse than that of full-Newton methods.
In the analysis of our algorithm, we use a so-called kernel-function-based barrier function. Any such barrier function is based on a univariate function, called its kernel function.^{1} Such functions can be found in [5] and are closely related to the so-called self-regular functions introduced in [6]. In these references only FIPMs are considered, and it is shown that these functions are much more efficient for the process of recentering, which is a crucial part of every FIPM, especially when an iterate is far from the central path. Not surprisingly, it turns out that these functions are also useful in our algorithm, where recentering is likewise a crucial ingredient.
The paper is organized as follows. In Sect. 2, we describe the notation that we use throughout the paper. In Sect. 3, we explain the properties of a kernel-function-based barrier function. In Sect. 4, as a preparation for our large-neighborhood-based IIPM, we briefly recall the use of kernel-based barrier functions in large-update FIPMs, as presented in [5]. It will become clear in this section that the convergence rate highly depends on the underlying kernel function.
In Sect. 5, we describe our algorithm in detail. In our description we use a search direction based on a general kernel function. The algorithm uses two types of damped Newton steps: a so-called feasibility step and some centering steps. The feasibility step serves to reduce the infeasibility, whereas the centering steps keep the infeasibility fixed but improve the optimality. This procedure is repeated until a solution with enough feasibility and optimality is obtained. Though many parts of our analysis are valid for a general kernel function, at some places we restrict ourselves to a specific kernel function. We show that the complexity of our algorithm based on this kernel function is slightly worse than the complexity of the algorithm introduced and discussed by Salahi et al. [7], who used another variant of this function. Note that the best-known iteration bound for IIPMs that use a large neighborhood of the central path is due to Potra and Stoer [8], for a class of superlinearly convergent IIPMs for sufficient linear complementarity problems (LCPs).
2 Notations
We use the following notational conventions. If \(x, s \in \mathbb {R}^n\), then xs denotes the coordinatewise (or Hadamard) product of the vectors x and s. The nonnegative orthant and positive orthant are denoted by \(\mathbb {R}^n_+\) and \(\mathbb {R}^n_{++}\), respectively. If \(z\in \mathbb {R}_+^n\) and \(f: \mathbb {R}_+\rightarrow \mathbb {R}_+\), then \(f\left( z\right) \) denotes the vector in \(\mathbb {R}_+^n\) whose ith component is \(f\left( z_i\right) \), with \(1\le i \le n\). We write \(f(x)=O(g(x))\) if \(f(x)\le c\,g(x)\) for some positive constant c and all x in the domain of f. For \(z\in \mathbb {R}^n\), we denote the \(l_1\)-norm by \(\Vert z\Vert _1\) and the Euclidean norm by \(\Vert z\Vert \).
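These conventions translate directly into array operations; the following small NumPy sketch (our illustration, with arbitrary example vectors) exercises each of them:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
s = np.array([4.0, 5.0, 6.0])

# xs denotes the coordinatewise (Hadamard) product of x and s
xs = x * s
assert np.array_equal(xs, np.array([4.0, 10.0, 18.0]))

# For f: R_+ -> R_+, f(z) is applied componentwise: f(z)_i = f(z_i)
assert np.array_equal(np.sqrt(xs), np.array([2.0, np.sqrt(10.0), np.sqrt(18.0)]))

# l1-norm and Euclidean norm
assert np.linalg.norm(x, 1) == 6.0
assert np.isclose(np.linalg.norm(x), np.sqrt(14.0))
```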
3 Barrier Functions Based on Kernel Functions
We proceed by showing that the \(\mu \)-center can be characterized as the minimizer of a suitably chosen primal-dual barrier function. In fact, we will define a wide class of such barrier functions, each of which is determined by a kernel function. A kernel function is a univariate nonnegative function \(\psi (t)\), \(t>0\), that is strictly convex, minimal at \(t=1\) with \(\psi (1)=0\), and such that \(\psi (t)\) goes to infinity both as t goes to zero and as t goes to infinity.
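To make the definition concrete, here is a minimal Python sketch of one admissible kernel, the classical logarithmic-barrier kernel \(\psi(t)=(t^2-1)/2-\log t\) (this is the kernel the paper later denotes \(\psi_1\) in Sect. 9; the numerical checks below are ours), together with checks of the defining properties:

```python
import math

def psi_log(t: float) -> float:
    """Classical logarithmic-barrier kernel: psi(t) = (t^2 - 1)/2 - ln t."""
    assert t > 0, "kernel functions are defined for t > 0"
    return (t * t - 1.0) / 2.0 - math.log(t)

# Defining properties of a kernel function:
# minimal at t = 1 with psi(1) = 0, nonnegative elsewhere,
# and psi(t) -> infinity both as t -> 0+ and as t -> infinity.
assert psi_log(1.0) == 0.0
assert psi_log(0.5) > 0 and psi_log(2.0) > 0
assert psi_log(1e-9) > 10.0   # blows up near zero (log-barrier term)
assert psi_log(1e3) > 10.0    # blows up at infinity (quadratic term)
```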
4 A Class of Large-Update FIPMs for LO
In this section we recall from [5] some results for a large-update FIPM for solving (P) and (D) using a kernel-function-based barrier function.
Let us start with a definition. We call a triple (x, y, s), with \(x>0,\,s>0,\) an \(\varepsilon \)-solution of (P) and (D) iff the duality gap \(x^Ts\) and the norms of the residual vectors \(r_b:= b - Ax\) and \(r_c:= c - A^Ty - s\) do not exceed \(\varepsilon \). In other words, defining \(\epsilon (x,y,s):=\max \{x^Ts,\Vert r_b\Vert ,\Vert r_c\Vert \},\) we say that (x, y, s) is an \(\varepsilon \)-solution iff \(\epsilon (x,y,s)\le \varepsilon .\)
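The measure \(\epsilon(x,y,s)\) is straightforward to compute; a small sketch (the function name and the toy data are ours, chosen only to exercise the definition):

```python
import numpy as np

def eps_measure(A, b, c, x, y, s):
    """epsilon(x,y,s) = max{ x^T s, ||b - A x||, ||c - A^T y - s|| }."""
    r_b = b - A @ x            # primal residual
    r_c = c - A.T @ y - s      # dual residual
    return max(float(x @ s), float(np.linalg.norm(r_b)), float(np.linalg.norm(r_c)))

# Hypothetical toy problem: min c^T x s.t. A x = b, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
c = np.array([1.0, 1.0])
x = np.array([1.0, 1.0])       # primal feasible: A x = b
y = np.array([1.0])
s = c - A.T @ y                # dual residual vanishes by construction
# Both residuals and the duality gap are zero for this point.
assert eps_measure(A, b, c, x, y, s) == 0.0
```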
Each main (or outer) iteration of the algorithm starts with a strictly feasible triple (x, y, s) that satisfies \(\varPhi (x,s,\mu )\le \tau \) for some \(\mu \in ]0,1]\), where \(\tau \) is a fixed positive constant. It then constructs a new triple \((x^+,y^+,s^+)\) such that \(\varPhi (x^+,s^+,\mu ^+)\le \tau \) with \(\mu ^+<\mu \). When taking \(\tau \) small enough, we obtain in this way a sequence of strictly feasible triples that belong to small neighborhoods of a sequence of \(\mu \)-centers, for a decreasing sequence of \(\mu \)'s. As a consequence, the sequence of constructed triples (x, y, s) converges to an optimal solution of (P) and (D).
We will assume that \(\mu ^+ = (1-\theta )\mu \), where \(\theta \in ]0,1[\) is a fixed constant, e.g., \(\theta =0.5\) or \(\theta =0.99\). The larger \(\theta \) is, the more aggressive the algorithm. In particular, when \(\theta \) is large, each outer iteration will require several so-called inner iterations.
It is easy to compute the number of outer iterations. Using \(\mu ^0=1\), one has the following result.
Lemma 4.1
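The statement of Lemma 4.1 is not reproduced above; what follows is a hedged sketch of the standard outer-iteration count for the update \(\mu^+=(1-\theta)\mu\) with \(\mu^0=1\), not necessarily the lemma's exact wording:

```latex
\[
\mu^k = (1-\theta)^k \mu^0 = (1-\theta)^k,
\qquad
n\mu^k \le \varepsilon
\;\Longleftrightarrow\;
k \log\frac{1}{1-\theta} \ge \log\frac{n}{\varepsilon}.
\]
Since \(\log\frac{1}{1-\theta} \ge \theta\) for \(\theta \in\,]0,1[\), at most
\[
\left\lceil \frac{1}{\theta}\,\log\frac{n}{\varepsilon} \right\rceil
\]
outer iterations are required to reach \(n\mu \le \varepsilon\).
```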
The main task is therefore to get a sharp upper estimate for the number of inner iterations during an outer iteration. We now describe how such an estimate is obtained. We go into some detail, though without repeating proofs, because the results that we recall below are relevant for the IIPM that we discuss in the next section.
As said before, at the start of each outer iteration we have a strictly feasible triple (x, y, s) and \(\mu >0\) such that \(\varPhi (x,s,\mu ) \le \tau \). We first need to estimate the increase in \(\varPhi \) when \(\mu \) is updated to \(\mu ^+ = (1-\theta )\mu \). For this we need the following lemma.
Lemma 4.2
The closeness of (x, y, s) to the \(\mu \)-center is measured by \(\varPsi (v)\), where v is the variance vector of (x, y, s) with respect to the current value of \(\mu \). The initial triple (x, y, s) is as given by (5) and \(\mu =1\). So we then have \(\varPsi (v)=0 \le \tau \). After a \(\mu \)-update we have \(\varPsi (v) \le \bar{\tau }\). Then a sequence of inner iterations is performed to restore the inequality \(\varPsi (v)\le \tau \). Then \(\mu \) is updated again, and so on. This process is repeated until \(n\mu \) falls below the accuracy parameter \(\varepsilon \), after which we have obtained an \(\varepsilon \)-solution.
In this paper, we consider an IIPM based on the use of a kernel function. Although many of the results below hold for any eligible kernel function, we will concentrate on the kernel \(\psi _q\).
5 A Class of Large-Update IIPMs for LO
Lemma 5.1
(cf. [13, Theorem 5.13]) The original problems (P) and (D) are feasible if and only if for each \(\nu \) satisfying \(0 < \nu \le 1\) the perturbed problems (\(\hbox {P}_\nu \)) and (\(\hbox {D}_\nu \)) satisfy the IPC.
5.1 An Outer Iteration of the Algorithm
Just as in the case of large-update FIPMs, we use a primal-dual barrier function to measure the closeness of the iterates to the \(\mu \)-center of (\(\hbox {P}_\nu \)) and (\(\hbox {D}_\nu \)). So, just as in Sect. 4, \(\varPsi (v)\) will denote the barrier function based on the kernel function \(\psi (t)\), as given in (3). Here, v denotes the variance vector of a triple (x, y, s) with respect to \(\mu >0\), and we define \(\varPhi (x,s,\mu )\) as in (4). The algorithm is designed in such a way that at the start of each outer iteration we have \(\varPsi (v) \le \tau \) for some threshold value \(\tau = O(1)\). As \(\varPsi (v) = 0\) at the starting points (13), the condition \(\varPsi (v) \le \tau \) is certainly satisfied at the start of the first outer iteration.
Each outer iteration of the algorithm consists of a feasibility step and some centering steps. At the start of the outer iteration we have a triple (x, y, s) that is strictly feasible for (\(\hbox {P}_\nu \)) and (\(\hbox {D}_\nu \)), for some \(\nu \in ]0,1]\), and that belongs to the \(\tau \)-neighborhood of the \(\mu \)-center of (\(\hbox {P}_\nu \)) and (\(\hbox {D}_\nu \)), where \(\mu = \nu \zeta ^2\). We first perform a feasibility step during which we generate a triple \(\left( x^f,y^f,s^f\right) \) which is strictly feasible for the perturbed problems (\(\hbox {P}_{\nu ^+}\)) and (\(\hbox {D}_{\nu ^+}\)), where \(\nu ^+ = (1-\theta )\nu \) with \(\theta \in ]0,1[\), and which, moreover, is close enough to the \(\mu ^+\)-center of (\(\hbox {P}_{\nu ^+}\)) and (\(\hbox {D}_{\nu ^+}\)), with \(\mu ^+ = \nu ^+\zeta ^2\), i.e., \(\varPhi \left( x^f,s^f;\mu ^+\right) \le \tau ^f\), for some suitable value of \(\tau ^f\).
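The control flow just described can be sketched schematically as follows. This is only a skeleton under stated assumptions: `feasibility_step`, `centering_step` and `phi` are hypothetical callables standing in for the damped Newton steps and the barrier \(\varPhi\) of the paper, and the stopping test uses \(\mu = \nu\zeta^2\):

```python
# Schematic outer loop of the large-update IIPM of Sect. 5.1 (a sketch only).

def outer_loop(triple, nu, zeta, theta, tau, phi,
               feasibility_step, centering_step, eps):
    """One feasibility step per outer iteration, then centering steps
    until the proximity condition Phi(x, s; mu) <= tau is restored."""
    n = len(triple[0])
    while n * nu * zeta ** 2 > eps:              # stop once n*mu <= eps
        nu = (1.0 - theta) * nu                  # nu^+ = (1 - theta) * nu
        mu = nu * zeta ** 2                      # mu^+ = nu^+ * zeta^2
        triple = feasibility_step(triple, nu)    # reduce infeasibility
        while phi(triple, mu) > tau:             # restore proximity
            triple = centering_step(triple, mu)  # improve optimality only
    return triple

# Dummy run with no-op steps, just to exercise the control flow.
start = ([1.0, 1.0], [0.0], [1.0, 1.0])
out = outer_loop(start, 1.0, 1.0, 0.5, 0.1,
                 phi=lambda t, mu: 0.0,
                 feasibility_step=lambda t, nu: t,
                 centering_step=lambda t, mu: t,
                 eps=0.25)
assert out is start
```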
5.2 Feasibility Step
5.3 Analysis of the Feasibility Step
Lemma 5.2
Let \(\theta \) be such that \(1/\sqrt{1-\theta }=O(1)\). Then \(\varPsi (\hat{v})= O(n)\) implies \(\varPsi ( v^f)= O(n)\).
Proof
The first derivative with respect to \(v_{\min }\) of the right-hand side expression in this inequality is given by \((\psi ^{\prime \prime }(v_{\min }) - \psi ^{\prime \prime }(v_{\min } - \theta \Vert d_x^f\Vert ))\,\theta \Vert d_x^f\Vert \). Since \(\psi ^{\prime \prime }\) is (strictly) decreasing, this derivative is negative. Hence it follows that the expression is decreasing in \(v_{\min }\). Therefore, when \(\theta \) and \(\Vert d_x^f\Vert \) are fixed, the smaller \(v_{\min }\) is, the larger the expression will be. Below we establish how small \(v_{\min }\) can be when \(\delta (v)\) is given.
5.3.1 Upper Bound for \(\Vert d_{x}^{f}\Vert \)
5.3.2 Condition for \(\theta \)
6 Some Technical Lemmas
Lemma 6.1
Proof
Lemma 6.2
Proof
Recall that \(\rho \) is the inverse function of \(\psi (t)\) for \(t\ge 1\). The following two results are less trivial than the preceding two lemmas.
Lemma 6.3
Lemma 6.4
7 Iteration Bound of the IIPM Based on \(\psi _q\)
7.1 Complexity Analysis
8 Detecting Infeasibility or Unboundedness
As indicated in the introduction, the algorithm will detect infeasibility and/or unboundedness of (P) and (D) if no optimal solutions exist. In that case, Lemma 5.1 implies the existence of \(\bar{\nu } > 0\) such that the perturbed pair (\(\hbox {P}_\nu \)) and (\(\hbox {D}_\nu \)) satisfies the IPC if and only if \(\nu \in ]\bar{\nu },1]\). As long as \(\nu ^+ = (1-\theta ) \nu >\bar{\nu }\), the algorithm will run as it should, with \(\theta \) given by (42). However, if \(\varepsilon \) is small enough, at some stage it will happen that \(\nu > \bar{\nu } \ge \nu ^+\). At this stage the new perturbed pair does not satisfy the IPC. This will reveal itself, since at that time we necessarily have \(\theta _{\max } < \tilde{\theta }\). If this happens, we may conclude that there is no optimal pair \((x^*,s^*)\) satisfying \(\Vert x^*+s^*\Vert _\infty \le \zeta \). We refer to [4] for a more detailed discussion.
9 Computational Results
In this section, we present numerical results for a practical variant of the algorithm described in Sect. 5. Theoretically, the barrier parameter \(\mu \) is updated by a factor \((1-\theta )\), with \(\theta \) given by (42), and the iterates are kept very close to the \(\mu \)-centers, namely in the \(\tau \)-neighborhood of the \(\mu \)-centers with \(\tau =\tfrac{1}{8}\). In practice, this is neither efficient nor necessary. We present a variant of the algorithm which uses a predictor–corrector step in the feasibility step. Moreover, for the parameter \(\tau \), defined in Sect. 5.1, we allow a larger value than \(\tfrac{1}{8}\), e.g., \(\tau = O(n)\). We set \(\tau = \hat{\tau } = O(n)\) with \(\hat{\tau }\) defined as in Sect. 5.1. As a consequence, the algorithm does not need centering steps. We choose \(\hat{\tau }\) according to the following heuristics: if \(n \le 500\), then \(\hat{\tau } = 100n\); for \(500\le n\le 5000\), we choose \(\hat{\tau } = 10n\); and for \(n\ge 5000\), we set \(\hat{\tau } = 3n\). We compare the performance of the algorithm with the well-known LIPSOL package [16].
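The threshold heuristic can be written down directly (the function name is ours; note that the paper's ranges overlap at \(n=500\) and \(n=5000\), and this sketch resolves ties in favor of the first matching range):

```python
def tau_hat(n: int) -> float:
    """Heuristic threshold from Sect. 9: tau-hat grows linearly with n,
    with a smaller constant for larger problems."""
    if n <= 500:
        return 100.0 * n
    elif n <= 5000:        # boundary ties resolved by the first matching range
        return 10.0 * n
    else:
        return 3.0 * n

assert tau_hat(100) == 10000.0     # small problems: 100 n
assert tau_hat(1000) == 10000.0    # medium problems: 10 n
assert tau_hat(10000) == 30000.0   # large problems: 3 n
```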
9.1 Starting Point and Stopping Criterion
A critical issue when implementing a primal-dual method is finding a suitable starting point. It seems sensible to use a starting point that is well centered and as close to a feasible primal-dual point as possible. The one suggested by theory, i.e., given by (13), is nicely centered, but it may be quite far from the feasible region. Moreover, finding a suitable \(\zeta \) is another issue.
In our implementation, we use a starting point \(\left( x^0,y^0,s^0\right) \) proposed by Lustig et al. [17] and inspired by the starting point used by Mehrotra [18]. Since we are interested in a point that lies in the \(\tau \)-neighborhood of the \(\mu ^0\)-center, we increase \(\mu ^0\) whenever \(\varPhi (x^0,s^0;\mu ^0) > \tau \), until \(\varPhi (x^0,s^0;\mu ^0) \le \tau \).
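This initialization can be sketched as a simple loop. The helper name and the growth factor 10 are illustrative choices of ours, not taken from the paper, and `phi` is a hypothetical callable for the kernel-based barrier \(\varPhi\):

```python
def initial_mu(phi, x0, s0, tau, mu0=1.0, factor=10.0):
    """Increase the initial barrier parameter until the starting point lies
    in the tau-neighborhood, i.e., Phi(x0, s0; mu0) <= tau.
    The growth factor 10 is an illustrative choice, not from the paper."""
    mu = mu0
    while phi(x0, s0, mu) > tau:
        mu *= factor
    return mu

# Stub barrier that decays as mu grows (as happens for Mehrotra-type points
# whose products x_i * s_i are large): Phi = 100 / mu.
assert initial_mu(lambda x, s, mu: 100.0 / mu, None, None, tau=1.0) == 100.0
```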
9.2 Feasibility Step Size
9.3 Solving the Linear System
9.4 A Rule for Updating \(\mu \)
Motivated by the numerical results, and considering the fact that Mehrotra's predictor–corrector (PC) method has become the most efficient in practice and is used in most IPM-based software packages, e.g., [16, 20, 21, 22], we present the numerical results of the variant of our algorithm which uses Mehrotra's PC direction in the feasibility step.
Table 1  Numerical results (\(q=\tfrac{\log n}{6}\) in \(\psi _q\))

Problem  |  \(\psi _1\)  |  \(\psi _q\)  |  LIPSOL
Name  Rows  Cols  |  It.  E(x, y, s)  |  It.  E(x, y, s)  |  It.  E(x, y, s)
25fv47  822  1571  26  1.8e-007  27  6.1e-007  25  2.8e-007
Adlittle  57  97  12  6.8e-008  12  8.9e-008  11  2.4e-011
Afiro  28  32  8  1.0e-007  8  1.6e-007  7  3.7e-009
Agg  489  163  17  8.8e-007  18  7.2e-008  18  1.1e-008
Agg2  517  302  17  9.5e-007  17  4.9e-007  18  2.6e-010
Agg3  517  302  18  3.0e-007  18  5.7e-007  16  6.2e-008
Bandm  306  472  20  2.6e-007  19  3.5e-007  16  3.6e-007
Beaconfd  174  262  11  1.1e-007  11  3.0e-007  11  1.2e-010
Blend  75  83  13  6.2e-007  13  2.7e-007  12  5.7e-011
Bnl1  644  1175  32  5.0e-007  32  1.2e-007  25  5.3e-008
Bnl2  2325  3489  33  4.1e-007  31  9.3e-007  31  1.3e-007
Brandy  221  249  19  2.5e-007  19  8.9e-008  18  2.0e-008
D2q06c  2172  5167  28  5.6e-001  31  7.0e-007  28  4.8e-007
Degen2  445  534  25  1.3e-004  18  1.2e-004  13  4.2e-007
Degen3  1504  1818  23  1.4e-004  24  3.0e-005  19  1.4e-007
e226  224  282  22  7.4e-007  23  2.7e-007  20  8.9e-007
fffff800  525  854  26  1.0e-006  27  5.7e-007  26  3.0e-007
Israel  175  142  22  2.0e-007  21  7.1e-007  20  2.2e-007
Lotfi  154  308  16  3.7e-007  15  5.4e-007  15  4.6e-008
Marosr7  3137  9408  19  8.0e-007  19  7.6e-008  14  1.0e-009
Sc105  106  103  10  1.7e-008  10  2.3e-008  9  4.2e-008
Sc205  206  203  11  2.5e-007  11  2.5e-007  11  6.5e-009
Sc50a  51  48  9  1.4e-007  9  1.2e-007  9  2.8e-009
Sc50b  51  48  7  5.4e-007  7  4.9e-007  7  1.6e-007
Scagr7  130  140  13  3.1e-007  13  3.4e-008  11  3.5e-007
Scfxm1  331  457  18  4.2e-007  19  2.0e-007  16  3.7e-007
Scfxm2  661  914  21  1.4e-006  22  5.9e-008  19  1.6e-008
Scfxm3  991  1371  23  1.3e-007  22  8.5e-007  20  3.0e-010
Scsd1  78  760  13  3.9e-007  14  1.2e-007  10  3.3e-011
Scsd6  148  1350  15  3.8e-007  15  4.2e-007  11  7.8e-008
Scsd8  398  2750  13  6.4e-008  12  3.3e-007  11  4.0e-011
Sctap1  301  480  18  5.5e-007  18  6.9e-007  16  1.2e-008
Sctap2  1091  1880  19  1.3e-007  18  3.5e-007  18  3.5e-009
Sctap3  1481  2480  19  6.4e-007  20  9.6e-008  17  2.4e-008
Share1b  118  225  23  6.0e-007  23  1.5e-006  21  1.9e-010
Share2b  97  79  12  1.3e-008  12  6.8e-008  11  1.7e-007
Ship04l  403  2118  15  7.8e-007  15  8.0e-007  12  5.6e-011
Ship04s  403  1458  15  3.2e-007  15  4.3e-007  12  3.6e-007
Ship12l  1152  5427  19  9.1e-007  19  3.7e-007  15  7.7e-009
Ship12s  1152  2763  17  1.3e-007  18  6.0e-008  15  3.6e-007
Stocfor1  118  111  14  4.9e-007  16  2.5e-007  16  1.1e-007
Stocfor2  2158  2031  25  1.8e-007  25  2.1e-007  21  2.3e-008
Truss  1001  8806  18  3.9e-007  18  1.7e-007  17  8.4e-007
Wood1p  245  2594  17  8.3e-007  17  1.2e-005  14  7.0e-010
Woodw  1099  8405  25  3.1e-007  24  1.0e-006  23  5.1e-010
Total (It.)  816  815  725
9.5 Results
In this section, we present our numerical results. Motivated by the theoretical result which says that the kernel function \(\psi _q\) gives rise to the best-known theoretical iteration bound for large-update IIPMs based on kernel functions, we compare the performance of the algorithm described in the previous subsection based on both the logarithmic barrier function and the \(\psi _q\)-based barrier function. As the theory suggests, we use \(q=\frac{\log n}{6}\) in \(\psi _q\).
Our tests were done on a standard PC with an Intel\(^\circledR \) Core\(^{\texttt {TM}}\) 2 Duo CPU and 3.25 GB of RAM. The code was implemented in MATLAB\(^\circledR \) version 7.11.0 (R2010b) on the Windows XP Professional operating system. The problems chosen for our tests are from the NETLIB set. To simplify the study, we chose the problems which are in (P) format, i.e., with no nonzero lower bound or finite upper bound on the decision variables.
Before solving a problem with the algorithm, we first presolve it to eliminate singleton variables. This helps to reduce the size of the problem.
Numerical results are tabulated in Table 1. In the second and fourth columns, we list the total number of iterations of the algorithm based on, respectively, \(\psi _1\), the kernel function of the logarithmic barrier function, and \(\psi _q\). The third and fifth columns report the quantity E(x, y, s). The iteration numbers of the LIPSOL package are given in the sixth column, and in the seventh column we list the quantity E(x, y, s) of the LIPSOL package. In each row, the italic number denotes the smallest of the iteration numbers of the three algorithms, namely our algorithm based on \(\psi _1\) and on \(\psi _q\), and LIPSOL, and the bold number denotes the smaller of the iteration numbers of the \(\psi _1\)-based and \(\psi _q\)-based algorithms. As can be seen from the last row of the table, the overall performance of the algorithm based on \(\psi _1\) is much better than that of the variant based on \(\psi _q\). However, on some of the problems the \(\psi _q\)-based algorithm outperforms the \(\psi _1\)-based algorithm; this occurs for the problems AGG, BANDM, DEGEN2, DEGEN3, SCSD1, SCSD6, SCSD8 and SHARE2B. The columns related to LIPSOL show that it is still the best; however, our \(\psi _1\)-based algorithm saved one iteration compared with LIPSOL on AGG2 and AGG3, and two iterations on STOCFOR1.
10 Conclusions
In this paper, we analyze infeasible interior-point methods (IIPMs) for LO based on a large neighborhood. Our work is motivated by [4], in which Roos presents a full-Newton IIPM for LO. Since the analysis of our algorithms requires properties of barrier functions based on kernel functions that are used in large-update feasible interior-point methods (FIPMs), we present primal-dual large-update FIPMs for LO based on kernel functions as well.
In Roos' algorithm, the iterates move within a small neighborhood of the \(\mu \)-centers of the perturbed problem pairs. As in many IIPMs, the algorithm reduces the infeasibility and the duality gap at the same rate. His algorithm has the advantage that it uses full-Newton steps, so that no step-size calculation is needed; moreover, its theoretical iteration bound is \(O(n\log (\epsilon (\zeta {\mathbf{e}},0,\zeta {\mathbf{e}})/\varepsilon ))\), which coincides with the best-known iteration bound for IIPMs. Nevertheless, it has the deficiency that it is too slow in practice, as it reduces the quantity \(\epsilon (x,y,s)\) only by a factor \((1-\theta )\), with \(\theta = O(\tfrac{1}{n})\), per iteration.
We attempt to design a large-update version of Roos' algorithm which allows a larger reduction of \(\epsilon (x,y,s)\) per iteration than the factor \(1-\theta \) with \(\theta =O(1/n)\). This requires the parameter \(\theta \) to be larger than O(1/n), ideally even \(\theta = O(1)\). Unfortunately, the result of Sect. 5 implies that \(\theta \) is \(O(1/(n(\log n)^2))\), which yields an \(O(n\sqrt{n}(\log n)^3\log (\epsilon (\zeta {\mathbf{e}},0,\zeta {\mathbf{e}})/\varepsilon ))\) iteration bound for a variant. Since the theoretical result of the algorithm is disappointing, we rely on the numerical results to establish that our algorithm is a large-update method. A practically efficient version of the algorithm is presented, and its numerical results are compared with the well-known LIPSOL package. Fortunately, the numerical results seem promising, as our algorithm has iteration numbers close to those of LIPSOL and, in a few cases, outperforms it. This means that IIPMs suffer from the same irony as FIPMs, i.e., regardless of their nice practical performance, the theoretical complexity of large-update methods is worse. Recall that the best-known iteration bound for large-update IIPMs is \(O(n\sqrt{n}\log n\log (\epsilon (\zeta {\mathbf{e}},0,\zeta {\mathbf{e}})/\varepsilon ))\), which is due to Salahi et al. [7].
As in LIPSOL [16], different step sizes in the primal and dual spaces are used in our implementation. This leads to feasibility being attained faster than when identical step sizes are used. Moreover, inspired by the LIPSOL package, we use a predictor–corrector step in the feasibility step of the algorithm.

As mentioned before, our algorithm has an iteration bound that is a factor \((\log n)^2\) worse than the best-known iteration bound for large-update IIPMs. One may consider how to modify the analysis so that the iteration bound of our algorithm is improved by a factor \((\log n)^2\).

As mentioned in Sect. 10, according to the analysis of our algorithm presented in Sect. 5, the barrier-updating parameter \(\theta \) is \(O(1/(n(\log n)^2))\). This yields the loose iteration bound given by (44). This small value of \(\theta \) is a consequence of some difficulties in the analysis of the algorithm, which uses the largest value of \(\theta \) satisfying (24) to ensure that \(\varPsi (\hat{v}) = O(n)\). This value of \(\theta \) is much smaller than the best value we could choose. For example, assuming \(n=60\), the largest value of \(\theta \) satisfying \(\varPsi (\hat{v}) = n\) is 0.849114, while the value of \(\theta \) suggested by the theory is 0.141766. Future research may focus on a new analysis of the algorithm that yields a larger value of \(\theta \).

Roos' full-Newton step IIPM was extended to semidefinite optimization (SDO) by Mansouri and Roos [23], to symmetric optimization (SO) by Gu et al. [24], and to LCPs by Mansouri et al. [25]. An extension of large-update FIPMs based on kernel functions to SDO was presented by El Ghami [10]. One may consider how our algorithm behaves, in theory and in practice, when extended to the cases of SDO, SO, and LCPs.
References
1. Megiddo, N.: Pathways to the optimal set in linear programming. In: Progress in Mathematical Programming (Pacific Grove, CA, 1987), pp. 131–158. Springer, New York (1989)
2. Anstreicher, K.M.: A combined phase I–phase II projective algorithm for linear programming. Math. Progr. 43(2, Ser. A), 209–223 (1989)
3. Mizuno, S.: Polynomiality of infeasible-interior-point algorithms for linear programming. Math. Progr. 67(1, Ser. A), 109–119 (1994)
4. Roos, C.: A full-Newton step \(O(n)\) infeasible interior-point algorithm for linear optimization. SIAM J. Optim. 16(4), 1110–1136 (2006)
5. Bai, Y., El Ghami, M., Roos, C.: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 15(1), 101–128 (2004)
6. Peng, J., Roos, C., Terlaky, T.: Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms. Princeton Series in Applied Mathematics. Princeton University Press, Princeton (2002)
7. Salahi, M., Peyghami, M.R., Terlaky, T.: New complexity analysis of IIPMs for linear optimization based on a specific self-regular function. Eur. J. Oper. Res. 186(2), 466–485 (2008)
8. Potra, F.A., Stoer, J.: On a class of superlinearly convergent polynomial-time interior-point methods for sufficient LCP. SIAM J. Optim. 20(3), 1333–1363 (2009)
9. Roos, C., Terlaky, T., Vial, J.-Ph.: Interior-Point Methods for Linear Optimization. Springer, New York (2006). Second edition of Theory and Algorithms for Linear Optimization (Wiley, Chichester, 1997)
10. El Ghami, M.: New Primal-Dual Interior-Point Methods Based on Kernel Functions. Ph.D. thesis, Delft University of Technology, The Netherlands (2005)
11. Peng, J., Roos, C., Terlaky, T.: A new and efficient large-update interior-point method for linear optimization. Vychisl. Tekhnol. 6(4), 61–80 (2001)
12. Potra, F.A.: A superlinearly convergent predictor-corrector method for degenerate LCP in a wide neighborhood of the central path with \(O(\sqrt{n}L)\)-iteration complexity. Math. Progr. 100(2, Ser. A), 317–337 (2004)
13. Ye, Y.: Interior Point Algorithms. Wiley-Interscience Series in Discrete Mathematics and Optimization. Wiley, New York (1997)
14. Asadi, A., Gu, G., Roos, C.: Convergence of the homotopy path for a full-Newton step infeasible interior-point method. Oper. Res. Lett. 38(2), 147–151 (2010)
15. Gu, G., Mansouri, H., Zangiabadi, M., Bai, Y., Roos, C.: Improved full-Newton step \(O(n)\) infeasible interior-point method for linear optimization. J. Optim. Theory Appl. 145(2), 271–288 (2010)
16. Zhang, Y.: Solving large-scale linear programs by interior-point methods under the MATLAB environment. Optim. Methods Softw. 10(1), 1–31 (1998)
17. Lustig, I., Marsten, R.E., Shanno, D.F.: On implementing Mehrotra's predictor-corrector interior-point method for linear programming. SIAM J. Optim. 2(3), 435–449 (1992)
18. Mehrotra, S.: On the implementation of a primal-dual interior-point method. SIAM J. Optim. 2(4), 575–601 (1992)
19. Lustig, I.: Feasibility issues in a primal-dual interior-point method for linear programming. Math. Progr. 49(2, Ser. A), 145–162 (1990/91)
20. Vanderbei, R.J.: LOQO: an interior-point code for quadratic programming. Optim. Methods Softw. 11/12(1–4), 451–484 (1999)
21. Andersen, E.D., Andersen, K.D.: The Mosek interior-point optimizer for linear programming: an implementation of the homogeneous algorithm. In: Frenk, H., Roos, K., Terlaky, T., Zhang, S. (eds.) High Performance Optimization, Appl. Optim., vol. 33, pp. 197–232. Kluwer Academic Publishers, Dordrecht (2000)
22. Czyzyk, J., Mehrotra, S., Wagner, M., Wright, S.: PCx: an interior-point code for linear programming. Optim. Methods Softw. 11/12(1–4), 397–430 (1999)
23. Mansouri, H., Roos, C.: A new full-Newton step \(O(n)\) infeasible interior-point algorithm for semidefinite optimization. Numer. Algorithms 52(2), 225–255 (2009)
24. Gu, G., Zangiabadi, M., Roos, C.: Full Nesterov–Todd step infeasible interior-point method for symmetric optimization. Eur. J. Oper. Res. 3, 473–484 (2011)
25. Mansouri, H., Zangiabadi, M., Pirhaji, M.: A full-Newton step \(O(n)\) infeasible-interior-point algorithm for linear complementarity problems. Nonlinear Anal. Real World Appl. 12(1), 545–561 (2011)
Copyright information
Open Access  This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.