Abstract
We propose the use of controlled perturbations to address the challenging question of optimal active-set prediction for interior point methods. Namely, in the context of linear programming, we consider perturbing the inequality constraints/bounds so as to enlarge the feasible set. We show that if the perturbations are chosen appropriately, the solution of the original problem lies on or close to the central path of the perturbed problem. We also find that a primal-dual path-following algorithm applied to the perturbed problem is able to accurately predict the optimal active set of the original problem when the duality gap for the perturbed problem is not too small; furthermore, depending on problem conditioning, this prediction can occur sooner than the prediction of the optimal active set of the perturbed problem, or than when the original problem is solved. Encouraging preliminary numerical experience is reported when comparing activity prediction for the perturbed and unperturbed problem formulations.
Notes
Note that [5] proposed the use of such perturbations for creating a sequence of LPs with strict interior, converging to the original LP in the limit, so as to find the affine dimension of the feasible set of the original LP and well-centred points in Phase I of ipms; a different focus and approach from ours. A relaxation technique for Mathematical Programs with Equilibrium Constraints (MPECs) was also proposed independently in [8] that relaxes the bound constraints similarly to [5], but that also relaxes the complementarity constraint, in order to create a sequence of nonlinear programming relaxations with strict interior, which thus satisfy a constraint qualification, and also converge to the original MPEC in the limit.
We estimate \(\tau _p\) and \(\tau _d\) from their definition in (16), namely, we solve the following optimisation problem in matlab, \( \max \Vert x-x^{*}\Vert \slash (r(x,s)+w(x,s))\) subject to \((x,y,s) \in {\mathcal {F}}^{0}_{\lambda }\), where r(x, s) and w(x, s) are defined in (17) and \({\mathcal {F}}^{0}_{\lambda }\) in (1); similarly for \(\tau _d\).
We obtain the ‘actual optimal active set’ by solving the problem using matlab’s solver linprog with the ‘algorithm’ option set to interior point or simplex, and considering all variables with value less than \(10^{-5}\) as active.
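A Python analogue of this check might look as follows. This is a sketch only: scipy.optimize.linprog stands in for matlab's linprog, and the standard-form LP data below are illustrative, not drawn from the test set.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative standard-form LP data: min c^T x  s.t.  Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([1.0, 2.0, 0.0, 0.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4,
              method="highs")  # HiGHS offers both simplex and ipm variants

# Variables with value below 1e-5 are declared active (i.e. at their bound).
active = np.where(res.x < 1e-5)[0]
```

For this toy problem the optimum is \(x = (0, 0, 4, 5)\), so the first two variables are flagged as active.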
Here, for each of the test problems, we set \(\lambda \) in (\(\hbox {PD}_{\lambda }\)) to be the value of the perturbations when terminating Algorithm 1 at the 18th iteration. We then apply Algorithm 2 to the equivalent form (3) of (\(\hbox {PD}_{\lambda }\)), which means we solve the perturbed problem using an ipm and predict the active set of the perturbed problem on the way.
The definition of \(\mu _{\lambda }^{k}\) and \(\mu ^{k}\) in Algorithms 1 and 2, respectively, as well as the choice of \((x^{0},s^{0})\) to be identical for (\(\hbox {PD}_{\lambda }\)) and (PD), imply that \(\mu _{\lambda }^{0} > \mu ^{0}\), with the difference being essentially dictated by the level of perturbations \((\lambda ^{0}, \phi ^{0})\). Thus we are not making it any easier for Algorithm 1 compared to Algorithm 2 in the choice of starting point.
The number of elements that appear in exactly one of the two bases generated by Algorithms 1 and 2, divided by the cardinality of the union of the two bases.
This is because some problems have very large components in the right-hand side b with \(\max (b) > 10^{3}\). For these problems, even when \(\mu _{\lambda }^{k} >10^{-3}\), the relative residual may already be less than \(10^{-6}\), and this causes numerical problems when trying to decrease \(\mu _{\lambda }^{k}\) further. There are five problems of this kind, agg3, forplan, grow7, israel and share1b, and we have marked those problems by \(*\) in Table 3 in Appendix 4. A possible remedy may be to consider using ‘scaled’ perturbations in Algorithm 1, namely, to set the perturbations to some percentage deviation for each component of the right-hand side b.
afiro, agg3, grow7, israel, sc50b, scfxm2, scfxm3, seba and stocfor2.
References
Altman, A., Gondzio, J.: Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization. Optim. Methods Softw. 11(1–4), 275–302 (1999)
Bazaraa, M.S., Jarvis, J.J., Sherali, H.D.: Special simplex implementations and optimality conditions. In: Linear Programming and Network Flows, pp. 201–257. Wiley, Hoboken (2009)
Benson, H., Shanno, D.: An exact primal–dual penalty method approach to warmstarting interior-point methods for linear programming. Comput. Optim. Appl. 38(3), 371–399 (2007)
Berkelaar, M., Eikland, K., Notebaert, P.: lpsolve: open source (mixed-integer) linear programming system. Eindhoven University of Technology. http://lpsolve.sourceforge.net/5.5/ (2004)
Cartis, C., Gould, N.I.M.: Finding a point in the relative interior of a polyhedron. Tech. Rep. RAL 2006-016, Rutherford Appleton Laboratory (2006)
Castro, J., Cuesta, J.: Existence, uniqueness, and convergence of the regularized primal-dual central path. Oper. Res. Lett. 38(5), 366–371 (2010)
Cottle, R.W., Pang, J.S., Stone, R.E.: The Linear Complementarity Problem. SIAM, Philadelphia (2009)
DeMiguel, V., Friedlander, M.P., Nogales, F.J., Scholtes, S.: A two-sided relaxation scheme for mathematical programs with equilibrium constraints. SIAM J. Optim. 16(2), 587–609 (2005)
El-Bakry, A.S., Tapia, R., Zhang, Y.: A study of indicators for identifying zero variables in interior point methods. SIAM Rev. 36(1), 45–72 (1994)
Engau, A., Anjos, M.F., Vannelli, A.: A primal-dual slack approach to warmstarting interior-point methods for linear programming. In: Operations Research and Cyber-Infrastructure, vol. 47, pp. 195–217. Springer, New York (2009)
Engau, A., Anjos, M.F., Vannelli, A.: On interior-point warmstarts for linear and combinatorial optimization. SIAM J. Optim. 20(4), 1828–1861 (2010)
Facchinei, F., Fischer, A., Kanzow, C.: On the accurate identification of active constraints. SIAM J. Optim. 9(2), 14–32 (1998)
Ferris, M., Mangasarian, O., Wright, S.J.: Linear Programming with Matlab. SIAM, Philadelphia (2007)
Freund, R.M.: A potential-function reduction algorithm for solving a linear program directly from an infeasible “warm start”. Math. Prog. 52(1–3), 441–466 (1991)
Freund, R.M.: Theoretical efficiency of a shifted-barrier-function algorithm for linear programming. Linear Algebra Appl. 152, 19–41 (1991)
Freund, R.M.: An infeasible-start algorithm for linear programming whose complexity depends on the distance from the starting point to the optimal solution. Ann. Oper. Res. 62(1), 29–57 (1996)
Gill, P.E., Murray, W., Saunders, M.A., Tomlin, J., Wright, M.H.: On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projective method. Math. Prog. 36(2), 183–209 (1986)
Gondzio, J.: Warm start of the primal-dual method applied in the cutting-plane scheme. Math. Prog. 83(1–3), 125–143 (1998)
Gondzio, J.: Interior point methods 25 years later. Eur. J. Oper. Res. 218, 587–601 (2012)
Gondzio, J., Grothey, A.: A new unblocking technique to warmstart interior point methods based on sensitivity analysis. SIAM J. Optim. 19(3), 1184–1210 (2008)
Gondzio, J., Vial, J.P.: Warm start and \(\epsilon \)-subgradients in a cutting plane scheme for block-angular linear programs. Comput. Optim. Appl. 14(1), 17–36 (1999)
Güler, O., den Hertog, D., Roos, C., Terlaky, T., Tsuchiya, T.: Degeneracy in interior point methods for linear programming: a survey. Ann. Oper. Res. 46–47, 107–138 (1993)
Karmarkar, N., Ramakrishnan, K.: Computational results of an interior point algorithm for large scale linear programming. Math. Prog. 52(1–3), 555–586 (1991)
Mangasarian, O., Ren, J.: New improved error bounds for the linear complementarity problem. Math. Prog. 66(2), 241–255 (1994)
McShane, K.A., Monma, C.L., Shanno, D.: An implementation of a primal-dual interior point method for linear programming. ORSA J. Comput. 1(2), 70–83 (1989)
Mehrotra, S.: Finite termination and superlinear convergence in primal-dual methods. Tech. Rep. 91-13, Northwestern University (1991)
Mehrotra, S.: On the implementation of a primal-dual interior point method. SIAM J. Optim. 2(4), 575–601 (1992)
Mehrotra, S., Ye, Y.: Finding an interior point in the optimal face of linear programs. Math. Prog. 62(1–3), 497–515 (1993)
Mitchell, J.E.: An interior point column generation method for linear programming using shifted barriers. SIAM J. Optim. 4(2), 423–440 (1994)
Morales, J.L.: A numerical study of limited memory BFGS methods. Appl. Math. Lett. 15(4), 481–487 (2002)
Nocedal, J., Wright, S.J.: Numerical Optimization. Springer, New York (2006)
Oberlin, C., Wright, S.J.: Active set identification in nonlinear programming. SIAM J. Optim. 17(2), 577–605 (2006)
Pang, J.S.: Error bounds in mathematical programming. Math. Prog. 79(1), 299–332 (1997)
Polyak, R.: Modified barrier functions (theory and methods). Math. Prog. 54(1–3), 177–222 (1992)
Saunders, M., Tomlin, J.: Solving regularized linear programs using barrier methods and KKT systems. Tech. Rep. SOL 96-4, Department of Operations Research, Stanford University (1996)
Sierksma, G.: Linear and Integer Programming: Theory and Practice, 2nd edn. CRC Press, Boca Raton (2001)
Skajaa, A., Andersen, E., Ye, Y.: Warmstarting the homogeneous and self-dual interior point method for linear and conic quadratic problems. Math. Prog. Comput. 5, 1–25 (2013)
Tone, K.: An active-set strategy in an interior point method for linear programming. Math. Prog. 59(1–3), 345–360 (1993)
Williams, P.J.: Effective finite termination procedures in interior-point methods for linear programming. Ph.D. thesis, Department of Computational and Applied Mathematics, Rice University (1998)
Wright, S.J.: Primal-Dual Interior-Point Methods. SIAM, Philadelphia (1997)
Xu, X., Hung, P.F., Ye, Y.: A simplified homogeneous and self-dual linear programming algorithm and its implementation. Ann. Oper. Res. 62(1), 151–171 (1996)
Ye, Y.: On the finite convergence of interior-point algorithms for linear programming. Math. Prog. 57(1), 325–335 (1992)
Ye, Y., Todd, M.J., Mizuno, S.: An \(O(\sqrt{n}L)\)-iteration homogeneous and self-dual linear programming algorithm. Math. Oper. Res. 19(1), 53–67 (1994)
Yildirim, E., Wright, S.J.: Warm-start strategies in interior-point methods for linear programming. SIAM J. Optim. 12(3), 782–810 (2002)
Zhang, Y.: Solving large-scale linear programs by interior-point methods under the Matlab Environment. Optim. Methods Softw. 10(1), 1–31 (1998)
Acknowledgments
We are grateful to Nick Gould for useful discussions and insights. We also thank three anonymous referees for instructive comments that have improved the quality of the paper, and the Mathematical Institute, University of Oxford, for hosting the second author during the completion of this work. This author was supported by the Principal’s Career Development Scholarship from the University of Edinburgh.
Appendices
Appendix 1: Proof of Lemma 2
An error bound for an optimization problem bounds the distance from a given point to the solution set in terms of a residual function [33]. In this section, we first formulate an lp problem as a monotone Linear Complementarity Problem (lcp) and then apply a global error bound for the monotone lcp to the reformulated lp problem in order to derive an error bound for the lp, and so prove Lemma 2.
By setting \(s = c-A^{\top }y\) and \(y=y^{+}-y^{-}\), where \(y^{+}=\max (y,0)\) and \(y^{-}=-\min (y,0)\), the first order optimality conditions (2) with \(\lambda = 0\) for (PD) can be reformulated as
Let
where A, b and c are (PD) problem data and \((x,y,s)\in {\mathbb {R}}^{n} \times {\mathbb {R}}^{m} \times {\mathbb {R}}^{n}\). Then finding a solution of (55) is equivalent to solving the following problem,
where M, q and z are defined in (56), and z is considered to be the vector of variables.
Lemma 8
(PD) is equivalent to the lcp in (57) with M and q defined in (56), namely,
1. If \((x,y^{+},y^{-})\) is a solution of the lcp (57), then \((x,y,s)\) is a (PD) solution, where \(y=y^{+}-y^{-}\) and \(s=c-A^{T}y\).

2. If \((x,y,s)\) is a (PD) solution, then \((x,y^{+},y^{-})\) is a solution of the lcp (57).
Next we show that our lp problem can be viewed as a monotone lcp [7].
Lemma 9
The matrix M, defined in (56), is positive semidefinite, and so (57) is a monotone lcp.
Proof
For all \(v=(v_{1},v_{2},v_{3})\), where \(v_{1} \in {\mathbb {R}}^{n}, v_{2} \in {\mathbb {R}}^{m}\) and \(v_{3} \in {\mathbb {R}}^{m}\), \(v^{T} M v= v^{T}_{2}Av_{1} - v^{T}_{3}Av_{1} - v_{1}^{T}A^{T}v_{2} + v^{T}_{1}A^{T}v_{3} = 0\), since \(v^{T}_{2}Av_{1} = (v^{T}_{2}Av_{1})^{T}=v^{T}_{1}A^{T}v_{2}\) and \(v^{T}_{3}Av_{1}=(v^{T}_{3}Av_{1})^{T}=v^{T}_{1}A^{T}v_{3}\). Thus M is positive semidefinite. \(\square \)
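The display definition (56) of M is not reproduced here, but its block structure can be read off from the quadratic form in this proof; under that assumption, \(M = \begin{bmatrix} 0 & -A^{T} & A^{T} \\ A & 0 & 0 \\ -A & 0 & 0 \end{bmatrix}\), which is skew-symmetric, so \(v^{T}Mv = 0\) for every v. A quick numerical sanity check of this inferred structure:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))

# Block matrix inferred from the quadratic form in the proof of Lemma 9
# (an assumption, since the display definition (56) is not shown here).
Z = np.zeros((m, m))
M = np.block([
    [np.zeros((n, n)), -A.T, A.T],
    [A,                Z,    Z],
    [-A,               Z,    Z],
])

assert np.allclose(M, -M.T)  # M is skew-symmetric

# Hence v^T M v = 0 for any v, so M is (trivially) positive semidefinite.
v = rng.standard_normal(n + 2 * m)
assert abs(float(v @ M @ v)) < 1e-9
```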
A global error bound for a monotone lcp [24] is given next.
Lemma 10
(Mangasarian and Ren [24, Corollary 2.2]) Let z be any point away from the solution set of the monotone lcp (57) and \(z^{*}\) be the closest solution of (57) to z under the Euclidean norm \(\Vert \cdot \Vert \). Then \(r(z)+w(z)\) is a global error bound for (57), namely,
where \(\tau \) is some problem-dependent constant, independent of z and \(z^{*}\), and
Lemma 11
Given the monotone lcp (57) with M and q defined in (56), let \((x,y^{+},y^{-})\) be any point away from the solution set of this problem and \((x^{*},(y^{*})^{+}\), \((y^{*})^{-})\) be the closest solution of this lcp to \((x,y^{+},y^{-})\) under the Euclidean norm \(\Vert \cdot \Vert \). Then we have
where \(\tau \) is some problem-dependent constant, independent of \((x,y^{+},y^{-})\) and of \((x^{*},(y^{*})^{+},(y^{*})^{-})\),
and
and where \(y = y^{+} - y^{-}\).
Proof
Substituting (56) into (58) and noting that \( u - (u-v)_{+} = \min \left\{ u,v\right\} \) for any u, v vectors, we have
and
Recalling \(y = y^{+} - y^{-} \), (59) and (60) follow directly from the above equations. \(\square \)
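The componentwise identity used in this proof, \(u - (u-v)_{+} = \min \{u,v\}\), is easy to verify numerically; the following standalone sketch checks it on random vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(6)
v = rng.standard_normal(6)

# (u - v)_+ denotes the componentwise positive part max(u - v, 0).
lhs = u - np.maximum(u - v, 0.0)
rhs = np.minimum(u, v)
assert np.allclose(lhs, rhs)
```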
Theorem 9
(Error bound for lp) Let \((x,y,s)\in {\mathbb {R}}^{n}\times {\mathbb {R}}^{m}\times {\mathbb {R}}^{n}\) where \(s = c - A^{T}y \). Then there exists a (PD) solution \((x^{*},y^{*},s^{*})\)
and problem-dependent constants \(\tau _{p}\) and \(\tau _{d}\), independent of \((x,y,s)\) and \(\left( x^{*},y^{*},s^{*} \right) \), such that
where
and
and where \(y^{+} = \max \left\{ y,0\right\} \) and \(y^{-} = -\min \left\{ y,0\right\} \).
Proof
Consider the monotone lcp (57) with M and q defined in (56) and \(z=(x,y^{+},y^{-})\). Let \(z^{*}=(x^{*},(y^{*})^{+},(y^{*})^{-})\) be the closest solution to z in the solution set of this lcp. From Lemma 8, \((x^{*},y^{*},s^{*})\) with \(y^{*}=(y^{*})^{+}-(y^{*})^{-}\) and \(s^{*}=c-A^{T}y^{*}\) is a (PD) solution. (Note that we may lose the property that this is the closest solution to the given point.) From \(( y^{+} ,y^{-} ) \ge 0\), \(s = c- A^{T}y\) and Lemma 11, we have
where r(x, s) and w(x, s) are defined in (61) and (62), respectively. This and norm properties give
and so letting \(\tau _{p} = \tau \), we deduce \( \Vert x-x^{*} \Vert \le \tau _{p} (r(x,s)+w(x,s)). \) Since \(s^{*} = c - A^{T} y^{*}\), we also have
where \(\tau _{d}= 2\tau \Vert A\Vert \). \(\square \)
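The two norm inequalities behind the constant \(\tau _{d} = 2\tau \Vert A\Vert \) can be checked numerically on random data (a sketch; A, y and \(y^{*}\) below are arbitrary, not solutions of any particular LP):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 4, 6
A = rng.standard_normal((m, n))
y, y_star = rng.standard_normal(m), rng.standard_normal(m)

# s - s* = -A^T (y - y*), since s = c - A^T y and s* = c - A^T y*.
s_diff = -A.T @ (y - y_star)

# Operator-norm bound: ||s - s*|| <= ||A|| ||y - y*|| (spectral norm).
normA = np.linalg.norm(A, 2)
assert np.linalg.norm(s_diff) <= normA * np.linalg.norm(y - y_star) + 1e-12

# Splitting y into positive and negative parts and applying the triangle
# inequality gives the extra factor of 2 in tau_d = 2 * tau * ||A||.
yp, ym = np.maximum(y, 0), -np.minimum(y, 0)
ysp, ysm = np.maximum(y_star, 0), -np.minimum(y_star, 0)
assert np.linalg.norm(y - y_star) <= (np.linalg.norm(yp - ysp)
                                      + np.linalg.norm(ym - ysm) + 1e-12)
```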
Proof of Lemma 2
Since \((x,y,s)\in {\mathcal {F}}^{0}_{\lambda }\), (1) gives \(Ax=b\) and \(A^{T}y+s=c\). Then the result follows directly from Theorem 9. \(\square \)
Appendix 2: Proof of Lemma 7
Theorem 3 shows that we are able to preserve the optimal strict complementarity partition after perturbing the problems if the original (PD) has a unique and nondegenerate solution. In fact, we can go a step further and show that (\(\hbox {PD}_{\lambda }\)) will then also have a unique and nondegenerate solution.
Theorem 10
Assume (6) holds and the (PD) problems have a unique and nondegenerate solution \(\left( x^{*},y^{*},s^{*} \right) \). Let \({\mathcal {A}}\) and \({\mathcal {I}}\) denote the corresponding optimal active and inactive sets. Then there exists \(\hat{\lambda }=\hat{\lambda }(A,b,c,x^{*},s^{*})>0\) such that the perturbed problems (\(\hbox {PD}_{\lambda }\)) with \(0 \le \Vert \lambda \Vert < \hat{\lambda }\) have a unique and nondegenerate solution and the optimal active set is the same as that of the original (PD) problems.
Proof
We consider the equivalent perturbed problem (3). From Theorem 3, we know there exists a \(\hat{\lambda }(A,b,c,x^{*},s^{*}) > 0\) such that (3) with \(0 \le \Vert \lambda \Vert < \hat{\lambda }\) has a strictly complementary solution \( \left( \hat{p},\hat{y},\hat{q} \right) \) with the same optimal active and inactive sets \({\mathcal {A}}\) and \({\mathcal {I}}\), namely we have
and also
Next we show that \( \left( \hat{p},\hat{y},\hat{q} \right) \) is the unique solution of (3). Assume there exists another solution \(\bar{p} \ne \hat{p}\). Then \((\bar{p},\hat{y},\hat{q})\) satisfies the optimality conditions (4). From the complementarity equations (the third term) in (4) and \(\hat{q}_{{\mathcal {A}}} > 0\), we have \(\bar{p}_{{\mathcal {A}}} = 0 = \hat{p}_{{\mathcal {A}}}\), and thus \( A_{{\mathcal {I}}}\bar{p}_{{\mathcal {I}}} = b_{\lambda }. \) It follows from this and (63) that \( A_{{\mathcal {I}}} ( \bar{p}_{{\mathcal {I}}} - \hat{p}_{{\mathcal {I}}} ) = 0. \) As the (PD) solution is unique and nondegenerate, we must have \(|{\mathcal {I}}| = m\) and \(rank(A_{{\mathcal {I}}}) = m\), namely, \(A_{{\mathcal {I}}}\) is nonsingular, which implies \(\hat{p}_{{\mathcal {I}}} = \bar{p}_{{\mathcal {I}}}\). Hence (3) has a unique and nondegenerate primal solution, which in turn implies a unique and nondegenerate dual solution. \(\square \)
To prove Lemma 7, we also need the following series of useful lemmas.
Lemma 12
(Farkas’ Lemma [2, Lemma 5.1]) One and only one of the following two systems has a solution:
where \(T \in {\mathbb {R}}^{m \times n}\), \(b \in {\mathbb {R}}^{m}\), \(w \in {\mathbb {R}}^{n}\) and \(y\in {\mathbb {R}}^{m}\).
Lemma 13
Given \(i \in \{1,\ldots ,n\}\), the following system
always has a solution, where \(A \in {\mathbb {R}}^{m \times n}\), \(x \in {\mathbb {R}}^{n}\), \(y\in {\mathbb {R}}^{m}\) and \(A_{i}\) is the \(i^{th}\) column of A.
Proof
Without loss of generality, we can choose \(i=1\). Partition x and A as \( x = \left[ \, x_{1} \quad \bar{x}^{T} \,\right] ^{T} \) and \( A = \begin{bmatrix} A_{1}&\bar{A} \end{bmatrix}, \) where \( \bar{x} = \left[ \, x_{2} \quad \dots \quad x_{n} \,\right] ^{T} \) and \( \bar{A} = \begin{bmatrix} A_{2}&\dots&A_{n} \end{bmatrix}. \)
We need to prove the following system has a solution
From Lemma 12, we know (64) has a solution if and only if
has no solution, where \(u_{1} \in {\mathbb {R}}^{m}\), \(u_{2} \in {\mathbb {R}}^{n-1}\), \(u_{3} \in {\mathbb {R}}^{m}\), \(u_{4} \in {\mathbb {R}}\) and \(u_{5} \in {\mathbb {R}}^{n-1}\).
Assume (65) has a solution \((u_{1},u_{2},u_{3},u_{4},u_{5}) \ge 0\). Then we get
Multiplying both sides of (66a) by \(u_{1}^{T} \ge 0\), we have
From (66b), (66c) and the nonnegativity of the variables, we know
Thus (65) has no solution, which implies (64) has a solution. \(\square \)
Lemma 14
The system
always has a solution, where \(A \in {\mathbb {R}}^{m \times n}\), \(x \in {\mathbb {R}}^{n}\) and \(y\in {\mathbb {R}}^{m}\).
Proof
From Lemma 13, we know for any \(i \in \{1,\ldots ,n\}\), there exists \((x^{i}, y^{i}) \ge 0\) where \(x^{i} \in {\mathbb {R}}^{n}\) and \(y^{i} \in {\mathbb {R}}^{m}\), such that
Set \(\hat{x} = \sum _{i=1}^{n} x^{i} \ge 0\) and \(\hat{y} = \sum _{i=1}^{n}y^{i} \ge 0\). Then from (68), we have
and
\(\square \)
Lemma 15
The system
always has a solution, where \(A \in {\mathbb {R}}^{m \times n}\), \(x \in {\mathbb {R}}^{n}\) and \(y\in {\mathbb {R}}^{m}\).
Proof
Replace A in Lemma 14 by \(-A^{T}\). \(\square \)
Lemma 16
The system
always has a solution, where \(A \in {\mathbb {R}}^{m \times n}\), \(x \in {\mathbb {R}}^{n}\) and \(y\in {\mathbb {R}}^{m}\).
Proof
From Lemmas 14 and 15, we know there exist \((\hat{x}, \hat{y})\) and \((\tilde{x},\tilde{y})\) such that (67) and (69) hold respectively. Set \(\bar{x} = \hat{x}+\tilde{x}\) and \(\bar{y} = \hat{y}+\tilde{y}\) and deduce
\(\square \)
Proof of Lemma 7
Assume \({\mathcal {A}}\) and \({\mathcal {I}}\) are the optimal active and inactive sets at the unique solution \(\left( x^{*},y^{*},s^{*} \right) \) of (PD). Then from (45), we have
From Theorem 10, we know there exists a \(\hat{\lambda } = \hat{\lambda } (A,b,c,x^{*},s^{*})\) such that (\(\hbox {PD}_{\lambda }\)) with \(0 \le \Vert \lambda \Vert < \hat{\lambda } \) has a unique and nondegenerate solution and \({\mathcal {A}}\) and \({\mathcal {I}}\) are the optimal active and inactive sets. Since \(\left( \hat{p},\hat{y},\hat{q} \right) \) defined in (13) is a solution of (3), \(\left( x^{*}_{\lambda },y^{*}_{\lambda },s^{*}_{\lambda } \right) = ( \hat{p} - \lambda , \hat{y}, \hat{q} - \lambda )\) is a solution of (\(\hbox {PD}_{\lambda }\)) and also unique, with \({\mathcal {A}}\) and \({\mathcal {I}}\) being the optimal active and inactive sets. This and (21) give
From (13), recalling that \(\hat{p} = \hat{x} + \lambda \) and \(\hat{q} = \hat{s} + \lambda \), we have
This, (70) and (71) give us that \(\epsilon (A,b_{\lambda },c_{\lambda }) > \epsilon (A,b,c)\), provided
It remains to find a solution of (72) whose norm is less than \(\hat{\lambda }\). From Lemma 16, we know (72) always has a solution, say \(\bar{\lambda }\). Since (72) is homogeneous, \(\frac{\hat{\lambda }}{2 \Vert \bar{\lambda }\Vert }\bar{\lambda }\) is also a solution, and \(\Vert \frac{\hat{\lambda }}{2 \Vert \bar{\lambda }\Vert }\bar{\lambda } \Vert < \hat{\lambda }\). Without loss of generality, we denote this rescaled solution again by \(\bar{\lambda }\). Furthermore, (72) holds for all \(\lambda \) with \(0<\lambda = \alpha \bar{\lambda }<\bar{\lambda }\), where \(\alpha \in (0,1)\). \(\square \)
Appendix 3: An active-set prediction procedure
Note that in Procedure 1, \({\mathcal {A}}^{k}\) is the predicted active set, \({\mathcal {I}}^{k}\) the predicted inactive set, and \({\mathcal {Z}}^{k} = \{ 1,2,\ldots ,n \} \backslash \left( {{\mathcal {A}}}^{k} \cup {{\mathcal {I}}}^{k}\right) \) the set of all undetermined indices at the \(k^{th}\) iteration.
Appendix 4: Results for crossover to simplex on selected Netlib problems
From left to right, we give the name of the test problem, the number of equality constraints, the number of variables, the value of the duality gap \(\mu _{\lambda }^{K}\) when we terminate the (perturbed) Algorithm 1, the value of the duality gap \(\mu ^{K}\) when we terminate the (unperturbed) Algorithm 2, the number of ipm iterations, the relative difference (see Footnote 7 on Page 26) between the two bases generated by Algorithms 1 and 2, and the number of simplex iterations for each of Algorithms 1 and 2. Since the algorithm without perturbations is terminated at the same ipm iteration as Algorithm 1, we show only the number of ipm iterations for the latter. Problems on which Algorithm 1 loses are marked in bold font. ‘—’ means the simplex solver fails for the particular test problem.
Cartis, C., Yan, Y. Active-set prediction for interior point methods using controlled perturbations. Comput Optim Appl 63, 639–684 (2016). https://doi.org/10.1007/s10589-015-9791-z