
Geometric random edge


Abstract

We show that a variant of the random-edge pivoting rule results in a strongly polynomial time simplex algorithm for linear programs \(\max \{c^Tx :x \in \mathbb R^n, \, Ax\leqslant b\}\), whose constraint matrix A satisfies a geometric property introduced by Brunsch and Röglin: the sine of the angle of a row of A to a hyperplane spanned by \(n-1\) other rows of A is at least \(\delta \). This property is a geometric generalization of A being integral and each sub-determinant of A being bounded by \(\Delta \) in absolute value; in this case \(\delta \geqslant 1/(\Delta ^2 n)\). In particular, linear programs defined by totally unimodular matrices are captured in this framework: here \(\delta \geqslant 1/n\), and Dyer and Frieze previously described a strongly polynomial-time randomized simplex algorithm for linear programs with A totally unimodular. The expected number of pivots of the simplex algorithm is polynomial in the dimension and \(1/\delta \) and independent of the number of constraints of the linear program. Our main result can be viewed as an algorithmic realization of the proof of small diameter for such polytopes by Bonifas et al., using the ideas of Dyer and Frieze.
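To make the \(\delta \)-distance property concrete, here is a small brute-force sketch (our own illustration, not from the paper; plain NumPy, enumerating all \((n-1)\)-row subsets, so it is only usable for tiny instances). Rows that lie inside the spanned hyperplane are skipped, as in the precise definition of Brunsch and Röglin.

```python
import itertools
import numpy as np

def delta_of(A, tol=1e-9):
    """Brute-force the delta-distance parameter of A: the minimum, over a row
    a_k and n-1 other linearly independent rows, of the sine of the angle
    between a_k and the hyperplane spanned by those n-1 rows."""
    m, n = A.shape
    U = A / np.linalg.norm(A, axis=1, keepdims=True)   # unit rows (angles are norm-free)
    delta = 1.0
    for rows in itertools.combinations(range(m), n - 1):
        span = U[list(rows)]
        if np.linalg.matrix_rank(span, tol=tol) < n - 1:
            continue                                    # these rows span no hyperplane
        normal = np.linalg.svd(span)[2][-1]             # unit normal of the spanned hyperplane
        for k in set(range(m)) - set(rows):
            s = abs(U[k] @ normal)                      # sine of the angle of a_k to the hyperplane
            if s > tol:                                 # skip rows lying inside the hyperplane
                delta = min(delta, s)
    return delta

# A small interval (hence totally unimodular) matrix in dimension n = 3;
# the bound quoted above gives delta >= 1/n = 1/3, which is attained here.
A = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1], [0, 1, 1], [0, 0, 1]], dtype=float)
print(delta_of(A))   # approximately 0.3333
```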


Notes

  1. A similar fact holds for totally unimodular constraint matrices, see, e.g., [22, Proposition 2.1, p. 540]: after one has identified an element of the optimal basis, one is left with a linear program in dimension \(n-1\) with a totally unimodular constraint matrix. This fails to hold for integral matrices with sub-determinants bounded by 2.

References

  1. Applegate, D., Kannan, R.: Sampling and integration of near log-concave functions. In: STOC ’91: Proceedings of the Twenty-Third Annual ACM Symposium on Theory of Computing, pp. 156–163. ACM, New York (1991)

  2. Bobkov, S.G., Houdré, C.: Isoperimetric constants for product probability measures. Ann. Probab. 25(1), 184–205 (1997)

  3. Bonifas, N., Di Summa, M., Eisenbrand, F., Hähnle, N., Niemeier, M.: On sub-determinants and the diameter of polyhedra. In: Proceedings of the 28th Annual ACM Symposium on Computational Geometry, SoCG ’12, pp. 357–362 (2012)

  4. Brunsch, T., Großwendt, A., Röglin, H.: Solving totally unimodular LPs with the shadow vertex algorithm. In: 32nd International Symposium on Theoretical Aspects of Computer Science, p. 171 (2015)

  5. Brunsch, T., Röglin, H.: Finding short paths on polytopes by the shadow vertex algorithm. In: Fomin, F.V., Freivalds, R., Kwiatkowska, M., Peleg, D. (eds.) Automata, Languages, and Programming, pp. 279–290. Springer, Berlin (2013)

  6. Cook, W., Gerards, A.M.H., Schrijver, A., Tardos, E.: Sensitivity theorems in integer linear programming. Math. Program. 34, 251–264 (1986)


  7. Dadush, D., Hähnle, N.: On the shadow simplex method for curved polyhedra. In: 31st International Symposium on Computational Geometry, SoCG 2015, June 22–25, 2015, Eindhoven, The Netherlands, pp. 345–359 (2015)

  8. Dantzig, G.B.: Maximization of a linear function of variables subject to linear inequalities. In: Koopmans, T.C. (ed.) Activity Analysis of Production and Allocation, pp. 339–347. Wiley, New York (1951)


  9. Dyer, M., Frieze, A.: Random walks, totally unimodular matrices, and a randomised dual simplex algorithm. Math. Program. 64(1, Ser. A), 1–16 (1994)


  10. Friedmann, O.: A subexponential lower bound for Zadeh’s pivoting rule for solving linear programs and games. In: Integer Programming and Combinatorial Optimization, pp. 192–206. Springer, Berlin (2011)

  11. Friedmann, O., Hansen, T.D., Zwick, U.: Subexponential lower bounds for randomized pivoting rules for the simplex algorithm. In: STOC’11—Proceedings of the 43rd ACM Symposium on Theory of Computing, pp. 283–292. ACM, New York (2011)

  12. Gärtner, B., Kaibel, V.: Two new bounds for the random-edge simplex-algorithm. SIAM J. Discrete Math. 21(1), 178–190 (2007)


  13. Grötschel, M., Lovász, L., Schrijver, A.: Geometric Algorithms and Combinatorial Optimization, Volume 2 of Algorithms and Combinatorics. Springer, Berlin (1988)


  14. Hansen, T.D., Paterson, M., Zwick, U.: Improved upper bounds for random-edge and random-jump on abstract cubes. In: Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 874–881. Society for Industrial and Applied Mathematics (2014)

  15. Kalai, G.: A subexponential randomized simplex algorithm (extended abstract). In: Proceedings of the 24th Annual ACM Symposium on Theory of Computing (STOC ’92), pp. 475–482 (1992)

  16. Karmarkar, N.: A new polynomial-time algorithm for linear programming. Combinatorica 4(4), 373–395 (1984)


  17. Khachiyan, L.G.: A polynomial algorithm in linear programming. Dokl. Akad. Nauk SSSR 244, 1093–1097 (1979)


  18. Klee, V., Minty, G.J.: How good is the simplex algorithm? In: Inequalities, III (Proceedings of Third Symposium, Univ. California, Los Angeles, Calif., 1969; dedicated to the memory of Theodore S. Motzkin), pp. 159–175. Academic Press, New York (1972)

  19. Lovász, L., Simonovits, M.: Random walks in a convex body and an improved volume algorithm. Random Struct. Algorithms 4(4), 359–412 (1993)


  20. Lovász, L.: Random walks on graphs. In: Combinatorics, Paul Erdős is Eighty, Vol. 2, pp. 1–46 (1993)


  21. Matoušek, J., Sharir, M., Welzl, E.: A subexponential bound for linear programming. Algorithmica 16(4–5), 498–516 (1996)


  22. Nemhauser, G.L., Wolsey, L.A.: Integer programming. In: Nemhauser, G.L., et al. (eds.) Optimization, Volume 1 of Handbooks in Operations Research and Management Science, pp. 447–527. Elsevier, Amsterdam (1989)


  23. Schrijver, A.: Theory of Linear and Integer Programming. Wiley, London (1986)


  24. Sinclair, A., Jerrum, M.: Approximate counting, uniform generation and rapidly mixing Markov chains. Inf. Comput. 82, 93–133 (1989)


  25. Spielman, D.A., Teng, S.-H.: Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time. J. ACM 51(3), 385–463 (2004). (electronic)


  26. Tardos, É.: A strongly polynomial algorithm to solve combinatorial linear programs. Oper. Res. 34(2), 250–256 (1986)


  27. Vempala, S.: Geometric random walks: a survey. MSRI Comb. Comput. Geom. 52, 573–612 (2005)



Acknowledgements

The authors are grateful to Daniel Dadush and Nicolai Hähnle, who pointed out an error in the sub-division scheme in a previous version of this paper.

Author information


Correspondence to Friedrich Eisenbrand.

Appendix

Proof of Lemma 2

We denote the normal cones of B and \(B'\) by

$$\begin{aligned} {C} = \left\{ \sum _{i\in B} \lambda _i a_i:\lambda _i\geqslant 0 \right\} \quad \text {and} \quad {C}' = \left\{ \sum _{j \in B'} \mu _j a_j :\mu _j \geqslant 0\right\} . \end{aligned}$$
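(An aside, not part of the proof: a vector c lies in the cone C precisely when \(c = \sum _{i\in B} \lambda _i a_i\) with \(\lambda \geqslant 0\), which is the usual optimality criterion for the basis B. The sketch below, our own illustration, tests this membership numerically with nonnegative least squares.)

```python
import numpy as np
from scipy.optimize import nnls

def in_cone(rows, c, tol=1e-9):
    """Is c a nonnegative combination of the given rows, i.e. does c lie in their cone?"""
    lam, residual = nnls(rows.T, c)     # min ||rows^T lam - c||  subject to  lam >= 0
    return residual <= tol, lam

rows_B = np.array([[1.0, 0.0], [1.0, 1.0]])        # rows a_i, i in B
print(in_cone(rows_B, np.array([2.0, 1.0])))       # (True, array([1., 1.]))
print(in_cone(rows_B, np.array([-1.0, 0.0])))      # (False, ...)
```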

By a gift-wrapping technique, we construct a hyperplane \((h^T x = 0)\), \(h \in \mathbb R^n {\setminus } \{0\}\) such that the following conditions hold.

  (i) The hyperplane separates the interiors of C and \(C'\).

  (ii) The row \(a_k\) does not lie on the hyperplane.

  (iii) The hyperplane is spanned by \(n-1\) rows of A.

Once we have constructed \((h^Tx = 0)\), we can argue as follows. The distance of \(\mu _k \cdot a_k\) to the hyperplane \((h^Tx = 0)\) is at least \(\mu _k\cdot \delta \). Since \(c'\) is the sum of \(\mu _k \cdot a_k\) and a vector that lies on the same side of the hyperplane as \(a_k\), the distance of \(c'\) to this hyperplane is also at least \(\mu _k \cdot \delta \). Since c lies on the opposite side of the hyperplane, the distance between c and \(c'\) is at least \(\mu _k \cdot \delta \).
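In symbols (our summary of the preceding paragraph): writing \(H = (h^Tx = 0)\) and \(d(\cdot , H)\) for the Euclidean distance to H, and using that the \(\delta \)-distance property gives \(d(a_k, H) \geqslant \delta \) for the hyperplane H spanned by \(n-1\) rows of A and not containing \(a_k\),

$$\begin{aligned} \Vert c - c'\Vert \;\geqslant \; d(c', H) \;\geqslant \; d(\mu _k a_k, H) \;=\; \mu _k \, d(a_k, H) \;\geqslant \; \mu _k \delta , \end{aligned}$$

where the first inequality holds because the segment from c to \(c'\) crosses H, and the second because \(c' - \mu _k a_k\) lies on the same side of H as \(a_k\).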

We start with a hyperplane \((h^Tx = 0)\) strictly separating the interiors of C and \(C'\). The conditions (i, ii) are satisfied. Suppose that (iii) is not satisfied and let \(\ell < n-1\) be the maximum number of linearly independent rows of A that are contained in \((h^Tx = 0)\).

We tilt the hyperplane by moving its normal vector h along a chosen equator of the ball of radius \(\Vert h\Vert \) to increase this number. Since \(\ell < n-1\), we can choose the equator so that the rows of A that are contained in \((h^Tx = 0)\) remain on the hyperplane throughout the rotation.

As soon as the hyperplane contains a new row of A, we stop. If this new row is not \(a_k\), then conditions (i, ii) still hold and the hyperplane now contains \(\ell +1\) linearly independent rows of A.

If this new row is \(a_k\), then we redo the tilting operation, this time moving h in the opposite direction along the chosen equator. Since there are n linearly independent rows of A other than \(a_k\), this tilting stops at a new row of A that is not \(a_k\), and we end the tilting operation there.

This tilting operation has to be repeated at most \(n-1 - |B \cap B'|\) times to obtain the desired hyperplane. \(\square \)

1.1 Phase 1

We now describe an approach to determine an initial basic feasible solution or to assert that the linear program (1) is infeasible. Furthermore, we justify the assumption that the set of feasible solutions is a bounded polytope. This phase 1 differs from the usual textbook method, since the linear programs that we need to solve must themselves satisfy the \(\delta \)-distance property.

To find an initial basic feasible solution, we start by identifying n linearly independent inequalities \(\widetilde{a}_1^Tx \leqslant \widetilde{b}_1, \dots ,\widetilde{a}_n^Tx \leqslant \widetilde{b}_n\) of \(Ax \leqslant b\). Then we determine a ball that contains all feasible solutions; this standard technique is described, for example, in [13]. Using the normal vectors \(\widetilde{a}_1,\dots ,\widetilde{a}_n\), we next determine values \(\beta _i, \gamma _i \in \mathbb R\), \(i=1,\dots ,n\), such that this ball is contained in the parallelepiped \(Z = \{ x \in \mathbb R^n :\beta _i \leqslant \widetilde{a}_i^Tx \leqslant \gamma _i, \, i=1,\dots ,n\}\). We start with a basic feasible solution \(x_0^*\) of this parallelepiped and then proceed in m iterations. In iteration i, we determine a basic feasible solution \(x^*_i\) of the polytope

$$\begin{aligned} P_i = Z \cap \left\{ x \in \mathbb R^n :a_j^Tx \leqslant b_j, \,1 \leqslant j \leqslant i \right\} \end{aligned}$$

using the basic feasible solution \(x^*_{i-1}\) from the previous iteration by solving the linear program

$$\begin{aligned} \min \left\{ a_i^Tx :x \in P_{i-1} \right\} . \end{aligned}$$

If the optimal value of this linear program is larger than \(b_i\), we assert that the linear program (1) is infeasible. Otherwise, an optimal basic solution of this linear program lies in \(P_i\) and serves as the basic feasible solution \(x^*_i\) of this iteration.

Finally, we justify the assumption that \(P = \{x \in \mathbb R^n :Ax \leqslant b\}\) is bounded as follows. Instead of solving the linear program (1), we solve the linear program \(\max \{ c^Tx :x \in P \cap Z\}\) with the initial basic feasible solution \(x^*_m\). If the optimal solution is not a basic feasible solution of (1), that is, if a constraint of Z is tight at it, then we assert that (1) is unbounded.
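For illustration only, the following sketch (our own; names such as phase_one and A_tilde are ours) mirrors this phase-1 loop, using scipy.optimize.linprog as a stand-in LP solver. In the actual method, each of these linear programs would be solved with the random-edge simplex algorithm of this paper, started from the previous vertex \(x^*_{i-1}\); as noted above, these linear programs have to comply with the \(\delta \)-distance property.

```python
import numpy as np
from scipy.optimize import linprog

def phase_one(A, b, A_tilde, beta, gamma):
    """Illustrative phase 1 for max{c^T x : Ax <= b} (linear program (1)).

    Z = {x : beta_i <= a~_i^T x <= gamma_i} is the parallelepiped containing
    all feasible solutions; the rows of Ax <= b are added one at a time."""
    m, n = A.shape
    Z_lhs = np.vstack([A_tilde, -A_tilde])          # A~ x <= gamma  and  -A~ x <= -beta
    Z_rhs = np.concatenate([gamma, -beta])
    x = None
    for i in range(m):
        lhs = np.vstack([Z_lhs, A[:i]])             # constraints describing P_{i-1}
        rhs = np.concatenate([Z_rhs, b[:i]])
        # minimise a_i^T x over P_{i-1}; linprog is only a stand-in solver here
        res = linprog(A[i], A_ub=lhs, b_ub=rhs, bounds=(None, None))
        assert res.success
        if res.fun > b[i]:
            return None                             # (1) is infeasible
        x = res.x                                   # a basic feasible solution of P_i
    # Phase 2 would now maximise c^T x over the intersection of P and Z,
    # starting from the basic feasible solution x = x*_m returned here.
    return x
```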


Cite this article

Eisenbrand, F., Vempala, S. Geometric random edge. Math. Program. 164, 325–339 (2017). https://doi.org/10.1007/s10107-016-1089-0
