
Approximate Farkas lemmas and stopping rules for iterative infeasible-point algorithms for linear programming

Abstract

In exact arithmetic, the simplex method applied to a particular linear programming problem instance with real data either shows that it is infeasible, shows that its dual is infeasible, or generates optimal solutions to both problems. Most interior-point methods, on the other hand, do not provide such clear-cut information. If the primal and dual problems have bounded nonempty sets of optimal solutions, they usually generate a sequence of primal or primal-dual iterates that approach feasibility and optimality. But if the primal or dual instance is infeasible, most methods give less precise diagnostics. There are methods with finite convergence to an exact solution even with real data. Unfortunately, bounds on the required number of iterations for such methods applied to instances with real data are very hard to calculate and often quite large. Our concern is with obtaining information from inexact solutions after a moderate number of iterations. We provide general tools (extensions of the Farkas lemma) for concluding that a problem or its dual is likely (in a certain well-defined sense) to be infeasible, and apply them to develop stopping rules for a homogeneous self-dual algorithm and for a generic infeasible-interior-point method for linear programming. These rules allow precise conclusions to be drawn about the linear programming problem and its dual: either near-optimal solutions are produced, or we obtain “certificates” that all optimal solutions, or all feasible solutions to the primal or dual, must have large norm. Our rules thus allow more definitive interpretation of the output of such an algorithm than previous termination criteria. We give bounds on the number of iterations required before these rules apply. Our tools may also be useful for other iterative methods for linear programming.

© 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
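For background, the classical Farkas lemma that the title's "approximate Farkas lemmas" extend can be stated as follows. The LaTeX sketch below is standard linear programming duality material, not a restatement of the paper's quantitative results; it only indicates the kind of alternative system whose approximate solutions serve as the certificates mentioned in the abstract.

\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{lemma}{Lemma}
\begin{document}
% Classical Farkas lemma: exactly one of the two systems below is solvable.
% The paper's approximate versions (not reproduced here) work with inexact
% solutions of an alternative system like (II) and conclude that any
% solution of (I), if one exists, must have large norm.
\begin{lemma}[Farkas]
Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then exactly one
of the following two systems has a solution:
\[
\text{(I)}\quad Ax = b,\ x \ge 0,
\qquad\qquad
\text{(II)}\quad A^{\mathsf T} y \le 0,\ b^{\mathsf T} y > 0.
\]
\end{lemma}
A vector $y$ solving (II) exactly certifies that (I) is infeasible; in the
spirit of the abstract, a vector satisfying (II) only approximately certifies
that every feasible solution of (I) must have large norm.
\end{document}

Roughly speaking, in the homogeneous self-dual setting mentioned in the abstract, such certificates are read off from the iterates of a self-dual embedding: solutions in which the homogenizing variable is positive yield near-optimal primal-dual pairs, while solutions in which its complementary variable dominates indicate primal or dual infeasibility.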


Cite this article

Todd, M.J., Ye, Y. Approximate Farkas lemmas and stopping rules for iterative infeasible-point algorithms for linear programming. Mathematical Programming 81, 1–21 (1998). https://doi.org/10.1007/BF01584841
