
Mathematical Solution Techniques

Business Optimization Using Mathematical Programming

Part of the book series: International Series in Operations Research & Management Science (ISOR, volume 307)


Abstract

This chapter provides some of the mathematical and algorithmic background for solving LP and MILP problems.


Notes

1. Other standard notations are possible. One that is also used very often is: subject to $\mathsf{A}\mathbf{x}=\mathbf{b}$, $\mathbf{l}\leq\mathbf{x}\leq\mathbf{u}$. Note that instead of the non-negativity constraints we have lower and upper bounds on the variables. See Sect. 3.8.3 for how this case is treated.
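
    A hedged sketch of this bounded form (all data invented for illustration): SciPy's linprog accepts exactly such bound pairs in place of plain non-negativity constraints.

```python
# Sketch: min c^T x  s.t.  A x = b,  l <= x <= u  -- bounds replace x >= 0.
# Data are invented for illustration only.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, -1.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([10.0])
l = np.array([0.0, 1.0, 2.0])   # lower bounds l
u = np.array([6.0, 8.0, 5.0])   # upper bounds u

res = linprog(c, A_eq=A, b_eq=b, bounds=list(zip(l, u)), method="highs")
print(res.x, res.fun)           # [4. 1. 5.]  1.0
```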

2. We call an equation $f_i(\mathbf{x})=0$ of a system $\mathbf{f}(\mathbf{x})=\mathbf{0}$ of equations redundant if it does not provide any further information with respect to the solution of that system: all information carried by $f_i(\mathbf{x})=0$ is already contained in the other equations.
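
    A minimal numerical illustration (invented 3×3 system): an equation is redundant exactly when removing it leaves the rank of the augmented system unchanged.

```python
# Row 3 is the sum of rows 1 and 2 (and b3 = b1 + b2), so the third
# equation carries no information beyond the first two.
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])
b = np.array([3.0, 2.0, 5.0])

Ab = np.hstack([A, b[:, None]])          # augmented matrix [A | b]
print(np.linalg.matrix_rank(Ab))         # 2
print(np.linalg.matrix_rank(Ab[:2]))     # 2 -> equation 3 is redundant
```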

3. The concept of basic variables stems from linear algebra. Consider the constraint matrix $\mathsf{A}$ and a collection of $m$ of its columns, $\mathsf{A}_j$. If these columns are linearly independent, the corresponding variables $x_j$ are called basic variables. Since there may exist different sets of $m$ linearly independent column vectors, we also have different sets of basic variables. Thus, to say a variable is a basic variable makes sense only if this variable is seen as a member of an ensemble of variables. The situation with canonical variables is different: it can easily be seen whether a variable is a canonical one or not. If we inspect the columns of $\mathsf{A}$ associated with a set of $m$ canonical variables, we see that, after ordering the columns, we get the identity matrix, which is a very special example of a set of linearly independent columns.
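
    The following sketch (invented 2×4 matrix) makes both notions concrete: a set of $m$ columns can serve as a basis iff it is nonsingular, while canonical variables are recognizable immediately because their columns form the identity matrix.

```python
import numpy as np

A = np.array([[2.0, 4.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])   # m = 2 rows, n = 4 columns

def is_basis(A, cols):
    # The variables x_j, j in cols, are basic variables iff these
    # columns are linearly independent, i.e., have full rank m.
    return np.linalg.matrix_rank(A[:, cols]) == A.shape[0]

print(is_basis(A, [0, 3]))                   # True:  one possible basis
print(is_basis(A, [0, 1]))                   # False: column 1 = 2 * column 0
print(np.allclose(A[:, [2, 3]], np.eye(2)))  # True:  x_2, x_3 are canonical
```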

4. In the original Simplex algorithm, the matrix $\mathsf{A}$ is updated in every iteration.

5. An inequality is said to be active for some values of the variables if the left- and right-hand sides are equal, i.e., for these values the inequality is fulfilled as an equality. Example: $x+y\leq 5$ becomes active for $x=3$ and $y=2$.
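
    In code the definition is a one-line test; the helper below is hypothetical and adds a tolerance for floating-point use.

```python
import numpy as np

def is_active(a, rhs, x, tol=1e-9):
    # a^T x <= rhs is active at x if it holds with equality (up to tol).
    return abs(np.dot(a, x) - rhs) <= tol

# The note's example: x + y <= 5 at x = 3, y = 2.
print(is_active(np.array([1.0, 1.0]), 5.0, np.array([3.0, 2.0])))  # True
```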

6. Such solvers are discussed in the review article by Freund & Mizuno (1996, [201]). Further details are found in Andersen & Ye (1995, [22]) and Vanderbei (1996, [560]; 2014, [561]).

7. Imagine the case that we have $n$ binary variables $\delta_i$; then we have $2^n$ possible combinations. While $2^3=8$ and $2^{10}=1024$, $2^{50}$ is already of the order of $10^{15}$, i.e., a number with leading digit 1 followed by 15 further digits. Note that we have approximately $2^n\approx 10^{0.3n}$ or, to be precise, $2^n=10^{Mn}$ with $M=\ln(2)/\ln(10)\approx 0.30103\ldots$
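
    A quick check of these numbers and of the conversion factor $M$:

```python
import math

M = math.log(2) / math.log(10)          # M = 0.30103...
for n in (3, 10, 50):
    print(n, 2**n, f"= 10^{n * M:.4f}")
# 50 1125899906842624 = 10^15.0515  -> about 10^15
```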

8. The problem size of a model may be expressed by the number of variables and constraints.

9. Tightening of bounds and other presolving operations are further described in Sect. 9.1.1.2.

10. Note that this property is not valid in MILP problems.

11. Using the results $\mathbf{u}^{\mathrm{T}}\mathbf{v}=(\mathbf{u}^{\mathrm{T}}\mathbf{v})^{\mathrm{T}}=\mathbf{v}^{\mathrm{T}}\mathbf{u}$ and $(\mathbf{y}^{\mathrm{T}}\mathsf{A})^{\mathrm{T}}=\mathsf{A}^{\mathrm{T}}\mathbf{y}$ from linear algebra, it is easy to see that both formulations of the dual problem are equivalent.
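
    Spelled out in the chapter's notation (a one-line sketch; the dual constraints in row form are assumed to read $\mathbf{y}^{\mathrm{T}}\mathsf{A}\leq\mathbf{c}^{\mathrm{T}}$):

```latex
\[
  \mathbf{y}^{\mathrm{T}}\mathsf{A}\leq\mathbf{c}^{\mathrm{T}}
  \iff
  \left(\mathbf{y}^{\mathrm{T}}\mathsf{A}\right)^{\mathrm{T}}
    =\mathsf{A}^{\mathrm{T}}\mathbf{y}\leq\mathbf{c},
  \qquad
  \mathbf{y}^{\mathrm{T}}\mathbf{b}
    =\left(\mathbf{y}^{\mathrm{T}}\mathbf{b}\right)^{\mathrm{T}}
    =\mathbf{b}^{\mathrm{T}}\mathbf{y}.
\]
```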

12. This result is only true if no upper bounds (see Sect. 3.8.3) are present.

13. As further pointed out on page 40, the basis matrix is only rarely inverted explicitly.

14. From now on, $\mathsf{A}_j$ denotes the column of the original matrix $\mathsf{A}$ corresponding to the variable $x_j$.

15. In this case, we are solving a maximization problem.

16. The “Boat Renting” problem shows this property very well. Let us inspect the system of linear equations in each iteration. The columns associated with the variables $s_1,\ldots,s_4$, the original basic variables, give the basis inverse associated with the current basis. The reason is that elementary row operations are equivalent to a multiplication of the matrix representing the equations by another matrix, say $\mathsf{M}$. If we inspect the first iteration, we can understand how the method works. The initial matrix and, in particular, the columns corresponding to the new basic variables are multiplied by $\mathsf{M}$ and obviously give $\mathsf{M}\mathcal{B}=\mathbb{1}$, where $\mathbb{1}$ is the unit matrix. Thus we have $\mathsf{M}=\mathcal{B}^{-1}$. Since we have multiplied all columns of $\mathsf{A}$ by $\mathsf{M}$, and in particular also the unit matrix associated with the original basic variables, these columns just give the columns of the basis inverse $\mathcal{B}^{-1}$. In each iteration $k$ we multiply our original matrix by such a matrix $\mathsf{M}_k$, so the original basic columns represent the product of all matrices $\mathsf{M}_k$, which then is the basis inverse of the current matrix.
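
    The mechanism can be replayed numerically (a sketch with an invented 2×2 basis; the appended identity columns play the role of the original slack columns):

```python
# Gauss-Jordan pivoting on the tableau [A | I]: once the columns of A are
# reduced to the identity, the former identity columns hold B^{-1}.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
T = np.hstack([A, np.eye(2)])            # tableau [A | I]

for i in range(2):                       # elementary row operations
    T[i] /= T[i, i]
    for r in range(2):
        if r != i:
            T[r] -= T[r, i] * T[i]

B_inv = T[:, 2:]                         # read off the basis inverse
print(np.allclose(B_inv, np.linalg.inv(A)))   # True
```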

17. Note that LP solvers may have a different sign convention for the shadow prices and may print out $\pi_2=\pi_3=+0.5$.

18. Affine scaling methods are based on a series of strictly feasible points (see the definition below) of the primal problem. Around these points the feasible region of the primal problem is locally approximated by the so-called Dikin ellipsoid. This procedure leads to a convex optimization problem which is linear except for one convex quadratic constraint.
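
    A minimal sketch of one affine-scaling iteration (invented data; the closed-form step below minimizes the linearized objective over the Dikin ellipsoid and omits the safeguards a production solver needs):

```python
import numpy as np

def affine_scaling_step(A, c, x, gamma=0.9):
    # One primal affine-scaling step for min c^T x, A x = b, x >= 0,
    # starting from a strictly feasible (interior) point x > 0.
    D = np.diag(x)                                # Dikin ellipsoid scaling
    AD = A @ D
    w = np.linalg.solve(AD @ AD.T, AD @ (D @ c))  # least-squares multipliers
    dx = -D @ (D @ c - AD.T @ w)                  # projected descent direction
    neg = dx < -1e-12
    alpha = gamma * np.min(-x[neg] / dx[neg]) if neg.any() else 1.0
    return x + alpha * dx                         # stays strictly feasible

# Invented example: min -x1 - x2  s.t.  x1 + x2 + s = 4, all variables >= 0.
A = np.array([[1.0, 1.0, 1.0]])
c = np.array([-1.0, -1.0, 0.0])
x = np.array([1.0, 1.0, 2.0])                     # strictly feasible start
for _ in range(30):
    x = affine_scaling_step(A, c, x)
print(x)                                          # x1 + x2 -> 4 (optimal face)
```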

19. Potential reduction methods are those introduced in Karmarkar’s famous 1984 paper. The objective function is the potential function $q\ln(\mathbf{c}^{\mathrm{T}}\mathbf{x}-\mathbf{b}^{\mathrm{T}}\mathbf{y})-\sum_{j=1}^{n}\ln x_j$, built up from the logarithm of the duality gap and a logarithmic term designed to repel feasible points from the boundary of the feasible region. The constraints are the linear constraints for primal and dual feasibility.
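
    Transcribed literally (a sketch of the formula only, not of Karmarkar’s full algorithm; x must be primal feasible and y dual feasible):

```python
import numpy as np

def potential(c, b, x, y, q):
    # q * ln(duality gap) minus the log-barrier term over the primal variables.
    return q * np.log(c @ x - b @ y) - np.sum(np.log(x))
```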

20. Central trajectory methods are primal–dual methods based on the concept of the central path or central trajectory. They are discussed in detail below. In their primal–dual predictor–corrector version, they are the most efficient ones and, according to Freund & Mizuno (1996), have the most aesthetic qualities.
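
    For reference, the central path mentioned here is, for each barrier parameter $\mu>0$, the unique solution $(\mathbf{x}(\mu),\mathbf{y}(\mu),\mathbf{z}(\mu))$ of the perturbed optimality conditions (standard textbook form, stated for completeness):

```latex
\[
  \mathsf{A}\mathbf{x}=\mathbf{b},\ \mathbf{x}>\mathbf{0},\qquad
  \mathsf{A}^{\mathrm{T}}\mathbf{y}+\mathbf{z}=\mathbf{c},\ \mathbf{z}>\mathbf{0},\qquad
  x_{j}z_{j}=\mu,\quad j=1,\ldots,n.
\]
```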

21. The reason for using this expression is that the method includes both the primal and the dual variables.

22. $\mathrm{diag}(x_1,\ldots,x_n)$ denotes a diagonal matrix with diagonal elements $x_1,\ldots,x_n$ and zeros at all other entries.
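
    In numpy this is exactly np.diag applied to a vector:

```python
import numpy as np
print(np.diag([1.0, 2.0, 3.0]))   # 3x3 matrix with 1, 2, 3 on the diagonal
```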

23. The screen output of an interior-point solver may thus contain, as valuable information on each iteration, the number of the current iteration, quantities measuring the violation of primal and dual feasibility, the values of the primal and the dual objective function (or the duality gap), and possibly the barrier parameter.
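
    A sketch of how such per-iteration quantities could be computed for the standard primal–dual pair (notation assumed: primal min $\mathbf{c}^{\mathrm{T}}\mathbf{x}$, $\mathsf{A}\mathbf{x}=\mathbf{b}$, $\mathbf{x}\geq\mathbf{0}$; dual max $\mathbf{b}^{\mathrm{T}}\mathbf{y}$, $\mathsf{A}^{\mathrm{T}}\mathbf{y}+\mathbf{z}=\mathbf{c}$, $\mathbf{z}\geq\mathbf{0}$):

```python
import numpy as np

def iteration_report(A, b, c, x, y, z):
    primal_infeas = np.linalg.norm(A @ x - b)       # violation of A x = b
    dual_infeas = np.linalg.norm(A.T @ y + z - c)   # violation of A^T y + z = c
    gap = c @ x - b @ y                             # duality gap
    mu = (x @ z) / len(x)                           # barrier parameter estimate
    return primal_infeas, dual_infeas, gap, mu
```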

References

1. Andersen, E.D.: On exploiting problem structure in a basis identification procedure for linear programming. Department Publication 6, Department of Management Sciences, Odense Universitet, Odense (1996)

2. Andersen, E.D., Ye, Y.: Combining Interior-Point and Pivoting Algorithms for Linear Programming. Technical report, Department of Management Sciences, The University of Iowa, Iowa City (1994)

3. Andersen, E.D., Ye, Y.: On a Homogeneous Algorithm for the Monotone Complementarity Problem. Technical report, Department of Management Sciences, The University of Iowa, Iowa City (1995)

4. Andersen, E.D., Gondzio, J., Meszaros, C., Xu, X.: Implementation of Interior Point Methods for Large Scale Linear Programming. Department Publication 1, Department of Management Sciences, Odense Universitet, Odense (1996)

5. Arbel, A.: Exploring Interior-Point Linear Programming Algorithms and Software. MIT Press, London (1994)

6. Balas, E.: An additive algorithm for solving linear programs with zero-one variables. Oper. Res. 13, 517–546 (1965)

7. Barnhart, C., Johnson, E.L., Nemhauser, G.L., Savelsbergh, M.W.P., Vance, P.H.: Branch-and-price: column generation for solving huge integer programs. Oper. Res. 46(3), 316–329 (1998)

8. Bock, H.G., Zillober, C.: Interior point methods and extrapolation. In: Kleinschmidt, P. (ed.) Symposium Operations Research (SOR’95): Program and Abstract of the Annual Conference of the DGOR, GMÖOR and ÖGOR, p. 39. University of Passau, Passau (1995)

9. Burkard, R.E.: Methoden der Ganzzahligen Optimierung. Springer, Wien, New York (1972)

10. Dakin, R.J.: A tree search algorithm for mixed integer programming problems. Comput. J. 8, 250–255 (1965)

11. Dantzig, G.B.: Linear Programming and Extensions. Princeton University Press, Princeton, NJ (1963)

12. Dantzig, G.B., Wolfe, P.: Decomposition principle for linear programs. Oper. Res. 8, 101–111 (1960)

13. Desrosiers, J., Dumas, Y., Solomon, M.M., Soumis, F.: Time constrained routing and scheduling. In: Ball, M.E., Magnanti, T.L., Monma, C., Nemhauser, G.L. (eds.) Handbooks in Operations Research and Management Science, pp. 35–140. Society for Industrial and Applied Mathematics, Philadelphia (1995)

14. Fiacco, A.V., McCormick, G.P.: Nonlinear Programming. Sequential Unconstrained Minimization Techniques. Wiley, New York (1968)

15. Freund, R.M., Mizuno, S.: Interior point methods: current status and future directions. Optima (Math. Program. Soc. Newslett.) 51, 1–9 (1996)

16. Frisch, K.R.: The Logarithmic Potential Method for Convex Programming. Technical report, University Institute of Economics, Oslo (1955)

17. Gill, P.E., Murray, W., Wright, M.H.: Practical Optimization. Academic Press, London (1981)

18. Gill, P.E., Murray, W., Saunders, M.A., Tomlin, J.A., Wright, M.H.: On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projective method. Math. Program. 36, 183–209 (1986)

19. Glover, F.: A new foundation for a simplified primal integer programming algorithm. Oper. Res. 16, 727–740 (1968)

20. Gomory, R.E.: Outline of an algorithm for integer solutions to linear programs. Bull. Am. Math. Soc. 64, 275–278 (1958)

21. Gonzaga, C.L.: Path-following methods for linear programming. SIAM Rev. 34, 167–224 (1992)

22. Granot, F., Hammer, P.: On the use of Boolean functions in 0–1 programming. Methods Oper. Res. 12, 154–184 (1972)

23. Greenberg, H.J.: How to analyze the results of linear programs - Part 1: preliminaries. Interfaces 23(4), 56–67 (1993)

24. Greenberg, H.J.: How to analyze the results of linear programs - Part 2: price interpretation. Interfaces 23(5), 97–114 (1993)

25. Greenberg, H.J.: How to analyze the results of linear programs - Part 3: infeasibility. Interfaces 23(6), 120–139 (1993)

26. Greenberg, H.J.: How to analyze the results of linear programs - Part 4: forcing substructures. Interfaces 24(1), 121–130 (1994)

27. Harris, P.M.J.: Pivot selection methods of the Devex LP code. Math. Program. 5, 1–28 (1973)

28. Karmarkar, N.: A new polynomial-time algorithm for linear programming. Combinatorica 4, 375–395 (1984)

29. Karush, W.: Minima of Functions of Several Variables with Inequalities as Side Constraints. Master’s thesis, Department of Mathematics, University of Chicago, Chicago (1939)

30. Klee, V., Minty, G.J.: How good is the simplex algorithm? In: Shisha, O. (ed.) Inequalities III, pp. 159–175. Academic Press, New York (1972)

31. Klotz, E.: Identification, assessment, and correction of ill-conditioning and numerical instability in linear and integer programs. In: INFORMS TutORials in Operations Research, chap. 3, pp. 54–108. INFORMS (2014)

32. Klotz, E., Newman, A.M.: Practical guidelines for solving difficult linear programs. Surv. Oper. Res. Manage. Sci. 18(1), 1–17 (2013)

33. Klotz, E., Newman, A.M.: Practical guidelines for solving difficult mixed integer linear programs. Surv. Oper. Res. Manage. Sci. 18(1), 18–32 (2013)

34. Korte, B., Vygen, J.: Combinatorial Optimization: Theory and Algorithms. Algorithms and Combinatorics, vol. 21, 6th edn. Springer, New York (2018)

35. Kuhn, H.W., Tucker, A.W.: Nonlinear programming. In: Neyman, J. (ed.) Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pp. 481–492. University of California Press, Berkeley, CA (1951)

36. Lancia, G., Serafini, P.: Compact Extended Linear Programming Models. Springer International Publishing, Cham (2018)

37. Land, A.H., Doig, A.G.: An automatic method for solving discrete programming problems. Econometrica 28, 497–520 (1960)

38. Lustig, I.J., Marsten, R.E., Shanno, D.F.: Computational experience with a primal-dual interior point method for linear programming. Linear Algebra Appl. 152, 191–222 (1991)

39. Lustig, I.J., Marsten, R.E., Shanno, D.F.: On implementing Mehrotra’s predictor-corrector interior-point method for linear programming. SIAM J. Optim. 2, 435–449 (1992)

40. Maros, I., Mitra, G.: Finding better starting bases for the simplex method. In: Kleinschmidt, P. (ed.) Operations Research Proceedings 1995. Springer, Berlin (1996)

41. McMullen, P.: The maximum numbers of faces of a convex polytope. Mathematika 17, 179–184 (1970)

42. Mehrotra, S.: On the implementation of a primal-dual interior point method. SIAM J. Optim. 2(4), 575–601 (1992)

43. Nemhauser, G.L.: The age of optimization: solving large-scale real-world problems. Oper. Res. 42, 5–13 (1994)

44. Nemhauser, G.L., Wolsey, L.A.: Integer and Combinatorial Optimization. Wiley, New York (1988)

45. Padberg, M.: Linear Optimization and Extensions. Springer, Berlin, Heidelberg (1996)

46. Padberg, M.W., Rinaldi, G.: Optimization of a 532-city traveling salesman problem by branch and cut. Oper. Res. Lett. 6, 1–6 (1987)

47. Ravindran, A., Phillips, D.T., Solberg, J.J.: Operations Research. Principles and Practice. Wiley, New York (1987)

48. Savelsbergh, M.W.P.: A branch-and-price algorithm for the generalized assignment problem. Oper. Res. 45(6), 831–841 (1997)

49. Savelsbergh, M.W.P., Sismondi, G.C., Nemhauser, G.L.: Functional description of MINTO, a mixed INTeger optimizer. Oper. Res. Lett. 8, 119–124 (1994)

50. Schrijver, A.: Combinatorial Optimization: Polyhedra and Efficiency. Springer, Berlin, Heidelberg (2003)

51. Vanderbeck, F., Wolsey, L.A.: An exact algorithm for IP column generation. Oper. Res. Lett. 19, 151–160 (1996)

52. Vanderbei, R.J.: Linear Programming - Foundations and Extensions. Kluwer, Dordrecht (1996)

53. Vanderbei, R.J.: Linear Programming - Foundations and Extensions, 4th edn. Springer, New York (2014)

54. Vishnoi, N.: Algorithms for Convex Optimization. Cambridge University Press, Cambridge (2021)

55. Wolsey, L.A.: Group-theoretic results in mixed integer programming. Oper. Res. 19, 1691–1697 (1971)

56. Young, R.D.: A simplified primal (all-integer) integer programming algorithm. Oper. Res. 16, 750–782 (1968)

57. Gomory, R.E., Baumol, W.J.: Integer programming and pricing. Econometrica 28, 520–550 (1960)


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Kallrath, J. (2021). Mathematical Solution Techniques. In: Business Optimization Using Mathematical Programming. International Series in Operations Research & Management Science, vol 307. Springer, Cham. https://doi.org/10.1007/978-3-030-73237-0_3
