Design and implementation of a modular interior-point solver for linear optimization

Mathematical Programming Computation

Abstract

This paper introduces the algorithmic design and implementation of Tulip, an open-source interior-point solver for linear optimization. It implements a regularized homogeneous interior-point algorithm with multiple centrality corrections, and therefore handles unbounded and infeasible problems. The solver is written in Julia, allowing for a flexible and efficient implementation: Tulip's algorithmic framework is fully disentangled from linear algebra implementations and from a model's arithmetic. In particular, this allows specialized routines for structured problems to be integrated seamlessly. Extensive computational results are reported. We find that Tulip is competitive with open-source interior-point solvers on H. Mittelmann's benchmark of barrier linear programming solvers. Furthermore, we design specialized linear algebra routines for structured master problems that arise in Dantzig–Wolfe decomposition. These routines yield a tenfold speedup on large and dense instances from power systems operation and two-stage stochastic programming, thereby outperforming state-of-the-art commercial interior-point solvers. Finally, we illustrate Tulip's ability to use different levels of arithmetic precision by solving problems in extended precision.


Notes

  1. Personal communication with PIPS developers.

  2. In Julia, a ! is appended to functions that mutate their arguments.

  3. https://ds4dm.github.io/Tulip.jl/dev/.

  4. Source code is available at https://github.com/ds4dm/Tulip.jl, and online documentation at https://ds4dm.github.io/Tulip.jl/dev/.

  5. http://plato.asu.edu/ftp/lpbar.html.

  6. https://github.com/mtanneau/LPBenchmarks.

  7. See discussion in https://github.com/coin-or/Clp/issues/151.

  8. https://github.com/mtanneau/UnitBlockAngular.jl.

  9. Code for generating DER instances is available at https://github.com/mtanneau/DER_experiments and for TSPP instances at https://github.com/mtanneau/TSSP.

  10. Mosek always uses its homogeneous algorithm. With default settings, CPLEX and Gurobi only do so when solving node relaxations of a MIP model.

References

  1. Andersen, E.D., Andersen, K.D.: Presolving in linear programming. Math. Program. 71(2), 221–245 (1995). https://doi.org/10.1007/BF01586000

  2. Andersen, E.D., Andersen, K.D.: The Mosek Interior Point Optimizer for Linear Programming: An Implementation of the Homogeneous Algorithm, pp. 197–232. Springer, Boston (2000). https://doi.org/10.1007/978-1-4757-3216-0_8

  3. Anjos, M.F., Burer, S.: On handling free variables in interior-point methods for conic linear optimization. SIAM J. Optim. 18(4), 1310–1325 (2008). https://doi.org/10.1137/06066847X

  4. Anjos, M.F., Lodi, A., Tanneau, M.: A decentralized framework for the optimal coordination of distributed energy resources. IEEE Trans. Power Syst. 34(1), 349–359 (2019). https://doi.org/10.1109/TPWRS.2018.2867476

5. Babonneau, F., Vial, J.P.: ACCPM with a nonlinear constraint and an active set strategy to solve nonlinear multicommodity flow problems. Math. Program. 120(1), 179–210 (2009)

  6. Benders, J.F.: Partitioning procedures for solving mixed-variables programming problems. Numer. Math. 4(1), 238–252 (1962). https://doi.org/10.1007/BF01386316

  7. Bezanson, J., Edelman, A., Karpinski, S., Shah, V.B.: Julia: a fresh approach to numerical computing. SIAM Rev. 59(1), 65–98 (2017). https://doi.org/10.1137/141000671

8. Birge, J.R., Qi, L.: Computing block-angular Karmarkar projections with applications to stochastic programming. Manag. Sci. 34(12), 1472–1479 (1988)

  9. Bixby, R.E., Gregory, J.W., Lustig, I.J., Marsten, R.E., Shanno, D.F.: Very large-scale linear programming: a case study in combining interior point and simplex methods. Oper. Res. 40(5), 885–897 (1992). https://doi.org/10.1287/opre.40.5.885

  10. Castro, J.: Interior-point solver for convex separable block-angular problems. Optim. Methods Softw. 31(1), 88–109 (2016). https://doi.org/10.1080/10556788.2015.1050014

  11. Castro, J., Nasini, S., Saldanha-da Gama, F.: A cutting-plane approach for large-scale capacitated multi-period facility location using a specialized interior-point method. Math. Program. 163(1), 411–444 (2017). https://doi.org/10.1007/s10107-016-1067-6

  12. Choi, I.C., Goldfarb, D.: Exploiting special structure in a primal–dual path-following algorithm. Math. Program. 58(1), 33–52 (1993). https://doi.org/10.1007/BF01581258

13. Dantzig, G.B., Wolfe, P.: Decomposition principle for linear programs. Oper. Res. 8(1), 101–111 (1960). https://doi.org/10.1287/opre.8.1.101

  14. Davis, T.A.: SuiteSparse: a suite of sparse matrix software. http://faculty.cse.tamu.edu/davis/suitesparse.html

15. Desaulniers, G., Desrosiers, J., Solomon, M.M. (eds.): Column Generation. GERAD 25th Anniversary Series, vol. 5, 1st edn. Springer Science & Business Media, New York (2006)

16. Diamond, S., Boyd, S.: CVXPY: a Python-embedded modeling language for convex optimization. J. Mach. Learn. Res. 17(83), 1–5 (2016)

  17. Domahidi, A., Chu, E., Boyd, S.: ECOS: An SOCP solver for embedded systems. In: European Control Conference (ECC), pp. 3071–3076 (2013)

18. Dunning, I., Huchette, J., Lubin, M.: JuMP: a modeling language for mathematical optimization. SIAM Rev. 59(2), 295–320 (2017)

  19. Elhedhli, S., Goffin, J.L.: The integration of an interior-point cutting plane method within a branch-and-price algorithm. Math. Program. 100(2), 267–294 (2004)

  20. Forrest, J., Vigerske, S., Ralph, T., Hafer, L., jpfasano, Santos, H.G., Saltzman, M., Gassmann, H., Kristjansson, B., King, A.: coin-or/Clp: version 1.17.6 (2020). https://doi.org/10.5281/zenodo.3748677

  21. Friedlander, M.P., Orban, D.: A primal-dual regularized interior-point method for convex quadratic programs. Math. Program. Comput. 4(1), 71–107 (2012). https://doi.org/10.1007/s12532-012-0035-2

  22. Gertz, E.M., Wright, S.J.: Object-oriented software for quadratic programming. ACM Trans. Math. Softw. 29(1), 58–81 (2003). https://doi.org/10.1145/641876.641880

  23. Gleixner, A.M., Steffy, D.E., Wolter, K.: Iterative refinement for linear programming. INFORMS J. Comput. 28(3), 449–464 (2016). https://doi.org/10.1287/ijoc.2016.0692

  24. Gondzio, J.: Multiple centrality corrections in a primal-dual method for linear programming. Comput. Optim. Appl. 6(2), 137–156 (1996). https://doi.org/10.1007/BF00249643

  25. Gondzio, J.: Presolve analysis of linear programs prior to applying an interior point method. INFORMS J. Comput. 9(1), 73–91 (1997). https://doi.org/10.1287/ijoc.9.1.73

  26. Gondzio, J.: Interior point methods 25 years later. Eur. J. Oper. Res. 218(3), 587–601 (2012). https://doi.org/10.1016/j.ejor.2011.09.017

  27. Gondzio, J., Gonzalez-Brevis, P., Munari, P.: New developments in the primal-dual column generation technique. Eur. J. Oper. Res. 224(1), 41–51 (2013). https://doi.org/10.1016/j.ejor.2012.07.024

28. Gondzio, J., González-Brevis, P., Munari, P.: Large-scale optimization with the primal-dual column generation method. Math. Program. Comput. 8(1), 47–82 (2016). https://doi.org/10.1007/s12532-015-0090-6

  29. Gondzio, J., Grothey, A.: Parallel interior-point solver for structured quadratic programs: application to financial planning problems. Ann. Oper. Res. 152(1), 319–339 (2007). https://doi.org/10.1007/s10479-006-0139-z

  30. Gondzio, J., Grothey, A.: Exploiting structure in parallel implementation of interior point methods for optimization. Comput. Manag. Sci. 6(2), 135–160 (2009). https://doi.org/10.1007/s10287-008-0090-3

  31. Gondzio, J., Sarkissian, R.: Column generation with a primal-dual method. Technical report 96.6, Logilab (1996). https://www.maths.ed.ac.uk/~gondzio/reports/pdcgm.pdf

  32. Gondzio, J., Sarkissian, R.: Parallel interior-point solver for structured linear programs. Math. Program. 96(3), 561–584 (2003). https://doi.org/10.1007/s10107-003-0379-5

  33. Gondzio, J., Sarkissian, R., Vial, J.P.: Using an interior point method for the master problem in a decomposition approach. Eur. J. Oper. Res. 101(3), 577–587 (1997). https://doi.org/10.1016/S0377-2217(96)00182-8

  34. Grothey, A., Hogg, J., Colombo, M., Gondzio, J.: A Structure Conveying Parallelizable Modeling Language for Mathematical Programming, pp. 145–156. Springer, New York (2009). https://doi.org/10.1007/978-0-387-09707-7_13

35. Gurobi Optimization, LLC: Gurobi Optimizer Reference Manual (2018). https://www.gurobi.com

36. Hart, W.E., Watson, J.P., Woodruff, D.L.: Pyomo: modeling and solving mathematical programs in Python. Math. Program. Comput. 3(3), 219–260 (2011)

  37. Hurd, J.K., Murphy, F.H.: Exploiting special structure in primal dual interior point methods. ORSA J. Comput. 4(1), 38–44 (1992). https://doi.org/10.1287/ijoc.4.1.38

  38. IBM: IBM ILOG CPLEX Optimization Studio. https://www.ibm.com/products/ilog-cplex-optimization-studio

  39. Jessup, E.R., Yang, D., Zenios, S.A.: Parallel factorization of structured matrices arising in stochastic programming. SIAM J. Optim. 4(4), 833–846 (1994). https://doi.org/10.1137/0804048

  40. Kelley Jr., J.: The cutting-plane method for solving convex programs. J. Soc. Ind. Appl. Math. 8(4), 703–712 (1960). https://doi.org/10.1137/0108053

  41. Legat, B., Dowson, O., Garcia, J.D., Lubin, M.: MathOptInterface: a data structure for mathematical optimization problems. (2020). arXiv:2002.03447 [math]

42. Löfberg, J.: YALMIP: a toolbox for modeling and optimization in MATLAB. In: Proceedings of the CACSD Conference, Taipei, Taiwan (2004)

  43. Lubin, M., Petra, C.G., Anitescu, M., Zavala, V.: Scalable stochastic optimization of complex energy systems. In: Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, SC ’11, pp. 64:1–64:64. ACM, New York, NY, USA (2011). https://doi.org/10.1145/2063384.2063470

  44. Ma, D., Saunders, M.A.: Solving multiscale linear programs using the simplex method in quadruple precision. In: Al-Baali, M., Grandinetti, L., Purnama, A. (eds.) Numerical Analysis and Optimization, pp. 223–235. Springer International Publishing, Cham (2015)

  45. Makhorin, A.: GNU Linear Programming Kit, version 4.64 (2017). https://www.gnu.org/software/glpk/glpk.html

  46. Mehrotra, S.: On the implementation of a primal-dual interior point method. SIAM J. Optim. 2(4), 575–601 (1992). https://doi.org/10.1137/0802028

  47. Mitchell, J.E.: Cutting plane methods and subgradient methods. In: Decision Technologies and Applications, chap. 2, pp. 34–61. INFORMS (2009). https://doi.org/10.1287/educ.1090.0064

  48. Mitchell, J.E., Borchers, B.: Solving Linear Ordering Problems with a Combined Interior Point/Simplex Cutting Plane Algorithm, pp. 349–366. Springer, Boston (2000). https://doi.org/10.1007/978-1-4757-3216-0_14

  49. MOSEK ApS: The MOSEK Optimization Suite. https://www.mosek.com/

  50. Munari, P., Gondzio, J.: Using the primal-dual interior point algorithm within the branch-price-and-cut method. Comput. Oper. Res. 40(8), 2026–2036 (2013). https://doi.org/10.1016/j.cor.2013.02.028

  51. Naoum-Sawaya, J., Elhedhli, S.: An interior-point benders based branch-and-cut algorithm for mixed integer programs. Ann. Oper. Res. 210(1), 33–55 (2013)

  52. Orban, D., contributors: LDLFactorizations.jl: Factorization of symmetric matrices. https://github.com/JuliaSmoothOptimizers/LDLFactorizations.jl (2020). https://doi.org/10.5281/zenodo.3900668

  53. Rousseau, L.M., Gendreau, M., Feillet, D.: Interior point stabilization for column generation. Oper. Res. Lett. 35(5), 660–668 (2007). https://doi.org/10.1016/j.orl.2006.11.004

  54. Schultz, G.L., Meyer, R.R.: An interior point method for block angular optimization. SIAM J. Optim. 1(4), 583–602 (1991). https://doi.org/10.1137/0801035

  55. Tanneau, M.: Tulip.jl (2020). https://doi.org/10.5281/zenodo.3787950. https://github.com/ds4dm/Tulip.jl

  56. Udell, M., Mohan, K., Zeng, D., Hong, J., Diamond, S., Boyd, S.: Convex optimization in Julia. In: Proceedings of the 1st Workshop for High Performance Technical Computing in Dynamic Languages, pp. 18–28. IEEE Press (2014)

57. Westerlund, T., Pettersson, F.: An extended cutting plane method for solving convex MINLP problems. Comput. Chem. Eng. 19, 131–136 (1995). https://doi.org/10.1016/0098-1354(95)87027-X

  58. Wright, S.: Primal–Dual Interior-Point Methods. Society for Industrial and Applied Mathematics (1997). https://doi.org/10.1137/1.9781611971453

  59. Xu, X., Hung, P.F., Ye, Y.: A simplified homogeneous and self-dual linear programming algorithm and its implementation. Ann. Oper. Res. 62(1), 151–171 (1996). https://doi.org/10.1007/BF02206815

Acknowledgements

We thank Dominique Orban for helpful discussions on the regularization scheme and its implementation. We are also indebted to three anonymous referees for their careful reading and constructive suggestions, which helped us improve the quality and readability of the paper.

Author information

Correspondence to Andrea Lodi.


Mathieu Tanneau was supported by an excellence doctoral scholarship from FQRNT.

Appendices

A Dantzig–Wolfe decomposition and column generation

In this section, we present the Dantzig–Wolfe decomposition principle [13] and the basic column-generation framework. We refer to [15] for a thorough overview of column generation, and the relation between Dantzig–Wolfe decomposition and Lagrangian decomposition.

1.1 A.1 Dantzig–Wolfe decomposition

Consider the problem

$$\begin{aligned} (P) \ \ \ \min _{x} \ \ \&\sum _{r=0}^{R} c_{r}^{T} x_{r}\\ s.t. \ \ \&\sum _{r=0}^{R} A_{r} x_{r} = b_{0},\\&x_{0} \ge 0,\\&x_{r} \in \mathcal {X}_{r}, \ \ \ r=1, \ldots , R, \end{aligned}$$

where, for each \(r=1, \ldots , R\), \(\mathcal {X}_{r}\) is defined by a finite number of linear inequalities, plus integrality restrictions on some of the coordinates of \(x_{r}\). Therefore, the convex hull of \(\mathcal {X}_{r}\), denoted by \(conv(\mathcal {X}_{r})\), is a polyhedron whose set of extreme points (resp. extreme rays) is denoted by \(\varOmega _{r}\) (resp. \(\varGamma _{r}\)). Any element of \(conv(\mathcal {X}_{r})\) can thus be written as a convex combination of extreme points \(\{ \omega \}_{\omega \in \varOmega _{r}}\), plus a non-negative combination of extreme rays \(\{ \rho \}_{\rho \in \varGamma _{r}}\) i.e.,

$$\begin{aligned} conv(\mathcal {X}_{r}) = \left\{ \sum _{\omega \in \varOmega _{r}} \lambda _{\omega } \omega + \sum _{\rho \in \varGamma _{r}} \lambda _{\rho } \rho \ \Bigg | \ \lambda \ge 0, \sum _{\omega }\lambda _{\omega } = 1 \right\} . \end{aligned}$$
(93)

The Dantzig–Wolfe decomposition principle [13] then consists in substituting \(x_{r}\) with such a combination of extreme points and extreme rays. This change of variable yields the so-called Master Problem

$$\begin{aligned} (MP) \ \ \ \min _{x, \lambda } \ \ \&c_{0}^{T}x_{0} + \sum _{r=1}^{R} \sum _{\omega \in \varOmega _{r}} c_{r, \omega } \lambda _{r, \omega } + \sum _{r=1}^{R} \sum _{\rho \in \varGamma _{r}} c_{r, \rho } \lambda _{r, \rho } \end{aligned}$$
(94)
$$\begin{aligned} s.t. \ \ \&\sum _{\omega \in \varOmega _{r}} \lambda _{r, \omega } =1, \ \ \ r=1, \ldots , R\end{aligned}$$
(95)
$$\begin{aligned}&A_{0}x_{0} + \sum _{r=1}^{R} \sum _{\omega \in \varOmega _{r}} a_{r, \omega } \lambda _{r, \omega } + \sum _{r=1}^{R} \sum _{\rho \in \varGamma _{r}} a_{r, \rho } \lambda _{r, \rho } = b_{0},\end{aligned}$$
(96)
$$\begin{aligned}&x_{0}, \lambda \ge 0,\end{aligned}$$
(97)
$$\begin{aligned}&\sum _{\omega \in \varOmega _{r}} \lambda _{r, \omega } \omega + \sum _{\rho \in \varGamma _{r}} \lambda _{r, \rho } \rho \in \mathcal {X}_{r}, \ \ \ r=1, \ldots , R \end{aligned}$$
(98)

where \(c_{r, \omega } = c_{r}^{T} \omega \), \(c_{r, \rho } = c_{r}^{T} \rho \), and \(a_{r, \omega } = A_{r} \omega \), \(a_{r, \rho } = A_{r} \rho \). Constraints (95) and (96) are referred to as convexity and linking constraints, respectively.

The linear relaxation of (MP) is given by (94)–(97); its objective value is greater than or equal to that of the linear relaxation of (P) [15]. Note that if (P) is a linear program, i.e., all variables are continuous, then constraints (98) are redundant, and (94)–(97) is equivalent to (P). In the mixed-integer case, problem (94)–(97) is the root node of a branch-and-price tree. In this work, we focus on solving this linear relaxation. Thus, in what follows, we make a slight abuse of notation and use the term “Master Problem” to refer to (94)–(97) instead.

1.2 A.2 Column generation

The Master Problem has exponentially many variables. Therefore, it is typically solved by column generation, wherein only a small subset of the variables is considered. Additional variables are generated iteratively by solving an auxiliary sub-problem.

Let \(\bar{\varOmega }_{r}\) (resp. \(\bar{\varGamma }_{r}\)) be a small subset of \(\varOmega _{r}\) (resp. of \(\varGamma _{r}\)), and define the Restricted Master Problem (RMP)

$$\begin{aligned} (RMP) \ \ \ \min _{x, \lambda } \ \ \&c_{0}^{T}x_{0} + \sum _{r=1}^{R} \sum _{\omega \in \bar{\varOmega }_{r}} c_{r, \omega } \lambda _{r, \omega } + \sum _{r=1}^{R} \sum _{\rho \in \bar{\varGamma }_{r}} c_{r, \rho } \lambda _{r, \rho } \end{aligned}$$
(99)
$$\begin{aligned} s.t. \ \ \&\sum _{\omega \in \bar{\varOmega }_{r}} \lambda _{r, \omega } =1, \ \ r=1, \ldots , R\end{aligned}$$
(100)
$$\begin{aligned}&A_{0}x_{0} + \sum _{r=1}^{R} \sum _{\omega \in \bar{\varOmega }_{r}} a_{r, \omega } \lambda _{r, \omega } + \sum _{r=1}^{R} \sum _{\rho \in \bar{\varGamma }_{r}} a_{r, \rho } \lambda _{r, \rho } = b_{0},\end{aligned}$$
(101)
$$\begin{aligned}&x_{0}, \lambda \ge 0. \end{aligned}$$
(102)

In all that follows, we assume that (RMP) is feasible and bounded. Note that feasibility can be ensured by adding artificial slack and surplus variables with sufficiently large cost, effectively implementing an \(l_{1}\) penalty. If the RMP is unbounded, then so is the MP.
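
The feasibility safeguard described above can be sketched as follows: append \(+I\) and \(-I\) artificial columns with a large cost to the linking rows, so the restricted problem always admits a feasible point. This is a minimal Python illustration; the function name `add_artificials` and the big-M value are assumptions for the sketch, not part of Tulip's (Julia) code base:

```python
import numpy as np

def add_artificials(A, c, big_m=1e4):
    """Append +I and -I artificial columns to the equality rows A x = b so
    that the restricted problem is always feasible; the big-M cost acts as
    an l1 penalty that drives the artificials to zero whenever the original
    problem is feasible."""
    m, n = A.shape
    A_aug = np.hstack([A, np.eye(m), -np.eye(m)])
    c_aug = np.concatenate([c, big_m * np.ones(2 * m)])
    return A_aug, c_aug

# Tiny illustrative system with two linking rows and two columns.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
c = np.array([1.0, 1.0])
A_aug, c_aug = add_artificials(A, c)
```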

Let \(\sigma \in \mathbb {R}^{R}\) and \(\pi \in \mathbb {R}^{m_{0}}\) denote the vectors of dual variables associated with the convexity constraints (100) and the linking constraints (101), respectively. Here, we assume that \((\sigma , \pi )\) is dual-optimal for (RMP); the use of interior, sub-optimal dual solutions is explored in [31]. Then, for given r, \(\omega \in \varOmega _{r}\) and \(\rho \in \varGamma _{r}\), the reduced cost of variable \(\lambda _{r, \omega }\) is

$$\begin{aligned} \bar{c}_{r, \omega } = c_{r, \omega } - \pi ^{T} a_{r, \omega } - \sigma _{r} = (c_{r}^{T} - \pi ^{T} A_{r}) \omega - \sigma _{r}, \end{aligned}$$

while the reduced cost of variable \(\lambda _{r, \rho }\) is

$$\begin{aligned} \bar{c}_{r, \rho } = c_{r, \rho } - \pi ^{T} a_{r, \rho } = (c_{r}^{T} - \pi ^{T} A_{r}) \rho . \end{aligned}$$

If \(\bar{c}_{r, \omega } \ge 0\) for all r, \(\omega \in \varOmega _{r}\) and \(\bar{c}_{r, \rho } \ge 0\) for all r, \(\rho \in \varGamma _{r}\), then the current solution is optimal for the MP. Otherwise, a variable with negative reduced cost is added to the RMP. Finding such a variable, or proving that none exists, is called the pricing step.
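
To make the two formulas concrete, the reduced costs can be computed directly from \(\pi\), \(\sigma_{r}\) and the sub-problem data. The Python sketch below uses made-up illustrative values (all names and numbers are hypothetical, not taken from the paper):

```python
import numpy as np

# Hypothetical data for one sub-problem r: two linking rows, three variables.
c_r = np.array([1.0, 2.0, 3.0])
A_r = np.array([[1.0, 0.0, 2.0],
                [0.0, 1.0, 1.0]])
pi = np.array([0.5, 1.0])        # duals of the linking constraints
sigma_r = 0.25                   # dual of the r-th convexity constraint

omega = np.array([1.0, 1.0, 0.0])  # an extreme point of X_r
rho = np.array([0.0, 0.0, 1.0])    # an extreme ray of X_r

# Reduced costs, following the formulas above:
rc_point = (c_r - A_r.T @ pi) @ omega - sigma_r   # \bar{c}_{r,omega}
rc_ray = (c_r - A_r.T @ pi) @ rho                 # \bar{c}_{r,rho}
```

Both quantities are non-negative here, so neither column would be added to the RMP.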

Explicitly iterating through the exponentially large sets \(\varOmega _{r}\) and \(\varGamma _{r}\) is prohibitively expensive. Nevertheless, the pricing step can be written as the following MILP:

$$\begin{aligned} (SP_{r}) \ \ \ \min _{x_{r}} \ \ \&(c_{r}^{T} - \pi ^{T}A_{r})x_{r} - \sigma _{r}\end{aligned}$$
(103)
$$\begin{aligned} s.t. \ \ \&x_{r} \in \mathcal {X}_{r}, \end{aligned}$$
(104)

which we refer to as the \(r^{th}\) sub-problem. If \(SP_{r}\) is infeasible, then \(\mathcal {X}_{r}\) is empty and the original problem (P) is infeasible; this case is ruled out in all that follows. Since the objective of \(SP_{r}\) is linear, any optimal solution is either an extreme point \(\omega \in \varOmega _{r}\) (bounded case) or an extreme ray \(\rho \in \varGamma _{r}\) (unbounded case). The corresponding variable \(\lambda _{r, \omega }\) or \(\lambda _{r, \rho }\) is identified by retrieving an optimal point or unbounded ray. Finally, note that the R sub-problems \(SP_{1}, \ldots , SP_{R}\) can be solved independently of one another. Optimality in the Master Problem is attained when no variable with negative reduced cost can be identified from any of the R sub-problems.

We now describe a basic column-generation procedure, which is formally stated in Algorithm 2. The algorithm starts with an initial RMP that contains a small subset of columns, some of which may be artificial to ensure feasibility. At the beginning of each iteration, the RMP is solved to optimality, and a dual solution \((\pi , \sigma )\) is obtained, which is then used to perform the pricing step. Each sub-problem is solved to identify a variable with most negative reduced cost. If a variable with negative reduced cost is found, it is added to the RMP; otherwise, the column-generation procedure stops.

Algorithm 2 Column generation
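
As an illustration, the column-generation loop can be sketched on a classical cutting-stock problem, where each pricing step is an integer knapsack. This is a minimal Python sketch using SciPy's HiGHS interface, not Tulip's Julia implementation; the instance data, iteration limit, and tolerance are made up for the example:

```python
import numpy as np
from scipy.optimize import linprog

# Cutting-stock instance (illustrative): rolls of width 100 are cut to
# meet demand for four piece widths.
W = 100.0
w = np.array([45.0, 36.0, 31.0, 14.0])     # piece widths
d = np.array([97.0, 610.0, 395.0, 211.0])  # demands

# Initial columns: one pattern per width, repeated as often as it fits.
patterns = [np.floor(W / w[i]) * np.eye(len(w))[i] for i in range(len(w))]

for _ in range(100):
    A = np.column_stack(patterns)
    # RMP: min 1'x  s.t.  A x >= d, x >= 0  (written as -A x <= -d).
    rmp = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-d, method="highs")
    pi = -rmp.ineqlin.marginals            # duals of the covering rows
    # Pricing: integer knapsack  max pi'a  s.t.  w'a <= W, a >= 0 integer.
    sp = linprog(-pi, A_ub=w.reshape(1, -1), b_ub=[W],
                 integrality=np.ones_like(w), method="highs")
    reduced_cost = 1.0 + sp.fun            # 1 - pi'a*
    if reduced_cost > -1e-6:
        break                              # no improving column: LP optimal
    patterns.append(np.round(sp.x))        # add the new cutting pattern

rolls = rmp.fun                            # optimal LP value (rolls used)
```

Note the exact correspondence with the text: the RMP duals feed the pricing problem, and the loop stops when the best reduced cost is non-negative.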

For large instances with numerous sub-problems, full pricing, wherein all sub-problems are solved at each iteration, is often not the most efficient approach. Therefore, we implemented a partial pricing strategy, in which sub-problems are solved in a random order until either all sub-problems have been solved, or a user-specified number of columns with negative reduced cost has been generated.
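
A minimal Python sketch of this partial pricing strategy follows; the function name, the pricing stub, and the toy data are illustrative assumptions, not Tulip's actual (Julia) implementation:

```python
import random

def partial_pricing(subproblems, solve_pricing, max_columns, seed=0):
    """Solve pricing sub-problems in random order, stopping as soon as
    max_columns improving columns (negative reduced cost) are found, or
    all sub-problems have been examined."""
    order = list(range(len(subproblems)))
    random.Random(seed).shuffle(order)
    columns = []
    for r in order:
        column, rc = solve_pricing(subproblems[r])
        if rc < -1e-8:
            columns.append((r, column))
            if len(columns) >= max_columns:
                break
    return columns

# Toy stand-in: each "sub-problem" directly reports a column and its best
# reduced cost (a real implementation would solve the MILP (103)-(104)).
subs = [("col%d" % r, rc) for r, rc in enumerate([0.3, -1.0, -0.5, 0.1, -2.0])]
found = partial_pricing(subs, lambda s: s, max_columns=2)
```

With three improving columns available and `max_columns=2`, the loop stops after finding two, regardless of the random order.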

B Detailed results on structured LP instances

See Table 6.

Table 6 Structured instances: performance comparison of IPM solvers

About this article

Cite this article

Tanneau, M., Anjos, M.F. & Lodi, A. Design and implementation of a modular interior-point solver for linear optimization. Math. Prog. Comp. 13, 509–551 (2021). https://doi.org/10.1007/s12532-020-00200-8
