mplrs: A scalable parallel vertex/facet enumeration code

Abstract

We describe a new parallel implementation, mplrs, of the vertex enumeration code lrs that uses the MPI parallel environment and can be run on a network of computers. The implementation makes use of a C wrapper that essentially uses the existing lrs code with only minor modifications. mplrs was derived from the earlier parallel implementation plrs, written by G. Roumanis in C++, which runs on a shared memory machine. By improving load balancing we are able to greatly improve performance for medium- to large-scale parallelization of lrs. We report computational results comparing parallel and sequential codes for vertex/facet enumeration problems for convex polyhedra. The problems chosen span the range from simple to highly degenerate polytopes. For most problems tested, the results clearly show the advantage of using the parallel implementation mplrs of the reverse search based code lrs, even when as few as 8 cores are available. For some problems almost linear speedup was observed up to 1200 cores, the largest number of cores tested. The software that was reviewed as part of this submission is included in lrslib-062.tar.gz, which has MD5 hash be5da7b3b90cc2be628dcade90c5d1b9.

Notes

  1. See posts for December 9, 2014 (Gurobi) and February 25, 2015 (CPLEX).

  2. John White prepared a long list of applications which is available at [6].

  3. http://cgm.cs.mcgill.ca/~avis/C/lrslib/lrslib.html.

References

  1. Anderson, D.P., Cobb, J., Korpela, E., Lebofsky, M., Werthimer, D.: SETI@home: an experiment in public-resource computing. Commun. ACM 45(11), 56–61 (2002)

  2. Anstreicher, K., Brixius, N., Goux, J.P., Linderoth, J.: Solving large quadratic assignment problems on computational grids. Math. Program. 91(3), 563–588 (2002)

  3. Applegate, D.L., Bixby, R.E., Chvatal, V., Cook, W.J.: http://www.math.uwaterloo.ca/tsp/concorde.html. Accessed 6 Nov 2017

  4. Applegate, D.L., Bixby, R.E., Chvatal, V., Cook, W.J.: The Traveling Salesman Problem: A Computational Study (Princeton Series in Applied Mathematics). Princeton University Press, Princeton (2007)

  5. Assarf, B., Gawrilow, E., Herr, K., Joswig, M., Lorenz, B., Paffenholz, A., Rehn, T.: Computing convex hulls and counting integer points with polymake. Math. Program. Comput. 9, 1–38 (2017)

  6. Avis, D.: http://cgm.cs.mcgill.ca/~avis/C/lrs.html. Accessed 6 Nov 2017

  7. Avis, D.: A revised implementation of the reverse search vertex enumeration algorithm. In: Kalai, G., Ziegler, G.M. (eds.) Polytopes—Combinatorics and Computation, vol. 29, pp. 177–198. DMV Seminar, Birkhäuser, Basel (2000)

  8. Avis, D., Devroye, L.: Estimating the number of vertices of a polyhedron. Inf. Process. Lett. 73(3–4), 137–143 (2000)

  9. Avis, D., Devroye, L.: An analysis of budgeted parallel search on conditional Galton–Watson trees. arXiv:1703.10731 (2017)

  10. Avis, D., Fukuda, K.: A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra. Discrete Comput. Geom. 8, 295–313 (1992)

  11. Avis, D., Fukuda, K.: Reverse search for enumeration. Discrete Appl. Math. 65, 21–46 (1996)

  12. Avis, D., Jordan, C.: A parallel framework for reverse search using mts. arXiv:1610.07735 (2016)

  13. Avis, D., Roumanis, G.: A portable parallel implementation of the lrs vertex enumeration code. In: Combinatorial Optimization and Applications—7th International Conference, COCOA 2013, Lecture Notes in Computer Science, vol. 8287, pp. 414–429. Springer, New York (2013)

  14. Bagnara, R., Hill, P.M., Zaffanella, E.: The Parma Polyhedra Library: toward a complete set of numerical abstractions for the analysis and verification of hardware and software systems. Sci. Comput. Program. 72(1–2), 3–21 (2008)

  15. Balyo, T., Sanders, P., Sinz, C.: HordeSat: a massively parallel portfolio SAT solver. In: Proceedings of the 18th International Conference on Theory and Applications of Satisfiability Testing (SAT 2015), Lecture Notes in Computer Science, vol. 9340, pp. 156–172 (2015)

  16. Blumofe, R.D., Leiserson, C.E.: Scheduling multithreaded computations by work stealing. J. ACM 46(5), 720–748 (1999)

  17. Brüngger, A., Marzetta, A., Fukuda, K., Nievergelt, J.: The parallel search bench ZRAM and its applications. Ann. Oper. Res. 90, 45–63 (1999)

  18. Bruns, W., Ichim, B., Söger, C.: The power of pyramid decomposition in Normaliz. J. Symb. Comput. 74, 513–536 (2016)

  19. Carle, M.A.: The quest for optimality. http://www.thequestforoptimality.com. Accessed 6 Nov 2017

  20. Casado, L.G., Martínez, J.A., García, I., Hendrix, E.M.T.: Branch-and-bound interval global optimization on shared memory multiprocessors. Optim. Methods Softw. 23(5), 689–701 (2008)

  21. Ceder, G., Garbulsky, G., Avis, D., Fukuda, K.: Ground states of a ternary fcc lattice model with nearest- and next-nearest-neighbor interactions. Phys. Rev. B: Condens. Matter 49(1), 1–7 (1994)

  22. Christof, T., Loebel, A.: http://porta.zib.de. Accessed 6 Nov 2017

  23. Chvátal, V.: Linear Programming. W.H. Freeman, San Francisco (1983)

  24. Cornuéjols, G., Karamanov, M., Li, Y.: Early estimates of the size of branch-and-bound trees. INFORMS J. Comput. 18(1), 86–96 (2006)

  25. Crainic, T.G., Le Cun, B., Roucairol, C.: Parallel Branch-and-Bound Algorithms, pp. 1–28. Wiley, New York (2006)

  26. Deza, M.M., Laurent, M.: Geometry of Cuts and Metrics. Springer, New York (1997)

  27. Djerrah, A., Le Cun, B., Cung, V.D., Roucairol, C.: Bob++: framework for solving optimization problems with branch-and-bound methods. In: 2006 15th IEEE International Conference on High Performance Distributed Computing, pp. 369–370 (2006)

  28. Ferrez, J., Fukuda, K., Liebling, T.: Solving the fixed rank convex quadratic maximization in binary variables by a parallel zonotope construction algorithm. Eur. J. Oper. Res. 166, 35–50 (2005)

  29. Fischetti, M., Monaci, M., Salvagnin, D.: Self-splitting of workload in parallel computation. In: Integration of AI and OR Techniques in Constraint Programming, CPAIOR 2014, Lecture Notes in Computer Science, vol. 8451, pp. 394–404 (2014)

  30. Fisikopoulos, V., Peñaranda, L.M.: Faster geometric algorithms via dynamic determinant computation. Comput. Geom. 54, 1–16 (2016)

  31. Fukuda, K.: http://www.inf.ethz.ch/personal/fukudak/cdd_home. Accessed 6 Nov 2017

  32. Goux, J.P., Kulkarni, S., Yoder, M., Linderoth, J.: Master-worker: an enabling framework for applications on the computational grid. Clust. Comput. 4(1), 63–70 (2001)

  33. Graham, R.L.: Bounds on multiprocessing timing anomalies. SIAM J. Appl. Math. 17(2), 416–429 (1969)

  34. Grama, A., Kumar, V.: State of the art in parallel search techniques for discrete optimization problems. IEEE Trans. Knowl. Data Eng. 11(1), 28–35 (1999)

  35. Gurobi, I.: Gurobi optimizer. http://www.gurobi.com/. Accessed 6 Nov 2017

  36. Hall Jr., M., Knuth, D.E.: Combinatorial analysis and computers. Am. Math. Mon. 10, 21–28 (1965)

  37. Hamadi, Y., Wintersteiger, C.M.: Seven challenges in parallel SAT solving. In: Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI’12), pp. 2120–2125 (2012)

  38. Herrera, J.F.R., Salmerón, J.M.G., Hendrix, E.M.T., Asenjo, R., Casado, L.G.: On parallel branch and bound frameworks for global optimization. J. Glob. Optim. 69(3), 547–560 (2017). https://doi.org/10.1007/s10898-017-0508-y

  39. Heule, M.J., Kullmann, O., Wieringa, S., Biere, A.: Cube and conquer: guiding CDCL SAT solvers by lookaheads. In: Hardware and Software: Verification and Testing (HVC’11), Lecture Notes in Computer Science, vol. 7261, pp. 50–65 (2011)

  40. Horst, R., Pardalos, P.M., Thoai, N.V.: Introduction to Global Optimization (Nonconvex Optimization and Its Applications). Springer, New York (2000)

  41. Hyatt, R.M., Suter, B.W., Nelson, H.L.: A parallel alpha/beta tree searching algorithm. Parallel Comput. 10(3), 299–308 (1989)

  42. ILOG, I.: ILOG CPLEX. http://www-01.ibm.com/software/info/ilog/. Accessed 6 Nov 2017

  43. Kilby, P., Slaney, J., Thiébaux, S., Walsh, T.: Estimating search tree size. In: Proceedings of the 21st National Conference on Artificial Intelligence (AAAI’06), pp. 1014–1019 (2006)

  44. Koch, T., Ralphs, T., Shinano, Y.: Could we use a million cores to solve an integer program? Math. Methods Oper. Res. 76(1), 67–93 (2012)

  45. Kumar, V., Grama, A.Y., Vempaty, N.R.: Scalable load balancing techniques for parallel computers. J. Parallel Distrib. Comput. 22, 60–79 (1994)

  46. Kumar, V., Rao, V.N.: Parallel depth first search. Part II. Analysis. Int. J. Parallel Prog. 16(6), 501–519 (1987)

  47. Malapert, A., Régin, J.C., Rezgui, M.: Embarrassingly parallel search in constraint programming. J. Artif. Intell. Res. 57, 421–464 (2016)

  48. Marzetta, A.: ZRAM: A library of parallel search algorithms and its use in enumeration and combinatorial optimization. Ph.D. thesis, Swiss Federal Institute of Technology Zurich (1998)

  49. Mattson, T., Sanders, B., Massingill, B.: Patterns for Parallel Programming. Addison-Wesley Professional, Boston (2004)

  50. McCreesh, C., Prosser, P.: The shape of the search tree for the maximum clique problem and the implications for parallel branch and bound. ACM Trans. Parallel Comput. 2(1), 8:1–8:27 (2015)

  51. Moran, B., Cohen, F., Wang, Z., Suvorova, S., Cochran, D., Taylor, T., Farrell, P., Howard, S.: Bounds on multiple sensor fusion. ACM Trans. Sensor Netw. 16(1), 16:1–16:26 (2016)

  52. Normaliz: (2015). https://www.normaliz.uni-osnabrueck.de/. Accessed 6 Nov 2017

  53. Otten, L., Dechter, R.: AND/OR branch-and-bound on a computational grid. J. Artif. Intell. Res. 59, 351–435 (2017)

  54. Reinders, J.: Intel Threading Building Blocks. O’Reilly & Associates, Inc., Sebastopol (2007)

  55. Reinelt, G., Wenger, K.M.: Small instance relaxations for the traveling salesman problem. In: Operations Research Proceedings 2003, Operations Research Proceedings, vol. 2003, pp. 371–378. Springer, Berlin (2004)

  56. Shinano, Y., Achterberg, T., Berthold, T., Heinz, S., Koch, T.: ParaSCIP: a parallel extension of SCIP. Competence in High Performance Computing 2010, pp. 135–148. Springer, Berlin (2012)

  57. Shirazi, B.A., Kavi, K.M., Hurson, A.R. (eds.): Scheduling and Load Balancing in Parallel and Distributed Systems. IEEE Computer Society Press, Los Alamitos (1995)

  58. Weibel, C.: Implementation and parallelization of a reverse-search algorithm for Minkowski sums. In: 2010 Proceedings of the Twelfth Workshop on Algorithm Engineering and Experiments (ALENEX), pp. 34–42 (2010)

  59. Wilkinson, B., Allen, M.: Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers. Prentice Hall, Upper Saddle River (2005)

  60. Xu, Y.: Scalable algorithms for parallel tree search. Ph.D. thesis, Lehigh University (2007)

  61. Ziegler, G.M.: Lectures on Polytopes. Springer, New York (1995)


Acknowledgements

We thank Kazuki Yoshizoe for helpful discussions concerning the MPI library, which improved the performance of mplrs.

Author information

Correspondence to Charles Jordan.

Additional information

This work was partially supported by JSPS Kakenhi Grants 16H02785, 23700019 and 15H00847, Grant-in-Aid for Scientific Research on Innovative Areas, ‘Exploring the Limits of Computation (ELC)’.

Appendix A: Code organization for mplrs

We give a brief outline of the code organization for mplrs v. 6.2. See the lrslib programmer's guide (footnote 3) for additional information on the legacy lrslib code.

To begin, mplrs is built using the following files. Each file has a corresponding header file which we omit in this description.

  • lrslib.c legacy library code implementing reverse search for vertex/facet enumeration

  • Choice of arithmetic package: lrsgmp.c (the default), which uses the extended precision GNU MP library (libgmp); lrsmp.c, the lrslib extended precision arithmetic package; or lrslong.c, which uses fixed precision arithmetic (used in mplrs1). A hypothetical compile-time selection is sketched below.

  • mplrs.c MPI wrapper containing all parallelization code for mplrs

There are a number of other files used to build other lrslib components that are not used in mplrs; the most relevant of these is the C++ parallel wrapper (plrs.cpp) used in plrs. Sample input files for mplrs can be found in the ine/ directory. See the programmer's guide for more information on other files and details on the legacy lrslib code.
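To illustrate how the choice of arithmetic package might look at the source level, the following header sketch selects one package at compile time. This is a minimal sketch under stated assumptions: the macro names (MPLRS_GMP, MPLRS_LONG) are illustrative only and are not the actual flags used by the lrslib build system.

    /* Hypothetical compile-time selection of an arithmetic package.
     * Macro names below are illustrative assumptions; consult the lrslib
     * makefiles and programmer's guide for the real build flags. */
    #if defined(MPLRS_GMP)
    #include "lrsgmp.h"    /* extended precision via the GNU MP library (libgmp) */
    #elif defined(MPLRS_LONG)
    #include "lrslong.h"   /* fixed precision arithmetic, as used in mplrs1 */
    #else
    #include "lrsmp.h"     /* lrslib's own extended precision package */
    #endif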

The mplrs code is split into three main functions: mplrs_master, mplrs_worker, and mplrs_consumer, which correspond to Algorithms 3, 4 and 5 respectively. The master sends work in the send_work function, which calls the setparams function to set budgeting parameters as in Algorithm 3. The hook into the lrslib code (the BRS call in Algorithm 4) occurs in the do_work function; this is where the actual work is performed.
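The following is a minimal MPI sketch of how processes can be routed to these three roles by rank. Only the function names and their roles come from the description above; the signatures, stub bodies, and rank assignment are illustrative assumptions and do not reproduce the actual mplrs.c.

    /* Minimal rank-based dispatch sketch (illustrative only, not mplrs.c).
     * Here rank 0 plays the master and the highest rank the consumer;
     * the rank assignment used by mplrs itself may differ. */
    #include <mpi.h>

    static void mplrs_master(int nprocs) { (void)nprocs; /* placeholder: the real master maintains L and calls send_work()/setparams() */ }
    static void mplrs_worker(void)       { /* placeholder: the real worker receives a cobasis and budget, then calls do_work() into lrslib */ }
    static void mplrs_consumer(void)     { /* placeholder: the real consumer merges worker output and writes the result */ }

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        if (rank == 0)
            mplrs_master(nprocs);
        else if (rank == nprocs - 1)
            mplrs_consumer();
        else
            mplrs_worker();

        MPI_Finalize();
        return 0;
    }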

The data structures particular to mplrs are defined in mplrs.h. Variables used only by the master are collected in the masterv data structure, which contains cobasis_list (L in Algorithm 3). Likewise, variables used only by the consumer are collected in the consumerv structure. Each process has an mplrsv structure. The mplrs.h file also contains definitions for the default values of all options.
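As a rough picture of this layout, the sketch below groups per-role variables into separate structures. Apart from cobasis_list (the list L of Algorithm 3) and the structure names taken from the text, all fields and types here are hypothetical; the real definitions are in mplrs.h.

    /* Hypothetical per-role data structures (field names and types are
     * illustrative assumptions; see mplrs.h for the actual definitions). */
    #include <stdio.h>

    typedef struct {              /* one per process */
        int  rank;                /* MPI rank of this process */
        long budget;              /* current reverse-search budget */
    } mplrsv;

    typedef struct {              /* used only by the master */
        char **cobasis_list;      /* L in Algorithm 3: pending subtree roots */
        int    num_pending;       /* cobases not yet assigned to a worker */
    } masterv;

    typedef struct {              /* used only by the consumer */
        FILE *out;                /* destination for merged output */
        long  output_count;       /* lines of output written so far */
    } consumerv;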

Cite this article

Avis, D., Jordan, C. mplrs: A scalable parallel vertex/facet enumeration code. Math. Prog. Comp. 10, 267–302 (2018). https://doi.org/10.1007/s12532-017-0129-y
