A Parallel Implementation of the Revised Simplex Algorithm Using OpenMP: Some Preliminary Results

  • Nikolaos Ploskas
  • Nikolaos Samaras (corresponding author)
  • Konstantinos Margaritis
Conference paper
Part of the Springer Proceedings in Mathematics & Statistics book series (PROMS, volume 31)


Linear Programming (LP) is a significant research area in the field of operations research. The simplex algorithm is the most widely used method for solving Linear Programming problems (LPs). The aim of this paper is to present a parallel implementation of the revised simplex algorithm. Our parallel implementation focuses on reducing the time taken to compute the basis inverse, because this computation dominates the total computational effort of each iteration of simplex-type algorithms. The inverse does not have to be computed from scratch at each iteration. In this paper, we compute the basis inverse with two well-known updating schemes, (1) the Product Form of the Inverse (PFI) and (2) a Modification of the Product Form of the Inverse (MPFI), and incorporate them into the revised simplex algorithm. Apart from the parallel implementation, this paper presents a computational study that shows the speedup of the parallel implementation over the serial one on large-scale LPs. Computational results on a set of benchmark problems from Netlib, including some infeasible ones, are also presented. The parallelism is achieved using OpenMP on a shared-memory multiprocessor architecture.

Key words

Linear programming · Revised simplex method · Basis inverse · Parallel computing · OpenMP
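The PFI scheme updates the basis inverse as B̄⁻¹ = E B⁻¹, where the eta matrix E differs from the identity only in the column of the leaving variable and is built from the pivot column h = B⁻¹a_q. Because each row of the product can be formed independently, the row loop is a natural target for OpenMP worksharing. The sketch below is a minimal dense illustration of this idea, not the authors' implementation; the function name `pfi_update` and the row-major layout are assumptions.

```c
#include <stdlib.h>
#ifdef _OPENMP
#include <omp.h>
#endif

/* PFI update (minimal dense sketch): replace Binv (m x m, row-major)
 * in place by E * Binv, where the eta column is derived from the
 * pivot column h (already in current-basis coordinates) and the
 * leaving row r. The row loop is parallelized with OpenMP. */
void pfi_update(double *Binv, const double *h, int r, int m)
{
    double *eta   = malloc(m * sizeof *eta);
    double *row_r = malloc(m * sizeof *row_r); /* snapshot of pivot row */
    for (int j = 0; j < m; j++)
        row_r[j] = Binv[r * m + j];
    /* Eta column: eta[r] = 1/h[r], eta[i] = -h[i]/h[r] for i != r. */
    for (int i = 0; i < m; i++)
        eta[i] = (i == r) ? 1.0 / h[r] : -h[i] / h[r];

    /* Each row of E * Binv depends only on eta[i] and the saved
     * pivot row, so the iterations are independent. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < m; i++) {
        if (i == r)
            for (int j = 0; j < m; j++)
                Binv[i * m + j] = eta[r] * row_r[j];
        else
            for (int j = 0; j < m; j++)
                Binv[i * m + j] += eta[i] * row_r[j];
    }
    free(eta);
    free(row_r);
}
```

The pivot row is copied before the parallel loop so that threads updating other rows never race with the thread rescaling row r. MPFI differs in how the update is formed, but the same row-parallel structure applies.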



Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Nikolaos Ploskas (1)
  • Nikolaos Samaras (1) (corresponding author)
  • Konstantinos Margaritis (1)
  1. Department of Applied Informatics, University of Macedonia, Thessaloniki, Greece