Scalability Evaluation of NSLP Algorithm for Solving Non-Stationary Linear Programming Problems on Cluster Computing Systems

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 793)

Abstract

The paper is devoted to a scalability study of the NSLP algorithm for solving non-stationary, high-dimensional linear programming problems on cluster computing systems. The analysis is based on the BSF model of parallel computation, a new model built on top of the BSP and SPMD models. Brief descriptions of the NSLP algorithm and the BSF model are given, and the implementation of the NSLP algorithm as a BSF program is considered. On the basis of the BSF cost metric, an upper bound on the scalability of the NSLP algorithm is derived and its parallel efficiency is estimated. An implementation of the NSLP algorithm using the BSF skeleton is described, and the scalability estimates obtained analytically are compared with those obtained experimentally.
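For orientation, the following is a minimal sketch of how a scalability bound of this kind is typically derived from a master-worker cost model; it does not reproduce the concrete cost terms of the BSF metric used in the paper, and the symbols (t_s for the per-iteration sequential work on the master, t_w for the work distributed over K workers, and t_c(K) for communication overhead that grows with the number of workers) are illustrative assumptions rather than the paper's notation.

    % Illustrative speedup and parallel efficiency under an assumed master-worker cost model
    a(K) = \frac{T(1)}{T(K)} = \frac{t_s + t_w + t_c(1)}{t_s + \dfrac{t_w}{K} + t_c(K)}, \qquad
    e(K) = \frac{a(K)}{K}.

Under such a model the speedup a(K) increases with K only until the growing communication term t_c(K) outweighs the shrinking compute term t_w/K; the value of K at which a(K) peaks plays the role of the scalability bound that the paper derives analytically from the BSF cost metric and then compares against experimental measurements.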

Keywords

Non-stationary linear programming problem · Large-scale linear programming · NSLP algorithm · BSF parallel computation model · Cost metric · Scalability bound · Parallel efficiency estimation


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. South Ural State University, Chelyabinsk, Russia