
The Scalability Analysis of a Parallel Tree Search Algorithm

  • Roman Kolpakov
  • Mikhail Posypkin
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 974)

Abstract

Increasing the number of computational cores is a primary way of achieving high performance on contemporary supercomputers. However, developing parallel applications capable of harnessing such an enormous number of cores is a challenging task. It is therefore important to study the scalability of parallel algorithms, i.e., the growth order of the number of processors required to accommodate a growing amount of work (a precise definition of the scalability investigated in this paper is given below). In this paper we propose a parallel tree search algorithm aimed at distributed parallel computers. For this parallel algorithm, we perform a theoretical analysis of its scalability and show that the achieved scalability is close to the theoretical maximum.
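
To make this notion concrete before the formal treatment in the paper body, the following is a minimal, illustrative sketch of one standard isoefficiency-style way to formalize such scalability. The symbols W, p, T_p, E, and the threshold epsilon are assumptions introduced here for illustration only and need not coincide with the definitions used by the authors.

    % Hedged, illustrative formalization (not the paper's own definition):
    %   W      -- total amount of work (sequential complexity of the search),
    %   p      -- number of processors,
    %   T_p(W) -- parallel running time on p processors,
    %   E(p,W) -- parallel efficiency.
    \[
      E(p, W) \;=\; \frac{T_1(W)}{\,p \, T_p(W)\,},
      \qquad
      p_{\max}(W) \;=\; \max\bigl\{\, p : E(p, W) \ge \varepsilon \,\bigr\},
      \quad 0 < \varepsilon < 1 .
    \]
    % Scalability is then read off from the growth order of p_max(W) as W grows:
    % the faster p_max(W) may grow while efficiency stays above epsilon, the more
    % processors the algorithm can usefully employ on larger search trees.

Under this reading, an algorithm whose p_max(W) grows nearly linearly in W is close to the theoretical maximum, which is the sense in which the abstract's claim can be interpreted.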

Keywords

Parallel scalability · Parallel efficiency · Complexity analysis · Parallel tree search · Global optimization

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Lomonosov Moscow State University, Moscow, Russia
  2. Dorodnicyn Computing Centre, FRC CSC RAS, Moscow, Russia
  3. Moscow Institute of Physics and Technology, Dolgoprudny, Russia
  4. Institute for Information Transmission Problems RAS, Moscow, Russia
