Application of Methods for Optimizing Parallel Algorithms for Solving Problems of Distributed Computing Systems

  • Yulia Shichkina
  • Mikhail Kupriyanov
  • Al-Mardi Mohammed Haidar Awadh
Conference paper
Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 95)

Abstract

Researchers have developed a number of methods for optimizing parallel algorithms for systems with distributed memory. These methods optimize for different parameters and take different properties of the algorithm into account. A distributed computing system has its own characteristics, such as heterogeneity of computing nodes and limited network bandwidth. The studies conducted by the authors of this article show that these characteristics do not prevent the application of such methods to problems in a distributed computing environment. The article shows that there is no need to modify or adapt the optimization methods for use in distributed computing systems. However, the properties of such systems must be taken into account: they lead to iteration in the application of the optimization methods and increase the complexity of analyzing and optimizing the initial parallel algorithm. The article also describes ways to reduce the time complexity of this iterative application of optimization methods to the initial parallel algorithm. The result of the authors’ research is a method for constructing a special type of graph for a parallel algorithm that takes into account the properties of a given computing system, together with approaches to constructing the schedule of the algorithm.
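To make the scheduling setting concrete, the sketch below shows one common way such a problem is formulated: a parallel algorithm as a weighted task DAG (the information graph), mapped onto heterogeneous nodes by greedy list scheduling on earliest finish time. This is a generic illustration under assumed inputs (`tasks`, `deps`, `speeds` are hypothetical names), not the specific graph construction or schedule method proposed in the paper.

```python
def schedule(tasks, deps, speeds):
    """Greedy list scheduling of a task DAG onto heterogeneous nodes.

    tasks  : {task: cost in abstract work units}
    deps   : {task: set of predecessor tasks}
    speeds : {node: relative speed factor (higher = faster)}
    Returns {task: (node, start, finish)}.
    """
    done = {}                                # task -> (node, start, finish)
    node_free = {n: 0.0 for n in speeds}     # earliest idle time per node
    remaining = set(tasks)
    while remaining:
        # tasks whose predecessors have all finished
        ready = [t for t in remaining if deps.get(t, set()).issubset(done)]
        for t in sorted(ready):
            # earliest moment the task's input data is available
            data_ready = max((done[p][2] for p in deps.get(t, set())),
                             default=0.0)
            # place the task on the node giving the earliest finish time
            best = min(
                ((n,
                  max(node_free[n], data_ready),
                  max(node_free[n], data_ready) + tasks[t] / speeds[n])
                 for n in speeds),
                key=lambda x: x[2])
            node, start, finish = best
            done[t] = (node, start, finish)
            node_free[node] = finish
            remaining.discard(t)
    return done
```

A refinement along the lines discussed in the article would additionally weight inter-node edges by network bandwidth, so that `data_ready` grows when a predecessor ran on a different node.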

Keywords

Schedule · Optimization · Algorithm · Information graph · Network bandwidth · Execution time · Operation · Process · Node interconnection graph

Notes

Acknowledgments

The paper was prepared within the scope of the state project “Initiative scientific project” of the main part of the state plan of the Ministry of Education and Science of the Russian Federation (task № 2.6553.2017/8.9 BCH Basic Part) and was funded by RFBR according to the research project № 19-07-00784.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Yulia Shichkina (1)
  • Mikhail Kupriyanov (1)
  • Al-Mardi Mohammed Haidar Awadh (1)

  1. St. Petersburg Electrotechnical University “LETI”, St. Petersburg, Russia