Abstract
Various researchers have developed methods for optimizing parallel algorithms for systems with distributed memory. These methods optimize for different parameters and take different properties of the algorithm into account. A distributed computing system has its own characteristics, such as heterogeneity of computing nodes and network bandwidth. The studies conducted by the authors of this article show that these characteristics do not prevent the application of such methods to problems in a distributed computing environment: there is no need to modify or adapt the optimization methods for use in distributed computing systems. However, the properties of such systems make the application of the optimization methods iterative and increase the complexity of analyzing and optimizing the initial parallel algorithm. The article also describes ways to reduce the time complexity of this iterative application of optimization methods to the initial parallel algorithm. The results of the authors’ research are a method for constructing a special type of graph for a parallel algorithm that takes into account the properties of a given computing system, and approaches to constructing a schedule for the algorithm.
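The schedule-construction step mentioned in the abstract can be illustrated with a greedy earliest-finish-time list scheduler over a task graph, where node heterogeneity is modeled as per-node speed and distributed-system communication as a bandwidth cost between nodes. This is only a minimal sketch of the general technique, not the authors' actual method; the function name, the speed/bandwidth model, and all task names are illustrative assumptions.

```python
from collections import defaultdict

def list_schedule(work, deps, data, speed, bandwidth):
    """Greedy earliest-finish-time scheduling of a task DAG on
    heterogeneous nodes (an illustrative sketch, not the paper's method).

    work:      {task: work units}
    deps:      {task: [predecessor tasks]}
    data:      {(pred, task): data volume transferred}
    speed:     {node: work units processed per time unit}
    bandwidth: data volume transferred per time unit between distinct nodes
    """
    # Topological order of the task graph (Kahn's algorithm).
    indeg = {t: 0 for t in work}
    succ = defaultdict(list)
    for t, preds in deps.items():
        for p in preds:
            indeg[t] += 1
            succ[p].append(t)
    ready = [t for t in work if indeg[t] == 0]
    order = []
    while ready:
        t = ready.pop()
        order.append(t)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)

    finish = {}                  # task -> finish time
    placed = {}                  # task -> assigned node
    avail = {n: 0.0 for n in speed}  # node -> time it becomes free
    for t in order:
        best = None
        for n in speed:
            # Earliest start: node free, and all predecessor data arrived
            # (communication is free when predecessor ran on the same node).
            start = avail[n]
            for p in deps.get(t, []):
                comm = 0.0 if placed[p] == n else data.get((p, t), 0) / bandwidth
                start = max(start, finish[p] + comm)
            end = start + work[t] / speed[n]
            if best is None or end < best[0]:
                best = (end, n)
        finish[t], placed[t] = best
        avail[placed[t]] = finish[t]
    return placed, finish

# Example: four tasks, two nodes of different speed.
work = {"a": 4, "b": 2, "c": 2, "d": 3}
deps = {"c": ["a"], "d": ["a", "b"]}
data = {("a", "c"): 8, ("a", "d"): 8, ("b", "d"): 4}
speed = {"fast": 2.0, "slow": 1.0}
placed, finish = list_schedule(work, deps, data, speed, bandwidth=4.0)
```

In this example the communication cost of moving intermediate data off the fast node outweighs the idle time on the slow node, so the heuristic keeps the whole chain on the fast node; with higher bandwidth or heavier tasks the placement spreads out.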
Acknowledgments
The paper was prepared within the scope of the state project “Initiative scientific project” of the main part of the state plan of the Ministry of Education and Science of the Russian Federation (task № 2.6553.2017/8.9 BCH Basic Part) and was funded by RFBR according to the research project № 19-07-00784.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Shichkina, Y., Kupriyanov, M., Awadh, A.M.M.H. (2020). Application of Methods for Optimizing Parallel Algorithms for Solving Problems of Distributed Computing Systems. In: Arseniev, D., Overmeyer, L., Kälviäinen, H., Katalinić, B. (eds) Cyber-Physical Systems and Control. CPS&C 2019. Lecture Notes in Networks and Systems, vol 95. Springer, Cham. https://doi.org/10.1007/978-3-030-34983-7_21
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-34982-0
Online ISBN: 978-3-030-34983-7