Abstract
The loop scheduling scheme plays a critical role in the efficient execution of programs, especially loop-dominated applications. This paper presents KASS, a knowledge-based adaptive loop scheduling scheme. KASS consists of two phases: static partitioning and dynamic scheduling. To balance the workload, the static partitioning phase takes both the loop features and the capabilities of the processors into account using a heuristic approach. In the dynamic scheduling phase, an adaptive self-scheduling algorithm is applied, in which two tuning parameters control the chunk sizes, aiming at load balancing and minimal synchronization overhead. In addition, we extend KASS to loop nests and adjust the chunk sizes at runtime. The experimental results show that KASS performs 4.8% to 16.9% better than existing self-scheduling schemes, and up to 21% better than the affinity scheduling scheme.
This work was supported by the National Natural Science Foundation of China (60973010).
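To make the dynamic phase concrete, the following is a minimal sketch of a generic self-scheduling chunk generator in the guided style, parameterized by two tuning knobs. The names `alpha` (how aggressively chunk sizes shrink) and `min_chunk` (a floor that bounds synchronization overhead) and the chunk-size formula itself are illustrative assumptions; the abstract does not give KASS's actual rules, only that two such parameters exist.

```python
# Illustrative self-scheduling sketch. The parameters `alpha` and
# `min_chunk` and the formula remaining/(alpha * n_procs) are
# assumptions for illustration, NOT the actual KASS algorithm.

def schedule_chunks(n_iters, n_procs, alpha=2.0, min_chunk=4):
    """Yield (start, size) chunks of a loop with n_iters iterations.

    Each chunk is a fraction of the remaining iterations, so chunks
    shrink over time: large chunks early reduce scheduling overhead,
    small chunks late smooth out load imbalance.
    """
    start = 0
    remaining = n_iters
    while remaining > 0:
        # Shrinking chunk size, floored at min_chunk to cap the
        # number of synchronization points near the end of the loop.
        size = max(min_chunk, int(remaining / (alpha * n_procs)))
        size = min(size, remaining)  # never overshoot the loop bound
        yield (start, size)
        start += size
        remaining -= size

chunks = list(schedule_chunks(1000, 4))
```

In a real runtime, each idle processor would atomically claim the next `(start, size)` pair; larger `alpha` yields smaller chunks (better balance, more synchronization), while a larger `min_chunk` trades balance for fewer scheduling events.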
© 2012 IFIP International Federation for Information Processing
Cite this paper
Wang, Y., Ji, W., Shi, F., Zuo, Q., Deng, N. (2012). Knowledge-Based Adaptive Self-Scheduling. In: Park, J.J., Zomaya, A., Yeo, SS., Sahni, S. (eds) Network and Parallel Computing. NPC 2012. Lecture Notes in Computer Science, vol 7513. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35606-3_3
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-35605-6
Online ISBN: 978-3-642-35606-3