A Compilation and Run-Time Framework for Maximizing Performance of Self-scheduling Algorithms
Ordinary programs contain many parallel loops, which account for a significant portion of these programs’ completion time. Executing such loops in parallel can significantly speed up performance on modern multi-core systems. We propose a new framework - Locality Aware Self-scheduling (LASS) - for scheduling parallel loops on multi-core systems that boosts the performance of known self-scheduling algorithms under diverse execution conditions. LASS enforces data locality by assigning consecutive chunks of iterations to the same core, and favours load balancing through a work-stealing mechanism. LASS is evaluated on a set of kernels on a multi-core system with 16 cores. Two execution scenarios are considered. In the first, our application runs alone on top of the operating system; in the second, it runs in conjunction with an interfering parallel job. The average speedup achieved by LASS is 11% for the first execution scenario and 31% for the second.
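The abstract describes two mechanisms: consecutive chunks of loop iterations are bound to the same core for locality, and idle cores steal work for load balance. The following is a minimal sketch of that idea, not the authors' implementation; the function name `run_lass`, the chunking policy, and the single global lock are assumptions made for illustration.

```python
# Sketch of locality-aware self-scheduling with work stealing (hypothetical
# illustration, not the paper's actual runtime). Iterations are split into
# chunks; each worker is pre-assigned a contiguous run of chunks (locality),
# and an idle worker steals a chunk from the back of a victim's queue.
import threading
from collections import deque

def run_lass(n_iters, n_workers, chunk_size, body):
    chunks = [range(i, min(i + chunk_size, n_iters))
              for i in range(0, n_iters, chunk_size)]
    # Locality: consecutive chunks go to the same worker's queue.
    per = (len(chunks) + n_workers - 1) // n_workers
    queues = [deque(chunks[w * per:(w + 1) * per]) for w in range(n_workers)]
    lock = threading.Lock()  # one lock for simplicity; a real runtime
                             # would use per-queue locks or lock-free deques

    def next_chunk(w):
        with lock:
            if queues[w]:
                return queues[w].popleft()   # own queue, front: preserves order
            for v in range(n_workers):       # load balance: steal from the back
                if queues[v]:
                    return queues[v].pop()
        return None                          # all queues empty: terminate

    def worker(w):
        while (chunk := next_chunk(w)) is not None:
            for i in chunk:
                body(i)

    threads = [threading.Thread(target=worker, args=(w,))
               for w in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Stealing from the back of the victim's queue (rather than the front) is the conventional choice in work-stealing runtimes such as Cilk: the victim keeps working on the chunks closest to its current data, so the locality benefit of consecutive chunks is mostly preserved even under stealing.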
Keywords: loop scheduling, self-scheduling, random forest