
A Compilation and Run-Time Framework for Maximizing Performance of Self-scheduling Algorithms

  • Yizhuo Wang
  • Laleh Aghababaie Beni
  • Alexandru Nicolau
  • Alexander V. Veidenbaum
  • Rosario Cammarota
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8707)

Abstract

Ordinary programs contain many parallel loops, which account for a significant portion of these programs’ completion time, and executing such loops in parallel can significantly improve performance on modern multi-core systems. We propose a new framework, Locality Aware Self-scheduling (LASS), for scheduling parallel loops on multi-core systems and boosting the performance of known self-scheduling algorithms under diverse execution conditions. LASS enforces data locality by forcing consecutive chunks of iterations to execute on the same core, and favours load balancing with the introduction of a work-stealing mechanism. LASS is evaluated on a set of kernels on a multi-core system with 16 cores. Two execution scenarios are considered: in the first, our application runs alone on top of the operating system; in the second, it runs in conjunction with an interfering parallel job. The average speedup achieved by LASS is 11% in the first scenario and 31% in the second.
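
To make the two mechanisms concrete, the following is a minimal C++11 sketch of locality-aware self-scheduling with work stealing. It is an illustration of the idea described in the abstract, not the authors' implementation: each worker first drains consecutive chunks from its own partition of the iteration space (locality), then claims chunks from other partitions once its own is empty (stealing). The chunk size, iteration count, and loop body are assumptions made for the example.

```cpp
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int kIters = 1 << 20;  // total loop iterations (assumed)
constexpr int kChunk = 1024;     // iterations per chunk (assumed)
constexpr int kCores = 16;       // matches the paper's 16-core test system

std::vector<double> data(kIters, 1.0);

// Independent loop body: any iteration-parallel work fits here.
void body(int i) { data[i] *= 2.0; }

// A partition of the iteration space; `next` is bumped atomically so the
// owner and thieves can claim chunks without locks.
struct Partition {
    std::atomic<int> next;
    int end;
};

void worker(int id, std::vector<Partition>& parts) {
    // Visit our own partition first (v == 0), then the others in order.
    // Draining our own partition keeps consecutive chunks on one core;
    // moving on to other partitions is the work-stealing step.
    for (int v = 0; v < kCores; ++v) {
        Partition& p = parts[(id + v) % kCores];
        for (;;) {
            int lo = p.next.fetch_add(kChunk);  // claim the next chunk
            if (lo >= p.end) break;             // partition is drained
            int hi = std::min(lo + kChunk, p.end);
            for (int i = lo; i < hi; ++i) body(i);
        }
    }
}

int main() {
    std::vector<Partition> parts(kCores);
    const int per = kIters / kCores;
    for (int c = 0; c < kCores; ++c) {
        parts[c].next.store(c * per);
        parts[c].end = (c == kCores - 1) ? kIters : (c + 1) * per;
    }
    std::vector<std::thread> pool;
    for (int c = 0; c < kCores; ++c)
        pool.emplace_back(worker, c, std::ref(parts));
    for (auto& t : pool) t.join();
    std::printf("data[0] = %.1f\n", data[0]);  // expect 2.0
    return 0;
}
```

Note the fixed chunk size is a simplification: self-scheduling schemes such as guided self-scheduling or factoring shrink the chunk size as the loop progresses, and the per-partition atomic counter shown here is the natural place to plug in such a policy.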

Keywords

loop scheduling · self-scheduling · random forest

Copyright information

© IFIP International Federation for Information Processing 2014

Authors and Affiliations

  • Yizhuo Wang (1)
  • Laleh Aghababaie Beni (2)
  • Alexandru Nicolau (2)
  • Alexander V. Veidenbaum (2)
  • Rosario Cammarota (3)

  1. Beijing Institute of Technology, Beijing, P.R. China
  2. University of California, Irvine, USA
  3. Qualcomm Research, San Diego, USA
