Topology-Aware OpenMP Process Scheduling

  • Peter Thoman
  • Hans Moritsch
  • Thomas Fahringer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6132)

Abstract

Multi-core, multi-processor machines provide parallelism at multiple levels: CPUs, cores per CPU, and hardware multithreading per core. Elements at each level of this hierarchy potentially exhibit heterogeneous memory access latencies. Due to this heterogeneity and the high degree of hardware parallelism, existing OpenMP applications often fail to use the whole system effectively. To increase throughput and decrease power consumption of OpenMP systems employed in HPC settings, we propose and implement process-level scheduling of OpenMP parallel regions. We present a number of scheduling optimizations based on system topology information and evaluate their effectiveness in terms of metrics calculated in simulations as well as experimentally obtained performance and power consumption results. On 32-core machines our methods achieve performance improvements of up to 33% compared to standard OS-level scheduling, and reduce power consumption by an average of 12% in long-term tests.



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Peter Thoman¹
  • Hans Moritsch¹
  • Thomas Fahringer¹

  1. Distributed and Parallel Systems Group, University of Innsbruck, Innsbruck, Austria
