
Unifying Barrier and Point-to-Point Synchronization in OpenMP with Phasers

  • Jun Shirako
  • Kamal Sharma
  • Vivek Sarkar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6665)

Abstract

OpenMP is a widely used standard for parallel programming on a broad range of SMP systems. In the OpenMP programming model, synchronization points are specified by implicit or explicit barrier operations. However, certain classes of computations, such as stencil algorithms, need to specify synchronization only among particular tasks or threads so as to support pipeline parallelism with better synchronization efficiency and data locality than wavefront parallelism using all-to-all barriers. In this paper, we propose two new synchronization constructs in the OpenMP programming model, thread-level phasers and iteration-level phasers, to support synchronization patterns such as point-to-point synchronization and sub-group barriers among neighbor threads. Experimental results on three platforms using numerical applications show performance improvements of phasers over OpenMP barriers of up to 1.74× on an 8-core Intel Nehalem system, up to 1.59× on a 16-core Core-2-Quad system, and up to 1.44× on a 32-core IBM Power7 system. It is reasonable to expect larger improvements on future manycore processors.
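To make the proposal concrete, the sketch below shows how the kind of point-to-point neighbor synchronization that phasers are designed to express can be hand-coded in standard OpenMP for a 1-D stencil pipeline, replacing the all-to-all barrier between time steps with per-thread progress counters and flushes. This is an illustrative emulation under assumed names (buf0, buf1, phase[], MAX_T), not the phaser API proposed in the paper, and the volatile-plus-flush handshake is a simplification of what a strict OpenMP memory-model treatment would require.

    /* 1-D stencil pipeline with point-to-point neighbor synchronization,
     * hand-coded in standard OpenMP. Illustrative emulation only; the
     * proposed phaser constructs would express this signal/wait pattern
     * directly. Assumes at most MAX_T threads. Compile with -fopenmp. */
    #include <omp.h>
    #include <stdio.h>

    #define N     1024   /* grid points */
    #define STEPS 100    /* time steps  */
    #define MAX_T 256    /* assumed upper bound on team size */

    static double buf0[N], buf1[N];
    static volatile int phase[MAX_T];  /* per-thread progress counters (zero-initialized) */

    int main(void) {
        for (int i = 0; i < N; i++) buf0[i] = (double)i;

        #pragma omp parallel
        {
            int t  = omp_get_thread_num();
            int nt = omp_get_num_threads();
            int lo = t * N / nt, hi = (t + 1) * N / nt;
            double *cur = buf0, *nxt = buf1;

            for (int s = 0; s < STEPS; s++) {
                /* 3-point stencil over this thread's chunk */
                for (int i = lo; i < hi; i++) {
                    int l = (i > 0) ? i - 1 : i;
                    int r = (i < N - 1) ? i + 1 : i;
                    nxt[i] = (cur[l] + cur[i] + cur[r]) / 3.0;
                }

                /* "signal": publish step-s results, then advance this thread's counter */
                #pragma omp flush
                phase[t] = s + 1;
                #pragma omp flush

                /* "wait": spin only on the left and right neighbors; a
                 * "#pragma omp barrier" here would also be correct, but it
                 * over-synchronizes the whole team */
                if (t > 0) {
                    while (phase[t - 1] < s + 1) {
                        #pragma omp flush
                    }
                }
                if (t < nt - 1) {
                    while (phase[t + 1] < s + 1) {
                        #pragma omp flush
                    }
                }

                double *tmp = cur; cur = nxt; nxt = tmp;  /* swap double buffers */
            }
        }
        printf("final value at grid point 0: %f\n",
               (STEPS % 2 == 0) ? buf0[0] : buf1[0]);
        return 0;
    }

Because each thread waits only on its left and right neighbors, skew between distant threads can propagate through the team in pipeline fashion instead of forcing every step into global lockstep behind the slowest thread; this localized waiting, and the reduced coherence traffic that comes with it, is the source of the speedups over all-to-all barriers reported above.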

Keywords

Parallel Region · Parallel Loop · Barrier Synchronization · Synchronization Pattern · Registration Mode



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Jun Shirako ¹
  • Kamal Sharma ¹
  • Vivek Sarkar ¹

  1. Department of Computer Science, Rice University, USA
