Reducing Synchronization Overhead Through Bundled Communication

  • James Dinan
  • Clement Cole
  • Gabriele Jost
  • Stan Smith
  • Keith Underwood
  • Robert W. Wisniewski
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8356)

Abstract

OpenSHMEM provides a one-sided communication interface that allows for asynchronous, one-sided communication operations on data stored in a partitioned global address space. While communication in this model is efficient, synchronization must currently be achieved through collective barriers or one-sided updates of sentinel locations in the global address space. These mechanisms can over-synchronize or require additional communication operations, respectively, leading to high overheads. We propose a SHMEM extension that utilizes capabilities present in most high performance interconnects (e.g., communication events) to bundle synchronization information together with communication operations. Using this approach, we improve ping-pong latency for small messages by a factor of two, and demonstrate significant improvements for synchronization-heavy communication patterns, including all-to-all and pipelined parallel stencil communication.
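The sentinel-update scheme that the abstract identifies as requiring an additional communication operation can be sketched with the standard OpenSHMEM API. This is a minimal illustration, not the paper's proposed extension (whose interface is defined in the full text); `buf`, `flag`, and the function names are illustrative:

```c
/* Baseline point-to-point synchronization in standard OpenSHMEM:
   the payload put is followed by a fence and a second, separate put
   that updates a sentinel flag.  It is this extra operation that the
   paper's bundled-communication proposal aims to eliminate. */
#include <shmem.h>

long flag = 0;       /* sentinel location in the symmetric heap */
long buf[1024];      /* symmetric destination buffer            */

void send_with_sentinel(const long *src, int dest_pe) {
    shmem_long_put(buf, src, 1024, dest_pe);  /* deliver the payload   */
    shmem_fence();                            /* order payload vs. flag */
    shmem_long_p(&flag, 1, dest_pe);          /* one-sided flag update  */
}

void recv_with_sentinel(void) {
    shmem_long_wait(&flag, 0);  /* spin until flag becomes nonzero */
    flag = 0;                   /* reset for the next iteration    */
}
```

Note that the receiver polls only its local sentinel, so the baseline costs one extra put and a fence per message; bundling the notification with the payload's completion event removes both.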


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • James Dinan
  • Clement Cole
  • Gabriele Jost
  • Stan Smith
  • Keith Underwood
  • Robert W. Wisniewski

  1. Intel Corp., USA
