Implementing OpenSHMEM Using MPI-3 One-Sided Communication

  • Jeff R. Hammond
  • Sayan Ghosh
  • Barbara M. Chapman
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8356)

Abstract

This paper reports the design and implementation of OpenSHMEM over MPI using new one-sided communication features in MPI-3, which include not only new functions (e.g., remote atomics) but also a new memory model that is consistent with that of SHMEM. We use a new, non-collective MPI communicator-creation routine to allow SHMEM collectives to use their MPI counterparts. Finally, we leverage MPI shared-memory windows within a node, which allows direct (load-store) access. Performance evaluations are conducted for shared-memory and InfiniBand conduits using microbenchmarks.
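As a hedged illustration of the mapping the abstract describes (not the authors' actual implementation), the C sketch below shows how a SHMEM-style put, quiet, and fetch-and-add might be layered on MPI-3 passive-target RMA: one window allocated with MPI_Win_allocate serves as the symmetric heap, MPI_Win_lock_all exposes it for the life of the run, and the MPI-3 remote atomics and flush calls supply completion semantics. The shim_* names, the fixed heap size, and the single-window layout are assumptions for the example.

    #include <mpi.h>
    #include <stddef.h>

    #define SHEAP_SIZE (1 << 20)   /* assumed symmetric-heap size */

    static MPI_Win  sheap_win;     /* one window spanning the symmetric heap */
    static char    *sheap_base;    /* local base of the symmetric heap */

    void shim_init(void)
    {
        MPI_Init(NULL, NULL);
        /* MPI-3 allocates the window memory itself; on cache-coherent
           systems this yields the unified memory model that the abstract
           notes is consistent with SHMEM. */
        MPI_Win_allocate(SHEAP_SIZE, 1, MPI_INFO_NULL, MPI_COMM_WORLD,
                         &sheap_base, &sheap_win);
        /* Passive-target access from every PE for the whole run matches
           SHMEM's always-accessible one-sided semantics. */
        MPI_Win_lock_all(0, sheap_win);
    }

    /* shmem_putmem analogue: translate the symmetric address to a window
       offset and issue a nonblocking one-sided put. */
    void shim_putmem(void *target, const void *source, size_t len, int pe)
    {
        MPI_Aint disp = (MPI_Aint)((char *)target - sheap_base);
        MPI_Put(source, (int)len, MPI_BYTE, pe, disp, (int)len, MPI_BYTE,
                sheap_win);
    }

    /* shmem_quiet analogue: force remote completion of pending puts. */
    void shim_quiet(void)
    {
        MPI_Win_flush_all(sheap_win);
    }

    /* shmem_long_fadd analogue: SHMEM remote atomics map directly onto
       the MPI-3 MPI_Fetch_and_op routine. */
    long shim_long_fadd(long *target, long value, int pe)
    {
        long old;
        MPI_Aint disp = (MPI_Aint)((char *)target - sheap_base);
        MPI_Fetch_and_op(&value, &old, MPI_LONG, pe, disp, MPI_SUM,
                         sheap_win);
        MPI_Win_flush(pe, sheap_win);  /* wait for the fetched value */
        return old;
    }

The address translation assumes every PE allocates its heap at the same offset from its own base, which is what makes a single displacement valid on all targets. Intra-node load-store access, as the abstract mentions, would instead use MPI_Win_allocate_shared on a communicator built with MPI_Comm_split_type(MPI_COMM_TYPE_SHARED).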

Keywords

SHMEM · MPI-3 · RMA · one-sided communication



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Jeff R. Hammond (1)
  • Sayan Ghosh (2)
  • Barbara M. Chapman (2)
  1. Argonne National Laboratory, Argonne, USA
  2. Dept. of Computer Science, University of Houston, Houston, USA
