One-Sided Communication in Coarray Fortran: Performance Tests on TH-1A

  • Peiming Guo
  • Jianping Wu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11337)

Abstract

The one-sided communication mechanism of the Message Passing Interface (MPI) has been extended through remote memory access (RMA) in several respects, including interfaces, languages, and compilers. Coarray Fortran (CAF), an emerging syntactic extension of Fortran for one-sided communication, is freely supported by the open-source and widely used GNU Fortran compiler, which relies on MPI-3 as its transport layer. In this paper, we present the potential of RMA to benefit the communication patterns of Cannon's algorithm. EVENTS, a safer alternative to atomics for synchronizing different processes in CAF, is also introduced via the classic Fast Fourier Transform (FFT). In addition, we study the performance of one-sided communication under different compilers. In our tests, one-sided communication outperforms two-sided communication only when the data size is large enough (in particular, for inter-node transfers). CAF is slightly faster than the plain MPI-3 one-sided routines without compiler optimization. EVENTS can improve the performance of parallel applications by avoiding idle time.
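The two CAF features discussed above, a direct one-sided put into a remote image's memory and EVENTS-based point-to-point synchronization, can be sketched in Fortran 2018 coarray syntax roughly as follows (a minimal illustration, not code from the paper; the program and variable names are our own, assuming a compiler with coarray and event support such as GNU Fortran with OpenCoarrays):

```fortran
program caf_put_event
  use iso_fortran_env, only: event_type
  implicit none
  integer :: buf(4)[*]          ! coarray: each image holds a 4-element buffer
  type(event_type) :: ready[*]  ! event coarray used for point-to-point notification

  buf = this_image()
  sync all

  if (this_image() == 1) then
     ! one-sided put: image 1 writes directly into image 2's memory,
     ! with no matching receive call on image 2
     buf(:)[2] = buf
     ! notify image 2 that the data has arrived
     event post(ready[2])
  else if (this_image() == 2) then
     ! wait only for the matching notification instead of a global barrier,
     ! so uninvolved images incur no idle time
     event wait(ready)
     print *, 'image 2 received:', buf
  end if
end program caf_put_event
```

The `event post`/`event wait` pair replaces a collective `sync all` at the synchronization point, which is the idle-time saving the abstract attributes to EVENTS.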

Keywords

One-sided communication · Coarray Fortran · MPI

Notes

Acknowledgments

This work is funded by NSFC (41875121, 61379022). The authors would like to thank the GCC team for providing the compiler resources to carry out this research, and the editors for their helpful suggestions.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. School of Meteorology and Oceanology, National University of Defense Technology, Changsha, China
