Implementation and Performance Analysis of 2.5D-PDGEMM on the K Computer

  • Daichi Mukunoki
  • Toshiyuki Imamura
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10777)

Abstract

In this study, we propose a 2D-compatible implementation of 2.5D parallel matrix multiplication (2.5D-PDGEMM) that is designed to compute matrices distributed on a 2D process grid. We evaluated the performance of the implementation on 16384 nodes (131072 cores) of the K computer, a massively parallel system. The results show that our 2.5D implementation outperforms conventional 2D implementations, including the ScaLAPACK PDGEMM routine, in terms of strong scaling, even when the cost of redistributing the matrices between the 2D and 2.5D distributions is included. We discuss the performance of the implementation through a breakdown of the execution time and a performance model.
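
To make the structure of the algorithm concrete, the following is a minimal sequential NumPy sketch of the block-level arithmetic behind the 2.5D scheme of Solomonik and Demmel, on which 2.5D-PDGEMM builds: the matrices are partitioned over a q x q front face, each of c layers redundantly holds copies of A and B and performs 1/c of the SUMMA-style rank-update steps, and a sum-reduction over the layers produces C. This is an illustration of the algorithm, not the authors' MPI implementation; the grid sizes and the function name matmul_25d_simulated are assumptions made for the example.

```python
# Sequential simulation of 2.5D matrix multiplication on a q x q x c grid.
# Illustrative sketch only; names and sizes are assumptions, not the paper's code.
import numpy as np

def matmul_25d_simulated(A, B, q=4, c=2):
    """Simulate C = A @ B as a q x q x c process grid would compute it.

    Each of the c layers receives a full copy of the 2D-distributed A and B
    (the replication step) and performs q // c of the q SUMMA-style steps;
    a sum-reduction over the layers then yields C on the front face.
    """
    assert q % c == 0, "layer count must evenly divide the SUMMA steps"
    n = A.shape[0]
    assert n % q == 0, "matrix order must be divisible by the grid width"
    nb = n // q  # block size held by one process

    # blk(M, i, j) is the (i, j) block owned by process (i, j) on the front face.
    def blk(M, i, j):
        return M[i * nb:(i + 1) * nb, j * nb:(j + 1) * nb]

    # Each layer accumulates its own partial C (one buffer per process).
    C_layers = [np.zeros_like(A) for _ in range(c)]
    steps = q // c
    for k in range(c):                               # depth index of the layer
        for t in range(k * steps, (k + 1) * steps):  # this layer's share of steps
            for i in range(q):
                for j in range(q):
                    # SUMMA step t: block column t of A moves along process rows,
                    # block row t of B along process columns, then a local update.
                    blk(C_layers[k], i, j)[:] += blk(A, i, t) @ blk(B, t, j)

    # Sum-reduction along the depth dimension; in an MPI implementation this
    # would be a reduction over the c layers onto the front (k = 0) layer.
    return sum(C_layers)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 64))
    B = rng.standard_normal((64, 64))
    print(np.allclose(matmul_25d_simulated(A, B), A @ B))  # True
```

In an actual 2.5D-PDGEMM, the trade-off shown here is that the c-fold replication costs extra memory and an initial broadcast, in exchange for each layer communicating in only 1/c of the steps, which is what improves strong scaling at high node counts.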

Keywords

Matrix multiplication · 2.5D algorithm · Parallel computing · Communication-avoiding · ScaLAPACK compatibility · K computer

Acknowledgment

The results were obtained using the K computer at the RIKEN Advanced Institute for Computational Science (project number: ra000022). This study is a part of the Flagship2020 project. We thank Akiyoshi Kuroda (RIKEN Advanced Institute for Computational Science), Eiji Yamanaka, and Naoki Sueyasu (Fujitsu Limited) for their helpful suggestions and discussions.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. RIKEN Advanced Institute for Computational Science, Kobe, Japan