
Science China Information Sciences, Volume 58, Issue 9, pp 1–14

Automatic tuning of sparse matrix-vector multiplication on multicore clusters

  • ShiGang Li
  • ChangJun Hu
  • JunChao Zhang
  • YunQuan Zhang
Research Paper

Abstract

To achieve good performance and scalability on multicore clusters, parallel applications must be carefully optimized to exploit intra-node parallelism and to reduce inter-node communication. This paper investigates the automatic tuning of the sparse matrix-vector multiplication (SpMV) kernel implemented in a partitioned global address space language, which supports a hybrid thread- and process-based communication layer for multicore systems. One-sided communication is used for inter-node data exchange, while intra-node communication uses a mix of process shared memory and multithreading. We develop performance models that guide the selection of the best thread/process hybridization configuration and the best communication pattern for SpMV. As a result, our tuned SpMV in the hybrid runtime environment consumes less memory and reduces inter-node communication volume without sacrificing data locality. Experiments are conducted on 12 real sparse matrices. On 16-node Xeon and 8-node Opteron clusters, our tuned SpMV kernel achieves average speedups of 1.4X and 1.5X, respectively, over a well-optimized process-based message-passing implementation.
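
For context, the kernel being tuned is the standard sparse matrix-vector product y = A·x with A stored in a compressed sparse format. The sketch below is a plain C version of the serial CSR SpMV loop that such tuning frameworks parallelize and distribute; it is illustrative only, does not reproduce the paper's UPC implementation, and the function and parameter names are our own.

    #include <stddef.h>

    /* Illustrative sketch: y = A * x for a matrix A stored in CSR format.
     * Not the paper's UPC code; names and signatures are assumptions. */
    void spmv_csr(size_t n_rows,
                  const size_t *row_ptr,  /* length n_rows + 1; row i occupies row_ptr[i]..row_ptr[i+1]-1 */
                  const size_t *col_idx,  /* column index of each stored nonzero */
                  const double *val,      /* value of each stored nonzero */
                  const double *x,        /* dense input vector */
                  double *y)              /* dense output vector, length n_rows */
    {
        for (size_t i = 0; i < n_rows; i++) {
            double sum = 0.0;
            for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; k++) {
                sum += val[k] * x[col_idx[k]];
            }
            y[i] = sum;
        }
    }

In the distributed setting described in the abstract, the matrix is partitioned across the nodes of the cluster and the remote entries of x that a node needs are exchanged via one-sided communication, while the local accumulation follows the same loop structure.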

Keywords

SpMV, PGAS, hybridization, model-driven, multicore clusters

Highlights

To achieve the desired performance and scalability, parallel applications typically require careful tuning to better exploit the abundant parallelism within multicore cluster nodes and to reduce inter-node communication overhead. This paper studies the automatic tuning of sparse matrix-vector (SpMV) multiplication on multicore clusters, where the SpMV code is implemented in UPC, a partitioned global address space (PGAS) language. The UPC communication layer supports a hybrid multithreaded/multiprocess runtime environment, in which inter-node data exchange is performed via one-sided communication, while intra-node communication is optimized through PSHM (Process SHared Memory) and multithreading. We build a communication performance model for such hybrid runtime environments (e.g., UPC) and, based on this model, select the optimal hybrid runtime configuration parameters and communication pattern for SpMV, reducing memory overhead and inter-node communication volume while preserving data locality. Experiments on 12 real sparse matrices show that, compared with a highly hand-optimized MPI SpMV implementation, the automatically tuned SpMV achieves 1.4X and 1.5X performance improvements on a 16-node Xeon cluster and an 8-node Opteron cluster, respectively.




Copyright information

© Science China Press and Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  2. School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
  3. Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, USA
