
Sparsifying Synchronization for High-Performance Shared-Memory Sparse Triangular Solver

  • Jongsoo Park
  • Mikhail Smelyanskiy
  • Narayanan Sundaram
  • Pradeep Dubey
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8488)

Abstract

The last decade has seen rapid growth of single-chip multiprocessors (CMPs), which have leveraged Moore's law to deliver high concurrency via increases in core count and vector width. Modern CMPs execute from several hundred to several thousand concurrent operations per cycle, while their memory subsystems deliver from tens to hundreds of gigabytes per second of bandwidth.

Taking advantage of these parallel resources requires highly tuned parallel implementations of the key computational kernels that form the backbone of modern HPC. The sparse triangular solver is one such kernel and is the focus of this paper. It is widely used in several types of sparse linear solvers, yet it is commonly considered challenging to parallelize and scale even on a moderate number of cores. The challenge stems from the fact that, compared to data-parallel operations such as sparse matrix-vector multiplication, a triangular solver typically has limited task-level parallelism and relies on fine-grain synchronization to exploit it.
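
To make the conventional baseline concrete, the sketch below (our illustration; the names CsrLower, build_levels, and lower_solve are hypothetical, not from the paper) shows level scheduling for a lower-triangular solve Lx = b in CSR format: rows are grouped into levels such that rows within a level are mutually independent, and a barrier separates consecutive levels.

```cpp
// A minimal sketch of conventional level scheduling for solving L x = b,
// where L is sparse lower triangular in CSR format. Assumes each row's
// column indices are ascending, so the diagonal entry is last in its row.
#include <algorithm>
#include <vector>

struct CsrLower {
    int n;
    std::vector<int> rowptr;    // size n + 1
    std::vector<int> colidx;    // ascending within each row; last entry is the diagonal
    std::vector<double> val;
};

// A row's level is 1 + the maximum level of the rows it depends on.
// Rows within the same level are mutually independent.
std::vector<std::vector<int>> build_levels(const CsrLower& L) {
    std::vector<int> level(L.n, 0);
    int max_level = 0;
    for (int i = 0; i < L.n; ++i) {
        for (int k = L.rowptr[i]; k < L.rowptr[i + 1] - 1; ++k)   // skip the diagonal
            level[i] = std::max(level[i], level[L.colidx[k]] + 1);
        max_level = std::max(max_level, level[i]);
    }
    std::vector<std::vector<int>> levels(max_level + 1);
    for (int i = 0; i < L.n; ++i) levels[level[i]].push_back(i);
    return levels;
}

// Solve one level at a time; the implicit barrier at the end of each
// "omp parallel for" is the synchronization the paper seeks to reduce.
void lower_solve(const CsrLower& L, const std::vector<double>& b, std::vector<double>& x) {
    auto levels = build_levels(L);
    for (const auto& lev : levels) {
        #pragma omp parallel for
        for (int t = 0; t < (int)lev.size(); ++t) {
            const int i = lev[t];
            double s = b[i];
            int k = L.rowptr[i];
            for (; k < L.rowptr[i + 1] - 1; ++k)
                s -= L.val[k] * x[L.colidx[k]];
            x[i] = s / L.val[k];                   // k now points at the diagonal
        }
    }
}
```

Each barrier stalls every thread until the slowest one finishes its level; when a matrix yields many small levels, this synchronization cost dominates the solve, which is the overhead the paper targets.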

This paper presents a synchronization sparsification technique that significantly reduces synchronization overhead in the sparse triangular solver and improves its scalability. We observe that the majority of edges in the task dependency graphs used to model the flow of computation in a sparse triangular solver are redundant. We propose a fast, approximate sparsification algorithm that eliminates more than 90% of these dependencies, substantially reducing synchronization overhead. As a result, on a 12-core Intel® Xeon® processor, our approach improves the performance of the sparse triangular solver by 1.6x compared to conventional level scheduling with barrier synchronization. This, in turn, leads to a 1.4x speedup in a preconditioned conjugate gradient solver.
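
The abstract does not spell out the sparsification algorithm itself, but its core idea, removing dependency edges that are implied by other edges, can be illustrated with a minimal sketch. The code below (our illustration; Dag and sparsify are hypothetical names) drops an edge u → v whenever another predecessor w of v already depends on u, i.e., whenever a detour u → w → v exists. Only length-2 detours are checked, making this an approximation of exact transitive reduction in the same fast, approximate spirit as the paper's approach, though not necessarily its actual algorithm.

```cpp
// A minimal sketch of dependency sparsification on a task DAG.
// pred[v] lists the tasks that v waits on (its direct predecessors).
#include <algorithm>
#include <cstddef>
#include <vector>

using Dag = std::vector<std::vector<int>>;

// Drop an edge u -> v when another predecessor w of v already has u among
// its own predecessors: waiting on w then implies u has finished. Only
// length-2 detours (u -> w -> v) are checked, so this is an approximation
// of exact transitive reduction; longer detours go undetected.
Dag sparsify(const Dag& pred) {
    Dag out(pred.size());
    for (std::size_t v = 0; v < pred.size(); ++v) {
        for (int u : pred[v]) {
            bool redundant = false;
            for (int w : pred[v]) {
                if (w != u &&
                    std::find(pred[w].begin(), pred[w].end(), u) != pred[w].end()) {
                    redundant = true;               // u -> w -> v covers u -> v
                    break;
                }
            }
            if (!redundant) out[v].push_back(u);
        }
    }
    return out;
}
```

After sparsification, each task need only wait on its few remaining predecessors (for example, by spinning on per-task completion flags) rather than participating in a global barrier, which is the source of the reported speedup.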

Keywords

Critical Path, Barrier Synchronization, Synchronization Overhead, Dependency Edge, Redundant Edge



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Jongsoo Park (1)
  • Mikhail Smelyanskiy (1)
  • Narayanan Sundaram (1)
  • Pradeep Dubey (1)

  1. Parallel Computing Lab, Intel Corporation, Santa Clara, USA
