Exploiting Data Sparsity in Parallel Matrix Powers Computations

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8384)

Abstract

We derive a new parallel communication-avoiding matrix powers algorithm for matrices of the form \(A=D+USV^H\), where \(D\) is sparse and \(USV^H\) has low rank and is possibly dense. We demonstrate that, relative to the cost of computing \(k\) sparse matrix-vector multiplications, our algorithm asymptotically reduces the parallel latency by a factor of \(O(k)\), at the cost of a small increase in bandwidth and computation. Using problems from real-world applications, our performance model predicts speedups of up to \(13\times\) on petascale machines.
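
The splitting can be applied one step at a time: each product \(Ax\) costs one sparse multiply with \(D\) plus small dense multiplies through \(V^H\), \(S\), and \(U\), so the possibly dense term \(USV^H\) is never formed. The following is a minimal sequential sketch of this algebraic structure only, not the paper's parallel communication-avoiding algorithm; the function name `matrix_powers` and the SciPy/NumPy representation of \(D\), \(U\), \(S\), \(V\) are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp

def matrix_powers(D, U, S, V, x, k):
    """Compute the Krylov basis [x, Ax, ..., A^k x] for A = D + U S V^H.

    D: sparse (n x n); U, V: dense (n x r); S: dense (r x r); x: length-n
    vector, with r << n. Each step applies A without ever forming the
    (possibly dense) low-rank term U S V^H explicitly.
    """
    vecs = [x]
    for _ in range(k):
        y = vecs[-1]
        # Sparse part: one SpMV with D.
        z = D @ y
        # Low-rank part: reduce to r dimensions via V^H, scale by S, expand via U.
        z += U @ (S @ (V.conj().T @ y))
        vecs.append(z)
    return np.column_stack(vecs)

# Smoke test on a small random problem.
rng = np.random.default_rng(0)
n, r, k = 100, 3, 4
D = sp.random(n, n, density=0.05, format="csr", random_state=0)
U = rng.standard_normal((n, r))
S = rng.standard_normal((r, r))
V = rng.standard_normal((n, r))
x = rng.standard_normal(n)
K = matrix_powers(D, U, S, V, x, k)
A = D.toarray() + U @ S @ V.conj().T
assert np.allclose(K[:, 2], A @ (A @ x))
```

The \(O(k)\) latency saving claimed above comes from how this computation is reorganized across processors; the sketch only illustrates the algebraic splitting being exploited.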

Keywords

Communication-avoiding · Matrix powers · Graph cover · Hierarchical matrices · Parallel algorithms

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

University of California, Berkeley, USA