Exploiting Data Sparsity in Parallel Matrix Powers Computations

Conference paper

DOI: 10.1007/978-3-642-55224-3_2

Part of the Lecture Notes in Computer Science book series (LNCS, volume 8384)
Cite this paper as:
Knight N., Carson E., Demmel J. (2014) Exploiting Data Sparsity in Parallel Matrix Powers Computations. In: Wyrzykowski R., Dongarra J., Karczewski K., Waśniewski J. (eds) Parallel Processing and Applied Mathematics. PPAM 2013. Lecture Notes in Computer Science, vol 8384. Springer, Berlin, Heidelberg


Abstract

We derive a new parallel communication-avoiding matrix powers algorithm for matrices of the form \(A=D+USV^H\), where \(D\) is sparse and \(USV^H\) has low rank and is possibly dense. We demonstrate that, with respect to the cost of computing \(k\) sparse matrix-vector multiplications, our algorithm asymptotically reduces the parallel latency by a factor of \(O(k)\) for small additional bandwidth and computation costs. Using problems from real-world applications, our performance model predicts up to \(13\times \) speedups on petascale machines.


Keywords: Communication-avoiding · Matrix powers · Graph cover · Hierarchical matrices · Parallel algorithms

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. University of California, Berkeley, USA