Parallel Processing and Applied Mathematics

Volume 8384 of the series Lecture Notes in Computer Science pp 15-25


Exploiting Data Sparsity in Parallel Matrix Powers Computations

  • Nicholas Knight (University of California)
  • Erin Carson (University of California)
  • James Demmel (University of California)

We derive a new parallel communication-avoiding matrix powers algorithm for matrices of the form \(A=D+USV^H\), where \(D\) is sparse and \(USV^H\) has low rank and is possibly dense. We demonstrate that, with respect to the cost of computing \(k\) sparse matrix-vector multiplications, our algorithm asymptotically reduces the parallel latency by a factor of \(O(k)\) for small additional bandwidth and computation costs. Using problems from real-world applications, our performance model predicts up to \(13\times \) speedups on petascale machines.
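The abstract's key structural observation is that \(A=D+USV^H\) can be applied to a vector without ever forming the (possibly dense) low-rank term explicitly: one sparse matvec with \(D\) plus \(O(nr)\) work for the rank-\(r\) part. A minimal sketch of this idea, feeding a matrix powers basis \([x, Ax, \dots, A^kx]\) (the full communication-avoiding algorithm in the paper is more involved; all sizes and data here are illustrative):

```python
import numpy as np
from scipy.sparse import random as sprandom

rng = np.random.default_rng(0)
n, r, k = 200, 3, 4  # illustrative dimensions: n x n matrix, rank r, k powers

# A = D + U S V^H, with D sparse and U S V^H low rank (possibly dense).
D = sprandom(n, n, density=0.02, format="csr", random_state=0)
U = rng.standard_normal((n, r))
S = rng.standard_normal((r, r))
V = rng.standard_normal((n, r))

def apply_A(x):
    # Apply A in factored form: one sparse matvec with D, then the
    # low-rank correction at O(nr) cost, never forming U @ S @ V.conj().T.
    return D @ x + U @ (S @ (V.conj().T @ x))

# Matrix powers output: the basis [x, Ax, A^2 x, ..., A^k x].
x = rng.standard_normal(n)
basis = [x]
for _ in range(k):
    basis.append(apply_A(basis[-1]))

# Sanity check against the explicit dense matrix.
A = D.toarray() + U @ S @ V.conj().T
assert np.allclose(basis[2], A @ (A @ x))
```

This factored application is what makes the low-rank term cheap to communicate in parallel: only the \(r\)-dimensional intermediate \(S(V^Hx)\) couples the processors, rather than the dense \(n \times n\) product.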


Keywords: Communication-avoiding · Matrix powers · Graph cover · Hierarchical matrices · Parallel algorithms