Parallel Processing and Applied Mathematics

Lecture Notes in Computer Science, Volume 8384, pp. 15-25

Exploiting Data Sparsity in Parallel Matrix Powers Computations

  • Nicholas Knight, Dept. of Computer Science, University of California, Berkeley
  • Erin Carson, Dept. of Computer Science, University of California, Berkeley
  • James Demmel, Dept. of Computer Science, University of California, Berkeley

Abstract

We derive a new parallel communication-avoiding matrix powers algorithm for matrices of the form \(A=D+USV^H\), where \(D\) is sparse and \(USV^H\) has low rank and is possibly dense. We demonstrate that, with respect to the cost of computing \(k\) sparse matrix-vector multiplications, our algorithm asymptotically reduces the parallel latency by a factor of \(O(k)\) for small additional bandwidth and computation costs. Using problems from real-world applications, our performance model predicts up to \(13\times \) speedups on petascale machines.
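To illustrate the matrix splitting the abstract refers to (not the paper's communication-avoiding algorithm itself), the following minimal NumPy/SciPy sketch computes the vectors \(x, Ax, \ldots, A^k x\) for \(A = D + USV^H\) by applying the sparse part \(D\) and the low-rank factors separately, so the possibly dense \(A\) is never formed explicitly; all names and sizes below are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp

def matrix_powers_split(D, U, S, Vh, x, k):
    """Return [x, A x, ..., A^k x] for A = D + U S V^H.

    Each step applies the sparse part D with one SpMV and the
    low-rank part via V^H v -> S (V^H v) -> U (S V^H v), costing
    O(nnz(D) + n*rank) per vector instead of a dense matrix product.
    Illustrative sketch only, not the paper's parallel algorithm.
    """
    vectors = [x]
    for _ in range(k):
        v = vectors[-1]
        w = D @ v                    # sparse part
        w = w + U @ (S @ (Vh @ v))   # low-rank part, applied right to left
        vectors.append(w)
    return vectors

# Example usage with a random sparse D and a rank-2 correction.
n, r, k = 1000, 2, 5
D = sp.random(n, n, density=1e-3, format="csr")
U = np.random.rand(n, r)
S = np.diag(np.random.rand(r))
Vh = np.random.rand(r, n)
x = np.ones(n)
basis = matrix_powers_split(D, U, S, Vh, x, k)
```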

Keywords

Communication-avoiding · Matrix powers · Graph cover · Hierarchical matrices · Parallel algorithms