High Performance Computing for Computational Science - VECPAR 2012

Volume 7851 of the series Lecture Notes in Computer Science pp 201-215

Matrix Multiplication on Multidimensional Torus Networks

  • Edgar Solomonik, Division of Computer Science, University of California at Berkeley
  • James Demmel, Division of Computer Science, University of California at Berkeley


Blocked matrix multiplication algorithms such as Cannon's algorithm and SUMMA have a 2-dimensional communication structure. We introduce a generalized 'Split-Dimensional' version of Cannon's algorithm (SD-Cannon) with a higher-dimensional, bidirectional communication structure. This algorithm is useful on torus interconnects that can achieve more injection bandwidth than single-link bandwidth. On a bidirectional torus network of dimension d, SD-Cannon can lower the algorithmic bandwidth cost by a factor of up to d. With rectangular collectives, SUMMA can match this lower bandwidth cost but incurs a higher latency cost. We use Charm++ virtualization to map SD-Cannon efficiently onto unbalanced and odd-dimensional torus network partitions. Our performance study on Blue Gene/P demonstrates that an MPI version of SD-Cannon can exploit multiple communication links and improve performance.
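The 2-dimensional communication structure of classic Cannon's algorithm, which SD-Cannon generalizes, can be illustrated with a sequential simulation. This is a sketch for intuition only, not the paper's implementation: a p x p processor grid is modeled as nested lists of blocks, and each cyclic list rotation stands in for one round of nearest-neighbor block shifts on the torus.

```python
import numpy as np

def cannon_matmul(A, B, p):
    """Sequential simulation of Cannon's 2D algorithm on a p x p grid.

    A and B are n x n with n divisible by p. Grid cell (i, j) owns one
    n/p x n/p block of each matrix; network shifts are modeled as
    cyclic rotations of the block lists.
    """
    n = A.shape[0]
    b = n // p
    # blk(M, i, j) is a view of the (i, j) block of M.
    blk = lambda M, i, j: M[i*b:(i+1)*b, j*b:(j+1)*b]

    # Initial skew: row i of A is shifted left by i positions,
    # column j of B is shifted up by j positions.
    Ablocks = [[blk(A, i, (j + i) % p).copy() for j in range(p)] for i in range(p)]
    Bblocks = [[blk(B, (i + j) % p, j).copy() for j in range(p)] for i in range(p)]

    C = np.zeros_like(A)
    for _ in range(p):
        # Local multiply-accumulate on every grid cell.
        for i in range(p):
            for j in range(p):
                blk(C, i, j)[...] += Ablocks[i][j] @ Bblocks[i][j]
        # Shift A blocks left along rows, B blocks up along columns.
        Ablocks = [[Ablocks[i][(j + 1) % p] for j in range(p)] for i in range(p)]
        Bblocks = [[Bblocks[(i + 1) % p][j] for j in range(p)] for i in range(p)]
    return C
```

Each of the p steps moves every A and B block across exactly one link of the 2D grid, which is the single-link communication pattern the abstract contrasts with SD-Cannon's use of multiple torus dimensions at once.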