
Graph Expansion Analysis for Communication Costs of Fast Rectangular Matrix Multiplication

  • Grey Ballard
  • James Demmel
  • Olga Holtz
  • Benjamin Lipshitz
  • Oded Schwartz
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7659)

Abstract

Graph expansion analysis of computational DAGs is useful for obtaining communication cost lower bounds where previous methods, such as geometric embedding, are not applicable. This has recently been demonstrated for Strassen's and Strassen-like fast square matrix multiplication algorithms. Here we extend the expansion analysis approach to fast algorithms for rectangular matrix multiplication, obtaining a new class of communication cost lower bounds. These apply, for example, to the algorithms of Bini et al. (1979) and of Hopcroft and Kerr (1971). Some of our bounds are proved to be optimal.
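To make the setting concrete, the sketch below (Python/NumPy, not taken from the paper) shows how an arbitrary bilinear <m,k,n;t>-algorithm, specified by coefficient matrices (U, V, W), is applied recursively to multiply an m^l x k^l matrix by a k^l x n^l matrix. This generic recursion is the construction behind fast rectangular algorithms such as those of Bini et al. and Hopcroft and Kerr; their actual coefficient matrices are not reproduced here, so the helper classical_uvw is a hypothetical placeholder that generates the trivial (t = mkn) encoding purely to check correctness.

```python
import numpy as np

def classical_uvw(m, k, n):
    # Hypothetical placeholder: the trivial <m,k,n; t = mkn> bilinear encoding.
    # A fast algorithm (e.g. Bini et al. or Hopcroft-Kerr) would supply its own
    # (U, V, W) with fewer than m*k*n rows; those coefficients are not included here.
    t = m * k * n
    U = np.zeros((t, m * k))   # row r: linear combination of A's entries/blocks
    V = np.zeros((t, k * n))   # row r: linear combination of B's entries/blocks
    W = np.zeros((m * n, t))   # row (i,j): combines the t products into C[i, j]
    r = 0
    for i in range(m):
        for j in range(n):
            for p in range(k):
                U[r, i * k + p] = 1.0   # picks A[i, p]
                V[r, p * n + j] = 1.0   # picks B[p, j]
                W[i * n + j, r] = 1.0   # this product contributes to C[i, j]
                r += 1
    return U, V, W

def recursive_bilinear(A, B, U, V, W, m, k, n):
    # Multiply A (m^l x k^l) by B (k^l x n^l) by applying the bilinear
    # <m,k,n;t>-algorithm encoded by (U, V, W), one recursion level per call.
    if A.shape == (1, 1):
        return A * B
    bm, bk, bn = A.shape[0] // m, A.shape[1] // k, B.shape[1] // n
    # Partition A into m*k blocks and B into k*n blocks (row-major block order).
    Ab = [A[i*bm:(i+1)*bm, p*bk:(p+1)*bk] for i in range(m) for p in range(k)]
    Bb = [B[p*bk:(p+1)*bk, j*bn:(j+1)*bn] for p in range(k) for j in range(n)]
    t = U.shape[0]
    # Each of the t products is a recursive multiplication of block combinations.
    P = [recursive_bilinear(sum(U[r, s] * Ab[s] for s in range(m * k)),
                            sum(V[r, s] * Bb[s] for s in range(k * n)),
                            U, V, W, m, k, n)
         for r in range(t)]
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(m):
        for j in range(n):
            C[i*bm:(i+1)*bm, j*bn:(j+1)*bn] = sum(W[i*n + j, r] * P[r] for r in range(t))
    return C

if __name__ == "__main__":
    # Check the recursion against NumPy on a (3^2 x 2^2) times (2^2 x 2^2) product.
    m, k, n, l = 3, 2, 2, 2
    U, V, W = classical_uvw(m, k, n)
    A = np.random.rand(m**l, k**l)
    B = np.random.rand(k**l, n**l)
    assert np.allclose(recursive_bilinear(A, B, U, V, W, m, k, n), A @ B)
```

With a genuine fast <m,k,n;t> encoding substituted for classical_uvw, the same recursion realizes the rectangular algorithms whose computation DAGs the expansion analysis studies; the data movement such a recursion incurs on a machine with fast memory of size M is what the paper's lower bounds constrain.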

Keywords

Matrix Multiplication, Communication Cost, Computational Graph, Fast Memory, Matrix Multiplication Algorithm


References

  1. Alon, N., Schwartz, O., Shapira, A.: An elementary construction of constant-degree expanders. Combinatorics, Probability & Computing 17(3), 319–327 (2008)
  2. Ballard, G., Demmel, J., Holtz, O., Lipshitz, B., Schwartz, O.: Brief announcement: strong scaling of matrix multiplication algorithms and memory-independent communication lower bounds. In: Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2012, pp. 77–79. ACM, New York (2012)
  3. Ballard, G., Demmel, J., Holtz, O., Lipshitz, B., Schwartz, O.: Communication-optimal parallel algorithm for Strassen's matrix multiplication. In: Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2012, pp. 193–204. ACM, New York (2012)
  4. Ballard, G., Demmel, J., Holtz, O., Schwartz, O.: Minimizing communication in numerical linear algebra. SIAM J. Matrix Analysis Applications 32(3), 866–901 (2011)
  5. Ballard, G., Demmel, J., Holtz, O., Schwartz, O.: Graph expansion and communication costs of fast matrix multiplication. J. ACM (accepted, 2012)
  6. Ballard, G., Demmel, J., Lipshitz, B., Schwartz, O.: Communication-avoiding parallel Strassen: implementation and performance. In: Proceedings of the 2012 International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2012. ACM, New York (2012)
  7. Beling, P., Megiddo, N.: Using fast matrix multiplication to find basic solutions. Theoretical Computer Science 205(1-2), 307–316 (1998)
  8. Bilardi, G., Pietracaprina, A., D'Alberto, P.: On the space and access complexity of computation DAGs. In: Brandes, U., Wagner, D. (eds.) WG 2000. LNCS, vol. 1928, pp. 47–58. Springer, Heidelberg (2000)
  9. Bilardi, G., Preparata, F.: Processor-time tradeoffs under bounded-speed message propagation: Part II, lower bounds. Theory of Computing Systems 32(5) (1999)
  10. Bini, D.: Relations between exact and approximate bilinear algorithms. Applications. Calcolo 17, 87–97 (1980), doi:10.1007/BF02575865
  11. Bini, D., Capovani, M., Romani, F., Lotti, G.: O(n^2.7799) complexity for n × n approximate matrix multiplication. Information Processing Letters 8(5), 234–235 (1979)
  12. Bürgisser, P., Clausen, M., Shokrollahi, M.A.: Algebraic Complexity Theory. Grundlehren der mathematischen Wissenschaften, vol. 315. Springer (1997)
  13. Coppersmith, D.: Rapid multiplication of rectangular matrices. SIAM Journal on Computing 11(3), 467–471 (1982)
  14. Coppersmith, D.: Rectangular matrix multiplication revisited. J. Complexity 13, 42–49 (1997)
  15. Fischer, P., Probert, R.: Efficient procedures for using matrix algorithms. In: Loeckx, J. (ed.) ICALP 1974. LNCS, vol. 14, pp. 413–427. Springer, Heidelberg (1974)
  16. Galil, Z., Pan, V.: Parallel evaluation of the determinant and of the inverse of a matrix. Information Processing Letters 30(1), 41–45 (1989)
  17. Hong, J.W., Kung, H.T.: I/O complexity: the red-blue pebble game. In: Proceedings of the Thirteenth Annual ACM Symposium on Theory of Computing, STOC 1981, pp. 326–333. ACM, New York (1981)
  18. Hopcroft, J., Musinski, J.: Duality applied to the complexity of matrix multiplications and other bilinear forms. In: Proceedings of the Fifth Annual ACM Symposium on Theory of Computing, STOC 1973, pp. 73–87. ACM, New York (1973)
  19. Hopcroft, J.E., Kerr, L.R.: On minimizing the number of multiplications necessary for matrix multiplication. SIAM Journal on Applied Mathematics 20(1), 30–36 (1971)
  20. Huang, X., Pan, V.Y.: Fast rectangular matrix multiplications and improving parallel matrix computations. In: Proceedings of the Second International Symposium on Parallel Symbolic Computation, PASCO 1997, pp. 11–23. ACM, New York (1997)
  21. Huang, X., Pan, V.Y.: Fast rectangular matrix multiplication and applications. J. Complexity 14, 257–299 (1998)
  22. Irony, D., Toledo, S., Tiskin, A.: Communication lower bounds for distributed-memory matrix multiplication. J. Parallel Distrib. Comput. 64(9), 1017–1026 (2004)
  23. Kaplan, H., Sharir, M., Verbin, E.: Colored intersection searching via sparse rectangular matrix multiplication. In: Proceedings of the Twenty-Second Annual Symposium on Computational Geometry, SCG 2006, pp. 52–60. ACM, New York (2006)
  24. Ke, S., Zeng, B., Han, W., Pan, V.: Fast rectangular matrix multiplication and some applications. Science in China Series A: Mathematics 51, 389–406 (2008), doi:10.1007/s11425-007-0169-2
  25. Knight, P.: Fast rectangular matrix multiplication and QR decomposition. Linear Algebra and its Applications 221, 69–81 (1995)
  26. Koucký, M., Kabanets, V., Kolokolova, A.: Expanders made elementary (2007) (in preparation), http://www.cs.sfu.ca/~kabanets/papers/expanders.pdf
  27. Kratsch, D., Spinrad, J.: Between O(nm) and O(n^α). In: Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2003, pp. 709–716. Society for Industrial and Applied Mathematics, Philadelphia (2003)
  28. Lev, G., Valiant, L.G.: Size bounds for superconcentrators. Theoretical Computer Science 22(3), 233–251 (1983)
  29. Lotti, G., Romani, F.: On the asymptotic complexity of rectangular matrix multiplication. Theoretical Computer Science 23(2), 171–185 (1983)
  30. Mihail, M.: Conductance and convergence of Markov chains: a combinatorial treatment of expanders. In: Proceedings of the Thirtieth Annual IEEE Symposium on Foundations of Computer Science, pp. 526–531 (1989)
  31. Reingold, O., Vadhan, S., Wigderson, A.: Entropy waves, the zig-zag graph product, and new constant-degree expanders. Annals of Mathematics 155(1), 157–187 (2002)
  32. Savage, J.: Space-time tradeoffs in memory hierarchies. Technical report, Brown University, Providence, RI, USA (1994)
  33. Strassen, V.: Gaussian elimination is not optimal. Numer. Math. 13, 354–356 (1969)
  34. Yuster, R., Zwick, U.: Detecting short directed cycles using rectangular matrix multiplication and dynamic programming. In: Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2004, pp. 254–260. Society for Industrial and Applied Mathematics, Philadelphia (2004)
  35. Yuster, R., Zwick, U.: Fast sparse matrix multiplication. ACM Trans. Algorithms 1(1), 2–13 (2005)
  36. Zwick, U.: All pairs shortest paths using bridging sets and rectangular matrix multiplication. J. ACM 49, 289–317 (2002)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Grey Ballard (1)
  • James Demmel (2)
  • Olga Holtz (3, 4)
  • Benjamin Lipshitz (1)
  • Oded Schwartz (1)

  1. EECS Department, University of California, Berkeley, USA
  2. Mathematics Department and CS Division, University of California, Berkeley, USA
  3. Department of Mathematics, University of California, Berkeley, USA
  4. Technische Universität Berlin, Germany
