Efficient Evaluation of Matrix Polynomials

  • Niv Hoffman
  • Oded Schwartz
  • Sivan Toledo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10777)

Abstract

We revisit the problem of evaluating matrix polynomials and introduce memory- and communication-efficient algorithms. Our algorithms, based on that of Paterson and Stockmeyer, are more efficient than previous ones, while being as memory-efficient as Van Loan’s variant. We supplement our theoretical analysis of the algorithms with matching lower bounds and with experimental results showing that our algorithms outperform existing ones.
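The Paterson–Stockmeyer scheme that the abstract refers to evaluates a degree-d matrix polynomial with roughly 2√d matrix–matrix multiplications instead of the d − 1 required by Horner’s rule applied directly in A. A minimal NumPy sketch of the classical scheme (not the memory-efficient variant introduced in this paper; the function name and interface are illustrative only):

```python
import numpy as np

def paterson_stockmeyer(c, A):
    """Evaluate p(A) = sum_i c[i] * A**i with the Paterson-Stockmeyer scheme.

    Writes i = k*s + j with s ~ sqrt(d), so that
        p(A) = sum_k B_k (A^s)^k,  B_k = sum_{j<s} c[k*s+j] A^j,
    and evaluates the outer sum by Horner's rule in A^s.
    Uses O(sqrt(d)) matrix multiplications overall.
    """
    d = len(c) - 1                       # polynomial degree
    n = A.shape[0]
    s = max(1, int(np.ceil(np.sqrt(d + 1))))

    # Precompute the powers A^0, A^1, ..., A^s (s multiplications).
    powers = [np.eye(n)]
    for _ in range(s):
        powers.append(powers[-1] @ A)
    A_s = powers[s]

    # Horner's rule in A^s over blocks of s consecutive coefficients.
    result = np.zeros((n, n))
    for k in range(d // s, -1, -1):
        block = np.zeros((n, n))
        for j in range(s):
            idx = k * s + j
            if idx <= d:
                block += c[idx] * powers[j]
        result = result @ A_s + block
    return result
```

The classical formulation above stores all s + 1 precomputed powers, which is the memory cost (Θ(√d) matrices) that Van Loan’s variant and the algorithms of this paper reduce.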

Keywords

Polynomial evaluation · Matrix polynomials · Cache efficiency

References

  1. Ballard, G., Benson, A.R., Druinsky, A., Lipshitz, B., Schwartz, O.: Improving the numerical stability of fast matrix multiplication. SIAM J. Matrix Anal. Appl. 37, 1382–1418 (2016)
  2. Ballard, G., Demmel, J., Holtz, O., Schwartz, O.: Minimizing communication in linear algebra. SIAM J. Matrix Anal. Appl. 32, 866–901 (2011)
  3. Ballard, G., Demmel, J., Holtz, O., Schwartz, O.: Graph expansion and communication costs of fast matrix multiplication. J. ACM 59, 32 (2012)
  4. Benson, A.R., Ballard, G.: A framework for practical parallel fast matrix multiplication. In: ACM SIGPLAN Notices, vol. 50, pp. 42–53 (2015)
  5. Bini, D., Lotti, G.: Stability of fast algorithms for matrix multiplication. Numer. Math. 36, 63–72 (1980)
  6. Davies, P.I., Higham, N.J.: A Schur-Parlett algorithm for computing matrix functions. SIAM J. Matrix Anal. Appl. 25, 464–485 (2003)
  7. Deadman, E., Higham, N.J., Ralha, R.: Blocked Schur algorithms for computing the matrix square root. In: Manninen, P., Öster, P. (eds.) PARA 2012. LNCS, vol. 7782, pp. 171–182. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36803-5_12
  8. Demmel, J., Dumitriu, I., Holtz, O., Kleinberg, R.: Fast matrix multiplication is stable. Numer. Math. 106, 199–224 (2007)
  9. Dongarra, J.J., Du Croz, J., Hammarling, S., Duff, I.: A set of level 3 basic linear algebra subprograms. ACM Trans. Math. Softw. 16, 1–17 (1990)
  10. Frigo, M., Leiserson, C.E., Prokop, H., Ramachandran, S.: Cache-oblivious algorithms. In: Proceedings of the 40th IEEE Annual Symposium on Foundations of Computer Science (FOCS), pp. 285–297 (1999)
  11. Golub, G.H., Van Loan, C.F.: Matrix Computations, 4th edn. Johns Hopkins University Press, Baltimore (2013)
  12. Higham, N.J.: Functions of Matrices: Theory and Computation. SIAM, Philadelphia (2008)
  13. Huang, J., Smith, T.M., Henry, G.M., van de Geijn, R.A.: Implementing Strassen’s algorithm with BLIS. arXiv preprint arXiv:1605.01078 (2016)
  14. Huang, J., Smith, T.M., Henry, G.M., van de Geijn, R.A.: Strassen’s algorithm reloaded. In: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis (SC), pp. 690–701. IEEE (2016)
  15. Irony, D., Toledo, S., Tiskin, A.: Communication lower bounds for distributed-memory matrix multiplication. J. Parallel Distrib. Comput. 64, 1017–1026 (2004)
  16. Jia-Wei, H., Kung, H.T.: I/O complexity: the red-blue pebble game. In: Proceedings of the Thirteenth Annual ACM Symposium on Theory of Computing (STOC), pp. 326–333. ACM, New York (1981)
  17. Jonsson, I., Kågström, B.: Recursive blocked algorithms for solving triangular systems: Part II: two-sided and generalized Sylvester and Lyapunov matrix equations. ACM Trans. Math. Softw. 28, 416–435 (2002)
  18. Kressner, D.: Block algorithms for reordering standard and generalized Schur forms. ACM Trans. Math. Softw. 32, 521–532 (2006)
  19. Van Loan, C.F.: A note on the evaluation of matrix polynomials. IEEE Trans. Autom. Control AC-24, 320–321 (1979)
  20. Parlett, B.N.: Computation of functions of triangular matrices. Memorandum ERL-M481, Electronics Research Laboratory, UC Berkeley, November 1974
  21. Paterson, M.S., Stockmeyer, L.J.: On the number of nonscalar multiplications necessary to evaluate polynomials. SIAM J. Comput. 2, 60–66 (1973)
  22. Strassen, V.: Gaussian elimination is not optimal. Numer. Math. 13, 354–356 (1969)
  23. Toledo, S.: A survey of out-of-core algorithms in numerical linear algebra. In: Abello, J.M., Vitter, J.S. (eds.) External Memory Algorithms. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pp. 161–179. American Mathematical Society (1999)
  24. Winograd, S.: On multiplication of 2-by-2 matrices. Linear Algebra Appl. 4, 381–388 (1971)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Tel-Aviv University, Tel Aviv, Israel
  2. The Hebrew University of Jerusalem, Jerusalem, Israel