
Low-Rank Approximation of a Matrix: Novel Insights, New Progress, and Extensions

  • Victor Y. Pan
  • Liang Zhao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9691)

Abstract

Empirical performance of the celebrated algorithms for low-rank approximation of a matrix by means of random sampling has been consistently efficient across studies with a variety of sparse and structured multipliers, but so far formal support for this empirical observation has been missing. Our new insight into this subject enables us to provide this elusive formal support. Furthermore, our approach promises significant acceleration of the known algorithms by means of sampling with more efficient sparse and structured multipliers, and it should also enhance the performance of other fundamental matrix algorithms. Our formal results and our initial numerical tests are in good accordance with each other, and we have already extended our progress to the acceleration of the Fast Multipole Method and the Conjugate Gradient algorithms.
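To fix ideas, the following is a minimal sketch of low-rank approximation by random sampling, in the spirit of the randomized range finder of Halko, Martinsson, and Tropp. The function name, parameters, and the dense Gaussian multiplier are illustrative assumptions only; the paper's contribution concerns replacing such a dense multiplier with more efficient sparse and structured ones, which this sketch does not implement.

```python
import numpy as np

def randomized_low_rank(A, r, oversample=5, seed=None):
    """Illustrative rank-(r + oversample) approximation A ~ Q @ B
    via random sampling (not the authors' specific algorithm).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Dense Gaussian test matrix; the paper argues that sparse and
    # structured multipliers can play this role more cheaply.
    Omega = rng.standard_normal((n, r + oversample))
    Y = A @ Omega                # sample the range of A
    Q, _ = np.linalg.qr(Y)      # orthonormal basis for the sampled range
    B = Q.T @ A                 # small projected factor
    return Q, B                 # A is approximated by Q @ B
```

For a matrix of exact rank r, the sampled basis Q captures the range with probability 1, so the approximation error is at the level of machine precision; for numerically low-rank inputs the oversampling parameter controls the failure probability.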

Keywords

Low-rank approximation of a matrix, Random sampling, Derandomization, Fast multipole method, Conjugate gradient algorithms


Acknowledgements

Our research has been supported by NSF Grant CCF-1116736 and PSC CUNY Awards 67699-00 45 and 68862-00 46. We are also grateful to the reviewers for valuable comments.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Departments of Mathematics and Computer Science, Lehman College of the City University of New York, Bronx, USA
  2. Ph.D. Programs in Mathematics and Computer Science, The Graduate Center of the City University of New York, New York, USA
