Thick-restart Lanczos method for symmetric eigenvalue problems

  • Kesheng Wu
  • Horst D. Simon
Regular Talks
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1457)


This paper describes a restarted Lanczos algorithm that is particularly suitable for implementation on distributed machines. The only communication operation required outside of the matrix-vector multiplication is a global sum. For most large eigenvalue problems, the global sum operation takes a small fraction of the total execution time; the majority of the computation is spent in the matrix-vector multiplication. Efficient parallel matrix-vector multiplication routines can be found in many parallel sparse matrix packages, such as AZTEC [9], BLOCKSOLVE [10], PETSc [3], and P_SPARSLIB. For this reason, our main emphasis in this paper is to demonstrate the correctness and the effectiveness of the new algorithm.
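
To make the structure of the iteration concrete, below is a minimal serial sketch of a thick-restart Lanczos iteration in Python/NumPy for the smallest eigenpairs of a symmetric matrix. The function name, the default basis size m, the number k of retained Ritz vectors, the full reorthogonalization, and the convergence test are illustrative choices of this sketch, not the authors' implementation. The comments mark the inner products and norms that would become global sums on a distributed machine, the only communication outside of the matrix-vector multiplication.

```python
import numpy as np

def thick_restart_lanczos(A, nev=4, m=20, k=8, tol=1e-8, max_restarts=100, seed=0):
    """Sketch: approximate the nev smallest eigenpairs of a symmetric matrix A.

    m is the maximum basis size before a restart; k is the number of Ritz
    vectors kept at each restart (nev < k < m).
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)

    # Basis Q, projected matrix T = Q^T A Q, next candidate vector v (unit
    # norm, orthogonal to Q), and coupling vector c = Q^T A v.
    Q = np.zeros((n, 0))
    T = np.zeros((0, 0))
    c = np.zeros(0)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)

    for _ in range(max_restarts):
        # Expand the basis up to m vectors.
        while Q.shape[1] < m:
            w = A @ v                      # matrix-vector multiplication
            alpha = v @ w                  # inner product: a global sum in parallel
            j = Q.shape[1]
            T_new = np.zeros((j + 1, j + 1))
            T_new[:j, :j] = T
            T_new[:j, j] = c               # arrowhead column right after a restart,
            T_new[j, :j] = c               # a single off-diagonal beta otherwise
            T_new[j, j] = alpha
            T = T_new
            Q = np.hstack([Q, v[:, None]])
            # Full reorthogonalization; the products Q^T w are global sums too.
            w -= Q @ (Q.T @ w)
            beta = np.linalg.norm(w)       # norm: also a global sum
            if beta < 1e-12:               # invariant subspace reached
                c = np.zeros(Q.shape[1])
                break
            v = w / beta
            c = np.zeros(Q.shape[1])
            c[-1] = beta

        # Ritz pairs of the small projected problem; the residual norm of the
        # i-th Ritz pair is |c^T y_i|.
        theta, Y = np.linalg.eigh(T)
        resid = np.abs(c @ Y[:, :nev])
        if np.all(resid <= tol * max(1.0, np.abs(theta[:nev]).max())):
            return theta[:nev], Q @ Y[:, :nev]

        # Thick restart: keep the k smallest Ritz vectors plus the last
        # Lanczos vector v; the projected matrix becomes diagonal with an
        # arrowhead coupling to v.
        Q = Q @ Y[:, :k]
        T = np.diag(theta[:k])
        c = Y[:, :k].T @ c

    # Not converged within max_restarts; return the current Ritz approximations.
    theta, Y = np.linalg.eigh(T)
    return theta[:nev], Q @ Y[:, :nev]
```

For example, calling thick_restart_lanczos(np.diag(np.arange(1.0, 101.0)), nev=4) should return approximations to the eigenvalues 1, 2, 3, 4 of this diagonal test matrix. In a distributed implementation the rows of Q, v, and w would be partitioned across processors, so each marked inner product or norm reduces to a local product followed by one global sum, which is exactly the communication pattern described above.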




References

  [1] W. E. Arnoldi. The principle of minimized iterations in the solution of the matrix eigenvalue problem. Quarterly of Applied Mathematics, 9:17–29, 1951.
  [2] J. Baglama, D. Calvetti, and L. Reichel. Iterative methods for the computation of a few eigenvalues of a large symmetric matrix. BIT, 36:400–421, 1996.
  [3] S. Balay, W. Gropp, L. C. McInnes, and B. Smith. PETSc 2.0 users manual. Technical Report ANL-95/11, Mathematics and Computer Science Division, Argonne National Laboratory, 1995.
  [4] A. Chapman and Y. Saad. Deflated and augmented Krylov subspace techniques. Technical Report UMSI 95/181, Minnesota Supercomputing Institute, University of Minnesota, 1995.
  [5] M. Crouzeix, B. Philippe, and M. Sadkane. The Davidson method. SIAM J. Sci. Comput., 15:62–76, 1994.
  [6] E. R. Davidson. The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real-symmetric matrices. J. Comput. Phys., 17:87–94, 1975.
  [7] E. R. Davidson. Super-matrix methods. Computer Physics Communications, 53:49–60, 1989.
  [8] G. H. Golub and C. F. van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, MD, third edition, 1996.
  [9] S. A. Hutchinson, J. N. Shadid, and R. S. Tuminaro. AZTEC user's guide. Technical Report SAND95-1559, Massively Parallel Computing Research Laboratory, Sandia National Laboratories, Albuquerque, NM, 1995.
  [10] M. T. Jones and P. E. Plassmann. BlockSolve95 users manual: scalable library software for the parallel solution of sparse linear systems. Technical Report ANL-95/48, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL, 1995.
  [11] R. B. Lehoucq. Analysis and implementation of an implicitly restarted Arnoldi iteration. PhD thesis, Rice University, 1995.
  [12] R. B. Morgan. On restarting the Arnoldi method for large nonsymmetric eigenvalue problems. Mathematics of Computation, 65(215):1213–1230, July 1996.
  [13] B. N. Parlett. The Symmetric Eigenvalue Problem. Prentice-Hall, Englewood Cliffs, NJ, 1980.
  [14] Y. Saad. Analysis of augmented Krylov subspace techniques. Technical Report UMSI 95/175, Minnesota Supercomputing Institute, University of Minnesota, 1995.
  [15] Y. Saad. Numerical Methods for Large Eigenvalue Problems. Manchester University Press, 1993.
  [16] Y. Saad. Iterative Methods for Sparse Linear Systems. PWS Publishing, Boston, MA, 1996.
  [17] M. Sadkane. A block Arnoldi-Chebyshev method for computing the leading eigenpairs of large sparse unsymmetric matrices. Numer. Math., 64(2):181–193, 1993.
  [18] D. Sorensen, R. Lehoucq, P. Vu, and C. Yang. ARPACK: an implementation of the implicitly restarted Arnoldi iteration that computes some of the eigenvalues and eigenvectors of a large sparse matrix, 1995.
  [19] D. C. Sorensen. Implicit application of polynomial filters in a k-step Arnoldi method. SIAM J. Matrix Anal. Appl., 13(1):357–385, 1992.
  [20] A. Stathopoulos, Y. Saad, and K. Wu. Thick restarting of the Davidson method: an extension to implicit restarting. In T. Manteuffel, S. McCormick, L. Adams, S. Ashby, H. Elman, R. Freund, A. Greenbaum, S. Parter, P. Saylor, N. Trefethen, H. van der Vorst, H. Walker, and O. Widlund, editors, Proceedings of the Copper Mountain Conference on Iterative Methods, Copper Mountain, Colorado, 1996.
  [21] A. Stathopoulos, Y. Saad, and K. Wu. Dynamic thick restarting of the Davidson and the implicitly restarted Arnoldi methods. SIAM J. Sci. Comput., 19(1):227–245, 1998.
  [22] K. Wu. Preconditioned Techniques for Large Eigenvalue Problems. PhD thesis, University of Minnesota, 1997. An updated version also appears as Technical Report TR97-038 at the Computer Science Department.
  [23] K. Wu and H. Simon. Thick-restart Lanczos method for symmetric eigenvalue problems. Technical Report 41412, Lawrence Berkeley National Laboratory, 1998.

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Kesheng Wu (1)
  • Horst D. Simon (1)
  1. Lawrence Berkeley National Laboratory / NERSC, Berkeley
