Comparison of Tridiagonalization Methods Using High-Precision Arithmetic with MuPAT

  • Ryoya Ino
  • Kohei Asami
  • Emiko Ishiwata
  • Hidehiko Hasegawa
Conference paper
Part of the Lecture Notes in Computational Science and Engineering book series (LNCSE, volume 117)

Abstract

In general, when computing the eigenvalues of a symmetric matrix, the matrix is first tridiagonalized by an orthogonal transformation. The Householder transformation, one such tridiagonalization method, is accurate and stable for dense matrices, but is not applicable to sparse matrices because of its memory requirements. The Lanczos and Arnoldi methods are also used for tridiagonalization and are applicable to sparse matrices, but they are sensitive to computational errors. To obtain a stable algorithm, one must either apply numerous stabilization techniques to the original algorithm, or simply use more accurate arithmetic within it. In floating-point arithmetic, computational errors are unavoidable, but they can be reduced by using high-precision arithmetic, such as double-double (DD) or quad-double (QD) arithmetic. In the present study, we compare double, double-double, and quad-double arithmetic for three tridiagonalization methods: the Householder method, the Lanczos method, and the Arnoldi method. To evaluate the robustness of these methods, we applied them to dense matrices that are appropriate for the Householder method. We found that, using high-precision arithmetic, the Arnoldi method can produce good tridiagonal matrices for some problems, whereas the Lanczos method cannot.
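The double-double idea mentioned in the abstract can be illustrated with a small sketch: a DD number is stored as an unevaluated sum of two doubles (hi, lo), and arithmetic is built from error-free transformations. The function names below (`two_sum`, `dd_add`) are illustrative only and are not the MuPAT toolbox's actual API; this is a simplified DD addition, not the full algorithm used in the paper.

```python
# Sketch of double-double (DD) arithmetic: a value is the unevaluated sum
# hi + lo of two IEEE doubles, giving roughly 32 significant decimal digits.
# Names and structure here are assumptions for illustration, not MuPAT's API.

def two_sum(a, b):
    """Knuth's error-free transformation: returns (s, e) with
    s = fl(a + b) and s + e == a + b exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def quick_two_sum(a, b):
    """Error-free sum under the assumption |a| >= |b|."""
    s = a + b
    e = b - (s - a)
    return s, e

def dd_add(x, y):
    """Simplified addition of two DD numbers x = (xhi, xlo), y = (yhi, ylo)."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]          # accumulate the low-order parts
    return quick_two_sum(s, e)  # renormalize so |hi| dominates |lo|

# In plain double precision, 1.0 + 1e-20 rounds to 1.0 and the tiny term
# is lost; in DD form it survives in the low-order component.
hi, lo = dd_add((1.0, 0.0), (1e-20, 0.0))
# hi == 1.0, lo == 1e-20
```

Subtraction, multiplication, and division are built from similar error-free building blocks (Dekker's product splitting), and QD arithmetic extends the same idea to four doubles.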

Acknowledgements

The authors would like to thank the reviewers for their careful reading and helpful suggestions, and Mr. Takeru Shiiba of Tokyo University of Science for his kind support with the numerical experiments. The present study was supported by Grant-in-Aid for Scientific Research (C) No. 25330141 from the Japan Society for the Promotion of Science.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Ryoya Ino (1)
  • Kohei Asami (1)
  • Emiko Ishiwata (1)
  • Hidehiko Hasegawa (2)
  1. Department of Mathematical Information Science, Tokyo University of Science, Tokyo, Japan
  2. Faculty of Library, Information and Media Science, University of Tsukuba, Tsukuba, Japan