
The joint bidiagonalization process with partial reorthogonalization

  • Original Paper
  • Published in Numerical Algorithms

Abstract

The joint bidiagonalization (JBD) process is a useful algorithm for computing the generalized singular value decomposition (GSVD) of a matrix pair. In finite precision arithmetic, however, rounding errors cause the Lanczos vectors to lose their mutual orthogonality. To maintain some level of orthogonality, we present a semiorthogonalization strategy. Our rounding error analysis shows that the JBD process with the semiorthogonalization strategy ensures that the convergence of the computed quantities is not affected by rounding errors and that the final accuracy is sufficiently high. Based on the semiorthogonalization strategy, we develop the joint bidiagonalization process with partial reorthogonalization (JBDPRO). In JBDPRO, reorthogonalization occurs only when necessary, which saves a significant amount of reorthogonalization work compared with the full reorthogonalization strategy. Numerical experiments illustrate our theory and algorithm.
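The reorthogonalize-only-when-necessary idea can be illustrated with a minimal sketch of Golub-Kahan bidiagonalization in NumPy. This is an assumption-laden simplification, not the paper's JBDPRO: the actual algorithm works on the joint pair and estimates orthogonality levels by a cheap recurrence, whereas this sketch measures them with explicit inner products; the function name and threshold handling are illustrative.

```python
import numpy as np

def gk_bidiag_partial_reorth(A, b, k):
    """Golub-Kahan bidiagonalization of A started from b.  A new Lanczos
    vector is reorthogonalized against its predecessors only when its
    measured orthogonality level exceeds the semiorthogonality threshold
    sqrt(eps / (2k+1)).  Simplified illustration: production codes
    estimate the levels by a recurrence instead of forming V^T r."""
    m, n = A.shape
    tol = np.sqrt(np.finfo(float).eps / (2 * k + 1))
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)          # diagonal of the bidiagonal B_k
    beta = np.zeros(k)           # subdiagonal of B_k
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        r = A.T @ U[:, j]
        if j > 0:
            r -= beta[j - 1] * V[:, j - 1]
        # reorthogonalize against previous right vectors only if needed
        if j > 0 and np.max(np.abs(V[:, :j].T @ r)) > tol * np.linalg.norm(r):
            r -= V[:, :j] @ (V[:, :j].T @ r)
        alpha[j] = np.linalg.norm(r)
        V[:, j] = r / alpha[j]
        p = A @ V[:, j] - alpha[j] * U[:, j]
        # same test for the left vectors
        if np.max(np.abs(U[:, :j + 1].T @ p)) > tol * np.linalg.norm(p):
            p -= U[:, :j + 1] @ (U[:, :j + 1].T @ p)
        beta[j] = np.linalg.norm(p)
        U[:, j + 1] = p / beta[j]
    return U, V, alpha, beta
```

After k steps, A V_k = U_{k+1} B_k holds up to rounding, with B_k lower bidiagonal built from `alpha` and `beta`; the threshold keeps the computed Lanczos vectors semiorthogonal rather than fully orthogonal.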


Notes

  1. Here, we use the result of an exercise from Higham’s book [9, Chapter 6, Problem 6.14], which gives an upper bound on the p-norm of a row/column sparse matrix.

References

  1. Barlow, J.L.: Reorthogonalization for the Golub-Kahan-Lanczos bidiagonal reduction. Numer. Math. 124, 237–278 (2013)

  2. Björck, Å.: Numerical Methods for Least Squares Problems. SIAM, Philadelphia (1996)

  3. Davis, T.A., Hu, Y.: The University of Florida sparse matrix collection. ACM Trans. Math. Software 38, 1–25 (2011). Data available online at http://www.cise.ufl.edu/research/sparse/matrices/

  4. Golub, G.H., Kahan, W.: Calculating the singular values and pseudo-inverse of a matrix. SIAM J. Numer. Anal. 2, 205–224 (1965)

  5. Golub, G.H., van Loan, C.F.: Matrix Computations. Johns Hopkins University Press (2012)

  6. Hansen, P.C.: Regularization, GSVD and truncated GSVD. BIT 29, 491–504 (1989)

  7. Hansen, P.C.: Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. SIAM, Philadelphia (1998)

  8. Hansen, P.C.: Discrete Inverse Problems: Insight and Algorithms. SIAM, Philadelphia (2010)

  9. Higham, N.J.: Accuracy and Stability of Numerical Algorithms, 2nd edn. SIAM, Philadelphia (2002)

  10. Jia, Z., Li, H.: A rounding error analysis of the joint bidiagonalization process with applications to the GSVD computation. arXiv:1912.08505v4

  11. Jia, Z., Yang, Y.: A joint bidiagonalization based algorithm for large scale general-form Tikhonov regularization. Appl. Numer. Math. 157, 159–177 (2020)

  12. Kilmer, M.E., Hansen, P.C., Español, M.I.: A projection-based approach to general-form Tikhonov regularization. SIAM J. Sci. Comput. 29, 315–330 (2007)

  13. Lanczos, C.: An iteration method for the solution of the eigenvalue problem of linear differential and integral operators. J. Res. Nat. Bur. Standards 45, 255–282 (1950)

  14. Larsen, R.M.: Lanczos bidiagonalization with partial reorthogonalization. Technical report, Department of Computer Science, University of Aarhus (1998)

  15. Meurant, G., Strakoš, Z.: The Lanczos and conjugate gradient algorithms in finite precision arithmetic. Acta Numerica 15, 471–542 (2006)

  16. Paige, C.C.: The computation of eigenvalues and eigenvectors of very large sparse matrices. PhD thesis, University of London (1971)

  17. Paige, C.C.: Computational variants of the Lanczos method for the eigenproblem. J. Inst. Math. Appl. 10, 373–381 (1972)

  18. Paige, C.C.: Error analysis of the Lanczos algorithm for tridiagonalizing a symmetric matrix. J. Inst. Math. Appl. 18, 341–349 (1976)

  19. Paige, C.C.: Accuracy and effectiveness of the Lanczos algorithm for the symmetric eigenproblem. Linear Algebra Appl. 34, 235–258 (1980)

  20. Paige, C.C., Saunders, M.A.: Towards a generalized singular value decomposition. SIAM J. Numer. Anal. 18, 398–405 (1981)

  21. Paige, C.C., Saunders, M.A.: LSQR: An algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Software 8, 43–71 (1982)

  22. Parlett, B.N., Scott, D.S.: The Lanczos algorithm with selective orthogonalization. Math. Comput. 33, 217–238 (1979)

  23. Parlett, B.N.: The rewards for maintaining semi-orthogonality among Lanczos vectors. Numer. Linear Algebra Appl. 1, 243–267 (1992)

  24. Parlett, B.N.: The Symmetric Eigenvalue Problem. SIAM, Philadelphia (1998)

  25. Simon, H.D.: The Lanczos algorithm with partial reorthogonalization. Math. Comput. 42, 115–142 (1984)

  26. Simon, H.D.: Analysis of the symmetric Lanczos algorithm with reorthogonalization methods. Linear Algebra Appl. 61, 101–131 (1984)

  27. Simon, H.D., Zha, H.: Low-rank matrix approximation using the Lanczos bidiagonalization process with applications. SIAM J. Sci. Comput. 21, 2257–2274 (2000)

  28. van Loan, C.F.: Generalizing the singular value decomposition. SIAM J. Numer. Anal. 13, 76–83 (1976)

  29. van Loan, C.F.: Computing the CS and generalized singular value decomposition. Numer. Math. 46, 479–491 (1985)

  30. Zha, H.: Computing the generalized singular values/vectors of large sparse or structured matrix pairs. Numer. Math. 72, 391–417 (1996)


Funding

This work was supported in part by the National Natural Science Foundation of China (No. 11771249).

Author information

Correspondence to Zhongxiao Jia.


Appendix 1: Proofs of Lemma 3.1 and Lemma 3.2

Proof of Lemma 3.1

We prove (3.11) by mathematical induction. For i = 1, from (3.9) and (3.10), we have

$$ \begin{array}{@{}rcl@{}} \hat{\alpha}_{1}{Q_{L}^{T}}\hat{u}_{1} & =& {Q_{L}^{T}}Q_{L}\hat{\nu}_{1} -{Q_{L}^{T}}\hat{f}_{1} \\ & =& (I_{n}-{Q_{A}^{T}}Q_{A})\hat{\nu}_{1} -{Q_{L}^{T}}\hat{f}_{1} \\ & =& \hat{\nu}_{1}-{Q_{A}^{T}}(\alpha_{1}u_{1}+\beta_{2}u_{2}+f_{1})-{Q_{L}^{T}}\hat{f}_{1} \\ & =& \hat{\nu}_{1}-\alpha_{1}(\alpha_{1} \nu_{1}+g_{1})- \beta_{2}(\alpha_{2}\nu_{2}+\beta_{2}\nu_{1}+g_{2}) -{Q_{A}^{T}}f_{1}-{Q_{L}^{T}}\hat{f}_{1} \\ & =& (1-{\alpha_{1}^{2}}-{\beta_{2}^{2}})\hat{\nu}_{1} + \alpha_{2}\beta_{2}\hat{\nu}_{2} + O(\bar{q}(m,n,p)\varepsilon) . \end{array} $$

Next, suppose (3.11) is true for the indices up to i. For i + 1, we have

$$\hat{\alpha}_{i+1}{Q_{L}^{T}}\hat{u}_{i+1} = {Q_{L}^{T}}Q_{L}\hat{\nu}_{i+1}-\hat{\beta}_{i}{Q_{L}^{T}}\hat{u}_{i}- {\sum}_{j=1}^{i-1}\hat{\xi}_{ji+1}{Q_{L}^{T}}\hat{u}_{j}-{Q_{L}^{T}}\hat{f}_{i+1} .$$

Since \((\hat {\beta }_{i}{Q_{L}^{T}}\hat {u}_{i}- {\sum }_{j=1}^{i-1}\hat {\xi }_{ji+1}{Q_{L}^{T}}\hat {u}_{j}) \in span\{\hat {\nu }_{1}, \dots , \hat {\nu }_{i+1}\}+ O(\bar {q}(m,n,p)\varepsilon )\), we only need to prove \({Q_{L}^{T}}Q_{L}\hat {\nu }_{i+1}\in span\{\hat {\nu }_{1}, \dots , \hat {\nu }_{i+2}\}+ O(\bar {q}(m,n,p)\varepsilon )\). Notice that

$$ \begin{array}{@{}rcl@{}} {Q_{L}^{T}}Q_{L}\hat{\nu}_{i+1} & =& (I_{n}-{Q_{A}^{T}}Q_{A})\hat{\nu}_{i+1} \\ & =& \hat{\nu}_{i+1}+(-1)^{i+1}{Q_{A}^{T}}\left(\alpha_{i+1}u_{i+1}+\beta_{i+1}u_{i+2}+{\sum}_{j=1}^{i}\xi_{ji+1}u_{j}+f_{i+1}\right) \\ & =& \hat{\nu}_{i+1}+(-1)^{i+1}\left(\alpha_{i+1}{Q_{A}^{T}}u_{i+1}+\beta_{i+1}{Q_{A}^{T}}u_{i+2}+ {\sum}_{j=1}^{i}\xi_{ji+1}{Q_{A}^{T}}u_{j}\right) \\&&+ (-1)^{i+1}{Q_{A}^{T}}f_{i+1} . \end{array} $$

From (3.9), we have

$$ \begin{array}{@{}rcl@{}} (\alpha_{i+1}{Q_{A}^{T}}u_{i+1}+\beta_{i+1}{Q_{A}^{T}}u_{i+2}&+& {\sum}_{j=1}^{i}\xi_{ji+1}{Q_{A}^{T}}u_{j}) \in span\{\hat{\nu}_{1}, \dots, \hat{\nu}_{i+2}\}\\&+& O(\bar{q}(m,n,p)\varepsilon) , \end{array} $$

which completes the proof of the induction step.

By the mathematical induction principle, (3.11) holds for all \(i = 1, 2, \dots ,k\). □

Proof of Lemma 3.2

By (3.8) and (3.9), the process of computing \(U_{k+1}\) and \(V_{k}\) can be treated as the Lanczos bidiagonalization of \(Q_{A}\) with the semiorthogonalization strategy. Since the k-step Lanczos bidiagonalization process is equivalent to the (2k + 1)-step symmetric Lanczos process [2, §7.6.1], the bounds for \(C_{k}\) and \(D_{k}\) can be deduced from the properties of the symmetric Lanczos process with the semiorthogonalization strategy; see [26, Lemma 4] and its proof.
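The equivalence invoked here can be checked numerically. For a \((k+1)\times k\) lower bidiagonal \(B_k\), a perfect-shuffle permutation of the augmented matrix \(\begin{pmatrix} 0 & B_k \\ B_k^T & 0\end{pmatrix}\) gives a zero-diagonal tridiagonal matrix of order 2k + 1 whose off-diagonals interleave the \(\alpha_i\) and \(\beta_{i+1}\), and whose eigenvalues are \(\pm\sigma_i(B_k)\) together with zero. A small NumPy check of this classical fact (illustrative, not taken from the paper):

```python
import numpy as np

# Build a random (k+1) x k lower bidiagonal B_k with diagonal alpha_i
# and subdiagonal beta_{i+1}, as produced by Lanczos bidiagonalization.
rng = np.random.default_rng(0)
k = 4
alpha = rng.uniform(0.5, 1.5, k)
beta = rng.uniform(0.5, 1.5, k)      # beta_2, ..., beta_{k+1}
B = np.zeros((k + 1, k))
for i in range(k):
    B[i, i] = alpha[i]
    B[i + 1, i] = beta[i]

# Zero-diagonal tridiagonal of order 2k+1 with off-diagonal entries
# (alpha_1, beta_2, alpha_2, beta_3, ..., alpha_k, beta_{k+1}).
off = np.empty(2 * k)
off[0::2] = alpha
off[1::2] = beta
T = np.diag(off, 1) + np.diag(off, -1)

# Its eigenvalues are +/- the singular values of B_k, plus a zero.
sigma = np.linalg.svd(B, compute_uv=False)
eig = np.sort(np.linalg.eigvalsh(T))
expected = np.sort(np.concatenate([-sigma, [0.0], sigma]))
assert np.allclose(eig, expected)
```

Because the two matrices are permutation-similar, the correspondence is exact up to floating-point error, which is why properties of the symmetric Lanczos process transfer directly to the bidiagonalization.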

Now, we give the bound of \(\widehat {C}_{k}\). At the (i − 1)-th step, from (3.10), we can write the reorthogonalization step of \(\hat {u}_{i}\) as

$$ \begin{array}{@{}rcl@{}} & \hat{\alpha}_{i}^{\prime}\hat{u}_{i}^{\prime} = Q_{L}\hat{\nu}_{i}-\hat{\beta}_{i-1}\hat{u}_{i-1}-\hat{f}_{i}^{\prime}, \end{array} $$
(A.1)
$$ \begin{array}{@{}rcl@{}} & \hat{\alpha}_{i}\hat{u}_{i} = \hat{\alpha}_{i}^{\prime}\hat{u}_{i}^{\prime}- {\sum}_{j=1}^{i-2}\hat{\xi}_{ji}\hat{u}_{j}-\hat{f}_{i}^{\prime\prime} , \end{array} $$
(A.2)

where \(\|\hat {f}_{i}^{\prime }\|, \|\hat {f}_{i}^{\prime \prime }\| = O(q_{3}(p,n)\varepsilon )\). Thus, for \(l=1,\dots , i-2\), we have

$$\hat{\alpha}_{i}^{\prime}\hat{u}_{l}^{T}\hat{u}_{i}^{\prime} = \hat{u}_{l}^{T}Q_{L}\hat{\nu}_{i}-\hat{\beta}_{i-1}\hat{u}_{l}^{T}\hat{u}_{i-1}- \hat{u}_{l}^{T}\hat{f}_{i}^{\prime} .$$

From (3.11) and its proof, we know that

$$ {Q_{L}^{T}}\hat{u}_{l}={\sum}_{j=1}^{l+1}\lambda_{j}\hat{\nu}_{j} + O(\bar{q}(m,n,p)\varepsilon) $$

with modest constants λj for \(j=1,\dots , l+1\). Noticing that \(|\hat {u}_{l}^{T}\hat {u}_{i-1}|, |\hat {\nu }_{j}^{T}\hat {\nu }_{i}| \leq \sqrt {\varepsilon /(2k+1)}\) for \(l=1,\dots , i-2\) and \(j=1,\dots , l+1\), we obtain

$$ \hat{\alpha}_{i}^{\prime}\hat{u}_{l}^{T}\hat{u}_{i}^{\prime}= {\sum}_{j=1}^{l+1}\lambda_{j}\hat{\nu}_{j}^{T}\hat{\nu}_{i}- \hat{\beta}_{i-1}\hat{u}_{l}^{T}\hat{u}_{i-1} + O(\bar{q}(m,n,p)\varepsilon) =O(\sqrt{\varepsilon}) .$$

Next, we prove that \(M = \max \limits _{1\leq j \leq i-1}|\hat {\xi }_{ji}|=O(\sqrt {\varepsilon })\). Premultiplying (A.2) by \(\hat {u}_{l}^{T}\) and rearranging, we obtain

$$\hat{\xi}_{li} = \hat{\alpha}_{i}^{\prime}\hat{u}_{l}^{T}\hat{u}_{i}^{\prime}-\hat{\alpha}_{i}\hat{u}_{l}^{T}\hat{u}_{i}-{\sum}_{j=1,j\neq l}^{i-2}\hat{\xi}_{ji}\hat{u}_{l}^{T}\hat{u}_{j}-\hat{u}_{l}^{T}\hat{f}_{i}^{\prime\prime} .$$

Since \(\hat {u}_{l}^{T}\hat {u}_{i}=O(\sqrt {\varepsilon })\) and we have proved that \(\hat {\alpha }_{i}^{\prime }\hat {u}_{l}^{T}\hat {u}_{i}^{\prime }=O(\sqrt {\varepsilon })\) for \(l=1,\dots , i-2\), we obtain

$$ |\hat{\xi}_{li}| \leq O(\sqrt{\varepsilon})+O(\sqrt{\varepsilon})+iM\sqrt{\varepsilon}+O(\bar{q}(m,n,p)\varepsilon) . $$

The right-hand side above no longer depends on l, so taking the maximum over l on the left-hand side finally yields

$$ (1-i\sqrt{\varepsilon})M \leq O(\sqrt{\varepsilon}) + O(\bar{q}(m,n,p)\varepsilon) . $$

Therefore, we have \(M = O(\sqrt {\varepsilon })\). □


Cite this article

Jia, Z., Li, H. The joint bidiagonalization process with partial reorthogonalization. Numer Algor 88, 965–992 (2021). https://doi.org/10.1007/s11075-020-01064-8

