
A Kogbetliantz-type algorithm for the hyperbolic SVD


In this paper, a two-sided, parallel Kogbetliantz-type algorithm for the hyperbolic singular value decomposition (HSVD) of real and complex square matrices is developed, under the single assumption that the input matrix, of order n, admits such a decomposition into the product of a unitary matrix, a non-negative diagonal matrix, and a J-unitary matrix, where J is a given diagonal matrix of positive and negative signs. When J = ±I, the proposed algorithm computes the ordinary SVD. The paper's most important contribution, a derivation of formulas for the HSVD of 2 × 2 matrices, is presented first, followed by the details of their implementation in floating-point arithmetic. Next, the effects of the hyperbolic transformations on the columns of the iteration matrix are discussed. These effects then guide a redesign of the dynamic pivot ordering, a well-established pivot strategy for the ordinary Kogbetliantz algorithm, for the general n × n HSVD. A heuristic but sound convergence criterion is then proposed, which contributes to the high accuracy demonstrated in the numerical testing. The J-Kogbetliantz algorithm presented here is intrinsically slow, but is nevertheless usable for matrices of small orders.
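The defining properties of such a decomposition can be checked numerically. The following Python/NumPy sketch (an illustration only, not the paper's algorithm; the choice of J, the parameter t, and all numerical values are arbitrary assumptions for the example) constructs a real 2 × 2 matrix that admits an HSVD by building the three factors directly:

```python
import numpy as np

rng = np.random.default_rng(0)
J = np.diag([1.0, -1.0])                    # signature matrix (example choice)

# A J-unitary factor: V^T J V = J.  For real 2x2 matrices and this J,
# a hyperbolic rotation with an arbitrary parameter t works.
t = 0.7
V = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])

# A unitary (here: orthogonal) factor from a QR factorization.
U, _ = np.linalg.qr(rng.standard_normal((2, 2)))

# A non-negative diagonal factor of hyperbolic singular values.
S = np.diag([3.0, 0.5])

A = U @ S @ V                               # admits an HSVD by construction

assert np.allclose(V.T @ J @ V, J)          # V is J-unitary
assert np.allclose(U.T @ U, np.eye(2))      # U is orthogonal
assert np.all(np.diag(S) >= 0.0)            # Sigma is non-negative diagonal
```

For J = I the J-unitary factor would reduce to an orthogonal (unitary) one, and the factorization would be an ordinary SVD, as the abstract notes.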




Footnotes

  1. For a = 0, η(a) = 0 instead of a huge negative integer, so taking \(\max\limits\{|a|,\omega\}\) filters out such a.

  2. For example, the minimumNumber operation of the IEEE 754-2019 standard [12, section 9.6].
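Footnote 1 can be illustrated with a small Python sketch. The names η and ω are the paper's; this sketch assumes η behaves like the binary exponent returned by `math.frexp` (which yields exponent 0 for a zero argument) and takes ω to be an arbitrary tiny positive threshold:

```python
import math

def eta(a: float) -> int:
    """Binary exponent of a; math.frexp returns (0.0, 0) for a == 0,
    i.e. eta(0) == 0 rather than a huge negative integer."""
    return math.frexp(a)[1]

omega = 2.0 ** -1022                 # tiny positive threshold (an assumption)

assert eta(0.0) == 0                 # not a huge negative value
# Taking max(|a|, omega) keeps the quantity positive, filtering out a == 0:
assert max(abs(0.0), omega) == omega
```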


References

1. Anderson, E., Bai, Z., Bischof, C., Blackford, S., Demmel, J., Dongarra, J., Du Croz, J., Greenbaum, A., Hammarling, S., McKenney, A., Sorensen, D.: LAPACK Users’ Guide. Software, Environments and Tools, 3rd edn. Society for Industrial and Applied Mathematics, Philadelphia (1999)
2. Baudet, G., Stevenson, D.: Optimal sorting algorithms for parallel computers. IEEE Trans. Comput. C-27(1), 84–87 (1978)
3. Bečka, M., Okša, G., Vajteršic, M.: Dynamic ordering for a parallel block-Jacobi SVD algorithm. Parallel Comput. 28(2), 243–262 (2002)
4. Bojanczyk, A.W., Onn, R., Steinhardt, A.O.: Existence of the hyperbolic singular value decomposition. Linear Algebra Appl. 185, 21–30 (1993)
5. Charlier, J.P., Vanbegin, M., Van Dooren, P.: On efficient implementations of Kogbetliantz’s algorithm for computing the singular value decomposition. Numer. Math. 52(3), 279–300 (1987)
6. Drmač, Z.: Implementation of Jacobi rotations for accurate singular value computation in floating point arithmetic. SIAM J. Sci. Comput. 18(4), 1200–1222 (1997)
7. Hari, V., Matejaš, J.: Accuracy of two SVD algorithms for 2 × 2 triangular matrices. Appl. Math. Comput. 210(1), 232–257 (2009)
8. Hari, V., Singer, S., Singer, S.: Block-oriented J-Jacobi methods for Hermitian matrices. Linear Algebra Appl. 433(8–10), 1491–1512 (2010)
9. Hari, V., Singer, S., Singer, S.: Full block J-Jacobi method for Hermitian matrices. Linear Algebra Appl. 444, 1–27 (2014)
10. Hari, V., Veselić, K.: On Jacobi methods for singular value decompositions. SIAM J. Sci. Stat. Comput. 8(5), 741–754 (1987)
11. Hari, V., Zadelj-Martić, V.: Parallelizing the Kogbetliantz method: a first attempt. J. Numer. Anal. Ind. Appl. Math. 2(1–2), 49–66 (2007)
12. IEEE Computer Society: 754-2019 - IEEE Standard for Floating-Point Arithmetic. IEEE, New York (2019)
13. ISO/IEC JTC1/SC22/WG5: ISO/IEC 1539-1:2018(en) Information technology — Programming languages — Fortran — Part 1: Base language, 4th edn. ISO, Geneva (2018)
14. Kogbetliantz, E.G.: Solution of linear equations by diagonalization of coefficients matrix. Quart. Appl. Math. 13(2), 123–132 (1955)
15. Kulikov, G.Yu., Kulikova, M.V.: Hyperbolic-singular-value-decomposition-based square-root accurate continuous-discrete extended-unscented Kalman filters for estimating continuous-time stochastic models with discrete measurements. Int. J. Robust Nonlinear Control 30, 2033–2058 (2020)
16. Kulikova, M.V.: Hyperbolic SVD-based Kalman filtering for Chandrasekhar recursion. IET Control Theory Appl. 13(10), 1525–1531 (2019)
17. Kulikova, M.V.: Square-root approach for Chandrasekhar-based maximum correntropy Kalman filtering. IEEE Signal Process. Lett. 26(12), 1803–1807 (2019)
18. Mackey, D.S., Mackey, N., Tisseur, F.: Structured factorizations in scalar product spaces. SIAM J. Matrix Anal. Appl. 27(3), 821–850 (2005)
19. Matejaš, J., Hari, V.: Accuracy of the Kogbetliantz method for scaled diagonally dominant triangular matrices. Appl. Math. Comput. 217(8), 3726–3746 (2010)
20. Matejaš, J., Hari, V.: On high relative accuracy of the Kogbetliantz method. Linear Algebra Appl. 464, 100–129 (2015)
21. Novaković, V.: A hierarchically blocked Jacobi SVD algorithm for single and multiple graphics processing units. SIAM J. Sci. Comput. 37(1), C1–C30 (2015)
22. Novaković, V.: Batched computation of the singular value decompositions of order two by the AVX-512 vectorization. Parallel Process. Lett. 30(4), 2050015 (2020)
23. Novaković, V., Singer, S.: A GPU-based hyperbolic SVD algorithm. BIT 51(4), 1009–1030 (2011)
24. NVIDIA Corp.: CUDA C++ Programming Guide v10.2.89 (2019)
25. Okša, G., Yamamoto, Y., Bečka, M., Vajteršic, M.: Asymptotic quadratic convergence of the two-sided serial and parallel block-Jacobi SVD algorithm. SIAM J. Matrix Anal. Appl. 40(2), 639–671 (2019)
26. Onn, R., Steinhardt, A.O., Bojanczyk, A.W.: The hyperbolic singular value decomposition and applications. IEEE Trans. Signal Process. 39(7), 1575–1588 (1991)
27. OpenMP ARB: OpenMP Application Programming Interface, Version 5.0 (2018)
28. Singer, S.: Indefinite QR factorization. BIT 46(1), 141–161 (2006)
29. Singer, S., Di Napoli, E., Novaković, V., Čaklović, G.: The LAPW method with eigendecomposition based on the Hari–Zimmermann generalized hyperbolic SVD. SIAM J. Sci. Comput. 42(5), C265–C293 (2020)
30. Singer, S., Singer, S., Novaković, V., Davidović, D., Bokulić, K., Ušćumlić, A.: Three-level parallel J-Jacobi algorithms for Hermitian matrices. Appl. Math. Comput. 218(9), 5704–5725 (2012)
31. Singer, S., Singer, S., Novaković, V., Ušćumlić, A., Dunjko, V.: Novel modifications of parallel Jacobi algorithms. Numer. Algorithms 59(1), 1–27 (2012)
32. Slapničar, I.: Accurate symmetric eigenreduction by a Jacobi method. Ph.D. thesis, FernUniversität–Gesamthochschule, Hagen (1992)
33. Slapničar, I.: Componentwise analysis of direct factorization of real symmetric and Hermitian matrices. Linear Algebra Appl. 272, 227–275 (1998)
34. Stewart, G.W.: An updating algorithm for subspace tracking. IEEE Trans. Signal Process. 40(6), 1535–1541 (1992)
35. Veselić, K.: A Jacobi eigenreduction algorithm for definite matrix pairs. Numer. Math. 64(1), 241–269 (1993)
36. Zha, H.: A note on the existence of the hyperbolic singular value decomposition. Linear Algebra Appl. 240, 199–205 (1996)



Acknowledgements

We are much indebted to Saša Singer for his suggestions on the paper’s subject, and to the anonymous referee for significantly improving the presentation of the paper.


Funding

This work has been supported in part by the Croatian Science Foundation under the project IP–2014–09–3670 “Matrix Factorizations and Block Diagonalization Algorithms”.

Author information



Corresponding author

Correspondence to Vedran Novaković.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Author contribution

The second author formulated the research topic, reviewed the literature, plotted the figures and proofread the manuscript. The first author performed the rest of the research tasks.

Data availability

Several animations and the full testing dataset with the matrix outputs are available at or by request.

Code availability

The source code is available in a repository.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work is dedicated to the memory of Sanja Singer.

Appendix: Proofs of Lemmas 3.1 and 3.2


Proof (Lemma 3.1) Let, for \(1\le \ell \le n\), \(\gamma _{\ell }=\arg (x_{\ell })\) and \(\delta _{\ell }=\arg (y_{\ell })\). Then,

$$ \left[\begin{array}{cc}x_{\ell}&y_{\ell} \end{array}\right]= \left[\begin{array}{cc}e^{\mathrm{i}\gamma_{\ell}}|x_{\ell}|&e^{\mathrm{i}\delta_{\ell}}|y_{\ell}| \end{array}\right] = e^{\mathrm{i}\gamma_{\ell}}\left[\begin{array}{cc}|x_{\ell}|&e^{\mathrm{i}(\delta_{\ell}-\gamma_{\ell})}|y_{\ell}| \end{array}\right]= e^{\mathrm{i}\delta_{\ell}}\left[\begin{array}{cc}e^{\mathrm{i}(\gamma_{\ell}-\delta_{\ell})}|x_{\ell}|&|y_{\ell}| \end{array}\right]. $$

Using the second equality, the matrix multiplication gives

$$ x_{\ell}^{\prime}=e^{\mathrm{i}\gamma_{\ell}^{}}(|x_{\ell}^{}|\cosh\psi+e^{\mathrm{i}(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)}|y_{\ell}^{}|\sinh\psi), $$

and, using the third equality, the matrix multiplication gives

$$ y_{\ell}^{\prime}=e^{\mathrm{i}\delta_{\ell}^{}}(e^{\mathrm{i}(\gamma_{\ell}^{}-\delta_{\ell}^{}+\beta)}|x_{\ell}^{}|\sinh\psi+|y_{\ell}^{}|\cosh\psi). $$

Since \(|e^{-\mathrm {i}\gamma _{\ell }^{}}x_{\ell }^{\prime }|=|x_{\ell }^{\prime }|\) and \(|e^{-\mathrm {i}\delta _{\ell }^{}}y_{\ell }^{\prime }|=|y_{\ell }^{\prime }|\) and \(\cos \limits (-\phi )=\cos \limits \phi \), it holds

$$ \begin{aligned} |x_{\ell}^{\prime}|^2 &=(|x_{\ell}^{}|\cosh\psi+\cos(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)|y_{\ell}^{}|\sinh\psi)^2+(\sin(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)|y_{\ell}^{}|\sinh\psi)^2\\ &=|x_{\ell}^{}|^2\cosh^2\psi+|y_{\ell}^{}|^2\sinh^2\psi+\cos(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)|x_{\ell}^{}||y_{\ell}^{}|2\cosh\psi\sinh\psi,\\ |y_{\ell}^{\prime}|^2 &=(\cos(\gamma_{\ell}^{}-\delta_{\ell}^{}+\beta)|x_{\ell}^{}|\sinh\psi+|y_{\ell}^{}|\cosh\psi)^2+(\sin(\gamma_{\ell}^{}-\delta_{\ell}^{}+\beta)|x_{\ell}^{}|\sinh\psi)^2\\ &=|x_{\ell}^{}|^2\sinh^2\psi+|y_{\ell}^{}|^2\cosh^2\psi+\cos(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)|x_{\ell}^{}||y_{\ell}^{}|2\cosh\psi\sinh\psi. \end{aligned} $$

After grouping the terms, the square of the Frobenius norm of the new \(\ell \)th row is

$$ \begin{aligned} \left\|\left[\begin{array}{cc}x_{\ell}^{\prime} & y_{\ell}^{\prime} \end{array}\right]\right\|_{F}^{2}= |x_{\ell}^{\prime}|^{2}+|y_{\ell}^{\prime}|^{2}&= (\cosh^{2}\psi+\sinh^{2}\psi)(|x_{\ell}^{}|^{2}+|y_{\ell}^{}|^{2})\\ &\quad+2\cos(\delta_{\ell}^{}-\gamma_{\ell}^{}-\beta)|x_{\ell}^{}||y_{\ell}^{}|2\cosh\psi\sinh\psi. \end{aligned} \tag{A.1} $$

Summing the left-hand side of (A.1) over all \(\ell \), one obtains

$$ \left\|\left[\begin{array}{cc}\mathbf{x}^{\prime}&\mathbf{y}^{\prime} \end{array}\right]\right\|_F^2= \sum\limits_{\ell=1}^n\left( |x_{\ell}^{\prime}|^2+|y_{\ell}^{\prime}|^2\right), $$

which equals the right-hand side of (A.1), summed over all \(\ell \),

$$ \sum\limits_{\ell=1}^n\left( (\cosh^2\psi+\sinh^2\psi)(|x_{\ell}^{}|^2+|y_{\ell}^{}|^2)+2\zeta_{\ell}^{}|x_{\ell}^{}||y_{\ell}^{}|2\cosh\psi\sinh\psi\right), $$

where \(-1\le \zeta _{\ell }^{}=\cos \limits (\delta _{\ell }^{}-\gamma _{\ell }^{}-\beta )\le 1\), so \(|\zeta _{\ell }^{}|\le 1\). The last sum can be split into a non-negative part and the remaining part of an arbitrary sign,

$$ (\cosh^2\psi+\sinh^2\psi)\sum\limits_{\ell=1}^n(|x_{\ell}^{}|^2+|y_{\ell}^{}|^2)+ 2\cosh\psi\sinh\psi\sum\limits_{\ell=1}^n 2\zeta_{\ell}^{}|x_{\ell}^{}||y_{\ell}^{}|. $$

Using the triangle inequality, and observing that \({\sum }_{\ell =1}^{n}(|x_{\ell }^{}|^{2}+|y_{\ell }^{}|^{2})=\left \|\left [\begin {array}{cc}\mathbf {x}&\mathbf {y} \end {array}\right ]\right \|_{F}^{2}\), this value can be bounded above by

$$ (\cosh^2\psi+\sinh^2\psi)\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y} \end{array}\right]\right\|_F^2+ 2\cosh\psi|\sinh\psi|\sum\limits_{\ell=1}^n 2|x_{\ell}^{}||y_{\ell}^{}|, $$

and below by

$$ (\cosh^2\psi+\sinh^2\psi)\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y} \end{array}\right]\right\|_F^2- 2\cosh\psi|\sinh\psi|{\sum}_{\ell=1}^n 2|x_{\ell}^{}||y_{\ell}^{}|, $$

where both bounds can be simplified by the identities

$$ \cosh^2\psi+\sinh^2\psi=\cosh(2\psi),\qquad 2\cosh\psi|\sinh\psi|=|\sinh(2\psi)|. $$

By the inequality of arithmetic and geometric means, \(2|x_{\ell }^{}||y_{\ell }^{}|\le (|x_{\ell }^{}|^{2}+|y_{\ell }^{}|^{2})\), so a further upper bound is obtained as

$$ \cosh(2\psi)\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y} \end{array}\right]\right\|_F^2+ |\sinh(2\psi)|\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y} \end{array}\right]\right\|_F^2, $$

and a further lower bound as

$$ \cosh(2\psi)\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y} \end{array}\right]\right\|_F^2- |\sinh(2\psi)|\left\|\left[\begin{array}{cc}\mathbf{x}&\mathbf{y} \end{array}\right]\right\|_F^2, $$

which, after grouping the terms and dividing by \(\left \|\left [\begin {array}{cc}\mathbf {x}&\mathbf {y} \end {array}\right ]\right \|_{F}^{2}\), concludes the proof. □
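These bounds can be spot-checked numerically. A minimal Python/NumPy sketch follows; the 2 × 2 matrix form of the transformation is reconstructed from the formulas for \(x_{\ell }^{\prime }\) and \(y_{\ell }^{\prime }\) above, and the data, ψ, and β are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, psi, beta = 5, 0.3, 0.9

# Random complex columns x and y, stacked as rows [x_l, y_l].
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Z = np.column_stack([x, y])

# Hyperbolic transformation: x' = x cosh(psi) + e^{-i beta} y sinh(psi),
#                            y' = e^{i beta} x sinh(psi) + y cosh(psi).
H = np.array([[np.cosh(psi),                      np.exp(1j * beta) * np.sinh(psi)],
              [np.exp(-1j * beta) * np.sinh(psi), np.cosh(psi)]])
Zp = Z @ H

# Lemma 3.1: the squared-norm growth ratio lies between
# cosh(2 psi) - |sinh(2 psi)| and cosh(2 psi) + |sinh(2 psi)|.
ratio = np.linalg.norm(Zp, 'fro')**2 / np.linalg.norm(Z, 'fro')**2
lo = np.cosh(2 * psi) - abs(np.sinh(2 * psi))
hi = np.cosh(2 * psi) + abs(np.sinh(2 * psi))
assert lo <= ratio <= hi
```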


Proof (Lemma 3.2) Note that \(\cosh (2\psi )+|\sinh (2\psi )|\ge \cosh (2\psi )\ge 1\), and

$$ \begin{aligned} 1&=\cosh^2(2\psi)-\sinh^2(2\psi)\\ &=(\cosh(2\psi)-|\sinh(2\psi)|)\cdot(\cosh(2\psi)+|\sinh(2\psi)|)\\ &\ge\cosh(2\psi)-|\sinh(2\psi)|>0. \end{aligned} $$

If ψ = 0, the equalities in the bounds established in Lemma 3.1 hold trivially. Also, if both equalities hold simultaneously, ψ = 0.

The inequality of arithmetic and geometric means in the proof of Lemma 3.1 turns into an equality if and only if \(|x_{\ell }|=|y_{\ell }|\) for all \(\ell \). When \(|x_{\ell }||y_{\ell }|\ne 0\), it must hold that \(\zeta _{\ell }=\zeta \), where \(\zeta =\pm \text {sign}(\sinh \psi )\), to reach the upper or the lower bound, respectively. From \(\zeta _{\ell }=\pm 1\) it follows that \(\delta _{\ell }=\gamma _{\ell }+\beta +l\pi \) for a fixed \(l\in \mathbb {Z}\), i.e., \(x_{\ell }=e^{\mathrm {i}\gamma _{\ell }}|x_{\ell }|\) and \(y_{\ell }=\pm e^{\mathrm {i}\beta }e^{\mathrm {i}\gamma _{\ell }}|x_{\ell }|\), so \(y_{\ell }=\pm e^{\mathrm {i}\beta }x_{\ell }\) for all \(\ell \). Conversely, \(y_{\ell }=\pm e^{\mathrm {i}\beta }x_{\ell }\) implies, for all \(\ell \), that \(|x_{\ell }|=|y_{\ell }|\) and that \(\zeta _{\ell }\) is a constant \(\zeta =\pm 1\), so one of the two bounds is reached. □
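The equality cases of Lemma 3.2 can also be verified numerically. A short Python/NumPy sketch (all parameter values are arbitrary choices) checks that \(y_{\ell }=\pm e^{\mathrm {i}\beta }x_{\ell }\) attains the upper and the lower bound of the row-norm growth ratio, respectively, for ψ > 0:

```python
import numpy as np

rng = np.random.default_rng(2)
n, psi, beta = 4, 0.5, 1.1               # psi > 0, so sinh(psi) > 0

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Hyperbolic transformation reconstructed from the proof of Lemma 3.1.
H = np.array([[np.cosh(psi),                      np.exp(1j * beta) * np.sinh(psi)],
              [np.exp(-1j * beta) * np.sinh(psi), np.cosh(psi)]])

for sign, bound in ((+1, np.cosh(2 * psi) + np.sinh(2 * psi)),
                    (-1, np.cosh(2 * psi) - np.sinh(2 * psi))):
    y = sign * np.exp(1j * beta) * x     # the equality case: y = +/- e^{i beta} x
    Z = np.column_stack([x, y])
    ratio = np.linalg.norm(Z @ H, 'fro')**2 / np.linalg.norm(Z, 'fro')**2
    assert np.isclose(ratio, bound)      # bound is attained exactly
```

With the plus sign the ratio equals \(e^{2\psi }=\cosh (2\psi )+\sinh (2\psi )\), and with the minus sign it equals \(e^{-2\psi }\), matching the two bounds.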


About this article


Cite this article

Novaković, V., Singer, S. A Kogbetliantz-type algorithm for the hyperbolic SVD. Numer Algor (2021).

Keywords


  • Hyperbolic singular value decomposition
  • Kogbetliantz algorithm
  • Hermitian eigenproblem
  • OpenMP multicore parallelization

Mathematics Subject Classification (2010)

  • 65F15
  • 65Y05
  • 15A18