On computing the symplectic LLT factorization

We analyze two algorithms for computing the symplectic factorization A = LL^T of a given symmetric positive definite symplectic matrix A. The first algorithm, W1, is an implementation of the HH^T factorization from Dopico and Johnson (SIAM J. Matrix Anal. Appl. 31(2):650-673, 2009), see Theorem 5.2 therein. The second is a new algorithm, W2, that uses both the Cholesky and the Reverse Cholesky decompositions of symmetric positive definite matrices. We compare these algorithms and illustrate their properties by numerical experiments in MATLAB. Particular emphasis is placed on the symplecticity properties of the computed matrices in floating-point arithmetic.


Introduction
We study numerical properties of two algorithms for computing the symplectic LL^T factorization of a given symmetric positive definite symplectic matrix A ∈ R^{2n×2n}. The symplectic factorization is A = LL^T, where L ∈ R^{2n×2n} is block lower triangular and symplectic. Let

J_n = [0, I_n; -I_n, 0],    (1)

where I_n denotes the n × n identity matrix. We will write J and I instead of J_n and I_n when the sizes are clear from the context.

Definition 1.1. A matrix A ∈ R^{2n×2n} is symplectic if and only if A^T JA = J.
We can use the symplectic LL^T factorization to compute the symplectic QR factorization and the Iwasawa decomposition of a given symplectic matrix via the Cholesky decomposition; to this end Tam's method can be modified, see [1], [7]. Symplectic matrices arise in several applications, among them the symplectic formulation of classical and quantum mechanics, quantum optics, and various aspects of mathematical physics, including the application of symplectic block matrices to special relativity and optimal control theory. For more details we refer the reader to [3], [1], and [8].
Partition A ∈ R^{2n×2n} conformally with J_n defined by (1) as

A = [A_11, A_12; A_21, A_22],    (2)

in which A_ij ∈ R^{n×n} for i, j = 1, 2.
An immediate consequence of Definition 1.1 is that the matrix A, partitioned as in (2), is symplectic if and only if A_11^T A_21 and A_12^T A_22 are symmetric and A_11^T A_22 − A_21^T A_12 = I. Symplectic matrices form a Lie group under matrix multiplication. The product A_1 A_2 of two symplectic matrices A_1, A_2 ∈ R^{2n×2n} is again symplectic, and the symplectic group is closed under transposition. If A is symplectic then its inverse equals A^{-1} = J^T A^T J, and A^{-1} is also symplectic.
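The defining identity A^T JA = J and the inverse formula A^{-1} = J^T A^T J are easy to check numerically. The following NumPy sketch (function names are ours; the paper itself works in MATLAB) illustrates both:

```python
import numpy as np

def J(n):
    # The matrix J_n = [0, I_n; -I_n, 0] from (1).
    I, Z = np.eye(n), np.zeros((n, n))
    return np.block([[Z, I], [-I, Z]])

def is_symplectic(A, tol=1e-12):
    # Definition 1.1: A is symplectic iff A^T J A = J.
    n = A.shape[0] // 2
    Jn = J(n)
    return np.linalg.norm(A.T @ Jn @ A - Jn, 2) < tol

def symplectic_inverse(A):
    # For symplectic A the inverse is A^{-1} = J^T A^T J (no linear solve needed).
    n = A.shape[0] // 2
    Jn = J(n)
    return Jn.T @ A.T @ Jn
```

For instance, any block matrix [I, 0; C, I] with C symmetric passes the test, and its inverse computed this way agrees with the true inverse to machine precision.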
Lemmas 1.2-1.6 will be helpful in the construction and testing of the algorithms proposed herein.

Lemma 1.2. A nonsingular block lower triangular matrix L ∈ R^{2n×2n}, partitioned as

L = [L_11, 0; L_21, L_22],    (3)

is symplectic if and only if L_22 = L_11^{-T} and L_11^T L_21 is symmetric.

Lemma 1.3. A matrix Q ∈ R^{2n×2n} is orthogonal symplectic if and only if it has the form

Q = [C, S; -S, C],    (4)

where C, S ∈ R^{n×n} and U = C + iS is unitary.
Lemma 1.4. Every symmetric positive definite symplectic matrix A ∈ R^{2n×2n} has a spectral decomposition A = Q diag(D, D^{-1}) Q^T, where Q ∈ R^{2n×2n} is orthogonal symplectic and D ∈ R^{n×n} is diagonal with positive diagonal entries. In order to create examples of symmetric positive definite symplectic matrices we can use the following result from [3], Theorem 5.2.

Lemma 1.5. Every symmetric positive definite symplectic matrix A ∈ R^{2n×2n} can be written as

A = [I, 0; C, I] diag(G, G^{-1}) [I, C; 0, I],    (5)

where G is symmetric positive definite and C is symmetric.
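Lemmas 1.3 and 1.4 suggest a simple recipe for generating test matrices: build a unitary U = C + iS (for example from a complex QR factorization), form the orthogonal symplectic Q = [C, S; -S, C], and set A = Q diag(D, D^{-1}) Q^T. A NumPy sketch (the function name and the random choices are ours):

```python
import numpy as np

def random_spd_symplectic(n, seed=0):
    # Lemma 1.4 in reverse: A = Q diag(D, D^{-1}) Q^T with Q orthogonal
    # symplectic is symmetric positive definite and symplectic.
    rng = np.random.default_rng(seed)
    # Unitary U = C + iS from the QR factorization of a random complex matrix.
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    U, _ = np.linalg.qr(Z)
    C, S = U.real, U.imag
    # Q = [C, S; -S, C] is orthogonal symplectic (Lemma 1.3).
    Q = np.block([[C, S], [-S, C]])
    d = rng.uniform(0.5, 2.0, size=n)   # positive diagonal of D
    return Q @ np.diag(np.concatenate([d, 1.0 / d])) @ Q.T
```

By varying the spread of the diagonal d one controls the condition number of the generated matrix.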
Lemma 1.6. Let A ∈ R^{2n×2n} be a symmetric positive definite symplectic matrix, partitioned as in (2). Let S be the Schur complement of A_11 in A:

S = A_22 − A_21 A_11^{-1} A_12.    (6)

Then S is symmetric positive definite and we have

S = A_11^{-1}.    (7)

Proof. The property (7) was proved in a more general setting in [3], see Corollary 2.3. We give an alternative proof for completeness.
It is well known that if A is a symmetric positive definite matrix then the Schur complement S is also symmetric positive definite, so we only need to prove (7). Let A be symmetric positive definite. Then A is symplectic if and only if AJA = J, which is equivalent to the three following conditions:

A_11 A_21 = A_12 A_11,   A_11 A_22 − A_12 A_12 = I,   A_21 A_22 = A_22 A_12.    (8)

From (8) we get A_22 = A_11^{-1} + A_11^{-1} A_12 A_12 and A_12^T A_11^{-1} = A_11^{-1} A_12 (recall that A_21 = A_12^T), which together with (6) leads to (7).
We propose methods for computing the symplectic LL^T factorization of a given symmetric positive definite symplectic matrix A, where L is symplectic and partitioned as in (3). We apply the Cholesky and the Reverse Cholesky decompositions. A practical algorithm for the Reverse Cholesky decomposition is described in Section 2, see Remark 2.1.
Theorem 1.7. Let M ∈ R^{m×m} be a symmetric positive definite matrix.
(i) There exists a unique lower triangular matrix L ∈ R^{m×m} with positive diagonal entries such that M = LL^T (Cholesky decomposition).
(ii) There exists a unique upper triangular matrix U ∈ R^{m×m} with positive diagonal entries such that M = UU^T (Reverse Cholesky decomposition).
Proof. We only need to prove (ii). Applying (i) to the inverse of M, we get M^{-1} = L̃L̃^T, where L̃ is a lower triangular matrix with positive diagonal entries. Then M = (L̃L̃^T)^{-1} = UU^T, where U = L̃^{-T}. Clearly, U is upper triangular with positive diagonal entries, and U is unique.
Based on Theorem 1.7, we prove the following result on the symplectic LL^T factorization (see [3], Theorem 5.2).
Theorem 1.8. Let A ∈ R^{2n×2n} be a symmetric positive definite symplectic matrix, partitioned as in (2). Then A = LL^T, where

L = [L_11, 0; L_21, L_11^{-T}],    (12)

A_11 = L_11 L_11^T is the Cholesky decomposition of A_11, and L_21^T = L_11^{-1} A_12; the factor L is symplectic. If S is the Schur complement of A_11 in A, defined in (6), and S = UU^T is the Reverse Cholesky decomposition of S, then L_22 = L_11^{-T} = U.
Proof. Comparing blocks in A = LL^T gives the identities A_11 = L_11 L_11^T, L_21^T = L_11^{-1} A_12, and A_22 = L_21 L_21^T + L_22 L_22^T, so that S = A_22 − L_21 L_21^T = L_22 L_22^T. By Lemma 1.6, S = A_11^{-1} = L_11^{-T} L_11^{-1}, and since L_11^{-T} is upper triangular, the uniqueness in Theorem 1.7 gives U = L_11^{-T} = L_22. Symplecticity of L follows from Lemma 1.2.

The paper is organized as follows. Section 2 describes Algorithms W1 and W2. Section 3 presents both theoretical and practical computational issues. Section 4 is devoted to numerical experiments and comparisons of the methods. Conclusions are given in Section 5.

Algorithms
We apply Theorem 1.8 to develop two algorithms for computing the symplectic LL^T factorization. They differ only in the way the matrix L_22 is computed. Algorithm W1 is based on Theorem 5.2 from [3]. We propose Algorithm W2, which can be used for any symmetric positive definite matrix A, not necessarily symplectic. However, if A is additionally symplectic, then the factor L is symplectic as well.

Algorithm W1
Given a symmetric positive definite symplectic matrix A ∈ R^{2n×2n}, this algorithm computes the symplectic LL^T factorization A = LL^T, where L is symplectic and has the form (12).
- Find the Cholesky decomposition A_11 = L_11 L_11^T.
- Solve the multiple lower triangular system L_11 L_21^T = A_12 by forward substitution.
- Solve the lower triangular system L_11 X = I by forward substitution, i.e. compute each column of X = L_11^{-1} independently, and set L_22 = X^T.
Cost: (5/3)n^3 flops.

Algorithm W2

Given a symmetric positive definite symplectic matrix A ∈ R^{2n×2n}, this algorithm computes the symplectic LL^T factorization A = LL^T, where L is symplectic and has the form (12).
- Find the Cholesky factorization A_11 = L_11 L_11^T.
- Solve the multiple lower triangular system L_11 L_21^T = A_12 by forward substitution.
- Compute the Schur complement S = A_22 − L_21 L_21^T and its Reverse Cholesky decomposition S = L_22 L_22^T, where L_22 is an upper triangular matrix with positive diagonal entries.

Cost: (8/3)n^3 flops. We implemented both algorithms in short MATLAB codes.
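The steps above translate directly into code. The following NumPy sketch (Python in place of the paper's MATLAB; function names are ours, and a production code would call a dedicated triangular solver instead of the general-purpose solve) implements both algorithms:

```python
import numpy as np

def reverse_cholesky(M):
    # Reverse Cholesky M = U U^T with U upper triangular (Remark 2.1):
    # ordinary Cholesky of P M P with P the reversal permutation.
    P = np.eye(M.shape[0])[::-1]
    L = np.linalg.cholesky(P @ M @ P)
    return P @ L @ P

def w1(A):
    # Algorithm W1: L22 = L11^{-T} via explicit triangular inversion.
    n = A.shape[0] // 2
    A11, A12 = A[:n, :n], A[:n, n:]
    L11 = np.linalg.cholesky(A11)
    L21 = np.linalg.solve(L11, A12).T        # solves L11 L21^T = A12
    L22 = np.linalg.solve(L11, np.eye(n)).T  # L22 = L11^{-T}
    return np.block([[L11, np.zeros((n, n))], [L21, L22]])

def w2(A):
    # Algorithm W2: L22 from the Reverse Cholesky factor of the Schur complement.
    n = A.shape[0] // 2
    A11, A12, A22 = A[:n, :n], A[:n, n:], A[n:, n:]
    L11 = np.linalg.cholesky(A11)
    L21 = np.linalg.solve(L11, A12).T
    S = A22 - L21 @ L21.T                    # Schur complement of A11 in A
    L22 = reverse_cholesky(S)                # S = L22 L22^T, L22 upper triangular
    return np.block([[L11, np.zeros((n, n))], [L21, L22]])
```

For a symplectic input both factors agree up to rounding; the difference between the two algorithms shows only in finite precision.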

Theoretical and practical computational issues
In this work, for any matrix X ∈ R^{m×m}, ||X||_2 denotes the 2-norm (the spectral norm) of X, and κ_2(X) = ||X^{-1}||_2 · ||X||_2 is the condition number of a nonsingular matrix X.
This section mainly addresses the problem of measuring the departure of a given matrix from symplecticity. We also touch on a few aspects of the numerical stability of Algorithms W1 and W2; however, a full analysis exceeds the scope of this paper.
First we introduce the loss of symplecticity (absolute error) of a matrix X ∈ R^{2n×2n}:

∆(X) = ||X^T JX − J||_2.    (13)

Clearly, ∆(X) = 0 if and only if X is symplectic. If X ∈ R^{2n×2n} is symplectic then X^{-1} = J^T X^T J, and the condition number of X equals κ_2(X) = ||X||_2^2. In practice, however, ∆(X) of a computed matrix hardly ever equals 0.
Lemma 3.1. Let X ∈ R^{2n×2n} satisfy ∆(X) < 1. Then X is nonsingular and

κ_2(X) ≤ ||X||_2^2 / (1 − ∆(X)).    (14)

Proof. Assume that ∆(X) < 1. We first prove that det X ≠ 0. Define F = X^T JX − J. Since J^T = −J and J^2 = −I_2n, we have the identity

X^T JX = J(I_2n − JF).    (15)

Since J is orthogonal, we get ||JF||_2 = ||F||_2 = ∆(X) < 1, hence the matrix I_2n − JF is nonsingular. Then (15) and the property det J = 1 lead to (det X)^2 = det(X^T JX) = det(I_2n − JF) ≠ 0. Therefore, det X ≠ 0.
To estimate κ_2(X), we rewrite (15) as

X^{-1} = −(I_2n − JF)^{-1} JX^T J.    (16)

Taking norms we obtain

||X^{-1}||_2 ≤ ||X||_2 / (1 − ||JF||_2).    (17)

This together with ||JF||_2 = ∆(X) establishes the formula (14). The proof is complete.

Now we show that the assumption ∆(X) < 1 of Lemma 3.1 is crucial.

Lemma 3.2. For every t ≥ 1 and every natural number n there exists a singular matrix X ∈ R^{2n×2n} such that ∆(X) = t.
According to (18), we introduce the loss of symplecticity (relative error) of a nonzero matrix A ∈ R^{2n×2n} as

symp(A) = ∆(A) / ||A||_2^2.    (19)

Remark 3.1. Assume that A is symplectic. Then we have A^T JA = J, so taking norms we obtain

1 = ||J||_2 = ||A^T JA||_2 ≤ ||A||_2^2.

We see that ||A||_2 ≥ 1 for every symplectic matrix A. Therefore, under the hypotheses of Lemma 3.3 and applying (19) we get the inequality symp(Â) ≤ ∆(Â). If ||A||_2 is large and Â is close to A, then symp(Â) << ∆(Â). This property is highlighted in our numerical experiments in Section 4.
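Both measures are cheap to evaluate. In the sketch below, the normalization of the relative measure by ||X||_2^2 is our reading of the definition, consistent with κ_2(X) = ||X||_2^2 for symplectic X:

```python
import numpy as np

def delta(X):
    # Absolute loss of symplecticity: Delta(X) = ||X^T J X - J||_2.
    n = X.shape[0] // 2
    I, Z = np.eye(n), np.zeros((n, n))
    Jn = np.block([[Z, I], [-I, Z]])
    return np.linalg.norm(X.T @ Jn @ X - Jn, 2)

def symp(X):
    # Relative loss of symplecticity: Delta(X) / ||X||_2^2 (our normalization;
    # Delta scales quadratically when X is multiplied by a scalar).
    return delta(X) / np.linalg.norm(X, 2) ** 2
```

For example, scaling the identity by 2 gives ∆ = ||4J − J||_2 = 3 but symp = 3/4.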
Partition L̂ and F conformally with J as

L̂ = [L̂_11, 0; L̂_21, L̂_22],   F = [F_11, F_12; F_21, F_22].    (24)

Then F_21 = −F_12^T, F_22 = 0 and

F_11 = L̂_11^T L̂_21 − L̂_21^T L̂_11,   F_12 = L̂_11^T L̂_22 − I.    (25)

Moreover, the loss of symplecticity ∆(L̂) can be bounded as follows:

max{||F_11||_2, ||F_12||_2} ≤ ∆(L̂) ≤ ||F_11||_2 + ||F_12||_2.    (26)

Proof. It is easy to check that F is a skew-symmetric matrix satisfying (25), with F_22 = 0. Notice that ∆(L̂) = ||F||_2. It remains to prove (26).
Write F in the form F = F_1 + F_2, where F_1 = [F_11, 0; 0, 0] and F_2 = [0, F_12; F_21, 0]. It is obvious that ||F_1||_2 = ||F_11||_2, and since F_21 = −F_12^T, also ||F_2||_2 = ||F_12||_2. By a property of the 2-norm, ||F_ij||_2 ≤ ||F||_2 for all i, j = 1, 2, which gives the lower bound in (26). On the other hand, we get ∆(L̂) = ||F||_2 ≤ ||F_1||_2 + ||F_2||_2 = ||F_11||_2 + ||F_12||_2. This completes the proof.
Remark 3.2. If Algorithm W1 runs to completion in floating-point arithmetic, then L̂_22 = L̂_11^{-T} + O(ε_M), where ε_M is the machine precision. See [5], pp. 263-264, where a detailed error analysis of methods for inverting a triangular matrix is given. Notice that ||F_12||_2, defined by (25), depends only on the conditioning of A_11, the submatrix of A. Since A is symmetric positive definite, it follows that κ_2(A_11) ≤ κ_2(A). However, the loss of symplecticity of L̂ from Algorithm W2 can be much larger than for Algorithm W1; see our examples presented in Section 4.
Notice that F_11, defined by (25), remains the same for both Algorithms W1 and W2.

Now we explain what we mean by numerical stability of algorithms for computing the LL^T factorization.
The precise definition is the following.
Definition 3.5. An algorithm W for computing the LL^T factorization of a given symmetric positive definite matrix A ∈ R^{2n×2n} is numerically stable if the computed matrix L̂ ∈ R^{2n×2n}, partitioned as in (24), is the exact factor of the LL^T factorization of a slightly perturbed matrix A + δA with

||δA||_2 ≤ c ε_M ||A||_2,

where c is a small constant depending upon n, and ε_M is the machine precision.
In practice, we can compute the decomposition error

dec = ||A − L̂L̂^T||_2 / ||A||_2.    (27)

If dec is of order ε_M, then this is the best result we can achieve in floating-point arithmetic. We emphasize that here we apply the numerically stable Cholesky decomposition of the symmetric positive definite matrix A_11 (see Theorem 10.3 in [5], p. 197), and also the numerically stable Reverse Cholesky decomposition of the Schur complement S (defined by (6)) applied in Algorithm W2. Notice that Lemma 1.6 implies that κ_2(S) = κ_2(A_11). For a general symmetric positive definite matrix A we have a weaker bound, κ_2(S) ≤ κ_2(A); see [2].
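The check (27) is a one-liner; the sketch below normalizes by ||A||_2, our reading of the relative error:

```python
import numpy as np

def dec_error(A, L):
    # Relative decomposition error: ||A - L L^T||_2 / ||A||_2.
    # Values of order machine precision are the best achievable.
    return np.linalg.norm(A - L @ L.T, 2) / np.linalg.norm(A, 2)
```

An exact Cholesky factor gives dec of order ε_M, while a wrong factor is flagged immediately.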

Numerical experiments
In this section we present numerical tests comparing Algorithms W1 and W2. All tests were performed in MATLAB ver. R2021a, with machine precision ε_M ≈ 2.2 · 10^{-16}.
Example 4.1. In the first experiment we take A = S^T S, where S is a symplectic matrix defined by (28), which was also used in [1] and [7]. The results are contained in Table 1. We see that Algorithm W1 produces an unstable result L̂, in contrast to Algorithm W2. Since κ_2(A^{-1}) = κ_2(A), the condition numbers of A are the same in Examples 4.1 and 4.2. However, in Example 4.2 the block A_11 is perfectly well-conditioned, in contrast to Example 4.1. The results are contained in Table 2. Now Algorithm W1 produces a numerically stable result L̂, like Algorithm W2. We observe that for large values of ∆A (in the last columns of Tables 1 and 2) the loss of symplecticity of the computed L̂ is significant. Random matrices have entries from the distribution N(0, 1); they were generated by the MATLAB function "randn". Before each usage the random number generator was reset to its initial state.
Here we use Lemmas 1.3-1.4 to create the following MATLAB functions:
• a function generating an orthogonal symplectic matrix Q ∈ R^{2n×2n}.
In this example C is the Hilbert matrix and B is the beta matrix, with entries B_ij = 1/β(i, j), where β(·, ·) is the beta function. By definition,

β(x, y) = Γ(x)Γ(y) / Γ(x + y),

where Γ(·) is the Gamma function.
B is a symmetric totally positive matrix with integer entries. More detailed information on the beta matrix can be found in [4] and [6].
Note that generating A requires computing the inverse of the ill-conditioned Hilbert matrix. This significantly influences the quality of the computed results in floating-point arithmetic.
The results are contained in Table 4.

Conclusions
• We analyzed two algorithms, W1 and W2, for computing the symplectic LL^T factorization of a given symmetric positive definite matrix A ∈ R^{2n×2n}. To assess their practical behaviour we performed numerical experiments.
• Algorithm W1 is cheaper than Algorithm W2. However, Algorithm W1 is unstable in general, although it works very well for many test matrices. The decomposition error (27) of the matrix L̂ computed by Algorithm W1 can be very large. In contrast, in all our tests Algorithm W2 produced a numerically stable matrix L̂ in floating-point arithmetic (in the sense of Definition 3.5). A rigorous analysis of the numerical stability of Algorithms W1 and W2 remains a topic of future work.
• The numerical tests presented in Section 4 indicate that the loss of symplecticity of the computed matrix L̂ from Algorithm W2 can be much larger than that obtained from Algorithm W1. We observe that the loss of symplecticity of L̂ for both Algorithms W1 and W2 strongly depends on the departure of A from symplecticity (see Lemma 1.6), and also on the conditioning of A and of its submatrix A_11.
From A = LL^T we obtain A_11 = L_11 L_11^T, L_21^T = L_11^{-1} A_12, and S = A_22 − L_21 L_21^T is the Schur complement of A_11 in A. Moreover, S = L_22 L_22^T. If S = UU^T is the Reverse Cholesky decomposition of S and L_22 is upper triangular, then L_22 = U, by Theorem 1.7. From Lemma 1.6 we have S = A_11^{-1}, hence S = L_11^{-T} L_11^{-1}. Notice that L_11^{-T} is upper triangular, so U = L_11^{-T}. It is easy to prove that L in (12) is symplectic: this follows from Lemma 1.2 and (9).

Remark 2.1.
The Reverse Cholesky decomposition M = UU^T of a symmetric positive definite matrix M ∈ R^{m×m} can be treated as the Cholesky decomposition of the matrix M_new = P^T MP, where P is the permutation matrix obtained from the identity matrix by reversing the order of its columns. If M_new = LL^T, where L is lower triangular (with positive diagonal entries), then M = UU^T, with U = PLP^T being upper triangular (with positive diagonal entries). For example, for m = 3 we have

M_new = [m_33, m_32, m_31; m_23, m_22, m_21; m_13, m_12, m_11].
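The remark translates directly into code. A NumPy sketch for m = 3 (the test matrix is ours):

```python
import numpy as np

# Reversal permutation: the identity with its columns in reverse order.
P = np.eye(3)[:, ::-1]

# An arbitrary 3x3 symmetric positive definite matrix.
M = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])

M_new = P.T @ M @ P               # (M_new)_{ij} = m_{4-i,4-j}, as in the remark
L = np.linalg.cholesky(M_new)     # ordinary Cholesky: M_new = L L^T
U = P @ L @ P.T                   # upper triangular, and M = U U^T
```

Since P^2 = I, we recover M = UU^T with U = PLP^T, and reversing the rows and columns of the lower triangular L indeed yields an upper triangular U.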

Proposition 3.4.
Let L̂ ∈ R^{2n×2n} be the computed factor of the symplectic factorization A = LL^T, where A ∈ R^{2n×2n} is a symmetric positive definite symplectic matrix. Define F = L̂^T JL̂ − J.

Table 1:
The results for Example 4.1 and A = S^T S, where S is defined by (28).

Example 4.2. For comparison, in the second experiment we use the same matrix S and repeat the calculations for the inverse of the matrix A from Example 4.1.

Table 2:
The results for Example 4.2 and A = (S^T S)^{-1}, where S is defined by (28).

Table 3:
The results for Example 4.3.

Now we apply Lemma 1.5 for creating our test matrices. We take A = PDP^T.

Table 4:
The results for Example 4.4.

Figure 1: Condition numbers κ_2(A) and decomposition errors for Example 4.5.

We applied Lemma 1.4 for creating matrices of the form A = UGU^T, where G is a diagonal matrix and U is an orthogonal symplectic matrix, generated by the same MATLAB function as in Example 4.3. Figures 1-3 illustrate the values of the statistics. We can see the differences between the decomposition errors dec (in favor of Algorithm W2) and between the values ∆L̂, in favor of Algorithm W2.