
Error analysis of a hybrid method for computing Lyapunov exponents

Numerische Mathematik

Abstract

In a previous paper (Beyn and Lust in Numer Math 113:357–375, 2009) we suggested a numerical method for computing all Lyapunov exponents of a dynamical system by spatial integration with respect to an ergodic measure. The method extended an earlier approach of Aston and Dellnitz (Comput Methods Appl Mech Eng 170:223–237, 1999) for the largest Lyapunov exponent by integrating the diagonal entries from the \(QR\)-decomposition of the Jacobian for an iterated map. In this paper we provide an asymptotic error analysis of the method for the case in which all Lyapunov exponents are simple. We employ Oseledec's multiplicative ergodic theorem and impose certain hyperbolicity conditions on the invariant subspaces that belong to neighboring exponents. The resulting error expansion shows that one step of extrapolation is enough to obtain exponential decay of errors.


Fig. 1

Notes

  1. \(Ord(j,d)=\{\delta \in \{1,\ldots ,d\}^{\{1,\ldots ,j\}}\,|\, \delta \ \text{ strictly monotone}\}\); see also Appendix B.

References

  1. Allen, L., Bridges, T.J.: Numerical exterior algebra and the compound matrix method. Numer. Math. 92, 197–232 (2002)

  2. Aston, P.J., Dellnitz, M.: Computation of the Lyapunov exponent via spatial integration with application to blowout bifurcations. Comput. Methods Appl. Mech. Eng. 170, 223–237 (1999)

  3. Aston, P.J., Dellnitz, M.: Computation of the dominant Lyapunov exponent via spatial integration using matrix norms. Proc. Roy. Soc. Lond. A 459, 2933–2955 (2003)

  4. Aston, P.J., Dellnitz, M.: Computation of the dominant Lyapunov exponent via spatial integration using vector norms. In: Fiedler, B., Gröger, K., Sprekels, J. (eds.) Proceedings of Equadiff 99, pp. 1015–1020. World Scientific, Singapore (2000)

  5. Barreira, L., Pesin, Y.B.: Lyapunov Exponents and Smooth Ergodic Theory. University Lecture Series, vol. 23. American Mathematical Society (2002)

  6. Beyn, W.-J., Lust, A.: A hybrid method for computing Lyapunov exponents. Numer. Math. 113, 357–375 (2009)

  7. Bridges, T.J., Reich, S.: Computing Lyapunov exponents on a Stiefel manifold. Physica D 156, 219–238 (2001)

  8. Dellnitz, M., Froyland, G., Junge, O.: The algorithms behind GAIO: set oriented numerical methods for dynamical systems. In: Fiedler, B. (ed.) Ergodic Theory, Analysis and Efficient Simulation of Dynamical Systems, p. 174. Springer, Berlin (2001)

  9. Dellnitz, M., Junge, O.: Set oriented numerical methods for dynamical systems. In: Fiedler, B., Iooss, G., Kopell, N. (eds.) Handbook of Dynamical Systems II: Towards Applications, p. 264. World Scientific, Singapore (2002)

  10. Dieci, L., van Vleck, E.S.: Computation of a few Lyapunov exponents for continuous and discrete dynamical systems. Appl. Numer. Math. 17, 275–291 (1995)

  11. Dieci, L., van Vleck, E.S.: On the error in computing Lyapunov exponents by QR methods. Numer. Math. 101(4), 619–642 (2005)

  12. Dieci, L., van Vleck, E.S.: Lyapunov and Sacker-Sell spectral intervals. J. Dyn. Differ. Equ. 19(2), 265–293 (2007)

  13. Eckmann, J.-P., Ruelle, D.: Ergodic theory of chaos and strange attractors. Rev. Mod. Phys. 57(3), 617–656 (1985)

  14. Fischer, G.: Lineare Algebra, 10. Auflage. Vieweg (1995)

  15. Froyland, G., Judd, K., Mees, A.I., Murano, K.: Lyapunov exponents and triangulation. In: Proceedings of the 1993 International Symposium on Nonlinear Theory and its Applications, Hawaii, pp. 281–286 (1993)

  16. Froyland, G., Judd, K., Mees, A.I.: Estimation of Lyapunov exponents of dynamical systems using a spatial average. Phys. Rev. E 51(4), 2844–2855 (1995)

  17. Gantmacher, F.R.: The Theory of Matrices. Chelsea, New York (1971)

  18. Geist, K., Parlitz, U., Lauterborn, W.: Comparison of different methods for computing Lyapunov exponents. Prog. Theor. Phys. 83(5), 875–893 (1990)

  19. Gröbner, W.: Matrizenrechnung. Hochschultaschenbücher-Verlag (1966)

  20. Johnson, R.A., Palmer, K.J., Sell, G.R.: Ergodic properties of linear dynamical systems. SIAM J. Math. Anal. 18(1), 1–33 (1987)

  21. Katok, A., Hasselblatt, B.: Introduction to the Modern Theory of Dynamical Systems. Cambridge University Press, Cambridge (1995)

  22. Ljapunov, A.M.: The General Problem of the Stability of Motion. Comm. Soc. Math., Kharkow (1892, in Russian); reprinted in English, Taylor & Francis, London (1992)

  23. Lust, A.: Eine hybride Methode zur Berechnung von Liapunow-Exponenten. PhD thesis, Universität Bielefeld (2006). http://www.math.uni-bielefeld.de/~beyn/AG_Numerik/html/en/theses/

  24. Oseledec, V.: A multiplicative ergodic theorem. Ljapunov characteristic numbers for dynamical systems. Trans. Moscow Math. Soc. 19, 197–231 (1968)

  25. Pilyugin, S.Y.: Introduction to Structurally Stable Systems of Differential Equations. Birkhäuser, Boston (1992)

  26. Pollicott, M.: Lectures on Ergodic Theory and Pesin Theory on Compact Manifolds. Cambridge University Press, Cambridge (1993)

  27. Sacker, R.J., Sell, G.R.: A spectral theory for linear differential systems. J. Differ. Equ. 27(3), 320–358 (1978)

  28. Walters, P.: An Introduction to Ergodic Theory. Springer, Berlin (2000)


Author information


Corresponding author

Correspondence to Wolf-Jürgen Beyn.

Additional information

W.-J. Beyn and A. Lust were supported by CRC 701 ‘Spectral Analysis and Topological Methods in Mathematics’, Bielefeld University.

The paper is mainly based on the PhD thesis [23] of A. Lust.

This is a companion article containing the error analysis of the numerical method suggested in the article doi:10.1007/s00211-009-0236-4.

Appendices

Appendix A: Oseledec’s theorem

We state a special version of the multiplicative ergodic theorem of Oseledec (see [24]) which may be found e.g. in [26, 28].

Theorem 9

([24]) Let \(g\) be a \(C^1\)-diffeomorphism of a compact and smooth Riemannian manifold \(M\) of dimension \(d\) and let \(\mu \) be an ergodic measure of \(g\) on \(M\). Then there exists a Borel set \(M_{\mu }\subset M\) such that \(g(M_{\mu })= M_\mu , \mu (M_{\mu })=1\), and the following properties hold:

  1. (i)

    There exist natural numbers \(d_1,\ldots ,d_s\) with \(s\le d\) and \(\sum _{j=1}^s d_j=d\).

  2. (ii)

    For every \(x\in M_{\mu }\) there exists a measurable decomposition of the tangent spaces \(T_x M=\bigoplus \nolimits _{j=1}^s W^j(x)\) such that \(\dim W^j(x)=d_j\) and \(D\,g(x)\left(W^j(x)\right)=W^j(g(x))\).

  3. (iii)

    There are numbers \(\lambda _1>\lambda _2>\cdots >\lambda _s\) such that

    $$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n} \log \Vert D g^n(x)v\Vert =\lambda _j \end{aligned}$$

    for all \(v\in \bigoplus \nolimits _{i=j}^sW^i(x)\) with \(v\notin \bigoplus \nolimits _{i=j+1}^{s} W^i(x)\) and for all \(x\in M_{\mu }\).

 

Remarks

 

  1. 1.

    The points in \(M_{\mu }\) are called (Lyapunov-)regular and the decomposition \(T_x M=\bigoplus \nolimits _{j=1}^s W^j(x)\) into invariant subspaces is called the Oseledec decomposition of \(TM\).

  2. 2.

    The number \(\lambda _j\) is called the \(j\)-th Lyapunov exponent (or characteristic number) with respect to the ergodic measure \(\mu \). The number \(d_j\) denotes the multiplicity of \(\lambda _j\).

  3. 3.

    The largest Lyapunov exponent \(\lambda _1\) can also be expressed in terms of matrix norms as follows (see [26])

$$\begin{aligned} \lambda _1=\lim _{n\rightarrow \infty } \frac{1}{n} \log \Vert D\,g^n(x)\Vert \quad \text{ for } \mu \text{-a.e. } x\in M. \end{aligned}$$
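The limits above underlie the classical time-average QR method for approximating Lyapunov exponents (cf. [10, 18]). As an illustrative sketch only — this is the pointwise QR iteration, not the hybrid spatial-integration method analyzed in this paper — the following Python code estimates both exponents of the Hénon map; the map, its parameters, and the iteration counts are our own choices for the example.

```python
import numpy as np

A_PAR, B_PAR = 1.4, 0.3  # classical Henon parameters (example choice)

def henon(x):
    return np.array([1.0 - A_PAR * x[0] ** 2 + x[1], B_PAR * x[0]])

def henon_jac(x):
    return np.array([[-2.0 * A_PAR * x[0], 1.0], [B_PAR, 0.0]])

def lyapunov_qr(x0, n_transient=1000, n=50_000):
    """Estimate all Lyapunov exponents by accumulating log R_ii from the
    QR-decomposition of Dg along one orbit (a time average, not spatial)."""
    x = x0
    for _ in range(n_transient):     # let the orbit settle on the attractor
        x = henon(x)
    Q = np.eye(2)
    log_r = np.zeros(2)
    for _ in range(n):
        Q, R = np.linalg.qr(henon_jac(x) @ Q)
        s = np.sign(np.diag(R))      # normalize to a positive diagonal,
        Q, R = Q * s, (R.T * s).T    # making the QR-decomposition unique
        log_r += np.log(np.diag(R))
        x = henon(x)
    return log_r / n

lams = lyapunov_qr(np.array([0.1, 0.1]))
# lams[0] should be near the commonly quoted value 0.419; the sum
# lams[0] + lams[1] equals log|det Dg| = log(0.3) up to rounding
```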

Appendix B: Exterior products

For the convenience of the reader we summarize in this appendix several results from the theory of exterior products that are used in this paper. Most of the results can be found in [1, 17, 19], but a few details have been added that are important for our estimates.

1.1 Coordinate representation of exterior products

A mapping \(\delta :\{1,\ldots ,j\}\rightarrow \{1,\ldots ,d\}\) with \(j\le d\) is called strictly monotone if \(\delta (1)<\delta (2)<\cdots <\delta (j)\) holds. By \(Ord(j,d)\) we denote the set of all strictly monotone mappings

$$\begin{aligned} Ord(j,d)=\{\delta \in \{1,\ldots ,d\}^{\{1,\ldots ,j\}}\,|\, \delta \ \text{ strictly} \text{ monotone}\}. \end{aligned}$$

Let \(D=\# Ord(j,d)\) and note that \(D=\binom{d}{j}\). We will frequently identify elements \(i\in Ord(j,d)\) with tuples \(i=(i_1,\ldots ,i_j)\) and simply write \(i=i_1,\ldots ,i_j\), where \(1\le i_1<\cdots <i_j\le d\). In \(Ord(j,d)\) we use lexicographical order written as \(\sigma <\delta \) and meaning that for some \(\ell \in \{1,\ldots ,j\}\) we have

$$\begin{aligned} \sigma (k)=\delta (k)\quad \text{ for}\quad k=1,\ldots ,\ell -1 \quad \text{ and}\quad \sigma (\ell )<\delta (\ell ). \end{aligned}$$

The smallest element is \(\delta _1=(1,\ldots ,j)={{\small 1}\!\!1}_j\) and the largest element is \(\delta _D=(d-j+1,\ldots ,d)\).

Given vectors \(x_1,\ldots ,x_j\in \mathbb{R }^d\) with coordinates \(x_\ell =(x_{1\ell },\ldots ,x_{d\ell })^T\) we denote by \(X=[x_1,\ldots ,x_j]\) the \(d\times j\)-matrix with columns \(x_1,\ldots ,x_j\), i.e.

$$\begin{aligned} X=\begin{pmatrix} x_{11} & \cdots & x_{1j}\\ \vdots & & \vdots \\ x_{d1} & \cdots & x_{dj} \end{pmatrix}. \end{aligned}$$

By \(X_{i_1,\ldots ,i_j}\) we denote the minor of \(X\) that belongs to the rows with indices \(i_1,\ldots ,i_j\), i.e.

$$\begin{aligned} X_{i_1,\ldots ,i_j}=\det \begin{pmatrix} x_{i_1 1} & \cdots & x_{i_1 j}\\ \vdots & & \vdots \\ x_{i_j 1} & \cdots & x_{i_j j} \end{pmatrix}. \end{aligned}$$

The exterior product of vectors \(x_1,\ldots ,x_j\in \mathbb{R }^d\) is defined as the vector

$$\begin{aligned} \wedge _{\ell =1}^jx_\ell = {x}_{1}\wedge \cdots \wedge {x}_{j}\in \mathbb{R }^{D} \end{aligned}$$

with coordinates

$$\begin{aligned} ({x}_{1}\wedge \cdots \wedge {x}_{j})_{i_1\ldots i_j} = X_{i_1 \ldots i_j}, \quad \ (i_1,\ldots ,i_j)\in Ord(j,d). \end{aligned}$$
(5.1)

From the Cartesian basis \(\{{e}_1,\ldots ,{e}_{d}\}\) in \(\mathbb{R }^d\) we obtain

$$\begin{aligned} \{{e}_{i_1}\wedge \cdots \wedge {e}_{i_j}\,|\,(i_1,\ldots ,i_j)\in Ord(j,d)\} \end{aligned}$$

as a basis of \(\mathbb{R }^{D}\).
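Formula (5.1) translates directly into code: the coordinates of \(x_1\wedge \cdots \wedge x_j\) are the \(j\times j\) minors of \(X\) over the row tuples in \(Ord(j,d)\), taken in lexicographic order. A minimal NumPy sketch (the helper name `wedge` is our own):

```python
import itertools

import numpy as np

def wedge(*xs):
    """Coordinates (5.1) of x_1 ^ ... ^ x_j: all j x j minors of
    X = [x_1,...,x_j], rows running through Ord(j,d) in lex order."""
    X = np.column_stack(xs)
    d, j = X.shape
    return np.array([np.linalg.det(X[list(rows), :])
                     for rows in itertools.combinations(range(d), j)])

# basis property: e_{i_1} ^ ... ^ e_{i_j} are Cartesian basis vectors of R^D
d = 4
e = np.eye(d)
w = wedge(e[:, 0], e[:, 2])   # e_1 ^ e_3 (1-based indices as in the text)
# w has length D = binom(4,2) = 6, with a single entry 1 at the position
# of the tuple (1,3) in lexicographic order
```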

The coordinate representation of exterior products has standard properties.

Lemma 10

For all \({x}_1,\ldots ,{x}_{j}, v\in \mathbb{R }^d\) and \(\alpha ,\beta \in \mathbb{R }\) the following holds:

  1. (i)

    For any permutation \(\pi \in S_j\),

    $$\begin{aligned} {x}_{\pi (1)}\wedge \cdots \wedge {x}_{\pi (j)}= sign(\pi ){x}_{1}\wedge \cdots \wedge {x}_{j} \end{aligned}$$
  2. (ii)

    For \(\ell =1,\ldots ,j\),

    $$\begin{aligned}&{x}_{1}\wedge \cdots \wedge {x}_{\ell -1}\wedge (\alpha x_\ell +\beta v) \wedge {x}_{\ell +1}\wedge \cdots \wedge {x}_{j}\\&\qquad =\alpha ({x}_{1}\wedge \cdots \wedge {x}_{\ell }\wedge \cdots \wedge x_j)+ \beta (x_1\wedge \cdots \wedge \,v\,\wedge \cdots \wedge x_j) \end{aligned}$$
  3. (iii)

    the vectors \({x}_1,\ldots ,{x}_{j}\) are linearly dependent if and only if \({x}_{1}\wedge \cdots \wedge {x}_{j}=0\).
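The properties of Lemma 10 are easy to check numerically from the coordinate formula (5.1); the following self-contained sketch uses random vectors (`wedge` is our own minor-based implementation):

```python
import itertools

import numpy as np

def wedge(*xs):
    """Coordinates (5.1): j x j minors of [x_1,...,x_j] over Ord(j,d)."""
    X = np.column_stack(xs)
    d, j = X.shape
    return np.array([np.linalg.det(X[list(r), :])
                     for r in itertools.combinations(range(d), j)])

rng = np.random.default_rng(0)
x1, x2, x3, v = rng.standard_normal((4, 5))

# (i) a transposition of two factors flips the sign
assert np.allclose(wedge(x2, x1, x3), -wedge(x1, x2, x3))
# (ii) linearity in each factor
assert np.allclose(wedge(x1, 2 * x2 + 3 * v, x3),
                   2 * wedge(x1, x2, x3) + 3 * wedge(x1, v, x3))
# (iii) linearly dependent factors give the zero vector
assert np.allclose(wedge(x1, x2, x1 + x2), 0)
```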

Let \(\langle \cdot ,\cdot \rangle _k\) denote the standard scalar product in \(\mathbb{R }^k \) with norm \(\Vert x\Vert =\sqrt{\langle x,\,x\rangle _k}\) for \(x\in \mathbb{R }^k\).

Theorem 11

For two decomposable vectors

$$\begin{aligned} x={x}_{1}\wedge \cdots \wedge {x}_{j},\ y={y}_{1}\wedge \cdots \wedge {y}_{j} \in \mathbb R ^D \quad \text{ with}\ x_\ell ,y_\ell \in \mathbb{R }^d,\ \ell =1,\ldots ,j, \end{aligned}$$

the scalar product can be written as

$$\begin{aligned} \langle x,y\rangle _{D} = \langle {x}_{1}\wedge \cdots \wedge {x}_{j},{y}_{1}\wedge \cdots \wedge {y}_{j}\rangle _{D} =\det \left(Z\right), \end{aligned}$$

where \(Z\in \mathbb{R }^{j\times j}\) is defined by

$$\begin{aligned} \ Z_{i,\ell }=\langle x_i,y_\ell \rangle _d\ \ \quad \text{ for}\ i,\ell \in \{1,\ldots ,j\}. \end{aligned}$$

 

Proof

We have \(Z=X^T Y\) for the matrices \(X=[{x}_1,\ldots ,{x}_{j}]\) and \(Y=[{y}_1,\ldots ,{y}_{j}]\), thus the multiplication theorem for determinants [14] shows

$$\begin{aligned} \det \left(X^TY\right)=\sum _{i\in Ord(j,d)} X_{i_1,\ldots ,i_j} Y_{i_1,\ldots ,i_j} =\langle {x}_{1}\wedge \cdots \wedge {x}_{j},{y}_{1}\wedge \cdots \wedge {y}_{j}\rangle _{D}. \end{aligned}$$

\(\square \)

An immediate consequence of this theorem is the following result.

Corollary 12

For any set of vectors \({x}_1,\ldots ,{x}_{j}\in \mathbb{R }^d\),

$$\begin{aligned} \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert =\sqrt{\det \left(X^TX\right)}, \quad X= \left[ x_1,\ldots ,x_j \right], \end{aligned}$$

in particular, \(\Vert {x}_{1}\wedge \cdots \wedge {x}_{d}\Vert =\left| \det \left(X\right)\right|\) in case \(j=d\).

Recall that \(\sqrt{\det \left(X^TX\right)}\) is the square root of the Gramian determinant of \({x}_1,\ldots ,{x}_{j}\); hence \(\Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert \) equals the \(j\)-dimensional volume of the parallelepiped spanned by the vectors \({x}_1,\ldots ,{x}_{j}\) [17, §9.5].
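Theorem 11 and Corollary 12 can be confirmed numerically in a few lines; `wedge` below is again our own minor-based implementation of (5.1):

```python
import itertools

import numpy as np

def wedge(*xs):
    """Coordinates (5.1): j x j minors of [x_1,...,x_j] over Ord(j,d)."""
    X = np.column_stack(xs)
    d, j = X.shape
    return np.array([np.linalg.det(X[list(r), :])
                     for r in itertools.combinations(range(d), j)])

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))
Y = rng.standard_normal((6, 3))
wx, wy = wedge(*X.T), wedge(*Y.T)

# Theorem 11: <x, y>_D = det(X^T Y)
assert np.isclose(wx @ wy, np.linalg.det(X.T @ Y))
# Corollary 12: ||x_1 ^ ... ^ x_j|| = sqrt(det(X^T X))
assert np.isclose(np.linalg.norm(wx), np.sqrt(np.linalg.det(X.T @ X)))
```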

Angles between vectors and subspaces can also be described in terms of exterior products. Consider, for example, linearly independent vectors \(x_1,\ldots ,x_j \in \mathbb{R }^d\) and let \(x\in \mathbb{R }^d\) be arbitrary. Decompose \(x=x^p+x^o\) where \(x^p\in \mathrm{span}\{{x}_1,\ldots ,{x}_{j}\}\) and \(x^o\bot \,\mathrm{span}\{{x}_1,\ldots ,{x}_{j}\}\). Then the volume of the parallelepiped spanned by \({x}_1,\ldots ,{x}_{j}, x\) is

$$\begin{aligned} \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\wedge x\Vert =\Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert \,\Vert x^o\Vert , \end{aligned}$$

or equivalently

$$\begin{aligned} \displaystyle \frac{\displaystyle \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\wedge x\Vert }{\displaystyle \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert }=\Vert x^o\Vert . \end{aligned}$$

This quotient is the length of the \(x\)-component orthogonal to \(\mathrm{span}\{{x}_1,\ldots ,{x}_{j}\}\). In case \(\Vert x\Vert =1\) we obtain the sine of the angle between \(x\) and the subspace

$$\begin{aligned} \frac{\displaystyle \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\wedge x\Vert }{\displaystyle \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert }= \left| \sin \angle \left(x,\mathrm{span}\left\{ x_1,\ldots ,x_j\right\} \right)\right|. \end{aligned}$$
(5.2)

We also note the obvious estimate

$$\begin{aligned} \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\wedge x\Vert \le \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert \,\Vert x\Vert , \end{aligned}$$
(5.3)

which has the following generalization.

Lemma 13

(Generalized Hadamard inequality). For vectors \({x}_1,\ldots ,{x}_{j}\in \mathbb{R }^d\) with \(j\le d\) and \(k\in \{1,\ldots ,j\}\),

  1. (i)

    \(\displaystyle \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert \le \Vert {x}_{1}\wedge \cdots \wedge {x}_{k}\Vert \Vert {x}_{k+1}\wedge \cdots \wedge {x}_{j}\Vert .\)

  2. (ii)

    \(\displaystyle \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert \le \prod \nolimits _{i=1}^j\Vert x_i\Vert \).

The proof of (i) may be found in [17, §9.5] while (ii) follows from (5.3) by induction.
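Both the angle formula (5.2) and the generalized Hadamard inequality lend themselves to a quick numerical check (with our own minor-based `wedge`; the projector `P` onto \(\mathrm{span}\{x_1,\ldots ,x_j\}\) recovers \(x^o=x-Px\)):

```python
import itertools

import numpy as np

def wedge(*xs):
    """Coordinates (5.1): j x j minors of [x_1,...,x_j] over Ord(j,d)."""
    X = np.column_stack(xs)
    d, j = X.shape
    return np.array([np.linalg.det(X[list(r), :])
                     for r in itertools.combinations(range(d), j)])

rng = np.random.default_rng(2)
X = rng.standard_normal((5, 2))
x = rng.standard_normal(5)

# the quotient in (5.2) equals the norm of the orthogonal component x^o
quot = np.linalg.norm(wedge(*X.T, x)) / np.linalg.norm(wedge(*X.T))
P = X @ np.linalg.solve(X.T @ X, X.T)   # orthogonal projector onto span(X)
assert np.isclose(quot, np.linalg.norm(x - P @ x))

# Lemma 13: generalized Hadamard inequality, parts (i) and (ii)
xs = rng.standard_normal((4, 7))
n_all = np.linalg.norm(wedge(*xs))
assert n_all <= (np.linalg.norm(wedge(*xs[:2]))
                 * np.linalg.norm(wedge(*xs[2:])) + 1e-12)
assert n_all <= np.prod([np.linalg.norm(v) for v in xs]) + 1e-12
```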

Definition 14

For a matrix \(A\in \mathbb{R }^{d\times d}\) define its \(j\)-th exterior power \(\bigwedge \nolimits ^{j}A\in \mathbb{R }^{D\times D},\ D=\binom{d}{j}\), by its action on exterior products

$$\begin{aligned} \bigwedge \nolimits ^{j}A({x}_{1}\wedge \cdots \wedge {x}_{j})={Ax}_{1}\wedge \cdots \wedge {Ax}_{j} \quad \text{ for}\; x_1,\ldots ,x_{j} \in \mathbb{R }^d. \end{aligned}$$

As an immediate consequence of the definition we obtain that the column of \(\bigwedge \nolimits ^{j}A\) belonging to the index \(\ell _1 ,\ldots , \ell _j\) is given by

$$\begin{aligned} \left(\bigwedge \nolimits ^{j}A\right)_{\,{\displaystyle \cdot }\; \ell _1,\ldots ,\ell _j}= \bigwedge \nolimits ^{j}A\left({e}_{\ell _1}\wedge \cdots \wedge {e}_{\ell _j}\right) ={A}_{\,{\displaystyle \cdot }\ell _1}\wedge \cdots \wedge {A}_{\,{\displaystyle \cdot }\ell _j}. \end{aligned}$$
(5.4)

By the definition of the exterior product (5.1) we obtain that the element \(i_1,\ldots ,i_j\) of column \(\ell _1,\ldots ,\ell _j\) is

$$\begin{aligned} \left(\bigwedge \nolimits ^{j}A\right)_{i_1,\ldots ,i_j,\,\ell _1,\ldots ,\ell _j}= \det \begin{pmatrix} A_{i_1\ell _1} & \cdots & A_{i_1\ell _j}\\ \vdots & & \vdots \\ A_{i_j\ell _1} & \cdots & A_{i_j\ell _j} \end{pmatrix}. \end{aligned}$$
(5.5)

The exterior power has the following properties:

Lemma 15

For \(A,B\in \mathbb{R }^{d\times d}\) the following holds:

  1. (i)

    \(\bigwedge \nolimits ^{j}\left(AB\right)=\bigwedge \nolimits ^{j}\left(A\right)\bigwedge \nolimits ^{j}\left(B\right)\).

  2. (ii)

    \(\bigwedge \nolimits ^{j}\left(A^T\right)= \left(\bigwedge \nolimits ^{j}A\right)^T\),

  3. (iii)

    \(\bigwedge \nolimits ^{j}\left(A^{-1}\right)= \left(\bigwedge \nolimits ^{j}A\right)^{-1}\), if \(A\) is nonsingular.
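The exterior power can be assembled directly from the minors in (5.5), and the rules of Lemma 15 then follow from properties of determinants. A NumPy sketch (the function name `ext_power` is our own):

```python
import itertools

import numpy as np

def ext_power(A, j):
    """j-th exterior power: entry (5.5) is the j x j minor of A taken from
    rows i_1..i_j and columns l_1..l_j, both tuples in Ord(j,d), lex order."""
    d = A.shape[0]
    idx = list(itertools.combinations(range(d), j))
    return np.array([[np.linalg.det(A[np.ix_(list(r), list(c))]) for c in idx]
                     for r in idx])

rng = np.random.default_rng(3)
A, B = rng.standard_normal((2, 4, 4))
j = 2

# Lemma 15 (i)-(iii)
assert np.allclose(ext_power(A @ B, j), ext_power(A, j) @ ext_power(B, j))
assert np.allclose(ext_power(A.T, j), ext_power(A, j).T)
assert np.allclose(ext_power(np.linalg.inv(A), j),
                   np.linalg.inv(ext_power(A, j)))
```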

Finally, we note the transformation rule for exterior products.

Lemma 16

If \({x}_1,\ldots ,{x}_{d}\) and \({y}_1,\ldots ,{y}_{j}\) are vectors in \(\mathbb{R }^d\) that satisfy

$$\begin{aligned} y_\ell =\sum _{i=1}^d A_{i\ell }x_i\quad \text{ for}\ \ell =1,\ldots ,j,\ A_{i\ell }\in \mathbb{R }, \end{aligned}$$

then

$$\begin{aligned} {y}_{1}\wedge \cdots \wedge {y}_{j}=\sum _{(\ell _1,\ldots ,\ell _j)\in Ord(j,d)} \left(\bigwedge \nolimits ^{j}A\right)_{\ell _1,\ldots ,\ell _j, {{\small 1}\!\!1}_j}({x}_{\ell _1}\wedge \cdots \wedge {x}_{\ell _j}). \end{aligned}$$

 

Proof

With \(X=[{x}_1,\ldots ,{x}_{d}]\) we obtain

$$\begin{aligned} {y}_{1}\wedge \cdots \wedge {y}_{j}={X\,A}_{{\,{\displaystyle \cdot } 1}}\wedge \cdots \wedge {X\,A}_{{\,{\displaystyle \cdot } j}} ={\textstyle \bigwedge ^j} X\left({A}_{{\,{\displaystyle \cdot } 1}}\wedge \cdots \wedge {A}_{{\,{\displaystyle \cdot } j}}\right). \end{aligned}$$

Using (5.1) and (5.4) we find

$$\begin{aligned} {y}_{1}\wedge \cdots \wedge {y}_{j}=&\displaystyle \sum \limits _{(\ell _1,\ldots ,\ell _j)\in Ord(j,d)} \left( \bigwedge \nolimits ^{j}X \right)_{{\displaystyle \cdot }\;\ell _1,\ldots ,\ell _j} \left({A}_{{\,{\displaystyle \cdot } 1}}\wedge \cdots \wedge {A}_{{\,{\displaystyle \cdot } j}}\right)_{\ell _1,\ldots ,\ell _j} \\ =&\displaystyle \sum \limits _{(\ell _1,\ldots ,\ell _j)\in Ord(j,d)} \left( \bigwedge \nolimits ^{j}A \right)_{\ell _1,\ldots ,\ell _j, {{\small 1}\!\!1}_j} {X}_{{\,{\displaystyle \cdot } \ell _1}}\wedge \cdots \wedge {X}_{{\,{\displaystyle \cdot } \ell _j}}. \end{aligned}$$

This proves the assertion. \(\square \)

1.2 Exterior product and QR-decomposition

As noted without proof in [20], the \(QR\)-decomposition is consistent with the formation of exterior powers.

Lemma 17

Let \(A\in \mathbb{R }^{d\times d}\) be nonsingular and let \(A=QR\) be its unique \(QR\)-decomposition (with positive diagonal entries for \(R\)). Then \(\bigwedge \nolimits ^{j}A=\left(\bigwedge \nolimits ^{j}Q\right)\,\left(\bigwedge \nolimits ^{j}R\right)\) is the unique \(QR\)-decomposition of \(\bigwedge \nolimits ^{j}A\). In particular, the diagonal elements of \(R\left(\bigwedge \nolimits ^{j}A\right)\) are given by

$$\begin{aligned} R_{i_1,\ldots ,i_j,i_1,\ldots ,i_j}\left(\bigwedge \nolimits ^{j}A\right)= \prod ^j_{k=1}R_{i_k i_k}, \quad (i_1,\ldots ,i_j)\in Ord(j,d). \end{aligned}$$
(5.6)

 

Proof

In view of Lemma 15 it is sufficient to show that \(\bigwedge \nolimits ^{j}Q\) is orthogonal and \(\bigwedge \nolimits ^{j}R\) is upper triangular with positive diagonal entries.

Lemma 15 shows the orthogonality of \(\bigwedge \nolimits ^{j}Q\),

$$\begin{aligned} \left(\bigwedge \nolimits ^{j}Q\right)^T\bigwedge \nolimits ^{j}Q=\left(\bigwedge \nolimits ^{j}Q^T\right)\bigwedge \nolimits ^{j}Q =\bigwedge \nolimits ^{j}\left(Q^TQ\right)=\bigwedge \nolimits ^{j}I_d=I_{D}. \end{aligned}$$

Next note that according to (5.5),

$$\begin{aligned} \left(\bigwedge \nolimits ^{j}R\right)_{i_1,\ldots ,i_j,\,\ell _1,\ldots ,\ell _j}= \det \begin{pmatrix} R_{i_1\ell _1} & \cdots & R_{i_1\ell _j}\\ \vdots & & \vdots \\ R_{i_j\ell _1} & \cdots & R_{i_j\ell _j} \end{pmatrix}. \end{aligned}$$
(5.7)

If \(({i}_1,\ldots ,{i}_{j})>({\ell }_1,\ldots ,{\ell }_{j})\) then there exists an index \(\hat{k}\) such that \(i_k=\ell _k\) for \(k=1,\ldots ,\hat{k}-1\) and \(i_{\hat{k}}>\ell _{\hat{k}}\). Hence \(R_{i_{\hat{k}}\ell _{\hat{k}}}=0\). Since \(\ell _k<\ell _{\hat{k}}\) for \(k=1,\ldots ,\hat{k}-1\) and \(i_n>i_{\hat{k}}\) for \(n=\hat{k}+1,\ldots ,j\) we arrive at

$$\begin{aligned} i_n>\ell _k\quad \text{ for}\ k=1,\ldots ,\hat{k}-1\ \quad \text{ and}\quad \ n=\hat{k}+1,\ldots ,j, \end{aligned}$$

and therefore,

$$\begin{aligned} R_{i_n \ell _k}=0\quad \text{ for}\ k=1,\ldots ,\hat{k}-1\ \quad \text{ and}\quad \ n=\hat{k}+1,\ldots ,j. \end{aligned}$$

Thus the first \(\hat{k}\) columns of the matrix in (5.7) are of the form

$$\begin{aligned} (R_{i_1 \ell _k},\ldots ,R_{i_{\hat{k}-1}\ell _{k}},0,\ldots ,0)^T \quad \text{ for}\ k=1,\ldots ,\hat{k}, \end{aligned}$$

and hence linearly dependent, so the determinant in (5.7) vanishes for \(({i}_1,\ldots ,{i}_{j})>({\ell }_1,\ldots ,{\ell }_{j})\), i.e. \(\bigwedge \nolimits ^{j}R\) is upper triangular. Both equation (5.6) and the positivity of the diagonal elements then follow from (5.7). \(\square \)
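Lemma 17 is easy to confirm numerically. The sketch below builds \(\bigwedge ^{2}A\) from the minors in (5.5) (the helper `ext_power` is our own), normalizes NumPy's QR factors to a positive diagonal, and checks both the triangular structure and formula (5.6):

```python
import itertools

import numpy as np

def ext_power(A, j):
    """Entries (5.5): j x j minors of A over Ord(j,d) x Ord(j,d), lex order."""
    d = A.shape[0]
    idx = list(itertools.combinations(range(d), j))
    return np.array([[np.linalg.det(A[np.ix_(list(r), list(c))]) for c in idx]
                     for r in idx])

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
Q, R = np.linalg.qr(A)
s = np.sign(np.diag(R))
Q, R = Q * s, (R.T * s).T              # unique QR: R has positive diagonal

QA, RA = ext_power(Q, 2), ext_power(R, 2)
assert np.allclose(QA @ RA, ext_power(A, 2))   # it factors the exterior power
assert np.allclose(QA.T @ QA, np.eye(6))       # QA is orthogonal
assert np.allclose(np.tril(RA, -1), 0)         # RA is upper triangular
assert np.all(np.diag(RA) > 0)                 # with positive diagonal
# (5.6): each diagonal entry is the product R_{i_1 i_1} R_{i_2 i_2}
pairs = list(itertools.combinations(range(4), 2))
assert np.allclose(np.diag(RA), [R[i, i] * R[k, k] for i, k in pairs])
```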


About this article

Cite this article

Beyn, WJ., Lust, A. Error analysis of a hybrid method for computing Lyapunov exponents. Numer. Math. 123, 189–217 (2013). https://doi.org/10.1007/s00211-012-0486-4
