Abstract
In a previous paper (Beyn and Lust in Numer Math 113:357–375, 2009) we suggested a numerical method for computing all Lyapunov exponents of a dynamical system by spatial integration with respect to an ergodic measure. The method extended an earlier approach of Aston and Dellnitz (Comput Methods Appl Mech Eng 170:223–237, 1999) for the largest Lyapunov exponent by integrating the diagonal entries from the \(QR\)-decomposition of the Jacobian for an iterated map. In this paper we provide an asymptotic error analysis of the method for the case in which all Lyapunov exponents are simple. We employ the Oseledec multiplicative ergodic theorem and impose certain hyperbolicity conditions on the invariant subspaces that belong to neighboring exponents. The resulting error expansion shows that one step of extrapolation is enough to obtain exponential decay of errors.
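The integrand of the method (the logarithms of the diagonal entries of the \(QR\)-decomposition of the Jacobian along an orbit) can be illustrated with a short Python sketch. This is our own illustration, not the authors' hybrid method: it replaces the spatial integral with a time average along a single orbit, and the Hénon map serves only as an assumed test system.

```python
import numpy as np

def henon(x, a=1.4, b=0.3):
    """One step of the Henon map."""
    return np.array([1.0 - a * x[0]**2 + x[1], b * x[0]])

def henon_jac(x, a=1.4, b=0.3):
    """Jacobian of the Henon map at x."""
    return np.array([[-2.0 * a * x[0], 1.0],
                     [b, 0.0]])

def lyapunov_qr(x0, n=20000, burn=1000):
    """Approximate all Lyapunov exponents by averaging the logs of the
    diagonal entries of successive QR factorizations of Dg(x_k) Q_k."""
    x = np.array(x0, dtype=float)
    for _ in range(burn):              # discard the transient
        x = henon(x)
    Q, logs = np.eye(2), np.zeros(2)
    for _ in range(n):
        Q, R = np.linalg.qr(henon_jac(x) @ Q)
        s = np.sign(np.diag(R))        # enforce positive diagonal of R
        Q, R = Q * s, (R.T * s).T
        logs += np.log(np.diag(R))
        x = henon(x)
    return logs / n
```

Since \(|\det Dg|=b\) for the Hénon map, the two computed exponents must sum to \(\log b\), which gives a cheap consistency check.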
Notes
\(Ord(j,d)=\{\delta \in \{1,\ldots ,d\}^{\{1,\ldots ,j\}}\,:\, \delta \ \text{strictly monotone}\}\); see also Appendix B.
References
Allen, L., Bridges, T.J.: Numerical exterior algebra and the compound matrix method. Numer. Math. 92, 197–232 (2002)
Aston, P.J., Dellnitz, M.: Computation of the Lyapunov exponent via spatial integration with application to blowout bifurcations. Comput. Methods Appl. Mech. Eng. 170, 223–237 (1999)
Aston, P.J., Dellnitz, M.: Computation of the dominant Lyapunov exponent via spatial integration using matrix norms. Proc. Roy. Soc. Lond. A 459, 2933–2955 (2003)
Aston, P.J., Dellnitz, M.: Computation of the dominant Lyapunov exponent via spatial integration using vector norms. In: Fiedler, B., Gröger, K., Sprekels, J. (eds.) Proceedings of the Equadiff 99, pp. 1015–1020. World Scientific, Singapore (2000)
Barreira, L., Pesin, Y.B.: Lyapunov Exponents and Smooth Ergodic Theory. University Lecture Series, vol. 23. American Mathematical Society, Providence (2002)
Beyn, W.-J., Lust, A.: A hybrid method for computing Lyapunov exponents. Numer. Math. 113, 357–375 (2009)
Bridges, T.J., Reich, S.: Computing Lyapunov exponents on a Stiefel manifold. Physica D 156, 219–238 (2001)
Dellnitz, M., Froyland, G., Junge, O.: The algorithms behind GAIO: set oriented numerical methods for dynamical systems. In: Fiedler, B. (ed.) Ergodic Theory, Analysis and Efficient Simulation of Dynamical Systems, p. 174. Springer, Berlin (2001)
Dellnitz, M., Junge, O.: Set oriented numerical methods for dynamical systems. In: Fiedler, B., Iooss, G., Kopell, N. (eds.) Handbook of Dynamical Systems II: Towards Applications, p. 264. World Scientific, Singapore (2002)
Dieci, L., van Vleck, E.S.: Computation of a few Lyapunov exponents for continuous and discrete dynamical systems. Appl. Num. Math. 17, 275–291 (1995)
Dieci, L., van Vleck, E.S.: On the error in computing Lyapunov exponents by QR methods. Numer. Math. 101(4), 619–642 (2005)
Dieci, L., van Vleck, E.S.: Lyapunov and Sacker-Sell spectral intervals. J. Dyn. Differ. Equ. 19(2), 265–293 (2007)
Eckman, J.-P., Ruelle, D.: Ergodic theory of chaos and strange attractors. Rev. Mod. Phys. 57(3), 617–656 (1985)
Fischer, G.: Lineare Algebra, 10th edn. Vieweg (1995)
Froyland, G., Judd, K., Mees, A.I., Murano, K.: Lyapunov exponents and triangulation. In: Proceedings of the 1993 International Symposium on Nonlinear Theory and its Applications, Hawaii, pp. 281–286 (1993)
Froyland, G., Judd, K., Mees, A.I.: Estimation of Lyapunov exponents of dynamical systems using a spatial average. Phys. Rev. E 51(4), 2844–2855 (1995)
Gantmacher, F.R.: The Theory of Matrices. Chelsea, New York (1971)
Geist, K., Parlitz, U., Lauterborn, W.: Comparison of different methods for computing Lyapunov exponents. Prog. Theor. Phys. 83(5), 875–893 (1990)
Gröbner, W.: Matrizenrechnung. Hochschultaschenbücher-Verlag (1966)
Johnson, R.A., Palmer, K.J., Sell, G.R.: Ergodic properties of linear dynamical systems. SIAM J. Math. Anal. 18(1), 1–33 (1987)
Katok, A., Hasselblatt, B.: Introduction to the Modern Theory of Dynamical Systems. Cambridge University Press, Cambridge (1995)
Ljapunov, A.M.: The General Problem of the Stability of Motion. Comm. Soc. Math., Kharkow (1892, in Russian); reprinted in English, Taylor & Francis, London (1992)
Lust, A.: Eine hybride Methode zur Berechnung von Liapunow-Exponenten. PhD thesis, Universität Bielefeld (2006). http://www.math.uni-bielefeld.de/~beyn/AG_Numerik/html/en/theses/
Oseledec, V.: A multiplicative ergodic theorem. Ljapunov characteristic numbers for dynamical systems. Trans. Moscow Math. Soc. 19, 197–231 (1968)
Pilyugin, S.Y.: Introduction to Structurally Stable Systems of Differential Equations. Birkhäuser Verlag, Boston (1992)
Pollicott, M.: Lectures on Ergodic Theory and Pesin Theory on Compact Manifolds. Cambridge University Press, Cambridge (1993)
Sacker, R.J., Sell, G.R.: A spectral theory for linear differential systems. J. Differ. Equ. 27(3), 320–358 (1978)
Walters, P.: An Introduction to Ergodic Theory. Springer, Berlin (2000)
Additional information
W.-J. Beyn and A. Lust were supported by CRC 701 ‘Spectral Analysis and Topological Methods in Mathematics’, Bielefeld University.
The paper is mainly based on the PhD thesis [23] of A. Lust.
This is a companion article containing the error analysis of the numerical method suggested in doi:10.1007/s00211-009-0236-4.
Appendices
Appendix A: Oseledec’s theorem
We state a special version of the multiplicative ergodic theorem of Oseledec (see [24]) which may be found e.g. in [26, 28].
Theorem 9
([24]) Let \(g\) be a \(C^1\)-diffeomorphism of a compact and smooth Riemannian manifold \(M\) of dimension \(d\) and let \(\mu \) be an ergodic measure of \(g\) on \(M\). Then there exists a Borel set \(M_{\mu }\subset M\) such that \(g(M_{\mu })= M_\mu , \mu (M_{\mu })=1\), and the following properties hold:
(i) There exist natural numbers \(d_1,\ldots ,d_s\) with \(s\le d\) and \(\sum _{j=1}^s d_j=d\).
(ii) For every \(x\in M_{\mu }\) there exists a measurable decomposition of the tangent spaces \(T_x M=\bigoplus \nolimits _{j=1}^s W^j(x)\) such that \(\dim W^j(x)=d_j\) and \(D\,g(x)\left(W^j(x)\right)=W^j(g(x))\).
(iii) There are numbers \(\lambda _1>\lambda _2>\cdots >\lambda _s\) such that
$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n} \log \Vert D g^n(x)v\Vert =\lambda _j \end{aligned}$$for all \(v\in \bigoplus \nolimits _{i=j}^sW^i(x)\) with \(v\notin \bigoplus \nolimits _{i=j+1}^{s} W^i(x)\) and for all \(x\in M_{\mu }\).
Remarks
1. The points in \(M_{\mu }\) are called (Lyapunov-)regular and the decomposition \(T_x M=\bigoplus \nolimits _{j=1}^s W^j(x)\) into invariant subspaces is called the Oseledec decomposition of \(TM\).
2. The number \(\lambda _j\) is called the \(j\)-th Lyapunov exponent (or characteristic number) with respect to the ergodic measure \(\mu \). The number \(d_j\) denotes the multiplicity of \(\lambda _j\).
3. The largest Lyapunov exponent \(\lambda _1\) can also be expressed in terms of matrix norms as follows (see [26]):
$$\begin{aligned} \lambda _1=\lim _{n\rightarrow \infty } \frac{1}{n} \log \Vert D\,g^n(x)\Vert \quad \text{for } \mu \text{-a.e. } x\in M. \end{aligned}$$
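This norm formula is easy to test numerically. The following sketch is our illustration (the Hénon map is assumed as test system; it is not part of the original paper): it accumulates logarithms of operator norms along an orbit, rescaling the partial product at every step so that it never overflows.

```python
import numpy as np

def henon(x, a=1.4, b=0.3):
    return np.array([1.0 - a * x[0]**2 + x[1], b * x[0]])

def henon_jac(x, a=1.4, b=0.3):
    return np.array([[-2.0 * a * x[0], 1.0],
                     [b, 0.0]])

def lambda1_norm(x0, n=20000, burn=1000):
    """Approximate lambda_1 = lim (1/n) log ||Dg^n(x)||.
    The partial product is renormalized each step; the accumulated logs
    of the scaling factors equal log ||Dg^n(x)|| exactly."""
    x = np.array(x0, dtype=float)
    for _ in range(burn):
        x = henon(x)
    P, acc = np.eye(2), 0.0
    for _ in range(n):
        P = henon_jac(x) @ P
        s = np.linalg.norm(P, 2)   # spectral norm of the partial product
        acc += np.log(s)
        P /= s
        x = henon(x)
    return acc / n
```

For the Hénon map with the standard parameters this returns a value near the commonly quoted \(\lambda _1\approx 0.42\).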
Appendix B: Exterior products
For the convenience of the reader we summarize in this appendix several results from the theory of exterior products that are used in this paper. Most of the results can be found in [1, 17, 19], but a few details have been added that are important for our estimates.
1.1 Coordinate representation of exterior products
A mapping \(\delta :\{1,\ldots ,j\}\rightarrow \{1,\ldots ,d\}\) with \(j\le d\) is called strictly monotone if \(\delta (1)<\delta (2)<\cdots <\delta (j)\) holds. By \(Ord(j,d)\) we denote the set of all strictly monotone mappings,
$$\begin{aligned} Ord(j,d)=\left\{ \delta \in \{1,\ldots ,d\}^{\{1,\ldots ,j\}}\,:\,\delta \ \text{strictly monotone}\right\} . \end{aligned}$$
Let \(D=\# Ord(j,d)\) and note that \(D=\binom{d}{j}\). We will frequently identify elements \(i\in Ord(j,d)\) with tuples \(i=(i_1,\ldots ,i_j)\) and simply write \(i=i_1,\ldots ,i_j\), where \(1\le i_1<\cdots <i_j\le d\). In \(Ord(j,d)\) we use the lexicographical order, written as \(\sigma <\delta \) and meaning that for some \(\ell \in \{1,\ldots ,j\}\) we have
$$\begin{aligned} \sigma (i)=\delta (i)\ \text{for } i=1,\ldots ,\ell -1 \quad \text{and}\quad \sigma (\ell )<\delta (\ell ). \end{aligned}$$
The smallest element is \(\delta _1=(1,\ldots ,j)={1\!\!1}_j\) and the largest element is \(\delta _D=(d-j+1,\ldots ,d)\).
Given vectors \(x_1,\ldots ,x_j\in \mathbb{R }^d\) with coordinates \(x_\ell =(x_{1\ell },\ldots ,x_{d\ell })^T\) we denote by \(X=[x_1,\ldots ,x_j]\) the \(d\times j\)-matrix with columns \(x_1,\ldots ,x_j\), i.e.
$$\begin{aligned} X=\begin{pmatrix} x_{11} &{} \cdots &{} x_{1j}\\ \vdots &{} &{} \vdots \\ x_{d1} &{} \cdots &{} x_{dj} \end{pmatrix}. \end{aligned}$$
By \(X_{i_1,\ldots ,i_j}\) we denote the minor of \(X\) that belongs to the rows with indices \(i_1,\ldots ,i_j\), i.e.
$$\begin{aligned} X_{i_1,\ldots ,i_j}=\det \begin{pmatrix} x_{i_1 1} &{} \cdots &{} x_{i_1 j}\\ \vdots &{} &{} \vdots \\ x_{i_j 1} &{} \cdots &{} x_{i_j j} \end{pmatrix}. \end{aligned}$$
The exterior product of vectors \(x_1,\ldots ,x_j\in \mathbb{R }^d\) is defined as the vector
$$\begin{aligned} x_1\wedge \cdots \wedge x_j\in \mathbb{R }^{D} \end{aligned}$$
with coordinates
$$\begin{aligned} \left(x_{1}\wedge \cdots \wedge x_{j}\right)_{i_1,\ldots ,i_j}=X_{i_1,\ldots ,i_j}, \quad (i_1,\ldots ,i_j)\in Ord(j,d). \qquad \qquad (5.1) \end{aligned}$$
From the Cartesian basis \(\{{e}_1,\ldots ,{e}_{d}\}\) in \(\mathbb{R }^d\) we obtain
$$\begin{aligned} \left\{ e_{i_1}\wedge \cdots \wedge e_{i_j}\,:\,(i_1,\ldots ,i_j)\in Ord(j,d)\right\} \end{aligned}$$
as a basis of \(\mathbb{R }^{D}\).
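As a concrete realization (our illustration assuming NumPy; the helper name `wedge` is ours), the coordinates of \(x_1\wedge \cdots \wedge x_j\) in this basis are the ordered \(j\times j\) minors of \(X=[x_1,\ldots ,x_j]\), enumerated in the lexicographic order of \(Ord(j,d)\):

```python
import numpy as np
from itertools import combinations

def wedge(*vectors):
    """Exterior product x_1 ^ ... ^ x_j as a vector in R^D, D = C(d, j):
    the ordered j x j minors of X, in lexicographic order of the rows."""
    X = np.column_stack([np.asarray(v, dtype=float) for v in vectors])
    d, j = X.shape
    return np.array([np.linalg.det(X[list(rows), :])
                     for rows in combinations(range(d), j)])
```

For example, `wedge([1, 0, 0], [0, 1, 0])` in \(\mathbb{R }^3\) gives the first basis vector of \(\mathbb{R }^3=\mathbb{R }^{\binom{3}{2}}\), and swapping the arguments flips the sign.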
The coordinate representation of exterior products has standard properties.
Lemma 10
For all \({x}_1,\ldots ,{x}_{j}, v\in \mathbb{R }^d\) and \(\alpha ,\beta \in \mathbb{R }\) the following holds:
(i) For any permutation \(\pi \in S_j\),
$$\begin{aligned} {x}_{\pi (1)}\wedge \cdots \wedge {x}_{\pi (j)}= \mathrm{sign}(\pi )\,{x}_{1}\wedge \cdots \wedge {x}_{j}. \end{aligned}$$
(ii) For \(\ell =1,\ldots ,j\),
$$\begin{aligned}&{x}_{1}\wedge \cdots \wedge {x}_{\ell -1}\wedge (\alpha x_\ell +\beta v) \wedge {x}_{\ell +1}\wedge \cdots \wedge {x}_{j}\\&\qquad =\alpha ({x}_{1}\wedge \cdots \wedge {x}_{\ell }\wedge \cdots \wedge x_j)+ \beta (x_1\wedge \cdots \wedge \,v\,\wedge \cdots \wedge x_j). \end{aligned}$$
(iii) The vectors \({x}_1,\ldots ,{x}_{j}\) are linearly dependent if and only if \({x}_{1}\wedge \cdots \wedge {x}_{j}=0\).
Let \(\langle \cdot ,\cdot \rangle _k\) denote the standard scalar product in \(\mathbb{R }^k \) with norm \(\Vert x\Vert =\sqrt{\langle x,\,x\rangle _k}\) for \(x\in \mathbb{R }^k\).
Theorem 11
For two decomposable vectors
$$\begin{aligned} x=x_1\wedge \cdots \wedge x_j,\qquad y=y_1\wedge \cdots \wedge y_j, \qquad x_i,y_i\in \mathbb{R }^d, \end{aligned}$$
the scalar product can be written as
$$\begin{aligned} \langle x,\,y\rangle _D=\det \left(Z\right), \end{aligned}$$
where \(Z\in \mathbb{R }^{j\times j}\) is defined by
$$\begin{aligned} Z_{k\ell }=\langle x_k,\,y_\ell \rangle _d,\quad k,\ell =1,\ldots ,j. \end{aligned}$$
Proof
We have \(Z=X^T Y\) for the matrices \(X=[{x}_1,\ldots ,{x}_{j}]\) and \(Y=[{y}_1,\ldots ,{y}_{j}]\), thus the multiplication theorem for determinants [14] (the Cauchy–Binet formula) shows
$$\begin{aligned} \det \left(Z\right)=\det \left(X^TY\right) =\sum _{(i_1,\ldots ,i_j)\in Ord(j,d)} X_{i_1,\ldots ,i_j}\,Y_{i_1,\ldots ,i_j} =\langle x,\,y\rangle _D. \end{aligned}$$
\(\square \)
An immediate consequence of this theorem is the following result.
Corollary 12
For any set of vectors \({x}_1,\ldots ,{x}_{j}\in \mathbb{R }^d\),
$$\begin{aligned} \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert =\sqrt{\det \left(X^TX\right)}; \end{aligned}$$
in particular, \(\Vert {x}_{1}\wedge \cdots \wedge {x}_{d}\Vert =\left| \det \left(X\right)\right| \) in case \(j=d\).
Recall that \(\Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert =\sqrt{\det \left(X^TX\right)}\), the square root of the Gramian determinant, equals the \(j\)-dimensional volume of the parallelepiped spanned by the vectors \({x}_1,\ldots ,{x}_{j}\) [17, §9.5].
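Theorem 11 and Corollary 12 can be verified numerically. The sketch below is our illustration (assuming NumPy and random test vectors): it checks \(\langle x,\,y\rangle _D=\det (X^TY)\) and \(\Vert x_1\wedge \cdots \wedge x_j\Vert =\sqrt{\det (X^TX)}\).

```python
import numpy as np
from itertools import combinations

def wedge_cols(X):
    """Exterior product of the columns of X: the ordered j x j minors."""
    d, j = X.shape
    return np.array([np.linalg.det(X[list(rows), :])
                     for rows in combinations(range(d), j)])

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
Y = rng.standard_normal((5, 3))

inner = wedge_cols(X) @ wedge_cols(Y)   # <x1^x2^x3, y1^y2^y3> in R^10
gram = np.linalg.det(X.T @ Y)           # det Z with Z_kl = <x_k, y_l>
vol = np.linalg.norm(wedge_cols(X))     # ||x1 ^ x2 ^ x3||
```

Both identities hold exactly in exact arithmetic, so the two pairs of numbers agree up to rounding error.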
Angles between vectors and subspaces can also be described in terms of exterior products. Consider, for example, linearly independent vectors \(x_1,\ldots ,x_j \in \mathbb{R }^d\) and let \(x\in \mathbb{R }^d\) be arbitrary. Decompose \(x=x^p+x^o\) where \(x^p\in \mathrm{span}\{{x}_1,\ldots ,{x}_{j}\}\) and \(x^o\bot \,\mathrm{span}\{{x}_1,\ldots ,{x}_{j}\}\). Then the volume of the parallelepiped spanned by \({x}_1,\ldots ,{x}_{j}, x\) is
$$\begin{aligned} \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\wedge x\Vert =\Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert \,\Vert x^o\Vert , \end{aligned}$$
or equivalently,
$$\begin{aligned} \Vert x^o\Vert =\frac{\Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\wedge x\Vert }{\Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert }. \end{aligned}$$
This quotient is the length of the component of \(x\) orthogonal to \(\mathrm{span}\{{x}_1,\ldots ,{x}_{j}\}\). In case \(\Vert x\Vert =1\) we obtain the sine of the angle between \(x\) and the subspace:
$$\begin{aligned} \sin \angle \left(x,\mathrm{span}\{{x}_1,\ldots ,{x}_{j}\}\right) =\frac{\Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\wedge x\Vert }{\Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert }. \end{aligned}$$
We also note the obvious estimate
$$\begin{aligned} \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\wedge x\Vert \le \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert \,\Vert x\Vert , \qquad \qquad (5.3) \end{aligned}$$
which has the following generalization.
Lemma 13
(Generalized Hadamard inequality). For vectors \({x}_1,\ldots ,{x}_{j}\in \mathbb{R }^d\) with \(j\le d\) and \(k\in \{1,\ldots ,j\}\),
(i) \(\displaystyle \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert \le \Vert {x}_{1}\wedge \cdots \wedge {x}_{k}\Vert \,\Vert {x}_{k+1}\wedge \cdots \wedge {x}_{j}\Vert .\)
(ii) \(\displaystyle \Vert {x}_{1}\wedge \cdots \wedge {x}_{j}\Vert \le \prod \nolimits _{i=1}^j\Vert x_i\Vert \).
The proof of (i) may be found in [17, §9.5] while (ii) follows from (5.3) by induction.
Definition 14
For a matrix \(A\in \mathbb{R }^{d\times d}\) define its \(j\)-th exterior power \(\bigwedge \nolimits ^{j}A\in \mathbb{R }^{D\times D}\), \(D=\binom{d}{j}\), by its action on exterior products:
$$\begin{aligned} \left(\bigwedge \nolimits ^{j}A\right)\left(x_1\wedge \cdots \wedge x_j\right) =Ax_1\wedge \cdots \wedge Ax_j,\quad x_1,\ldots ,x_j\in \mathbb{R }^d. \end{aligned}$$
As an immediate consequence of the definition we obtain that the column of \(\bigwedge \nolimits ^{j}A\) belonging to the index \(\ell _1 ,\ldots , \ell _j\) is given by
$$\begin{aligned} Ae_{\ell _1}\wedge \cdots \wedge Ae_{\ell _j} =a_{\ell _1}\wedge \cdots \wedge a_{\ell _j}, \end{aligned}$$
where \(a_\ell \) denotes the \(\ell \)-th column of \(A\).
By the definition of the exterior product (5.1) we obtain that the element \(i_1,\ldots ,i_j\) of column \(\ell _1,\ldots ,\ell _j\) is
$$\begin{aligned} \left(\bigwedge \nolimits ^{j}A\right)_{(i_1,\ldots ,i_j),(\ell _1,\ldots ,\ell _j)} =\det \begin{pmatrix} A_{i_1\ell _1} &{} \cdots &{} A_{i_1\ell _j}\\ \vdots &{} &{} \vdots \\ A_{i_j\ell _1} &{} \cdots &{} A_{i_j\ell _j} \end{pmatrix}. \qquad \qquad (5.5) \end{aligned}$$
The exterior power has the following properties:
Lemma 15
For \(A,B\in \mathbb{R }^{d\times d}\) the following holds:
(i) \(\bigwedge \nolimits ^{j}\left(AB\right)=\bigwedge \nolimits ^{j}\left(A\right)\,\bigwedge \nolimits ^{j}\left(B\right)\).
(ii) \(\bigwedge \nolimits ^{j}\left(A^T\right)= \left(\bigwedge \nolimits ^{j}A\right)^T\).
(iii) \(\bigwedge \nolimits ^{j}\left(A^{-1}\right)= \left(\bigwedge \nolimits ^{j}A\right)^{-1}\), if \(A\) is nonsingular.
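Numerically, \(\bigwedge \nolimits ^{j}A\) is the \(j\)-th compound matrix of \(A\), built from \(j\times j\) minors as in (5.5). The following sketch (our illustration, assuming NumPy) constructs it and makes the properties of Lemma 15 directly testable.

```python
import numpy as np
from itertools import combinations

def exterior_power(A, j):
    """j-th compound matrix: entry ((i_1..i_j),(l_1..l_j)) is the minor of A
    with rows i_1..i_j and columns l_1..l_j, indices in lexicographic order."""
    idx = [list(c) for c in combinations(range(A.shape[0]), j)]
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx]
                     for r in idx])

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
```

For random \(4\times 4\) matrices, properties (i)-(iii) hold up to rounding error for the \(6\times 6\) second compounds.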
Finally, we note the transformation rule for exterior products.
Lemma 16
If \({x}_1,\ldots ,{x}_{d}\) and \({y}_1,\ldots ,{y}_{j}\) are vectors in \(\mathbb{R }^d\) that satisfy
$$\begin{aligned} y_k=\sum _{i=1}^{d} c_{ik}\,x_i,\quad k=1,\ldots ,j, \end{aligned}$$
then, with \(C=(c_{ik})\in \mathbb{R }^{d\times j}\),
$$\begin{aligned} y_1\wedge \cdots \wedge y_j =\sum _{(i_1,\ldots ,i_j)\in Ord(j,d)} C_{i_1,\ldots ,i_j}\; x_{i_1}\wedge \cdots \wedge x_{i_j}. \end{aligned}$$
Proof
With \(X=[{x}_1,\ldots ,{x}_{d}]\) we obtain \([y_1,\ldots ,y_j]=XC\) and hence, by multilinearity and antisymmetry (Lemma 10, terms with a repeated factor vanish),
$$\begin{aligned} y_1\wedge \cdots \wedge y_j&=\sum _{i_1=1}^{d}\cdots \sum _{i_j=1}^{d} c_{i_1 1}\cdots c_{i_j j}\; x_{i_1}\wedge \cdots \wedge x_{i_j}\\&=\sum _{(i_1,\ldots ,i_j)\in Ord(j,d)} \Big (\sum _{\pi \in S_j}\mathrm{sign}(\pi )\, c_{i_{\pi (1)}1}\cdots c_{i_{\pi (j)}j}\Big )\, x_{i_1}\wedge \cdots \wedge x_{i_j}\\&=\sum _{(i_1,\ldots ,i_j)\in Ord(j,d)} C_{i_1,\ldots ,i_j}\; x_{i_1}\wedge \cdots \wedge x_{i_j}. \end{aligned}$$
This proves the assertion. \(\square \)
1.2 Exterior product and QR-decomposition
As noted in [20] (without proof), the \(QR\)-decomposition is consistent with the formation of exterior powers.
Lemma 17
Let \(A\in \mathbb{R }^{d\times d}\) be nonsingular and let \(A=QR\) be its unique \(QR\)-decomposition (with positive diagonal entries for \(R\)). Then \(\bigwedge \nolimits ^{j}A=\left(\bigwedge \nolimits ^{j}Q\right)\,\left(\bigwedge \nolimits ^{j}R\right)\) is the unique \(QR\)-decomposition of \(\bigwedge \nolimits ^{j}A\). In particular, the diagonal elements of \(R\left(\bigwedge \nolimits ^{j}A\right)\) are given by
$$\begin{aligned} R\left(\bigwedge \nolimits ^{j}A\right)_{(i_1,\ldots ,i_j),(i_1,\ldots ,i_j)} =\prod _{k=1}^{j} R_{i_k i_k},\quad (i_1,\ldots ,i_j)\in Ord(j,d). \qquad \qquad (5.6) \end{aligned}$$
Proof
In view of Lemma 15 it is sufficient to show that \(\bigwedge \nolimits ^{j}Q\) is orthogonal and \(\bigwedge \nolimits ^{j}R\) is upper triangular with positive diagonal entries.
Lemma 15 shows the orthogonality of \(\bigwedge \nolimits ^{j}Q\):
$$\begin{aligned} \left(\bigwedge \nolimits ^{j}Q\right)^T\bigwedge \nolimits ^{j}Q =\bigwedge \nolimits ^{j}\left(Q^TQ\right)=\bigwedge \nolimits ^{j}I_d=I_D. \end{aligned}$$
Next note that, according to (5.5),
$$\begin{aligned} \left(\bigwedge \nolimits ^{j}R\right)_{(i_1,\ldots ,i_j),(\ell _1,\ldots ,\ell _j)} =\det \begin{pmatrix} R_{i_1\ell _1} &{} \cdots &{} R_{i_1\ell _j}\\ \vdots &{} &{} \vdots \\ R_{i_j\ell _1} &{} \cdots &{} R_{i_j\ell _j} \end{pmatrix}. \qquad \qquad (5.7) \end{aligned}$$
If \(({i}_1,\ldots ,{i}_{j})>({\ell }_1,\ldots ,{\ell }_{j})\) then there exists an index \(\hat{k}\) such that \(i_k=\ell _k\) for \(k=1,\ldots ,\hat{k}-1\) and \(i_{\hat{k}}>\ell _{\hat{k}}\). Hence \(R_{i_{\hat{k}}\ell _{\hat{k}}}=0\). Since \(\ell _k<\ell _{\hat{k}}\) for \(k=1,\ldots ,\hat{k}-1\) and \(i_n>i_{\hat{k}}\) for \(n=\hat{k}+1,\ldots ,j\) we arrive at
$$\begin{aligned} R_{i_n\ell _k}=0\quad \text{for } n=\hat{k},\ldots ,j,\ k=1,\ldots ,\hat{k}. \end{aligned}$$
Thus the first \(\hat{k}\) columns of the matrix in (5.7) are of the form
$$\begin{aligned} \left(R_{i_1\ell _k},\ldots ,R_{i_{\hat{k}-1}\ell _k},0,\ldots ,0\right)^T, \quad k=1,\ldots ,\hat{k}, \end{aligned}$$
and hence linearly dependent. Moreover, the determinant in (5.7) vanishes for \(({i}_1,\ldots ,{i}_{j})>({\ell }_1,\ldots ,{\ell }_{j})\), so \(\bigwedge \nolimits ^{j}R\) is upper triangular. Therefore, both equation (5.6) and the positivity of diagonal elements follow from (5.7). \(\square \)
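Lemma 17 can likewise be confirmed numerically. The sketch below is our illustration (assuming NumPy, with the compound matrix again built from minors): it checks that the positive-diagonal QR factors of \(\bigwedge ^{2}A\) coincide with the exterior powers of the factors of \(A\), and that the diagonal of \(R(\bigwedge ^{2}A)\) consists of the products \(R_{i_1 i_1}R_{i_2 i_2}\).

```python
import numpy as np
from itertools import combinations

def exterior_power(A, j):
    """j-th compound matrix of A, entries given by j x j minors."""
    idx = [list(c) for c in combinations(range(A.shape[0]), j)]
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx]
                     for r in idx])

def qr_pos(A):
    """Unique QR decomposition with positive diagonal entries of R."""
    Q, R = np.linalg.qr(A)
    s = np.sign(np.diag(R))
    return Q * s, (R.T * s).T   # flip column/row signs; Q R is unchanged

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
Q, R = qr_pos(A)
Q2, R2 = qr_pos(exterior_power(A, 2))
```

By uniqueness of the positive-diagonal QR decomposition, `Q2` and `R2` must agree with `exterior_power(Q, 2)` and `exterior_power(R, 2)` up to rounding.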
Beyn, WJ., Lust, A. Error analysis of a hybrid method for computing Lyapunov exponents. Numer. Math. 123, 189–217 (2013). https://doi.org/10.1007/s00211-012-0486-4