
Random Matrices and Lyapunov Coefficients Regularity


Abstract

Analyticity and other properties of the largest or smallest Lyapunov exponent of a product of real matrices with a “cone property” are studied as functions of the matrix entries, as long as they vary without destroying the cone property. The result is applied to stability directions, Lyapunov coefficients and Lyapunov exponents of a class of products of random matrices and to dynamical systems. The results are not new and the method is the main point of this work: it is based on the classical theory of the Mayer series in Statistical Mechanics of rarefied gases.


Notes

  1. The site n must be contained in J because \(\sigma _n\ge 1\); and the site N must be contained as well, since otherwise \(\varphi ^T\) does not depend on v.

  2. I.e. only the largest eigenvalue, which is separated from the rest of the spectrum by an h-independent factor \({<}1\) (related to the \(\alpha \) in Lemma 1), and the corresponding eigenvector are needed.

References

  1. Cammarota, C.: Decay of correlations for infinite range interactions in unbounded spin systems. Communications in Mathematical Physics 85, 517–528 (1982)

  2. Dobrushin, R.: Estimates of semiinvariants for the Ising model at low temperatures. Topics in Statistical and Theoretical Physics, American Mathematical Society Translations 177, 59–81 (1996)

  3. Fernandez, R., Procacci, A.: Cluster expansion for abstract polymer models. New bounds from an old approach. Communications in Mathematical Physics 274, 123–140 (2007)

  4. Fisher, M.E.: Theory of condensation and the critical point. Physics Physique Fizika 3, 255–283 (1967)

  5. Furstenberg, H., Kesten, H.: Products of random matrices. Annals of Mathematical Statistics 31, 457–469 (1960)

  6. Gallavotti, G.: Zeta functions and basic sets. Rendiconti Accademia dei Lincei LXI, 309–317 (1976) (in Italian; English translation: http://ipparco.roma1.infn.it/pagine/1967-1979)

  7. Gallavotti, G.: Statistical Mechanics. A short treatise. Springer Verlag, Berlin (2000)

  8. Gallavotti, G., Bonetto, F., Gentile, G.: Aspects of the ergodic, qualitative and statistical theory of motion. Springer Verlag, Berlin (2004)

  9. Gallavotti, G., Martin-Löf, A., Miracle-Solé, S.: Some problems connected with the phase separation in the Ising model at low temperature. In: Lenard, A. (ed.) Lecture Notes in Physics, vol. 20, pp. 162–204 (1973)

  10. Gallavotti, G., Miracle-Solé, S.: Correlation functions of a lattice system. Communications in Mathematical Physics 7, 274–288 (1968)

  11. Gruber, C., Kunz, H.: General Properties of Polymer Systems. Communications in Mathematical Physics 22, 133–161 (1971)

  12. Liverani, C.: Decay of correlations. Annals of Mathematics 142, 239–301 (1995)

  13. Miracle-Sole, S.: On the convergence of cluster expansions. Physica A 279, 244–249 (2000)

  14. Peres, Y.: Domains of analytic continuation for the top Lyapunov exponent. Annales de l’Institut Henri Poincaré B 28, 131–148 (1992)

  15. Pollicott, M.: Maximal Lyapunov exponents for random matrix products. Inventiones Mathematicae 181, 209–226 (2010)

  16. Kotecky, R., Preiss, D.: Cluster expansion for abstract polymer models. Communications in Mathematical Physics 103, 491–498 (1986)

  17. Raghunathan, M.S.: A proof of Oseledec’s multiplicative ergodic theorem. Israel Journal of Mathematics 32, 356–362 (1979)

  18. Ruelle, D.: Cluster property of the correlation functions of classical gases. Reviews of Modern Physics 36, 580–584 (1964)

  19. Ruelle, D.: Thermodynamic formalism. Addison Wesley, Reading (1978)

  20. Ruelle, D.: Analyticity properties of the characteristic exponents of random matrix products. Advances in Mathematics 32, 68–80 (1979)


Author information


Correspondence to Giovanni Gallavotti.

Appendices


1.1 Algebraic Properties of Cones

Call \(\varepsilon \), the “inclination”, the minimum angle between vectors on the boundary of \(\Gamma \) and vectors on the boundary of \(\Gamma '\); \(\vartheta \), the “opening”, the maximum angle between pairs of vectors in \(\Gamma \); and \(\vartheta '\) the maximum angle between pairs of vectors in \(\Gamma '\): \(\pi >\vartheta >\vartheta '>0,\ \varepsilon >0\).

Proof of Lemma 1

(following [20]) Let T be a \(d\times d\) matrix and let \(\Gamma ,{\Gamma \,}'\) be proper, convex, closed cones (with apex at the origin O) in \(R^d\). Suppose that \(T\,\Gamma \subset {\Gamma \,}'\) and that the relative inclination of \(\Gamma \) to \({\Gamma \,}'\) is \(\varepsilon >0\).

Let \(\Gamma ^*=\{w| \langle w\vert v\rangle \ge 0,\, \forall v\in \Gamma \}\). Then, having fixed \(0\ne v_0\in \Gamma \) and \(0\ne w_0\in \Gamma ^*\), the maps

$$\begin{aligned} \textstyle v\rightarrow \frac{T v}{\langle T^*w_0\vert v\rangle },\quad \mathrm{and}\quad w\rightarrow \frac{T^* w}{\langle w\vert Tv_0\rangle } \end{aligned}$$
(4.1)

map continuously the convex compact sets \(\{v\in \Gamma |\, \langle w_0\vert v\rangle =1\}\) and, respectively, \(\{w\in \Gamma ^*|\, \langle w\vert v_0\rangle =1\}\) strictly into themselves. Hence there exist \(a\in \Gamma \) and \(a^*\in \Gamma ^*\) which are fixed points of the respective maps; hence, if \(b{\buildrel def\over =}\frac{a}{||a||}\) and \(b^*{\buildrel def\over =}\frac{a^*}{||a^*||}\)

$$\begin{aligned} \textstyle \frac{Tb}{||Tb||}=b,\qquad \frac{T^*b^*}{||T^*b^*||}=b^* \end{aligned}$$
(4.2)

Let \(\overline{T}\xi {\buildrel def\over =}\frac{T\xi }{||T b||}\) and let

$$\begin{aligned} K{\buildrel def\over =}\{\xi | \langle b^*\vert \xi \rangle =0,\ \mathrm{and}\ b+\xi \in \Gamma \} \end{aligned}$$
(4.3)

Since \(\frac{T b}{||T b||}=b\) and \(\frac{T^* b^*}{||T^* b^*||}=b^*\) the set K is mapped into itself by \(\overline{T}\) (e.g. \(\langle b^*\vert T\xi \rangle =0\) and \(b+\frac{T\xi }{||T b||}=\frac{T(b+\xi )}{||T b||}\in \Gamma \)) and since the cone \(\Gamma \) is shrunk by T the set K is mapped into \(\overline{T} K\subset \alpha K\) with \(\alpha <1\) (determined by the inclination and opening angles, see Sect. 2).

Hence \(\overline{T}^n(b+\xi )=b+\overline{T}^n\xi \) and \(||\overline{T}^n\xi ||\le \alpha ^n\). For any \(v\in \Gamma , v\ne 0,\) there is a \(\nu \ne 0\) such that \(v=\nu b+\xi \), \(\xi \in K\), so that

$$\begin{aligned} \frac{T^n v}{||T^n v||}=\frac{\overline{T}^n v}{||\overline{T}^n v||}= \frac{\nu \,b+\overline{T}^n\xi }{||\nu \,b+\overline{T}^n\xi ||}\ {\mathop {\longrightarrow }_{n\rightarrow \infty }}\ b \end{aligned}$$
(4.4)

because T and \(\overline{T}\) are proportional: notice that the above analysis implies that the largest eigenvalue \(\lambda _0\) of T is positive and that it is the unique eigenvalue of T with maximum modulus.
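The mechanism behind Eqs. (4.2) and (4.4) can be illustrated numerically. The sketch below is not part of the proof: it assumes, purely for concreteness, that \(\Gamma \) is the positive orthant of \(R^3\) and that T is a randomly generated matrix with strictly positive entries (so that \(T\,\Gamma \) lies strictly inside \(\Gamma \)); all names and parameters are illustrative.

```python
# Illustration of Lemma 1 (assumed setting: Gamma = positive orthant of R^3,
# T a strictly positive matrix, so that T*Gamma is strictly inside Gamma).
import numpy as np

rng = np.random.default_rng(0)
T = rng.uniform(0.1, 1.0, size=(3, 3))         # strictly positive entries

def dominant_direction(M, v, n):
    """Return M^n v / ||M^n v|| by repeated normalized application."""
    for _ in range(n):
        v = M @ v
        v = v / np.linalg.norm(v)
    return v

v1 = np.array([1.0, 0.0, 0.0])                 # two different vectors in Gamma
v2 = np.array([0.2, 0.5, 1.0])
b1 = dominant_direction(T, v1, 60)
b2 = dominant_direction(T, v2, 60)

# Both iterations converge to the same unit vector b in Gamma, as in Eq. (4.4),
# and b is an eigenvector of T with positive eigenvalue lambda_0 of maximal
# modulus, as in Eq. (4.2) and the remark following Eq. (4.4).
print(np.allclose(b1, b2))                     # True
lam0 = np.linalg.norm(T @ b1)
print(np.allclose(T @ b1, lam0 * b1))          # True
print(np.isclose(lam0, max(abs(np.linalg.eigvals(T)))))   # True
```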

Proof of Lemma 2

(following [20]) Let \(T'\,{\buildrel def\over =}\, T_1 T_2\cdots T_p\) and let \(\varvec{\vartheta },\varvec{\vartheta }^*\) be the normalized eigenvectors, with maximum modulus eigenvalue \(\lambda '>0\), of \(T'\) and \((T')^*\) respectively (they exist by Lemma 1). Define

$$\begin{aligned} \varvec{\vartheta }_{p}{\buildrel def\over =}\varvec{\vartheta },\ \varvec{\vartheta }_{p-1}= & {} \frac{T_{p}\varvec{\vartheta }_{p}}{||T_{p}\varvec{\vartheta }_{p}||}, \ldots , \varvec{\vartheta }_{0}= \frac{T_{1}\varvec{\vartheta }_{1}}{||T_{1}\varvec{\vartheta }_{1}||}=\varvec{\vartheta }\nonumber \\ \varvec{\vartheta }^*_{0}{\buildrel def\over =}\varvec{\vartheta }^*,\ \varvec{\vartheta }_{1}^*= & {} \frac{T_1^*\varvec{\vartheta }^*_{0}}{||T^*_1\varvec{\vartheta }^*_{0}||}, \ldots , \varvec{\vartheta }^*_{p}=\frac{T^*_{p}\varvec{\vartheta }^*_{p-1}}{||T^*_{p}\varvec{\vartheta }^*_{p-1}||}=\varvec{\vartheta }^* \end{aligned}$$
(4.5)

By the argument in the proof of Lemma 1 the action of \(T_j\) on the plane orthogonal to \(\varvec{\vartheta }^*_{j}\) maps it onto the plane orthogonal to \(\varvec{\vartheta }^*_{j-1}\) and contracts it by at least \(\alpha <1\). Therefore \(T_1T_2\cdots T_p\) contracts by at least \(\alpha ^p\) in the space orthogonal to \(\varvec{\vartheta }^*\), proving Lemma 2.
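Only as a further illustration (not used anywhere in the argument), the contraction stated in Lemma 2 can be observed numerically for products of randomly generated strictly positive \(3\times 3\) matrices, the positive orthant playing the role of the cone; the entries and the values of p below are arbitrary illustrative choices.

```python
# Numerical check of the contraction in Lemma 2 for products T' = T_1...T_p of
# (assumed) strictly positive 3x3 matrices: vectors orthogonal to theta^* are
# contracted, relative to the leading eigenvalue lambda' of T', roughly like alpha^p.
import numpy as np

rng = np.random.default_rng(1)

def top_left_pair(M):
    """Leading eigenvalue of M and the normalized leading eigenvector of M^T."""
    w = np.linalg.eigvals(M)
    lam = abs(w[np.argmax(abs(w))])
    wl, U = np.linalg.eig(M.T)
    theta_star = np.real(U[:, np.argmax(abs(wl))])
    return lam, theta_star / np.linalg.norm(theta_star)

for p in (2, 5, 10, 20):
    Tprod = np.eye(3)
    for _ in range(p):
        Tprod = Tprod @ rng.uniform(0.1, 1.0, size=(3, 3))
    lam, theta_star = top_left_pair(Tprod)
    xi = rng.normal(size=3)
    xi -= theta_star * (theta_star @ xi)       # xi orthogonal to theta^*
    print(p, np.linalg.norm(Tprod @ xi) / (lam * np.linalg.norm(xi)))
    # the printed ratios decay roughly geometrically in p
```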

Cluster Expansion: A Rehearsal

This section follows [8, Ch. 7] (in turn based on [9, 18]) and is included only to make the paper self-contained. Cluster expansion is an algorithm to compute the logarithm of a sum

$$\begin{aligned} \Xi =\sum _{\mathbf{J}}^* \zeta (\mathbf{J})\equiv \sum _{\mathbf{J}}^*\prod _i \zeta (J_i)^{n_i} \end{aligned}$$
(5.1)

where: (1) \(\mathbf{J}=(J_1^{n_1},\ldots ,J_\mathcal{N}^{n_\mathcal{N}})\), with the \(J_i\)’s subsets of a box \(\Lambda \) on a d-dimensional lattice (here \(d=1\)) called polymers, and the \(n_i\ge 0\) integers defining the “multiplicity” of each, i.e. counting how many times each set appears; hence \(\mathcal{N}=2^{|\Lambda |}\). The sets J could be decorated by associating to each site \(k\in J\) a “spin”, i.e. a variable assuming \(d-1\) values. However, in the following the decorations will not be mentioned as they would only make the notation heavier. In the applications in Sect. 2 the decorations will be necessary, and the formulae of this section (which correspond to the case \(d=2\), i.e. all spins equal to 1) are directly usable simply by imagining that each J is in fact a pair \(Y=(J,\varvec{\sigma }_J)\) where \(\varvec{\sigma }_J=(\sigma _j)_{j\in J}\) and \(\sigma _j=1,\ldots ,d-1\).

(2) \(\zeta (\mathbf{J})=\prod \zeta (J_i)^{n_i}\) with \(\zeta (J)\) (small) constants called activities, \(\zeta (\emptyset ){\buildrel def\over =}1\).

  (3) the \(*\) means that the sum runs over the \(\mathbf{J}\)’s in which no two of the \(J_i\in \mathbf{J}\) with multiplicity \(n_i>0\) overlap, two sets being said to overlap if they contain a pair of points at distance \(\le 1\) on the lattice. If \(\widetilde{\mathbf{J}}\) denotes the collection of sets in \(\mathbf{J}\) which have positive multiplicity, then the \(*\) indicates that the sum is restricted to \(\mathbf{J}\equiv \widetilde{\mathbf{J}}\) in which no two of the J’s overlap.

In applications \(\zeta (J)\ne 0\) only for a few of the possible subsets of \(\Lambda \). For instance in the present case \(\Lambda \) is the interval \([n,N]\) and the “polymers” are just the subintervals.

The \(\Xi \) can certainly be written as \(\exp (\sum _{\mathbf{J}} \varphi ^T(\mathbf{J}) \zeta (\mathbf{J}))\) by expanding \(\log \Xi \) in powers of the \(\zeta (J)\): of course the sum in the exponential will involve \(\mathbf{J}\) with J’s which can overlap or be counted many times. The \(\varphi ^T(\mathbf{J})\) are suitable combinatorial coefficients.

For instance if \(\Lambda \) is just one point \(\Xi =1+z\) can be written as the exponential of \(\sum _{k=1}^\infty \frac{(-1)^{k+1}}{k} z^k\). If \(\Lambda \) consists of two points, say 1 and 2 then the polymers are \(\emptyset ,1,2,12\) and \(\Xi =1+z_1+z_2+z_{12}\) is the exponential of \(\sum _{k_1+k_2+k_3>0} \frac{{(-1)^{k_1+k_2+k_3+1}}(k_1+k_2+k_3-1)!}{k_1! k_2!k_3!} z_1^{k_1} z_2^{k_2}z_{12}^{k_3}\).
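The two-point example can be checked symbolically. The sketch below assumes the sympy library and introduces a bookkeeping parameter t whose only role is to truncate the expansion; it verifies the stated coefficients up to total degree 4.

```python
# Verify the Taylor coefficients of log(1 + z1 + z2 + z12) stated in the text.
import sympy as sp
from math import factorial

z1, z2, z12, t = sp.symbols('z1 z2 z12 t')
N = 5                                        # keep total degrees < N

ser = sp.expand(sp.series(sp.log(1 + t*(z1 + z2 + z12)), t, 0, N).removeO())
poly = sp.Poly(ser, t, z1, z2, z12)

for k1 in range(N):
    for k2 in range(N - k1):
        for k3 in range(N - k1 - k2):
            k = k1 + k2 + k3
            if k == 0 or k >= N:
                continue
            found = poly.coeff_monomial(t**k * z1**k1 * z2**k2 * z12**k3)
            stated = sp.Rational((-1)**(k + 1) * factorial(k - 1),
                                 factorial(k1) * factorial(k2) * factorial(k3))
            assert found == stated
print("coefficients agree up to total degree", N - 1)
```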

The cluster expansion is the general form of the above examples. It is of interest, for instance, if \( \sum _{\mathbf{J}}^{\&} |\varphi ^T(\mathbf{J})|\, |\zeta (\mathbf{J})|<+\infty \) where the \(\&\) means that the sum is restricted to \(\mathbf{J}\)’s which contain any fixed point \(x\in \Lambda \) (i.e. with \(x\in \cup _{J\in \widetilde{\mathbf{J}}} J\)). It is therefore necessary to determine conditions that imply the mentioned convergence.

The first step is to define \(\mathbf{J}+\mathbf{J}'\) simply as \(J_1^{n_1+n'_1},\ldots , J_\mathcal{N}^{n_\mathcal{N}+n'_\mathcal{N}}\), i.e. as the family of polymers with multiplicities equal to the sum of the corresponding ones in \(\mathbf{J}\) and \(\mathbf{J}'\). Let

$$\begin{aligned} {\mathcal F}= & {} \mathrm{set\,of\,functions}\,\, F(\mathbf{J})\nonumber \\ {\mathcal F}_0= & {} \mathrm{set\,of\,functions}\,\, F(\mathbf{J}) \,\,\mathrm{with}\,\, F(\emptyset )=0\nonumber \\ {\mathcal F}_1= & {} \mathrm{set\,of\,functions}\,\, F(\mathbf{J}) \,\,\mathrm{with}\,\, F(\emptyset )=1\nonumber \\ \mathbf{1}(\mathbf{J})= & {} \left\{ \begin{array}{ll} 0 &{} \hbox {if} \quad \mathbf{J}\ne \emptyset \\ 1&{} \hbox {if}\quad \mathbf{J}=\emptyset \\ \end{array}\right. \nonumber \\ f\in & {} {\mathcal F}_1 \ \,{\leftarrow \rightarrow }\,\ \widetilde{f}\,{\buildrel def\over =}f -\mathbf{1}\in {\mathcal F}_0 \end{aligned}$$
(5.2)

and remark that \(f\in {\mathcal F}_1\) can be written \(f=\mathbf{1}+ \widetilde{f}\) with \(\widetilde{f}\in {\mathcal F}_0\).

Then if \(f*g(\mathbf{J}){\buildrel def\over =}\sum _{\mathbf{J}_1+\mathbf{J}_2=\mathbf{J}} f(\mathbf{J}_1)g(\mathbf{J}_2)\), for \(f,g\in {\mathcal F}\) define

$$\begin{aligned} \mathrm{Exp} f(\mathbf{J})= & {} \sum _{k=0}^\infty \frac{f^{*k}(\mathbf{J})}{k!},\quad f\in {\mathcal F}_0 \nonumber \\ \mathrm{Log} f(\mathbf{J})= & {} \sum _{k=1}^\infty \frac{(-1)^{k+1} \widetilde{f}^{*k}(\mathbf{J})}{k}, \quad f=\mathbf{1}+\widetilde{f}\in {\mathcal F}_1 \nonumber \\ f^{*-1}= & {} \sum _{k=0}^\infty (-1)^k \widetilde{f}^{*k},\qquad f=\mathbf{1}+\widetilde{f}\in {\mathcal F}_1 \nonumber \\ {\langle \,f\,\rangle }= & {} \sum _{\mathbf{J}\subset \Lambda } f(\mathbf{J}),\quad f\in {\mathcal F} \end{aligned}$$
(5.3)

here all sums over k are finite sums for f in the corresponding domains.
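The algebra in Eqs. (5.2), (5.3) can be implemented literally. The sketch below assumes, purely for illustration, three polymer species and truncates every function at total multiplicity \(\le M\) (harmless, since the sums over k are finite on such configurations); it then checks the first identity of Eq. (5.4).

```python
# Configurations are tuples of multiplicities over NPOLY (assumed) polymer
# species; functions are dictionaries configuration -> Fraction, truncated at
# total multiplicity M, on which Exp and Log of Eq. (5.3) are exact.
from fractions import Fraction
from itertools import product
from math import factorial
import random

NPOLY, M = 3, 4
CONFS = [c for c in product(range(M + 1), repeat=NPOLY) if sum(c) <= M]
EMPTY = (0,) * NPOLY

def star(f, g):
    """(f*g)(J) = sum over J1+J2=J of f(J1) g(J2)."""
    h = {c: Fraction(0) for c in CONFS}
    for c in CONFS:
        for c1 in CONFS:
            c2 = tuple(a - b for a, b in zip(c, c1))
            if min(c2) >= 0:
                h[c] += f[c1] * g[c2]
    return h

def Exp(f):                                  # f in F_0, i.e. f(EMPTY) = 0
    res = {c: Fraction(c == EMPTY) for c in CONFS}
    power = dict(res)
    for k in range(1, M + 1):
        power = star(power, f)
        for c in CONFS:
            res[c] += power[c] / factorial(k)
    return res

def Log(f):                                  # f in F_1, written as 1 + ftilde
    ft = {c: f[c] - Fraction(c == EMPTY) for c in CONFS}
    res = {c: Fraction(0) for c in CONFS}
    power = {c: Fraction(c == EMPTY) for c in CONFS}
    for k in range(1, M + 1):
        power = star(power, ft)
        for c in CONFS:
            res[c] += Fraction((-1) ** (k + 1), k) * power[c]
    return res

random.seed(0)
f = {c: Fraction(random.randint(-3, 3), random.randint(1, 4)) for c in CONFS}
f[EMPTY] = Fraction(0)                       # put f in F_0
assert Log(Exp(f)) == f                      # first identity in Eq. (5.4)
print("Log(Exp f) = f on configurations of total multiplicity <=", M)
```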

A key remark is

$$\begin{aligned}&\mathrm{Log}\,( \mathrm{Exp}( f))=f\qquad \forall \ f\in {\mathcal F}_0,\qquad \mathrm{Exp}\,(\mathrm{Log} (f))=f\qquad \forall \ f\in {\mathcal F}_1\nonumber \\&f^{*-1}*f=\mathbf{1},\quad \forall \ f=\mathbf{1}+\widetilde{f}\in {\mathcal F}_1,\qquad {\langle \,f*g\,\rangle }={\langle \,f\,\rangle }{\langle \,g\,\rangle } \end{aligned}$$
(5.4)

If \(\chi (\mathbf{J})=\prod \overline{\chi }(J_i)^{n_i}\) is a multiplicative function \(\chi \in {\mathcal F}\) then \({\langle \,f*g \chi \,\rangle }={\langle \,f\chi \,\rangle }{\langle \,g\chi \,\rangle }\) so that if \(\varphi \in {\mathcal F}_1\) and \(\overline{\chi }(J)=\zeta (J)\)

$$\begin{aligned} {\langle \,f\cdot \zeta \,\rangle }={\langle \,\mathrm{Exp}( \mathrm{Log} (f\cdot \zeta ))\,\rangle }=\exp {{\langle \,(\mathrm{Log}\,f)\cdot \zeta \,\rangle }} \end{aligned}$$
(5.5)

Therefore call \(\mathbf{J}\) compatible if \(n_i=0,1\) (i.e. \(\mathbf{J}=\widetilde{\mathbf{J}}\)) and no two elements of \(\widetilde{\mathbf{J}}\) overlap; then, if

$$\begin{aligned} \varphi (\mathbf{J})=\left\{ \begin{array}{ll} 0&{} \hbox {if} \quad {\mathbf{J}}\quad \hbox {is not compatible}\\ 1&{} \hbox {otherwise} \end{array}\right. \end{aligned}$$
(5.6)

then \(\varphi \in {\mathcal F}_1\) and \(\varphi ^T=\mathrm{Log} \varphi \in {\mathcal F}_0\) makes sense and

$$\begin{aligned} \Xi ={\langle \,\varphi \cdot \zeta \,\rangle }=\exp {\langle \,\varphi ^T\cdot \zeta \,\rangle } \equiv \exp \sum _{\mathbf{J}}\varphi ^T(\mathbf{J})\,\zeta (\mathbf{J}) \end{aligned}$$
(5.7)

which is the exponential of a power series in the \(\zeta (J)\) variables.

Calculating \(\varphi ^T(\mathbf{J})\) requires computing the sum of finitely many quantities: if \(\mathbf{J}\) is represented as a set of “points” or “nodes” and if G is the graph obtained by joining all pairs of polymers in \(\mathbf{J}\) which are “incompatible” (regarding as different, and incompatible with each other, the \(n_i\) copies of \(J_i\)), one has (see e.g. [9, Eq. (4.21)])

$$\begin{aligned} \varphi ^T(\mathbf{J})=\frac{1}{\prod n_i!}\sum _{C\subset G}^* (-1)^{\# \ of\ edges\ in \ C} \end{aligned}$$
(5.8)

where the \(*\) means that the sum is restricted to the connected subgraphs of G which visit all polymers in G: their number is huge, growing faster than any power of the number of polymers, so that convergence occurs because of cancellations due to the relation in Eq. (5.12).
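The relation between Eqs. (5.8), (5.1) and (5.7) can be tested on a small example. The sketch below assumes interval polymers on a box of \(L=5\) sites with the illustrative activities \(\zeta ([a,b])=z^{b-a+1}\); it computes \(\Xi \) by brute-force enumeration of the compatible families and compares it with the exponential of the cluster series, with \(\varphi ^T\) evaluated from Eq. (5.8) and the series truncated at clusters of at most \(K=3\) polymers.

```python
# Polymers: subintervals [a,b] of {1,...,L}; two intervals are incompatible
# when they contain points at distance <= 1 (overlap in the sense of the text).
from itertools import combinations, combinations_with_replacement
from math import exp, factorial, prod

L, z, K = 5, 0.02, 3
polymers = [(a, b) for a in range(1, L + 1) for b in range(a, L + 1)]
zeta = {J: z ** (J[1] - J[0] + 1) for J in polymers}       # assumed activities

def incompatible(J1, J2):
    return not (J1[1] + 1 < J2[0] or J2[1] + 1 < J1[0])

# Left-hand side of Eq. (5.1): direct enumeration of compatible families.
Xi = 0.0
for k in range(len(polymers) + 1):
    for fam in combinations(polymers, k):
        if all(not incompatible(a, b) for a, b in combinations(fam, 2)):
            Xi += prod(zeta[J] for J in fam)

# phi^T from Eq. (5.8): sum of (-1)^(#edges) over connected subgraphs of the
# incompatibility graph visiting all the (copies of) polymers in the cluster.
def phiT(cluster):
    m = len(cluster)
    edges = [(i, j) for i, j in combinations(range(m), 2)
             if cluster[i] == cluster[j]
             or incompatible(polymers[cluster[i]], polymers[cluster[j]])]
    total = 0
    for r in range(len(edges) + 1):
        for E in combinations(edges, r):
            reach, stack = {0}, [0]                        # connectivity check
            while stack:
                i = stack.pop()
                for a, b in E:
                    for u, v in ((a, b), (b, a)):
                        if u == i and v not in reach:
                            reach.add(v); stack.append(v)
            if len(reach) == m:
                total += (-1) ** len(E)
    return total / prod(factorial(cluster.count(i)) for i in set(cluster))

logXi = sum(phiT(c) * prod(zeta[polymers[i]] for i in c)
            for n in range(1, K + 1)
            for c in combinations_with_replacement(range(len(polymers)), n))
print(Xi, exp(logXi))          # agree up to corrections of order z^(K+1)
```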

The series in Eq. (5.7) is certainly convergent for \(\zeta (J)\)’s small enough: however the radius of convergence might be very small and \(\Lambda \) dependent.

Define the differentiation operation as

$$\begin{aligned} (D_\Gamma \Psi )( {\mathbf{H}} ){\buildrel def\over =}\Psi (\Gamma + {\mathbf{H}}) \frac{(\Gamma + {\mathbf{H}} )!}{ {\mathbf{H}} !} \end{aligned}$$
(5.9)

with \(\Gamma !=\prod _{i=1}^s n_i!\). The name is justified by the validity of the following rules:

$$\begin{aligned} D_\gamma (\Psi _1*\Psi _2)= & {} (D_\gamma \Psi _1)*\Psi _2+\Psi _1*(D_\gamma \Psi _2) ,\nonumber \\ D_\gamma \mathrm{Exp}\Psi= & {} D_\gamma \Psi *\mathrm{Exp}\Psi , \end{aligned}$$
(5.10)

A direct check of the above relations can be reduced to the case in which \(\Gamma =n\gamma \), i.e. to the case in which there is only one polymer species \(\gamma \), and the check is left to the reader. The first relation above, Leibniz rule, can be seen as a consequence of the combinatorial identity \(\sum _{p_1+p_2=n} {{q_1}\atopwithdelims (){p_1}} {{q_2}\atopwithdelims (){p_2}}= { {q_1+q_2}\atopwithdelims (){n}}\) for all \(n,q_1,q_2\) with \(n\le q_1+q_2\).
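For a direct check in the one-species case \(\Gamma =n\gamma \) suggested above, a function of a configuration reduces to a sequence indexed by the multiplicity n, the \(*\)-product becomes an ordinary convolution and \(D_\gamma \) acts as \((D_\gamma f)(n)=(n+1)f(n+1)\). The sketch below verifies both rules of Eq. (5.10) with exact rational arithmetic; the truncation at multiplicity M is only a computational convenience.

```python
# One polymer species: configurations are multiplicities n = 0,...,M and
# functions are lists f[0..M]; entries beyond the truncation are simply dropped.
from fractions import Fraction
from math import factorial
import random

M = 6

def star(f, g):
    n_max = min(len(f), len(g)) - 1
    return [sum((f[k] * g[n - k] for k in range(n + 1)), Fraction(0))
            for n in range(n_max + 1)]

def D1(f):                       # D_gamma: (D f)(n) = f(n+1) (n+1)!/n! = (n+1) f(n+1)
    return [(n + 1) * f[n + 1] for n in range(len(f) - 1)]

def Exp(f):                      # requires f[0] = 0, so the sum over k is finite
    res = [Fraction(n == 0) for n in range(len(f))]
    power = list(res)
    for k in range(1, len(f)):
        power = star(power, f)
        res = [r + p / factorial(k) for r, p in zip(res, power)]
    return res

random.seed(2)
f = [Fraction(random.randint(-3, 3)) for _ in range(M + 1)]
g = [Fraction(random.randint(-3, 3)) for _ in range(M + 1)]

# Leibniz rule (first line of Eq. (5.10))
assert D1(star(f, g)) == [a + b for a, b in zip(star(D1(f), g), star(f, D1(g)))]

# D Exp Psi = D Psi * Exp Psi (second line of Eq. (5.10)), for Psi in F_0
h = [Fraction(0)] + f[1:]
assert D1(Exp(h)) == star(D1(h), Exp(h))
print("both rules of Eq. (5.10) verified up to multiplicity", M - 1)
```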

The definitions lead to the derivation of the expression for \(\varphi ^T(\Gamma )\) in (5.8), which not only is quite explicit but also implies immediately that \(\varphi ^T(\Gamma )\) vanishes for nonconnected \(\Gamma \)’s.

To determine sufficient conditions for the convergence which are independent of the size of \(\Lambda \) let \({\widehat{\varphi }}(\mathbf{Y}){\buildrel def\over =}\varphi (\mathbf{Y})\zeta (\mathbf{Y})\) and \(\Delta _{\mathbf{J}}(\mathbf{Y}){\buildrel def\over =}({\widehat{\varphi }}^{*-1}*D_{\mathbf{J}}{\widehat{\varphi }})(\mathbf{Y})\). Then if \(\gamma \) is a polymer, and \(\mathbf{J},\mathbf{Y}\) are polymer configurations

$$\begin{aligned} \Delta _{\gamma +\mathbf{J}}(\mathbf{Y})= & {} \sum _{\mathbf{Y}_1+\mathbf{Y}_2=\mathbf{Y}}{\widehat{\varphi }}^{*-1}(\mathbf{Y}_1)\,\varphi (\gamma +\mathbf{J}+\mathbf{Y}_2)\,\zeta (\gamma +\mathbf{J}+\mathbf{Y}_2)\nonumber \\= & {} \zeta (\gamma )\sum _{\mathbf{Y}_1+\mathbf{Y}_2=\mathbf{Y}}{\widehat{\varphi }}^{*-1}(\mathbf{Y}_1)\,\varphi (\gamma +\mathbf{J}+\mathbf{Y}_2)\,\zeta (\mathbf{J}+\mathbf{Y}_2) \end{aligned}$$
(5.11)

Here no factorials appear because \(\varphi (\mathbf{J})\) vanishes unless \(\mathbf{J}=\widetilde{\mathbf{J}}\).

Remark that \(\varphi (\gamma +\mathbf{J}+\mathbf{Y}_2)=\varphi (\mathbf{J}+\mathbf{Y}_2)\prod _{\gamma '\in \mathbf{Y}_2}(1+\chi (\gamma ,\gamma '))\) with \(\chi (\gamma ,\gamma ')=0\) if \(\gamma ,\gamma '\) do not overlap and \(\chi (\gamma ,\gamma ')=-1\) otherwise, so that \(\varphi (\gamma +\mathbf{J}+\mathbf{Y}_2)=\varphi (\mathbf{J}+\mathbf{Y}_2) \sum ^*_{\mathbf{S}\subset \mathbf{Y}_2} (-1)^{|\mathbf{S}|}\), with \(|\mathbf{S}|=\) number of polymers in \(\mathbf{S}=(s_1,s_2,\ldots )\) and \(*\) means that the \(s_i\) overlap with \(\gamma \), for all i. Hence setting \(\mathbf{Y}_2=\mathbf{S}+\mathbf{H}\)

$$\begin{aligned} \Delta _{\gamma +\mathbf{J}}(\mathbf{Y})= & {} \zeta (\gamma )\sum _{\mathbf{S}\subset \mathbf{Y}}^* \sum _{\mathbf{Y}_1+\mathbf{H}=\mathbf{Y}-\mathbf{S}}\quad {\widehat{\varphi }}^{*-1}(\mathbf{Y}_1)\varphi (\mathbf{J}+\mathbf{S}+\mathbf{H})\zeta (\mathbf{J}+\mathbf{S}+\mathbf{H}) \nonumber \\= & {} \zeta (\gamma ) \sum ^*_{\mathbf{S}\subset \mathbf{Y}}(-1)^{|\mathbf{S}|} \Delta _{\mathbf{J}+\mathbf{S}}(\mathbf{Y}-\mathbf{S}) \end{aligned}$$
(5.12)

Let \(r(\gamma )\ge |\zeta (\gamma )|\) and \(r(\mathbf{X})=\prod _{\gamma \in \mathbf{X}} r(\gamma )\); then

$$\begin{aligned} I_m{\buildrel def\over =}\sup _{1\le n\le m} \sup _{|\mathbf{J}|=n} \sum _{\mathbf{Y}, |\mathbf{Y}|=m-n}\frac{|\Delta _{\mathbf{J}}(\mathbf{Y})|}{r(\mathbf{J})} \end{aligned}$$
(5.13)

and \(I_1=\sup _\gamma \frac{|\zeta (\gamma )|}{r(\gamma )}\); recursively one finds \(I_{m+1} \le \mu ^m I_1\), where

$$\begin{aligned} \mu {\buildrel def\over =}\sup _\gamma \frac{|\zeta (\gamma )|}{r(\gamma )} \exp {\sum _{J}^*r(J)} \end{aligned}$$
(5.14)

where here J is a single polymer (overlapping \(\gamma \)): see [8, Eq. 7.1.28] for more details on the algebra. Therefore \(I_{m+1}\le \mu ^m I_1\), which is summable over m if \(\mu <1\).

The latter property \(\mu <1\) holds in various applications, notably in the present work, where it is used to bound \(\Omega (n,N)\) as well as a few more quantities.
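As an illustration of how the condition \(\mu <1\) is checked in practice, the sketch below evaluates Eq. (5.14) for interval polymers on the one-dimensional lattice under the illustrative assumptions \(|\zeta (J)|\le z^{|J|}\) and \(r(J)=\rho ^{|J|}\) (the values of z and \(\rho \) are arbitrary small numbers, not the ones used in the body of the paper); the sum \(\sum ^*_J r(J)\) over the intervals J overlapping a given interval \(\gamma \) of length \(\ell \) is \(\sum _{m\ge 1}(\ell +m+1)\rho ^m\), since an interval of length m overlaps \(\gamma \) iff its left endpoint falls in a window of \(\ell +m+1\) sites.

```python
# Evaluate mu of Eq. (5.14) for interval polymers on the one-dimensional lattice,
# assuming |zeta(J)| <= z^{|J|} and the comparison weight r(J) = rho^{|J|}.
from math import exp

z, rho = 0.02, 0.1                  # illustrative values, not the paper's

def mu_term(l, mmax=2000):
    s = sum((l + m + 1) * rho**m for m in range(1, mmax))   # sum*_J r(J)
    return (z / rho) ** l * exp(s)

mu = max(mu_term(l) for l in range(1, 60))
print(mu)          # about 0.28 < 1: the bounds I_{m+1} <= mu^m I_1 are summable
```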

The method has several other applications, see [9], [8, Ch. 7]. Here the polymers will be pairs \((J,\varvec{\sigma }_J)\), with J an interval on the lattice [1, N] and \(\varvec{\sigma }_J\) the associated spin structure. We shall make use of Eq. (5.7) and, by Eq. (5.7) and the rules in (5.10), of

$$\begin{aligned}&P(J){\buildrel def\over =}\frac{\sum ^*_{\mathbf{H}\ni J}\zeta (\mathbf{H})}{\Xi }= \frac{{\langle \,D_J \varphi \zeta \,\rangle }}{{\langle \,\varphi \zeta \,\rangle }}= {\langle \,{\widehat{\varphi }}^{*-1}*D_J{\widehat{\varphi }}\,\rangle } \nonumber \\&={\langle \,{\widehat{\varphi }}^{*-1}*D_J\mathrm{Exp}({\widehat{\varphi }}^T)\,\rangle } ={\langle \,D_J{\widehat{\varphi }}^T\,\rangle },\qquad {\widehat{\varphi }}\equiv \varphi \zeta \end{aligned}$$
(5.15)

In an ensemble in which the polymer configurations \(\mathbf{J}\) in \(\Lambda \) are given a weight proportional to \(\prod _{\gamma \in \mathbf{J}}\zeta (\gamma )\) this would be the probability of finding a configuration of polymers containing the polymer J, if \(\zeta (\gamma )\ge 0\). Hence the complementary sum \(P'(J){\buildrel def\over =}\frac{\sum ^*_{\mathbf{H}\not \ni J}\zeta (\mathbf{H})}{\Xi }\) will be such that \(P(J)+P'(J)=1\).
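For small boxes the probabilities P(J) and \(P'(J)\) can be computed exactly by enumeration, a convenient check of the normalization \(P(J)+P'(J)=1\) and of the ensemble interpretation just described. The sketch below again assumes interval polymers with the illustrative activities \(\zeta ([a,b])=z^{b-a+1}\).

```python
# Exact P(J) and P'(J) for interval polymers on a small box {1,...,L}.
from itertools import combinations
from math import prod

L, z = 5, 0.05
polymers = [(a, b) for a in range(1, L + 1) for b in range(a, L + 1)]
zeta = {J: z ** (J[1] - J[0] + 1) for J in polymers}        # assumed activities

def compatible(fam):          # pairwise: no points at distance <= 1
    return all(a[1] + 1 < b[0] or b[1] + 1 < a[0] for a, b in combinations(fam, 2))

families = [fam for k in range(len(polymers) + 1)
            for fam in combinations(polymers, k) if compatible(fam)]
Xi = sum(prod(zeta[J] for J in fam) for fam in families)

J0 = (2, 3)                   # the polymer whose probability is wanted
P  = sum(prod(zeta[J] for J in fam) for fam in families if J0 in fam) / Xi
P1 = sum(prod(zeta[J] for J in fam) for fam in families if J0 not in fam) / Xi
print(P, P1, P + P1)          # P + P' = 1; P is close to zeta(J0) for small z
```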

Dynamical Systems Application

Let \({\mathcal F}\) be a smooth compact manifold and \(\tau \) a smooth, smoothly invertible, map on \({\mathcal F}\) (take smooth to mean \(C^\infty \), for simplicity). At each point \(x\in {\mathcal F}\) there are proper closed convex cones \(\Gamma (x)\supset {\Gamma \,}'(x)\), with apex at x in a linear space E(x) of dimension d smoothly dependent on x (and call its adjoint \(E(x)^*\)). The cones are also supposed to depend smoothly on x.

Definition: The minimum angle between vectors on the boundary of \(\Gamma (x)\) and on that of \({\Gamma \,}'(x)\) will be called inclination \(\varepsilon (x)\); while the maximum angle between vectors in \(\Gamma (x)\) will be called \(\vartheta (x)\), likewise define \(\vartheta '(x)\).

Let T(x), \(x\in {\mathcal F}\), be an invertible mapping of E(x) onto \(E(\tau x)\) which maps \(\Gamma (x)\) into \({\Gamma \,}'(\tau x)\subset \Gamma (\tau x)\), with \({\Gamma \,}'(x){\setminus }\{x\}\subset \Gamma (x)^0\); let \(T^{\pm }(x),\Gamma (x),{\Gamma \,}'(x),\varepsilon (x),\vartheta (x),\vartheta '(x)\) be smooth, with \(\pi >\vartheta (x)>\varepsilon (x),\ \vartheta '(x)>0\).

Making use of Lemmas 1 and 2 in “Appendix 4.1” it will not be restrictive to suppose that T(x) is “almost diagonalizable”, in the sense that there exist \(\lambda _0(x)\), \({\vert x,0\rangle }\in E(\tau x),{\langle x,0\vert }\in E(x)^*\), smoothly dependent on x, and \(\Theta (x)\) with norm such that \(||\Theta (x)||/\lambda _0(x)\) is smaller than a prefixed quantity (x-uniformly):

$$\begin{aligned} T(x)=\lambda _0(x){\vert x,0\rangle }{\langle x,0\vert }+\Theta (x) \end{aligned}$$
(6.1)

Then setting \(T_h{\buildrel def\over =}T(\tau ^{-h}x)\) and repeating the proof of theorem 1 leads to

Theorem 3

Let T(x) be as above and let \(x\rightarrow v(x)\in {\Gamma \,}'(x)\), \(||v(x)||\equiv 1\), be a measurable function (not necessarily continuous). Then:

  1. (1)

    There are continuous functions \(x\rightarrow b(x)\in \Gamma (x)\), \(x\rightarrow \overline{\Lambda }(x,p)\) and \(x\rightarrow \Lambda (x,p)\), \(p=0,1,\ldots \), such that, for all \(p>0\), \(x\in {\mathcal F}\) and v, the following limits exist:

    $$\begin{aligned} b(x)= & {} \lim _{N\rightarrow \infty } \frac{T(x) \cdots T(\tau ^{-(N-1)}x) v(\tau ^{-(N-1)}x)}{\overline{\Lambda }_v(x,N)} \nonumber \\ {\Lambda (x,p)}= & {} \lim _{N\rightarrow \infty } \frac{\overline{\Lambda }_v(\tau ^{-p} x,N)}{\overline{\Lambda }_v(x,N)}>0, \end{aligned}$$
    (6.2)
  2. (2)

    The vectors b(x) are eigenvectors for products of \(T(\tau ^{-j} x)\) in the sense

    $$\begin{aligned} b(\tau ^{-p}x)=\,\Lambda (x,p)\, T^{-1}(\tau ^{-(p-1)}x)\ldots T^{-1}(x)\, b(x) \end{aligned}$$
    (6.3)
  3. (3)

    \(b(x),\Lambda (x,p)\) are v-independent and continuous in \(x\in {\mathcal F}\) and \(\exists B\) such that \(B^{-1}<||b(x)||<B\); if T(x) is the Jacobian of \(\tau ^{-1}\) the unit vector \(\frac{b(x)}{||b(x)||}\) will be called the unstable unit vector, or unstable direction, at \(\tau x\).

  4. (4)

    The upper and lower limit values \(\ell ^\pm (x)\) of \(\frac{1}{p} \log \Lambda (x,p)\) as \(p\rightarrow \infty \) are constant along trajectories, i.e. k-independent if evaluated at \(\tau ^{-k} x\).

The continuity is an extra property due to the continuity of the terms appearing in the cluster expansion. Analyticity of \(b(x),\Lambda (x,p)\) can be obtained as in the case of Theorem 2 under natural analogous assumptions.

If x is chosen randomly with respect to an invariant measure \(\rho \) then the limits in item (4) are a.e. equal (as in the case of Sect. 2: via the cluster expansion, they are represented as “Birkhoff averages”). If \(\rho \) is ergodic the limits not only exist but are x-independent a.e.; b(x) identifies the unstable direction at x, while \(\ell =\ell ^+=\ell ^-\) is the maximum Lyapunov exponent.
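A numerical illustration of items (1), (3) and (4) in the simplest random-matrix situation (i.i.d. strictly positive \(2\times 2\) matrices, the positive quadrant as cone) is sketched below; the matrices and parameters are illustrative, and the computation is the elementary Furstenberg–Kesten one, not the cluster expansion of the paper.

```python
# Items (1), (3), (4) of Theorem 3 illustrated for i.i.d. strictly positive
# 2x2 matrices: the normalized products forget the initial vector v, and the
# average of the log normalization factors gives the top Lyapunov exponent.
import numpy as np

rng = np.random.default_rng(3)
N = 10_000
Ts = rng.uniform(0.1, 1.0, size=(N, 2, 2))     # one fixed realization of the sequence

def iterate(v):
    v = v / np.linalg.norm(v)
    logs = []
    for T in Ts:
        v = T @ v
        n = np.linalg.norm(v)
        logs.append(np.log(n))
        v = v / n
    return v, np.mean(logs)

b1, l1 = iterate(np.array([1.0, 0.0]))
b2, l2 = iterate(np.array([0.3, 2.0]))
print(np.allclose(b1, b2))     # the limit direction is v-independent
print(l1, l2)                  # Birkhoff-type averages approximating the top Lyapunov exponent
```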


Cite this article

Gallavotti, G. Random Matrices and Lyapunov Coefficients Regularity. J Stat Phys 166, 558–574 (2017). https://doi.org/10.1007/s10955-015-1429-0
