Quantitative lower bounds on the Lyapunov exponent from multivariate matrix inequalities

Abstract

Lyapunov exponents characterize the asymptotic behavior of long matrix products. In this work we introduce a new technique that yields quantitative lower bounds on the top Lyapunov exponent in terms of an efficiently computable matrix sum in ergodic situations. Our approach rests on two results from matrix analysis: the n-matrix extension of the Golden–Thompson inequality and an effective version of the Avalanche Principle. While applications of this method are currently restricted to uniformly hyperbolic cocycles, we include specific examples of ergodic Schrödinger cocycles of polymer type for which, outside of the spectrum, our bounds are substantially stronger than the standard Combes–Thomas estimates. We also show that these techniques yield short proofs of quantitative stability results for the top Lyapunov exponent which are known from more dynamical approaches. Finally, we discuss the problem of finding stable bounds on the Lyapunov exponent for almost-commuting matrices.

Data availability statement

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

Notes

  1. For experts on Schrödinger operators, we note that the polymer models we construct here have spectrum along roughly a \(\frac{1}{p}\)-grid where p is a large parameter. Our bounds give order-1 lower bounds on the Lyapunov exponent inside the spectral gaps which increase linearly with p (showing that the Lyapunov exponent is large inside the gaps); see Theorem 6.8. By contrast, the Combes–Thomas estimates on the Lyapunov exponent only give the lower bound \(\frac{C}{p}\) (since there is always spectrum at that distance nearby and Combes–Thomas bounds deteriorate close to the spectrum), so they significantly underestimate the size of the Lyapunov exponent. See Fig. 1 for a summary.

  2. We remark that a tighter upper bound in the form of a convex optimization problem that can be evaluated efficiently for any fixed a and b is presented in [44].

  3. We mention that there is some degree of flexibility in the choice of \(r_0\), since one can use alternative versions of the effective Avalanche Principle, e.g., Theorem 2.6.

  4. Alternatively it is also possible to use resolvent calculus to relate \(\left\Vert [A_j^{\mathrm {i}t},A_k]\right\Vert \) with \(\left\Vert [A_j,A_k]\right\Vert \).

References

  1. Aizenman, M., Warzel, S.: Random Operators: Disorder Effects on Quantum Spectra and Dynamics, vol. 168. American Mathematical Soc., Providence (2015)

  2. Avila, A.: Global theory of one-frequency Schrödinger operators. Acta Math. 215(1), 1–54 (2015)

  3. Barreira, L.: Lyapunov Exponents. Birkhäuser, Basel (2017)

  4. Barreira, L., Dragičević, D., Valls, C.: Lyapunov functions and cone families. J. Stat. Phys. 148(1), 137–163 (2012)

  5. Bocker-Neto, C., Viana, M.: Continuity of Lyapunov exponents for random two-dimensional matrices. Ergod. Theory Dyn. Syst. 37(5), 1413–1442 (2017)

  6. Bougerol, P., Lacroix, J.: Products of Random Matrices with Applications to Schrödinger Operators, vol. 8. Springer, Berlin (2012)

  7. Bourgain, J.: Green’s Function Estimates for Lattice Schrödinger Operators and Applications. Princeton University Press, Princeton (2005)

  8. Carmona, R., Lacroix, J.: Spectral Theory of Random Schrödinger Operators. Birkhäuser, Boston (1990)

  9. Chapman, J., Stolz, G.: Localization for random block operators related to the XY spin chain. Ann. Henri Poincaré 16(2), 405–435 (2014)

  10. Combes, J.M., Thomas, L.: Asymptotic behaviour of eigenfunctions for multiparticle Schrödinger operators. Commun. Math. Phys. 34(4), 251–270 (1973)

  11. Damanik, D.: Schrödinger operators with dynamically defined potentials. Ergod. Theory Dyn. Syst. 37(6), 1681–1764 (2017)

  12. Duarte, P., Klein, S.: Lyapunov exponents of linear cocycles. Continuity via large deviations. In: Broer, H., Hasselblat, B. (eds.) Atlantis Studies in Dynamical Systems. Atlantis Press, Amsterdam (2016)

  13. Duarte, P., Klein, S.: Continuity of the Lyapunov Exponents of Linear Cocycles. Publicações Matemáticas do IMPA (2017). Available https://impa.br/wp-content/uploads/2017/08/31CBM_02.pdf

  14. Duarte, P., Klein, S.: Large deviations for products of random two dimensional matrices. Commun. Math. Phys. 375, 2191–2257 (2019)

  15. Duarte, P., Klein, S., Santos, M.: A random cocycle with non Hölder Lyapunov exponent. Discrete Contin. Dyn. Syst. A 39(8), 4841 (2019)

  16. Dunlap, D.H., Wu, H.-L., Phillips, P.W.: Absence of localization in a random-dimer model. Phys. Rev. Lett. 65(1), 88–91 (1990)

  17. Furman, A.: Random walks on groups and random transformations. In: Hasselblatt, B., Katok, A. (eds.) Handbook of Dynamical Systems, vol. 1, pp. 931–1014. Elsevier, Amsterdam (2002)

  18. Furstenberg, H.: Non commuting random products. Trans. Am. Math. Soc. 108, 377–428 (1963)

  19. Furstenberg, H., Kesten, H.: Products of random matrices. Ann. Math. Stat. 31(2), 457–469 (1960)

  20. Furstenberg, H., Kifer, Y.: Random matrix products and measures on projective spaces. Israel J. Math. 46(1–2), 12–32 (1983)

  21. Furstenberg, H.: Random walks and discrete subgroups of Lie groups. Adv. Probab. Relat. Top. 1, 1–63 (1971)

  22. Golden, S.: Lower bounds for the Helmholtz function. Phys. Rev. 137, B1127–B1128 (1965)

  23. Goldstein, M., Schlag, W.: Hölder continuity of the integrated density of states for quasi-periodic Schrödinger equations and averages of shifts of subharmonic functions. Ann. Math. 154, 155–203 (2001)

  24. Gorodetski, A., Kleptsyn, V.: Parametric Furstenberg theorem on random products of \({SL}(2,{\mathbb{R}})\) matrices (2018). Available at arXiv:1809.00416

  25. Han, R., Lemm, M., Schlag, W.: Effective multi-scale approach to the Schrödinger cocycle over a skew-shift base. Ergod. Theory Dyn. Syst. 40(10), 2788–2853 (2020)

  26. Hislop, P.D.: Exponential decay of two-body eigenfunctions: a review. In: Proceedings of the Symposium on Mathematical Physics and Quantum Field Theory (Berkeley, CA, 1999), vol. 4, pp. 265–288 (2000)

  27. Jitomirskaya, S., Schulz-Baldes, H., Stolz, G.: Delocalization in random polymer models. Commun. Math. Phys. 233(1), 27–48 (2003)

  28. Jitomirskaya, S., Schulz-Baldes, H.: Upper bounds on wavepacket spreading for random Jacobi matrices. Commun. Math. Phys. 273(3), 601–618 (2007)

  29. Johnson, R.A.: Exponential dichotomy, rotation number, and linear differential operators with bounded coefficients. J. Differ. Equ. 61(1), 54–78 (1986)

  30. Katok, A., Burns, K.: Infinitesimal Lyapunov functions, invariant cone families and stochastic properties of smooth dynamical systems. Ergod. Theory Dyn. Syst. 14(4), 757–785 (1994)

  31. Kielstra, P.M., Lemm, M.: On the finite-size Lyapunov exponent for the Schroedinger operator with skew-shift potential. Commun. Math. Sci. 18(5), 1305–1314 (2020)

  32. Kifer, Y.: Perturbations of random matrix products. Z. Wahrscheinlichkeitstheorie Verwandte Geb. 61(1), 83–95 (1982)

  33. Kingman, J.F.C.: Subadditive ergodic theory. Ann. Probab. 1(6), 883–899 (1973)

  34. Krüger, H.: Multiscale analysis for ergodic Schrödinger operators and positivity of Lyapunov exponents. J. d’Anal. Math. 115(1), 343–387 (2011)

  35. Lemm, M.: On multivariate trace inequalities of Sutter, Berta, and Tomamichel. J. Math. Phys. 59(1), 012204 (2018)

  36. Lieb, E.H.: Convex trace functions and the Wigner–Yanase–Dyson conjecture. Adv. Math. 11(3), 267–288 (1973)

  37. Pastur, L.A.: Spectral properties of disordered systems in the one-body approximation. Commun. Math. Phys. 75(2), 179–196 (1980)

  38. Pollicott, M.: Maximal Lyapunov exponents for random matrix products. Invent. Math. 181(1), 209–226 (2010)

  39. Protasov, V.Y., Jungers, R.M.: Lower and upper bounds for the largest Lyapunov exponent of matrices. Linear Algebra Appl. 438(11), 4448–4468 (2013)

  40. Ruelle, D.: Analycity properties of the characteristic exponents of random matrix products. Adv. Math. 32(1), 68–80 (1979)

  41. Schlag, W.: Regularity and convergence rates for the Lyapunov exponents of linear cocycles. J. Mod. Dyn. 7, 619 (2013)

  42. Sutter, D.: Approximate Quantum Markov Chains. SpringerBriefs in Mathematical Physics, vol. 28. Springer, Berlin (2018)

  43. Sutter, D., Berta, M., Tomamichel, M.: Multivariate trace inequalities. Commun. Math. Phys. 352(1), 37–58 (2017)

  44. Sutter, D., Fawzi, O., Renner, R.: Bounds on Lyapunov exponents via entropy accumulation. IEEE Trans. Inf. Theory 67(1), 10–24 (2020)

  45. Thompson, C.J.: Inequality with applications in statistical mechanics. J. Math. Phys. 6(11), 1812–1813 (1965)

  46. Viana, M.: Lectures on Lyapunov Exponents. Cambridge University Press, Cambridge (2014)

  47. Viana, M.: (Dis)continuity of Lyapunov exponents. Ergod. Theory Dyn. Syst. 40, 1–35 (2018)

  48. Wilkinson, A.: What are Lyapunov exponents, and why are they interesting? Bull. Am. Math. Soc. 54(1), 79–105 (2016)

  49. Wojtkowski, M.: Invariant families of cones and Lyapunov exponents. Ergod. Theory Dyn. Syst. 5(1), 145–161 (1985)

  50. Zhang, Z.: Uniform hyperbolicity and its applications to spectral analysis of 1D discrete Schrödinger operators (2013). Available at arXiv:1305.4226

Acknowledgements

We thank Jürg Fröhlich, Silvius Klein, Jeffrey Schenker, and Wilhelm Schlag for helpful remarks. DS acknowledges support from the Swiss National Science Foundation via the NCCR QSIT as well as project No. 200020_165843.

Ethics declarations

Conflict of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Appendices

Proof of the Avalanche Principle, version 2

Proof of Theorem 2.6

The proof follows the general steps in [25]. We only summarize the necessary changes in the argument, which amount to various refined estimates and a different choice of the scaling parameter t (see below).

We define \(\varepsilon _0=\frac{1}{5}\) and \(c_0=\frac{1}{6}\). Using that \((\varepsilon _0+\sqrt{1-\varepsilon _0^2})\pi /2\le 1.86\), the estimate \(\frac{\sqrt{2}\pi }{2}\kappa \varepsilon ^{-2}\) on the Lipschitz constant in Corollary 5.7(b) in [25] can be replaced by

$$\begin{aligned} 1.86\,\kappa \varepsilon ^{-2}. \end{aligned}$$
(A.1)

This replacement then propagates through the argument in [25], specifically to Lemma 5.14, Corollary 5.15 and the proof of Theorem 5.5.
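
The numerical constant in (A.1) can be checked directly; a minimal Python computation (the variable names are ours and it is not part of the argument):

```python
import math

# The constant in (A.1): (eps_0 + sqrt(1 - eps_0^2)) * pi/2 <= 1.86,
# which replaces sqrt(2)*pi/2 (approximately 2.22) from Corollary 5.7(b) in [25].
eps0 = 1 / 5
prefactor = (eps0 + math.sqrt(1 - eps0**2)) * math.pi / 2
print(prefactor)            # approximately 1.8532
assert prefactor <= 1.86
```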

Another change is made starting with Lemma 5.12 in [25]. We define \({\tilde{\varepsilon }}=t\varepsilon \) with \(t=\frac{72}{100}\). (This choice of t is close to the largest one allowed here, which is important for having a good bound in (A.4) below.) Lemma 5.12 in [25] then holds true with this choice of t under our present assumptions on \(\kappa \) and \(\varepsilon \). To see this, we note that condition (5.7) in [25] can be rearranged as

$$\begin{aligned} t>c_0\sqrt{\frac{1+2c_0^2\varepsilon ^2}{1+c_0^2\varepsilon ^2}} , \end{aligned}$$
(A.2)

which is comfortably satisfied for all \(\varepsilon \in (0,\varepsilon _0)\). To establish Lemma 5.12, it then remains to check that

$$\begin{aligned} f\left( \frac{c_0\varepsilon }{t}\sqrt{1-t^2\varepsilon ^2},\sqrt{1-\frac{\varepsilon ^2}{1+2c_0^2\varepsilon ^2}}\right) <\sqrt{1-t^2\varepsilon ^2} \end{aligned}$$
(A.3)

for \(f(x,y)=x\sqrt{1-y^2}+y\sqrt{1-x^2}\), which is tedious, but elementary. (Here we avoid estimating the square roots by their linearization as was done in (5.8) of [25].)
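
Since we only claim that the verification of (A.3) is elementary, we record a numerical sweep that illustrates both (A.2) and (A.3) across \(\varepsilon \in (0,\varepsilon _0)\); a sketch in Python (the grid resolution is an arbitrary choice and does not replace the elementary computation):

```python
import math

c0, eps0, t = 1 / 6, 1 / 5, 72 / 100

def f(x, y):
    return x * math.sqrt(1 - y**2) + y * math.sqrt(1 - x**2)

for i in range(1, 10_000):                  # sweep eps over (0, eps0); grid size arbitrary
    eps = eps0 * i / 10_000
    # (A.2): t > c0 * sqrt((1 + 2*c0^2*eps^2) / (1 + c0^2*eps^2))
    assert t > c0 * math.sqrt((1 + 2 * c0**2 * eps**2) / (1 + c0**2 * eps**2))
    # (A.3): f(x, y) < sqrt(1 - t^2*eps^2) with x, y as in (A.3)
    x = c0 * eps / t * math.sqrt(1 - t**2 * eps**2)
    y = math.sqrt(1 - eps**2 / (1 + 2 * c0**2 * eps**2))
    assert f(x, y) < math.sqrt(1 - t**2 * eps**2)
```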

The previously established estimates yield the following modification of Lemma 5.16 in [25]. The relevant Lipschitz constant is now

$$\begin{aligned} L=1.86\,\kappa {\tilde{\varepsilon }}^{-2}\le 1.86\, c_0 t^{-2}<\frac{3}{5}<1 , \end{aligned}$$
(A.4)

where the fact that this is strictly less than 1 is important for having a contraction. The relevant estimate after summing the geometric series in the proof of Lemma 5.16 then yields

$$\begin{aligned} \frac{\kappa {\tilde{\varepsilon }}^{-1}}{1-L} = \frac{1/t}{1-L}\frac{\kappa }{\varepsilon } <\frac{7}{2}\frac{\kappa }{\varepsilon } \end{aligned}$$
(A.5)

and then the right-hand side replaces the upper bound from Lemma 5.16 (which was \(3\kappa /\varepsilon \) in [25]).
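
The two numerical facts behind (A.4) and (A.5) amount to the following elementary computation (a short Python sketch; recall that \(\kappa {\tilde{\varepsilon }}^{-2}\le c_0t^{-2}\) is exactly the bound used in (A.4)):

```python
c0, t = 1 / 6, 72 / 100

# (A.4): using kappa * eps_tilde^(-2) <= c0 * t^(-2), the Lipschitz constant satisfies
L_max = 1.86 * c0 / t**2
assert L_max < 3 / 5                      # approximately 0.598 < 0.6

# (A.5): (1/t) / (1 - L) <= (1/t) / (1 - L_max) < 7/2
assert (1 / t) / (1 - L_max) < 7 / 2      # approximately 3.45 < 3.5
```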

For the proof of Theorem 5.5(i), it suffices to note that

$$\begin{aligned} \frac{7}{2}\frac{\kappa }{\varepsilon } +\frac{1}{2}\le \frac{7}{2}c_0 \varepsilon _0+\frac{1}{2}=\frac{7}{60}+\frac{1}{2}<\sqrt{1-t^2\varepsilon _0^2}<\sqrt{1-{\tilde{\varepsilon }}^2}\, . \end{aligned}$$
(A.6)

In the conclusion of Theorem 5.5(i), the bound \(3\frac{\kappa }{\varepsilon }\) is replaced by \(\frac{7}{2}\frac{\kappa }{\varepsilon }\) due to our alternative version of Lemma 5.16 described above.

Finally, in the proof of Theorem 5.5(ii), we replace every instance of \(3\frac{\kappa }{\varepsilon }\) by \(\frac{7}{2}\frac{\kappa }{\varepsilon }\) in (5.17)–(5.19). This has the effect of replacing the lower bound in (5.19) by

$$\begin{aligned} \frac{\varepsilon }{\sqrt{1+2\frac{\kappa ^2}{\varepsilon ^2}}}-\frac{7}{2}\frac{\kappa }{\varepsilon } \ge \frac{\varepsilon }{\sqrt{1+2\frac{\kappa ^2}{\varepsilon ^2}}}-\frac{7}{12}\varepsilon > \frac{1}{3}\varepsilon \end{aligned}$$
(A.7)

and therefore the upper bound in (5.20) by \(\frac{21}{2}\frac{\kappa }{\varepsilon ^2}\). Combining this with the bound (5.21) in [25] and using \(\kappa \le c_0\varepsilon _0^2=\frac{1}{150}\) yields the lower bound on the main quantity

$$\begin{aligned} \frac{\rho (L_1,L_2,\ldots ,L_{n})}{\rho (L_1,L_2)\cdots \rho (L_{n-1}, L_{n})}\ge \mathrm {e}^{-11 n\kappa /{\varepsilon ^2}} \end{aligned}$$
(A.8)

so \(c_l=11\) as desired.
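
The chain of inequalities in (A.7) can be checked numerically in the regime \(\kappa \le c_0\varepsilon ^2\) that enters (A.4); a short Python sketch (the grid resolution is an arbitrary choice):

```python
import math

c0, eps0 = 1 / 6, 1 / 5

# (A.7) in the regime kappa <= c0*eps^2 used in (A.4); the left-hand side is
# decreasing in kappa, so kappa = c0*eps^2 is the worst case at each grid point.
for i in range(1, 10_001):
    eps = eps0 * i / 10_000
    kappa = c0 * eps**2
    lhs = eps / math.sqrt(1 + 2 * kappa**2 / eps**2) - (7 / 2) * kappa / eps
    assert lhs > eps / 3
```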

It remains to prove the upper bound on the main quantity. We first note that the upper bound \(\kappa '\) in Lemma 5.17 needs to be replaced by \(\kappa _j'=\left( \frac{3}{5}\right) ^{j-1} \frac{6}{5}\kappa \). Indeed, Lemma 5.16 (with the constant \(\frac{7}{2}\frac{\kappa }{\varepsilon }\le \frac{7}{12}\varepsilon =:r\)) and Proposition 2.29 in [13] imply \(\Vert (D\hat{g_0})_{\hat{\mathfrak {v}}(g^j)}\Vert \le \kappa \frac{r+\sqrt{1-r^2}}{1-r^2}\le \frac{60}{53}\kappa \). From there, applying the chain rule as in [25] implies

$$\begin{aligned} \mathrm {gr}(g^j)\le L^{j-1}\frac{60}{53}\kappa <\kappa _j', \end{aligned}$$
(A.9)

with the Lipschitz constant \(L=1.86\,\kappa {\tilde{\varepsilon }}^{-2}<\frac{3}{5}\) defined in (A.4). Equipped with this new version of Lemma 5.17 and recalling (A.7), the relevant bound becomes

$$\begin{aligned} \log \sqrt{1+2\frac{\big (\kappa _j'\big )^2}{\varepsilon /3}} \le \frac{\big (\kappa _j'\big )^2}{\varepsilon /3} \le 7.2\left( \frac{3}{5}\right) ^{j}\frac{\kappa ^2}{\varepsilon }. \end{aligned}$$
(A.10)
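
The constants \(\frac{60}{53}\) and \(7.2\) appearing above can be verified numerically as well; the first step of (A.10) is just \(\log \sqrt{1+2a}\le a\). A short Python sketch (exact rational arithmetic is used for the borderline case \(j=1\), where equality holds):

```python
import math
from fractions import Fraction as F

eps0 = 1 / 5
r_max = 7 / 12 * eps0                     # r = (7/12)*eps <= 7/60 for eps in (0, eps0]

# kappa * (r + sqrt(1 - r^2)) / (1 - r^2) <= (60/53) * kappa on this range:
for i in range(10_001):
    r = r_max * i / 10_000
    assert (r + math.sqrt(1 - r**2)) / (1 - r**2) <= 60 / 53

# With kappa_j' = (3/5)^(j-1) * (6/5) * kappa, the middle term of (A.10) satisfies
#   (kappa_j')^2 / (eps/3) = 3 * (6/5)^2 * (3/5)^(2j-2) * kappa^2/eps
#                         <= 7.2 * (3/5)^j * kappa^2/eps,
# with equality at j = 1, hence the exact rationals (7.2 = 36/5):
for j in range(1, 50):
    assert 3 * F(6, 5)**2 * F(3, 5)**(2 * j - 2) <= F(36, 5) * F(3, 5)**j
```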

Since the new upper bound in (5.20) is \(\frac{21}{2}\frac{\kappa }{\varepsilon ^2}\) and we assume \(n\ge 36\), we have

$$\begin{aligned} \frac{21}{2}n+7.2\sum _{j=1}^n \left( \frac{3}{5}\right) ^{j} \le \frac{21}{2}n+7.2\frac{5}{2} \le 11n \end{aligned}$$
(A.11)

and hence the upper bound on the main quantity

$$\begin{aligned} \frac{\rho (L_1,L_2,\ldots ,L_{n})}{\rho (L_1,L_2)\cdots \rho (L_{n-1}, L_{n})} \le \mathrm {e}^{11 n\kappa /{\varepsilon ^2}} \end{aligned}$$
(A.12)

as desired. This proves Theorem 2.6. \(\square \)
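
For completeness, the elementary arithmetic behind (A.11) can be checked exactly (a short Python sketch; the range of n tested is an arbitrary choice):

```python
from fractions import Fraction as F

# (A.11): since sum_{j=1}^{n} (3/5)^j <= sum_{j>=0} (3/5)^j = 5/2 and 7.2 = 36/5,
# it suffices that (21/2)*n + (36/5)*(5/2) = (21/2)*n + 18 <= 11*n, i.e. n >= 36.
assert F(36, 5) * F(5, 2) == 18
for n in range(36, 1000):
    assert F(21, 2) * n + 18 <= 11 * n
assert not (F(21, 2) * 35 + 18 <= 11 * 35)    # the threshold n >= 36 is sharp
```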

Proofs for the almost-commuting case

1.1 Proof of Proposition 5.2

Note that for each index \(1\le k\le n\) it holds that \(A_k^{1+\mathrm {i}t}=A_k A_k^{\mathrm {i}t}=A_k^{\mathrm {i}t} A_k\). We start by rewriting the difference as a telescopic sum

$$\begin{aligned}&A_1^{1+\mathrm {i}t}A_2^{1+\mathrm {i}t}\ldots A_n^{1+\mathrm {i}t}-A_1\ldots A_n A_1^{\mathrm {i}t}\ldots A_n^{\mathrm {i}t}\\&=\sum _{j=1}^{n-1} A_1^{1+\mathrm {i}t}\ldots A_{j-1}^{1+\mathrm {i}t} A_j [A_j^{\mathrm {i}t},A_{j+1}\ldots A_n]A_{j+1}^{\mathrm {i}t}\ldots A_n^{\mathrm {i}t} . \end{aligned}$$

We express the long commutator \([A_j^{\mathrm {i}t},A_{j+1}\ldots A_n]\) as a sum of individual commutators \([A_j^{\mathrm {i}t},A_k ]\) by iteratively applying the Leibniz rule for commutators,

$$\begin{aligned} {[}B,CD]=C[B,D]+[B,C]D . \end{aligned}$$

We find

$$\begin{aligned} {[}A_j^{\mathrm {i}t},A_{j+1}\ldots A_n] =\sum _{k=j+1}^n A_{j+1}\ldots A_{k-1} \left[ A_j^{\mathrm {i}t}, A_k \right] A_{k+1}\ldots A_{n} . \end{aligned}$$
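
Both the telescopic expansion and the iterated Leibniz expansion above are exact algebraic identities; the following small numerical sanity check (Python with numpy and scipy; the dimension, the number of matrices, the value of t, and the helper functions are arbitrary test choices of ours) verifies them on random positive definite matrices:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
d, n, t = 3, 5, 0.7                              # arbitrary small test sizes

def random_pd(d):
    B = rng.standard_normal((d, d))
    return B @ B.T + d * np.eye(d)               # positive definite test matrix

def prod(mats):
    out = np.eye(d, dtype=complex)
    for M in mats:
        out = out @ M
    return out

A = [random_pd(d) for _ in range(n)]
U = [expm(1j * t * logm(Ak)) for Ak in A]        # U[k] = A_k^{it}, a unitary

# Telescopic sum: prod_k A_k^{1+it} - (prod_k A_k)(prod_k A_k^{it})
#   = sum_j A_1^{1+it}...A_{j-1}^{1+it} A_j [A_j^{it}, A_{j+1}...A_n] A_{j+1}^{it}...A_n^{it}.
lhs = prod([A[k] @ U[k] for k in range(n)]) - prod(A) @ prod(U)
rhs = sum(prod([A[i] @ U[i] for i in range(j)]) @ A[j]
          @ (U[j] @ prod(A[j + 1:]) - prod(A[j + 1:]) @ U[j])
          @ prod(U[j + 1:])
          for j in range(n - 1))
assert np.allclose(lhs, rhs)

# Iterated Leibniz rule: [A_j^{it}, A_{j+1}...A_n]
#   = sum_{k>j} A_{j+1}...A_{k-1} [A_j^{it}, A_k] A_{k+1}...A_n.
j = 1
comm = U[j] @ prod(A[j + 1:]) - prod(A[j + 1:]) @ U[j]
expanded = sum(prod(A[j + 1:k]) @ (U[j] @ A[k] - A[k] @ U[j]) @ prod(A[k + 1:])
               for k in range(j + 1, n))
assert np.allclose(comm, expanded)
```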

The unitary invariance of the operator norm then gives for \(\varepsilon _t:=\max _{j,k \in \mathbb {N}} \Vert [A_j^{\mathrm {i}t},A_k]\Vert \)

$$\begin{aligned} X_{1,n}(t) \le X_{1,n}(0) + \varepsilon _t \sum _{j<k\le n} X_{1,j-1}(t) X_{j,j}(0) X_{j+1,k-1}(0) X_{k+1,n}(0) . \end{aligned}$$

Thus we obtain

$$\begin{aligned} \frac{X_{1,n}(t)}{X_{1,n}(0)}&\le 1 + \varepsilon _t \sum _{j<k \le n} \frac{X_{1,j-1}(t) X_{j,j}(0) X_{j+1,k-1}(0) X_{k+1,n}(0)}{X_{1,n}(0)} \\&\le 2\varepsilon _t \sum _{j<k \le n} \frac{X_{1,j-1}(t) X_{j,j}(0) X_{j+1,k-1}(0) X_{k+1,n}(0)}{X_{1,n}(0)} , \end{aligned}$$

where the final step is valid because we can assume without loss of generality that \( 1 \le \varepsilon _t \sum _{j<k \le n} \frac{X_{1,j-1}(t) X_{j,j}(0) X_{j+1,k-1}(0) X_{k+1,n}(0)}{X_{1,n}(0)} \), as otherwise the assertion of Proposition 5.2 holds trivially.

Assumption 5.1 then gives

$$\begin{aligned} \frac{X_{1,j-1}(t) X_{j,j}(0) X_{j+1,k-1}(0) X_{k+1,n}(0)}{X_{1,n}(0)}&\le \frac{\mathrm {e}^{(j-1) \gamma _1(t) + c} \mathrm {e}^{\gamma _1 + c} \mathrm {e}^{(k-j-1) \gamma _1 + c} \mathrm {e}^{(n-k) \gamma _1+ c}}{\mathrm {e}^{n \gamma _1 - c}} \\&\le \mathrm {e}^{(j-1) (\gamma _1(t) - \gamma _1)} \mathrm {e}^{5c} . \end{aligned}$$

We thus find

$$\begin{aligned} \frac{X_{1,n}(t)}{X_{1,n}(0)}&\le 1+2 \mathrm {e}^{5c} \varepsilon _t \sum _{j<k\le n} \mathrm {e}^{j (\gamma _1(t) - \gamma _1)} \\&\le 1+ 2\mathrm {e}^{5c} \varepsilon _t \sum _{j<k\le n} \mathrm {e}^{j |\gamma _1(t) - \gamma _1|} \\&\le 1+ 2\mathrm {e}^{5c} \varepsilon _t \sum _{j=0}^n (n-j) \mathrm {e}^{j |\gamma _1(t) - \gamma _1|} . \end{aligned}$$

We next bound the sum by an integral (see also the numerical sketch at the end of this proof), i.e.,

$$\begin{aligned} \sum _{j=0}^n (n-j) \mathrm {e}^{j |\gamma _1(t) - \gamma _1|} \le \int _{0}^n (n-x) \mathrm {e}^{x |\gamma _1(t) - \gamma _1|} \mathrm {d}x + \frac{ \mathrm {e}^{n |\gamma _1(t) - \gamma _1| }}{|\gamma _1(t) - \gamma _1|} , \end{aligned}$$

which is correct because the function \(\phi _{n,c}: [0,n] \ni x \mapsto (n-x)\mathrm {e}^{x c}\) for \(c>0\) is monotonically increasing in \([0,x^\star ]\) and monotonically decreasing in \([x^\star ,n]\) where \(x^\star = n-1/c\). Furthermore we have \(\phi _{n,c}(x^\star )=\frac{1}{c}\mathrm {e}^{nc -1} \le \frac{1}{c}\mathrm {e}^{nc}\). Hence we find

$$\begin{aligned} \frac{X_{1,n}(t)}{X_{1,n}(0)}&\le 1 +2\mathrm {e}^{5c} \varepsilon _t \left( \frac{\mathrm {e}^{n |\gamma _1(t) - \gamma _1|}}{|\gamma _1(t) - \gamma _1|^2} + \frac{ \mathrm {e}^{n |\gamma _1(t) - \gamma _1| }}{|\gamma _1(t) - \gamma _1|} \right) . \end{aligned}$$

Together with Assumption 5.1 this implies

$$\begin{aligned} |\gamma _1(t) - \gamma _1 |&\le \frac{1}{n} \log X_{1,n}(t) - \frac{1}{n} \log X_{1,n}(0) + \frac{2c}{n} \\&\le \frac{1}{n} \log \left( 1 +2\mathrm {e}^{5c} \varepsilon _t \left( \frac{\mathrm {e}^{n |\gamma _1(t) - \gamma _1|}}{|\gamma _1(t) - \gamma _1|^2} + \frac{ \mathrm {e}^{n |\gamma _1(t) - \gamma _1| }}{|\gamma _1(t) - \gamma _1|} \right) \right) + \frac{2c}{n} . \end{aligned}$$

As a result we obtain

$$\begin{aligned} \mathrm {e}^{n |\gamma _1(t) - \gamma _1| } \le \left( 1 +2\mathrm {e}^{5c} \varepsilon _t \left( \frac{\mathrm {e}^{n |\gamma _1(t) - \gamma _1|}}{|\gamma _1(t) - \gamma _1|^2} + \frac{ \mathrm {e}^{n |\gamma _1(t) - \gamma _1| }}{|\gamma _1(t) - \gamma _1|} \right) \right) \mathrm {e}^{2c} . \end{aligned}$$

Since this is true for all \(n\in \mathbb {N}\) we can conclude that

$$\begin{aligned} |\gamma _1(t) - \gamma _1| \le \max \big \{\sqrt{ 4\mathrm {e}^{5c} \varepsilon _t}, 4\mathrm {e}^{5c} \varepsilon _t \big \} , \end{aligned}$$

which proves the assertion. \(\square \)
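
The elementary sum-versus-integral estimate used above can also be confirmed numerically; a brief Python sketch (the test values of \(c=|\gamma _1(t)-\gamma _1|\) and n are arbitrary choices), using \(\int _0^n(n-x)\mathrm {e}^{cx}\,\mathrm {d}x=(\mathrm {e}^{cn}-1-cn)/c^2\):

```python
import math

# For c = |gamma_1(t) - gamma_1| > 0:
#   sum_{j=0}^{n} (n-j) * e^{j*c} <= int_0^n (n-x) * e^{c*x} dx + e^{n*c}/c,
# where the integral equals (e^{c*n} - 1 - c*n) / c^2 (integration by parts).
for c in (0.05, 0.3, 1.0, 2.5):            # arbitrary test values
    for n in (1, 5, 20, 100):
        s = sum((n - j) * math.exp(j * c) for j in range(n + 1))
        integral = (math.exp(c * n) - 1 - c * n) / c**2
        assert s <= integral + math.exp(n * c) / c
```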

1.2 Proof of Corollary 5.3

The n-matrix Golden–Thompson inequality from Theorem 2.1 implies that

$$\begin{aligned} \int _{\mathbb {R}} f(t) \gamma _1(t) \mathrm {d}t&= \lim _{n \rightarrow \infty } \frac{1}{n} \int _{\mathbb {R}} f(t) \log \left\| \prod _{k=1}^n A_k^{1+\mathrm {i}t} \right\| \mathrm {d}t \\&\ge \lim _{n \rightarrow \infty } \frac{1}{n} \log \left\| \exp \left( \sum _{k=1}^n \log A_k \right) \right\| \\&= \lim _{n \rightarrow \infty } \lambda _{\max } \left( \frac{1}{n} \sum _{k=1}^n \log A_k \right) \\&= \lambda _{\max }({\mathbb {E}}\log A_1) , \end{aligned}$$

where in the first step we swap the limit and the integral, which is valid by the dominated convergence theorem. Proposition 5.2 implies

$$\begin{aligned} \gamma _1 \ge \int _{\mathbb {R}} f(t) \gamma _1(t) \mathrm {d}t - \int _{\mathbb {R}} f(t) \max \left\{ \sqrt{ 4 \mathrm {e}^{5c}\max _{j,k \in \mathbb {N}} \left\| \big [A_j^{\mathrm {i}t},A_k\big ] \right\| },4 \mathrm {e}^{5c}\max _{j,k \in \mathbb {N}} \left\| \big [A_j^{\mathrm {i}t},A_k\big ] \right\| \right\} \mathrm {d}t . \end{aligned}$$

We can use a well-known bound [42, Lemma 3.8] to further simplify this estimate (see Note 4) by using

$$\begin{aligned} \int _{\mathbb {R}} f(t) \sqrt{\max _{j,k \in \mathbb {N}} \left\| \big [A_j^{\mathrm {i}t},A_k\big ] \right\| } \mathrm {d}t&\le \max _{j,k \in \mathbb {N}} \sqrt{\left\Vert [\log A_j, A_k]\right\Vert } \int _{\mathbb {R}} f(t) \sqrt{|t|} \mathrm {d}t \\&\le \max _{j,k \in \mathbb {N}} \sqrt{\left\Vert [\log A_j, A_k]\right\Vert } , \end{aligned}$$

which then proves the assertion. \(\square \)
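
For orientation, the first inequality in the proof above rests on the n-matrix Golden–Thompson inequality of Theorem 2.1. The following minimal numerical illustration checks only its classical two-matrix trace version [22, 45] on random Hermitian test matrices (the dimension, the number of trials, and the helper function are arbitrary choices of ours; it illustrates the classical special case, not Theorem 2.1 itself):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_hermitian(d):
    X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (X + X.conj().T) / 2

# Classical Golden-Thompson inequality [22, 45]: Tr exp(H1 + H2) <= Tr(exp(H1) exp(H2)).
d = 5
for _ in range(100):
    H1, H2 = random_hermitian(d), random_hermitian(d)
    lhs = np.trace(expm(H1 + H2)).real
    rhs = np.trace(expm(H1) @ expm(H2)).real
    assert lhs <= rhs * (1 + 1e-9)
```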

Cite this article

Lemm, M., Sutter, D. Quantitative lower bounds on the Lyapunov exponent from multivariate matrix inequalities. Anal. Math. Phys. 12, 35 (2022). https://doi.org/10.1007/s13324-021-00641-x
