
Double variational principle for mean dimension

Geometric and Functional Analysis

Abstract

We develop a variational principle between mean dimension theory and rate distortion theory. We consider a minimax problem for the rate distortion dimension with respect to two variables (metrics and measures). We prove that the minimax value is equal to the mean dimension for any dynamical system with the marker property. The proof exhibits a new combination of ergodic theory, rate distortion theory and geometric measure theory. Along the way, we also show that if a dynamical system has the marker property, then it admits a metric for which the upper metric mean dimension is equal to the mean dimension.
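Roughly speaking, the main result takes the following schematic form (our paraphrase; the precise statement, including the definitions of the rate distortion dimensions and of the space \(\mathscr {D}(\mathcal {X})\) of metrics compatible with the topology, is given in the body of the paper): for a dynamical system \((\mathcal {X},T)\) with the marker property,

$$\begin{aligned} \mathrm {mdim}(\mathcal {X},T) = \min _{d\in \mathscr {D}(\mathcal {X})}\ \sup _{\mu } \underline{\mathrm {rdim}}(\mathcal {X},T,d,\mu ), \end{aligned}$$

where \(\mu \) runs over the \(T\)-invariant Borel probability measures on \(\mathcal {X}\); an analogous identity holds with the upper rate distortion dimension \(\overline{\mathrm {rdim}}\) in place of the lower one.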


Notes

  1. Throughout the paper we assume that the base of the logarithm is two. The natural logarithm (i.e. the logarithm of base e) is written as \(\ln (\cdot )\).

  2. \(f{:}\,\mathcal {X}\rightarrow (C_N)^\mathbb {Z}\) is called an embedding of a dynamical system if it is a topological embedding and satisfies \(f\circ T = \sigma \circ f\).

  3. The papers [GT, GQT] used the ideas of communication theory and signal processing. This is another manifestation of the intimate connections between mean dimension and information theory.

  4. It seems that these have attracted new interest from information theory researchers in the context of compressed sensing; see, e.g. [WV10, RJEP].

  5. The idea of introducing mean Hausdorff dimension was partly motivated by the study of Kawabata–Dembo [KD94, Proposition 3.2]. Roughly speaking, their result [KD94, Proposition 3.2] corresponds to Step 2.2 for \((\mathcal {X}, T) = (A^\mathbb {Z},\mathrm {shift})\) with \(A\subset \mathbb {R}^n\). In other words, Step 2.2 is a generalization of their result to arbitrary dynamical systems.
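
    In rough terms (our paraphrase, not the exact formulation of [KD94]), the Kawabata–Dembo estimate says that for a Borel probability measure \(\mu \) on a set \(A\subset \mathbb {R}^n\), the Hausdorff dimension of \(\mu \) bounds its rate distortion dimension from below:

    $$\begin{aligned} \dim _{\mathrm {H}}\mu \le \liminf _{\varepsilon \rightarrow 0} \frac{R_{\mu }(\varepsilon )}{\log (1/\varepsilon )}, \end{aligned}$$

    where \(R_{\mu }(\varepsilon )\) denotes the rate distortion function of \(\mu \) at distortion level \(\varepsilon \).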

  6. There is also a small issue concerning the tame growth of covering numbers condition, but we ignore it here.

  7. We always assume that the \(\sigma \)-algebra of a finite set is the largest one (the set of all subsets).

  8. We can show this by proving the data-processing inequality (Lemma 2.5) for the quantity defined by (2.4) in the case that \(\mathcal {X}\) and \(\mathcal {Y}\) are finite sets. See [CT06, Section 2.8].
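
    For the reader's convenience: the data-processing inequality is the standard fact (see [CT06, Section 2.8]) that mutual information cannot increase along a Markov chain, namely if \(X\rightarrow Y\rightarrow Z\) is a Markov chain then

    $$\begin{aligned} I(X;Z) \le I(X;Y). \end{aligned}$$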

  9. The continuity of \(\rho \) and \(\lambda \) is inessential. But we assume it for simplicity. Indeed in our applications, \(\mathcal {X}= \mathcal {Y}\), \(\rho \) is a distance function and \(\lambda \) is a constant.

  10. E.g. expanding signals in a wavelet basis, discarding small terms and quantizing the remaining terms.

  11. Although the "operational meaning" of the rate distortion function is important for understanding it, we do not use it in the paper, so we do not give a complete explanation. See [LDN79, ECG94, Gra90] for the non-ergodic case.
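
    For orientation only (this is the standard single-letter form of the definition, see [CT06]; the dynamical version used in the paper involves averages over orbit segments): the rate distortion function of a source \(X\) with distortion \(\rho \) at distortion level \(\varepsilon \) is

    $$\begin{aligned} R(\varepsilon ) = \inf _{Y{:}\,\mathbb {E}\,\rho (X,Y)\le \varepsilon } I(X;Y), \end{aligned}$$

    where the infimum runs over all random variables \(Y\) coupled to \(X\).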

  12. Indeed here we use only \(\underline{{\mathrm {mdim}}}_{\mathrm {H}}(\mathcal {X},T,d) < s\).

  13. An important point for us is that the statement is valid for each fixed \(\delta \) (not only the limits of \(\delta \rightarrow 0\)).

  14. Since we assume that A is a finite set, this just means that \(\mu _n(x)\rightarrow \mu (x)\) at each \(x\in A\).

  15. The coupling between X(k) and Y is given by the probability mass function

    $$\begin{aligned} \sum _{x'\in A^m} \pi _k(x,x') \mathbb {P}(Y=y|\mathcal {P}^m(X)=x'), \end{aligned}$$

    which converges to \(\mathbb {P}(\mathcal {P}^m(X)=x, Y=y)\).

  16. Indeed we cannot find this statement in [PS32]. The main theorem of their paper states that \(\dim \mathcal {X}\) is equal to the infimum of \(\underline{\dim }_{\mathrm {M}}(\mathcal {X},d)\) over \(d\in \mathscr {D}(\mathcal {X})\). But their argument actually proves Theorem 5.1.

  17. “Open” is easy. To show “dense”, take arbitrary \(f\in C(\mathcal {X},V)\) and \(\delta >0\). Choose \(0<\varepsilon <1/n\) such that \(d(x,y)<\varepsilon \) implies \(\left| \left| f(x)-f(y)\right| \right| <\delta \). There exists an \(\varepsilon \)-embedding \(\pi {:}\,\mathcal {X}\rightarrow P\) in a simplicial complex P of dimension \(\le \dim \mathcal {X}\). From Lemma 5.3 (2) and (3) in Section 5.2 we can find a linear embedding \(g:P\rightarrow V\) with \(\left| \left| g(\pi (x))-f(x)\right| \right| < \delta \). From Lemma 5.3 (1), \(\log \#(g(P), \left| \left| \cdot \right| \right| ,\varepsilon ')/\log (1/\varepsilon ')\) is less than \(\dim \mathcal {X} + 1/n\) for sufficiently small \(\varepsilon '\). This shows \(g\circ \pi \in A_n\). So \(A_n\) is dense. Therefore the main point of the proof of Theorem 5.2 is a “polyhedral approximation”. The basic idea of the proof of Theorem 5.1 is also a polyhedral approximation, but in a much more accurate way. See Section 5.3.

  18. Here is a technical point. The number \(A(P_n*Q_n)\) is defined by using the simplicial complex structure of \(P_n*Q_n\). We use the natural simplicial complex structure of the join \(P_n*Q_n\) here, not its subdivision introduced in (5.8).

  19. Since r is finite, this is more restricted than in the literature [Sch95, LL18], where automorphisms of general compact Abelian groups are considered. We study only this restricted class here for simplicity.

References

  1. T. Berger. Rate Distortion Theory: A Mathematical Basis for Data Compression. Prentice-Hall, Englewood Cliffs, NJ (1971).

  2. T.M. Cover and J.A. Thomas. Elements of Information Theory, 2nd edition. Wiley, New York (2006).

  3. E.I. Dinaburg. A correlation between topological entropy and metric entropy. Dokl. Akad. Nauk SSSR, 190 (1970), 19–22.

  4. M. Effros, P.A. Chou and R.M. Gray. Variable-rate source coding theorems for stationary nonergodic sources. IEEE Trans. Inf. Theory, 40 (1994), 1920–1925.

  5. M. Einsiedler and T. Ward. Ergodic Theory with a View Towards Number Theory. Graduate Texts in Mathematics, 259. Springer, London (2011).

  6. T.N.T. Goodman. Relating topological entropy and measure entropy. Bull. London Math. Soc., 3 (1971), 176–180.

  7. L.W. Goodwyn. Topological entropy bounds measure-theoretic entropy. Proc. Amer. Math. Soc., 23 (1969), 679–688.

  8. R.M. Gray. Entropy and Information Theory. Springer-Verlag, New York (1990).

  9. M. Gromov. Topological invariants of dynamical systems and spaces of holomorphic maps: I. Math. Phys. Anal. Geom., 2 (1999), 323–415.

  10. Y. Gutman. Mean dimension and Jaworski-type theorems. Proc. London Math. Soc., (4)111 (2015), 831–850.

  11. Y. Gutman, E. Lindenstrauss and M. Tsukamoto. Mean dimension of \(\mathbb{Z}^k\)-actions. Geom. Funct. Anal., (3)26 (2016), 778–817.

  12. Y. Gutman, Y. Qiao and M. Tsukamoto. Application of signal analysis to the embedding problem of \(\mathbb{Z}^k\)-actions. arXiv:1709.00125, to appear in Geom. Funct. Anal.

  13. Y. Gutman and M. Tsukamoto. Embedding minimal dynamical systems into Hilbert cubes, preprint. arXiv:1511.01802.

  14. J.D. Howroyd. On dimension and on the existence of sets of finite, positive Hausdorff measures. Proc. London Math. Soc., 70 (1995), 581–604.

  15. R.I. Jewett. The prevalence of uniquely ergodic systems. J. Math. Mech., 19 (1970), 717–729.

  16. T. Kawabata and A. Dembo. The rate distortion dimension of sets and measures. IEEE Trans. Inf. Theory, (5)40 (1994), 1564–1572.

  17. A.N. Kolmogorov and V.M. Tihomirov. \(\varepsilon \)-entropy and \(\varepsilon \)-capacity of sets in functional spaces. Amer. Math. Soc. Transl., (2)33 (1963), 277–367.

  18. W. Krieger. On unique ergodicity. In: Proc. sixth Berkeley symposium, Math. Statist. Probab. Univ. of California Press (1970), pp. 327–346.

  19. A. Leon-Garcia, L.D. Davisson and D.L. Neuhoff. New results on coding of stationary nonergodic sources. IEEE Trans. Inf. Theory, 25 (1979), 137–144.

  20. H. Li and B. Liang. Mean dimension, mean rank and von Neumann–Lück rank. J. Reine Angew. Math., 739 (2018), 207–240.

  21. E. Lindenstrauss. Mean dimension, small entropy factors and an embedding theorem. Inst. Hautes Études Sci. Publ. Math. 89 (1999), 227–262.

  22. E. Lindenstrauss and M. Tsukamoto. Mean dimension and an embedding problem: an example. Israel J. Math., 199 (2014), 573–584.

  23. E. Lindenstrauss and M. Tsukamoto. From rate distortion theory to metric mean dimension: variational principle. IEEE Trans. Inf. Theory, (5)64 (2018), 3590–3609.

  24. E. Lindenstrauss and B. Weiss. Mean topological dimension. Israel J. Math., 115 (2000), 1–24.

  25. P. Mattila. Geometry of Sets and Measures in Euclidean Spaces, Fractals and Rectifiability. Cambridge Studies in Advanced Mathematics, 44. Cambridge University Press, Cambridge (1995).

  26. T. Meyerovitch and M. Tsukamoto. Expansive multiparameter actions and mean dimension. Trans. Amer. Math. Soc., 371 (2019), 7275–7299.

  27. M. Misiurewicz. A short proof of the variational principle for \(\mathbb{Z}^N_+\) actions on a compact space. In: International Conference on Dynamical Systems in Mathematical Physics (Rennes, 1975), Astérisque, vol. 40, pp. 145–157, Soc. Math. France, Paris (1976).

  28. L. Pontrjagin and L. Schnirelmann. Sur une propriété métrique de la dimension. Ann. Math., 33 (1932), 152–162.

  29. A. Rényi. On the dimension and entropy of probability distributions. Acta Math. Sci. Hung., 10 (1959), 193–215.

  30. F.E. Rezagah, S. Jalali, E. Erkip and H.V. Poor. Rate-distortion dimension of stochastic processes. arXiv:1607.06792.

  31. K. Schmidt. Dynamical Systems of Algebraic Origin, Progress in Mathematics, 128, Birkhäuser Verlag, Basel (1995).

  32. C.E. Shannon. A mathematical theory of communication. Bell Syst. Tech. J., 27 (1948), 379–423, 623–656.

  33. C.E. Shannon. Coding theorems for a discrete source with a fidelity criterion. IRE Nat. Conv. Rec. Pt. 4, pp. 142–163 (1959).

  34. M. Tsukamoto. Deformation of Brody curves and mean dimension. Ergod. Theory Dyn. Syst., 29 (2009), 1641–1657.

  35. M. Tsukamoto. Mean dimension of the dynamical system of Brody curves. Invent. Math., 211 (2018), 935–968.

  36. M. Tsukamoto. Large dynamics of Yang–Mills theory: mean dimension formula. J. Anal. Math., 134 (2018), 455–499.

  37. A. Velozo and R. Velozo. Rate distortion theory, metric mean dimension and measure theoretic entropy. arXiv:1707.05762.

  38. C. Villani. Optimal Transport: Old and New. Springer-Verlag, Berlin (2009).

  39. Y. Wu and S. Verdú. Rényi information dimension: fundamental limits of almost lossless analogue compression. IEEE Trans. Inf. Theory, (8)56, (2010) 3721–3747.


Acknowledgements

This project was initiated at the Banff International Research Station meeting “Mean Dimension and Sofic Entropy Meet Dynamical Systems, Geometric Analysis and Information Theory” in 2017. We thank BIRS for hosting this workshop, and for providing ideal conditions for collaborations. We also thank the referee for her/his helpful comments.

Author information

Corresponding author

Correspondence to Masaki Tsukamoto.

Additional information

E.L. was partially supported by ISF Grant 891/15. M.T. was partially supported by JSPS KAKENHI 18K03275.

Cite this article

Lindenstrauss, E., Tsukamoto, M. Double variational principle for mean dimension. Geom. Funct. Anal. 29, 1048–1109 (2019). https://doi.org/10.1007/s00039-019-00501-8
