
Abstract

A necessary and sufficient condition (“nonresonance”) is established for every solution of an autonomous linear difference equation, or more generally for every sequence \((x^\top A^n y)\) with \(x,y\in \mathbb {R}^d\) and \(A\in \mathbb {R}^{d\times d}\), to be either trivial or else conform to a strong form of Benford’s Law (logarithmic distribution of significands). This condition contains all pertinent results in the literature as special cases. Its number-theoretical implications are discussed in the context of specific examples, and so are its possible extensions and modifications.




References

  1. Anderson, T.C., Rolen, L., Stoehr, R.: Benford's law for coefficients of modular forms and partition functions. Proc. Am. Math. Soc. 139, 1533–1541 (2011)

  2. Benford, F.: The law of anomalous numbers. Proc. Am. Philos. Soc. 78, 551–572 (1938)

  3. Benford Online Bibliography: http://www.benfordonline.net

  4. Berger, A.: Multi-dimensional dynamical systems and Benford's law. Discrete Contin. Dyn. Syst. 13, 219–237 (2005)

  5. Berger, A., Eshun, G.: Benford solutions of linear difference equations. In: AlSharawi, Z., et al. (eds.) Theory and Applications of Difference Equations and Discrete Dynamical Systems. Springer Proceedings in Mathematics & Statistics, vol. 102. Springer, Heidelberg (2014)

  6. Berger, A., Hill, T.P.: A basic theory of Benford's law. Probab. Surv. 8, 1–126 (2011)

  7. Berger, A., Hill, T.P.: An Introduction to Benford's Law. Princeton University Press, Princeton (2015)

  8. Berger, A., Hill, T.P., Kaynar, B., Ridder, A.: Finite-state Markov chains obey Benford's law. SIAM J. Matrix Anal. Appl. 32, 665–684 (2011)

  9. Berger, A., Siegmund, S.: On the distribution of mantissae in nonautonomous difference equations. J. Differ. Equ. Appl. 13, 829–845 (2007)

  10. Brown, J.L., Duncan, R.L.: Modulo one uniform distribution of the sequence of logarithms of certain recursive sequences. Fibonacci Quart. 8, 482–486 (1970)

  11. Bumby, R., Ellentuck, E.: Finitely additive measures and the first digit problem. Fund. Math. 65, 33–42 (1969)

  12. Chow, Y.S., Teicher, H.: Probability Theory. Independence, Interchangeability, Martingales, 3rd edn. Springer, New York (1997)

  13. Cohen, D.I.A., Katz, T.M.: Prime numbers and the first digit phenomenon. J. Number Theory 18, 261–268 (1984)

  14. Dajani, K., Kraaikamp, C.: Ergodic Theory of Numbers. Carus Mathematical Monographs, vol. 29. Mathematical Association of America, Washington, DC (2002)

  15. Diaconis, P.: The distribution of leading digits and uniform distribution mod 1. Ann. Probab. 5, 72–81 (1977)

  16. Diekmann, A.: Not the first digit! Using Benford's law to detect fraudulent scientific data. J. Appl. Stat. 34, 321–329 (2007)

  17. Docampo, S., del Mar Trigo, M., Aira, M.J., Cabezudo, B., Flores-Moya, A.: Benford's law applied to aerobiological data and its potential as a quality control tool. Aerobiologia 25, 275–283 (2009)

  18. Drmota, M., Tichy, R.: Sequences, Discrepancies, and Applications. Springer Lecture Notes in Mathematics, vol. 1651. Springer, Heidelberg (1997)

  19. Duncan, R.L.: An application of uniform distributions to the Fibonacci numbers. Fibonacci Quart. 5, 137–140 (1967)

  20. Geyer, C.L., Williamson, P.P.: Detecting fraud in data sets using Benford's law. Commun. Stat. 33, 229–246 (2004)

  21. Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1985)

  22. Kanemitsu, S., Nagasaka, K., Rauzy, G., Shiue, J.-S.: On Benford's Law: The First Digit Problem. Springer Lecture Notes in Mathematics, vol. 1299, pp. 158–169. Springer, Heidelberg (1988)

  23. Kontorovich, A.V., Miller, S.J.: Benford's law, values of \(L\)-functions and the \(3x+1\) problem. Acta Arith. 120, 269–297 (2005)

  24. Krantz, S.G., Parks, H.R.: A Primer of Real Analytic Functions, 2nd edn. Birkhäuser, Basel (2002)

  25. Kuipers, L.: Remark on a paper by R.L. Duncan concerning the uniform distribution mod 1 of the sequence of the logarithms of the Fibonacci numbers. Fibonacci Quart. 7, 465–466, 473 (1969)

  26. Kuipers, L., Niederreiter, H.: Uniform Distribution of Sequences. Wiley, New York (1974)

  27. Lagarias, J.C., Soundararajan, K.: Benford's law for the \(3x+1\) function. J. Lond. Math. Soc. 74, 289–303 (2006)

  28. Magnus, W., Oberhettinger, F., Soni, R.P.: Formulas and Theorems for the Special Functions of Mathematical Physics. Springer, New York (1966)

  29. Massé, B., Schneider, D.: A survey on weighted densities and their connection with the first digit phenomenon. Rocky Mt. J. Math. 41, 1395–1415 (2011)

  30. Miller, S.J., Nigrini, M.J.: Order statistics and Benford's law. Int. J. Math. Math. Sci., Art. ID 382948 (2008)

  31. Myerson, G., van der Poorten, A.J.: Some problems concerning recurrence sequences. Am. Math. Monthly 102, 698–705 (1995)

  32. Nagasaka, K., Shiue, J.-S.: Benford's law for linear recurrence sequences. Tsukuba J. Math. 11, 341–351 (1987)

  33. Newcomb, S.: Note on the frequency of use of the different digits in natural numbers. Am. J. Math. 4, 39–40 (1881)

  34. NIST Digital Library of Mathematical Functions: http://dlmf.nist.gov

  35. Sambridge, M., Tkalčić, H., Jackson, A.: Benford's law in the natural sciences. Geophys. Res. Lett. 37, L22301 (2010)

  36. Schatte, P.: On the uniform distribution of certain sequences and Benford's law. Math. Nachr. 136, 271–273 (1988)

  37. Schürger, K.: Extensions of Black-Scholes processes and Benford's law. Stoch. Process. Appl. 118, 1219–1243 (2008)

  38. Waldschmidt, M.: Diophantine Approximation on Linear Algebraic Groups. Transcendence Properties of the Exponential Function in Several Variables. Springer, Berlin (2000)

  39. Wlodarski, J.: Fibonacci and Lucas numbers tend to obey Benford's law. Fibonacci Quart. 9, 87–88 (1971)


Acknowledgments

The authors have been supported by an NSERC Discovery Grant. They would like to thank T.P. Hill, B. Schmuland, M. Waldschmidt, A. Weiss and R. Zweimüller for helpful discussions and comments.


Appendix: Some Auxiliary Results

The purpose of this appendix is to provide proofs for several analytical facts that have been used in establishing the main results of this article. Throughout, let \(d\) be a fixed positive integer.

Lemma 5.1

Given any \(z_1, \ldots , z_d \in \mathbb {S}= \{z\in \mathbb {C}: |z| = 1\}\), the following are equivalent:

  1. (i)

    If \(c_1, \ldots , c_d \in \mathbb {C}\) and \(\lim _{n\rightarrow \infty } (c_1 z_1^n + \cdots + c_d z_d^n)\) exists then \(c_1 = \cdots = c_d = 0\);

  2. (ii)

    \(z_j \not \in \{1\}\cup \{z_k : k\ne j\}\) for every \(1\le j \le d\).

Proof

Clearly (i) \(\Rightarrow \) (ii) because if \(z_j = 1\) for some \(j\) simply let \(c_j = 1\) and \(c_{\ell }=0\) for all \(\ell \ne j\), whereas if \(z_j = z_k\) for some \(j\ne k\) take \(c_j = 1\), \(c_k = -1\), and \(c_{\ell } = 0\) for all \(\ell \in \{1, \ldots , d\}\setminus \{j,k\}\). To show that (ii) \(\Rightarrow \) (i) as well, proceed by induction. Trivially, if \(d=1\) then \((c_1 z_1^n)\) with \(z_1\in \mathbb {S}\) converges only if \(c_1=0\) or \(z_1 = 1\). Assume now that (ii) \(\Rightarrow \) (i) has been established already for some \(d\in \mathbb {N}\), let \(z_1, \ldots , z_{d+1}\in \mathbb {S}\), and assume that \(z_j \not \in \{1\}\cup \{z_k : k \ne j\}\) for every \(1\le j \le d+1\). If \(\lim _{n\rightarrow \infty } (c_1 z_1^n + \cdots + c_{d+1} z_{d+1}^n)\) exists then, as \(z_{d+1}\ne 1\),

$$\begin{aligned}&\left\{ c_1 \left( \frac{z_1}{z_{d+1}}\right) ^n \frac{z_1 - 1}{z_{d+1} -1 } + \cdots + c_d \left( \frac{z_d}{z_{d+1}}\right) ^n \frac{z_d - 1}{z_{d+1} -1 } + c_{d+1} \right\} z_{d+1}^n (z_{d+1} - 1) \\&\quad \qquad = c_1 z_1^n (z_1 - 1) + \cdots + c_{d+1} z_{d+1}^n (z_{d+1} - 1) \\&\quad \qquad = c_1 z_1^{n+1} + \cdots + c_{d+1} z_{d+1}^{n+1} -\bigl ( c_1 z_1^n + \cdots + c_{d+1} z_{d+1}^n \bigr ) \mathop {\longrightarrow }\limits ^{n\rightarrow \infty } 0, \end{aligned}$$

which in turn yields

$$\begin{aligned} \lim \nolimits _{n\rightarrow \infty } \left\{ c_1 \frac{z_1 - 1}{z_{d+1} -1 } \left( \frac{z_1}{z_{d+1}}\right) ^n + \cdots + c_d \frac{z_d - 1}{z_{d+1} -1 } \left( \frac{z_d}{z_{d+1}}\right) ^n \right\} = -c_{d+1}. \end{aligned}$$

Note that \(\displaystyle \frac{z_j}{z_{d+1}} \not \in \{1\}\cup \left\{ \frac{z_k}{z_{d+1}} : k \ne j\right\} \) for every \(1\le j \le d\). By the induction assumption, \(\displaystyle c_j \frac{z_j-1}{z_{d+1} - 1} = 0\) for all \(1\le j \le d\). Hence \(c_1 = \cdots = c_{d}=0\), and clearly \(c_{d+1} = 0\) as well. \(\square \)
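For a concrete illustration of Lemma 5.1 (added here; it is not needed for the argument), take \(d=2\), \(z_1 = \imath \) and \(z_2 = -\imath \), so that condition (ii) holds. If \(\lim _{n\rightarrow \infty } (c_1 \imath ^n + c_2 (-\imath )^n) = L\) exists then evaluating along \(n \equiv 0,1,2,3 \ (\mathrm{mod}\ 4)\) gives

$$\begin{aligned} c_1 + c_2 = \imath (c_1 - c_2) = -(c_1 + c_2) = -\imath (c_1 - c_2) = L , \end{aligned}$$

whence \(c_1 + c_2 = c_1 - c_2 = 0\), and thus \(c_1 = c_2 = 0\), in accordance with (i).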

Two simple consequences of Lemma 5.1 have been used repeatedly.

Lemma 5.2

Let \(0 = t_0 < t_1 < \cdots < t_{d} < t_{d+1} = \pi \) and \(c_0, c_1, \ldots , c_{d}, c_{d+1}\in \mathbb {C}\). If

$$\begin{aligned} \lim \nolimits _{n\rightarrow \infty } \mathfrak {R}(c_0 e^{\imath n t_0} + c_1 e^{\imath n t_1} + \cdots + c_d e^{\imath n t_d} + c_{d+1} e^{\imath n t_{d+1}}) = 0\, , \end{aligned}$$

then \(\mathfrak {R}c_0 = \mathfrak {R}c_{d+1} = 0\) and \(c_1 = \cdots = c_d = 0\).

Proof

For every \(j\in \{1, \ldots , 2d+1\}\) let

$$\begin{aligned} z_j = \left\{ \begin{array}{lll} e^{\imath t_j} &{} &{} \text{ if } 1 \le j \le d+1, \\ e^{-\imath t_{2d+2 - j}} &{} &{} \text{ if } d+2 \le j \le 2d+1, \end{array} \right. \end{aligned}$$

and note that \(z_j \not \in \{1\} \cup \{z_k : k\ne j\}\). Since

$$\begin{aligned} 2 \lim \nolimits _{n\rightarrow \infty } \mathfrak {R}&\left( c_0 e^{\imath n t_0} + c_1 e^{\imath n t_1} + \cdots + c_d e^{\imath n t_d} + c_{d+1} e^{\imath n t_{d+1}} \right) - 2 \mathfrak {R}c_0 \\&= 2 \lim \nolimits _{n\rightarrow \infty } \mathfrak {R}\left( c_1 e^{\imath n t_1} + \cdots + c_d e^{\imath n t_d} + c_{d+1} e^{\imath n t_{d+1}} \right) \\&= \lim \nolimits _{n\rightarrow \infty } \left( \sum \nolimits _{j=1}^d c_j z_j^n + 2 (\mathfrak {R}c_{d+1} ) z_{d+1}^n + \sum \nolimits _{j=d+2}^{2d+1} \overline{c_{2d+2 -j}} z_j^n \right) \end{aligned}$$

exists by assumption, Lemma 5.1 shows that \(c_1 = \cdots = c_d = 0\) and \(\mathfrak {R}c_{d+1}=0\), and so clearly \(\mathfrak {R}c_0 = 0\) as well. \(\square \)
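To see Lemma 5.2 at work in the simplest non-trivial case (an illustration added here), let \(d=1\) and \(t_1 = \pi /2\), so that \(e^{\imath n t_1} = \imath ^n\) and \(e^{\imath n t_2} = (-1)^n\). If

$$\begin{aligned} \lim \nolimits _{n\rightarrow \infty } \mathfrak {R}\bigl ( c_0 + c_1 \imath ^n + c_2 (-1)^n \bigr ) = 0 , \end{aligned}$$

then comparing the subsequences \(n \equiv 0,2 \ (\mathrm{mod}\ 4)\) and \(n \equiv 1,3 \ (\mathrm{mod}\ 4)\) yields \(\mathfrak {R}(c_0 + c_2) = \mathfrak {R}c_1 = 0\) as well as \(\mathfrak {R}(c_0 - c_2) = \mathfrak {R}(\imath c_1) = 0\), i.e. \(\mathfrak {R}c_0 = \mathfrak {R}c_2 = 0\) and \(c_1 = 0\), exactly as asserted.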

Lemma 5.3

Given any \(z_1, \ldots , z_d \in \mathbb {S}\), the following are equivalent:

  1. (i)

    If \(c_1, \ldots , c_d \in \mathbb {C}\) and \(\lim _{n\rightarrow \infty } \mathfrak {R}(c_1 z_1^n + \cdots + c_d z_d^n)\) exists then \(c_1 = \cdots = c_d = 0\);

  2. (ii)

    \(z_j \not \in \{-1, 1\}\cup \{z_k, \overline{z_k} : k\ne j\}\) for every \(1\le j \le d\).

Proof

Clearly (i) \(\Rightarrow \) (ii) because if \(z_j\in \{-1,1\}\) for some \(1\le j \le d\) simply let \(c_j = \imath \) and \(c_{\ell } = 0\) for all \(\ell \ne j\), whereas if \(z_j \in \{z_k , \overline{z_k}\}\) for some \(j\ne k\), take \(c_j = 1\), \(c_k = -1\), and \(c_{\ell } = 0\) for all \(\ell \in \{1, \ldots , d\}\setminus \{j,k\}\). Conversely, if

$$\begin{aligned} \lim \nolimits _{n\rightarrow \infty } \mathfrak {R}(c_1 z_1^n + \cdots + c_d z_d^n) = {\textstyle \frac{1}{2}} \lim \nolimits _{n\rightarrow \infty } (c_1 z_1^n + \overline{c_1} \, \overline{z_1}^n + \cdots + c_d z_d^n + \overline{c_d} \, \overline{z_d}^n) \end{aligned}$$

exists then, by Lemma 5.1, \(c_1 = \cdots = c_d = 0\) unless either \(z_j = 1\) or \(z_j = \overline{z_j}\) (and hence \(z_j \in \{-1,1\}\)) for some \(j\), or else \(z_j \in \{z_k , \overline{z_k}\}\) for some \(j\ne k\). Overall, \(c_1 = \cdots = c_d=0\) unless \(z_j \in \{-1,1,z_k , \overline{z_k}\}\) for some \(j\ne k\). Thus (ii) \(\Rightarrow \) (i), as claimed. \(\square \)

Let \(\vartheta _1, \ldots , \vartheta _d\) and \(\beta \ne 0\) be real numbers, and \(p_1, \ldots , p_d\) integers. With these ingredients, consider the sequence \((x_n)\) of real numbers given by

$$\begin{aligned} x_n = p_1 n \vartheta _1 + \cdots + p_d n \vartheta _d + \beta \ln \big |u_1 \cos (2\pi n \vartheta _1) + \cdots + u_d \cos (2\pi n \vartheta _d) \big |, \quad \forall n \in \mathbb {N}, \end{aligned}$$
(5.1)

where \(u\in \mathbb {R}^d\). Recall that Lemma 2.7, which has been instrumental in the proof of Theorem 3.4, asserts that it is possible to choose \(u\in \mathbb {R}^d\) in such a way that \((x_n)\) is not u.d. mod \(1\) whenever the \(d+1\) numbers \(1,\vartheta _1, \ldots , \vartheta _d\) are \(\mathbb {Q}\)-independent. The remainder of this appendix is devoted to providing a rigorous proof of Lemma 2.7.

To prepare for the argument, recall that \(\mathbb {T}^d\) denotes the \(d\)-dimensional torus \(\mathbb {R}^d/\mathbb {Z}^d\), together with the \(\sigma \)-algebra \(\mathcal {B}(\mathbb {T}^d)\) of its Borel sets. Let \(\mathcal {P}(\mathbb {T}^d)\) be the set of all probability measures on \(\bigl ( \mathbb {T}^d , \mathcal {B}(\mathbb {T}^d)\bigr )\), and given any \(\mu \in \mathcal {P}(\mathbb {T}^d)\), associate with it the family \(\bigl ( \widehat{\mu } (k)\bigr )_{k\in \mathbb {Z}^d}\) of its Fourier coefficients, defined as

$$\begin{aligned} \widehat{\mu } (k) = \int _{\mathbb {T}^d} e^{2\pi \imath k^{\top } t} \, \mathrm{d} \mu (t) = \int _{\mathbb {T}^d} e^{2\pi \imath (k_1 t_1 + \cdots + k_d t_d)} \, \mathrm{d} \mu (t_1, \ldots , t_d), \quad \forall k\in \mathbb {Z}^d. \end{aligned}$$

Recall that \(\mu \mapsto \bigl ( \widehat{\mu } (k)\bigr )_{k\in \mathbb {Z}^d}\) is one-to-one, i.e., the Fourier coefficients determine \(\mu \) uniquely. Arguably the most prominent element in \(\mathcal {P}(\mathbb {T}^d)\) is the Haar measure \(\lambda _{\mathbb {T}^d}\) for which, with \(\mathrm{d}\lambda _{\mathbb {T}^d}(t)\) abbreviated \(\mathrm{d}t\) as usual,

$$\begin{aligned} \widehat{\lambda _{\mathbb {T}^d}}(k) = \int _{\mathbb {T}^d} e^{2\pi \imath (k_1 t_1 + \cdots + k_d t_d)} \, \mathrm{d}t = \prod \nolimits _{j=1}^d \int _{\mathbb {T}} e^{2\pi \imath k_j t} \mathrm{d}t = \left\{ \begin{array}{cll} 1 &{} \quad &{}\quad \text{ if } k=0 \in \mathbb {Z}^d, \\ 0 &{} \quad &{}\quad \text{ if } k \ne 0. \end{array} \right. \end{aligned}$$

Given \(\mu \in \mathcal {P}(\mathbb {T}^d)\), therefore, to show that \(\mu \ne \lambda _{\mathbb {T}^d}\) it is (necessary and) sufficient to find at least one \(k \in \mathbb {Z}^d \setminus \{0\}\) for which \(\widehat{\mu } (k)\ne 0\). Recall also that, given any (Borel) measurable map \(T:\mathbb {T}^d \rightarrow \mathbb {T}\), each \(\mu \in \mathcal {P}(\mathbb {T}^d)\) induces a unique \(\mu \circ T^{-1}\in \mathcal {P}(\mathbb {T})\), via

$$\begin{aligned} \mu \circ T^{-1} (B) = \mu \bigl ( T^{-1}(B)\bigr ), \quad \forall B \in \mathcal {B}(\mathbb {T}). \end{aligned}$$

Note that the Fourier coefficients of \(\mu \circ T^{-1}\) are simply

$$\begin{aligned} \widehat{ \mu \circ T^{-1}}(k) = \int _{\mathbb {T}} e^{2\pi \imath k t} \mathrm{d}(\mu \circ T^{-1}) (t) = \int _{\mathbb {T}^d} e^{2\pi \imath k T(t)} \mathrm{d}\mu (t), \quad k \in \mathbb {Z}. \end{aligned}$$

If in particular \(d=1\) and \(\mu \circ T^{-1} = \mu \) then \(\mu \) is said to be \(T\)-invariant (and \(T\) is \(\mu \)-preserving).
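A simple example, included here for orientation (the same computation reappears in the proof of Lemma 5.6 below): for \(d=1\) and any fixed \(q \in \mathbb {Z}\setminus \{0\}\), the map \(T(t) = \langle q t \rangle \) preserves \(\lambda _{\mathbb {T}}\), since

$$\begin{aligned} \widehat{\lambda _{\mathbb {T}} \circ T^{-1}}(k) = \int _{\mathbb {T}} e^{2\pi \imath k T(t)} \, \mathrm{d}t = \int _{\mathbb {T}} e^{2\pi \imath k q t} \, \mathrm{d}t = 0, \quad \forall k \in \mathbb {Z}\setminus \{0\}, \end{aligned}$$

so all Fourier coefficients of \(\lambda _{\mathbb {T}} \circ T^{-1}\) and \(\lambda _{\mathbb {T}}\) agree.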

With a view towards Lemma 2.7, for any \(p_1, \ldots , p_d \in \mathbb {Z}\) and \(\beta \in \mathbb {R}\) consider the map

$$\begin{aligned} \Lambda _u : \left\{ \begin{array}{ccl} \mathbb {T}^d &{} \rightarrow &{} \mathbb {T}, \\ t &{} \mapsto &{} \bigl \langle p_1 t_1 + \cdots + p_d t_d + \beta \ln \big | u_1 \cos (2\pi t_1) + \cdots + u_d \cos (2\pi t_d)\big | \bigr \rangle \, ; \end{array} \right. \end{aligned}$$
(5.2)

here \(u \in \mathbb {R}^d\) may be thought of as a parameter. (Recall the convention, adhered to throughout, that \(\ln 0 = 0\).) Note that each map \(\Lambda _u\) is (Borel) measurable, in fact differentiable outside a set of \(\lambda _{\mathbb {T}^d}\)-measure zero. For every \(\mu \in \mathcal {P}(\mathbb {T}^d)\), therefore, the measure \(\mu \circ \Lambda _u^{-1}\) is a well-defined element of \(\mathcal {P}(\mathbb {T})\). Lemma 2.7 is a consequence of the following fact which may also be of independent interest.

Theorem 5.4

For every \(p_1, \ldots , p_d \in \mathbb {Z}\) and \(\beta \in \mathbb {R}\setminus \{0\}\), there exists \(u\in \mathbb {R}^d\) such that \(\lambda _{\mathbb {T}^d} \circ \Lambda _u^{-1} \ne \lambda _{\mathbb {T}}\), with \(\Lambda _u\) given by (5.2).

To see that Theorem 5.4 does indeed imply Lemma 2.7, let \(p_1, \ldots , p_d \in \mathbb {Z}\) and \(\beta \in \mathbb {R}\setminus \{0\}\) be given, and pick \(u\in \mathbb {R}^d\) such that \(\lambda _{\mathbb {T}^d} \circ \Lambda _u^{-1} \ne \lambda _{\mathbb {T}}\). Consequently, there exists a continuous function \(f:\mathbb {T}\rightarrow \mathbb {C}\) for which \(\int _{\mathbb {T}} f \, \mathrm{d}( \lambda _{\mathbb {T}^d}\circ \Lambda _u^{-1} ) \ne \int _{\mathbb {T}} f \, \mathrm{d}\lambda _{\mathbb {T}}\). Note that \(f\circ \Lambda _u: \mathbb {T}^d \rightarrow \mathbb {C}\) is continuous \(\lambda _{\mathbb {T}^d}\)-almost everywhere as well as bounded, hence Riemann integrable. Also recall that the sequence \(\bigl ( (n\vartheta _1, \ldots , n\vartheta _d) \bigr )\) is u.d. mod \(1\) in \(\mathbb {R}^d\) whenever \(1, \vartheta _1, \ldots , \vartheta _d\) are \(\mathbb {Q}\)-independent [26, Exp.I.6.1]. In the latter case, therefore,

$$\begin{aligned} \lim \nolimits _{N\rightarrow \infty } \frac{1}{N} \sum \nolimits _{n=1}^N f(\langle x_n \rangle )&= \lim \nolimits _{N\rightarrow \infty } \frac{1}{N} \sum \nolimits _{n=1}^N f \circ \Lambda _u \bigl ( \langle (n\vartheta _1, \ldots , n\vartheta _d) \rangle \bigr ) \\&= \int _{\mathbb {T}^d} f \circ \Lambda _u \, \mathrm{d}\lambda _{\mathbb {T}^d} = \int _{\mathbb {T}} f \, \mathrm{d}(\lambda _{\mathbb {T}^d} \circ \Lambda _u^{-1} ) \ne \int _{\mathbb {T}} f \, \mathrm{d}\lambda _{\mathbb {T}}, \end{aligned}$$

showing that \((x_n)\) is not u.d. mod \(1\).
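For the reader's convenience, the standard criterion underlying this argument (see e.g. [26, Ch. 1]) may be recalled: a sequence \((y_n)\) in \(\mathbb {R}\) (or in \(\mathbb {R}^d\)) is u.d. mod \(1\) if and only if

$$\begin{aligned} \lim \nolimits _{N\rightarrow \infty } \frac{1}{N} \sum \nolimits _{n=1}^N g (\langle y_n \rangle ) = \int g \, \mathrm{d}\lambda \end{aligned}$$

for every continuous (equivalently, every Riemann integrable) \(g\) on \(\mathbb {T}\) (resp. \(\mathbb {T}^d\)), with \(\lambda \) the corresponding Haar measure. The Riemann integrable version, applied to \(f\circ \Lambda _u\) on \(\mathbb {T}^d\), justifies the second equality in the display above, whereas the continuous version, applied to \(f\) itself, turns \(\ne \int _{\mathbb {T}} f \, \mathrm{d}\lambda _{\mathbb {T}}\) into the conclusion that \((x_n)\) is not u.d. mod \(1\).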

Thus it remains to prove Theorem 5.4. Though the assertion of the latter is quite plausible intuitively, the authors do not know of any simple but rigorous justification. The proof presented here is computational and proceeds in essentially two steps: first, the case of \(d=1\) is analyzed in detail. Specifically, it is shown that \(\lambda _{\mathbb {T}} \circ \Lambda _u^{-1} \ne \lambda _{\mathbb {T}}\) unless \(p_1 \ne 0\) and \(\beta u_1 = 0\). By itself, this could also be seen directly by noticing that the map \(\Lambda _u :\mathbb {T}\rightarrow \mathbb {T}\) has a non-degenerate critical point whenever \(\beta u_1 \ne 0\), and hence cannot possibly preserve \(\lambda _{\mathbb {T}}\), see e.g. [5, Lem. 2.6] or [6, Ex. 5.27(iii)]. The more elaborate calculation given here, however, is useful also in the second step of the proof, i.e. the analysis for \(d\ge 2\). As it turns out, the case of \(d\ge 2\) can, in essence, be reduced to calculations already done for \(d=1\).

To concisely formulate the subsequent results, recall that the Euler Gamma function, denoted \(\Gamma = \Gamma (z)\) as usual, is a meromorphic function with poles precisely at \(z\in - \mathbb {N}_0 = \{ 0,-1,-2, \ldots \}\), and \(\Gamma (z+1) = z \Gamma (z)\ne 0\) for every \(z\in \mathbb {C}\setminus (-\mathbb {N}_0)\). Also, for convenience every “empty sum” is understood to equal zero, e.g. \(\sum _{2\le j \le 1} j^2 =0\), whereas every “empty product” is understood to equal \(1\), e.g. \(\prod _{2\le j \le 1} j^2 = 1\). Finally, the standard (ascending) Pochhammer symbol \((z)_n\) will be used where, given any \(z\in \mathbb {C}\),

$$\begin{aligned} (z)_n:= z (z+1) \cdots (z+n-1) = \prod \nolimits _{\ell =0}^{n-1} (z+\ell ), \quad \forall n \in \mathbb {N}, \end{aligned}$$

and \((z)_0:= 1\), in accordance with the convention on empty products. Note that \((z)_n = \Gamma (z+n)/\Gamma (z)\) whenever \(z \in \mathbb {C}\setminus (-\mathbb {N}_0)\).
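For instance (a routine check of this notation, added here), \((1)_n = 1 \cdot 2 \cdots n = n!\) for every \(n \in \mathbb {N}\), an identity used tacitly later when the power series in the proof of Lemma 5.8 is identified as a \({}_{3}{F}^{ }_{2}\); likewise, for every \(z \in \mathbb {C}\setminus (-\mathbb {N}_0)\),

$$\begin{aligned} (z)_2 = z(z+1) = \frac{\Gamma (z+2)}{\Gamma (z)} , \end{aligned}$$

in line with the relation just recalled.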

For every \(p\in \mathbb {Z}\) and \(\beta \in \mathbb {R}\), consider now the integral

$$\begin{aligned} I_{p,\beta }:= \int _{\mathbb {T}} e^{4\pi \imath p t + 2\imath \beta \ln |\cos (2\pi t)|} \, \mathrm{d}t. \end{aligned}$$
(5.3)

The specific form of \(I_{p, \beta }\) is suggested by the Fourier coefficients of \(\lambda _{\mathbb {T}} \circ \Lambda _u^{-1}\) in the case of \(d=1\); see the proof of Lemma 5.6 below. Not surprisingly, the value of \(I_{p,\beta }\) can be expressed explicitly by means of special functions.

Lemma 5.5

For every \(p\in \mathbb {Z}\) and \(\beta \in \mathbb {R}\setminus \{0\}\),

$$\begin{aligned} I_{p,\beta } = (-1)^p e^{-\imath \beta \ln 4} \frac{2\imath \beta \Gamma (2\imath \beta )}{\bigl ( \imath \beta \Gamma (\imath \beta ) \bigr )^2} \cdot \frac{(-\imath \beta )_{|p|}}{(1+\imath \beta )_{|p|}}\, , \end{aligned}$$
(5.4)

and hence in particular

$$\begin{aligned} |I_{p,\beta }|^2 = \frac{\beta \tanh (\pi \beta )}{ \pi ( p^2 + \beta ^2) } > 0. \end{aligned}$$
(5.5)

Proof

Substituting \(-t\) for \(t\) in (5.3) shows that \(I_{p,\beta } = I_{|p|, \beta }\), and a straightforward calculation, with \(T_{\ell }\) denoting the \(\ell \)-th Chebyshev polynomial (\(\ell \in \mathbb {N}_0\)), yields

$$\begin{aligned} I_{p,\beta }&= \int _{\mathbb {T}} e ^{4\pi \imath |p|t + 2 \imath \beta \ln |\cos (2\pi t)|} \, \mathrm{d}t = \int _0^1 e^{2\pi \imath |p| x + 2 \imath \beta \ln |\cos (\pi x)|} \, \mathrm{d}x \\&= \int _0^{\frac{1}{2}} 2 \cos (2\pi |p| x ) e^{2\imath \beta \ln |\cos (\pi x)|} \, \mathrm{d}x = 2 \int _0^{\frac{1}{2}} T_{2|p|} \bigl (\cos (\pi x)\bigr ) e^{2\imath \beta \ln |\cos (\pi x)|} \, \mathrm{d}x \\&= \frac{2}{\pi } \int _0^1 \frac{T_{2|p|}(x)}{\sqrt{1-x^2}} e^{2\imath \beta \ln x} \, \mathrm{d}x = \frac{2}{\pi } \int _0^{+\infty } T_{2|p|} \left( \frac{1}{\sqrt{1+x^2}}\right) \frac{e^{-\imath \beta \ln (1+x^2)}}{1+x^2} \, \mathrm{d} x. \end{aligned}$$

As the polynomial \(T_{2|p|}\) can, for every \(p\in \mathbb {Z}\) and \(y\ne 0\), be written as

$$\begin{aligned} T_{2|p|}(y) = y^{2|p|} \sum \nolimits _{\ell = 0}^{|p|} \left( \!\! \begin{array}{c} 2|p| \\ 2 \ell \end{array} \!\! \right) (1-y^{-2})^{\ell }, \end{aligned}$$

it follows that

$$\begin{aligned} I_{p,\beta }&= \frac{2}{\pi } \sum \nolimits _{\ell = 0}^{|p|} (-1)^{\ell } \left( \!\! \begin{array}{c} 2|p| \\ 2 \ell \end{array} \!\! \right) \int _0^{+\infty } \frac{x^{2\ell }}{(1+x^2)^{1+|p| + \imath \beta }} \, \mathrm{d}x \\&= \frac{1}{\pi } \sum \nolimits _{\ell = 0}^{|p|} (-1)^{\ell } \left( \!\! \begin{array}{c} 2|p| \\ 2 \ell \end{array} \!\! \right) \int _0^{+\infty } \frac{x^{\ell - \frac{1}{2}}}{ (1+x)^{1+|p| + \imath \beta }} \, \mathrm{d}x \\&= \frac{1}{\pi \Gamma (1 + |p| + \imath \beta )} \sum \nolimits _{\ell = 0}^{|p|} (-1)^{\ell } \left( \!\! \begin{array}{c} 2|p| \\ 2 \ell \end{array} \!\! \right) \Gamma (\textstyle {\frac{1}{2}} + \ell ) \Gamma (\textstyle {\frac{1}{2}} + |p| - \ell + \imath \beta ). \end{aligned}$$

Note that \(\Gamma \) is finite and non-zero for each argument appearing in this sum. Recall that

$$\begin{aligned} \Gamma ({\textstyle \frac{1}{2}} + \ell ) = \frac{(2\ell )! \sqrt{\pi }}{\ell ! \, 2^{2\ell } }, \quad \forall \ell \in \mathbb {N}_0 \, , \end{aligned}$$

and so

$$\begin{aligned} I_{p,\beta }&= \frac{(-1)^p (2|p|)!}{ \sqrt{\pi } 2^{2|p|} \Gamma (1+|p| + \imath \beta )} \sum \nolimits _{\ell =0}^{|p|} \left\{ (-1)^{\ell } \, \frac{2^{2\ell } \Gamma (\frac{1}{2} + \ell + \imath \beta )}{(2\ell )! (|p| - \ell )!} \right\} \\&= \frac{(-1)^p \Gamma (\frac{1}{2} + |p|) \Gamma (\frac{1}{2} + \imath \beta )}{\pi \Gamma (1+|p| + \imath \beta )} \sum \nolimits _{\ell =0}^{|p|} \left\{ (-1)^{\ell } \! \left( \!\! \begin{array}{c} |p| \\ \ell \end{array} \!\! \right) \prod \nolimits _{k=1}^{\ell } \frac{2k -1 + 2\imath \beta }{2k -1} \right\} \\&= \frac{(-1)^p \Gamma (\frac{1}{2} + \imath \beta )}{\sqrt{\pi } 2^{|p|} \Gamma (1+|p| + \imath \beta )}\\&\quad \cdot \sum \nolimits _{\ell =0}^{|p|} \left\{ (-1)^{\ell } \! \left( \!\! \begin{array}{c} |p| \\ \ell \end{array} \!\! \right) \prod \nolimits _{k=1}^{\ell } (2k - 1 + 2\imath \beta ) \prod \nolimits _{k= \ell + 1}^{|p|} (2k-1) \right\} \\&= \frac{(-1)^p \Gamma (\frac{1}{2} + \imath \beta )}{\sqrt{\pi } 2^{|p|} \Gamma (1+|p| + \imath \beta )} R_{|p|} (2\imath \beta ), \end{aligned}$$

where, for every \(m\in \mathbb {N}_0\), the polynomial \(R_m\) is given by

$$\begin{aligned} R_m(z) = \sum \nolimits _{\ell =0}^{m} \left\{ (-1)^{\ell } \! \left( \!\! \begin{array}{c} m \\ \ell \end{array} \!\! \right) \prod \nolimits _{k=1}^{\ell } (2k - 1 + z) \prod \nolimits _{k= \ell + 1}^{m} (2k-1) \right\} . \end{aligned}$$
(5.6)

Thus for example \(R_0 (z) \equiv 1\), \(R_1(z) = - z\), \(R_2(z) = -2z + z^2\). Note that the degree of \(R_m\) equals \(m\), and for every \(m\in \mathbb {N}\) and \(j\in \{0,1,\ldots , m-1\}\),

$$\begin{aligned} R_m(2j)&= \sum \nolimits _{\ell =0}^{m} \left\{ (-1)^{\ell } \! \left( \!\! \begin{array}{c} m \\ \ell \end{array} \!\! \right) \prod \nolimits _{k=1}^{\ell } (2k - 1 + 2j) \prod \nolimits _{k= \ell + 1}^{m} (2k-1) \right\} \\&= \sum \nolimits _{\ell =0}^{m} \left\{ (-1)^{\ell } \! \left( \!\! \begin{array}{c} m \\ \ell \end{array} \!\! \right) \prod \nolimits _{k=j+1}^{m} (2k - 1) \prod \nolimits _{k= \ell + 1}^{\ell + j} (2k-1) \right\} \\&= \left\{ \prod \nolimits _{k=j+1}^{m} (2k-1) \right\} \sum \nolimits _{\ell =0}^{m} \left\{ (-1)^{\ell } \! \left( \!\! \begin{array}{c} m \\ \ell \end{array} \!\! \right) \prod \nolimits _{k=1}^{j} (2\ell + 2k -1) \right\} \, = 0. \end{aligned}$$

Here the elementary fact has been used that \( \sum \nolimits _{\ell =0}^{m} (-1)^{\ell } \! \left( \!\! \begin{array}{c} m \\ \ell \end{array} \!\! \right) Q(\ell ) = 0\) holds for every polynomial \(Q\) of degree less than \(m\). As the polynomial \(R_m\) has degree \(m\), it cannot have any further roots besides \(0,2,4, \ldots , 2m-2\), and so

$$\begin{aligned} R_m(z) = c_m \prod \nolimits _{\ell = 0}^{m-1} (z - 2\ell ), \end{aligned}$$
(5.7)

with a constant \(c_m\) yet to be determined. The correct value of \(c_m\) is readily found by observing that (5.7) yields

$$\begin{aligned} R_m(-1) = c_m \prod \nolimits _{\ell = 0}^{m-1} ( -1 -2\ell ) = c_m (-1)^m \cdot 1 \cdot 3 \cdot \cdots \cdot (2m-1), \end{aligned}$$

whereas, by the very definition (5.6) of \(R_m\),

$$\begin{aligned} R_m(-1) = \sum \nolimits _{\ell =0}^{m} \! \left\{ (-1)^{\ell } \! \left( \!\! \begin{array}{c} m \\ \ell \end{array} \!\! \right) \! \prod \nolimits _{k=1}^{\ell } (2k -2) \prod \nolimits _{k= \ell + 1}^{m} (2k-1) \right\} = \prod \nolimits _{k=1}^m (2k-1). \end{aligned}$$

Thus \(c_m = (-1)^m\), and overall

$$\begin{aligned} R_m(z) = (-1)^m \prod \nolimits _{\ell = 0}^{m-1} (z- 2\ell ) = \prod \nolimits _{\ell = 0}^{m-1} (2\ell - z) = 2^m \bigl ( -\textstyle {\frac{1}{2}} z \bigr )_m. \end{aligned}$$

With this, one obtains

$$\begin{aligned} I_{p,\beta }&= \frac{(-1)^p \Gamma (\frac{1}{2} + \imath \beta )}{\sqrt{\pi } 2^{|p|} \Gamma (1+|p| + \imath \beta )} \prod \nolimits _{\ell = 0}^{|p|-1}(2\ell - 2\imath \beta ) \\&= \frac{2 (-1)^{p+1} e^{-\imath \beta \ln 4} }{|p| - \imath \beta }\cdot \frac{\Gamma (2\imath \beta )}{\Gamma (\imath \beta )^2} \prod \nolimits _{\ell =1}^{|p|} \frac{\ell - \imath \beta }{\ell + \imath \beta } \\&= (-1)^p e^{-\imath \beta \ln 4} \frac{2\imath \beta \Gamma (2\imath \beta )}{\bigl ( \imath \beta \Gamma (\imath \beta ) \bigr )^2} \cdot \frac{(-\imath \beta )_{|p|}}{(1+\imath \beta )_{|p|}}, \end{aligned}$$

where the so-called Legendre duplication formula for the \(\Gamma \)-function has been used in the form

$$\begin{aligned} \Gamma (\imath \beta ) \Gamma (\textstyle {\frac{1}{2}} + \imath \beta ) = 2^{1 - 2\imath \beta } \sqrt{\pi } \, \Gamma (2 \imath \beta ), \quad \forall \beta \in \mathbb {R}\setminus \{0\}\, . \end{aligned}$$

Thus (5.4) has been established, and together with the standard fact

$$\begin{aligned} |\Gamma (\imath \beta )|^2 = \frac{\pi }{\beta \sinh (\pi \beta )}, \quad \forall \beta \in \mathbb {R}\setminus \{0\}, \end{aligned}$$

this immediately yields

$$\begin{aligned} |I_{p,\beta }|^2 = \frac{4}{p^2 +\beta ^2} \cdot \frac{|\Gamma (2\imath \beta )|^2}{|\Gamma (\imath \beta )|^4} = \frac{4\beta ^2 \pi }{2\beta \sinh (2\pi \beta )} \cdot \frac{\sinh ^2 (\pi \beta )}{\pi ^2 (p^2 + \beta ^2)} = \frac{\beta \tanh (\pi \beta )}{\pi ( p^2 + \beta ^2)}, \end{aligned}$$

i.e., (5.5) holds as claimed. \(\square \)
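As a quick consistency check of (5.4) and (5.5), added here for illustration: for \(p = 0\) the two Pochhammer products in (5.4) are empty, hence equal to \(1\), and (5.5) reduces to

$$\begin{aligned} |I_{0,\beta }|^2 = \frac{\tanh (\pi \beta )}{\pi \beta } , \end{aligned}$$

which tends to \(1\) as \(\beta \rightarrow 0\), in agreement with \(I_{0,0} = \int _{\mathbb {T}} 1 \, \mathrm{d}t = 1\) computed directly from (5.3); for \(p \ne 0\), on the other hand, (5.5) tends to \(0\) as \(\beta \rightarrow 0\), again in agreement with \(I_{p,0} = \int _{\mathbb {T}} e^{4\pi \imath p t} \, \mathrm{d}t = 0\).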

An immediate consequence of Lemma 5.5 is that for \(d=1\) the map \(\Lambda _u\) typically does not preserve \(\lambda _{\mathbb {T}}\). Notice that the following result is much stronger than (and hence obviously proves) Theorem 5.4 for \(d=1\).

Lemma 5.6

Let \(p_1 \in \mathbb {Z}\), \(\beta \in \mathbb {R}\) and \(u_1 \in \mathbb {R}\). Then \(\lambda _{\mathbb {T}} \circ \Lambda _u^{-1} = \lambda _{\mathbb {T}}\), where \(\Lambda _u\) is given by (5.2) with \(d=1\), if and only if \(p_1 \ne 0\) and \(\beta u_1 = 0\).

Proof

Simply note that for \(\beta u_1 = 0\) and every \(k\in \mathbb {Z}\),

$$\begin{aligned} \widehat{\lambda _{\mathbb {T}} \circ \Lambda _u^{-1}} (k) = \left\{ \begin{array}{cll} 1 &{} &{} \text{ if } kp_1 = 0, \\ 0 &{} &{} \text{ if } kp_1 \ne 0, \end{array} \right. \end{aligned}$$

and hence \(\lambda _{\mathbb {T}} \circ \Lambda _u^{-1} = \lambda _{\mathbb {T}}\) precisely if \(p_1 \ne 0\). On the other hand, for \(\beta u_1 \ne 0\),

$$\begin{aligned} \widehat{\lambda _{\mathbb {T}} \circ \Lambda _u^{-1}} (2)&= \int _{\mathbb {T}} e^{4\pi \imath t} \, \mathrm{d}(\lambda _{\mathbb {T}} \circ \Lambda _u^{-1} ) (t) = \int _{\mathbb {T}} e^{4\pi \imath (p_1 t + \beta \ln |u_1 \cos (2\pi t)|)} \, \mathrm{d}t \\&= e^{4\pi \imath \beta \ln |u_1|} I_{p_1, 2\pi \beta } \ne 0, \end{aligned}$$

showing that \(\lambda _{\mathbb {T}}\circ \Lambda _u^{-1} \ne \lambda _{\mathbb {T}}\) in this case. \(\square \)
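To attach a concrete number to this (an illustration added here), take \(p_1 = 1\), \(\beta = 1\) and \(u_1 = 1\). Then \(e^{4\pi \imath \beta \ln |u_1|} = 1\), and by (5.5),

$$\begin{aligned} \bigl | \widehat{\lambda _{\mathbb {T}} \circ \Lambda _u^{-1}} (2) \bigr |^2 = |I_{1, 2\pi }|^2 = \frac{2\pi \tanh (2\pi ^2)}{\pi (1 + 4\pi ^2)} = \frac{2 \tanh (2\pi ^2)}{1 + 4\pi ^2} \approx 0.049 , \end{aligned}$$

so the second Fourier coefficient of \(\lambda _{\mathbb {T}} \circ \Lambda _u^{-1}\) differs markedly from \(\widehat{\lambda _{\mathbb {T}}}(2) = 0\).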

As indicated earlier, the case of \(d\ge 2\) of Theorem 5.4 is now going to be studied and, in a way, reduced to the case of \(d=1\). To this end, let again \(p\in \mathbb {Z}\) and \(\beta \in \mathbb {R}\) be given, and consider the function \(i_{p,\beta }:\mathbb {R}\rightarrow \mathbb {C}\) with

$$\begin{aligned} i_{p,\beta } (x) = \int _{\mathbb {T}} e^{4\pi \imath p t + 2 \imath \beta \ln |x + \cos (2\pi t) |} \, \mathrm{d}t, \quad \forall x \in \mathbb {R}. \end{aligned}$$
(5.8)

A few elementary properties of \(i_{p,\beta }\) are contained in

Lemma 5.7

For every \(p\in \mathbb {Z}\) and \(\beta \in \mathbb {R}\), the function \(i_{p,\beta }\) is continuous and even, with \(|i_{p,\beta }(x)|\le 1\) for all \(x\in \mathbb {R}\). Moreover, \(i_{p,\beta }(0) = I_{p,\beta }\) and \(i_{p,\beta }(1) = e^{\imath \beta \ln 4} I_{2p,2\beta }\); in particular, \(i_{p,\beta }(0)\ne i_{p,\beta }(1)\) whenever \(\beta \ne 0\).

Proof

Since for every \(x\in \mathbb {R}\),

$$\begin{aligned} \lim \nolimits _{y \rightarrow x} \ln | y + \cos (2\pi t) | = \ln | x + \cos (2\pi t) | \end{aligned}$$

holds for all but (at most) two \(t\in \mathbb {T}\), the continuity of \(i_{p,\beta }\) follows from the Dominated Convergence Theorem. Clearly, \(i_{p,\beta }\) is even, with \(|i_{p,\beta }(x)| \le \int _{\mathbb {T}} 1\, \mathrm{d}\lambda _{\mathbb {T}} = 1\) for every \(x\in \mathbb {R}\), and \(i_{p,\beta }(0) =I_{p,\beta }\). Finally, it follows from

$$\begin{aligned} i_{p,\beta } (1) = e^{\imath \beta \ln 4} \int _{\mathbb {T}} e^{4\pi \imath p t + 4 \imath \beta \ln |\cos (\pi t)|} \, \mathrm{d}t = e^{\imath \beta \ln 4} I_{2 p , 2 \beta }, \end{aligned}$$

and (5.5) that, for every \(p\in \mathbb {Z}\) and \(\beta \in \mathbb {R}\setminus \{0\}\),

$$\begin{aligned} \left| \frac{i_{p,\beta }(1)}{i_{p,\beta }(0)} \right| ^2 = \frac{|I_{2p, 2\beta }|^2}{|I_{p,\beta }|^2} = \frac{2\beta \tanh (2\pi \beta )}{4 p^2 + 4 \beta ^2} \cdot \frac{p^2 + \beta ^2}{\beta \tanh (\pi \beta )} = \frac{1}{2} \left( 1 + \frac{1}{\cosh (2\pi \beta )}\right) < 1, \end{aligned}$$

and hence \(i_{p,\beta }(1)\ne i_{p,\beta }(0)\). \(\square \)

The subsequent analysis crucially depends on the fact that \(i_{p,\beta }\) is actually much smoother than Lemma 5.7 seems to suggest. Recall that a function \(f:\mathbb {R}^m\rightarrow \mathbb {C}\) is real-analytic on an open set \(\mathcal {U}\subset \mathbb {R}^m\) if \(f\) can, in a neighbourhood of each point in \(\mathcal {U}\), be represented as a convergent power series. As will become clear soon, the ultimate proof of Theorem 5.4 relies heavily on the following refinement of Lemma 5.7.

Lemma 5.8

For every \(p\in \mathbb {Z}\) and \(\beta \in \mathbb {R}\), the function \(i_{p,\beta }\) is real-analytic on \((-1,1)\).

Proof

As \(i_{p,0}\) is constant, and thus trivially real-analytic, henceforth assume \(\beta \ne 0\). By Lemma 5.7, the function \(f:\mathbb {T}\rightarrow \mathbb {C}\) with \(f(t)= i_{p,\beta }\bigl (\cos (\pi t) \bigr )\) is well-defined and continuous. Hence it can be represented, at least in the \(L^2(\lambda _{\mathbb {T}})\)-sense, as a Fourier series \(f(t) \sim \sum _{k\in \mathbb {Z}} c_k e^{2\pi \imath k t}\) where, for every \(k\in \mathbb {Z}\),

$$\begin{aligned} c_k&= \int _{\mathbb {T}} f(t) e^{-2\pi \imath k t} \, \mathrm{d} t = \int _{\mathbb {T}^2} e ^{-2\pi \imath k t_1 + 4 \pi \imath |p|t_2 + 2\imath \beta \ln | \cos (\pi t_1) + \cos (2\pi t_2) |} \, \mathrm{d} t \\&= \int _{\mathbb {T}^2} e^{4\pi \imath |p| (t_1 - t_2) - 4\pi \imath k (t_1 + t_2) + 2\imath \beta \ln |2\cos (2\pi t_1) \cos (2\pi t_2)|} \, \mathrm{d}t \\&= e^{\imath \beta \ln 4} \int _{\mathbb {T}} e^{4\pi \imath (|p|-k)t + 2 \imath \beta \ln |\cos (2\pi t)|} \, \mathrm{d}t \int _{\mathbb {T}} e^{4\pi \imath (|p|+k) t + 2\imath \beta \ln |\cos (2 \pi t)|} \, \mathrm{d} t \\&= e^{\imath \beta \ln 4} I_{|p|-k, \beta } I_{|p|+k, \beta }. \end{aligned}$$

Since \(c_{-k} = c_k\), the Fourier series of \(f\) is

$$\begin{aligned} c_0 + 2 \sum \nolimits _{n\in \mathbb {N}} c_n \cos (2\pi n t ) = c_0 + 2 \sum \nolimits _{n=1}^{\infty } c_n T_{2n}\bigl ( \cos (\pi t ) \bigr ) \, , \end{aligned}$$

and since furthermore

$$\begin{aligned} |c_n| = |I_{n-|p|, \beta } I_{n+|p|, \beta }| =\frac{\beta \tanh (\pi \beta )}{\pi \sqrt{(n^2 + p^2 + \beta ^2)^2 - 4 n^2 p^2}} = \mathcal {O}(n^{-2}), \quad \text{ as } n \rightarrow \infty , \end{aligned}$$

and hence \(\sum _{n=1}^{\infty }|c_n|<+\infty \), this series converges uniformly on \(\mathbb {T}\), by the Weierstrass M-test. It follows that \(i_{p,\beta } (x) = c_0 + 2 \sum \nolimits _{n=1}^{\infty } c_n T_{2n}(x)\) uniformly in \(x\in [-1,1]\).

For every \(y \in (-1,1)\), consider now the auxiliary function

$$\begin{aligned} h(x,y):= 2 \sum \nolimits _{n = 1+ |p|}^{\infty } c_n T_{2n}(x) y^n. \end{aligned}$$

Note that \( i_{p,\beta } (x) = c_0 + 2 \sum \nolimits _{n=1}^{|p|} c_n T_{2n}(x) + \lim \nolimits _{y \uparrow 1} h(x,y) \) uniformly in \(x\in [-1,1]\). In addition, introduce an analytic function on the open unit disc as

$$\begin{aligned} H(z) := \sum \nolimits _{n=1+ |p|}^{\infty } c_n z^n, \quad \forall z\in \mathbb {C}: |z| < 1, \end{aligned}$$
(5.9)

and observe that

$$\begin{aligned} H&( z) = z^{1+|p|} \sum \nolimits _{n=0}^{\infty } c_{n+1+ |p|} z^n = e^{\imath \beta \ln 4} z^{1+ |p|} \sum \nolimits _{n=0}^{\infty } I_{n+1,\beta } I_{n+1+2|p|, \beta } z^n \\&= e^{-\imath \beta \ln 4}z^{1+|p|} \frac{\bigl ( 2 \imath \beta \Gamma (2\imath \beta )\bigr )^2}{\bigl ( \imath \beta \Gamma (\imath \beta ) \bigr )^4} \sum \nolimits _{n=0}^{\infty } \frac{(-\imath \beta )_{n+1}(-\imath \beta )_{n+1+2|p|}}{(1+\imath \beta )_{n+1} (1+\imath \beta )_{n+1+2|p|}} z^n \\&= e^{-\imath \beta \ln 4}z^{1+ |p|} \frac{\bigl ( 2 \imath \beta \Gamma (2\imath \beta )\bigr )^2}{\bigl ( \imath \beta \Gamma (\imath \beta ) \bigr )^4} \cdot \frac{(\imath \beta )^2}{(1+\imath \beta )^2}\cdot \frac{(1-\imath \beta )_{2|p|}}{(2+\imath \beta )_{2|p|}}\\&\quad \cdot \sum \nolimits _{n=0}^{\infty } \frac{(1-\imath \beta )_n(1+2|p|-\imath \beta )_{n}}{(2+\imath \beta )_n (2+2|p|+\imath \beta )_{n}}z^n \\&= \frac{4 e^{-\imath \beta \ln 4} \Gamma (2\imath \beta )^2 (1-\imath \beta )_{2|p|}}{(1+\imath \beta )^2 \Gamma (\imath \beta )^4 (2+\imath \beta )_{2|p|}}\\&\qquad \cdot {}_{3}{F}^{ }_{2} (1-\imath \beta ,1+ 2|p| - \imath \beta , 1; 2+\imath \beta , 2+2|p|+ \imath \beta ; z) z^{1+|p|}\, ; \end{aligned}$$

here the standard notation for (generalized) hypergeometric functions has been used, see e.g. [28, Ch.II] or [34, Ch.16]. Recall that \({}_{3}{F}^{ }_{2}\) is an analytic function on \(\mathbb {C}\setminus [1,+\infty )\), that is, on the entire complex plane minus a cut from \(1\) to \(\infty \) along the positive real axis. Hence \(H\) as given by (5.9) can be extended analytically to \(\mathbb {C}\setminus [1,+\infty )\) as well. Observe now that

$$\begin{aligned} H (e^{2\pi \imath t} y ) + H(e^{-2\pi \imath t} y)&= 2 \! \sum \nolimits _{n=1+ |p|}^{\infty } \! \! c_n T_{2n} \bigl (\cos (\pi t ) \bigr ) y^n \\&= h\bigl ( \cos (\pi t) , y \bigr ), \quad \forall t \in \mathbb {T}, y \in (-1,1). \end{aligned}$$

It follows that, for all \(x\in [-1,1]\),

$$\begin{aligned} i_{p,\beta }(x)&= c_0 + 2 \sum \nolimits _{n=1}^{|p|} c_n T_{2n}(x) \, + \\&\quad \,\, + \lim \nolimits _{y \uparrow 1} \left\{ H \bigl ( (2x^2 - 1 + 2\imath x \sqrt{1-x^2}) y \bigr ) + H \bigl ( (2x^2 - 1 - 2\imath x \sqrt{1-x^2}) y \bigr ) \right\} \\&= c_0 + 2 \sum \nolimits _{n=1}^{|p|} c_n T_{2n}(x) \, + \\&\quad \,\,+ H \bigl ( 2x^2 - 1 + 2\imath x \sqrt{1-x^2}\, \bigr ) + H \bigl ( 2x^2 - 1 - 2\imath x \sqrt{1-x^2}\, \bigr ). \end{aligned}$$

Note now that \(2z^2 - 1\pm 2\imath z \sqrt{1-z^2}\not \in [1,+\infty )\) whenever \(|z|<1\). The function

$$\begin{aligned} z\mapsto c_0 + 2 \sum \nolimits _{n=1}^{|p|} c_n T_{2n}(z) + H \bigl ( 2z^2 - 1 + 2\imath z \sqrt{1-z^2}\, \bigr ) + H \bigl ( 2z^2 - 1 - 2\imath z \sqrt{1-z^2}\, \bigr ), \end{aligned}$$

therefore, is analytic on the open unit disc and coincides with \(i_{p,\beta }\) on \(\{z: |z|<1\}\cap \mathbb {R}= (-1,1)\). Thus \(i_{p,\beta }\) is real-analytic on \((-1,1)\), and in fact \(i_{p,\beta }(x) = \sum _{n=0}^{\infty } i_{p,\beta }^{(n)}(0) x^n/n!\) for all \(x\in (-1,1)\). \(\square \)

Remark 5.9

Since \(t \mapsto x+\cos (2\pi t)\) does not change sign on \(\mathbb {T}\) whenever \(|x|>1\), it is clear from (5.8) that the function \(i_{p,\beta }\) is real-analytic on \(\mathbb {R}\setminus [-1,1]\) as well.

For every \(d\in \mathbb {N}\), define a non-empty open subset of \(\mathbb {R}^d\) as

$$\begin{aligned} \mathcal {E}_d:= \left\{ u \in \mathbb {R}^d : \exists j \in \{1, \ldots , d\}\, \text{ with } |u_{j}|> \sum \nolimits _{k\ne j} |u_k| \right\} . \end{aligned}$$

Geometrically, \(\mathcal {E}_d\) is the disjoint union of \(2d\) open cones. For example, \(\mathcal {E}_1 = \mathbb {R}\setminus \{0\}\) and \(\mathcal {E}_2 = \{u\in \mathbb {R}^2 : |u_1| \ne |u_2|\}\), hence \(\mathcal {E}_d\) is also dense in \(\mathbb {R}^d\) for \(d=1,2\). For \(d\ge 3\) this is no longer the case. In fact, a simple calculation shows that

$$\begin{aligned} \frac{\text{ Leb } (\mathcal {E}_d \cap [-1,1]^d)}{\text{ Leb } ([-1,1]^d)} = \frac{2^d/\Gamma (d)}{2^d} = \frac{1}{\Gamma (d)}, \quad \forall d \in \mathbb {N}, \end{aligned}$$

and so the (relative) portion of \(\mathbb {R}^d\) taken up by \(\mathcal {E}_d\) decays rapidly with growing \(d\).
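For instance, the displayed formula can be checked directly for \(d=3\) (a verification added here): by symmetry,

$$\begin{aligned} \text{ Leb } (\mathcal {E}_3 \cap [-1,1]^3) = 3 \cdot 2^3 \cdot \text{ Leb } \bigl ( \{ u \in [0,1]^3 : u_1 > u_2 + u_3 \} \bigr ) = 24 \cdot \frac{1}{6} = 4 , \end{aligned}$$

and hence the ratio equals \(4/8 = \frac{1}{2} = 1/\Gamma (3)\). In general, \(\text{ Leb } (\{u \in [0,1]^d : u_1 > u_2 + \cdots + u_d\}) = 1/d!\), and multiplying by the \(d\) possible choices of the dominant coordinate and the \(2^d\) sign patterns yields \(2^d/\Gamma (d)\), as stated.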

In order to utilize Lemma 5.8 for a proof of Theorem 5.4, given any \(p_1, \ldots , p_d \in \mathbb {Z}\) and \(\beta \in \mathbb {R}\), recall the map \(\Lambda _u\) from (5.2) and consider the integral

$$\begin{aligned} J= J(u)&= \widehat{\lambda _{\mathbb {T}^d} \circ \Lambda _{u}^{-1}} (2) = \int _{\mathbb {T}} e^{4\pi \imath t} \, \mathrm{d} (\lambda _{\mathbb {T}^d} \circ \Lambda _u^{-1})(t) \nonumber \\&= \int _{\mathbb {T}^d} e^{4\pi \imath (p_1 t_1 + \cdots + p_d t_d + \beta \ln |u_1 \cos (2\pi t_1) + \cdots + u_d \cos (2\pi t_d)|)} \, \mathrm{d}t. \end{aligned}$$
(5.10)

An important consequence of Lemma 5.8 is

Lemma 5.10

For every \(p_1, \ldots , p_d \in \mathbb {Z}\) and \(\beta \in \mathbb {R}\setminus \{0\}\), the function \(u\mapsto J(u)\) given by (5.10) is real-analytic and non-constant on each connected component of \(\mathcal {E}_d\).

Proof

If \(d=1\) then, as seen in essence already in the proof of Lemma 5.6,

$$\begin{aligned} u_1 \mapsto J(u_1) = \int _{\mathbb {T}} e^{4\pi \imath p_1 t + 4 \pi \imath \beta \ln |u_1 \cos (2\pi t)|} \, \mathrm{d}t = e^{4\pi \imath \beta \ln |u_1|} I_{p_1 , 2 \pi \beta } \end{aligned}$$

is real-analytic and non-constant on each of the two connected parts of \(\mathbb {R}\setminus \{0\} = \mathcal {E}_1\).

Assume in turn that \(d\ge 2\). As the roles of \(t_1, \ldots , t_d\) can be interchanged in (5.10), assume w.l.o.g. that \(u_d \ne 0\). Since \(J(\pm u_1, \ldots , \pm u_d)= J(u_1, \ldots , u_d)\) for all \(u \in \mathbb {R}^d\) and every possible combination of \(+\) and \(-\) signs, and since also

$$\begin{aligned} J(u) = e^{4\pi \imath \beta \ln |u_d|} J \left( \frac{u_1}{u_d} , \ldots , \frac{u_{d-1}}{u_d}, 1\right) \!, \end{aligned}$$

it suffices to show that \(\widetilde{J} = \widetilde{J}(u):= J(u_1, \ldots , u_{d-1}, 1)\) is real-analytic and non-constant on \(\widetilde{\mathcal {E}}_{d-1}:= \{u \in \mathbb {R}^{d-1}: \sum _{j=1}^{d-1}|u_j|<1\}\). To this end note first that

$$\begin{aligned} \widetilde{J}(u) = \int _{\mathbb {T}^{d-1}} e^{4\pi \imath (p_1 t_1 + \cdots + p_{d-1}t_{d-1})} i_{p_d, 2 \pi \beta } \bigl ( u_1 \cos (2\pi t_1) + \cdots + u_{d-1} \cos (2\pi t_{d-1}) \bigr ) \, \mathrm{d}t. \end{aligned}$$

With Lemma 5.7 and the Dominated Convergence Theorem, it is clear that \(\widetilde{J}\) is continuous on \(\mathbb {R}^{d-1}\). Recall from the proof of Lemma 5.8 that \(i_{p, \beta }\) can be represented by a power series, namely \(i_{p,\beta }(x) = \sum _{n=0}^{\infty } i_{p, \beta }^{(n)}(0) x^n/n!\) for all \(p\in \mathbb {Z}\), \(\beta \in \mathbb {R}\) and \(|x|<1\). For every \(u\in \widetilde{\mathcal {E}}_{d-1}\), therefore,

$$\begin{aligned} \widetilde{J}(u)&= \int _{\mathbb {T}^{d-1}} e^{4\pi \imath (p_1 t_1 + \cdots + p_{d-1}t_{d-1})} \sum \nolimits _{n=0}^{\infty } \frac{i_{p_d, 2\pi \beta }^{(n)}(0)}{n!} \left( \sum \nolimits _{j=1}^{d-1} u_{j} \cos (2\pi t_{j}) \right) ^n \, \mathrm{d}t \nonumber \\&= \sum \nolimits _{n=0}^{\infty } \frac{ i_{p_d, 2\pi \beta }^{(2n)} (0) }{2^{2n}} \sum \nolimits _{|\nu | = n} \left\{ \prod \nolimits _{j=1}^{d-1} \frac{u_j^{2\nu _j}}{(2\nu _j)!} \left( \!\! \begin{array}{c} 2\nu _j \\ \nu _j \! + \! |p_j| \end{array} \!\! \right) \right\} \!, \end{aligned}$$
(5.11)

where the standard notation for multi-indices \(\nu = (\nu _1, \ldots , \nu _{d-1})\in (\mathbb {N}_0)^{d-1}\) has been used, see e.g. [24, pp.25–29]. Thus \(\widetilde{J}\) is real-analytic on \(\widetilde{\mathcal {E}}_{d-1}\), by [24, Prop.2.2.7].
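To illustrate the multi-index notation in (5.11) with a small example (added here): for \(d=3\), \(p_1 = p_2 = 0\) and \(n=1\), the multi-indices with \(|\nu | = 1\) are \((1,0)\) and \((0,1)\), the factor belonging to the coordinate with \(\nu _j = 0\) equals \(1\), and the corresponding term of (5.11) is

$$\begin{aligned} \frac{i_{p_3, 2\pi \beta }^{(2)}(0)}{2^{2}} \left\{ \frac{u_1^{2}}{2!} \binom{2}{1} + \frac{u_2^{2}}{2!} \binom{2}{1} \right\} = \frac{i_{p_3, 2\pi \beta }^{(2)}(0)}{4} \, (u_1^2 + u_2^2) . \end{aligned}$$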

It remains to show that \(\widetilde{J}\) is non-constant on \(\widetilde{\mathcal {E}}_{d-1}\). Consider first the case of \(d=2\), for which (5.11) takes the form

$$\begin{aligned} \widetilde{J}(u_1) = \sum \nolimits _{n=|p_1|}^{\infty } \frac{ i_{p_2, 2\pi \beta }^{(2n)} (0) }{2^{2n}} \left( \!\! \begin{array}{c} 2n \\ n \! + \! |p_1| \end{array} \!\! \right) \frac{u_1^{2n}}{(2n)!}, \quad \forall u_1 \in \widetilde{\mathcal {E}}_1 = (-1,1). \end{aligned}$$
(5.12)

Recall that \(u_1 \mapsto \widetilde{J}(u_1)\) is continuous. If \(p_1 \ne 0\) then \(\widetilde{J}(0)=0\) whereas

$$\begin{aligned} \widetilde{J}(1)&= \int _{\mathbb {T}^2} e^{4\pi \imath (p_1 t_1 + p_2 t_2 + \beta \ln |\cos (2\pi t_1) + \cos (2\pi t_2)|)} \, \mathrm{d} t \\&= \int _{\mathbb {T}^2} e^{4\pi \imath (p_1 (t_1 - t_2) + p_2 (t_1 + t_2) + \beta \ln |2\cos (2\pi t_1) \cos (2\pi t_2)|)} \, \mathrm{d} t \\&= e^{4\pi \imath \beta \ln 2} I_{p_1 + p_2 , 2\pi \beta } I_{p_1 - p_2 , 2 \pi \beta } \ne 0, \end{aligned}$$

since \(\beta \ne 0\). If, on the other hand, \(p_1 = 0\) then \(\widetilde{J}(0) = I_{p_2, 2\pi \beta }\), while \(\widetilde{J}(1) = e^{4\pi \imath \beta \ln 2} I_{p_2, 2\pi \beta }^2 \ne \widetilde{J}(0)\). In either case, therefore, \(u_1 \mapsto \widetilde{J}(u_1)\) is non-constant on \(\widetilde{\mathcal {E}}_1=(-1,1)\). This concludes the proof for \(d=2\).

Finally, to deal with the case of \(d\ge 3\), note first that the above argument for \(d=2\) really shows that, given any \(p\in \mathbb {Z}\) and \(\beta \in \mathbb {R}\setminus \{0\}\), the number \(i_{p, 2\pi \beta }^{(2n)}(0)\) is non-zero for infinitely many \(n\in \mathbb {N}_0\). (Otherwise, by (5.12), the function \(u_1 \mapsto \widetilde{J}(u_1)\) would be constant for \(|p_1|\) sufficiently large, which has just been shown not to be the case.) But then

$$\begin{aligned} \widetilde{J}(u) = \sum \nolimits _{n=|p_1| + \cdots + |p_{d-1}|}^{\infty } \frac{ i_{p_d, 2\pi \beta }^{(2n)} (0) }{2^{2n}} \sum \nolimits _{|\nu | = n} \left\{ \prod \nolimits _{j=1}^{d-1} \frac{u_j^{2\nu _j}}{(2\nu _j)!} \left( \!\! \begin{array}{c} 2\nu _j \\ \nu _j \! + \! |p_j| \end{array} \!\! \right) \right\} \end{aligned}$$

is obviously non-constant on \(\widetilde{\mathcal {E}}_{d-1}\). \(\square \)

Given \(p_1, \ldots , p_d \in \mathbb {Z}\) and \(\beta \in \mathbb {R}\), denote by \(\mathcal {D}_d\) the set of all \(u\in \mathbb {R}^d\) for which \(\lambda _{\mathbb {T}^d} \circ \Lambda _u^{-1}\) coincides with \(\lambda _{\mathbb {T}}\), i.e., let \( \mathcal {D}_d = \{u \in \mathbb {R}^d : \lambda _{\mathbb {T}^d} \circ \Lambda _u^{-1} = \lambda _{\mathbb {T}} \}\). An immediate consequence of Lemma 5.10 is

Lemma 5.11

For every \(p_1, \ldots , p_d \in \mathbb {Z}\) and \(\beta \in \mathbb {R}\setminus \{0\}\) the set \(\mathcal {D}_d \cap \mathcal {E}_d\subset \mathbb {R}^d\) is nowhere dense and has Lebesgue measure zero.

Proof

This is clear from the fact that \(\mathcal {D}_d \cap \mathcal {E}_d \subset \{u \in \mathcal {E}_d: J(u) = 0\}\). As \(u\mapsto J(u)\) is real-analytic and non-constant on each component of \(\mathcal {E}_d\), the zero-locus of \(J\) on \(\mathcal {E}_d\) is nowhere dense and has Lebesgue measure zero; see e.g. [8, Lem. 19] or [24, Sec. 4.1]. \(\square \)

At long last, the Proof of Theorem 5.4 has become very simple: Since \(\mathcal {D}_d \cap \mathcal {E}_d\) is nowhere dense, \(\mathcal {E}_d \setminus \mathcal {D}_d \ne \varnothing \), and \(\lambda _{\mathbb {T}^d}\circ \Lambda _u^{-1} \ne \lambda _{\mathbb {T}}\) for every \(u\in \mathcal {E}_d \setminus \mathcal {D}_d\), by the definition of \(\mathcal {D}_d\). \(\square \)

Remark 5.12

(i) Since \(\mathcal {E}_1\) and \(\mathcal {E}_2\) are dense in \(\mathbb {R}\) and \(\mathbb {R}^2\), respectively, the set \(\mathcal {D}_d\) is nowhere dense in \(\mathbb {R}^d\) for \(d=1,2\) whenever \(\beta \ne 0\). It may be conjectured that \(\mathcal {D}_d\) is nowhere dense (and has Lebesgue measure zero) for \(d\ge 3\) also; no proof of, or counter-example to this conjecture is known to the authors.

(ii) Note that \(\lambda _{\mathbb {T}^d} \circ \Lambda _u^{-1} = \lambda _{\mathbb {T}}\) if, for some \(j\in \{1,\ldots , d\}\), both \(p_j \ne 0\) and \(\beta u_j =0\). Thus

$$\begin{aligned} \bigcup \nolimits _{j:p_j \ne 0} \left\{ u\in \mathbb {R}^d : \beta u_j = 0 \right\} \subset \mathcal {D}_d, \end{aligned}$$
(5.13)

and hence for \(\beta \ne 0\) the set \(\mathcal {D}_d\) contains the union of at most \(d\) coordinate hyper-planes. Beyond the conjecture formulated in (i), it is tempting to speculate whether in fact equality holds in (5.13) always, i.e. for any \(p_1, \ldots , p_d\in \mathbb {Z}\) and \(\beta \in \mathbb {R}\)—as it does for \(\beta =0\) (trivial) and \(d=1\) (Lemma 5.6). Obviously, equality in (5.13) would establish a much stronger version of Theorem 5.4.

(iii) Even if the set \(\mathcal {D}_d \subset \mathbb {R}^d\) is indeed nowhere dense and has Lebesgue measure zero for every \(d\in \mathbb {N}\), as conjectured in (i), for large values of \(d\) the equality \(\lambda _{\mathbb {T}^d}\circ \Lambda _u^{-1} = \lambda _{\mathbb {T}}\), though generically false, is nevertheless often true approximately—in some sense, and quite independently of the specific values of \(p_1, \ldots , p_d \in \mathbb {Z}\) and \(\beta \in \mathbb {R}\setminus \{0\}\). Under mild conditions on these parameters, this observation can easily be made rigorous as follows: Assume, for instance, that the integer sequence \((p_n)\) is not identically zero, say \(p_1 \ne 0\) for convenience, and \(\beta \ne 0\). Also assume that

$$\begin{aligned} (u_n) \, \, \text{ is } \text{ a } \text{ bounded } \text{ sequence } \text{ in } \, \, \mathbb {R}\, \, \text{ with } \sum \nolimits _{n=1}^{\infty } u_n^2 = +\infty . \end{aligned}$$
(5.14)

If \(u_{1} =0\) then \(\lambda _{\mathbb {T}^d} \circ \Lambda _u^{-1} = \lambda _{\mathbb {T}}\) for all \(d\in \mathbb {N}\). On the other hand, if \(u_{1} \ne 0\), let \(\sigma _d := \sqrt{1 + \sum _{j=2}^d u_j^2}\) and observe that, for every \(k\in \mathbb {Z}\setminus \{0\}\),

$$\begin{aligned} \widehat{\lambda _{\mathbb {T}^d} \circ \Lambda _u^{-1}} (k)&= e^{2\pi \imath k\beta \ln \sigma _d}\\&\quad \cdot \int _{\mathbb {T}^d} e^{2\pi \imath k \left( p_{1} t_{1} + \sum _{j=2}^d p_j t_j + \beta \ln \left| u_1/ \sigma _d \cos (2\pi t_1) + \sum _{j=2}^d u_j / \sigma _d \cos (2\pi t_j) \right| \right) } \, \mathrm{d} t. \end{aligned}$$

Since \(\sigma _d \rightarrow +\infty \) as \(d \rightarrow \infty \) yet \((u_n)\) is bounded, it follows from the Central Limit Theorem (see e.g. [12, Sec. 9.1]) that \(\lim _{d\rightarrow \infty } \widehat{\lambda _{\mathbb {T}^d} \circ \Lambda _u^{-1}} (k) = 0\). Under the mild assumption (5.14), therefore, \( \lim \nolimits _{d \rightarrow \infty } \lambda _{\mathbb {T}^d} \circ \Lambda _u^{-1} = \lambda _{\mathbb {T}} \) in \(\mathcal {P}(\mathbb {T})\) in the sense of weak convergence of probability measures. Informally put, the probability measure \(\lambda _{\mathbb {T}^d}\circ \Lambda _u^{-1}\) typically differs but little from \(\lambda _{\mathbb {T}}\) whenever \(d\) is large.

(iv) The above proof of Theorem 5.4 relies heavily on specific properties of the logarithm, notably on the fact that \(\ln |xy| = \ln |x| + \ln |y|\) whenever \(xy\ne 0\). It seems plausible, however, that the conclusion of that theorem may remain valid if the function \(\ln |\cdot |\) in (5.2) is replaced by virtually any non-constant function that is real-analytic on \(\mathbb {R}\setminus \{0\}\) and has \(0\) as a mild singularity. Establishing such a much more general version of Theorem 5.4 will likely require a conceptual approach quite different from the rather computational strategy pursued herein.


Cite this article

Berger, A., Eshun, G. A Characterization of Benford’s Law in Discrete-Time Linear Systems. J Dyn Diff Equat 28, 431–469 (2016). https://doi.org/10.1007/s10884-014-9393-y
