1 Introduction

Recall that the Möbius function is an arithmetic function defined by the formula

$$\begin{aligned} \mu (n)= {\left\{ \begin{array}{ll} 1 &{}\text {if}~n=1,\\ (-1)^k ~&{}\text {if} \, n \, \text {is a product of} \, k \, \text {distinct primes,}\\ 0 &{}\text {otherwise}. \end{array}\right. } \end{aligned}$$
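For concreteness, here is a minimal Python sketch (ours, not part of the paper) that computes \(\mu (n)\) by trial division and prints the partial sums \(\sum _{n\le N}\mu (n)\), whose slow growth is the content of the equivalence with the Riemann hypothesis recalled below.

```python
# A minimal sketch (not from the paper): computing the Moebius function mu(n) by
# trial-division factorization and printing the partial sums sum_{n<=N} mu(n).

def mobius(n: int) -> int:
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:       # p^2 divides the original n, so mu(n) = 0
                return 0
            result = -result     # one more distinct prime factor
        p += 1
    if n > 1:                    # a leftover prime factor
        result = -result
    return result

if __name__ == "__main__":
    for N in (10, 100, 1000, 10000):
        print(N, sum(mobius(n) for n in range(1, N + 1)))
```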

It is well known that the Möbius function plays an important role in number theory: for example, the statement

$$\begin{aligned} \sum _{n=1}^{N} \mu (n)=O_{\epsilon }(N^{\frac{1}{2}+\epsilon }), ~\text {for each}~\epsilon >0, \end{aligned}$$

is equivalent to the Riemann hypothesis (see [18]). In [2], Chowla formulated a conjecture on the higher-order correlations of the Möbius function:

Conjecture 1.1

For each choice of \(0\le a_0<a_1< \dots <a_r\), \(r\ge 0\), \(i_s\in \{1,2\}\) not all equal to 2,

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n=0}^{N-1} \mu ^{i_0}(n+a_0)\cdot \mu ^{i_1}(n+a_1)\cdot \dots \cdot \mu ^{i_r}(n+a_r)=0. \end{aligned}$$

Let T be a continuous transformation on a compact metric space X. Following Sarnak [15], we will say that a sequence \((a(n))_{n\in {\mathbb {N}}}\) of complex numbers is orthogonal to the topological dynamical system (X, T) if

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n=0}^{N-1} f(T^nx)a(n)=0, \end{aligned}$$
(1.1)

for all \(x\in X\) and for all \(f\in C(X)\) where C(X) is the space of continuous functions on X. In 2010, Sarnak [15] formulated the following conjecture which is implied by Chowla’s conjecture.

Conjecture 1.2

The Möbius function is orthogonal to all the topological dynamical systems of zero entropy.

Recently, el Abdalaoui, Kulaga-Przymus, Lemańczyk and de la Rue [6] studied these conjectures from the ergodic theory point of view. Let z be an arbitrary sequence taking values in \(\{-1,0,1\}\), as the Möbius function does. Following el Abdalaoui et al., we say that z satisfies the condition \(\mathrm{(Ch)}\) if it satisfies the condition in Chowla's conjecture (with \(\mu \) replaced by z); similarly, it satisfies the condition \(\mathrm{(S_0)}\) if it satisfies the condition in Sarnak's conjecture (with \(\mu \) replaced by z). They proved that the condition \(\mathrm{(Ch)}\) implies the condition \(\mathrm{(S_0)}\). El Abdalaoui et al. also provided some non-trivial examples of sequences satisfying the condition \(\mathrm{(Ch)}\) and the condition \(\mathrm{(S_0)}\), respectively.

In [9], Fan defined that a bounded sequence \((c(n))_{n\in {\mathbb {N}}}\) is oscillating of order d (\(d\ge 1\) being an integer) if

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n=0}^{N-1} c(n) e^{2\pi i P(n)}=0 \end{aligned}$$

for any \(P\in {\mathbb {R}}_d[t]\), where \({\mathbb {R}}_d[t]\) is the space of real polynomials of degree at most d; it is said to be fully oscillating if it is oscillating of order d for every integer \(d\ge 1\). Clearly, an oscillating sequence of order d is also oscillating of order \(d-1\) for any \(d\ge 2\). The Möbius function is a fully oscillating sequence [11]. In [16], we gave several equivalent definitions of oscillating sequences, one of which states that a sequence is fully oscillating if and only if it is orthogonal to all zero-entropy affine maps on compact abelian groups. This result implies that sequences satisfying the condition \(\mathrm{(S_0)}\) are fully oscillating.
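For readers who wish to experiment, the following Python sketch (ours, not part of the paper) probes oscillation of order d empirically; the random sampling of polynomials and the toy candidate sequence are illustrative choices, and small averages are only numerical evidence, not a proof.

```python
# A hedged numerical sketch (not from the paper): probing whether a sequence looks
# oscillating of order d by averaging c(n) e^{2*pi*i*P(n)} over a few random real
# polynomials P of degree <= d.  The polynomials tested are only a random sample
# of R_d[t], and N is kept moderate to limit floating-point phase errors.
import numpy as np

def oscillation_probe(c, d, trials=5, seed=0):
    rng = np.random.default_rng(seed)
    n = np.arange(len(c), dtype=float)
    worst = 0.0
    for _ in range(trials):
        coeffs = rng.uniform(0.0, 1.0, size=d + 1)       # random P(t) = sum coeffs[k] t^k
        phase = sum(a * n**k for k, a in enumerate(coeffs))
        worst = max(worst, abs(np.mean(c * np.exp(2j * np.pi * phase))))
    return worst

if __name__ == "__main__":
    N = 20_000
    n = np.arange(N, dtype=float)
    c = np.exp(2j * np.pi * np.sqrt(2.0) * n**2)         # a toy candidate sequence
    for d in (1, 2, 3):
        print(f"order {d}: max |average| over sampled P = {oscillation_probe(c, d):.4f}")
```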

In what follows, a sequence taking values in \(\{-1,0,1\}\) is said to have the Chowla property (resp. the Sarnak property) if it satisfies the condition \(\mathrm{(Ch)}\) (resp. the condition \(\mathrm{(S_0)}\)). In this paper, we consider a more general class of sequences, taking either the value 0 or complex values of modulus 1. We define the Chowla property and the Sarnak property for this class of sequences and show that the Chowla property implies the Sarnak property. A sequence having the Chowla property is called a Chowla sequence. Moreover, we are interested in finding non-trivial Chowla sequences. We prove that for almost every \(\beta >1\), the sequence \((e^{2\pi i \beta ^n})_{n\in {\mathbb {N}}}\) is a Chowla sequence and consequently is orthogonal to all topological dynamical systems of zero entropy. This extends a result of Akiyama and Jiang [1], who proved that for almost every \(\beta >1\), the sequence \((e^{2\pi i \beta ^n})_{n\in {\mathbb {N}}}\) is fully oscillating. On the other hand, from a probabilistic point of view, we discuss whether the sample paths of a given random sequence have the Chowla property almost surely. We also construct a sequence of dependent random variables which is almost surely a Chowla sequence.

The paper is organized as follows. In Sect. 2, we will recall some basic notions in topological and measurable dynamical systems. In Sect. 3, we will introduce the definition of the Chowla property for sequences taking either the value 0 or complex values of modulus 1, and the definition of the Sarnak property for sequences of complex numbers. In Sect. 4, it will be proved that for almost every \(\beta >1\), the sequence \((e^{2\pi i\beta ^n})_{n\in {\mathbb {N}}}\) is a Chowla sequence. In Sect. 5, we will give a unified definition of Chowla sequences and study random sequences: we will give an equivalent condition for stationary random sequences to be Chowla sequences almost surely; we will also prove, by a probabilistic method, that the Sarnak property does not imply the Chowla property. In Sect. 6, dependent random Chowla sequences will be constructed. In Sect. 7, we will discuss several equivalent definitions of Sarnak sequences and show that the Chowla property implies the Sarnak property.

Throughout this paper, we write \({\mathbb {N}}=\{0, 1, 2, \dots \}\) and \({\mathbb {N}}_{\ge 2}={\mathbb {N}}{\setminus } \{0,1\}\).

2 Preliminary

In this section, we recall some basic notions in dynamical systems. Recall that a pair (X, T) is called a topological dynamical system if X is a compact metric space and T is a continuous transformation on X.

2.1 Quasi-generic measure

Let (X, T) be a topological dynamical system. Denote by \({\mathcal {P}}_T(X)\) the set of all T-invariant probability measures on X, equipped with the weak\(^*\) topology. Under this topology, it is well known that the space \({\mathcal {P}}_T(X)\) is non-empty, compact and convex. Let \(x\in X\). Set

$$\begin{aligned} \delta _{N,T,x}:=\frac{1}{N}\sum _{n=0}^{N-1} \delta _{T^nx}. \end{aligned}$$

Sometimes, we write simply \(\delta _{N,x}\) if the system is fixed. Denote the set of all quasi-generic measures associated to x by

$$\begin{aligned} Q\text {-}gen(x):=\left\{ \nu \in {\mathcal {P}}_T(X): \delta _{N_k,x}\xrightarrow {k\rightarrow \infty }\nu , (N_k)_{k\in {\mathbb {N}}}\subset {\mathbb {N}} \right\} . \end{aligned}$$

By compactness of \({\mathcal {P}}_T(X)\), the set \(Q\text {-}gen(x)\) is not empty. Furthermore, the point x is said to be completely deterministic if for any measure \(\nu \in Q\text {-}gen(x)\), the entropy \(h(T,\nu )\) is zero. On the other hand, we say that the point x is quasi-generic for a measure \(\nu \in {\mathcal {P}}_T(X)\) along the sequence \((N_k)_{k\in {\mathbb {N}}}\) if \(\delta _{N_k,x}\rightarrow \nu \) as \(k\rightarrow \infty \). Moreover, x is said to be generic for \(\nu \) if the set \(Q\text {-}gen(x)\) contains only one element \(\nu \).
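These notions can be illustrated on the simplest example, an irrational rotation of the circle, where every point is generic for Lebesgue measure by unique ergodicity. The following Python sketch (ours, not part of the paper) approximates the empirical measures \(\delta _{N,T,x}\) by histograms of the orbit; the rotation angle and starting point are arbitrary choices.

```python
# A small illustrative sketch (not from the paper): for the rotation T(x) = x + alpha
# mod 1 with alpha irrational, every point is generic for Lebesgue measure.  We
# approximate delta_{N,T,x} by a histogram of the orbit and compare it with the
# uniform weights.
import numpy as np

def empirical_measure(x, alpha, N, bins=10):
    orbit = (x + alpha * np.arange(N)) % 1.0            # T^n x for n = 0, ..., N-1
    hist, _ = np.histogram(orbit, bins=bins, range=(0.0, 1.0))
    return hist / N                                     # weights of delta_{N,T,x} on the bins

if __name__ == "__main__":
    alpha = np.sqrt(2.0) % 1.0
    for N in (100, 10_000, 1_000_000):
        weights = empirical_measure(0.1, alpha, N)
        print(N, np.max(np.abs(weights - 0.1)))         # distance to the uniform weights
```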

2.2 Joinings

The notion of joinings was first introduced by Furstenberg [10]. It is a very powerful tool in ergodic theory. In this subsection, we will recall some basic concepts about joinings.

By a measurable dynamical system \((Y, {\mathcal {B}}, S, \mu )\), we mean that \((Y,{\mathcal {B}})\) is a compact measurable space, S is a measurable map from Y to itself and \(\mu \) is an S-invariant probability measure, that is to say, \(\mu (Y)=1\) and \(\mu (S^{-1}B)=\mu (B)\) for any \(B\in {\mathcal {B}}\).

Let \((X,{\mathcal {A}},T,\mu )\) and \((Y,{\mathcal {B}},S,\nu )\) be two measurable dynamical systems. We say that \((Y,{\mathcal {B}},S,\nu )\) is a factor of \((X,{\mathcal {A}}, T,\mu )\) (or \((X,{\mathcal {A}}, T,\mu )\) is an extension of \((Y,{\mathcal {B}},S,\nu )\)) if there exists a measurable map \(\pi :X\rightarrow Y\) such that \(\pi \circ T=S\circ \pi \) and the pushforward of \(\mu \) via \(\pi \) is \(\nu \), that is, \(\pi _*\mu =\nu \). We call the map \(\pi \) the factor map. In this case, it is classical that we can identify \({\mathcal {B}}\) with the \(\sigma \)-algebra \(\pi ^{-1}({\mathcal {B}})\subset {\mathcal {A}}\), which is T-invariant. On the other hand, any T-invariant sub-\(\sigma \)-algebra \({\mathcal {C}}\subset {\mathcal {A}}\) will be identified with the corresponding factor \((X/ {\mathcal {C}},{\mathcal {C}}, \mu _{|{\mathcal {C}}}, T)\).

Recall that a measurable dynamical system is said to be a Kolmogorov system if any non-trivial factor of it has positive entropy. Let \(\pi : (X,{\mathcal {A}}, T,\mu ) \rightarrow (Y,{\mathcal {B}}, S, \nu )\) be a factor map. The relative entropy of the factor map \(\pi \) is defined by the quantity \(h(T,\mu )-h(S,\nu )\). We say that the factor map \(\pi \) is relatively Kolmogorov if \(\pi \) is non trivial and if for any intermediate factor \((Z,{\mathcal {C}}, U, \rho )\) with factor map \(\pi ': (Z,{\mathcal {C}}, U, \rho ) \rightarrow (Y,{\mathcal {B}}, S, \nu )\), the relative entropy of \(\pi '\) is positive unless \(\pi '\) is an isomorphism.

Given measurable dynamical systems \((X_i, {\mathcal {B}}_i, T_i, \mu _i)\) for \(i=1,2,\dots , k\), we recall that a joining \(\rho \) is a \(T_1\times \dots \times T_k\)-invariant probability measure on \((X_1\times \dots \times X_k, {\mathcal {B}}_1\otimes \dots \otimes {\mathcal {B}}_k)\) such that \((P_i)_{*}(\rho )=\mu _i\), where \(P_i\) is the projection from \(X_1\times \dots \times X_k\) to \(X_i\) for \(1\le i\le k\). The set consisting of all joinings is denoted by \(J(T_1, T_2, \dots , T_k)\). Following Furstenberg [10], two measurable dynamical systems \((X_1, {\mathcal {B}}_1, T_1, \mu _1)\) and \((X_2, {\mathcal {B}}_2, T_2, \mu _2)\) are said to be disjoint if they have a unique joining, namely \(\mu _1\otimes \mu _2\).

Let \(\pi _i: (X_i,{\mathcal {A}}_i, T_i,\mu _i) \rightarrow (Y,{\mathcal {B}}, S,\nu )\) be two factor maps for \(i=1,2\). Given \(\rho \in J(S,S)\), the relatively independent extension of \(\rho \) is defined by \({\widehat{\rho }}\in J(T_1,T_2)\) such that

$$\begin{aligned} {\widehat{\rho }}(A_1\times A_2):=\int _{Y\times Y} {\mathbb {E}}[1_{A_1}| \pi _1^{-1}({\mathcal {B}})](x) {\mathbb {E}}[1_{A_2}| \pi _2^{-1}({\mathcal {B}})](y) d\rho (x,y), \end{aligned}$$

for each \(A_i\in {\mathcal {A}}_i, i=1,2.\) We say that the factor maps \(\pi _1\) and \(\pi _2\) are relatively independent over their common factor if \(J(T_1, T_2)\) only consists of \({\widehat{\Delta }}\) where \(\Delta \in J(S,S)\) is given by \(\Delta (B_1\times B_2):=\nu (B_1\cap B_2)\) for any \(B_1, B_2\in {\mathcal {B}}.\)

The following lemma is classical.

Lemma 2.1

([17], Lemma 3) Let \(\pi _i: (X_i,{\mathcal {A}}_i, T_i,\mu _i) \rightarrow (Y,{\mathcal {B}}, S,\nu )\) be two factor maps for \(i=1,2\). Suppose that \(\pi _1\) is of relative zero entropy and \(\pi _2\) is relatively Kolmogorov. Then \(\pi _1\) and \(\pi _2\) are relatively independent.

2.3 Shift system

Let X be a nonempty compact metric space. The product space \(X^{{\mathbb {N}}}\) endowed with the product topology is also a compact metric space. Coordinates of \(x\in X^{{\mathbb {N}}}\) will be denoted by x(n) for \(n\in {\mathbb {N}}\).

On the space \(X^{\mathbb {N}}\) there is a natural continuous action by the (left) shift \(S: X^{{\mathbb {N}}}\rightarrow X^{{\mathbb {N}}}\) defined by \((x(n))_{n\in {\mathbb {N}}} \mapsto (x(n+1))_{n\in {\mathbb {N}}}\). Here and below, we call the topological dynamical system \((X^{\mathbb {N}}, S)\) a (topological) shift system. Throughout this paper, we always denote by S the shift on the shift system.

We recall that the subsets of \(X^{\mathbb {N}}\) of the form

$$\begin{aligned}{}[C_0, C_1, \dots , C_k]:=\{x\in X^{\mathbb {N}}: x(j)\in C_j, ~\forall 0\le j \le k \} \end{aligned}$$

where \(k\ge 0\) and \(C_0, \dots , C_k\) are open subsets of X, are called cylinders; they form a basis of the product topology of \(X^{\mathbb {N}}\). It follows that any Borel measure on \(X^{\mathbb {N}}\) is uniquely determined by its values on these cylinders. Throughout this paper, we always denote by \({\mathcal {B}}\) the Borel \(\sigma \)-algebra on the shift system. For any Borel measure \(\mu \) on \(X^{\mathbb {N}}\), we sometimes write \((X^{\mathbb {N}}, S, \mu )\) for the measurable shift system \((X^{\mathbb {N}}, {\mathcal {B}}, S, \mu )\).

In what follows, we denote by \(F:X^{{\mathbb {N}}}\rightarrow X\) the projection on the first coordinate, i.e. \((x(n))_{n\in {\mathbb {N}}} \mapsto x(0)\). It is easy to see that the projection is continuous.

3 Definition of the Chowla property

In this section, we show how to generalize the Chowla property from \(\{-1,0, 1\}^{{\mathbb {N}}}\) to \((S^1\cup \{0\})^{{\mathbb {N}}}\) where \(S^1\) is the unit circle. Before we really get into the generalization of the Chowla property, we first generalize the Sarnak property for the sequences in \(\{-1,0, 1\}^{\mathbb {N}}\) to ones in \({\mathbb {C}}^{\mathbb {N}}\) in a natural way.

Definition 3.1

We say that a sequence \((z(n))_{n\in {\mathbb {N}}}\) of complex numbers has the Sarnak property if for each homeomorphism T of a compact metric space X with topological entropy \(h(T)=0\), we have

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n=0}^{N-1} f(T^nx)z(n)=0, \end{aligned}$$

for all \(f\in C(X)\) and for all \(x\in X\).

Now we begin to show how to generalize the Chowla property. To this end, we introduce the notion of the index of a sequence in \((S^1\cup \{0\})^{{\mathbb {N}}}\). Let \((z(n))_{n\in {\mathbb {N}}}\in (S^1\cup \{0\})^{{\mathbb {N}}}\). For \(m\in {\mathbb {N}}_{\ge 2}\), we denote by

$$\begin{aligned} U(m)=\left\{ e^{2\pi i \frac{k}{m}}: 0\le k\le m-1 \right\} \end{aligned}$$

the set consisting of all m-th roots of unity. We additionally define \(U(\infty )=S^1\), the set of complex numbers of modulus 1. The index of the sequence \((z(n))_{n\in {\mathbb {N}}}\) is defined as the smallest \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\) such that the set

$$\begin{aligned} \{n\in {\mathbb {N}}:z(n)\notin U(m)\cup \{0\} \} \end{aligned}$$

has zero density in \({\mathbb {N}}\), that is,

$$\begin{aligned} \lim \limits _{N\rightarrow +\infty } \frac{\sharp \{0\le n\le N-1: z(n) \not \in U(m)\cup \{0\} \}}{N}=0. \end{aligned}$$

We will show some examples to illustrate this definition.

Examples 3.2

  1. (1)

    Every sequence \((z(n))_{n\in {\mathbb {N}}}\in (U(m)\cup \{0\})^{{\mathbb {N}}}\) has index at most m. In particular, every sequence \((z(n))_{n\in {\mathbb {N}}}\in \{-1,0,1\}^{{\mathbb {N}}}\) has index 2.

  2. (2)

    For \(\alpha \in {\mathbb {R}}{\setminus } {\mathbb {Q}}\), the sequence \((e^{2\pi i n\alpha })_{n\in {\mathbb {N}}}\) has index \(\infty \). In general, every sequence \((z(n))_{n\in {\mathbb {N}}}\) taking values in \(S^1{\setminus } \{e^{2\pi i a}: a\in {\mathbb {Q}} \}\) has index \(\infty \).

  3. (3)

    For any non-integer number \(\beta \), the sequence \((e^{2\pi i \beta ^n})_{n\in {\mathbb {N}}}\) has index \(\infty \). In fact, if \(\beta \) is irrational, then the set \(\{n\in {\mathbb {N}}: \beta ^n\in {\mathbb {Q}}\}\) is contained in \(d{\mathbb {N}}\) for some integer \(d\ge 2\) (it is closed under addition and under taking positive differences, and \(d=1\) would force \(\beta \in {\mathbb {Q}}\)); hence for every finite m the set \(\{n\in {\mathbb {N}}: e^{2\pi i\beta ^n}\notin U(m)\cup \{0\}\}\) has density at least \(1-\frac{1}{d}>0\), so the index is \(\infty \). If \(\beta \in {\mathbb {Q}}{\setminus } {\mathbb {Z}}\), say \(\beta =p/q\) with \(q>1\) and \(\gcd (p,q)=1\), then \(e^{2\pi i \beta ^n}\in U(q^n)\) and \(e^{2\pi i \beta ^n}\notin U(m)\) whenever \(q^n\nmid m\); for a fixed finite m this excludes all but finitely many n, which again implies that the index of the sequence \((e^{2\pi i \beta ^n})_{n\in {\mathbb {N}}}\) is \(\infty \).
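From a finite prefix one can only estimate the index, since the definition is a density statement about the whole sequence. The following Python sketch (ours, not part of the paper) measures, for a few candidate values of m, the proportion of terms lying outside \(U(m)\cup \{0\}\), in the spirit of Examples 3.2; it is a finite-sample heuristic only.

```python
# A hedged sketch (not from the paper): estimating the index of a finite prefix of a
# sequence in (S^1 u {0})^N.  For each candidate m we measure the proportion of terms
# outside U(m) u {0}, up to a numerical tolerance.
import numpy as np

def outside_proportion(z, m, tol=1e-9):
    z = np.asarray(z, dtype=complex)
    is_zero = np.abs(z) < tol
    in_Um = (np.abs(np.abs(z) - 1.0) < tol) & (np.abs(z**m - 1.0) < tol)  # z lies in U(m)
    return float(np.mean(~(is_zero | in_Um)))

if __name__ == "__main__":
    n = np.arange(10_000)
    samples = {
        "(-1)^n, values in {-1,1}": np.cos(np.pi * n),              # index 2
        "e^{2 pi i n sqrt(2)}":     np.exp(2j * np.pi * np.sqrt(2.0) * n),  # index infinity
    }
    for name, z in samples.items():
        print(name, {m: round(outside_proportion(z, m), 3) for m in (2, 3, 4, 6)})
```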

The sequences in \((S^1\cup \{0\})^{{\mathbb {N}}}\) are naturally divided into two classes according to whether their index is finite or infinite. We first define the Chowla property for sequences of finite index.

Definition 3.3

Suppose that a sequence \(z\in (S^1\cup \{0\})^{{\mathbb {N}}}\) has index \(m\in {\mathbb {N}}_{\ge 2}\). We say that the sequence z has the Chowla property if

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n=0}^{N-1} z^{i_0}(n+a_0)\cdot z^{i_1}(n+a_1)\cdot \dots \cdot z^{i_r}(n+a_r)=0, \end{aligned}$$
(3.1)

for each choice of \(0\le a_0<a_1< \dots <a_r\), \(r\ge 0\), \(i_s\in \{1,2,\dots , m\}\) not all equal to m.

Obviously, a sequence taking values in \(\{-1,0,1\}\) has the Chowla property if and only if it satisfies the condition \(\mathrm{(Ch)}\) defined in [6].

Hereafter, it will be convenient to use the convention \(0^n=0\) for every non-positive integer n.

Definition 3.4

Suppose that a sequence \(z\in (S^1\cup \{0\})^{{\mathbb {N}}}\) has infinite index. We say that the sequence z has the Chowla property if

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n=0}^{N-1} z^{i_0}(n+a_0)\cdot z^{i_1}(n+a_1)\cdot \dots \cdot z^{i_r}(n+a_r)=0, \end{aligned}$$
(3.2)

for each choice of \(0\le a_0<a_1< \dots <a_r\), \(r\ge 0\), \(i_s\in {\mathbb {Z}}\) not all equal to 0.

A unified definition of the Chowla property, covering both the finite-index and the infinite-index case, will be given in Sect. 5.1.

In analogy with the fact that the condition \(\mathrm{(Ch)}\) implies the condition \(\mathrm{(S_0)}\), we have the following proposition.

Proposition 3.5

The Chowla property implies the Sarnak property.

The essential idea of the proof of Proposition 3.5 comes from [6]. For the sake of completeness, we will present the proof of Proposition 3.5 in Sect. 7.

We remark here that a sequence satisfying (3.2) only for choices of \(0\le a_0<a_1< \dots <a_r\), \(r\ge 0\), \(i_s\in {\mathbb {N}}\) not all equal to 0, may fail to have the Sarnak property. Indeed, consider the sequence \(z=(e^{2\pi i n\alpha })_{n\in {\mathbb {N}}}\) for an arbitrary \(\alpha \in {\mathbb {R}}{\setminus } {\mathbb {Q}}\). For each choice of \(0\le a_0<a_1< \dots <a_r\), \(r\ge 0\), \(i_s\in {\mathbb {N}}\) not all equal to 0, we have

$$\begin{aligned} z^{i_0}(n+a_0)\cdot z^{i_1}(n+a_1)\cdot \dots \cdot z^{i_r}(n+a_r)=e^{2\pi i (n \alpha \sum _{s=0}^{r}i_s + \alpha \sum _{s=0}^{r} i_sa_s)}. \end{aligned}$$

Since \(\sum _{s=0}^{r}i_s>0\), we have \(\alpha \sum _{s=0}^{r}i_s\in {\mathbb {R}}{\setminus }{\mathbb {Q}}\). It follows that

$$\begin{aligned} \begin{aligned}&\lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n=0}^{N-1} z^{i_0}(n+a_0)\cdot z^{i_1}(n+a_1)\cdot \dots \cdot z^{i_r}(n+a_r)\\&\quad =\lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n=0}^{N-1} e^{2\pi i (n \alpha \sum _{s=0}^{r}i_s + \alpha \sum _{s=0}^{r} i_sa_s)} =0. \end{aligned} \end{aligned}$$

The last equality is due to the equidistribution of \((\{ n\alpha \sum _{s=0}^{r}i_s\})_{n\in {\mathbb {N}}}\) on [0, 1] and Weyl's criterion, where \(\{x \}\) denotes the fractional part of the real number x. Nevertheless, z is not orthogonal to the zero-entropy rotation \(x\mapsto x+\alpha \) on the circle, since \(\frac{1}{N}\sum _{n=0}^{N-1}e^{-2\pi i(x+n\alpha )}z(n)=e^{-2\pi ix}\) for every N; hence z does not have the Sarnak property.
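The computation in the preceding remark is easy to reproduce numerically. The following Python sketch (ours, not part of the paper) computes empirical correlation averages, with the convention \(0^i=0\); for \(z(n)=e^{2\pi in\alpha }\), exponents in \({\mathbb {N}}\) give a vanishing average, while the integer exponents \((1,-1)\) at shifts (0, 1) give a constant of modulus 1.

```python
# A hedged numerical illustration (not from the paper) of the remark above.  The helper
# computes the empirical average (1/N) sum_n z^{i_0}(n+a_0) ... z^{i_r}(n+a_r).
import numpy as np

def chowla_average(z, shifts, exponents):
    z = np.asarray(z, dtype=complex)
    N = len(z) - max(shifts)
    prod = np.ones(N, dtype=complex)
    for a, i in zip(shifts, exponents):
        w = z[a:a + N]
        nz = np.abs(w) > 1e-12
        term = np.zeros(N, dtype=complex)
        term[nz] = w[nz] ** i                 # 0^i := 0 for every integer i
        prod *= term
    return prod.mean()

if __name__ == "__main__":
    n = np.arange(1_000_000)
    z = np.exp(2j * np.pi * np.sqrt(2.0) * n)
    print(abs(chowla_average(z, (0, 1), (1, 1))))    # exponents in N: close to 0
    print(abs(chowla_average(z, (0, 1), (1, -1))))   # exponents in Z: equals 1
```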

In what follows, a sequence will be called a Chowla sequence (resp. Sarnak sequence) if it has the Chowla property (resp. the Sarnak property).

4 Sequences \((e^{2\pi i \beta ^ng(\beta )})_{n\in {\mathbb {N}}}\) for \(\beta >1\)

In this section, we prove that the sequences \((e^{2\pi i \beta ^n})_{n\in {\mathbb {N}}}\) are Chowla sequences for almost all \(\beta >1\). Actually, we prove that a more general class of sequences are Chowla sequences. To this end, we study the distribution of a given sequence \((e^{2\pi i x(n)})_{n\in {\mathbb {N}}}\).

In fact, the question of the distribution of the sequence \((e^{2\pi i x(n)})_{n\in {\mathbb {N}}}\) on the circle reduces to that of the sequence \((\{x(n)\})_{n\in {\mathbb {N}}}\) on the unit interval [0, 1). Recall that a sequence \((x(n))_{n\in {\mathbb {N}}}\) of real numbers is said to be uniformly distributed modulo one if, for all real numbers u, v with \(0\le u< v\le 1,\) we have

$$\begin{aligned} \lim \limits _{N\rightarrow \infty } \frac{\sharp \{n:1\le n\le N, u\le \{x_n\}\le v \}}{N}=v-u. \end{aligned}$$
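As a quick empirical illustration (ours, not part of the paper), the sketch below compares the proportion of fractional parts falling into subintervals with the lengths of those subintervals, for one sequence that is uniformly distributed modulo one and one that is not.

```python
# A small sketch (not from the paper): an empirical look at uniform distribution
# modulo one, comparing bin frequencies of fractional parts with the uniform weights.
import numpy as np

def ud_discrepancy(x, bins=20):
    frac = np.mod(x, 1.0)
    hist, _ = np.histogram(frac, bins=bins, range=(0.0, 1.0))
    return np.max(np.abs(hist / len(x) - 1.0 / bins))

if __name__ == "__main__":
    n = np.arange(1, 100_001)
    print("n*sqrt(2):", ud_discrepancy(np.sqrt(2.0) * n))   # uniformly distributed (Weyl)
    print("n/3      :", ud_discrepancy(n / 3.0))            # not uniformly distributed
```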

For a given real number \(\beta >1\), little is known about the distribution of the sequence \((\beta ^n)_{n\in {\mathbb {N}}}\) modulo one. For example, no explicit \(\beta >1\) is known for which the sequence \((\beta ^n)_{n\in {\mathbb {N}}}\) is uniformly distributed modulo one. However, several metric statements have been established. The first one was obtained by Koksma [12], who proved that for almost every \(\beta >1\) the sequence \((\beta ^n)_{n\in {\mathbb {N}}}\) is uniformly distributed modulo one. Actually, Koksma [12] proved the following theorem.

Theorem 4.1

(Koksma’s theorem). Let \((f_n)_{n\in {\mathbb {N}}}\) be a sequence of real-valued \(C^1\) functions defined on an interval \([a,b]\subset {\mathbb {R}}\). Suppose that

  1. (1)

    \(f_m'-f_n'\) is monotone on [a, b] for all \(m\not =n\);

  2. (2)

    \(\inf _{m\not =n}\min _{x\in [a,b]} |f_m'(x)-f_n'(x)|>0\).

Then for almost every \(x\in [a,b]\), the sequence \((f_n(x))_{n\in {\mathbb {N}}}\) is uniformly distributed modulo one on the unit interval [0, 1).

Akiyama and Jiang [1] proved that for almost every \(\beta >1\) the sequence \((e^{2\pi i \beta ^n})_{n\in {\mathbb {N}}}\) is fully oscillating. Actually, they proved the following theorem.

Theorem 4.2

([1], Corollary 2). Let \(\alpha \not = 0\) be a real number and let \(g\in C^2_+((1,\infty ))\), where \(C^2_+((1,+\infty ))\) is the space of all positive twice continuously differentiable functions on \((1,+\infty )\) whose i-th derivative is non-negative for \(i=1,2\). Then for almost every real number \(\beta >1\) and for every real polynomial P, the sequence

$$\begin{aligned} \left( e^{2\pi i (\alpha \beta ^n g(\beta )+P(n))}\right) _{n\in {\mathbb {N}}} \end{aligned}$$

is fully oscillating.

The main step of their proof is to show the uniform distribution modulo one of the sequence \((\alpha \beta ^n g(\beta ))_{n\in {\mathbb {N}}}\) by using Koksma’s theorem, for a given \(\alpha \), a function \(g\in C^2_+((1,\infty ))\) and almost every real number \(\beta >1\).

El Abdalaoui [5] also studied this class of sequences and proved that they satisfy the following property:

For any \(\alpha \not =0\), for any \(g\in C^2_+((1,\infty ))\), for any real polynomial P and for any increasing sequence of integers \(0=b_0<b_1<b_2<\cdots \) with \(b_{k+1}-b_k\rightarrow \infty \) as k tends to infinity, for almost all real numbers \(\beta >1\) and for any \(\ell \not =0\), we have

$$\begin{aligned} \lim \limits _{K\rightarrow \infty }\frac{1}{b_{K}} \sum _{k=0}^{K} \left| \sum _{n=b_k}^{b_{k+1}-1} e^{2\pi i \ell (\alpha \beta ^n g(\beta )+P(n)) } \right| =0. \end{aligned}$$
(4.1)

Now we state our result: for almost every \(\beta >1\), the sequence \((e^{2\pi i \beta ^n})_{n\in {\mathbb {N}}}\) is a Chowla sequence, which strengthens the results presented above for this sequence. Actually, we prove the following theorem, which asserts that a more general class of sequences are Chowla sequences.

Theorem 4.3

Let \(g\in C^2_{count}\), where the space \(C^2_{count}\) consists of all \(C^2\) functions which have at most countably many zeros on \((1,+\infty )\). Then for almost all real numbers \(\beta >1\), the sequences

$$\begin{aligned} \left( e^{2\pi i \beta ^n g(\beta )} \right) _{n\in {\mathbb {N}}} \end{aligned}$$
(4.2)

are Chowla sequences.
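Theorem 4.3 can be probed numerically. The sketch below (ours, not part of the paper) computes one empirical correlation of \((e^{2\pi i\beta ^n})_{n\in {\mathbb {N}}}\) for an arbitrarily chosen \(\beta \); high-precision arithmetic (here the third-party mpmath package) is needed because double precision quickly loses the fractional part of \(\beta ^n\). Nothing is proved for this particular \(\beta \); the theorem only guarantees the behaviour for almost every \(\beta >1\).

```python
# A hedged numerical check of Theorem 4.3 with g = 1 (not from the paper).
# Requires the mpmath package; beta and the correlation pattern are illustrative.
import numpy as np
from mpmath import mp, mpf, floor

def chowla_correlation(beta_str, shifts, exponents, N=2000):
    mp.dps = 400                               # plenty for beta ~ 1.23 and N ~ 2000
    beta = mpf(beta_str)
    powers, p = [], mpf(1)
    for _ in range(N + max(shifts) + 1):
        powers.append(float(p - floor(p)))     # fractional part of beta^n
        p *= beta
    t = np.array(powers)
    total = 0.0 + 0.0j
    for n in range(N):
        phase = sum(i * t[n + a] for a, i in zip(shifts, exponents))
        total += np.exp(2j * np.pi * phase)
    return total / N

if __name__ == "__main__":
    # shifts (a_0, a_1, a_2) = (0, 1, 3), exponents (i_0, i_1, i_2) = (1, -2, 1), not all zero
    print(abs(chowla_correlation("1.2345678901", (0, 1, 3), (1, -2, 1))))   # expected to be small
```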

The main tool to prove Theorem 4.3 is the following technical lemma and the auxiliary lemma of Davenport, Erdős and LeVeque [3]:

Lemma 4.4

([13], Lemma 2.1) Let f be a \(C^2\) real-valued function on the interval [ab]. Suppose that \(f'(x)\) is monotone on [ab] and there is \(\lambda >0\) such that

$$\begin{aligned} \min _{x\in [a,b]} |f'(x)|>\lambda \end{aligned}$$

Then there exists \(C>0\) such that

$$\begin{aligned} \left| \int _a^b e^{2\pi i f(x)} dx \right| < \frac{C}{\lambda }. \end{aligned}$$

Lemma 4.5

Let \((X_n)_{n\in {\mathbb {N}}}\) be a sequence of bounded random variables. Suppose that

$$\begin{aligned} \sum _{N=1}^{\infty } \frac{1}{N} {\mathbb {E}}\left| \frac{1}{N} \sum _{n=1}^{N} X_n \right| ^2 < +\infty . \end{aligned}$$

Then almost surely

$$\begin{aligned} \lim \limits _{N\rightarrow +\infty } \frac{1}{N} \sum _{n=1}^{N} X_n =0. \end{aligned}$$

The following lemma is not only of independent interest but is also the key step in the proof of Theorem 4.3.

Lemma 4.6

Let g be a positive \(C^2\)-function on a compact interval \(J\subset (1,+\infty )\). Then for almost every real number \(\beta \in J\), the sequence \((\beta ^{n}g(\beta ))_{n\in {\mathbb {N}}}\) is uniformly distributed modulo 1.

Proof

Write \(f_n(x)=x^n g(x)\). A simple calculation shows that

$$\begin{aligned} (f_m(x)-f_n(x))'=\left( (mx^{m-1}-nx^{n-1})g(x)+(x^m-x^n)g'(x) \right) , \end{aligned}$$
(4.3)

for all \(m,n\in {\mathbb {N}}\). For given \(m>n\), we deduce that for all \(x>1\),

$$\begin{aligned} \begin{aligned} \left| \frac{(f_m(x)-f_n(x))'}{x^mg(x)}\right| =&\left| \frac{m}{x}-\frac{n}{x^{m-n+1}}+\left( 1-\frac{1}{x^{m-n}}\right) \frac{g'(x)}{g(x)}\right| \\ \ge&\left( \frac{m}{x}-\frac{n}{x^{m-n+1}}-\left( 1-\frac{1}{x^{m-n}}\right) \left| \frac{g'(x)}{g(x)}\right| \right) \\ \ge&\left( \frac{m(x-1)}{x^{2}}-\left| \frac{g'(x)}{g(x)}\right| \right) . \end{aligned} \end{aligned}$$
(4.4)

Since \(J\subset (1, +\infty )\) is a compact interval, the function \(\left| \frac{g'(x)}{g(x)}\right| \) is bounded and the function \(\frac{(x-1)}{x^{2}}\) has a positive lower bound on J. Hence the right-hand side of (4.4) tends to \(+\infty \) when m tends to \(+\infty \). It follows that there is \(M_1>0\) such that

$$\begin{aligned} \begin{aligned} \epsilon :=\inf _{m\not =n\ge M_1} \min _{x\in J}|(f_m(x)-f_n(x))'|>0. \end{aligned} \end{aligned}$$
(4.5)

On the other hand, we have

$$\begin{aligned} \begin{aligned} (f_m'(x)-f_n'(x))'=&\,( (m(m-1)x^{m-2}-n(n-1)x^{n-2})g(x)\\&+2(mx^{m-1}-nx^{n-1})g'(x)+(x^m-x^n)g''(x) ). \end{aligned} \end{aligned}$$
(4.6)

Similarly to calculation (4.4), we have

$$\begin{aligned} \begin{aligned}&\left| \frac{(f_m'(x)-f_n'(x))'}{x^mg(x)}\right| \\&\quad \ge \left( \frac{m(m-1)(x-1)}{x^3} -\frac{2m}{x}\left| \frac{g'(x)}{g(x)}\right| -\left| \frac{g''(x)}{g(x)}\right| \right) . \end{aligned} \end{aligned}$$
(4.7)

Since the functions \(\frac{1}{x}|\frac{g'(x)}{g(x)}|\) and \(\left| \frac{g''(x)}{g(x)}\right| \) are bounded on J and the function \(\frac{x-1}{x^3}\) has a positive lower bound on J, the right-hand side of (4.7) tends to \(+\infty \) when m tends to \(+\infty \). It follows that there is \(M_2>0\) such that

$$\begin{aligned} \begin{aligned} \inf _{m\not =n\ge M_2} \min _{x\in J}|(f_m'(x)-f_n'(x))'|>0. \end{aligned} \end{aligned}$$
(4.8)

This implies that \(f_m'(x)-f_n'(x)\) is monotone on J for \(m\not =n\ge M_2\). Moreover, by (4.4) the derivatives \((f_{j+1}-f_j)'\) all have the same sign on J for \(j\ge M_1\); writing \(f_m'-f_n'=\sum _{j=n}^{m-1}(f_{j+1}'-f_j')\) and using (4.5), we deduce that

$$\begin{aligned} |f_m'(x)-f_n'(x)|\ge \epsilon |m-n|, ~\forall ~n\not =m\ge M, \end{aligned}$$
(4.9)

where \(M=\max \{M_1, M_2\}\).

Combining (4.4), (4.7), (4.9) and Lemma 4.4, we obtain that there exists a constant \(C>0\) such that

$$\begin{aligned} \left| \int _{J} e^{2\pi i (f_n(x)-f_m(x))} dx \right| \le \frac{C}{\epsilon |m-n|}, ~\forall ~n\not =m\ge M . \end{aligned}$$
(4.10)

Let \(k\in {\mathbb {Z}}{\setminus }\{0\}\). Let \(X_n(\beta )=e^{2\pi ik\beta ^{n+M}g(\beta )}\). It follows from (4.10) that for any \(n\not =m\), we have

$$\begin{aligned} \left| {\mathbb {E}}[X_m{\overline{X}}_n]\right| =\left| \int _{J} e^{2\pi i k\left( f_{n+M}(x)-f_{m+M}(x)\right) } dx\right| \le \frac{C}{\epsilon |k||m-n|}. \end{aligned}$$
(4.11)

Thus we have

$$\begin{aligned} \begin{aligned} \sum _{N=1}^{\infty } \frac{1}{N} {\mathbb {E}}\left| \frac{1}{N} \sum _{n=1}^{N} X_n \right| ^2&\le \sum _{N=1}^{\infty } \frac{1}{N^3} \left( |J|N+ \sum _{1\le n\not =m\le N}\frac{C}{\epsilon |k||m-n|} \right) \\&\le \sum _{N=1}^{\infty } \frac{1}{N^3} \left( |J|N+ \frac{2CN\log N}{\epsilon |k|} \right) \\&=\sum _{N=1}^{\infty }\left( \frac{|J|}{N^2}+\frac{2C\log N}{\epsilon |k|N^2} \right) <+\infty . \end{aligned} \end{aligned}$$
(4.12)

Due to Lemma 4.5, we deduce that for almost every \(\beta \in J\),

$$\begin{aligned} \lim \limits _{N\rightarrow +\infty } \frac{1}{N} \sum _{n=1}^{N} e^{2\pi ik\beta ^{n+M}g(\beta )} =0. \end{aligned}$$

Since the integer k was chosen arbitrarily, we complete the proof by using Weyl's criterion. \(\square \)

Now we are in a position to prove Theorem 4.3.

Proof of Theorem 4.3

Let \(c_\beta (n)=e^{2 \pi i \beta ^{n} g(\beta )}\). Fix a choice of \(0\le a_0<a_1< \dots <a_r\), \(r\ge 0\), \(i_s\in {\mathbb {Z}}\) not all equal to 0. Then we have

$$\begin{aligned} c_\beta ^{i_0}(n+a_0)\cdot c_\beta ^{i_1}(n+a_1)\cdot \dots \cdot c_\beta ^{i_r}(n+a_r)=e^{2\pi i \beta ^n h(\beta )}, \end{aligned}$$
(4.13)

where \(h(\beta )=g(\beta )\sum _{j=0}^{r} i_j \beta ^{a_j}\). Since not all the \(i_s\), \(0\le s\le r\), are equal to zero, \(\sum _{j=0}^{r} i_j x^{a_j}\) is a non-trivial real polynomial. From this, we deduce that the function h also belongs to the space \(C^2_{count}\). It follows that for any \(\epsilon >0\) there is a union of disjoint compact intervals \(\sqcup _{i\in I} J_i^{\epsilon }\subset (1, +\infty )\) such that

  1. (1)

    the index set I is at most countable,

  2. (2)

    the function h does not vanish on \(J_i^{\epsilon }\),

  3. (3)

    the Lebesgue measure of the set \((1, +\infty ){\setminus }\sqcup _{i\in I} J_i^{\epsilon }\) is bounded above by \(\epsilon \).

By (4.13) and Lemma 4.6, we have that for every \(i\in I\) and for almost every \(\beta \in J_i^{\epsilon }\),

$$\begin{aligned} \lim \limits _{N\rightarrow +\infty }\frac{1}{N}\sum _{n=1}^{N}c_\beta ^{i_0}(n+a_0)\cdot c_\beta ^{i_1}(n+a_1)\cdot \dots \cdot c_\beta ^{i_r}(n+a_r)=0. \end{aligned}$$

By (1), (2), (3) and the arbitrariness of \(\epsilon \), this limit holds for almost every \(\beta >1\). Since there are only countably many choices of the \(a_s\) and the \(i_s\), this completes the proof. \(\square \)

4.1 Further discussion

Putting \(g\equiv 1\) in Theorem 4.3, we obtain that the sequences \((e^{2\pi i \beta ^n})_{n\in {\mathbb {N}}}\) are Chowla sequences for almost all \(\beta >1\). By Proposition 3.5, the sequences \((e^{2\pi i \beta ^n})_{n\in {\mathbb {N}}}\) are orthogonal to all dynamical systems of zero entropy for almost all \(\beta >1\), which improves the result in [1]. Moreover, by Proposition 7.4, the sequences \((e^{2\pi i \beta ^n})_{n\in {\mathbb {N}}}\) are strongly orthogonal on moving orbits to all dynamical systems of zero entropy for almost all \(\beta >1\), which improves the result in [5]. On the other hand, by Corollary 7.14 in Sect. 7.2, these Chowla sequences \((e^{2\pi i \beta ^n})_{n\in {\mathbb {N}}}\) are not strongly orthogonal on moving orbits to any dynamical system of positive entropy.

It is clear that a Chowla sequence (or even an oscillating sequence of order 1) is aperiodic. By Theorem 3.1 in [4], we obtain that for almost every \(\beta >1\), there exists a sequence of dynamical systems \((X_n, T_n)\) with entropy \(h(T_n)\rightarrow \infty \) as \(n \rightarrow \infty \) such that the sequence \((e^{2\pi i \beta ^n})_{n\in {\mathbb {N}}}\) is orthogonal to \((X_n, T_n)\) for all \(n\in {\mathbb {N}}\).

We remark that if \(\beta >1\) is an algebraic number, then the sequence \((e^{2\pi i \beta ^ng(\beta )})_{n\in {\mathbb {N}}}\) is not a Chowla sequence. In fact, since \(\beta \) is algebraic, there exist r distinct non-negative integers \(a_1, a_2, \dots , a_r\) and r integers \(i_1, i_2, \dots , i_r\), not all equal to 0, such that

$$\begin{aligned} i_1\beta ^{a_1}+i_2\beta ^{a_2}+\dots +i_r\beta ^{a_r}=0. \end{aligned}$$
(4.14)

Write \(z(n)=e^{2\pi i \beta ^ng(\beta )}\). Due to (4.14), we obtain

$$\begin{aligned} z(n+a_1)^{i_1}z(n+a_2)^{i_2}\cdots z(n+a_r)^{i_r}=e^{2\pi i \beta ^ng(\beta )(i_1\beta ^{a_1}+i_2\beta ^{a_2}+\dots +i_r\beta ^{a_r})}=1. \end{aligned}$$

It follows that

$$\begin{aligned} \lim \limits _{N\rightarrow +\infty } \frac{1}{N}\sum _{n=0}^{N-1}z(n+a_1)^{i_1}z(n+a_2)^{i_2}\cdots z(n+a_r)^{i_r}=1. \end{aligned}$$
(4.15)

But the Chowla property requires the limit in (4.15) to be equal to 0. Therefore, the sequence \((e^{2\pi i \beta ^ng(\beta )})_{n\in {\mathbb {N}}}\) is not a Chowla sequence for any algebraic number \(\beta >1\).
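The obstruction described above is easy to observe numerically. In the sketch below (ours, not part of the paper) we take the algebraic number \(\beta =3/2\), which satisfies \(2\beta -3=0\), and use exact rational arithmetic for the fractional parts of \(\beta ^n\); the correlation attached to this integer relation stays equal to 1, while a pattern not coming from an integer relation is empirically small (nothing is proved for this specific \(\beta \)).

```python
# A hedged sketch (not from the paper) illustrating the remark above for beta = 3/2.
from fractions import Fraction
import cmath

beta = Fraction(3, 2)                          # algebraic: 2*beta - 3 = 0
N = 500
powers = [beta ** n for n in range(N + 2)]     # exact values of beta^n

def correlation(shifts, exponents):
    total = 0j
    for n in range(N):
        phase = sum(i * powers[n + a] for a, i in zip(shifts, exponents))
        total += cmath.exp(2j * cmath.pi * float(phase % 1))   # only the fractional part matters
    return total / N

if __name__ == "__main__":
    # the relation 2*beta - 3 = 0 corresponds to shifts (0, 1) and exponents (-3, 2)
    print(abs(correlation((0, 1), (-3, 2))))   # stays equal to 1: the Chowla property fails
    # a pattern with 1 + beta != 0; empirically small, though unproved for beta = 3/2
    print(abs(correlation((0, 1), (1, 1))))
```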

5 The Chowla property for random sequences

In this section, we study sequences of random variables having the Chowla property almost surely. We give a necessary and sufficient condition for a stationary random sequence to be a Chowla sequence almost surely. We will show that stationary random Chowla sequences of infinite index must be independent. We also construct some random Sarnak sequences which are not Chowla sequences.

For convenience, we first show a unified definition of Chowla sequences.

5.1 Chowla sequences

Now we want to give a unification of Definitions 3.3 and 3.4. To this end, we revisit the functions \(x\mapsto x^{n}\) for \(n\in {\mathbb {Z}}\) (with the convention \(0^n=0\)). For a fixed \(m\in {\mathbb {N}}_{\ge 2}\), it is easy to check that

$$\begin{aligned} x^{i}=x^{j}~\text {for all}~x\in U(m)\cup \{0\}\Longleftrightarrow i\equiv j \mod m. \end{aligned}$$

We define

$$\begin{aligned} I(m):={\left\{ \begin{array}{ll} \{0,1,\dots ,m-1 \}, &{}~\text {if}~m\in {\mathbb {N}}_{\ge 2}; \\ {\mathbb {Z}}, &{}~\text {if}~m=\infty , \end{array}\right. } \end{aligned}$$

which parametrizes the distinct functions of the form \(x\mapsto x^{n}\), \(n\in {\mathbb {Z}}\), restricted to \(U(m)\cup \{0\}\). This leads us to combine Definitions 3.3 and 3.4.

Definition 5.1

Suppose that a sequence \(z\in (S^1\cup \{0\})^{{\mathbb {N}}}\) has index \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\). We say that the sequence z has the Chowla property if

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n=0}^{N-1} z^{i_0}(n+a_0)\cdot z^{i_1}(n+a_1)\cdot \dots \cdot z^{i_r}(n+a_r)=0, \end{aligned}$$
(5.1)

for each choice of \(0\le a_0<a_1< \dots <a_r\), \(r\ge 0\), \(i_s\in I(m)\) not all equal to 0.

5.2 The Chowla property for stationary random sequences

Here, we study sequences of stationary random variables.

Proposition 5.2

Let \({\mathcal {Z}}=(Z_n)_{n\in {\mathbb {N}}}\) be a stationary sequence of random variables taking values in \(S^1\cup \{0\}\). Suppose that the random sequence \({\mathcal {Z}}\) has index \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\) almost surely. Then almost surely the random sequence \({\mathcal {Z}}\) is a Chowla sequence if and only if

$$\begin{aligned} {\mathbb {E}}[Z_{a_1}^{i_1}\cdot Z_{a_2}^{i_2}\cdot \dots \cdot Z_{a_r}^{i_r}]=0, \end{aligned}$$
(5.2)

for each choice of \(1\le a_1< \dots < a_r, r\ge 0, i_s\in I(m)\) not all equal to 0.

Proof

\(``\Rightarrow ''\) Since the random sequence \({\mathcal {Z}}\) is stationary, it can be realized as a sequence \((f\circ T^n)_{n\in {\mathbb {N}}}\) for some measure-preserving transformation T of a probability space \((X,{\mathcal {C}},{\mathbb {P}})\) and some measurable function \(f:X\rightarrow S^1\cup \{0\}\). By Birkhoff's ergodic theorem, for any \(1\le a_1< \dots < a_r\), \(r\ge 0\), \(i_s\in I(m)\) not all equal to 0, almost surely we have

$$\begin{aligned}&{\mathbb {E}}[Z_{a_1}^{i_1}\cdot Z_{a_2}^{i_2}\cdot \dots \cdot Z_{a_r}^{i_r}|{\mathcal {I}}]\\&\quad =\lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n\le N} Z_{n+a_1}^{i_1}\cdot Z_{n+a_2}^{i_2}\cdot \dots \cdot Z_{n+a_r}^{i_r}=0, \end{aligned}$$

where \({\mathbb {E}}[f|{{\mathcal {I}}}]\) is the expectation of f conditional on the \(\sigma \)-algebra \({\mathcal {I}}\) consisting of invariant sets of T. Thus we conclude that

$$\begin{aligned} {\mathbb {E}}[Z_{a_1}^{i_1}\cdot Z_{a_2}^{i_2}\cdot \dots \cdot Z_{a_r}^{i_r}]={\mathbb {E}}[{\mathbb {E}}[Z_{a_1}^{i_1}\cdot Z_{a_2}^{i_2}\cdot \dots \cdot Z_{a_r}^{i_r}|{\mathcal {I}}]]=0, \end{aligned}$$

for any \(1\le a_1< \dots < a_r, r\ge 0, i_s\in I(m)\) not all equal to 0.

\(``\Leftarrow ''\) Fix a choice of \(1\le a_1< \dots < a_r\), \(r\ge 0\), \(i_s\in I(m)\) not all equal to 0. Let \(s_0=\min \{1\le j\le r: i_j\not =0\}\). Define the random variables

$$\begin{aligned} Y_n:=Z_{a_1+n}^{i_1}\cdot Z_{a_2+n}^{i_2}\cdot \dots \cdot Z_{a_r+n}^{i_r}, ~~\text {for all}~~n\in {\mathbb {N}}. \end{aligned}$$

Then for \(k\ge 1\) and \(n\ge 1\), the product \(Y_n\cdot Y_{n+k}\) has the form

$$\begin{aligned} Z_{b_1+n}^{i_1'}\cdot Z_{b_2+n}^{i_2'}\cdot \dots \cdot Z_{b_q+n}^{i_q'}, \end{aligned}$$
(5.3)

where \(b_1<b_2<\dots <b_q\) enumerate the shifts \(\{a_j: 1\le j\le r\}\cup \{a_j+k: 1\le j\le r\}\) and each exponent \(i_t'\in I(m)\) is the sum (mod m) of the exponents contributed at the shift \(b_t\) by \(Y_n\) and \(Y_{n+k}\). The exponent attached to the shift \(a_{s_0}\) equals \(i_{s_0}\not =0\): the only possible contribution from \(Y_{n+k}\) at this shift would come from an index l with \(a_l+k=a_{s_0}\), hence \(l<s_0\) and \(i_l=0\). It follows that the expectation of (5.3) vanishes by hypothesis (5.2), that is,

$$\begin{aligned} {\mathbb {E}}[Y_nY_{n+k}]=0, ~\text {for all}~ n,k\ge 1. \end{aligned}$$

This means that the random sequence \((Y_n)_{n\in {\mathbb {N}}}\) is orthogonal. Therefore almost surely we have

$$\begin{aligned} \lim \limits _{N\rightarrow \infty } \frac{1}{N}\sum _{n=0}^{N-1} Y_n=0, \end{aligned}$$

i.e.

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n\le N} Z_{n+a_1}^{i_1}\cdot Z_{n+a_2}^{i_2}\cdot \dots \cdot Z_{n+a_r}^{i_r}=0. \end{aligned}$$

Since there are only countably many choices of \(1\le a_1< \dots < a_r, r\ge 0, i_s\in I(m)\) not all equal to 0, we deduce that almost surely

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n\le N} Z_{n+a_1}^{i_1}\cdot Z_{n+a_2}^{i_2}\cdot \dots \cdot Z_{n+a_r}^{i_r}=0, \end{aligned}$$

for all choices of \(1\le a_1< \dots < a_r, r\ge 0, i_s\in I(m)\) not all equal to 0. \(\square \)

We remark that if we drop the assumption that the random sequence \((Z_n)_{n\in {\mathbb {N}}}\) is stationary, then the proof of the sufficiency in Proposition 5.2 is still valid.

If additionally we suppose the random variables are independent and identically distributed, then we have the following equivalent condition of the Chowla property for random sequences.

Corollary 5.3

Suppose that \({\mathcal {Z}}=(Z_n)_{n\in {\mathbb {N}}}\) is a sequence of independent and identically distributed random variables taking values in \(S^1\cup \{0\}\). Let \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\). Suppose that \({\mathcal {Z}}\) has index m almost surely. Then \({\mathcal {Z}}\) is a Chowla sequence almost surely if and only if \({\mathbb {E}}[Z_1^{i}]=0\) for all \(i\in I(m){\setminus } \{0\}\).

Proof

Since \({\mathcal {Z}}\) is independent and identically distributed, we observe that the equivalent condition (5.2) of the Chowla property in Proposition 5.2 becomes

$$\begin{aligned} {\mathbb {E}}[Z_{a_1}^{i_1}]\cdot {\mathbb {E}}[Z_{a_2}^{i_2}]\cdot \cdots \cdot {\mathbb {E}}[Z_{a_r}^{i_r}]=0, \end{aligned}$$

for each choice of \(1\le a_1< \dots < a_r, r\ge 0, i_s\in I(m)\) not all equal to 0. This is equivalent to saying that \({\mathbb {E}}[Z_1^{i}]=0\) for all \(i\in I(m){\setminus } \{0\}\). \(\square \)

We show an example of sequences of independent and identically distributed random variables having the Chowla property almost surely.

Example 5.4

For any \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\), let \(((U(m)\cup \{0\})^{{\mathbb {N}}}, S, \mu _m^{\otimes {\mathbb {N}}})\) be a measurable dynamical system. Recall that \(F: (U(m)\cup \{0\})^{{\mathbb {N}}}\rightarrow (U(m)\cup \{0\})\) is the projection on the first coordinate. Let \(Z_n:=F\circ S^n\). Then \((Z_n)_{n\in {\mathbb {N}}}\) is a sequence of independent and identically distributed random variables, which has index m almost surely. It is easy to check that \({\mathbb {E}}[Z_1^{i}]=0\) for all \(i\in I(m){\setminus } \{0\}\) by Lemma 7.8. Therefore, almost surely the sequence \((Z_n)_{n\in {\mathbb {N}}}\) is a Chowla sequence.
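A quick Monte-Carlo illustration of Example 5.4 (ours, not part of the paper) follows. We assume here, purely for the simulation, that \(\mu _m\) is the uniform measure on U(m); the measure \(\mu _m\) itself is defined elsewhere in the paper, so this is an assumption of the sketch.

```python
# A Monte-Carlo sketch (not from the paper) in the spirit of Example 5.4, assuming
# (for this illustration only) that mu_m is the uniform measure on U(m).
import numpy as np

rng = np.random.default_rng(1)
m, N = 3, 500_000
Z = np.exp(2j * np.pi * rng.integers(0, m, size=N) / m)        # i.i.d., uniform on U(m)

for i in range(1, m):
    print(f"|E[Z^{i}]| ~ {abs(np.mean(Z**i)):.4f}")            # close to 0 for i = 1, ..., m-1

# one empirical Chowla-type correlation: shifts (0, 2), exponents (1, 2)
print(f"|corr|    ~ {abs(np.mean(Z[:-2] * Z[2:]**2)):.4f}")    # also close to 0
```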

A natural question arises: are there sequences of dependent random variables having the Chowla property almost surely? We will prove later (in Sect. 6) that such sequences exist. For this purpose, we additionally prove the following lemma.

Lemma 5.5

Let \(X_1, X_2, \dots , X_k\) be bounded random variables. The random variables \(X_1, \dots , X_k\) are independent if and only if

$$\begin{aligned} {\mathbb {E}}[X_1^{n_1}X_2^{n_2}\dots X_k^{n_k}]={\mathbb {E}}[X_1^{n_1}]{\mathbb {E}}[X_2^{n_2}]\dots {\mathbb {E}}[X_k^{n_k}] \end{aligned}$$
(5.4)

for all positive integers \(n_1,\dots ,n_k\).

Proof

The necessity being trivial, we just prove the sufficiency. A simple computation allows us to see that if

$$\begin{aligned} {\mathbb {E}}[f_{1,i}(X_1)f_{2,i}(X_2)\dots f_{k,i}(X_k)]={\mathbb {E}}[f_{1,i}(X_1)]{\mathbb {E}}[f_{2,i}(X_2)]\dots {\mathbb {E}}[f_{k,i}(X_k)] \end{aligned}$$

holds for functions \(f_{j,i}\) for all \(1\le j\le k\) and for all \(1\le i\le l\), then

$$\begin{aligned} \begin{aligned}&{\mathbb {E}} \left[ \sum _{i=1}^{l}f_{1,i}(X_1)\sum _{i=1}^{l}f_{2,i}(X_2)\dots \sum _{i=1}^{l}f_{k,i}(X_k)\right] \\&\quad ={\mathbb {E}}\left[ \sum _{i=1}^{l}f_{1,i}(X_1)\right] {\mathbb {E}} \left[ \sum _{i=1}^{l}f_{2,i}(X_2)\right] \dots {\mathbb {E}}\left[ \sum _{i=1}^{l}f_{k,i}(X_k)\right] . \end{aligned} \end{aligned}$$
(5.5)

Combining (5.4) and (5.5), we have

$$\begin{aligned} {\mathbb {E}}[P_1(X_1)\dots P_k(X_k)]={\mathbb {E}}[P_1(X_1)]\cdots {\mathbb {E}}[P_k(X_k)], \end{aligned}$$
(5.6)

for all complex polynomials \(P_1, \dots , P_k\). It is well known that the random variables \(X_1, \dots , X_k\) are independent if and only if for any continuous functions \(f_1, f_2, \dots , f_k\),

$$\begin{aligned} {\mathbb {E}}[f_1(X_1)\dots f_k(X_k)]={\mathbb {E}}[f_1(X_1)]\dots {\mathbb {E}}[f_k(X_k)]. \end{aligned}$$
(5.7)

Note that if additionally the random variables \(X_1, \dots , X_k\) are bounded, then “continuous functions” can be replaced by “bounded continuous functions” in the above statement. It is classical that any continuous bounded function is approximated by polynomials under the uniform norm. Due to (5.6) and the approximation by polynomials, we see that (5.7) holds for any bounded continuous functions \(f_1, f_2, \dots , f_k\), which implies that the random variables \(X_1, \dots , X_k\) are independent. \(\square \)

Now we give a necessary and sufficient condition for a stationary sequence of random variables having the Chowla property almost surely to be independent.

Corollary 5.6

Let \((X_n)_{n\in {\mathbb {N}}}\) be a stationary sequence of random variables taking values in \(S^1\cup \{0\}\). Let \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\). Suppose that almost surely the sequence \((X_n)_{n\in {\mathbb {N}}}\) has index m and has the Chowla property. Then

  1. (1)

    if m is finite, then the sequence \((X_n)_{n\in {\mathbb {N}}}\) is independent if and only if

    $$\begin{aligned} {\mathbb {E}}[|X_{a_1}X_{a_2}\cdots X_{a_r}|]={\mathbb {E}}[|X_{a_1}|]{\mathbb {E}}[|X_{a_2}|]\cdots {\mathbb {E}}[|X_{a_r}|] \end{aligned}$$

    for each choice of \(1\le a_1< \dots < a_r, r\ge 0;\)

  2. (2)

    if m is infinite, then the sequence \((X_n)_{n\in {\mathbb {N}}}\) is always independent.

Proof

We first consider the case when the index m is finite. Since the sequence \((X_n)_{n\in {\mathbb {N}}}\) has the Chowla property almost surely, by Proposition 5.2, we obtain that for each choice of \(1\le a_1< \dots < a_r\), \(r\ge 0\), \(i_s\in {\mathbb {N}}{\setminus }\{0\}\) not all multiples of m,

$$\begin{aligned} {\mathbb {E}}[X_{a_1}^{i_1}X_{a_2}^{i_2}\cdots X_{a_r}^{i_r}]=0, \end{aligned}$$
(5.8)

and at least one of \( {\mathbb {E}}[X_{a_1}^{i_1}], {\mathbb {E}}[X_{a_2}^{i_2}], \dots , {\mathbb {E}}[X_{a_r}^{i_r}]\) is equal to zero. It follows that

$$\begin{aligned} {\mathbb {E}}[X_{a_1}^{i_1}]\cdot {\mathbb {E}}[X_{a_2}^{i_2}]\cdot \cdots \cdot {\mathbb {E}}[X_{a_r}^{i_r}]=0={\mathbb {E}}[X_{a_1}^{i_1}X_{a_2}^{i_2}\cdots X_{a_r}^{i_r}], \end{aligned}$$
(5.9)

for each choice of \(1\le a_1< \dots < a_r\), \(r\ge 0\), \(i_s\in {\mathbb {N}}{\setminus }\{0\}\) not all multiples of m. Due to Lemma 5.5 and Eq. (5.9), the sequence \((X_n)_{n\in {\mathbb {N}}}\) is independent if and only if

$$\begin{aligned} {\mathbb {E}}[X_{a_1}^{k_1m}X_{a_2}^{k_2m}\cdots X_{a_r}^{k_rm}]={\mathbb {E}}[X_{a_1}^{k_1m}] {\mathbb {E}}[X_{a_2}^{k_2m}]\cdots {\mathbb {E}}[X_{a_r}^{k_rm}], \end{aligned}$$
(5.10)

for each choice of \(1\le a_1< \dots < a_r\), \(r\ge 0\), \(k_s\in {\mathbb {N}}{\setminus }\{0\}\). It is not hard to see that \(X_n^{km}=|X_n|\) for every \(n\in {\mathbb {N}}\) and every positive integer k. It follows that the sequence \((X_n)_{n\in {\mathbb {N}}}\) is independent if and only if

$$\begin{aligned} {\mathbb {E}}[|X_{a_1}X_{a_2}\cdots X_{a_r}|]={\mathbb {E}}[|X_{a_1}|]{\mathbb {E}}[|X_{a_2}|]\cdots {\mathbb {E}}[|X_{a_r}|], \end{aligned}$$

for each choice of \(1\le a_1< \dots < a_r, r\ge 0.\) This completes the proof of (1).

Now we consider the case when the index m is infinite. Repeating the reasoning leading to (5.9), we obtain

$$\begin{aligned} {\mathbb {E}}[X_{a_1}^{i_1}]\cdot {\mathbb {E}}[X_{a_2}^{i_2}]\cdot \cdots \cdot {\mathbb {E}}[X_{a_r}^{i_r}]={\mathbb {E}}[X_{a_1}^{i_1}X_{a_2}^{i_2}\cdots X_{a_r}^{i_r}]=0, \end{aligned}$$
(5.11)

for each choice of \(1\le a_1< \dots < a_r, r\ge 0, i_s\in {\mathbb {N}}{\setminus }\{0\}\). According to Lemma 5.5 and equation (5.11), the sequence \((X_n)_{n\in {\mathbb {N}}}\) is independent. \(\square \)

5.3 The Chowla property for independent random sequences

Now we study the Chowla property for independent random sequences. A random sequence taking values in \(\cup _{m\in {\mathbb {N}}_{\ge 2}}U(m)\), having index \(\infty \) and having the Chowla property almost surely, is also constructed in this subsection.

As has already been noticed just after Proposition 5.2, the proof of the sufficiency in Proposition 5.2 remains valid without the assumption of stationarity. If we additionally assume that the random sequence is independent, we obtain the following proposition, which is a direct consequence of the sufficiency part of Proposition 5.2.

Proposition 5.7

Let \((X_n)_{n\in {\mathbb {N}}}\) be a sequence of independent random variables which take values in \(S^1\cup \{0\}\). Let \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\). Suppose that almost surely the sequence \((X_n)_{n\in {\mathbb {N}}}\) has index m. Suppose that for any \(k\in I(m){\setminus }\{0\}\), there exists \(N>0\) such that for any \(n>N\), the expectation \({\mathbb {E}}[X_n^{k}]=0\). Then almost surely the sequence \((X_n)_{n\in {\mathbb {N}}}\) is a Chowla sequence.

We remark that if the index m is finite, then the value N in the sufficient condition in Proposition 5.7 is uniform for all \(k\in I(m){\setminus } \{0\}\).

We show the existence of sequences taking values in \(\cup _{m\in {\mathbb {N}}_{\ge 2}}U(m)\), having index \(\infty \) and having the Chowla property almost surely in the following example.

Example 5.8

Let \((Z_n)_{n\ge 2}\) be a sequence of independent random variables taking values in \(S^1\cup \{0\}\) such that \(Z_n\) has distribution \(\mu _n\) for \(n\ge 2\). It follows easily that almost surely the index of \((Z_n)_{n\ge 2}\) is infinite. It is not difficult to see that

$$\begin{aligned} {\mathbb {E}}[Z_n^k]=0,~\text {for all}~n\ge 2 ~\text {and for all}~k\in I(n){\setminus }\{0\}. \end{aligned}$$

Thus the sequence \((Z_n)_{n\ge 2}\) satisfies the condition in Proposition 5.7: for each fixed \(k\not =0\), we have \({\mathbb {E}}[Z_n^{k}]=0\) as soon as \(n>|k|\). Therefore, almost surely the sequence \((Z_n)_{n\ge 2}\) is a Chowla sequence.

5.4 The Sarnak property does not imply the Chowla property

In this subsection, for every \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\), we prove that the Sarnak property does not imply the Chowla property for sequences of index m. We first prove a sufficient condition for a sequence to be a Sarnak sequence.

Lemma 5.9

Let \((X, T, \mu )\) be a Kolmogorov system. Let \(f: X \rightarrow {\mathbb {C}}\) be a measurable function. Suppose that \({\mathbb {E}}[f]=0\). Then for every generic point x of \(\mu \), the sequence \((f\circ T^n(x))_{n\in {\mathbb {N}}}\) is a Sarnak sequence.

Proof

Let (Y, U) be an arbitrary topological dynamical system. Let \(y\in Y\) be a completely deterministic point. Let x be a generic point for \(\mu \). Let \(\rho \) be a quasi-generic measure for (y, x) along a sequence \((N_k)_{k\in {\mathbb {N}}}\) in the product system \((Y\times X, U\times T)\). It follows that for any \(g\in C(Y)\),

$$\begin{aligned} \lim \limits _{k\rightarrow +\infty } \frac{1}{N_k}\sum _{n=0}^{N_k-1} g\circ U^n(y)\cdot f\circ T^n (x)={\mathbb {E}}^\rho [g\otimes f]. \end{aligned}$$
(5.12)

Since the point y is completely deterministic, the system \((Y, U, \rho _{|Y})\) has zero entropy; as \((X, T, \mu )\) is a Kolmogorov system, the two systems are disjoint, so \(\rho =\rho _{|Y}\otimes \mu \). It follows that for any \(g\in C(Y)\),

$$\begin{aligned} {\mathbb {E}}^\rho [g\otimes f]={\mathbb {E}}^{\rho _{|Y}}[g]\cdot {\mathbb {E}}^\mu [f]=0. \end{aligned}$$
(5.13)

Combining (5.12) and (5.13), we deduce that for any \(g\in C(Y)\),

$$\begin{aligned} \lim \limits _{k\rightarrow +\infty } \frac{1}{N_k}\sum _{n=0}^{N_k-1} g\circ U^n(y)\cdot f\circ T^n(x) =0. \end{aligned}$$

Since every subsequence of the averages \(\frac{1}{N}\sum _{n=0}^{N-1} g\circ U^n(y)\cdot f\circ T^n(x)\) admits a further subsequence along which (y, x) is quasi-generic for some measure \(\rho \), the full averages converge to 0. Finally, every point of a zero-entropy topological dynamical system is completely deterministic, so the conclusion holds for every zero-entropy system and every point of it. This completes the proof. \(\square \)

Comparing the condition in Lemma 5.9 with the necessary and sufficient condition for stationary random sequences to be Chowla sequences almost surely given in Proposition 5.2, we obtain the existence of sequences which have the Sarnak property but not the Chowla property. We illustrate this in the following examples. The idea of Example 5.10 essentially comes from Example 5.1 in [6]. Our contribution is the construction in Example 5.11 of such sequences of index \(\infty \).

Example 5.10

Recall that a Bernoulli system is always a Kolmogorov system. Let \((\{0,1,2\}^{\mathbb {N}}, S, B_3)\) be a Bernoulli system where \(B_3\) denotes the uniform-Bernoulli measure. Let \(g: \{0,1,2\}^2 \rightarrow \{-1, 0,1\}\) be a function defined by

$$\begin{aligned} g(0,1)=1, g(1,2)=-1 ~\text {and}~ g(a,b)=0 ~\text {otherwise}. \end{aligned}$$

Define \(G: \{0,1,2\}^{\mathbb {N}} \rightarrow \{-1, 0,1\}\) by \((x(n))_{n\in {\mathbb {N}}} \mapsto g(x(0),x(1))\). Let \(Z_n:=G\circ S^n\). It is easy to check that

$$\begin{aligned} {\mathbb {E}}[Z_1]=0\quad \text {but}\quad {\mathbb {E}}[Z_1Z_2]=-\tfrac{1}{27}\not =0. \end{aligned}$$

It follows from Lemma 5.9 that almost surely \((Z_n)_{n\in {\mathbb {N}}}\) is a Sarnak sequence. On the other hand, by Proposition 5.2, almost surely the sequence \((Z_n)_{n\in {\mathbb {N}}}\) is not a Chowla sequence.
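The two expectations used in Example 5.10 are easy to check by simulation. The following sketch (ours, not part of the paper) estimates them; the exact values are \({\mathbb {E}}[Z_1]=0\) and \({\mathbb {E}}[Z_1Z_2]=-1/27\), the only configuration with a non-zero product being \(x(1)=0\), \(x(2)=1\), \(x(3)=2\).

```python
# A Monte-Carlo check (not from the paper) of the two expectations in Example 5.10.
import numpy as np

def g(a, b):
    out = np.zeros(a.shape)
    out[(a == 0) & (b == 1)] = 1.0
    out[(a == 1) & (b == 2)] = -1.0
    return out

rng = np.random.default_rng(0)
N = 2_000_000
x = rng.integers(0, 3, size=(N, 4))            # the coordinates x(0), ..., x(3)
Z1 = g(x[:, 1], x[:, 2])                       # Z_1 = G(S x)   = g(x(1), x(2))
Z2 = g(x[:, 2], x[:, 3])                       # Z_2 = G(S^2 x) = g(x(2), x(3))
print("E[Z1]    ~", round(Z1.mean(), 4))       # about 0
print("E[Z1 Z2] ~", round((Z1 * Z2).mean(), 4), "(exact: -1/27 =", round(-1 / 27, 4), ")")
```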

Example 5.11

Define \(h: [0,1)\rightarrow [0,1)\) by

$$\begin{aligned} h(t)= {\left\{ \begin{array}{ll} \frac{3}{2}t ~&{}\text {if}~0\le t<\frac{1}{4},\\ \frac{1}{2}t+\frac{1}{4} ~&{}\text {if}~\frac{1}{4}\le t<\frac{1}{2},\\ \frac{3}{2}t-\frac{1}{4} ~&{}\text {if}~\frac{1}{2}\le t<\frac{3}{4},\\ \frac{1}{2}t+\frac{1}{2} ~&{}\text {if}~\frac{3}{4}\le t<1.\\ \end{array}\right. } \end{aligned}$$

We define the function \(H: S^1\rightarrow S^1\) by the formula \(H(e^{2\pi i t})=e^{2\pi ih(t)}\). Let \(((S^1)^{\mathbb {N}}, S, Leb^{\otimes {\mathbb {N}}})\) be the shift system, where Leb is the normalized Lebesgue measure on \(S^1\); it is a Kolmogorov system. Let \(Z_n:=H\circ F\circ S^n\). It is not hard to see that almost surely the index of the sequence \((Z_n)_{n\in {\mathbb {N}}}\) is \(\infty \). A simple computation shows that

$$\begin{aligned} {\mathbb {E}}[Z_1]=0\quad \text {but}\quad {\mathbb {E}}[Z_1^2]\not =0. \end{aligned}$$

By Lemma 5.9 and Proposition 5.2, we conclude that almost surely \((Z_n)_{n\in {\mathbb {N}}}\) is a Sarnak sequence but not a Chowla sequence.
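The two moments in Example 5.11 can also be computed numerically, since \({\mathbb {E}}[Z_1^j]=\int _0^1 e^{2\pi i j h(t)}\,dt\). The following sketch (ours, not part of the paper) evaluates these integrals by a Riemann sum.

```python
# A quick numerical check (not from the paper) of the two moments in Example 5.11.
import numpy as np

def h(t):
    return np.where(t < 0.25, 1.5 * t,
           np.where(t < 0.50, 0.5 * t + 0.25,
           np.where(t < 0.75, 1.5 * t - 0.25, 0.5 * t + 0.5)))

t = (np.arange(2_000_000) + 0.5) / 2_000_000    # midpoint rule on [0, 1)
print("E[Z_1]   ~", np.round(np.mean(np.exp(2j * np.pi * h(t))), 4))   # close to 0
print("E[Z_1^2] ~", np.round(np.mean(np.exp(4j * np.pi * h(t))), 4))   # clearly non-zero
```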

6 Construction of dependent random Chowla sequences

In this section, we construct sequences of dependent random variables which have the Chowla property almost surely. As it was proved in Corollary 5.6 that stationary random sequences having infinite index and having the Chowla property almost surely are always independent, we concentrate on the situation where the index is finite. In what follows, we construct sequences of dependent stationary random variables which have index 2 and have the Chowla property almost surely. Sequences of dependent stationary random variables which have a finite index \(m>2\) and have the Chowla property almost surely can be constructed in a similar way.

Let \({\mathcal {A}}\) be a finite alphabet. Now we study the symbolic dynamical system \(({\mathcal {A}}^{{\mathbb {N}}}, S, \mu )\), where \(\mu \) is the uniform-Bernoulli measure. Let \(G:{\mathcal {A}}^{{\mathbb {N}}}\rightarrow \{-1,0,1\}\) be a continuous function and set \(Z_{G,n}:=G\circ S^n\). We are concerned with the random sequence \((Z_{G,n})_{n\in {\mathbb {N}}}\). It is easy to check that the sequence \((Z_{G,n})_{n\in {\mathbb {N}}}\) has index 2 almost surely.

Obviously, \(G^{-1}(t)\) is a clopen subset of \({\mathcal {A}}^{{\mathbb {N}}}\) for each \(t\in \{-1,0,1\}\). It follows that the function G is determined by finitely many coordinates, i.e., there exist \(l\in {\mathbb {N}}{\setminus }\{0\}\) and a function g on \({\mathcal {A}}^l\) such that for all \(x=(x_n)_{n\in {\mathbb {N}}}\in {\mathcal {A}}^{{\mathbb {N}}}\),

$$\begin{aligned} G(x)=g(x_0, x_1, \dots , x_{l-1}). \end{aligned}$$

We associate a rooted 3-colored tree \({\mathcal {T}}_G\) with the sequence \((Z_{G,n})_{n\in {\mathbb {N}}}\) as follows.

We first draw a root as the 0-th level. We then draw \(\sharp {\mathcal {A}}\) descendants of the root, forming the 1-st level; the vertices at the first level are labeled by the elements of \({\mathcal {A}}\), and the edges between the 0-th and the 1-st level are colored black. Repeat this until we reach the \((l-1)\)-th level, so that we obtain a finite tree with only black edges in which every vertex has \(\sharp {\mathcal {A}}\) descendants. Observe that the branch from the root to a vertex at the \((l-1)\)-th level can be viewed as a word of length \(l-1\). Now we explain how to draw the l-th level. A vertex identified with the word \(a_1a_2\dots a_{l-1}\) at the \((l-1)\)-th level has a descendant labeled \(a\in {\mathcal {A}}\) if \(g(a_1,a_2,\dots ,a_{l-1}, a)\not =0\); the edge between them is colored red if \(g(a_1,a_2,\dots ,a_{l-1}, a)=1\) and green if \(g(a_1,a_2,\dots ,a_{l-1}, a)=-1\). In general, for \(k\ge l-1\), a vertex identified with the word \(a_1a_2\dots a_{k}\) at the k-th level has a descendant labeled \(a\in {\mathcal {A}}\) if and only if \(g(a_{k-l+2},a_{k-l+3},\dots ,a_{k}, a)\not =0\), and the edge between them is colored red if \(g(a_{k-l+2},a_{k-l+3},\dots ,a_{k}, a)=1\) and green if this value is \(-1\). These edges are called the edges of the k-th level. The rooted 3-colored tree obtained in this way is denoted by \({\mathcal {T}}_G\).

We illustrate the above construction by the following examples.

Example 6.1

  1. (1)

    Let \({\mathcal {A}}=\{0,1,2\}\). Define \(g_1: {\mathcal {A}}^2\rightarrow \{-1,0,1\}\) by

    $$\begin{aligned}&(0, 0)\mapsto 1, (1, 2)\mapsto 1, (2, 1)\mapsto 1,\\&(0,1)\mapsto -1, (1, 0)\mapsto -1, (2, 2)\mapsto -1,\\&(2,0)\mapsto 0, (0,2) \mapsto 0, (1,1) \mapsto 0. \end{aligned}$$

    Let \(G_1: {\mathcal {A}}^{\mathbb {N}}\rightarrow \{-1,0,1\}, G_1(x)=g_1(x_0,x_1)\). The associated tree \({\mathcal {T}}_{G_1}\) is an infinite tree as shown in Fig. 1.

  2. (2)

    Let \({\mathcal {A}}=\{0,1,2,3\}\). Define \(g_2: {\mathcal {A}}^2\rightarrow \{-1,0,1\}\) by

    $$\begin{aligned} (0, 1)\mapsto 1, (1, 3)\mapsto -1, (2, 0)\mapsto -1 ~\text {and otherwise}~(a,b)\mapsto 0. \end{aligned}$$

    Let \(G_2: {\mathcal {A}}^{\mathbb {N}}\rightarrow \{-1,0,1\}, G_2(x)=g_2(x_0,x_1)\). The associated tree \({\mathcal {T}}_{G_2}\) is a finite tree as shown in Fig. 2.

Fig. 1 The infinite tree \({\mathcal {T}}_{G_1}\)

Fig. 2 The finite tree \({\mathcal {T}}_{G_2}\)

Two branches from the root to vertices at the same level are said to be of different type if at some level the corresponding edges have different colors. Two branches are of the same type if they are not of different type. We define the homogeneity of the tree as follows.

Definition 6.2

Suppose that the function \(G:{\mathcal {A}}^{\mathbb {N}}\rightarrow \{-1,0,1\}\) is determined by words of length l. We say that the associated tree \({\mathcal {T}}_G\) is homogeneous if for each \(k\ge l\), the number of branches of a given type from the root to the vertices at the k-th level is the same for every type.

It is not hard to see that Example 6.1 (1) is homogeneous but (2) is not.

Actually, Definition 6.2 means that

$$\begin{aligned} \sharp \{x\in {\mathcal {A}}^{{\mathbb {N}}}: G(x)=c_0, \dots , G(S^kx)=c_k \} \end{aligned}$$

is a constant independent of the choice of \((c_0,\dots ,c_k)\in \{-1,1\}^{k+1}\). Thus the following proposition is a direct consequence of Proposition 5.2.

Proposition 6.3

Let A be a finite set. Let \(G: A^{\mathbb {N}} \rightarrow \{-1,0,1\}\) be a continuous map. Then the random sequence \((Z_{G,n})_{n\in {\mathbb {N}}}\) is a Chowla sequence almost surely if and only if the associated tree \({\mathcal {T}}_G\) is homogeneous.

Now we show the existence of sequences of dependent random variables having the Chowla property almost surely in the following example.

Example 6.4

Let \({\mathcal {A}}=\{-1,0,1, \star \}\). Set \({\widetilde{\mu }}_2=\frac{1}{2}\mu _2+\frac{1}{2}\delta _{\star }\). Then \(({\mathcal {A}}^{\mathbb {N}}, S, {\widetilde{\mu }}_2^{\mathbb {N}})\) is a measurable dynamical system. Define \(g: {\mathcal {A}}^2 \rightarrow \{-1,0,1\}\) by

$$\begin{aligned} g(a,b)= {\left\{ \begin{array}{ll} a~&{}\text {if}~a\not = \star ~\text {and}~b=\star ;\\ 0&{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

We define \(G: {\mathcal {A}}^{\mathbb {N}} \rightarrow \{-1,0,1\}\) by \((x_n)_{n\in {\mathbb {N}}} \mapsto g(x_0,x_1)\). It is not hard to see that the associated tree \({\mathcal {T}}_{G}\) is a finite homogeneous tree (see Fig. 3). It follows directly from Proposition 6.3 that almost surely \((Z_{G,n})_{n\in {\mathbb {N}}}\) is a Chowla sequence.

On the other hand, the product \(Z_1Z_2=g(x_1,x_2)g(x_2,x_3)\) vanishes identically: \(g(x_1,x_2)\not =0\) forces \(x_2=\star \), while \(g(x_2,x_3)\not =0\) forces \(x_2\not =\star \). Since \({\mathbb {E}}[|Z_1|]={\mathbb {E}}[|Z_2|]={\widetilde{\mu }}_2(\{-1,1\})\cdot {\widetilde{\mu }}_2(\{\star \})=\frac{1}{4}\), we obtain

$$\begin{aligned} {\mathbb {E}}[|Z_1Z_2|]=0\quad \text {but}\quad {\mathbb {E}}[|Z_1|]{\mathbb {E}}[|Z_2|]=\frac{1}{16}, \end{aligned}$$

that is,

$$\begin{aligned} {\mathbb {E}}[|Z_1Z_2|]\not ={\mathbb {E}}[|Z_1|]{\mathbb {E}}[|Z_2|]. \end{aligned}$$

By Corollary 5.6, the sequence of random variables \((Z_n)\) is not independent.

Fig. 3
figure 3

The associated tree \({\mathcal {T}}_{G}\)
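The expectations computed in Example 6.4 can be double-checked by a quick simulation. The sketch below (hypothetical, for illustration only) samples the coordinates independently from \({\widetilde{\mu }}_2\) and estimates \({\mathbb {E}}[|Z_1|]\), \({\mathbb {E}}[|Z_2|]\) and \({\mathbb {E}}[|Z_1Z_2|]\); the analogous check for Example 6.6 below is obtained by sampling from \({\widetilde{\mu }}_m\) instead.

import random

def sample_letter(rng):
    # tilde_mu_2 = (1/2) mu_2 + (1/2) delta_star: P(star) = 1/2, P(1) = P(-1) = 1/4
    u = rng.random()
    if u < 0.5:
        return 'star'
    return 1 if u < 0.75 else -1

def g(a, b):
    # the block map of Example 6.4
    return a if (a != 'star' and b == 'star') else 0

rng = random.Random(0)
n = 200_000
s1 = s2 = s12 = 0.0
for _ in range(n):
    x1, x2, x3 = sample_letter(rng), sample_letter(rng), sample_letter(rng)
    z1, z2 = g(x1, x2), g(x2, x3)
    s1 += abs(z1)
    s2 += abs(z2)
    s12 += abs(z1 * z2)
print(s1 / n, s2 / n, s12 / n)   # approximately 0.25, 0.25 and 0.0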

Let \({\mathcal {A}}\) be a finite alphabet. Let \(({\mathcal {A}}^{{\mathbb {N}}}, S, \mu )\) be a symbolic dynamical system where \(\mu \) is the uniform Bernoulli measure. Let \(m\in {\mathbb {N}}_{\ge 2}\). Let \(G:{\mathcal {A}}^{{\mathbb {N}}}\rightarrow U(m)\cup \{0 \}\) be a continuous function with \(G^{-1}(t)\) non-empty for each \(t\in U(m)\cup \{0 \}\). Set \(Z_{G,n}:=G\circ S^n\). Similarly to the case \(m=2\), we associate a rooted \((m+1)\)-color tree \({\mathcal {T}}_G\) with the sequence \((Z_{G,n})_{n\in {\mathbb {N}}}\). In analogy with Definition 6.2 and Proposition 6.3, we define the homogeneity of the associated tree \({\mathcal {T}}_G\) and obtain the following proposition.

Proposition 6.5

Let A be a finite set and \(m\ge 2\) be an integer. Let \(G: A^{\mathbb {N}} \rightarrow U(m)\cup \{0\}\) be a continuous map. Then the random sequence \((Z_{G,n})_{n\in {\mathbb {N}}}\) is a Chowla sequence almost surely if and only if the associated tree \({\mathcal {T}}_G\) is homogeneous.

For \(m\in {\mathbb {N}}_{\ge 2}\), we now give examples of sequences of dependent random variables which have index m and almost surely have the Chowla property.

Example 6.6

Let \(m\in {\mathbb {N}}_{\ge 2}\). Let \({\mathcal {A}}=U(m)\cup \{0, \star \}\). Set \({\widetilde{\mu }}_m=\frac{1}{2}\mu _m+\frac{1}{2}\delta _{\star }\). Then \(({\mathcal {A}}^{\mathbb {N}}, S, {\widetilde{\mu }}_m^{\mathbb {N}})\) is a measurable dynamical system. Define \(g: {\mathcal {A}}^2 \rightarrow U(m)\cup \{0\}\) by

$$\begin{aligned} g(a,b)= {\left\{ \begin{array}{ll} a~&{}\text {if}~a\in U(m)~\text {and}~b=\star ;\\ 0&{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

We define \(G: {\mathcal {A}}^{\mathbb {N}} \rightarrow U(m)\cup \{0\}\) by \((x_n)_{n\in {\mathbb {N}}} \mapsto g(x_0,x_1)\). It is not hard to see that the associated tree \({\mathcal {T}}_{G}\) is a finite homogeneous tree. It follows directly from Proposition 6.5 that almost surely \((Z_{G,n})_{n\in {\mathbb {N}}}\) is a Chowla sequence.

On the other hand, exactly as in Example 6.4, the product \(Z_1Z_2=g(x_1,x_2)g(x_2,x_3)\) vanishes identically, while \({\mathbb {E}}[|Z_1|]={\mathbb {E}}[|Z_2|]={\widetilde{\mu }}_m(U(m))\cdot {\widetilde{\mu }}_m(\{\star \})=\frac{1}{4}\). Hence

$$\begin{aligned} {\mathbb {E}}[|Z_1Z_2|]=0~\text {but}~{\mathbb {E}}[|Z_1|]{\mathbb {E}}[|Z_2|]=\frac{1}{16}, \end{aligned}$$

that is,

$$\begin{aligned} {\mathbb {E}}[|Z_1Z_2|]\not ={\mathbb {E}}[|Z_1|]{\mathbb {E}}[|Z_2|]. \end{aligned}$$

By Corollary 5.6, the sequence of random variables \((Z_n)\) is not independent.

7 Further remarks on the Chowla property

In this section, we first discuss several equivalent definitions of Sarnak sequences. Then we present the proof of Proposition 3.5.

7.1 Sarnak sequences

Here, we give several equivalent definitions of Sarnak sequences. Firstly, it follows from the characterization of completely deterministic points by Weiss [19]Footnote 6 that the following is an equivalent definition of Sarnak sequences.

We say that a sequence \((z(n))_{n\in {\mathbb {N}}}\) of complex numbers is a Sarnak sequence if for each homeomorphism T of a compact metric space X,

$$\begin{aligned} \lim \limits _{N\rightarrow \infty }\frac{1}{N}\sum _{n=0}^{N-1} f(T^nx)z(n)=0, \end{aligned}$$

for all \(f\in C(X)\) and for all completely deterministic points \(x\in X\).

Next, we recall the property of (strongly) Möbius orthogonality on moving orbits.

Definition 7.1

We say that a sequence \((z(n))_{n\in {\mathbb {N}}}\) of complex numbers is orthogonal on moving orbits to a dynamical system (XT) if for any increasing sequence of integers \(0=b_0<b_1<b_2<\cdots \) with \(b_{k+1}-b_k\rightarrow \infty \), for any sequence \((x_k)_{k\in {\mathbb {N}}}\) of points in X, and any \(f\in C(X)\),

$$\begin{aligned} \lim \limits _{K\rightarrow \infty }\frac{1}{b_K}\sum _{k=0}^{K-1} \sum _{b_k\le n<b_{k+1}} f(T^{n-b_k}x_k)z(n)=0. \end{aligned}$$
(7.1)

It is easy to see that a sequence which is orthogonal on moving orbits to a dynamical system (XT) is always orthogonal to (XT).

Definition 7.2

We say that a sequence \((z(n))_{n\in {\mathbb {N}}}\) of complex numbers is strongly orthogonal on moving orbits to a dynamical system (XT) if for any increasing sequence of integers \(0=b_0<b_1<b_2<\cdots \) with \(b_{k+1}-b_k\rightarrow \infty \), for any sequence \((x_k)_{k\in {\mathbb {N}}}\) of points in X, and any \(f\in C(X)\),

$$\begin{aligned} \lim \limits _{K\rightarrow \infty }\frac{1}{b_K}\sum _{k=0}^{K-1} \left| \sum _{b_k\le n<b_{k+1}} f(T^{n-b_k}x_k)z(n) \right| =0. \end{aligned}$$
(7.2)

Putting \(f=1\) in (7.2), we see that if a sequence \((z(n))_{n\in {\mathbb {N}}}\) is strongly orthogonal on moving orbits to a dynamical system (XT), then

$$\begin{aligned} \lim \limits _{K\rightarrow \infty }\frac{1}{b_K}\sum _{k=0}^{K-1} \left| \sum _{b_k\le n<b_{k+1}} z(n) \right| =0, \end{aligned}$$
(7.3)

for any increasing sequence of integers \(0=b_0<b_1<b_2<\cdots \) with \(b_{k+1}-b_k\rightarrow \infty \). Matomäki and Radziwiłł [14] proved that the Möbius function satisfies (7.3).
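For concreteness, the quantity in (7.3) can be evaluated at a finite stage K by a routine such as the following sketch (hypothetical, for illustration only); applied to a typical random \(\pm 1\) sequence with block lengths tending to infinity, it returns small values, in line with (7.3).

import random

def moving_block_average(z, blocks):
    # Finite-stage version of (7.3): (1/b_K) * sum_{k<K} |sum_{b_k <= n < b_{k+1}} z(n)|,
    # where `z` is a finite list of complex numbers and `blocks` = [b_0, ..., b_K]
    # is increasing with b_0 = 0 and b_K <= len(z).
    total = 0.0
    for k in range(len(blocks) - 1):
        total += abs(sum(z[blocks[k]:blocks[k + 1]]))
    return total / blocks[-1]

rng = random.Random(0)
blocks = [0]
while blocks[-1] < 100_000:
    blocks.append(blocks[-1] + 10 + len(blocks))     # block lengths grow to infinity
z = [rng.choice([1, -1]) for _ in range(blocks[-1])]
print(moving_block_average(z, blocks))               # typically a small value here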

Now we define the uniform orthogonality.

Definition 7.3

We say that a sequence \((z(n))_{n\in {\mathbb {N}}}\) of complex numbers is uniformly orthogonal to a dynamical system (XT) if for any \(f\in C(X)\), the averages \(\frac{1}{N}\sum _{n=0}^{N-1} f(T^nx)z(n)\) converge to zero uniformly in \(x\in X\).

By Footnote 4 of Page 4 in [8], we know that if a sequence is strongly orthogonal on moving orbits to a dynamical system (XT), then it is uniformly orthogonal to (XT).

In summary, given a dynamical system (XT) and a sequence z, strong orthogonality on moving orbits implies both orthogonality on moving orbits and uniform orthogonality, each of which implies orthogonality. However, the following proposition tells us that if we consider all dynamical systems of zero entropy, then these notions of orthogonality are actually equivalent.

Proposition 7.4

(Corollary 9, Corollary 10, [8]) Let z be a sequence of complex numbers. The following are equivalent.

  1. (1)

    The sequence z is a Sarnak sequence.

  2. (2)

    The sequence z is orthogonal on moving orbits to all dynamical systems of zero entropy.

  3. (3)

    The sequence z is strongly orthogonal on moving orbits to all dynamical systems of zero entropy.

  4. (4)

    The sequence z is uniformly orthogonal to all dynamical systems of zero entropy.

7.2 The Chowla property implies the Sarnak property

In this section, we present the proof of Proposition 3.5. The idea is essentially due to [6].

Our main observation is the following lemma, which follows directly from the Stone-Weierstrass theorem.

Lemma 7.5

The linear subspace generated by the constants and the family

$$\begin{aligned} {\mathcal {F}}:=\{F^{i_1}\circ S^{a_1}\cdots F^{i_r}\circ S^{a_r}: 0\le a_1< \dots <a_r, r\ge 0, i_s\in I(m) \} \end{aligned}$$

of continuous functions is a complex algebra (closed under taking products and conjugation) of functions separating points, hence it is dense in \(C((U(m)\cup \{0\})^{\mathbb {N}})\) for every \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\).

The family \({\mathcal {F}}\) is an analogue of the family \({\mathcal {A}}\) in Lemma 4.6 of [6]. The difference is that \({\mathcal {F}}\) is a complex algebra while \({\mathcal {A}}\) is a real one.

Now we provide a detailed proof of Proposition 3.5. It is well known that for every \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\), the system \((U(m)^{\mathbb {N}}, S, \mu _m^{\otimes {\mathbb {N}}})\) is a Bernoulli system, where \(\mu _m\) is the equidistributed probability measure on U(m) and \(\mu _m^{\otimes {\mathbb {N}}}\) is the product measure. Since every Bernoulli system is a Kolmogorov system, the following proposition is classical.

Proposition 7.6

For any \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\), the system \((U(m)^{\mathbb {N}}, S, \mu _m^{\otimes {\mathbb {N}}})\) is a Kolmogorov system.

For any \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\) and for any S-invariant measure \(\nu \) on \((\{0,1\}^{\mathbb {N}}, S)\), we define the diagram

(7.4)

where \(\rho : (x_n,y_n)_{n\in {\mathbb {N}}}\mapsto (x_ny_n)_{n\in {\mathbb {N}}}\), \(\pi : (x_n)_{n\in {\mathbb {N}}}\mapsto (x_n^{0})_{n\in {\mathbb {N}}}\), \(P_2: (x_n,y_n)_{n\in {\mathbb {N}}}\mapsto (y_n)_{n\in {\mathbb {N}}}\) and \({\widehat{\nu }}=\rho _*(\mu _m^{\otimes {\mathbb {N}}}\otimes \nu )\). We study this diagram in the following proposition.

Proposition 7.7

The diagram (7.4) satisfies the following properties.

  1. (1)

    The diagram (7.4) commutes.

  2. (2)

    The factor map \(\pi \) is either trivial or relatively Kolmogorov.

  3. (3)

    The measure \({\widehat{\nu }}\) has the form:

    $$\begin{aligned} {\widehat{\nu }}(B)=\nu (\pi (B))\prod _{0\le i\le k-1,\, 0\notin B_i}\mu _m(B_i), \end{aligned}$$

    for any cylinder \(B=[B_0, B_1, \cdots , B_{k-1}]\) in \((U(m)\cup \{0\})^{\mathbb {N}}\) with \(B_i=\{0\}\) or \(0\notin B_i\) for all \(0\le i\le k-1\).

Proof

To prove (1), it suffices to show \(P_2=\pi \circ \rho \). In fact, we have

$$\begin{aligned} \pi \circ \rho ((x(n),y(n))_{n\in {\mathbb {N}}}) =\pi ((x(n)y(n))_{n\in {\mathbb {N}}}) =(x(n)^{0}y(n)^{0})_{n\in {\mathbb {N}}}=(y(n))_{n\in {\mathbb {N}}}, \end{aligned}$$

because of the facts that \(x^{0}=1\) for all \(x\in U(m)\) and \(y^{0}=y\) for all \(y\in \{0,1\}\). This completes the proof of (1).

By Proposition 7.6, the factor \(P_2\) is relatively Kolmogorov. Since the factor map \(\pi \) is an intermediate factor by (1), it is either trivial or relatively Kolmogorov. This implies (2).

It remains to show the form of \({\widehat{\nu }}\). Let \(B=[B_0, B_1, \cdots , B_{k-1}]\) be a cylinder in \((U(m)\cup \{0\})^{\mathbb {N}}\) with \(B_i=\{0\}\) or \(0\notin B_i\) for all \(0\le i\le k-1\). We observe that \(\rho ^{-1}(B)\) is the cylinder

$$\begin{aligned} {[}{\widetilde{B}}_0, {\widetilde{B}}_1, \cdots , {\widetilde{B}}_{k-1}] \times [B_0^{0}, B_1^{0}, \cdots , B_{k-1}^{0}], \end{aligned}$$

where \({\widetilde{B}}_i = {\left\{ \begin{array}{ll} B_i,~&{}\text {if}~0\notin B_i\\ U(m),&{}\text {if}~B_i=\{0\}, \end{array}\right. }\) and \(B_i^{0} = {\left\{ \begin{array}{ll} \{1\},~&{}\text {if}~0\notin B_i\\ \{0\},&{}\text {if}~B_i=\{0\}, \end{array}\right. }\) for all \(0\le i\le k-1\). Since the support of the probability measure \(\mu _m\) is U(m), we obtain

$$\begin{aligned} {\widehat{\nu }}(B)&=\rho _*(\mu _m^{\otimes {\mathbb {N}}}\otimes \nu )(B)=\mu _m^{\otimes {\mathbb {N}}}\otimes \nu (\rho ^{-1}(B))\\&=\nu (\pi (B))\prod _{0\le i\le k-1,\, 0\notin B_i}\mu _m(B_i), \end{aligned}$$

which completes the proof of (3). \(\square \)

The following lemma follows directly from the orthogonality of continuous group characters on the group U(m).

Lemma 7.8

Let \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\). For \(k\in I(m)\), we have the integral

$$\begin{aligned} \int _{U(m)} x^{k} d\mu _m(x)= {\left\{ \begin{array}{ll} 1 &{} ~~\text {if}~~k=0,\\ 0 &{} ~~\text {else}. \end{array}\right. } \end{aligned}$$
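For finite m this is the classical orthogonality relation for roots of unity. Assuming, as elsewhere in the paper, that U(m) is the group of m-th roots of unity and \(\mu _m\) its equidistributed (Haar) measure, the computation for \(m<\infty \) and \(0\le k\le m-1\) reads

$$\begin{aligned} \int _{U(m)} x^{k}\, d\mu _m(x)=\frac{1}{m}\sum _{j=0}^{m-1} e^{2\pi i jk/m}= {\left\{ \begin{array}{ll} 1 &{} ~~\text {if}~~k=0,\\ 0 &{} ~~\text {else}, \end{array}\right. } \end{aligned}$$

since for \(k\not =0\) the sum is a geometric series with ratio \(e^{2\pi i k/m}\not =1\) and therefore vanishes. For \(m=\infty \), \(\mu _\infty \) is the Haar measure on the circle and \(\int _0^1 e^{2\pi i kt}\,dt=0\) for every integer \(k\not =0\).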

Now we do some calculations in the following two lemmas.

Lemma 7.9

Let \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\). Let \(0\le a_1< \dots <a_r, r\ge 0\) and \(i_s\in I(m)\) for \(1\le s\le r\). Then the integral

$$\begin{aligned} \int _{(U(m)\cup \{0\})^{\mathbb {N}}} F^{i_1}\circ S^{a_1}\cdot F^{i_2}\circ S^{a_2}\cdots F^{i_r}\circ S^{a_r} d{\widehat{\nu }} \end{aligned}$$
(7.5)

equals 0 if not all \(i_s\) are zero, and equals

$$\begin{aligned} \int _{\{0,1\}^{\mathbb {N}}} F\circ S^{a_1}\cdot F\circ S^{a_2}\cdots F\circ S^{a_r} d\nu , \end{aligned}$$

if all \(i_s\) are zero.

Proof

The second case follows directly from the facts that \(\pi _*({\widehat{\nu }})=\nu \) by Proposition 7.7 and \(F^{0}\circ S^{a_s}=F\circ S^{a_s}\circ \pi \). Now we study the first case. Fix a choice of \(0\le a_1< \dots <a_r, r\ge 0, i_s\in I(m)\) not all equal to 0. A straightforward computation shows that

$$\begin{aligned} \begin{aligned}&\int _{(U(m)\cup \{0\})^{\mathbb {N}}} F^{i_1}\circ S^{a_1}\cdot F^{i_2}\circ S^{a_2}\cdots F^{i_r}\circ S^{a_r} d{\widehat{\nu }}\\&\quad = \int _{(U(m)\cup \{0\})^{\mathbb {N}}} x(a_1)^{i_1}x(a_2)^{i_2}\cdots x(a_r)^{i_r} d{\widehat{\nu }}(x), \end{aligned} \end{aligned}$$
(7.6)

where we denote \(x=(x(n))_{n\in {\mathbb {N}}}\). We write the right-hand side of (7.6) as a sum of two parts:

$$\begin{aligned} \int _{B} x(a_1)^{i_1}x(a_2)^{i_2}\cdots x(a_r)^{i_r} d{\widehat{\nu }}(x) \end{aligned}$$
(7.7)

and

$$\begin{aligned} \int _{(U(m)\cup \{0\})^{\mathbb {N}}{\setminus } B} x(a_1)^{i_1}x(a_2)^{i_2}\cdots x(a_r)^{i_r} d{\widehat{\nu }}(x), \end{aligned}$$
(7.8)

where B denotes the set

$$\begin{aligned} \left\{ x\in (U(m)\cup \{0\})^{\mathbb {N}}: x(a_s)\not =0, \forall 1\le s\le r \right\} . \end{aligned}$$

A simple observation shows that the integral (7.8) equals 0, because every x outside B has \(x(a_s)=0\) for at least one \(1\le s\le r\). On the other hand, the integral (7.7) equals

$$\begin{aligned} \begin{aligned} \nu (\pi (B)) \prod _{j=1}^{r} \int _{U(m)} x(a_j)^{i_j} d\mu _m(x(a_j)). \end{aligned} \end{aligned}$$
(7.9)

By Lemma 7.8, the quantity (7.9) equals 0, since some \(i_j\not =0\). Therefore, we conclude that in the first case the integral (7.5) equals 0. \(\square \)

Lemma 7.10

\({\mathbb {E}}^{{\widehat{\nu }}}[F|\pi ^{-1}({\mathcal {B}})]=0.\)

Proof

It suffices to show \({\mathbb {E}}^{{\widehat{\nu }}}[F1_{\pi ^{-1}(B)}]=0\) for any cylinder B in \(\{0,1\}^{{\mathbb {N}}}\). Let \(B=[b_0,b_1,\dots ,b_k]\) be a cylinder in \(\{0,1\}^{{\mathbb {N}}}\) with \(b_i\in \{0,1\}\) for all \(0\le i\le k\). Then \(\pi ^{-1}(B)=[C_0,C_1,\dots ,C_k]\) where \(C_i=U(m)\) if \(b_i=1\) and \(C_i=\{0\}\) if \(b_i=0\), for \(0\le i\le k\).

If \(b_0=0\), then

$$\begin{aligned} {\mathbb {E}}^{{\widehat{\nu }}}[F1_{\pi ^{-1}(B)}]={\mathbb {E}}^{{\widehat{\nu }}}[0\cdot 1_{\pi ^{-1}(B)}]=0. \end{aligned}$$

If \(b_0\not =0\), then \(C_0=U(m)\). A simple calculation shows that

$$\begin{aligned} {\mathbb {E}}^{{\widehat{\nu }}}[F1_{\pi ^{-1}(B)}]&=\int _{\pi ^{-1}(B)} x(0) d{\widehat{\nu }}(x) =\nu (B)\int _{U(m)} x(0) d\mu _m(x(0))=0. \end{aligned}$$

The last equality is due to Lemma 7.8. This completes the proof. \(\square \)

Now we give some criteria for the Chowla property.

Proposition 7.11

Let \(z\in (S^1\cup \{0\})^{{\mathbb {N}}}\). Then the following are equivalent.

  1. (1)

    The sequence z has the Chowla property.

  2. (2)

    Q-gen\((z)=\{{\widehat{\nu }}: \nu \in \text {Q-gen}(\pi (z)) \}\).

  3. (3)

    For any increasing sequence \((N_k)_{k\in {\mathbb {N}}}\) of positive integers and any S-invariant measure \(\nu \) on \(\{0,1\}^{\mathbb {N}}\), one has \(\delta _{N_k, \pi (z)} \rightarrow \nu \) as \(k\rightarrow \infty \) if and only if \(\delta _{N_k, z} \rightarrow {\widehat{\nu }}\).

Actually, Proposition 7.11 follows directly from the following lemma.

Lemma 7.12

Let \(z\in (S^1\cup \{0\})^{{\mathbb {N}}}\). Suppose that the sequence z has index \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\). Suppose that \(\pi (z)\) is quasi-generic for \(\nu \) along \((N_k)_{k\in {\mathbb {N}}}\). Then

$$\begin{aligned} \delta _{N_k, z} \rightarrow {\widehat{\nu }}, ~~\text {as}~~k\rightarrow \infty \end{aligned}$$

if and only if

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }\frac{1}{N_k}\sum _{n=0}^{N_k-1} z^{i_1}(n+a_1)\cdot z^{i_2}(n+a_2)\cdot \dots \cdot z^{i_r}(n+a_r)=0, \end{aligned}$$
(7.10)

for each choice of \(0\le a_1< \dots <a_r, r\ge 0, i_s\in I(m)\) not all equal to 0.

Proof

\(``\Rightarrow ''\) Fix a choice of \(0\le a_1< \dots <a_r, r\ge 0, i_s\in I(m)\) not all equal to 0. Observe that

$$\begin{aligned} \begin{aligned}&\frac{1}{N_k}\sum _{n=0}^{N_k-1} z^{i_1}(n+a_1)\cdot z^{i_2}(n+a_2)\cdot \dots \cdot z^{i_r}(n+a_r)\\&\quad = \frac{1}{N_k}\sum _{n=0}^{N_k-1} (F^{i_1}\circ S^{a_1}\cdot F^{i_2}\circ S^{a_2}\cdot \dots \cdot F^{i_r}\circ S^{a_r})\circ S^n(z), \end{aligned} \end{aligned}$$
(7.11)

for all \(k\in {\mathbb {N}}\). Since z is quasi-generic for the measure \({\widehat{\nu }}\) along \((N_k)_{k\in {\mathbb {N}}}\), we obtain

$$\begin{aligned} \begin{aligned}&\lim \limits _{k\rightarrow \infty } \frac{1}{N_k}\sum _{n=0}^{N_k-1} (F^{i_1}\circ S^{a_1}\cdot F^{i_2}\circ S^{a_2}\cdot \dots \cdot F^{i_r}\circ S^{a_r})\circ S^n(z)\\&\quad =\int _{(U(m)\cup \{0\})^{\mathbb {N}}} F^{i_1}\circ S^{a_1}\cdot F^{i_2}\circ S^{a_2}\cdot \dots \cdot F^{i_r}\circ S^{a_r} d{\widehat{\nu }}. \end{aligned} \end{aligned}$$
(7.12)

According to Lemma 7.9, the right-hand side of the above equation equals 0, which completes the proof.

\(``\Leftarrow ''\) Without loss of generality, we may assume that

$$\begin{aligned} \delta _{N_k, z} \rightarrow \theta , ~~\text {as}~~k\rightarrow \infty , \end{aligned}$$

for some S-invariant probability measure \(\theta \) on \((U(m)\cup \{0\})^{\mathbb {N}}\). Now we prove that the measure \(\theta \) is exactly the measure \({\widehat{\nu }}\). Similarly to (7.11) and (7.12), we have

$$\begin{aligned}&\lim \limits _{k\rightarrow \infty } \frac{1}{N_k}\sum _{n=0}^{N_k-1} z^{i_1}(n+a_1)\cdot z^{i_2}(n+a_2)\cdot \dots \cdot z^{i_r}(n+a_r)\\&\quad =\int _{(U(m)\cup \{0\})^{\mathbb {N}}} F^{i_1}\circ S^{a_1}\cdot F^{i_2}\circ S^{a_2}\cdots F^{i_r}\circ S^{a_r} d\theta , \end{aligned}$$

for each choice of \(0\le a_1< \dots <a_r, r\ge 0, i_s\in I(m)\). According to hypothesis (7.10), we have

$$\begin{aligned} \int _{(U(m)\cup \{0\})^{\mathbb {N}}} F^{i_1}\circ S^{a_1}\cdot F^{i_2}\circ S^{a_2}\cdots F^{i_r}\circ S^{a_r} d\theta =0, \end{aligned}$$

for each choice of \(0\le a_1< \dots <a_r, r\ge 0, i_s\in I(m)\) not all equal to 0. Repeating the argument of Lemma 7.9 (and using that \(\pi _*(\theta )=\nu \), since \(\pi (z)\) is quasi-generic for \(\nu \) along \((N_k)_{k\in {\mathbb {N}}}\) and \(\pi \) is continuous), we have

$$\begin{aligned}&\int _{(U(m)\cup \{0\})^{\mathbb {N}}} F^{0}\circ S^{a_1}\cdot F^{0}\circ S^{a_2}\cdots F^{0}\circ S^{a_r} d\theta \\&\quad =\int _{\{0,1\}^{\mathbb {N}}} F\circ S^{a_1}\cdot F\circ S^{a_2}\cdots F\circ S^{a_r} d\nu . \end{aligned}$$

Hence

$$\begin{aligned} \int _{(U(m)\cup \{0\})^{\mathbb {N}}} f d\theta = \int _{(U(m)\cup \{0\})^{\mathbb {N}}} f d{\widehat{\nu }}, \end{aligned}$$

for all f belonging to the family \({\mathcal {F}}\) defined in Lemma 7.5, and trivially also for constant f. Since the linear span of the constants and \({\mathcal {F}}\) is dense in \(C((U(m)\cup \{0\})^{\mathbb {N}})\) by Lemma 7.5, we conclude that \(\theta ={\widehat{\nu }}\). \(\square \)

If in addition the sequence in Proposition 7.11 does not take the value 0, then we obtain the following criterion for the Chowla property for sequences in \((S^1)^{\mathbb {N}}\).

Corollary 7.13

Let \(z\in (S^1)^{{\mathbb {N}}}\). Suppose that z has index \(m\in {\mathbb {N}}_{\ge 2}\cup \{\infty \}\). Then the sequence z has the Chowla property if and only if z is generic for \(\mu _m^{\otimes {\mathbb {N}}}\).

Proof

Observe that \(\pi (z)=(1,1,\dots )\). It follows that Q-gen\((\pi (z))=\{\delta _{(1,1,\dots )} \}\), that is, \(\pi (z)\) is generic for the measure \(\delta _{(1,1,\dots )}\). A straightforward computation shows that the measure \({\widehat{\delta }}_{(1,1,\dots )}\) is exactly \(\mu _m^{\otimes {\mathbb {N}}}\). Due to the equivalence between (1) and (2) in Proposition 7.11, we conclude that the sequence z has the Chowla property if and only if z is generic for \(\mu _m^{\otimes {\mathbb {N}}}\). \(\square \)
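Corollary 7.13 can be illustrated numerically: a sequence of i.i.d. uniformly distributed m-th roots of unity is almost surely generic for \(\mu _m^{\otimes {\mathbb {N}}}\), so its empirical correlations as in (7.10) should be small for every choice of shifts and of exponents not all equal to 0. A minimal sketch (hypothetical, for illustration only):

import cmath
import random

def empirical_correlation(z, shifts, exps):
    # (1/N) * sum_n prod_s z(n + a_s)^{i_s}, truncated so that all indices stay in range
    N = len(z) - max(shifts)
    total = 0j
    for n in range(N):
        prod = 1 + 0j
        for a, i in zip(shifts, exps):
            prod *= z[n + a] ** i
        total += prod
    return total / N

m, N = 3, 100_000
rng = random.Random(1)
z = [cmath.exp(2j * cmath.pi * rng.randrange(m) / m) for _ in range(N)]
for shifts, exps in [((0,), (1,)), ((0, 1), (1, 2)), ((0, 2, 5), (2, 1, 1))]:
    print(shifts, exps, abs(empirical_correlation(z, shifts, exps)))
# the printed moduli are small (roughly of order N**-0.5), as expected for a generic point of mu_3^{⊗N}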

As an analogue of Corollary 15 in [8], we have the following result. The proof is similar to that of Corollary 15 in [8] and is omitted here.

Corollary 7.14

Let \(z\in (S^1)^{{\mathbb {N}}}\). Suppose that z has the Chowla property. Then the sequence z is not strongly orthogonal on moving orbits to any dynamical system of positive entropy.

Now we construct a new diagram. Let (XT) be a topological dynamical system and \(x\in X\) a completely deterministic point. Let \(\nu \) be an S-invariant measure on \(\{0,1\}^{\mathbb {N}}\). Let \(z\in (U(m)\cup \{0\})^{\mathbb {N}}\) be quasi-generic for \({\widehat{\nu }}\) along \((N_k)_{k\in {\mathbb {N}}}\). Suppose that (xz) is quasi-generic for some \(T\times S\)-invariant measure \(\rho \) on \(X\times (U(m)\cup \{0\})^{\mathbb {N}}\) along \((N_k)_{k\in {\mathbb {N}}}\). Let \((Y, U, \beta )\) be an intermediate factor between \((X\times (U(m)\cup \{0\})^{\mathbb {N}}, T\times S, \rho )\) and \((\{0,1\}^{\mathbb {N}}, S, \nu )\), with factor maps \(\gamma \) and \(\sigma \). We define the diagram

(7.13)

where the factor map \(\pi \) is the one defined in diagram (7.4) and \(P_2\) is the projection onto the second coordinate.

We have the following lemma.

Lemma 7.15

Suppose that the factor map \(\sigma \) has relative entropy zero. Then the factor maps \(\sigma \) and \(\pi \) are relatively independent.

Proof

By Proposition 7.7, the factor map \(\pi \) is relatively Kolmogorov. According to Lemma 2.1, we conclude that the factor maps \(\sigma \) and \(\pi \) are relatively independent. \(\square \)

Now we prove the Chowla property implying the Sarnak property by using the diagram (7.13).

Proof of Proposition 3.5

Let (XT) be a topological dynamical system. Let \(x\in X\) be a completely deterministic point. Suppose that (xz) is quasi-generic for some measure \(\rho \) along \((N_k)_{k\in {\mathbb {N}}}\). For any \(f\in C(X)\), since \(z(n)=F(S^nz)\), we have

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }\frac{1}{N_k}\sum _{n=0}^{N_k-1} f(T^nx)z(n)={\mathbb {E}}^\rho [f\otimes F]. \end{aligned}$$
(7.14)

On the other hand, by the criteria for the Chowla property in Proposition 7.11, without loss of generality we may assume that z is quasi-generic for some measure \({\widehat{\nu }}\) along \((N_k)_{k\in {\mathbb {N}}}\), where \(\nu \) is an S-invariant probability measure on \(\{0,1\}^{\mathbb {N}}\). Observing that \((P_{2})_{*}(\rho )={\widehat{\nu }}\), we establish the diagram (7.13) and suppose that the factor map \(\sigma \) has relative entropy zero. By Lemma 7.15, the factor maps \(\sigma \) and \(\pi \) are relatively independent. Let \(P:=(\sigma \circ \gamma , \pi \circ P_2)\). A simple computation shows that

$$\begin{aligned} {\mathbb {E}}^\rho [f\otimes F|P^{-1}({\mathcal {B}}\otimes {\mathcal {B}}) ]={\mathbb {E}}^{\gamma _{*}(\rho )} [f| \sigma ^{-1}({\mathcal {B}})]{\mathbb {E}}^{\widehat{\nu }} [F| \pi ^{-1}({\mathcal {B}})]. \end{aligned}$$

Since Lemma 7.10 tells us that \({\mathbb {E}}^{\widehat{\nu }} [F| \pi ^{-1}({\mathcal {B}})]=0\), we obtain

$$\begin{aligned} {\mathbb {E}}^\rho [f\otimes F|P^{-1}({\mathcal {B}}\otimes {\mathcal {B}}) ]=0. \end{aligned}$$

Hence we have

$$\begin{aligned} {\mathbb {E}}^\rho [f\otimes F]={\mathbb {E}}^\rho [{\mathbb {E}}^\rho [f\otimes F|P^{-1}({\mathcal {B}}\otimes {\mathcal {B}}) ]]=0. \end{aligned}$$
(7.15)

Combining (7.14) and (7.15), we complete the proof. \(\square \)