Quenched Decay of Correlations for One-dimensional Random Lorenz Maps

We study rates of mixing for small random perturbations of one-dimensional Lorenz maps. Using a random tower construction, we prove that, for Hölder observables, the random system admits exponential rates of quenched correlation decay.


Introduction
In 1963, Lorenz [17] introduced the following system of equations as a simplified model for atmospheric convection:

ẋ = σ(y − x),  ẏ = ρx − y − xz,  ż = xy − βz.  (1)

Numerical simulations performed by Lorenz showed that the above system exhibits sensitive dependence on initial conditions and has a non-periodic "strange" attractor. Since then, (1) has become a basic example of a chaotic deterministic system that is notoriously difficult to analyse. We refer to [4] for a thorough account of this topic.
It is now well known that a useful technique for analysing the statistical properties of such a flow, and of any nearby flow in the C^2-topology, is to study the dynamics of the flow restricted to a Poincaré section via a well-defined Poincaré map [5]. Such a Poincaré map admits an invariant stable foliation; moreover, it is strictly uniformly contracting along stable leaves [5]. Therefore, the dynamics of the Poincaré map can be understood by quotienting along stable leaves, i.e., by studying the dynamics of its one-dimensional quotient map along stable leaves. This technique has been employed to obtain statistical properties of Lorenz flows [15] and to prove that such statistical properties of this family of flows are stable under deterministic perturbations [3,8,7]. This illustrates the importance of understanding the statistical properties of one-dimensional Lorenz maps.
One-dimensional Lorenz maps have been thoroughly studied in the literature. In [12], which is the main inspiration for this paper, a deterministic Lorenz-like map is studied and exponential decay of correlations is proved. Additionally, [8] examines a perturbed family of Lorenz-like maps with differing singularity points and shows statistical stability. Another example is [2], which examines a family of perturbed contracting Lorenz-like maps, in contrast to the expanding case. There it is shown that the set of points that, by a given time, have not achieved exponential growth of the derivative or slow recurrence to the critical point decays exponentially, which in turn implies further statistical properties.
In this paper, we study rates of mixing for small random perturbations of such one-dimensional maps. We use a random tower construction to prove that, for Hölder observables, the random system admits exponential rates of quenched correlation decay. The paper is organised as follows. In Section 2 we present the setup and introduce the random system under consideration. Section 2 also includes the main result of the paper, Theorem 2.2. Section 3 contains the proof of Theorem 2.2. Section 4 is an Appendix, which contains a version of the abstract random tower result of [9,14] that is used to deduce Theorem 2.2.

Acknowledgment: I would like to thank Marks Ruziboev for our helpful discussions throughout this work. I would also like to thank the anonymous referees, whose comments improved the presentation of the paper.
We express the random dynamics of our system in terms of the skew product S : Ω × I → Ω × I, where S(ω, x) = (σ(ω), T_{ω_0}(x)). Iterates of S are defined naturally as S^n(ω, x) = (σ^n(ω), T^n_ω(x)), where T^n_ω = T_{σ^{n−1}ω} ∘ ⋯ ∘ T_{σω} ∘ T_{ω_0}. To simplify the notation, we denote T_ω = T_{ω_0}. We assume uniform expansion on random orbits: for every ω ∈ Ω,

|DT^n_ω(x)| ≥ C e^{ℓn}  (6)

whenever x ∉ ⋃_{j=0}^{n−1} (T^j_ω)^{−1}(0). Here we are looking at the quenched statistical properties of S; i.e., we study the statistical properties of the system generated by the compositions T^n_ω on I for almost every ω ∈ Ω, which we refer to as {T_ω} without confusion, since the underlying driving process (Ω, σ, P) is fixed. We call a family of Borel probability measures {µ_ω}_{ω∈Ω} on I equivariant if ω ↦ µ_ω is measurable and (T_ω)_* µ_ω = µ_{σω} for P-almost all ω ∈ Ω.
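To make the quenched composition T^n_ω = T_{σ^{n−1}ω} ∘ ⋯ ∘ T_{ω_0} concrete, the following sketch iterates a toy Lorenz-like family T_λ(x) = sign(x)(2|x|^λ − 1) on I = [−1, 1] under an i.i.d. itinerary of singularity orders. The family, the perturbation range [0.55, 0.65], and all numerical values are illustrative assumptions, not the maps studied in this paper.

```python
import random

def lorenz_map(x, lam):
    # Toy Lorenz-like map on I = [-1, 1] with a singularity at 0 (illustrative only):
    # T(x) = sign(x) * (2|x|^lam - 1); |DT(x)| = 2*lam*|x|^(lam-1) >= 2*lam > 1,
    # and the derivative blows up at x = 0, mimicking the Lorenz singularity.
    s = 1.0 if x > 0 else -1.0
    return s * (2.0 * abs(x) ** lam - 1.0)

def quenched_orbit(x0, omega, n):
    """Fibre orbit of the skew product S(omega, x) = (sigma(omega), T_{omega_0}(x)):
    returns x0, T_omega(x0), T^2_omega(x0), ..., i.e. the quenched composition."""
    orbit = [x0]
    for i in range(n):
        orbit.append(lorenz_map(orbit[-1], omega[i]))
    return orbit

random.seed(1)
omega = [random.uniform(0.55, 0.65) for _ in range(200)]  # random singularity orders
orbit = quenched_orbit(0.3, omega, 200)
assert len(orbit) == 201
assert all(-1.0 <= x <= 1.0 for x in orbit)  # the random orbit never leaves I
```

Here the shift σ simply advances the index along the sampled itinerary ω, so fixing ω and composing along it is precisely the quenched point of view.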
For fixed η ∈ (0, 1), let C^η(I) denote the set of η-Hölder functions on I, and let L^∞(I) denote the set of bounded functions on I. The following theorem is the main result of this paper.
Theorem 2.2. The random system {T_ω} admits a unique equivariant family of absolutely continuous probability measures {µ_ω}. Moreover, there exists a constant b > 0 such that for every ϕ ∈ C^η(I) and ψ ∈ L^∞(I) we have

|∫ (ϕ ∘ T^n_ω) ψ dµ_ω − ∫ ϕ dµ_{σ^n ω} ∫ ψ dµ_ω| ≤ C_{ϕ,ψ} e^{−bn}

for some constant C_{ϕ,ψ} > 0 which depends only on ϕ and ψ and is uniform for all ω ∈ Ω.
3. Proof of Theorem 2.2

3.1. Strategy of the proof. We prove Theorem 2.2 by showing that the random system {T_ω} admits a Random Young Tower structure [9,14] for every ω ∈ Ω, with exponential decay for the tail of the return times. Thus, uniform (in ω) tail rates lead to uniform (in ω) exponential decay of correlations. For convenience, we include the random tower theorem of [9,14] in the Appendix, which we refer to in the proof of Theorem 2.2. Similarly to the work of [12,13] in the deterministic setting, to construct a random tower we first construct an auxiliary stopping time, called the escape time E : J → N, on subintervals of I. Roughly speaking, E is a moment of time at which the image of a small interval J′ ⊂ J reaches a fixed length δ > 0 while at the same time T^E_ω(J′) does not intersect a fixed neighborhood ∆_0 of the singularity 0. We show that T^E_ω has good distortion bounds and that |{E > n}| decays exponentially fast as n goes to infinity. The next step is to construct a full return partition Q_ω of some small neighborhood ∆_* ⊂ ∆_0 of the singularity. Once this is obtained, we induce a random Gibbs-Markov map F_ω in the sense of Definition 1.2.1 of [14]. This means we need to show that F_ω : ∆_* → ∆_*, defined as F_ω(x) = T^{τ_ω(x)}_ω(x), where τ_ω : ∆_* → N is a measurable return time function, has the following properties:
• τ_ω|_Q is constant for every Q ∈ Q_ω and every ω ∈ Ω;
• F_ω|_Q : Q → ∆_* is a uniformly expanding diffeomorphism with bounded distortion for every Q ∈ Q_ω and every ω ∈ Ω;
• |{τ_ω > n}| ≤ Be^{−bn} for some constants B > 0, b ∈ (0, 1);
• there exist N_0 ∈ N and two sequences {t_i ∈ N, i = 1, . . ., N_0}, {ε_i > 0, i = 1, . . ., N_0} such that g.c.d.{t_i} = 1 and |{x ∈ ∆_* | τ_ω(x) = t_i}| > ε_i for almost every ω ∈ Ω;
• on the sets {τ_ω = n}, τ_ω depends only on the first n coordinates of ω, i.e. ω_0, . . ., ω_{n−1}.
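As a quick plausibility check for the third bullet, one can simulate first returns to a small interval ∆_* for a toy Lorenz-like family along a fixed itinerary. The map family T_λ(x) = sign(x)(2|x|^λ − 1), the choice δ_* = 0.2, and all thresholds below are illustrative assumptions, not the construction of the paper:

```python
import random

def lorenz_map(x, lam):
    # Toy Lorenz-like map on I = [-1, 1] with a singularity at 0 (illustrative only).
    s = 1.0 if x > 0 else -1.0
    return s * (2.0 * abs(x) ** lam - 1.0)

def first_return_time(x0, omega, delta_star, n_max=10_000):
    """Smallest n >= 1 with T^n_omega(x0) in Delta_* = (-delta_star, delta_star)."""
    x = x0
    for n in range(1, n_max + 1):
        x = lorenz_map(x, omega[(n - 1) % len(omega)])
        if abs(x) < delta_star:
            return n
    return None

random.seed(2)
omega = [random.uniform(0.55, 0.65) for _ in range(10_000)]  # one fixed itinerary (quenched)
delta_star = 0.2
returns = [first_return_time(random.uniform(-delta_star, delta_star), omega, delta_star)
           for _ in range(500)]
assert all(r is not None for r in returns)                  # return times are finite on the sample
assert sum(r > 50 for r in returns) / len(returns) < 0.2    # the tail |{tau > n}| dies off quickly
```

The fraction of long returns shrinks rapidly with n, consistent with an exponential tail; of course the actual proof proceeds via the escape-time and full-return constructions below, not by simulation.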
Let J_0 ⊂ I be an interval such that either J_0 ∩ ∆_0 = ∅ and |J_0| ≥ δ/5, or J_0 = ∆_0. We wish to construct an escape partition P_ω(J_0) of J_0, together with a stopping time function E_ω : J_0 → N, which we call the escape time, that has the following properties for every ω ∈ Ω:
• for every J ∈ P_ω(J_0) and every time j ≤ E_ω(J) such that T^j_ω(J) ∩ ∆_0 ≠ ∅, the image T^j_ω(J) does not intersect more than three adjacent intervals of the form I_{r,m}, |r| ≥ r_0, m = 1, . . ., r^ϑ.
We have the following proposition.

Proposition 3.1. Let δ > 0 be our previously chosen constant, and consider J_0 ⊂ I as described above. For every ω ∈ Ω there exist a countable partition P_ω(J_0) and an escape time E_ω : J_0 → N satisfying the properties above. Furthermore, there exist constants C_δ > 0, γ > 0 such that for any time n ∈ N we have |{x ∈ J_0 : E_ω(x) > n}| ≤ C_δ e^{−γn}.

We prove the proposition in a series of lemmas. It is important to keep distortion under control, which means that we have to keep track of visits of the orbits near the singularity. To this end, we use a chopping algorithm following [10].
Let us fix ω ∈ Ω and n ≥ 1, and suppose that the elements J_* ∈ P_ω(J_0) satisfying E_ω(J_*) < n have already been constructed. Let J ⊂ J_0 be an interval in the complement of {E_ω < n}. If T^k_ω(J) ∩ ∆_0 = ∅, we call k a free iterate for J. Now, let us consider the following cases, depending on the position and length of J_{ω,k} = T^k_ω(J), k ∈ N.
Chopping times. Suppose that |J_{ω,k}| < δ, that ∆_0 ∩ J_{ω,k} ≠ ∅, and that J_{ω,k} intersects more than three adjacent intervals I_{r,m}. Then k is called an essential return time for J. In this case we chop J into pieces J_{ω,r,j} ⊂ J such that I_{r,j} ⊂ T^k_ω(J_{ω,r,j}) ⊂ Î_{r,j}, where Î_{r,j} is the union of I_{r,j} and its two neighbours. In this case, we say that J_{ω,r,j} has the associated return depth r_ω = min{|r| : T^k_ω(J_{ω,r,j}) ∩ I_r ≠ ∅}, and we refer to J as the ancestor of J_{ω,r,j}.
Escape times. Suppose that k is the first free iterate for J such that |J_{ω,k}| ≥ δ. We then set E_ω(J) = k and add J to P_ω(J_0); we call J_{ω,k} an escape interval. Notice that the partition will depend on ω.
The proof of Proposition 3.1 closely follows the approach of [12]. Suppose that J ∈ P_ω(J_0) and that J had s returns before escaping. Let us denote by r_i the return depth of the i-th return of J. Note that r_i depends on ω, but for ease of notation this is not denoted explicitly. We define the total return depth as R_ω(J) = Σ_{i=1}^{s} r_i.

Lemma 3.2. There exists λ < 1/2, depending on δ, such that for any

Proof. Suppose that R_ω(J) = R and that J has s returns before escaping. Let d_0 be the number of iterates before the first return, and let d_1, . . ., d_{s−1} be the free iterates between successive returns or before escaping. By the mean value theorem, |T^{E_ω(J)}_ω(J)| = |DT^{E_ω(J)}_ω(ξ)| |J| for some ξ ∈ J. Notice that by the chain rule, DT^{E_ω(J)}_ω(ξ) factors into the derivatives along the blocks of free iterates and the derivatives at the return times, where ν_i is the i-th return time for i = 1, . . ., s and ν_0 = 0. Notice that for every i = 0, . . ., s, every x ∈ I and every ω ∈ Ω, we have DT^{d_i}_ω(x) ≥ C e^{ℓ d_i} by the expansion property (6). Likewise, since T^{ν_i}_ω(x) returns for all x ∈ J, we can write DT_{σ^{ν_i}ω}(T^{ν_i}_ω(x)) ≥ C e^{−(1−λ)r_i} for all x ∈ J by the order of the singularity condition (2). Combining these two inequalities yields the desired bound, with Ĉ = min{C^{−1}, C^{(s+1)/s}}.
Recall that r_i ≥ r_0, where e^{−r_0} = δ. Thus R_ω(J) ≥ s r_0. The above lemma allows us to prove the following exponential estimate.

Lemma 3.3. Let J_0 ⊂ I and let R_ω : J_0 → N be the sum of the return depths, for all ω ∈ Ω. Then the measure of the set of points with total return depth k decays exponentially in k.

Proof. Let N_k denote the set of all sequences of return depths (r_1, . . ., r_s) with s ≥ 1 whose total return depth is k. Then from [11, Lemma 3.4] we know that for sufficiently large k the cardinality of N_k grows at most at a small exponential rate in k. Furthermore, we know that for any given sequence of return depths (r_1, . . ., r_s) there can be at most two escaping intervals. Thus, combining these facts with Lemma 3.2, we obtain the claimed bound, where in the last inequality we have used that R_ω(J) ≥ s r_0.

The following lemma relates the total return depth and the escape time.
Lemma 3.4. Let J ∈ P_ω(J_0) and let R = (r_1, . . ., r_s) be its associated sequence of return depths. Furthermore, let R_ω(J) = Σ_{i=1}^{s} r_i. Then we have

Proof. Since intervals are not chopped at inessential return times, we distinguish between essential and inessential return times. Thus, let R_ω(J) = R^e_ω(J) + R^{ie}_ω(J) be the corresponding splitting into essential and inessential total return depths, respectively. Furthermore, let K^e = {ν^e_1 < · · · < ν^e_q} be the ordered set of essential return times, and define D^e(J) = d^e_0 + · · · + d^e_q, where d^e_0 is the number of free iterates before the first essential return, each d^e_i is the number of free iterates between consecutive essential return times, and d^e_q is the number of free iterates after the last essential return and before the escape time. Likewise, let D(J) be the corresponding sum of the d_i's, where the ν_i's and d_i's were already defined in the proof of Lemma 3.2. Notice that if d_{i_j} is the number of free iterates between ν^e_{j+1} and the previous return before ν^e_{j+1}, essential or inessential, for some i_j = 1, . . ., s − 1, then d^e_j ≥ d_{i_j} for all j = 0, . . ., q − 1. But also notice that D^e(J) = D(J); thus q ≤ s.
Recall from the definition of chopping times that we say an interval J′ ⊃ J is the ancestor of J if J is obtained by chopping J′ at an essential return time. In fact, we can write a sequence of subsets of the form J = J^{(q)} ⊂ J^{(q−1)} ⊂ · · · ⊂ J^{(1)} ⊂ J^{(0)} such that J^{(0)} is our starting interval and J^{(i)} is obtained by chopping J^{(i−1)} at the i-th essential return time. We call J^{(i)} the ancestor of J of order i. Now, let J^{(i−1)} be the ancestor of J of order i − 1, obtained at the essential return time ν^e_{i−1}. We claim that

where r^e_i is the return depth of the i-th essential return. Indeed, the first inequality holds by the definition of an essential return. For the second inequality, one can easily show this by calculating the ratio |I_{r+1}|/|I_r|, which is a constant not depending on r, and then taking the geometric sum.
Let ρ_0 be the number of inessential returns before the first essential return, let ρ_i denote the number of inessential returns between ν^e_i and ν^e_{i+1} for i = 1, . . ., q − 1, and let ρ_q denote the number of inessential returns after the last essential return and before the escape time. Now, using the above inequality we have a bound of the form e^{ℓ d^e_i − r^e_i} for i = 1, . . ., q, where we set ν^e_{q+1} = E_ω(J) and use the constants C and Ĉ from the previous estimates. If CĈ^{−1} < 1, then we can simply choose an even larger r_0 to cancel these terms out; otherwise, if CĈ^{−1} ≥ 1, then we can simply remove these terms. Thus, we obtain 1 ≥ e^{ℓ d^e_i − r^e_i}, and taking the logarithm of both sides we obtain 0 ≥ ℓ d^e_i − r^e_i, and thus d^e_i ≤ r^e_i/ℓ for i = 1, . . ., q. Similarly for d^e_0, an analogous inequality holds, and using a similar argument to the one above we obtain d^e_0 ≤ r^e_1/ℓ. Note, by assumption, that there must be (s − q) inessential returns and that there must be (q + 1) iterates at which a return or the escape happens. Thus, using the facts that R_ω(J) ≥ s and R^e_ω(J) ≤ R_ω(J), we obtain the claimed bound relating E_ω(J) and R_ω(J). Finally, we are ready to prove Proposition 3.1.
Combining this with Lemma 3.3 yields the exponential estimate of Proposition 3.1. Notice that we need to fix δ > 0, since C_δ = O(δ^{−1}). Thus, P_ω(J_0) defines a partition of J_0, and every element of the partition is assigned an escape time. Notice that the way we constructed the escape time immediately implies that for every J ∈ P_ω(J_0) and every time j ≤ E_ω(J) such that T^j_ω(J) ∩ ∆_0 ≠ ∅, the image T^j_ω(J) does not intersect more than three adjacent intervals of the form I_{r,m}, |r| ≥ r_0, m = 1, . . ., r^ϑ.
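The counting of return-depth sequences behind Lemma 3.3 (via [11, Lemma 3.4]) can be checked numerically: the number of sequences (r_1, . . ., r_s) with r_i ≥ r_0 summing to k grows at an exponential rate that decreases as r_0 increases. A small dynamic-programming sketch; the concrete values r_0 = 5, the comparison bound 1.4^k, and the range of k are illustrative assumptions:

```python
def count_depth_sequences(k, r0):
    """Number of sequences (r_1, ..., r_s), s >= 1, with r_i >= r0 and
    r_1 + ... + r_s = k, computed by dynamic programming over the total k."""
    c = [0] * (k + 1)
    c[0] = 1  # seed: the empty sequence, used only to build up the counts
    for n in range(1, k + 1):
        c[n] = sum(c[n - r] for r in range(r0, n + 1))
    return c[k]

# With minimal depth r0 = 5 the count grows roughly like 1.33^k, comfortably
# below 1.4^k ~ e^{0.34 k}; a larger r0 gives a strictly smaller count.
assert count_depth_sequences(10, 5) == 2            # (5,5) and (10)
assert all(count_depth_sequences(k, 5) <= 1.4 ** k for k in range(1, 61))
assert count_depth_sequences(60, 10) < count_depth_sequences(60, 5)
```

Raising the minimal depth r_0 (that is, shrinking δ = e^{−r_0}) thins out the admissible sequences, which is exactly why the exponential rate in the lemma can be made as small as needed.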
3.3. Bounded distortion. In this subsection we prove that T^n_ω has bounded distortion on every interval J such that T^k_ω(J) does not intersect more than three adjacent intervals I_{r,m} for all 1 ≤ k ≤ n. Therefore, the escape map in Proposition 3.1 has bounded distortion.
We prove the following lemma.

Lemma 3.5. For fixed ω ∈ Ω, consider an interval J ⊂ I such that |J| < δ and for which there exists n_J ∈ N such that for every 0 ≤ k ≤ n_J − 1 either J_{ω,k} ∩ ∆_0 = ∅ or J_{ω,k} is contained in at most three intervals I_{r,m}. There exists a constant D = D(δ), independent of ω, such that for every ω and every such J described above, we have

Proof. Let ω ∈ Ω. If we set x_{ω,i} = T^i_ω(x), then by using the chain rule and log(1 + x) ≤ x for x ≥ 0 we obtain a sum over iterates, where λ_{σ^i ω} ∈ [λ_0, λ] is the order of the singularity associated with the map T_{σ^i ω}. Combining the known bounds on these quantities and setting κ = KC, we obtain the following. Let J = [x, y] ⊂ I, and let K = {ν_1 < ν_2 < · · · < ν_i < · · · < ν_p} be the ordered set of all returns of J under the dynamics of T^k_ω from k = 0 to k = n_J − 1. Note that, since we have already assumed there are no essential returns at or before n_J − 1, all of these returns must be inessential. The largest possible size for J_{ω,ν_i} = T^{ν_i}_ω(J) occurs when it is contained in one interval of the form I_{r_i,m} and two of the form I_{r_i−1,m}; thus we obtain a corresponding length bound. Let us examine time k. There are two possibilities. We start with the case k ≤ ν_p + 1. Let ν_{s(k)} denote the last return time before k, and let us split the sum in (9) accordingly, where the first sum is over all times up to ν_{s(k)}, excluding return times, the second sum is over the times after ν_{s(k)}, and the third sum is over the return times.
For the first sum, we use the fact that our expansion condition implies the corresponding bound. We note that D_1 is independent of both k and δ.
For the second sum, we have a similar bound. Finally, for the third sum, we define the set K_r = {ν_{η_1} < ν_{η_2} < · · · < ν_{η_i} < · · · < ν_{η_q}} such that the associated return depths satisfy r_{η_i} = r for all ν_{η_i} ∈ K_r. Let M(r) denote the maximum value of K_r. Then the contribution of K_r is bounded by a constant multiple of Σ_i e^{−ℓ(M(r)−i)α} r^{ϑα}, using (14). This concludes the proof in the case k ≤ ν_p + 1.
If we instead have ν_p + 1 < k, then the above argument no longer works. This is because, after ν_p, the upper bound |J_{ω,i}| < δ need not hold, so long as the i-th iterate does not intersect ∆_0. Instead, we use the fact that, since J_{ω,i} does not intersect ∆_0, we have |J_{ω,i}| < 1 − δ, which yields the corresponding estimate.

To summarize the last two subsections: for a chosen δ > 0 we can construct an escape partition and an escape time function E_ω whose tails decay exponentially. Furthermore, since |J| < δ for every J ∈ P_ω(J_0), the above bounded distortion result holds on each J ∈ P_ω(J_0).
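The bounded distortion statement can be tested numerically on a toy Lorenz-like family T_λ(x) = sign(x)(2|x|^λ − 1) on [−1, 1] (an illustrative stand-in for the maps of the paper): for two nearby points whose orbits stay together, the chain-rule accumulation of log-derivative differences stays uniformly small. The family, the perturbation range, and the tolerance are assumptions of this sketch:

```python
import math
import random

def lorenz_map(x, lam):
    # Toy Lorenz-like map with a singularity at 0 (illustrative only).
    s = 1.0 if x > 0 else -1.0
    return s * (2.0 * abs(x) ** lam - 1.0)

def d_lorenz_map(x, lam):
    # |DT_lam(x)| = 2*lam*|x|^(lam-1); blows up at the singularity x = 0.
    return 2.0 * lam * abs(x) ** (lam - 1.0)

def log_distortion(x, y, omega, n):
    """log |DT^n_omega(x) / DT^n_omega(y)| accumulated via the chain rule."""
    acc = 0.0
    for i in range(n):
        acc += math.log(d_lorenz_map(x, omega[i])) - math.log(d_lorenz_map(y, omega[i]))
        x, y = lorenz_map(x, omega[i]), lorenz_map(y, omega[i])
    return acc

random.seed(3)
omega = [random.uniform(0.55, 0.65) for _ in range(30)]
# Two very close points well away from the singularity at 0.
d = log_distortion(0.3, 0.3 + 1e-12, omega, 20)
assert abs(d) < 1.0  # the distortion factor stays within e^{+-1} along this block
```

The point of the estimate in the text is precisely that such sums stay bounded as long as each visit near the singularity lands in a controlled number of the intervals I_{r,m}.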

3.4. Full return partition. Below we construct the full return partition of some neighborhood of 0. For this we use the following lemma.

Lemma 3.6. Let δ > 0 be our previously chosen constant. Then there exist a sufficiently small δ_* > 0 and some t_* ∈ N such that for every ω ∈ Ω and every interval J with |J| ≥ δ and J ∩ ∆_0 = ∅, there exists J̃ ⊂ J with the following properties:
(i) there exists t < t_* such that T^t_ω : J̃ → ∆_* = (−δ_*, δ_*) is a diffeomorphism;
(ii) both components of J \ J̃ have size greater than δ/5;
(iii) |J̃| ≥ β|J|, where β is a uniform constant.
Proof. By assumption (A3) on the unperturbed map, we know that ⋃_{n∈N} T_0^{−n}(0) is dense in I. Since all the maps in A(ε) are close to T_0, for any small constant δ̂ > 0 there exists an iterate t_{δ̂} ∈ N such that ⋃_{n≤t_{δ̂}} (T^n_ω)^{−1}(0) is δ̂-dense in I and uniformly bounded away from 0 for all ω. Let us fix some 0 < δ̂ < δ/5, thereby also fixing some t_{δ̂}. Then the following hold:
(a) if we take the subinterval in the middle of J with length δ̂, then there is a point x_* in it which is a preimage of 0, i.e. there exists some t ≤ t_{δ̂} such that T^t_ω(x_*) = 0;
(b) there exists δ_* sufficiently small such that any connected component of (T^n_ω)^{−1}(∆_*), where ∆_* = (−δ_*, δ_*), has length less than δ̂ for any n ≤ t_{δ̂}, and is uniformly bounded away from 0 for all ω ∈ Ω;
(c) the distance from x_* to the boundary of J is larger than δ/5.
To show that there is a lower bound β for this, we need to show that there is an upper bound for DT^t_ω(ξ_1). Recall that |J| ≥ δ, that T_ω is expanding, and that both components of J \ J̃ have size greater than δ/5. For t = 1, the lower bound is automatic, and thus DT^t_ω(ξ_1) has an upper bound. This completes the proof.

The above lemma implies that, since each escape interval has length at least δ, each escape interval contains a subinterval which maps bijectively onto ∆_* within a uniformly bounded number of iterates. We use this property repeatedly in order to construct a full return partition Q_ω(∆_*) of ∆_* = (−δ_*, δ_*); i.e., for every ω ∈ Ω we construct a countable partition Q_ω(∆_*) such that for every J ∈ Q_ω(∆_*) there exists an associated return time τ_ω(J) ∈ N such that T^{τ_ω(J)}_ω : J → ∆_* is a diffeomorphism.¹ To this end, we start with an exponential partition of ∆_* of the form P(∆_*) = {I_{r,m}}_{|r|≥r_*}, where each I_{r,m} is defined as at the beginning of Subsection 3.2. We construct an escape partition on each I_{r,m}, which naturally induces a partition of ∆_*, denoted P_{ω,1}(∆_*). Thus, every J ∈ P_{ω,1}(∆_*) satisfies J ⊂ I_{r,m} for some r and m and has an associated escape time E_ω(J). Lemma 3.6 implies that each J ∈ P_{ω,1}(∆_*) contains a subinterval J̃ that maps diffeomorphically onto ∆_* within a number of iterates bounded above by E_ω(J) + t_*. Thus, in order to construct the next partition P_{ω,2}, we divide J into J̃ and the components of J \ J̃. To each of these we assign the first escape time E_{ω,1} = E_ω(J), and we assign to the returning subinterval J̃ the return time τ_ω(J̃) = E_{ω,1}(J̃) + t(J̃), where t(J̃) is the number of iterates such that T^{t(J̃)}(J̃) = ∆_*. We place J̃ in P_{ω,2}, and it will remain unchanged in all subsequent P_{ω,k}'s.
Next, we apply the escape partition algorithm to the connected components of T^{E_{ω,1}}_ω(J \ J̃). This is possible since Lemma 3.6 guarantees that these components have size greater than δ/5. Let J_{NR} denote one of the non-returning components of J \ J̃, and let K ⊂ J_{NR} be the preimage of a subinterval obtained after applying the escape partition algorithm to T^{E_{ω,1}}_ω(J_{NR}). We place K in P_{ω,2}, and again by Lemma 3.6 there exists K̃ ⊂ K that also maps diffeomorphically onto ∆_* after a bounded number of iterates. We divide K into K̃ and the components of K \ K̃, to each of these we assign the second escape time E_{ω,2}, and to K̃ we assign the return time τ_ω(K̃) = E_{ω,2}(K̃) + t(K̃). We then repeat this process ad infinitum, by 1) taking each non-returning L ∈ P_{ω,k} (for some k); 2) chopping L into its returning component and non-returning components; 3) assigning to each of these the k-th escape time E_{ω,k}; 4) assigning to the returning component L̃ the return time τ_ω(L̃) = E_{ω,k}(L̃) + t(L̃) and placing L̃ in P_{ω,k+1}; and 5) performing the escape partition on the remaining non-returning components and placing the resulting subintervals in P_{ω,k+1}.
Note that the i-th escape time of a non-returning interval, i ≤ k, will be the same as the escape time of its i-th ancestor.
Finally, using the above we define the full partition Q_ω as the set of all possible intersections of elements of the P_{ω,k}'s. Below we show that Q_ω defines a full-measure partition of ∆_*, and we set τ_ω(J) = E_{ω,k}(J) + t(J) for every J ∈ Q_ω.
Remark 3.7. One must note that in the above algorithm, when constructing P_{ω,k}, the choice of J̃ ⊂ J which maps diffeomorphically onto ∆_* is not necessarily unique. However,

¹ Notice that in principle we need to distinguish between different fibers and consider T_ω : I × {ω} → I × {σω}. Then we obtain the inducing domain at fiber ω as ∆_*^ω = ∆_*, and the induced map is defined fiberwise. This extension does not cause any problems, since all the estimates are uniform in ω.
we also want that if τ_ω(J) = k for some J ∈ Q_ω, then τ_ω depends only on the first k − 1 elements of ω. Furthermore, we want that if ω and ω′ share their first k − 1 entries, then τ_ω(J) = k′ ≤ k for some J ∈ Q_ω if and only if τ_{ω′}(J) = k′. To ensure this, for any ω's that share the first k − 1 entries, we require that the same J̃ ⊂ J be chosen for all such ω when defining τ_ω(J̃) ≤ k.
The following lemmas are useful for us.

Lemma 3.8. Let J′ ⊂ J ∈ P_{ω,i} be a non-returning subinterval of J which has had its i-th escape. Then we have inequality (16).

Proof. Since J ∈ P_{ω,i}, we know that T^j_ω(J) is contained in at most three intervals of the form I_{r,m} for j ≤ E_{ω,k}, which means we can apply our bounded distortion Lemma 3.5. We make use of the following property of bounded distortion: if T_ω has bounded distortion with distortion constant D, then for any intervals A ⊂ I and B ⊂ I we have inequality (17), where k ≤ min{n_A, n_B}. Here n_A ∈ N is the constant such that for every 0 ≤ k ≤ n_A − 1, either T^k_ω(A) does not intersect ∆_0 or T^k_ω(A) is contained in at most three intervals of the form I_{r,m}, and n_B denotes the same condition for B. Indeed, since T_ω is differentiable, we know from the mean value theorem that there exist ξ_1 ∈ A, ξ_2 ∈ B such that |T^k_ω(A)| = |DT^k_ω(ξ_1)| |A| and |T^k_ω(B)| = |DT^k_ω(ξ_2)| |B|. Thus |A|/|B| = (|DT^k_ω(ξ_2)|/|DT^k_ω(ξ_1)|)(|T^k_ω(A)|/|T^k_ω(B)|), and rearranging this and using the distortion bound we obtain inequality (17). Using this, we can bound |J′| by a constant multiple of |J|, using (17).

An important corollary of this is the following.

Corollary 3.9. We denote by Q_ω(E_{ω,1}, . . ., E_{ω,i}) the set of all J ∈ Q_ω such that J has escape times {E_{ω,1} < E_{ω,2} < · · · < E_{ω,i} < n} and whose (i + 1)-th escape time is after n. Then we have the following bound.

Proof. For each J ∈ Q^{(n)}_ω(E_{ω,1}, . . ., E_{ω,i}) there exists a sequence of ancestors J ⊂ J^{(i)} ⊂ J^{(i−1)} ⊂ · · · ⊂ J^{(2)} ⊂ J^{(1)} ⊂ J^{(0)} = I_{r,m}. Note that we can write the measure of J as a telescoping product over these ancestors, where we set E_{ω,0}(J^{(0)}) = 0.
Let us define the relevant constants and then apply (16) recursively; by recursion we obtain the claimed product bound. Furthermore, we make use of the following lemma, whose proof can be found in [11, Lemma 3.4].

Lemma 3.10. Let η ∈ (0, 1) and R_{k,q} = {(n_1, n_2, . . ., n_q) : n_i ≥ 1 ∀ i = 1, . . ., q, Σ_{i=1}^{q} n_i = k}. Then for q ≤ ηk there exists a positive function η̂(η), with η̂(η) → 0 as η → 0, such that #R_{k,q} ≤ e^{η̂(η)k}.

3.5. The tail of the return times. To establish mixing rates, we need to obtain decay rates for the tails of the return times. By construction, the tail of the return times depends on the number of escape times that occurred before returning. We have the following lemma.

Lemma 3.11. Fix ω ∈ Ω and let (n_1, n_2, . . ., n_i) be the sequence of escape times of an interval J ∈ Q_ω before time n ≥ 1, such that Σ_{j=1}^{i} n_j = n. Then there exist constants B, b > 0, uniform in ω, such that the corresponding tail is bounded by Be^{−bn}.

Proof. We decompose Q^{(n)} into two sums, according to whether the interval has few escapes (< ζn) or many escapes (> ζn), where ζ denotes the chosen proportion of the total time n. For the many escapes, we have a bound in terms of β, where we recall that β, defined in Lemma 3.6, is the uniform minimum proportion between the size of the returning subinterval and that of its escape interval. Furthermore, for big enough i we have a bound with γ_β = ζ log((1 − β)^{−1}) and C_4 = 2/β. For the intervals that have few escapes, the resulting constant can grow exponentially if C_3 ≥ 1, so to avoid this we choose a sufficiently small ζ. Combining this with the inequality for the many escapes, we obtain the stated bound with positive constants B and b, provided we have chosen ζ small enough.
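The counting estimate of Lemma 3.10 is a standard entropy bound: compositions of k into at most ηk parts number at most e^{η̂(η)k}, with η̂(η) → 0 as η → 0. A quick numerical confirmation using the binary entropy as the rate function; the concrete values of k and η are arbitrary choices for this sketch:

```python
import math

def tuples_with_few_parts(k, eta):
    """# of (n_1, ..., n_q) with n_i >= 1 and sum n_i = k, over all q <= eta*k.
    Compositions of k into exactly q positive parts number C(k-1, q-1)."""
    qmax = int(eta * k)
    return sum(math.comb(k - 1, q - 1) for q in range(1, qmax + 1))

def binary_entropy(p):
    # H(p) = -p log p - (1-p) log(1-p); serves as the rate eta_hat(eta)
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

assert tuples_with_few_parts(10, 0.35) == 46  # q <= 3: C(9,0)+C(9,1)+C(9,2) = 1+9+36
k, eta = 200, 0.1
assert tuples_with_few_parts(k, eta) <= math.exp(binary_entropy(eta) * k)
```

Since H(η) → 0 as η → 0, the number of admissible escape-time configurations is negligible compared with the exponential gain from the tail estimates, which is exactly how the lemma is used in the proof of Lemma 3.11.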
Thus, for every ω ∈ Ω, τ_ω(x) is defined and finite for a.e. x ∈ ∆_*. It should be emphasized that the rates of decay of the return times are exponential, independently of ω. Furthermore, for every ω ∈ Ω, if J ∈ Q_ω has escape times (n_1, n_2, . . ., n_i) before returning, then the return time satisfies τ_ω(J) < n_i + t_*, where t_* is the constant from Lemma 3.6, which is independent of ω.
Proof. Let us define the refined partitions Q^n_ω so that for every x, y ∈ J_n ∈ Q^n_ω, the iterates F^i_ω(x) and F^i_ω(y) stay in the same element of the partition. We show that there exists κ ∈ (0, 1) such that the desired inequality holds for all ω ∈ Ω. We prove this inequality via induction. For n = 1, notice that Q^1_ω = Q_ω and proceed as follows: fix ω ∈ Ω and let J_2 ∈ Q^2_ω be such that the required inclusion holds. Thus, we obtain the base estimate, and iterating the process we obtain (25). Consider x, y ∈ J_1 with n = s_ω(x, y) and F_ω(x), F_ω(y) ∈ J_n ∈ Q^n_ω, and suppose that J_n ⊂ J ∈ Q_ω. By equation (17) and Lemma 3.5 we have the corresponding bound for all k ≤ n and ω ∈ Ω. Thus, proceeding as in the proof of Lemma 3.5, we obtain the claim for suitable constants D and β.
3.7. Aperiodicity. Finally, we address the problem of aperiodicity; i.e., we show that there exist N_0 ∈ N and two sequences {t_i}, {ε_i} as in Subsection 3.1. To show this, we recall that the original Lorenz system, and all sufficiently close systems, are mixing [18]. Thus, we can proceed as in [1, Remark 3.14]. Since the unperturbed map admits a unique invariant probability measure, it can be lifted to the induced map over ∆_* constructed following the algorithm in the previous two subsections. Moreover, the lifted measure is invariant and mixing for the tower map. Therefore, there exist a partition Q_0 of ∆_* and a return time τ_0 : ∆_* → N, with τ^0_i = τ_0(Q_i) for Q_i ∈ Q_0, such that g.c.d.{τ^0_i}_{i=1}^{N_0} = 1 for some N_0 > 1. Now, by shrinking ε if necessary, we can ensure that the first N_0 elements of the partition persist for all i = 1, . . ., N_0 and for all ω ∈ Ω. Thus, we may take ε_i = |Q_i|/2. Notice that we define only finitely many domains in this way; therefore the tail estimates, distortion, etc. are not affected, and we can stop at some ε_0 > 0, thereby proving aperiodicity.
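The role of the g.c.d. condition can be illustrated with a small numerical-semigroup computation: once return times with g.c.d. 1 are available, every sufficiently large integer is realised as a sum of return times, which is what aperiodicity of the tower requires. The concrete times {2, 3} below are only an example, not values from the construction:

```python
from functools import reduce
from math import gcd

def representable(n, times):
    """Can n be written as a nonnegative integer combination of the given times?"""
    reach = [False] * (n + 1)
    reach[0] = True
    for m in range(1, n + 1):
        reach[m] = any(m >= t and reach[m - t] for t in times)
    return reach[n]

times = [2, 3]                    # illustrative return times t_i with gcd 1
assert reduce(gcd, times) == 1
# Every n >= 2 is a sum of 2's and 3's, so there is no periodic obstruction.
assert all(representable(n, times) for n in range(2, 50))
assert not representable(1, times)
```

By contrast, if the g.c.d. were d > 1, only multiples of d would be reachable, and the tower map could not be mixing.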
where JF^{τ_ω}_ω denotes the Jacobian of F^{τ_ω}_ω. (C4) Weak forward expansion: the diameters of the partitions ⋁_{j=0}^{n} (F^j_ω)^{−1} Z_{σ^j ω} tend to zero as n → ∞. For almost every ω, let K_ω : Ω → R_+ be a random variable which satisfies inf_Ω K_ω > 0 and the integrability condition with u, v > 0. We then define the spaces L^{K_ω}_∞ and F^{K_ω}_γ and assign to them the corresponding norms. We can now apply the following theorem from [14,9].

Theorem 4.1. Let F_ω satisfy (C1)-(C6), and let K_ω satisfy the above condition. Then for almost every ω ∈ Ω there exists an absolutely continuous F_ω-equivariant probability measure ν_ω = h_ω m on ∆_ω, satisfying (F_ω)_* ν_ω = ν_{σω}, with h_ω ∈ F^+_γ. Furthermore, there exists a full-measure subset Ω_2 ⊂ Ω such that for every ω ∈ Ω_2, ϕ_ω ∈ L^{K_ω}_∞ and ψ_ω ∈ F^{K_ω}_γ, there exists a constant C_{ϕ,ψ} such that for all n we have

|∫ (ϕ_{σ^n ω} ∘ F^n_ω) ψ_ω dm − ∫ ϕ_{σ^n ω} dν_{σ^n ω} ∫ ψ_ω dm| ≤ C_{ϕ,ψ} e^{−bn}  (32)

and the analogous estimate (33). Usually there would be a constant C_ω, depending on ω, in front of the right-hand side of both (32) and (33), because there is often a waiting time, depending on ω, before one sees exponential tails of the return times. However, since we have uniform tails in (C5), in the sense that there is no waiting time and we immediately have exponential tails of returns, we get a uniform constant B > 0 instead of C_ω, which can simply be absorbed into C_{ϕ,ψ}.