Critical Intermittency in Random Interval Maps
UvA-DARE (Digital Academic Repository)

Abstract: Critical intermittency is a type of intermittent dynamics in iterated function systems, caused by the interplay of a superstable fixed point and a repelling fixed point. We consider critical intermittency for iterated function systems of interval maps and demonstrate the existence of a phase transition when the probabilities are varied, at which the absolutely continuous stationary measure changes between finite and infinite. We discuss further properties of this stationary measure and show that its density is not in L^q for any q > 1. This provides a theory of critical intermittency alongside the theory for the well-studied Manneville–Pomeau maps, where the intermittency is caused by a neutral fixed point.


Introduction
Intermittency refers to the behaviour of a dynamical system that alternates between long periods during which it exhibits one of several distinct types of dynamical behaviour. In their seminal paper [32] Manneville and Pomeau investigated intermittency in the context of transitions to turbulence in convective fluids, see also [27,12], and distinguished several different types of intermittency. An illustrative example of a one-dimensional map with intermittent behaviour is the Manneville–Pomeau map T : [0, 1] → [0, 1], x ↦ x + x^{1+α} (mod 1) for some α > 0. The source of intermittency for this map is the presence of a neutral fixed point at the origin, which causes orbits to spend long periods of time close to zero, while behaving chaotically once they escape.
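The laminar phases near the neutral fixed point can be seen in a short numerical experiment. The following sketch (ours, not from the paper; the value α = 0.5 and the threshold eps = 0.05 are illustrative choices) iterates the Manneville–Pomeau map and measures the fraction of time an orbit spends near 0:

```python
def mp_map(x, alpha=0.5):
    """One step of the Manneville-Pomeau map x -> x + x^(1+alpha) mod 1."""
    return (x + x ** (1.0 + alpha)) % 1.0

def laminar_fraction(x0, n, eps=0.05):
    """Fraction of the first n iterates lying within eps of the neutral fixed point 0."""
    x, count = x0, 0
    for _ in range(n):
        if x < eps:
            count += 1
        x = mp_map(x)
    return count / n

frac = laminar_fraction(0.3, 20000)
```

For small α the orbit repeatedly re-enters a neighbourhood of 0 after each chaotic burst and then creeps away slowly, which is exactly the alternation described above.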
The dynamics of the Manneville–Pomeau map and similar maps with a single neutral fixed point have been extensively studied over the past decades. It is known, for example, that such maps admit an absolutely continuous invariant measure (acim) and that their statistical properties are determined by the characteristics of the fixed point. See [26,30,35,19,20,11,10,17] for results on Manneville–Pomeau type maps, and [33,15,22,31,36] for other related results on one-dimensional systems with neutral fixed points.
Intermittency caused by neutral fixed points was also studied in random dynamical systems, see e.g. [7,6,23,8,9,24]. These results show that a random dynamical system, built as a mixture of 'good' maps with finite acims and 'bad' maps with slower mixing rates or without finite acims, inherits ergodic properties typical of the 'good' maps: e.g., the random system still admits a unique finite acim. On the other hand, it is clear that in a random mixture of good and bad maps, the presence of the bad maps should be visible in the properties of the acim. In [24] it was shown that in the random system built using the 'good' Gauss and 'bad' Rényi continued fraction maps, the density of the acim is provably less smooth than the invariant density of the Gauss map. This loss of smoothness is an interesting new phenomenon, which deserves further study.
The topic of the present paper is another type of intermittency observed in random dynamical systems: the so-called critical intermittency introduced recently in [2,21]. To illustrate the concept, consider the Markov process generated by random applications of one of the two logistic maps T_2(x) = 2x(1 − x) and T_4(x) = 4x(1 − x), where at each time step one of the two maps is applied, chosen independently of the past. The dynamics of these two maps individually is quite different: T_4 exhibits chaotic behaviour and admits an ergodic absolutely continuous invariant probability measure, while T_2 has 1/2 as a superattracting fixed point with (0, 1) as its basin of attraction. Under random compositions of T_2 and T_4 the typical behaviour is the following: orbits are quickly attracted to 1/2 by applications of T_2 and are then repelled first close to 1 and then close to 0 by one application of T_4 followed by an application of either T_2 or T_4. Since 0 is a repelling fixed point for both maps, orbits then leave a neighbourhood of 0 after a number of time steps, see Figure 1. This pattern occurs infinitely often in typical random orbits and is the result of the interplay between the exponential divergence from 0 under T_2 and T_4 and the superexponential convergence to 1/2 under T_2. Figure 2(c) shows an orbit under random compositions of T_2 and T_4, as well as an orbit of a point under a Manneville–Pomeau map in (a) and a random orbit under compositions of the Gauss and Rényi maps in (b). The dynamical behaviour of random compositions of the two logistic maps T_2 and T_4 was studied in [4,5,2,14,21] among others. In [2,21] the authors investigated the existence and finiteness of absolutely continuous invariant measures for this random system and for iterated function systems consisting of rational maps on the Riemann sphere. One particular result from [2] states that the random dynamical system generated by i.i.d. compositions of T_2 and T_4 chosen with probabilities p_2 and p_4 = 1 − p_2 admits an absolutely continuous invariant
measure that is σ-finite on the interval [0, 1] and that is infinite in case p_2 > 1/2. An interesting question that was left open in [2] is whether for p_2 ≤ 1/2 this measure is infinite or finite. In this article we answer this question. We consider a large family of random interval maps with critical intermittency that includes the random combination of T_2 and T_4. The systems we consider consist of i.i.d. compositions of a finite number of maps of two types: bad maps, which share a superattracting fixed point, and good maps, which map the superattracting fixed point onto a common repelling fixed point. To be precise, the families of maps we consider are defined as follows.
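The intermittent pattern described above is easy to reproduce numerically. The following sketch (ours; the probability p2 = 0.5, the orbit length and the neighbourhood widths are illustrative choices) generates a random orbit under i.i.d. compositions of T_2 and T_4 and counts visits to small neighbourhoods of the superattracting point 1/2 and of the repelling fixed point 0:

```python
import random

def random_logistic_orbit(x0, n, p2=0.5, seed=1):
    """Random orbit under i.i.d. choices of T2(x)=2x(1-x) and T4(x)=4x(1-x)."""
    rng = random.Random(seed)
    orbit = [x0]
    x = x0
    for _ in range(n):
        r = 2.0 if rng.random() < p2 else 4.0
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

orbit = random_logistic_orbit(0.3, 5000)
near_half = sum(1 for x in orbit if abs(x - 0.5) < 0.01)   # captures by T2
near_zero = sum(1 for x in orbit if x < 0.01)              # ejections via T4
```

A typical run shows repeated capture near 1/2 followed by an excursion close to 1 and then close to 0, in line with Figure 2(c).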
Throughout the text we fix a point c ∈ (0, 1) that will represent the single critical point of our maps, both good and bad.
A map T_g : [0, 1] → [0, 1] is in the class of good maps, denoted by G, if (G1) T_g|_(0,c) and T_g|_(c,1) are C^3 diffeomorphisms onto (0, 1) and T_g({0, c, 1}) ⊆ {0, 1}; (G2) T_g has non-positive Schwarzian derivative on [0, c) and (c, 1]; (G3) to T_g we can associate three constants r_g ≥ 1, 0 < K_g < 1 and M_g > r_g such that (1.1) holds. These conditions imply in particular that at least one of the maps T_g|_[0,c] or T_g|_[c,1] is continuous, and that both branches of T_g are strictly monotone. Note also that the conditions K_g < 1 and M_g > r_g are superfluous, since we can always choose a smaller constant K and a larger constant M to satisfy (1.1), but we need these specific bounds in our estimates later. The critical point c is mapped to either 0 or 1 under each of the good maps, and by (G1) both 0 and 1 are (eventually) fixed points or periodic points (with period 2), which are repelling by (G4). Examples include the doubling map and any surjective unimodal map, see Figures 3(a) and (b). The choice of conditions (G1)-(G4) is based on two factors: firstly, these conditions incorporate the most important properties of the 'good' logistic map T_4(x) = 4x(1 − x), which is the primary motivating example for this work, and secondly, the techniques used in this paper are motivated by the work of Nowicki and Van Strien [29], where the following result has been proven. Throughout the text we let λ denote the one-dimensional Lebesgue measure.
Theorem 1.1. Suppose that T : [0, 1] → [0, 1] is unimodal, C^3, has negative Schwarzian derivative and that the critical point of T is of order r ≥ 1. Moreover, assume that the growth rate of |DT^n(c_1)|, c_1 = T(c), is so fast that (1.2) holds. Then T has a unique absolutely continuous invariant probability measure µ, which is ergodic and of positive entropy. Furthermore, there exists a positive constant K such that (1.3) holds for any measurable set A ⊂ (0, 1). Finally, the density ρ = dµ/dλ of the measure µ with respect to λ is an L^{τ⁻}-function, where τ = r/(r − 1). Formally this result is not immediately applicable to the good maps we introduced. The difference, however, is not essential and the conclusion remains exactly the same, the main reason being that the conditions (G1) and (G4) imply the growth rate (1.2), and hence any good map admits a unique probability acim.
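A Nowicki–Van Strien type growth condition can be probed numerically. The sketch below (ours) evaluates partial sums of the standard summability condition sum_n |DT^n(c_1)|^{-1/r} for the motivating map T_4(x) = 4x(1 − x), whose critical point c = 1/2 has order r = 2 and whose postcritical orbit is 1 → 0 → 0 → ..., with derivative growing like 4^n; the interpretation of this series as the condition (1.2) of the theorem is our assumption here:

```python
def dT4(x):
    """|DT4(x)| for T4(x) = 4x(1-x)."""
    return abs(4.0 - 8.0 * x)

def partial_sum(n_terms, r=2.0):
    """Partial sum of sum_n |DT^n(c1)|^(-1/r) along the orbit of c1 = T4(1/2) = 1."""
    x, deriv, total = 1.0, 1.0, 0.0
    for _ in range(n_terms):
        deriv *= dT4(x)                 # |DT^n(c1)| as a running product
        total += deriv ** (-1.0 / r)
        x = 4.0 * x * (1.0 - x)         # advance the postcritical orbit
    return total

s20, s40 = partial_sum(20), partial_sum(40)
```

Here |DT^n(c_1)| = 4^n, so the terms are 2^{-n} and the series converges (to 1), consistent with T_4 admitting a unique probability acim.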
In particular, (B1) implies that T_b is continuous and that T_b is strictly monotone on the intervals [0, c] and [c, 1]. In contrast to (G3), note that in (B3) we have assumed that ℓ_b is not equal to one. This means that DT_b(c) = 0, so c is a superattracting fixed point for each bad map. An immediate consequence of the presence of a globally attracting fixed point at c is that the only finite invariant measures are linear combinations of Dirac measures at 0, c, and 1. For examples, see Figures 3(c) and (d).
The random systems we consider in this article are the following. Let T_1, . . ., T_N ∈ G ∪ B be a finite collection of good and bad maps. Write Σ_G = {1 ≤ j ≤ N : T_j ∈ G} and Σ_B = {1 ≤ j ≤ N : T_j ∈ B} for the index sets of the good and bad maps respectively, and assume that Σ_G, Σ_B ≠ ∅. Write Σ = {1, . . ., N} = Σ_G ∪ Σ_B. The skew product transformation or random map F is defined by F(ω, x) = (σω, T_{ω_1}(x)) (1.5), where σ denotes the left shift on sequences in Σ^N. Let p = (p_j)_{j∈Σ} be a probability vector representing the probabilities with which we choose the maps T_j, j ∈ Σ. We will consider measures of the form P × µ_p, where P is the p-Bernoulli measure on Σ^N and µ_p is a Borel measure on [0, 1] absolutely continuous with respect to λ and satisfying Σ_{j∈Σ} p_j µ_p(T_j^{-1} A) = µ_p(A) for all Borel sets A. In this case P × µ_p is an invariant measure for F and we say that µ_p is a stationary measure for F. We also say that a stationary measure µ_p is ergodic for F if P × µ_p is ergodic for F. Our main results are the following.

Theorem 1.2. Let {T_j : j ∈ Σ} be as above and p = (p_j)_{j∈Σ} a positive probability vector.
(1) There exists a unique (up to scalar multiplication) stationary σ-finite measure µ_p for F that is absolutely continuous with respect to the one-dimensional Lebesgue measure λ. Moreover, this measure is ergodic. (2) The density dµ_p/dλ is bounded away from zero, is locally Lipschitz on (0, c) and (c, 1), and is not in L^q for any q > 1.
We call the measure µ p from Theorem 1.2 an acs measure.
Theorem 1.3. Let {T_j : j ∈ Σ} be as above and p = (p_j)_{j∈Σ} a positive probability vector. Let µ_p be the unique acs measure from Theorem 1.2. Set θ = Σ_{b∈Σ_B} p_b ℓ_b. Then µ_p is finite if and only if θ < 1. In this case, there exists a constant C > 0 such that (1.7) holds. As we shall see in (4.12), the bound in (1.7) can be improved by not bounding the mixed products ℓ_{b_1} · · · ℓ_{b_k} r_g by their maximal value ℓ_max^k r_max, but this improvement does not change the qualitative behaviour of the bound.
It will become clear that the density dµ_p/dλ in Theorem 1.2 blows up to infinity at the points zero and one and also (at least on one side) at c. Theorem 1.3 says that dµ_p/dλ is integrable if and only if θ is small enough, namely θ < 1. This intuitively makes sense, since for a smaller value of θ the attraction of orbits to c is weaker on average and consequently orbits typically spend less time near zero and one once a good map is applied.
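The criterion θ < 1 is a one-line computation for any concrete system. The sketch below (our helper functions, not from the paper) evaluates it for the motivating T_2/T_4 example, where the bad map T_2 has critical order ℓ = 2 at c = 1/2 (a quadratic tangency), so θ = 2 p_2 and the acs measure is finite iff p_2 < 1/2:

```python
def theta(probs, orders):
    """theta = sum over the bad maps of p_b * ell_b."""
    return sum(p * ell for p, ell in zip(probs, orders))

def acs_measure_is_finite(probs, orders):
    """Finiteness criterion of Theorem 1.3: finite iff theta < 1."""
    return theta(probs, orders) < 1.0

finite_case = acs_measure_is_finite([0.4], [2])    # p2 = 0.4, theta = 0.8
infinite_case = acs_measure_is_finite([0.5], [2])  # p2 = 0.5, theta = 1.0
```

This recovers the answer to the question left open in [2]: for p_2 < 1/2 the measure is finite, while for p_2 ≥ 1/2 it is infinite.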
The inequality (1.7) is the counterpart of the Nowicki-Van Strien inequality (1.3), and naturally gives a substantially worse bound due to the presence of bad maps.It is not immediately clear how much worse (1.7) is in comparison to (1.3).However, the following holds.
Moreover, the acs measure µ_p from Theorem 1.2 depends continuously on the probability vector p ∈ R^N.

Corollary 1.2. Let {T_j : j ∈ Σ} be as above. For each n ≥ 0, let p_n = (p_{n,j})_{j∈Σ} be a positive probability vector such that sup_n Σ_{b∈Σ_B} p_{n,b} ℓ_b < 1 and assume that lim_{n→∞} p_n = p in R^N_+. Then the sequence µ_{p_n} converges weakly to µ_p.
In (B3) we have assumed that for any bad map T_b the corresponding critical order ℓ_b is not equal to one. Note that a bad map T_b for which we allow ℓ_b = 1 satisfies |DT_b(c)| > 0, so in this case c is an attracting fixed point for T_b, but not a superattracting one. It should not come as a surprise that results similar to Theorem 1.2 and Theorem 1.3 also hold in case some or all of the bad maps T_b have ℓ_b = 1. The proofs presented for these theorems, however, do not immediately carry over. In the last section we explain how the results are affected in case some or all maps T_b satisfy ℓ_b = 1 and what the necessary changes in the proofs are.
The proofs use a mixture of techniques. For the existence result from Theorem 1.2 we use an inducing scheme. This approach is inspired by [2], but the choice of the inducing domain needed some care. With the help of Kac's Lemma we then obtain that the acs measure is infinite in case θ ≥ 1. To prove that this measure is finite for θ < 1 we use an approach similar to the one employed in [29]. The main difficulty here is that it may take an arbitrarily long time before the superattracting fixed point is mapped onto the repelling orbit by one of the good maps, which decreases the regularity of the density of the acs measure.
The paper is organised as follows. In Section 2 we list some preliminaries and first consequences of the conditions (G1)-(G4) and (B1)-(B4). Section 3 is devoted to the proof of Theorem 1.2, and in Section 4 we prove Theorem 1.3. In Section 5 we prove Corollaries 1.1 and 1.2, explain what the analogues of Theorems 1.2 and 1.3 are in case ℓ_b = 1 for one or more b ∈ Σ_B, and describe how the proofs of Theorems 1.2 and 1.3 need to be modified to obtain these results. We end with some final remarks.

Acknowledgments
We would like to thank Sebastian van Strien for useful suggestions.

Preliminaries
We start by introducing some notation and collecting some general preliminaries.
2.1. Words, sequences and invariant measures. For any finite subset Σ ⊆ N and any n ≥ 1 we use u ∈ Σ^n to denote a word u = u_1 · · · u_n. Σ^0 contains only the empty word, which we denote by ε. On the space of infinite sequences Ω = Σ^N we use [u] to denote the cylinder set corresponding to u. The notation |u| indicates the length of u, so |u| = n for u ∈ Σ^n. For two words u ∈ Σ^n and v ∈ Σ^m the concatenation of u and v is denoted by uv ∈ Σ^{n+m}. For a probability vector p = (p_j)_{j∈Σ} and u ∈ Σ^n we write p_u = Π_{i=1}^n p_{u_i}, with p_u = 1 if n = 0. We use σ to denote the left shift on Ω: for ω ∈ Ω and all n ∈ N, (σω)_n = ω_{n+1}.
Given a finite family of Borel measurable maps {T_j : [0, 1] → [0, 1]}_{j∈Σ}, the skew product or random map F is defined by F(ω, x) = (σω, T_{ω_1}(x)). We use the following notation for the iterates of the maps T_j: for each ω ∈ Ω and each n ∈ N_0 define T^n_ω = T_{ω_n} ∘ · · · ∘ T_{ω_1}, with T^0_ω the identity. With this notation, we can write the iterates of the random system as F^n(ω, x) = (σ^n ω, T^n_ω(x)). The following lemma on invariant measures for F holds.
Lemma 2.1 ([28], see also Lemma 3.2 of [18]). If all maps T_j are non-singular with respect to λ (that is, λ(A) = 0 if and only if λ(T_j^{-1} A) = 0 for all A ⊆ [0, 1] measurable) and P is the p-Bernoulli measure on Ω for some positive probability vector p, then the P × λ-absolutely continuous F-invariant measures are precisely the measures of the form P × µ, where µ is absolutely continuous w.r.t. λ and satisfies Σ_{j∈Σ} p_j µ(T_j^{-1} A) = µ(A) for all Borel sets A.
Now let (X, F, m) be a measure space and T : X → X measurable and non-singular with respect to m. For a set Y ∈ F of positive measure, the first return time ϕ_Y and the induced transformation T_Y are defined as in (2.4). The following result can be found in e.g. [1, Proposition 1.5.7]. Note that this statement asks for T to be conservative. This is not used in the proof, however, and the condition m(X \ ∪_{n≥1} T^{-n} Y) = 0 is enough to guarantee that the induced transformation is well defined.
Lemma 2.2 (see e.g. Proposition 1.5.7 in [1]). Let T be a measurable and non-singular transformation on a measure space (X, F, m) and let Y ∈ F be such that m|_Y is a finite invariant measure for the induced transformation T_Y. Then the measure µ on (X, F) defined by µ(A) = Σ_{n≥0} m(T^{-n} A ∩ Y ∩ {ϕ_Y > n}) is T-invariant.

We will also use the following result on the first return time.
Lemma 2.3 (Kac's Formula, see e.g. 1.5.5 in [1]). Let T be a conservative, ergodic, measure preserving transformation on a measure space (X, F, m). Let Y ∈ F be such that 0 < m(Y) < ∞ and let ϕ_Y be the first return time to Y. Then ∫_Y ϕ_Y dm = m(X).
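Kac's formula is easy to check by Monte Carlo for a simple system. The sketch below (ours, not from the paper) uses the doubling map T(x) = 2x mod 1, which preserves Lebesgue measure, and Y = [0, 1/2): Kac predicts a mean first-return time of λ([0, 1])/λ(Y) = 2.

```python
import random

def doubling(x):
    """The doubling map T(x) = 2x mod 1."""
    return (2.0 * x) % 1.0

def mean_return_time(num_points=20000, seed=7):
    """Average first-return time to Y = [0, 1/2) over random starting points in Y."""
    rng = random.Random(seed)
    total = 0
    for _ in range(num_points):
        x = rng.random() * 0.5        # start in Y
        x = doubling(x)
        steps = 1
        while x >= 0.5:               # iterate until the orbit re-enters Y
            x = doubling(x)
            steps += 1
        total += steps
    return total / num_points

avg = mean_return_time()
```

The same identity, applied to the induced system of Section 3, is what yields the infinite-measure conclusion for θ ≥ 1 in Section 4.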
One can also obtain invariant measures via a functional analytic approach. Here we give a specific result for interval maps. Let I be an interval. If T : I → I is piecewise strictly monotone and C^1, then the Perron-Frobenius operator P_T is defined on the space of non-negative measurable functions h on I by (2.5). A non-negative measurable function ϕ on I is a fixed point of P_T if and only if it provides an invariant measure µ for T that is absolutely continuous with respect to λ by setting µ(A) = ∫_A ϕ dλ for each Borel set A.
For a random map F using a finite family of transformations {T_j : I → I}_{j∈Σ}, such that each map T_j is piecewise strictly monotone and C^1, and a positive probability vector p = (p_j)_{j∈Σ}, the Perron-Frobenius operator P_F is given on the space of non-negative measurable functions h on I by (2.6), where each P_{T_j} is as given in (2.5). Let P denote the p-Bernoulli measure on Ω. Then a non-negative measurable function ϕ on I is a fixed point of P_F if and only if the measure P × µ, where µ is the absolutely continuous measure with dµ/dλ = ϕ, is F-invariant. In Subsection 3.3 it will be shown that the density dµ_p/dλ from Theorem 1.2, which is a fixed point of the Perron-Frobenius operator for the random system F given by (1.5), is bounded away from zero. From this it is easy to see that (2.6) implies that dµ_p/dλ blows up to infinity at the points zero and one and also at least on one side of c.
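As a concrete illustration of a Perron–Frobenius operator and its fixed point (our toy example, not an operator studied in the paper), consider the doubling map T(x) = 2x mod 1, for which (Ph)(x) = (h(x/2) + h((x+1)/2))/2; iterating P flattens an arbitrary starting density towards the invariant density h ≡ 1:

```python
def pf_doubling(h):
    """Perron-Frobenius operator of T(x) = 2x mod 1 acting on a density h."""
    return lambda x: 0.5 * (h(0.5 * x) + h(0.5 * (x + 1.0)))

h = lambda x: 2.0 * x          # a non-invariant starting density with integral 1
for _ in range(12):
    h = pf_doubling(h)         # each application averages h over the two preimages

samples = [h(x) for x in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

For this starting density one can check that P^n h(x) = 1 + (2x − 1)/2^n, so after 12 iterations all sampled values are within 2^{-12} of the fixed point 1.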

2.2. Estimates on good and bad maps. Now let T : I → I be a C^3 map of an interval I into itself. The Schwarzian derivative of T at x ∈ I with DT(x) ≠ 0 is defined by ST(x) = D^3T(x)/DT(x) − (3/2)(D^2T(x)/DT(x))^2. We say that T has non-positive Schwarzian derivative on I if DT(x) ≠ 0 and ST(x) ≤ 0 for all x ∈ I. A direct computation shows that the Schwarzian derivative of the composition of two transformations T_1, T_2 : I → I satisfies S(T_2 ∘ T_1)(x) = ST_2(T_1(x)) · (DT_1(x))^2 + ST_1(x). (2.8) From (2.8) it follows that for a finite collection {T_j : I → I}_{j∈Σ} of C^3 interval maps with non-positive Schwarzian derivative, we can write the Schwarzian derivative of T^n_ω, n ∈ N and ω ∈ Ω, as (2.9). By (G2) and (B2) this implies that for a collection of good and bad maps {T_j}_{j∈Σ}, T^n_ω has non-positive Schwarzian derivative on [0, 1] outside of the critical points of T^n_ω for all ω ∈ Ω and n ∈ N.
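The standard Schwarzian formula and the composition rule (2.8) can be checked directly for the logistic map T_4(x) = 4x(1 − x), whose derivatives are DT_4 = 4 − 8x, D^2T_4 = −8, D^3T_4 = 0 (a sketch of ours; the sample points are arbitrary choices away from the critical point c = 1/2):

```python
def schwarzian_T4(x):
    """ST(x) = D^3T/DT - (3/2)(D^2T/DT)^2 for T4(x) = 4x(1-x)."""
    d1, d2, d3 = 4.0 - 8.0 * x, -8.0, 0.0
    return d3 / d1 - 1.5 * (d2 / d1) ** 2

def schwarzian_T4_T4(x):
    """Composition rule: S(T4 o T4)(x) = ST4(T4(x)) * (DT4(x))^2 + ST4(x)."""
    d1 = 4.0 - 8.0 * x
    return schwarzian_T4(4.0 * x * (1.0 - x)) * d1 ** 2 + schwarzian_T4(x)

values = [schwarzian_T4(x) for x in (0.1, 0.3, 0.7, 0.9)]
comp = schwarzian_T4_T4(0.1)
```

Since D^3T_4 = 0 and (D^2T_4/DT_4)^2 > 0, the Schwarzian is strictly negative away from c, and by the composition rule the same holds for compositions, which is what the text asserts for the iterates T^n_ω.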
We will use the following two well-known properties of maps with non-positive Schwarzian derivative. A consequence of the Minimum Principle is that for any T ∈ G ∪ B the derivative |DT| has locally no strict minima in the intervals (0, c) and (c, 1). In particular, there cannot be any attracting fixed points for T in (0, c) and (c, 1). Therefore, if T ∈ B, then T^n(x) → c as n → ∞ for all x ∈ (0, 1).
Koebe Principle: For each ρ > 0 there exist K^{(ρ)} > 1 and M^{(ρ)} > 0 with the following property. Let J ⊆ I be intervals and suppose that T : I → I has non-positive Schwarzian derivative. If both components of T(I) \ T(J) have length at least ρ · λ(T(J)), then (2.11) holds. Note that the constants K^{(ρ)}, M^{(ρ)} only depend on ρ and not on the map T.
From (2.11) one can obtain a bound on the size of the images of intervals: let J′ ⊆ J be another interval. By the Mean Value Theorem there exists an

Proof. It follows from (B3) that for any j ∈ Σ_B and x ∈ [0, 1],

By induction we get that for each n ∈ N and ω ∈ Σ_B^N,

From (B3) we see that min{K_b : b ∈ Σ_B}^{ℓ_max} < 1. The lower bound now follows by observing that

The result for the upper bound follows similarly, by noticing that in this case from (B3) it follows that

It follows that under iterations of bad maps the distance |T^n_ω(x) − c| is eventually decreasing superexponentially fast in n.
Furthermore, note that there exists a δ > 0 such that

The upper bound on |T^n_ω(x) − c| that we obtained in Lemma 2.4 will be used in Section 4 to prove that µ_p in Theorem 1.3 is infinite if θ ≥ 1. The lower bound from Lemma 2.4 will be used for the proof that µ_p is finite if θ < 1.
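The superexponential attraction to c under a bad map is visible in a few iterates. For the bad map T_2(x) = 2x(1 − x) one has exactly |T_2(x) − 1/2| = 2|x − 1/2|^2, so the exponent log|d_{n+1}|/log|d_n| of the distance d_n = |T_2^n(x) − 1/2| tends to the critical order ℓ = 2. A minimal sketch (ours, with an arbitrary starting point 0.3):

```python
import math

def T2(x):
    """The bad map T2(x) = 2x(1-x) with superattracting fixed point c = 1/2."""
    return 2.0 * x * (1.0 - x)

x, dists = 0.3, []
for _ in range(4):
    dists.append(abs(x - 0.5))
    x = T2(x)

# |T2(x) - 1/2| = 2|x - 1/2|^2, so log d_{n+1} / log d_n -> 2
ratios = [math.log(dists[i + 1]) / math.log(dists[i]) for i in range(3)]
```

The ratios climb towards 2 from below, matching the squaring of the distance at each step.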

Existence of a σ-finite acs measure
From now on we fix an integer N ≥ 2 and consider a finite collection T_1, . . ., T_N ∈ G ∪ B of good and bad maps in the classes G and B. As in the Introduction, write Σ_G = {1 ≤ j ≤ N : T_j ∈ G} and Σ_B = {1 ≤ j ≤ N : T_j ∈ B} for the corresponding index sets and assume that Σ_G, Σ_B ≠ ∅. Write Σ = {1, 2, . . ., N} and set Ω = Σ^N for the set of infinite sequences of elements in Σ. In this section we prove Theorem 1.2, i.e., we establish the existence of an ergodic acs measure and several of its properties using an inducing scheme for the random system F. We fix the index g ∈ Σ_G of one good map T_g and start by constructing an inducing domain that depends on this g.

3.1. The induced system and return time partition. The first lemma is needed to specify the set on which we induce. For each k ∈ N let x_k and x̃_k in (0, c) denote the critical points of T_g^k closest to 0 and c, respectively. Furthermore, let y_k and ỹ_k in (c, 1) denote the critical points of T_g^k closest to 1 and c, respectively.

Lemma 3.1. We have

Proof. Let a and b denote the critical points of T_g^2 in (0, c) and (c, 1), respectively. Then at least one of the branches T_g^2|_(0,a) and On the other hand, by the Minimum Principle, DT_g^2(y) ≥ min{DT_g^2(0), DT_g^2(z)}, a contradiction. Combining this with DT_g^2(0) > 1 and defining L : (0, 1) → (0, a) by

By the previous lemma and (G1), for k ∈ N large enough (3.1) holds and, using also (G4), (B1) and (B4), for every j ∈ Σ, (3.2) holds. Fix a κ ∈ N for which (3.1) and (3.2) hold. We introduce some notation. Let t ∈ Σ be such that t ≠ g, and define

The next lemma shows that P × λ-almost all (ω, x) eventually enter Y under iterations of F, and hence that P × λ-almost all (ω, x) ∈ Y will return to Y infinitely many times.

Lemma 3.2.
Proof. For P-almost all ω ∈ Ω we have σ^n ω ∈ [g] for infinitely many n ∈ N. For any such n and each x ∈ (0, c) ∪ (c, 1) either T^n_ω(x) ∈ J or T^{n+1}_ω(x) ∈ J, which means that we are in the first case if T^{n+1}_ω(x) ∉ J. Hence, for P × λ-almost all (ω, x) ∈ Ω × [0, 1] we have T^n_ω(x) ∈ J for infinitely many n ∈ N. Consider such an (ω, x), and let (n_j)_{j∈N} be an increasing sequence in N that satisfies T^{n_j}_ω(x) ∈ J for each j ∈ N. Recall that σ denotes the left shift on sequences and define E = Ω \ (∪_{j∈N} σ^{-n_j} C), i.e., E contains precisely those sequences ω′ that satisfy σ^{n_j}(ω′) ∉ C for all n_j. According to the Lebesgue Differentiation Theorem (see e.g. [34]) we may assume that ω is a Lebesgue point of 1_E, which yields the required limit as j → ∞.
Since P(C) > 0, we conclude that ω ∉ E. Hence, there is an n_j so that σ^{n_j} ω ∈ C. By Lemma 3.2 the first return time ϕ_Y, see (2.4), and the induced transformation F_Y are well defined on the full measure subset of points in Y that return to Y infinitely often under iterations of F, which we call Y again. The set of points in Y that return to Y after n iterations of F can be described as in (3.7). Note that by construction each map T^n_ω|_J in (3.7) consists of branches that all have range (0, c) or (c, 1) or (0, 1), since any branch of T^κ_ω|_J maps onto (0, 1). Therefore, Y ∩ F^{-n}(Y) can be written as a finite union of products A = [ug^κ t] × I of cylinders [ug^κ t] ⊆ C with |u| = n and open intervals I ⊆ J, each of which is mapped under F^n onto C × J_0 or C × J_1. Call the collection of these sets P_n and let α = ∪_{n>κ} P_n. Let P_C and λ_J denote the normalized restrictions of P to C and of λ to J, respectively.

Lemma 3.3.
(1) The collection α forms a countable return time partition of Y, i.e., P_C × λ_J(∪_{A∈α} A) = 1 and the first return time ϕ_Y is constant on each element of α.

Proof. The fact that P_C × λ_J(∪_{A∈α} A) = 1 follows from Lemma 3.2, and it is clear from the construction that the first return time ϕ_Y is constant on any element A ∈ α. To show that any two elements are disjoint, note that for A, A′ ∈ P_n this is clear. Suppose there are A = [ug^κ t] × I ∈ P_n and A′ = [vg^κ t] × I′ ∈ P_m with n ≠ m. Moreover, I ∩ ∂I′ = ∅ or I = I′. In both cases, note that F^{m+κ+1}([vg^κ t] × ∂I′) ⊆ Ω × {0, 1}, so by (G1) and (B1) also

For (2) note that, since α is a partition of Y, for each x ∈ J there is an A = [ug^κ t] × I ∈ α with x ∈ I or x ∈ ∂I. In the first case there is nothing to prove, so assume that x ∈ ∂I. Then T_u(x) ∈ ∂J_i for some i ∈ {0, 1}. From the first part of the proof of Lemma 3.2 it then follows that there is an n > |u| and an ω ∈ C such that T^n_ω(x) ∈ J. If we write I′ for the interval in T^{-n}_ω(J) containing x, then this means that there exists a

The second part of Lemma 3.3 shows that even though the partition elements of α are disjoint, their projections on the second coordinate are not. The same is true for the first coordinate, as the same string u can lead points in J to J_0 and J_1.

3.2. Properties of the induced transformation. It follows from (3.7) and Lemma 3.3 that for each A ∈ α we have either

For any [ug^κ t] × I ∈ α, the transformation T_u|_I is invertible from I to one of the sets J_0 or J_1. Define the operator P_{u,I} : L^1(J, λ_J) → L^1(J, λ_J) by

The random Perron-Frobenius-type operator P_Y : L^1(J, λ_J) → L^1(J, λ_J) on Y is given by (3.9). Note that P_Y is not exactly of the same form as the usual Perron-Frobenius operator in (2.6). Nonetheless, we have the following result.
Proof. For each cylinder K ⊆ C and each Borel set E ⊆ J we have

In Lemma 3.5 below we show that a fixed point of P_Y exists. Each T_Z has non-positive Schwarzian derivative, so we can apply the Koebe Principle. The image T_Z(J_Z) either equals J_0 or J_1. Choose a ρ̃ > 0 such that

There is a canonical way to extend the domain of each T_Z to an interval I containing J_Z, such that T_Z(I) equals either I_0 or I_1 and S(T_Z) ≤ 0 on I. Then by the Koebe Principle there exist constants K^{(ρ̃)} > 1 and M^{(ρ̃)} > 0 such that for all m ∈ N, Z ∈ α^m and x, y ∈ J_Z the bounds (3.11) and (3.12) hold. Note that for the random Perron-Frobenius-type operator from (3.9) we have for each m ≥ 1 a corresponding expression in terms of the operators P_{T_Z}, where P_{T_Z} is as in (2.5).

Lemma 3.5 (cf. Lemmata V.2.1 and V.2.2 of [16]). P_Y admits a fixed point ϕ ∈ L^1(J, λ_J) that is bounded, Lipschitz and bounded away from zero.
Proof. For each m ∈ N and x ∈ J,

Using the Mean Value Theorem, for all m ∈ N and Z ∈ α^m there exists a ξ ∈ J_Z such that (3.15) holds. Set K_1 = max{K^{(ρ̃)}, M^{(ρ̃)}} / (P(C) · min{λ(J_0), λ(J_1)}), where ρ̃ is as in (3.11) and (3.12). Since DT_Z(ξ) and DT_Z(y) have the same sign for any y ∈ J_Z, (3.15) together with (3.11) implies

Moreover, if for A = [ug^κ t] × I ∈ α we take x, y ∈ I, then for any Z ∈ α^m it holds that x ∈ T_Z(J_Z) if and only if y ∈ T_Z(J_Z). For such Z, let x_Z, y_Z ∈ J_Z be such that T_Z(x_Z) = x and T_Z(y_Z) = y. Then by (3.12) we obtain (3.17). Hence, ϕ is bounded and by Lemma 3.3(2) it is clear that ϕ is Lipschitz (with Lipschitz constant bounded by K_1^2). It is readily checked that ϕ is a fixed point of P_Y, so that P_C × ν with ν = ϕ dλ is an invariant probability measure for F_Y.
What is left is to verify that for each A = [ug^κ t] × I ∈ α the function ϕ is bounded from below on the interior of I. Suppose that there is such an A = [ug^κ t] × I for which inf_{x∈I} ϕ(x) = 0. Then from (3.18) it follows that ϕ(y) = 0 for all y ∈ I, hence ν(I) = 0. Either I ⊆ J_0 or I ⊆ J_1. If I ⊆ J_0, then for any set A′ = [vg^κ t] × I′ ∈ α with T_v(I′) = J_0 it holds that inf_{x∈I′} ϕ(x) = 0 and therefore, as before, ν(I′) = 0. There are sets A′ = [vg^κ t] × I′ with I′ ⊆ J_1 and T_v(I′) = J_0, so we can repeat the argument to show that also for any set A′ = [vg^κ t] × I′ ∈ α with T_v(I′) = J_1 we have ν(I′) = 0. So P_C × ν(A) = 0 for all A ∈ α. If I ⊆ J_1 we come to the same conclusion. This gives a contradiction, so ϕ is bounded from below on each interval I.
It follows from Lemma 3.4 that P_C × ν with ν = ϕ dλ_J is a finite F_Y-invariant measure. To show that P_C × λ_J is F_Y-ergodic we need the following result, which states that the sets π(A) for A ∈ α^m shrink uniformly to λ-null sets as m → ∞.
Since J̃_Z \ J_Z consists of at most two intervals, with (3.11) and (2.13) this gives

Then by repeating the same steps, we obtain

We show that P_C × λ_J(E) = 1. The Borel measure ρ on Y given by

for Borel sets V is F_Y-invariant. According to Lemma 2.2 and Lemma 2.1 this yields a stationary measure μ on [0, 1] that is absolutely continuous w.r.t. λ and satisfies (P × μ)|_Y = ρ. Let L := supp(μ|_J) denote the support of the measure μ|_J. Since ρ is a product measure, this gives supp(ρ) = C × L and so by the definition of ρ we get C × L ⊆ E and ρ(E \ (C × L)) = 0. Since ϕ is bounded away from zero, this yields

To obtain the result, it remains to show that λ_J(J \ L) = 0.
We have

Let ε > 0. Since λ_J(L) > 0, it follows from Lemma 3.6 and the Lebesgue Density Theorem that there are i ∈ {0, 1}, m_i ∈ N and Z_i ∈ α^{m_i} such that

and from (3.11) it follows that

Hence, for each ε > 0 we can find an i = i(ε) for which (3.22) holds. If for each ε_0 > 0 and each i_0 ∈ {0, 1} there exists an ε ∈ (0, ε_0) such that i(ε) = i_0, we obtain from (3.22) that λ_J(J \ L) = 0. Otherwise, there exist ε_0 > 0 and i_0 ∈ {0, 1} such that i(ε) = i_0 for all ε ∈ (0, ε_0). Without loss of generality, suppose that i_0 = 0. Then (3.22) gives λ_J(J_0 \ L) = 0. By the equivalence of ν and λ_J and the fact that every good map has full branches it follows that

Together with the Poincaré Recurrence Theorem this gives that the corresponding set A satisfies P_C × ν(A) > 0, and therefore P_C × λ_J(A) > 0. Together with λ_J(J_0 \ L) = 0 it follows from the Lebesgue Density Theorem that there exists a Lebesgue point x ∈ π(A) ∩ L of 1_{π(A)∩L}. Since x ∈ π(A), for infinitely many m ∈ N there exists Z_m ∈ α^m such that x ∈ J_{Z_m} and T_{Z_m}(J_{Z_m}) = J_1. This together with Lemma 3.6 yields that for each ε > 0 there exist m ∈ N and Z ∈ α^m such that

Similarly as before, this gives λ_J(J_1 \ L) = 0, so λ_J(J \ L) = 0.

3.3. The proof of Theorem 1.2. In the previous paragraphs we collected all the ingredients necessary to prove Theorem 1.2.
Proof of Theorem 1.2. (1) We have constructed a finite F_Y-invariant measure P_C × ν which is absolutely continuous with respect to P_C × λ_J. Since F is non-singular with respect to P × λ, we can therefore by Lemma 2.2 extend P_C × ν to an F-invariant measure P × µ which is absolutely continuous with respect to P × λ. Lemma 3.2 immediately implies that µ is σ-finite. What is left to show is that P × µ is the unique such measure (up to multiplication by constants) and that it is ergodic.
A well-known result [1, Theorem 1.5.6] states that a conservative ergodic non-singular transformation T on a probability space (X, B, m) admits at most one (up to scalar multiplication) m-absolutely continuous σ-finite invariant measure. Therefore, it suffices to show that F is conservative and ergodic with respect to P × λ. We are going to deduce these properties of F from the corresponding properties of the induced transformation F_Y.
In the proof of part (2) below we will see that the density dµ/dλ is bounded away from zero. Hence, λ ≪ µ. Combining Lemma 3.2 with Maharam's Recurrence Theorem gives that F is conservative with respect to P × µ and thus also with respect to P × λ. Furthermore, from the ergodicity of F_Y with respect to P_C × λ_J it follows by Lemma 3.2 combined with [1, Proposition 1.5.2(2)] that F is ergodic with respect to P × λ.
(2) For the density ψ := dµ/dλ it holds that ψ|_J = ϕ. Since we can take κ in the definition of J as large as we want, ψ is locally Lipschitz on (0, c) and (c, 1). Moreover, it is a fixed point of the Perron-Frobenius operator from (2.6) and thus the corresponding identity holds for all x ∈ [0, 1]. From Lemma 3.5 we conclude that ψ is bounded from below by some constant C > 0. It remains to show that ψ is not in L^q for any q > 1. To see this, fix a b ∈ Σ_B. Since ψ is bounded from below by C > 0, we have for all k ∈ Z_{≥0} and x ∈ [0, 1] the corresponding lower bound. Let ℓ_b, M_b, r_g, M_g, K_g be as in (B3) and (G3). From (B3), (G3) and Lemma 2.4 we get the required estimate for a positive constant. On the other hand, from (G3) we obtain for any y ∈ (T_g T_b^k)^{-1}{x}, as in the proof of Lemma 2.4, a bound with a positive constant K_3. This gives the result.
Remark 3.1. The result from Theorem 1.2 still holds if we allow the critical order ℓ_b from (B3) to be equal to 1 for some b, as long as ℓ_max > 1. To see this, note that in the proof of Theorem 1.2 condition (B3) only plays a role in proving that dµ_p/dλ is not in L^q for any q > 1. Here we refer to Lemma 2.4 and the constants K and M, which are not well defined if ℓ_b = 1; in that case we can restrict the argument to the bad maps with ℓ_b > 1 and obtain the same result. In case ℓ_max = 1, most parts of Theorem 1.2 still remain valid, with the exception that then we can only say that dµ_p/dλ is not in L^q if q ≥ r_max/(r_max − 1). This follows from the above reasoning by taking k = 0 in the definition of τ in the proof of Theorem 1.2 and by noting that τ = (1 − r_max^{-1})q ≥ 1 if q ≥ r_max/(r_max − 1).

Estimates on the acs measure
In this section we prove Theorem 1.3. Recall the definition of θ from Theorem 1.3.

4.1. The case θ ≥ 1. To prove one direction of Theorem 1.3, namely that the unique acs measure µ from Theorem 1.2 is infinite if θ ≥ 1, we introduce another induced transformation. For any ω ∈ [bb] and x ∈ (a, ξ) it follows by (4.2) and (2.16) that T_ω^n(x) can only return to (a, ξ) after at least one application of a good map. Assume that ω ∈ [bb] is of the form ω = (b, b, ω_3, ω_4, . . ., ω_n, g, ω_{n+2}, . . .). From (G3) and (4.3) we obtain a lower bound with a constant ζ, and ζ > 1 by (G4) and (B4). Assume κ(ω, x) = m + n for some m ≥ 1. Then T_ω^{m+n}(x) ∈ (a, ξ), so that by (G1) and (4.4) we obtain for any g ∈ Σ_G that ∫_{[bb]×(a,ξ)} κ dP × µ = ∞, and from Lemma 2.3 we now conclude that µ is infinite.

4.2. The case θ < 1. For the other direction of Theorem 1.3, assume θ < 1. We first obtain a stationary probability measure μ for F as in (1.5) using a standard Krylov-Bogolyubov type argument. For this, let M denote the set of all finite Borel measures on [0, 1], and define the operator P : M → M by Pν = Σ_{j∈Σ} p_j ν ∘ T_j^{-1}, where ν ∘ T_j^{-1} denotes the pushforward measure of ν under T_j. Then P is a Markov-Feller operator (see e.g. [25]) with dual operator U on the space B([0, 1]) of all bounded Borel measurable functions, given by Uf = Σ_{j∈Σ} p_j f ∘ T_j for f ∈ B([0, 1]). As before, let λ denote the Lebesgue measure on [0, 1], and set λ_n = P^n λ for each n ≥ 0.
Furthermore, for each n ∈ N define the Cesàro mean µ_n = (1/n) Σ_{k=0}^{n−1} λ_k. Since the space of probability measures on [0, 1] equipped with the weak topology is sequentially compact, there exists a subsequence (µ_{n_k})_{k∈N} of (µ_n)_{n∈N} that converges weakly to a probability measure μ on [0, 1]. Using that a Markov-Feller operator is weakly continuous, it then follows from a standard argument that Pμ = μ, that is, μ is a stationary probability measure for F. The next theorem will lead to the estimate (1.7) from Theorem 1.3. Since θ < 1, the sum is bounded, and with the Dominated Convergence Theorem we can take the limit as δ → 0 to conclude that μ is absolutely continuous with respect to the Lebesgue measure on [0, 1]. It follows that the probability measure μ is equal to the unique acs measure µ_p from Theorem 1.2. The estimate (1.7) follows directly from (4.12).
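The Krylov-Bogolyubov construction above lends itself to a quick numerical illustration. The following sketch represents λ by sample points and pushes them forward by i.i.d. random maps, so that the k-th cloud samples λ_k = P^k λ and the pooled clouds sample the Cesàro mean µ_n. The system of logistic maps T_2, T_4 and the probability vector are illustrative choices, not part of the general setting:

```python
import random

def cesaro_sample(maps, probs, n_iter, n_pts, seed=0):
    # Monte Carlo sketch of the Krylov-Bogolyubov construction:
    # points x_i sampled from Lebesgue measure are pushed forward by
    # i.i.d. random maps, so the k-th cloud samples lambda_k = P^k lambda;
    # pooling all clouds samples the Cesaro mean mu_n = (1/n) sum_k lambda_k.
    rng = random.Random(seed)
    pts = [rng.random() for _ in range(n_pts)]  # sample from lambda
    pooled = []
    for _ in range(n_iter):
        pooled.extend(pts)
        pts = [rng.choices(maps, weights=probs)[0](x) for x in pts]
    return pooled

# Hypothetical example system: good map T_4, bad map T_2.
T2 = lambda x: 2 * x * (1 - x)
T4 = lambda x: 4 * x * (1 - x)
sample = cesaro_sample([T4, T2], [0.7, 0.3], n_iter=200, n_pts=500)
```

A histogram of `sample` then approximates the density of the weak limit μ.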
It remains to give the proof of Theorem 4.1. We shall do this in a number of steps.

Proposition 4.2. There exists a constant K_1 > 0 such that for all n ∈ N, all u ∈ Σ^n and all Borel sets A ⊆ [0, 1] the stated estimate holds.

Proof. Let n ∈ N, u ∈ Σ^n and a Borel set A ⊆ [0, 1] with 0 < 3λ(A) < (1/2) min{c, 1 − c} < 1 be given, and write η = λ(A). The map T_u has non-positive Schwarzian derivative on any of its intervals of monotonicity (see (2.9)). Suppose the minimal value is attained at f^{−1}(2η), and set A_3 = (2η, 3η) and J = f^{−1} A_3. By the condition on the size of A it follows from the Koebe Principle that (4.14) holds. Combining (4.13) and (4.14) gives the desired bound. Furthermore, a similar reasoning can be done for the interval [c, 1]. Hence, setting K_1 = max{K(η), 1} gives the desired result.
Proposition 4.2 shows that to get the desired estimate from Theorem 4.1 it suffices to consider small intervals on the left and right of [0, 1] and around c, i.e., sets of the form I_0(ε), I_c(ε) and I_1(ε) for ε > 0. We first focus on estimating the measure of the intervals I_c(ε).

Lemma 4.1. There exists a constant K_2 ≥ 1 such that for all n ∈ N, u ∈ Σ^{n−1} × Σ_G and all ε > 0 the stated estimate holds.

Now suppose ε < (1/4) min{c, 1 − c}. Again the map T_u has non-positive Schwarzian derivative on the interior of any of its intervals of monotonicity, and since u_n ∈ Σ_G the image of any such interval is [0, 1]. Use I to denote the collection of connected components of T_u^{−1} I_c(ε). Let A ∈ I and write J = J_A and I = I_A for the intervals that satisfy A ⊆ J and A ⊆ I. Also, write f = T_u|_I. Since f has non-positive Schwarzian derivative, it follows from (2.13) that the Koebe estimate holds. Since ε < (1/4) min{c, 1 − c}, the desired result now follows from (4.18) and (4.20).
To find λ_n(I_c(ε)), first note that from Lemma 2.4 it follows that (4.21) holds for all ε > 0, n ∈ N and u ∈ Σ_B^n. By splitting Σ^n according to the final block of bad indices, we can then estimate λ_n(I_c(ε)) using (4.21) and Lemma 4.1. We now focus on the intervals I_0(ε) and I_1(ε). Fix ε_0 and t such that (4.23) holds for all x ∈ I_0(ε_0) and each j ∈ Σ. Such ε_0 and t exist because of (G4) and (B4). From (G3) it follows that a corresponding estimate holds for each 0 < ε < ε_0 and g ∈ Σ_G. Furthermore, from (B1) a corresponding estimate holds for each ε ∈ (0, ε_0) and b ∈ Σ_B. Any word u with symbols in Σ can be decomposed as u = b^1 g^1 · · · b^s g^s for some s ∈ {1, . . ., n}, where for each i the blocks b^i and g^i consist of bad and good indices, respectively. Moreover, we introduce notation to indicate the length of the tails of the block u. If necessary to avoid confusion, we write s(u), k_i(u), etcetera to emphasize the dependence on u.

Lemma 4.2. There exists a constant K_5 > 0 such that for each 0 < ε < ε_0, n ∈ N and u ∈ Σ^n the stated estimate holds.

Proof. We prove the statement by induction on s. Let u be a word with symbols in Σ, and write u = b^1 g^1 · · · b^s g^s for its decomposition as in (4.26). First suppose that s = 1. If m_1 = 0, then the statement immediately follows from repeated application of (4.25). If m_1 ≥ 1, then repeated application of (4.24) gives (4.27).
Note that this is true in the case k_1 = 0 as well. This proves the statement if s = 1. Now suppose s(u) > 1 and suppose that the statement holds for all words v with s(v) = s(u) − 1. In particular, the statement then holds for the word b^2 g^2 · · · b^s g^s. Note that m_1 ≥ 1. Again, by repeated application of (4.24) it follows that (4.28) holds.
This, together with the statement being true for the word b^2 g^2 · · · b^s g^s, yields the statement for u.
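The block decomposition u = b^1 g^1 · · · b^s g^s used in this induction is easy to make concrete. The following sketch groups a word into maximal runs of bad and good symbols; the alphabet and the set of bad indices are hypothetical examples:

```python
from itertools import groupby

def decompose(word, bad):
    # Group a word over the alphabet Sigma into maximal runs of bad and
    # good symbols, a sketch of the block decomposition
    # u = b^1 g^1 ... b^s g^s from Section 4 (True marks a bad block).
    return [(is_bad, list(run)) for is_bad, run in
            groupby(word, key=lambda j: j in bad)]

# Example: bad symbols {0}, good symbols {1, 2}.
runs = decompose([0, 0, 1, 2, 0, 1], bad={0})
# runs == [(True, [0, 0]), (False, [1, 2]), (True, [0]), (False, [1])]
s = sum(1 for is_bad, _ in runs if is_bad)  # number of bad blocks, here 2
```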

The set A_{i,k,q} contains all words of length n − k − q − 1 that can precede the word b^i. Hence, using (4.31) we can rewrite and bound the sum in (4.30) that runs from i = 2 to τ. Furthermore, for each b ∈ Σ_B^k and g ∈ Σ_G we have, again by setting r_max = max{r_j : j ∈ Σ_G} and α = t^{1/r_max}, an estimate in which the last step follows from the fact that f(x) = x/(α^x − 1) is a decreasing function with lim_{x↓0} f(x) = 1/log α. Hence, combining (4.30), (4.36) and (4.37) gives the desired bound. We are now ready to prove Theorem 4.1.
Proof of Theorem 4.1. Let A ⊆ [0, 1] be a Borel set. First suppose that λ(A) ≥ ε_0/3. Then there exists a constant C = C(ε_0/3) > 0 such that the desired bound holds. Now suppose that λ(A) < ε_0/3 and set ε = 3λ(A). It follows from Proposition 4.2 that for all n ∈ N and all u ∈ Σ^n the corresponding estimate holds. Together with (4.22) and Proposition 4.3 this yields the bound for all n ∈ N. This gives the result.

Further results and final remarks
5.1. Proof of Corollaries 1.1 and 1.2. In this section we prove Corollaries 1.1 and 1.2.
Proof of Corollary 1.2. For each n ≥ 0, let p_n = (p_{n,j})_{j∈Σ} be a positive probability vector such that sup_n Σ_{b∈Σ_B} p_{n,b} ℓ_b < 1, and assume that lim_{n→∞} p_n = p in R_+^N for some p = (p_j)_{j∈Σ}. Let μ be a weak limit point of (µ_{p_n}); such a μ exists because the space of probability measures on [0, 1] equipped with the weak topology is sequentially compact. After passing to a subsequence we have convergence of the integrals for any continuous function ϕ : [0, 1] → R. Moreover, by the stationarity of the measures µ_{p_n} it follows for each n ≥ 1 that the corresponding stationarity identity holds. To prove that μ is stationary for p, it is sufficient to show (5.2) for each j ∈ Σ. If j ∈ Σ_B this is obvious, since then ϕ ∘ T_j is continuous. For j ∈ Σ_G the map ϕ ∘ T_j might have a discontinuity at c. In this case, we let ϕ_δ be the continuous function given by ϕ_δ(x) = ϕ ∘ T_j(x) for x ∈ I \ (c − δ, c + δ), with ϕ_δ linear otherwise. Then, by the weak convergence and since p_{n,j} → p_j as n → ∞, the integrals of ϕ_δ converge, where the convergence is uniform in n because of (1.7). The last three relations imply (5.2).
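For reference, the stationarity of the weak limit that this proof establishes can be written out as a single identity (with $\bar{\mu}$ denoting the weak limit measure):

```latex
\int_{[0,1]} \varphi \, d\bar{\mu}
  \;=\; \sum_{j \in \Sigma} p_j \int_{[0,1]} \varphi \circ T_j \, d\bar{\mu}
  \qquad \text{for all continuous } \varphi \colon [0,1] \to \mathbb{R},
```

which is exactly $P\bar{\mu} = \bar{\mu}$, since the operator $P$ is dual to $U$.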
To show that μ is absolutely continuous with respect to the Lebesgue measure λ we proceed as in the proof of Theorem 1.3. We set θ = sup_n Σ_{b∈Σ_B} p_{n,b} ℓ_b < 1. Let A ⊆ [0, 1] be a Borel set. By Theorem 1.2 every µ_{p_n} satisfies (1.7), with a constant C_n that depends on (Σ_{g∈Σ_G} p_{n,g})^{−1} and (min{p_{n,g} : g ∈ Σ_G})^{−1} (and on properties of the good and bad maps themselves that are not linked to the probabilities). Since each p_n, n ≥ 0, is a positive probability vector and lim_{n→∞} p_n = p, both these quantities can be bounded from above, and C := sup_n C_n < ∞. From the weak convergence of µ_{p_n} to μ we obtain, as in (4.12), using the Portmanteau Theorem the corresponding bound on μ(A). Hence, μ ≪ λ. By Theorem 1.2 we know that µ_p is the unique acs probability measure for F and p. So, μ = µ_p.

5.2. The non-superattracting case. With some modifications the results from Theorem 1.2 and Theorem 1.3 can be extended to the class B_1 ⊇ B of bad maps for which the critical order ℓ_b in (B3) is allowed to be equal to 1. We will list the modified statements and the necessary modifications to the proofs here. Note that for each T ∈ B_1 \ B we have DT(c) ≠ 0, and the minimal principle applies.

Theorem 5.1. Let {T_j : j ∈ Σ} be as above and p = (p_j)_{j∈Σ} a positive probability vector.
(1) There exists a unique (up to scalar multiplication) stationary σ-finite measure µ_p for F that is absolutely continuous with respect to the one-dimensional Lebesgue measure λ. This measure is ergodic and the density dµ_p/dλ is bounded away from zero and is locally Lipschitz on (0, c) and (c, 1).
(2)(i) The measure µ_p is finite, and for each η the stated estimate holds. The main issue we need to deal with in order to obtain Theorem 5.1 is adapting Lemma 2.4, i.e., finding suitable bounds for |T_ω^n(x) − c|, since the constants K and M from Lemma 2.4 are not well defined in case ℓ_min = 1. This is done in the next two lemmata. For the upper bound of |T_ω^n(x) − c| we assume ℓ_max > 1, since we only need it for the proof of part (2)(i).

Lemma 5.1. Let {T_j : j ∈ Σ} be as above. Suppose ℓ_max > 1. There are constants M > 1 and δ > 0 such that for all n ∈ N, ω ∈ (Σ_B^1)^N and x ∈ [c − δ, c + δ] the stated bound holds. This proves the remaining part of (3)(i).

5.3. Final remarks. The results from Theorem 5.1 contain one possible extension of our main results to another set of conditions (G1)-(G4), (B1)-(B4). In this section we discuss some of the questions that our main results raise in this respect, i.e., whether or not some of the conditions (G1)-(G4), (B1)-(B4) can be relaxed, and questions about other possible future extensions.
A condition that plays a fundamental role in the proofs of Theorem 1.2 and Theorem 1.3 is that the critical point is mapped to a point that is a common repelling fixed point for all maps T_j. We considered whether this condition can be relaxed, for instance by assuming that the branches of one of the good maps are not full. However, in this case the critical values of the random system are not just 0, c, 1 but contain all the values of all possible postcritical orbits of c. This has several consequences:
- An invariant density (if it exists) clearly cannot be locally Lipschitz on (0, c) and (c, 1).
- Proposition 4.2 and all subsequent arguments fail, since it is not sufficient to restrict to neighbourhoods around only 0, c and 1. One might try to solve this issue by requiring that the postcritical orbits 'gain enough expansion', as was done in for instance [29] for deterministic maps. An analogous condition for random systems, however, would be much stronger, since it would have to hold for all possible random orbits of c.
- The argument using Kac's Lemma might fail, because in that case there exist words u with symbols in Σ and neighbourhoods U of c such that T_u(x) is bounded away from zero and one uniformly in x ∈ U.
The dynamical behaviour of the system is governed by the interplay between the superexponential convergence at c and the exponential divergence from 0 and 1. In this article we fixed the exponential divergence away from 0 and 1, and the two regimes θ < 1 and θ ≥ 1 in Theorem 1.3 only refer to the convergence at c: for smaller θ, orbits are less attracted to c. It would be interesting to see under what other conditions on the rates of convergence to c and divergence from 0 and 1 the system admits an acs measure. Could one for example
- take exponential convergence to c and polynomial divergence from 0 and 1, or
- replace the conditions (G4) and (B4), stating that all good and bad maps are expanding at 0 and 1, by the condition that the random system is expanding on average in a sufficiently large neighbourhood of 0 and 1?
There are also some additional questions that our main results raise. It would be interesting for example to study further statistical properties of the random system, such as mixing properties and, if possible, mixing rates in case the acs measure is finite. It is not clear a priori whether the behaviour of the good maps dominates the statistical properties of the random system, since trajectories spend long periods of time near the points 0, c and 1. In this respect the dynamics resembles that of the Manneville-Pomeau maps, and mixing rates might be polynomial rather than exponential. A way to approach this problem is by estimating the measures P × λ({ϕ_Y > n}), where ϕ_Y is the first return time to Y defined in Section 3, as they give information on the rates of decay of correlations. To obtain the desired decay rates it is sufficient to obtain estimates for P × λ({ϕ_Y = k}) for all k > n. Recall that every returning set {ϕ_Y = k} is of the form C_k × J(C_k), where C_k ⊂ Σ^N is a cylinder set and J(C_k) ⊂ I is an interval with return time k, which depends only on C_k. Obtaining effective estimates on individual intervals J by directly looking at pre-images of Y under the skew product system does not seem very feasible at the moment, since cylinders can contain a positive proportion of bad maps. An alternative approach could be a combinatorial construction as in [3] or [13], where a two-step induction process is introduced. To perform a similar construction we would have to find a suitable way to define the binding period or the slow recurrence to the critical set that takes the existence of bad maps into account.
Finally, in Theorems 1.2 and 5.1 we have seen that the regularity of the density dµ_p/dλ depends on whether or not there is a bad map for which c is superattracting: if ℓ_max > 1, then dµ_p/dλ is not in L^q for any q > 1. On the other hand, if ℓ_max = 1 and the bad maps are expanding on average at c, i.e. Σ_{b∈Σ_B^1} p_b |DT_b(c)| < 1, then the density has the same regularity as in the setting of Theorem 1.1 by Nowicki and van Strien. Indeed, in this case, if r_max > 1, we have dµ_p/dλ ∈ L^q if and only if 1 ≤ q < r_max/(r_max − 1), and in the case that r_max = 1 we have dµ_p/dλ ∈ L^q for all q ∈ [1, ∞]. In view of this, one could wonder for which q > 1 we have

Figure 1. Critical intermittency in the random system of logistic maps T_2, T_4. The dashed line indicates part of a random orbit of x

Figure 2. Intermittent behaviour of orbits of (a) a single Manneville-Pomeau map with α = 1.5, (b) a random mixture of the Gauss and Rényi continued fraction maps where the Gauss map is chosen with probability p = 0.1, and (c) a random mixture of the logistic maps T_2 and T_4 where the map T_4 is chosen with probability p = 0.6.
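Panel (c) of this figure can be reproduced with a few lines of code. The following sketch samples a random orbit of the T_2/T_4 mixture (p = 0.6 as in the caption; the starting point and seed are arbitrary choices) and illustrates the superstable mechanism behind the laminar phases:

```python
import random

def T(r, x):
    # Logistic map T_r(x) = r x (1 - x).  T_2 is a "bad" map: c = 1/2 is
    # a superstable fixed point (T_2(1/2) = 1/2, DT_2(1/2) = 0), while
    # T_4 is a "good" map with full branches.
    return r * x * (1.0 - x)

def random_orbit(x0, n, p=0.6, seed=1):
    # Apply T_4 with probability p and T_2 otherwise, as in Figure 2(c).
    rng = random.Random(seed)
    orbit = [x0]
    for _ in range(n):
        r = 4 if rng.random() < p else 2
        orbit.append(T(r, orbit[-1]))
    return orbit

orbit = random_orbit(0.3, 2000)

# Laminar phases: a run of T_2's drives the orbit superexponentially to c,
# since 1/2 - T_2(x) = 2 (x - 1/2)^2.
x = 0.3
for _ in range(10):
    x = T(2, x)
# x is now within 1e-6 of the critical point 1/2.
```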

Figure 3. Four maps with critical point c = 1/2. (a) and (b) show two good maps, while in (c) and (d) we see the graphs of two bad maps.
absolutely continuous with respect to m and µ|_Y = ν.
Recall the constants ℓ_b, K_b and M_b from condition (B3) and set ℓ_min = min{ℓ_b : b ∈ Σ_B} and ℓ_max = max{ℓ_b : b ∈ Σ_B}. (B3) gives us control over the distance between T_ω^n(x) and c.

Lemma. (1) Any two different sets A, A′ ∈ α are disjoint, and on any A ∈ α the first return time map ϕ_Y is constant. (2) Let π denote the canonical projection onto the second coordinate. Any x ∈ J is contained in a set π(A) for some set A ∈ α.

is uniformly bounded and equicontinuous on I for each A = [ug^κ t] × I. By Lemma 3.3(2) it follows that the same holds on J. Hence, by the Arzelà-Ascoli Theorem there exists a function ϕ : J → [0, ∞) satisfying ϕ ≤ K_1 and, for each A = [ug^κ t] × I ∈ α and x, y ∈ I, (3.28) |x − T_g(c)| ≥ K_2 |y − c|^{ℓ_b^k r_g} for the positive constant K_2 = K_g^{r_g} K_b^{r_g}. Now for any q > 1 we can choose k ∈ Z_{≥0} large enough so that τ := (1 − ℓ_b^{−k} r_g^{−1})q ≥ 1. Combining (3.25), (3.27) and (3.28) we obtain

The case ℓ_min = 1. In (3.27), however, we use the estimates from Lemma 2.4 only for one arbitrary fixed b ∈ Σ_B. By the same reasoning as in the proof of Lemma 2.4 it follows that (3.29) K_b^{ℓ_b − 1} |x − c|^{ℓ_b^n} ≤ |T_b^n(x) − c| ≤ M^{−1} |x − c|^{ℓ_b^n} for any b ∈ Σ_B with ℓ_b > 1. Hence, if there exists at least one b ∈ Σ_B with ℓ_b > 1, then we can replace the bounds obtained from Lemma 2.4 in (3.27) and (3.28) by the constants from (3.29).

Proposition 4.1. Suppose θ ≥ 1. Then the unique acs measure µ from Theorem 1.2 is infinite.

Proof. Fix a b ∈ Σ_B. Recall the definitions of M from Lemma 2.4 and δ from in and below the proof of Lemma 2.4, and set γ = min{δ, (1/2)M^{−1}}. Let a ∈ [c − γ, c). Then there exists a ξ ∈ (a, c) such that T_b(a) > ξ and T_b^2(a) > ξ. Take [bb] × (a, ξ) as the inducing domain and let κ(ω, x) = inf{k ∈ N : F^k(ω, x) ∈ [bb] × (a, ξ)} (4.1) be the first return time to [bb] × (a, ξ) under F. If P × µ([bb] × (a, ξ)) = ∞, there is nothing left to prove. If not, then we compute ∫_{[bb]×(a,ξ)} κ dP × µ and use Kac's Formula from Lemma 2.3 to prove the result. So, assume that P × µ([bb] × (a, ξ)) < ∞. The conditions T_b(a) > ξ and T_b^2(a) > ξ, together with the fact that any bad map has c as a fixed point and is strictly monotone on the intervals [0, c] and [c, 1], guarantee that for each n ∈ N and ω ∈ Σ_B^N ∩ [bb] we get T_ω^n((a, ξ)) ∩ (a, ξ) = ∅. (4.2) Recall that we abbreviate p_b = ∏_{i=1}^k p_{b_i} and also let ℓ_b = ∏_{i=1}^k ℓ_{b_i}, where we use p_b = 1 = ℓ_b in case k = 0.

Theorem 4.1. There exists a constant C > 0 such that for all n ∈ N and all Borel sets A ⊆ [0, 1] we have

Before we prove this theorem, we first show how it gives Theorem 1.3.

Proof of Theorem 1.3. The first part of the statement follows from Proposition 4.1. For the second part, assume that θ < 1 and that Theorem 4.1 holds. Let A ⊆ [0, 1] be a Borel set. Using the regularity of λ, for any δ > 0 there exists an open set G ⊆ [0, 1] such that A ⊆ G and λ(G) ≤ λ(A) + δ. Using that (µ_{n_k})_{k∈N} converges weakly to μ, we obtain from the Portmanteau Theorem together with Theorem 4.1 the corresponding bound on μ(G).

Theorem 4.1 leads to the conclusion that there exists a constant C > 0 such that for all n ∈ N and all Borel sets A ⊆ [0, 1], λ_n(A) is bounded accordingly. One could further ask whether dµ_p/dλ ∈ L^q in the intermediate case that ℓ_max = 1 and Σ_{b∈Σ_B^1} p_b |DT_b(c)| ≥ 1, i.e. if c is not superattracting for any bad map and the bad maps are not expanding on average at c.
a.e. on Y, and moreover m-a.e. y ∈ Y returns to Y infinitely often. If we remove from Y the m-null set of points that return to Y only finitely many times, and for convenience call this set Y again, then we can define the induced transformation T