1 Introduction

Ergodic properties of random dynamical systems have been studied extensively for many years (see [3] and the references therein). The most important concepts related to these properties are attractors and invariant measures (see [4, 6, 10, 11]). Among random systems, iterated function systems and the related skew products have received particular attention (see [1, 2, 5]). This note is concerned with iterated function systems generated by orientation-preserving homeomorphisms of the interval [0, 1]. It was proved in [7] that, under quite general conditions, such a system has a unique invariant measure on the open interval (0, 1). (Clearly, such systems also have trivial invariant measures supported on the endpoints.) It is well known that this measure is either absolutely continuous or singular with respect to the Lebesgue measure. Methods that would allow one to decide which of the two cases occurs are still unknown. The main purpose of our paper is to support the conjecture that, typically, such invariant measures should be singular (see [1]). In fact, we prove that the set of iterated function systems whose unique invariant measure on the interval (0, 1) is singular is generic in a natural class of systems.

2 Notation

By \({\mathcal {M}}_1\) and \({\mathcal {M}}_{fin}\) we denote the set of all probability measures and the set of all finite measures on the \(\sigma \)-algebra \(\mathcal {B}([0,1])\) of Borel subsets of [0, 1], respectively. By C([0, 1]) we denote the family of all continuous real-valued functions on [0, 1], equipped with the supremum norm \(\Vert \cdot \Vert \).

An operator \(P:{\mathcal {M}}_{fin}\rightarrow {\mathcal {M}}_{fin}\) is called a Markov operator if it satisfies the following two conditions:

(1) positive linearity: \(P(\lambda _1\mu _1+\lambda _2\mu _2)=\lambda _1 P\mu _1+\lambda _2 P\mu _2\) for \(\lambda _1\), \(\lambda _2\ge 0\) and \(\mu _1\), \(\mu _2\in {\mathcal {M}}_{fin}\);

(2) preservation of measure: \(P\mu ([0,1])=\mu ([0,1])\) for \(\mu \in {\mathcal {M}}_{fin}\).

A Markov operator P is called a Feller operator if there is a linear operator \(P^*:C([0,1])\rightarrow C([0,1])\) (dual to P) such that

$$\begin{aligned} \int \limits _{[0,1]} P^* f(x)\mu (dx) = \int \limits _{[0,1]} f(x) P\mu (dx) \;\;\text {for}\;\; f\in C([0,1]), \mu \in {\mathcal {M}}_{fin}. \end{aligned}$$

Note that, if such an operator exists, then \(P^*(\mathbf{1})=\mathbf{1}\). Moreover, \(P^*(f)\ge 0\) if \(f\ge 0\) and

$$\begin{aligned} \Vert P^*(f) \Vert \le \Vert P^*(|f|) \Vert \le \Vert f \Vert , \end{aligned}$$

so \(P^*\) is a continuous operator. A measure \(\mu _*\in {\mathcal {M}}_{fin}\) is called invariant (or stationary) if \(P\mu _*=\mu _*\).

Let \(\Gamma =\{g_1,\ldots ,g_k\}\) be a finite collection of nondecreasing absolutely continuous functions mapping [0, 1] onto [0, 1], and let \(p=(p_1,\ldots ,p_k)\) be a probability vector. Clearly, it defines a probability distribution p on \(\Gamma \) by putting \(p(g_i)=p_i\). Put \(\Sigma _n=\{1,\ldots ,k\}^n\), and let \(\Sigma _*=\bigcup _{n=1}^\infty \Sigma _n\) be the collection of all finite words with entries from \(\{1,\ldots ,k\}\). For a word \(\omega \in \Sigma _*\), \(\omega =(i_1,\ldots ,i_n)\), we denote by \(|\omega |\) its length (equal to n). Finally, let \(\Omega =\{1,\ldots ,k\}^{{\mathbb {N}}}\). For \(\omega =(i_1,i_2,\ldots )\in \Omega \) and \(n\in {\mathbb {N}}\), we set \(\omega |_n=(i_1,i_2,\ldots ,i_n)\). Let \({\mathbb {P}}\) be the product measure on \(\Omega \) generated by the distribution p on \(\{1,\ldots ,k\}\). To any \(\omega =(i_1,\ldots ,i_n)\in \Sigma _{*}\), there corresponds a composition \( g_\omega =g_{i_n,i_{n-1},\ldots ,i_1} =g_{i_n}\circ g_{i_{n-1}}\circ \cdots \circ g_{i_1} \). The pair \((\Gamma ,p)\), called an iterated function system, generates a Markov operator \(P:{\mathcal {M}}_{fin}\rightarrow {\mathcal {M}}_{fin}\) of the form

$$\begin{aligned} P\mu = \sum _{i=1}^{k} p_i\, g_i\mu , \end{aligned}$$

where \(g_i\mu (A)=\mu (g_i^{-1}(A))\) for \(A\in {\mathcal {B}}([0,1])\). This operator describes the evolution of a distribution under the action of maps chosen at random from the collection \(\Gamma \). It is a Feller operator: its dual operator \(P^*\), given by the formula

$$\begin{aligned} P^* f(x) = \sum _{i=1}^{k} p_i f( g_i(x)) \quad \;\;\text {for}\;\; f\in C([0,1]), \; x\in [0,1], \end{aligned}$$

maps C([0, 1]) into C([0, 1]), i.e., \(P^*(C([0,1])) \subset C([0,1])\).
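To make the action of P and \(P^*\) concrete, the following minimal sketch (assuming NumPy is available) simulates the Markov chain \(x_{n+1}=g_{I_n}(x_n)\), where \(I_n\) is drawn according to p, and compares a Monte Carlo estimate of \(\int f\,d(P\delta _x)\) with \(P^*f(x)\). The two cubic maps and the probability vector are hypothetical choices made only for illustration; they are not the systems studied in this paper.

```python
import numpy as np

# Hypothetical example maps (not from the paper): monotone cubic Hermite interpolants
# fixing 0 and 1, with prescribed endpoint derivatives m0 = g'(0) and m1 = g'(1).
def hermite_map(m0, m1):
    return lambda x: m0 * x * (1 - x)**2 + x**2 * (3 - 2 * x) + m1 * x**2 * (x - 1)

g = [hermite_map(0.5, 2.5), hermite_map(2.5, 0.5)]  # g_1(x) < x < g_2(x) on (0, 1)
p = np.array([0.5, 0.5])                            # probability vector

def dual_P(f, x):
    """Dual operator: P* f(x) = sum_i p_i f(g_i(x))."""
    return sum(pi * f(gi(x)) for pi, gi in zip(p, g))

def step(x, rng):
    """One step of the chain x_{n+1} = g_{I_n}(x_n), I_n ~ p; its law evolves by P."""
    return g[rng.choice(len(g), p=p)](x)

rng = np.random.default_rng(0)
f = lambda x: x**2
samples = np.array([step(0.3, rng) for _ in range(20000)])
print(dual_P(f, 0.3), np.mean(f(samples)))  # the two numbers should nearly agree
```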

Definition 1

We denote by \(C^{+}_{\beta }\), \(\beta >0\), the space of continuous functions \(g:[0,1]\rightarrow [0,1]\) satisfying the following properties:

(1) g is nondecreasing, \(g(0)=0\), and \(g(1)=1\);

(2) g is continuously differentiable on \([0,\beta ]\) and \([1-\beta ,1]\).

We introduce the space \({\mathcal {F}}_0\) of pairs \((\Gamma ,p)\) such that \(\Gamma =\{g_1,\ldots ,g_k\}\subset C^{+}_{\beta }\) is a finite collection of homeomorphisms, and \(p=(p_1,\ldots ,p_k)\) is a probability vector. We endow this space with the metric

$$\begin{aligned} d((\Gamma ,p),(\Delta ,q))= & {} \sum _{i=1}^{k} \Big ( |p_i-q_i| + \Vert g_i-h_i\Vert + \Vert g_i^{-1}-h_i^{-1}\Vert \\&+ \sup _{[0,\beta ]\cup [1-\beta ,1]}|g_i'-h_i'| \Big ), \end{aligned}$$

where \(\Delta =\{h_1,\ldots ,h_k\}\) and \(q=(q_1,\ldots ,q_k)\). It is easy to check that \(({\mathcal {F}}_0,d)\) is a complete metric space.
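For concreteness, the metric d can be approximated numerically on a grid. The sketch below is an assumption-laden illustration, not part of the construction above: it uses Möbius-type homeomorphisms \(m_a(x)=ax/(1+(a-1)x)\), chosen only because their inverses (\(m_a^{-1}=m_{1/a}\)) and derivatives are explicit.

```python
import numpy as np

beta = 0.1
grid = np.linspace(0.0, 1.0, 2001)
ends = grid[(grid <= beta) | (grid >= 1.0 - beta)]

# Hypothetical homeomorphisms with explicit inverses and derivatives.
def mobius(a):
    f = lambda x: a * x / (1.0 + (a - 1.0) * x)
    df = lambda x: a / (1.0 + (a - 1.0) * x)**2
    return f, df

def system(a):
    f, df = mobius(a)
    f_inv, _ = mobius(1.0 / a)
    return f, df, f_inv

def metric_d(Gamma, p, Delta, q):
    """Grid approximation of the metric d defined above."""
    total = 0.0
    for (gi, dgi, gi_inv), pi, (hi, dhi, hi_inv), qi in zip(Gamma, p, Delta, q):
        total += abs(pi - qi)
        total += np.max(np.abs(gi(grid) - hi(grid)))          # ||g_i - h_i||
        total += np.max(np.abs(gi_inv(grid) - hi_inv(grid)))  # ||g_i^{-1} - h_i^{-1}||
        total += np.max(np.abs(dgi(ends) - dhi(ends)))        # sup over [0,beta] u [1-beta,1] of |g_i' - h_i'|
    return total

Gamma, p = [system(2.0), system(0.4)], [0.6, 0.4]
Delta, q = [system(2.1), system(0.4)], [0.5, 0.5]
print(metric_d(Gamma, p, Delta, q))
```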

Definition 2

Let \(\Gamma =\{g_1,\ldots ,g_k\}\subset C^{+}_{\beta }\) be a finite collection of homeomorphisms, and let \(p=(p_1,\ldots ,p_k)\) be a probability vector such that \(p_i>0\) for all \(i=1,\ldots ,k\). The pair \((\Gamma ,p)\) is called an admissible iterated function system if

(1) for any \(x\in (0,1)\), there exist \(i,j\in \{1,\ldots ,k\}\) such that \(g_i(x)<x<g_j(x)\);

(2) the Lyapunov exponents at both common fixed points 0 and 1 are positive, i.e.,

$$\begin{aligned} \sum _{i=1}^{k}p_i\log g_i'(0)> 0 \;\;\text {and}\;\; \sum _{i=1}^{k}p_i\log g_i'(1) > 0. \end{aligned}$$

We consider the space \({\mathcal {F}}\subset {\mathcal {F}}_0\) of admissible iterated function systems \((\Gamma ,p)\) with all maps \(g\in \Gamma \) absolutely continuous. The closure \(\overline{{\mathcal {F}}}\) of \({\mathcal {F}}\) in \(({\mathcal {F}}_0,d)\) is clearly complete. Recall that a subset of a complete metric space is called residual if its complement is a set of the first Baire category.

Let \({\mathcal {M}}_1(0,1)\) denote the space of \(\mu \in {\mathcal {M}}_1\) supported in the open interval (0, 1), i.e., satisfying \(\mu ((0,1))=1\). Clearly, each \((\Gamma ,p)\in {\mathcal {F}}\) has two invariant measures: \(\nu _1=\delta _0\) and \(\nu _2=\delta _1\). As we will show, it also admits a unique invariant measure in \({\mathcal {M}}_1(0,1)\).

Since \(P\mu \) is singular for singular \(\mu \in {\mathcal {M}}_{fin}\), a unique invariant measure is either singular or absolutely continuous (see [9]). The aim of this paper is to show that the set of all \((\Gamma ,p)\in \overline{{\mathcal {F}}}\) whose unique invariant measure \(\mu \in {\mathcal {M}}_1(0,1)\) is singular is residual in \(\overline{{\mathcal {F}}}\).

3 Invariant measure

The paper [7] deals with a special case of an IFS, namely \(k=2\) with \(g_1(x)<x<g_2(x)\) on (0, 1) and both maps twice continuously differentiable. The proof of [7, Lemma 3.2] gives the following result.

Lemma 1

Let \((\Gamma ,p)\) be an admissible iterated function system. Then there exists an ergodic invariant measure \(\mu \in {\mathcal {M}}_1(0,1)\).

The main ingredient of the proof of Lemma 1 is the following subset of \({\mathcal {M}}_1(0,1)\): for small \(0<\alpha <1\) and positive M, one defines

$$\begin{aligned} {\mathcal {N}}_{M,\alpha }= \{ \mu \in {\mathcal {M}}_1(0,1) : \mu ([0,x]) \le Mx^\alpha \;\text {and}\; \mu ( [1-x,1] ) \le Mx^\alpha \;\;\text {for all}\;\; x\in [0,1] \}. \end{aligned}$$
(1)

For each admissible IFS, thanks to the positivity of the Lyapunov exponents, the corresponding Markov operator P leaves some set \({\mathcal {N}}_{M,\alpha }\) invariant for suitable M and \(\alpha \). Then, among the invariant measures for P obtained by the Krylov-Bogolyubov procedure, one can find an ergodic one, due to the convexity of \({\mathcal {N}}_{M,\alpha }\).
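The following hedged sketch illustrates the role of \({\mathcal {N}}_{M,\alpha }\): it approximates a Krylov-Bogolyubov-type average for a hypothetical system by the time average along one sample trajectory and checks the tail bounds from (1) empirically. The maps, the constants M and \(\alpha \), and the trajectory length are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

# Hypothetical admissible-looking system (illustration only): monotone cubics fixing 0 and 1
# with g_1'(0) = 0.5, g_2'(0) = 2.5 and g_1'(1) = 2.5, g_2'(1) = 0.5, so for p = (1/2, 1/2)
# both Lyapunov exponents equal 0.5 * (log 0.5 + log 2.5) = 0.5 * log 1.25 > 0.
def hermite_map(m0, m1):
    return lambda x: m0 * x * (1 - x)**2 + x**2 * (3 - 2 * x) + m1 * x**2 * (x - 1)

g = [hermite_map(0.5, 2.5), hermite_map(2.5, 0.5)]
p = [0.5, 0.5]

# Time average along one trajectory stands in for the Cesaro averages (1/n) sum_k P^k(delta_{x0}).
rng = np.random.default_rng(1)
x, orbit = 0.5, np.empty(200000)
for k in range(orbit.size):
    x = g[rng.choice(2, p=p)](x)
    orbit[k] = x

# Empirical check of the tail bounds defining N_{M,alpha}; M and alpha are assumed constants.
M, alpha = 2.0, 0.2
for t in [0.3, 0.1, 0.03, 0.01]:
    left, right = np.mean(orbit <= t), np.mean(orbit >= 1 - t)
    print(t, left <= M * t**alpha, right <= M * t**alpha)
```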

For later use, we give below an easy extension of the invariance part of [7, Lemma 3.2] and its proof.

Lemma 2

Let \((\Gamma ,p)\) be an admissible iterated function system and let P be the Markov operator corresponding to \((\Gamma ,p)\). There exist \(M>0\) and \(\alpha >0\) such that \({\mathcal {N}}_{M,\alpha }\ne \emptyset \) and

$$\begin{aligned} P({\mathcal {N}}_{M,\alpha }) \subset {\mathcal {N}}_{M,\alpha }. \end{aligned}$$

Proof

Note that, by Definitions 1 and 2, we can find \(x_0>0\) and \(\lambda _1,\ldots ,\lambda _k>0\) such that

$$\begin{aligned} \Lambda :=\sum _{i=1}^{k} p_i\log \lambda _i > 0 \end{aligned}$$

and

$$\begin{aligned} g_i^{-1}(x) \le x/\lambda _i \quad \;\;\text {for all}\;\; x\in [0,x_0]. \end{aligned}$$

Let \(F(\alpha ) = \sum _i p_i e^{-\alpha \log \lambda _i}\). Writing the Taylor expansion at 0, we obtain

$$\begin{aligned} F(\alpha )= & {} \sum _i p_i (1-\alpha \log \lambda _i+\mathcal {O}(\alpha ^2)) = 1-\alpha \sum _i p_i \log \lambda _i +\mathcal {O}(\alpha ^2) \\= & {} 1-\alpha \Lambda +\mathcal {O}(\alpha ^2) < 1 \end{aligned}$$

for \(\alpha \) small enough; choose such \(\alpha >0\). Choose also \(M>0\) large enough so that

$$\begin{aligned} M x_0^\alpha \ge 1. \end{aligned}$$

Assume that \(x\in [0,x_0]\) and let \(\mu \in {\mathcal {N}}_{M,\alpha }\). Then

$$\begin{aligned} \begin{aligned} P\mu ([0,x])&= \sum _i p_i\mu ([0,g_i^{-1}(x)]) \le \sum _i p_i\mu ([0,x/\lambda _i]) \le M \sum _i p_i x^\alpha /\lambda _i^\alpha \\&= Mx^\alpha \sum _i p_i e^{-\alpha \log \lambda _i} = Mx^\alpha \cdot F(\alpha ) \le Mx^\alpha . \end{aligned} \end{aligned}$$

On the other hand, if \(x>x_0\), then \( P\mu ([0,x]) \le 1 \le M x_0^\alpha \le M x^\alpha . \) In the same way, we prove that \(P\mu ([1-x,1])\le Mx^\alpha \) (with a possibly larger M and smaller \(\alpha \)). Finally, enlarging M if necessary so that \(M(1/2)^{\alpha }\ge 1\), we have \(\delta _{1/2}\in {\mathcal {N}}_{M,\alpha }\), and hence \({\mathcal {N}}_{M,\alpha }\ne \emptyset \). This completes the proof. \(\square \)
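As a numerical illustration of the choice of \(\alpha \) in the proof, the sketch below evaluates \(F(\alpha )=\sum _i p_i\lambda _i^{-\alpha }\) for assumed values of \(p_i\) and \(\lambda _i\) with \(\Lambda >0\); as expected, F drops below 1 only once \(\alpha \) is small enough. The numbers are illustrative assumptions.

```python
import numpy as np

# With Lambda = sum_i p_i log(lambda_i) > 0, F(alpha) = sum_i p_i * lambda_i**(-alpha)
# drops below 1 for small alpha > 0.  The values of p and lambda are assumptions.
p = np.array([0.5, 0.5])
lam = np.array([0.5, 2.5])           # individual lambda_i < 1 are allowed as long as Lambda > 0

Lambda = float(np.sum(p * np.log(lam)))
print("Lambda =", Lambda)             # positive (= 0.5 * log 1.25)

for alpha in [1.0, 0.5, 0.2, 0.1, 0.05]:
    F = float(np.sum(p * lam ** (-alpha)))
    print(f"alpha = {alpha:5.2f}   F(alpha) = {F:.4f}   F < 1: {F < 1}")
```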

Using ideas from [8, Lemma 3], one can prove the following:

Lemma 3

Let \((\Gamma ,p)\) be an admissible iterated function system. Assume that there exist \(g_1,g_2\in \Gamma \) such that the ratio \(\ln g_1'(0)/\ln g_2'(0)\) is irrational and \(g_1'(0)>1>g_2'(0)\), and that \(g_1'\) is Hölder continuous in a neighborhood of zero. Then the IFS \((\Gamma ,p)\) is minimal, that is, the orbit \(\{g_{i_n,i_{n-1},\ldots ,i_1}(x)\}_n\) is dense in [0, 1] for each \(x\in (0,1)\) and \({\mathbb {P}}\)-almost all \(\omega =(i_1,i_2,\ldots )\in \Omega \).

Uniqueness may be shown under the conditions of the preceding lemma, for instance using an argument given in [7, Lemma 3.4]; we give another proof, which does not rely on the injectivity of each \(g\in \Gamma \).

Lemma 4

Under the conditions of the preceding lemma, there is a unique stationary measure in \({\mathcal {M}}_1(0,1)\).

Proof

Let \(M>0\) be such that \(P({\mathcal {N}}_{M,\alpha })\subset {\mathcal {N}}_{M,\alpha }\) and let \(\nu \in {\mathcal {N}}_{M,\alpha }\). Put

$$\begin{aligned} \mu _n = \frac{ \nu +P\nu +\cdots +P^{n-1}\nu }{n} \end{aligned}$$

and note that, by Lemma 2 and the convexity of \({\mathcal {N}}_{M,\alpha }\) defined in (1), \(\mu _n\in {\mathcal {N}}_{M,\alpha }\). Let \(\mu \) be an accumulation point of the sequence \((\mu _n)\) in the weak-\(*\) topology in \(C([0,1])^*\). Then it is easy to check that also \(\mu \in {\mathcal {N}}_{M,\alpha }\). Moreover, since P is a Feller operator, every accumulation point of the sequence \((\mu _n)\) is an invariant measure for P.

We now prove the uniqueness. Let \(\mu \in \mathcal {M}_1(0,1)\) be an arbitrary invariant measure. Fix \(f\in C([0,1])\). We define a sequence of random variables \((\xi _n^{f,\mu })_{n\in {\mathbb {N}}}\) by the formula

$$\begin{aligned} \xi _n^{f,\mu }(\omega ) = \int \limits _{[0,1]} f( g_{i_1,\ldots ,i_n} (x) ) \mu (dx) \quad \;\text {for}\; \omega =(i_1,i_2,\ldots ). \end{aligned}$$

Since \(\mu \) is an invariant measure for P, we easily check that \((\xi _n^{f,\mu })_{n\in {\mathbb {N}}}\) is a bounded martingale with respect to the natural filtration. Note that this martingale depends on the measure \(\mu \). From the martingale convergence theorem, it follows that \((\xi _n^{f,\mu })_{n\in {\mathbb {N}}}\) is convergent \({\mathbb {P}}\)-a.s. and since the space C([0, 1]) is separable, there exists a subset \(\Omega _0\) of \(\Omega \) with \({\mathbb {P}}(\Omega _0)=1\) such that \((\xi _n^{f,\mu })_{n\in {\mathbb {N}}}\) is convergent for any \(f\in C([0,1])\) and \(\omega \in \Omega _0\). Therefore, for any \(\omega \in \Omega _0\), there exists a measure \(\omega (\mu )\in \mathcal {M}_1\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty } \xi _n^{f,\mu }(\omega ) = \int \limits _{[0,1]} f(x) \omega (\mu )(dx) \quad \;\text {for every}\; f\in C([0,1]). \end{aligned}$$

We are now ready to show that for any \(\varepsilon >0\), there exists \(\Omega _{\varepsilon }\subset \Omega \) with \({\mathbb {P}}(\Omega _{\varepsilon })=1\) satisfying the following property: for every \(\omega \in \Omega _{\varepsilon }\), there exists an interval I of length \(|I|\le \varepsilon \) such that \(\omega (\mu )(I)\ge 1-\varepsilon \). Taking \(\varepsilon =1/m\) and intersecting the sets \(\Omega _{1/m}\) over \(m\in {\mathbb {N}}\), we obtain that \(\omega (\mu ) = \delta _{v(\omega )}\) for all \(\omega \) from some set \({\tilde{\Omega }}\) with \({\mathbb {P}}({\tilde{\Omega }})=1\), where \(v(\omega )\) is a point in [0, 1].

Fix \(\varepsilon >0\) and let \(a,b\in (0,1)\) be such that \(\mu ([a,b]) > 1-\varepsilon \). Let \(\ell \in {\mathbb {N}}\) be such that \(1/\ell < \varepsilon /2\). Since for any \(x\in (0,1)\), there exists \(i\in \{1,\ldots ,k\}\) such that \(g_i(x)<x\), we may find a sequence \((\mathbf{j}_n)_{n\in {\mathbb {N}}}\), \(\mathbf{j}_n\in \Sigma _*\), such that \(g_{\mathbf{j}_n}(b) \rightarrow 0\) as \(n\rightarrow \infty \). Therefore, there exist words \(\mathbf{i}_1,\ldots ,\mathbf{i}_\ell \in \Sigma _*\) such that

$$\begin{aligned} g_{\mathbf{i}_m}([a,b]) \cap g_{\mathbf{i}_n}([a,b]) = \emptyset \quad \;\text { for }\; m,n\in \{1,\ldots ,\ell \}, m\ne n. \end{aligned}$$

Put \(n^*=\max _{m\le \ell } |\mathbf{i}_m|\) and set \(J_m = g_{\mathbf{i}_m}([a,b])\) for \(m\in \{1,\ldots ,\ell \}\). Each \(J_m\) is a closed interval. (If we drop the requirement that the \(g_i\) be injective, some \(J_m\) may degenerate to a singleton, which does not spoil the proof; see Remark 5 below.) Now observe that for any word \(\mathbf{u} = (u_1,\ldots ,u_n)\in \Sigma _*\), the sets \(g_{\mathbf{u}}(J_1),\ldots ,g_{\mathbf{u}}(J_\ell )\) are subintervals of [0, 1] with pairwise disjoint interiors, so there exists \(m\in \{1,\ldots ,\ell \}\) such that \({{\,\mathrm{diam}\,}}(g_{\mathbf{u}}(J_m))\le 1/\ell < \varepsilon /2\). This shows that for any cylinder in \(\Omega \), defined by fixing the first n entries \((u_1,\ldots ,u_n)\), the conditional probability that \({{\,\mathrm{diam}\,}}(g_{u_1,\ldots ,u_n,\ldots ,u_{n+k}}([a,b])) \ge \varepsilon /2\) for all \(k=1,\ldots ,n^*\) is at most \(1-q\) for some \(q>0\) independent of n. Hence there exists \(\Omega _\varepsilon \subset \Omega \) with \({\mathbb {P}}(\Omega _\varepsilon ) = 1\) such that for all \((u_1,u_2,\ldots ) \in \Omega _{\varepsilon }\), we have \({{\,\mathrm{diam}\,}}(g_{u_1,\ldots ,u_n}([a,b])) < \varepsilon /2\) for infinitely many n. Since [0, 1] is compact, we may additionally assume that, for infinitely many n, the set \(g_{u_1,\ldots ,u_n}([a,b])\) is contained in an interval I (depending on \(\omega =(u_1,u_2,\ldots )\)) with \({{\,\mathrm{diam}\,}}(I)\le \varepsilon \). Observe that this I does not depend on \(\mu \): if \(\mu _1\) and \(\mu _2\) are two stationary distributions, and if a, b are chosen in such a way that \(\mu _1([a,b]) > 1-\varepsilon \) and \(\mu _2([a,b]) > 1-\varepsilon \), then both \(\omega (\mu _1)(I)\ge 1-\varepsilon \) and \(\omega (\mu _2)(I)\ge 1-\varepsilon \) hold \({\mathbb {P}}\)-a.s. Hence \(\omega (\mu _1)=\omega (\mu _2)=\delta _{v(\omega )}\) for \({\mathbb {P}}\)-almost every \(\omega \in \Omega \). Consequently, for any \(f\in C([0,1])\) and for \(i=1,2,\) we have

$$\begin{aligned} \int \limits _{[0,1]} f(x) \mu _i(dx) = \lim _{n\rightarrow \infty } \int \limits _{[0,1]} f(x) P^n\mu _i(dx) = \int \limits _\Omega \lim _{n\rightarrow \infty } \xi _n^{f,\mu _i}(\omega ) {\mathbb {P}}(d\omega ). \end{aligned}$$

Since the last integral equals \(\int _\Omega f(v(\omega )) {\mathbb {P}}(d\omega )\) in both cases, and since \(f\in C([0,1])\) was arbitrary, we obtain \(\mu _1=\mu _2\) and the proof is complete. \(\square \)
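The geometric heart of the proof is the claim that, almost surely, the backward images \(g_{u_1,\ldots ,u_n}([a,b])\) have diameter below any given threshold for infinitely many n. The sketch below merely tracks these diameters along one simulated sample path, reusing the hypothetical cubic maps from the earlier sketches; it is an illustration of the bookkeeping, not part of the proof.

```python
import numpy as np

# Backward images g_{u_1,...,u_n}([a,b]) = g_{u_1} o ... o g_{u_n}([a,b]) for one sample path.
def hermite_map(m0, m1):   # same hypothetical maps as before
    return lambda x: m0 * x * (1 - x)**2 + x**2 * (3 - 2 * x) + m1 * x**2 * (x - 1)

g = [hermite_map(0.5, 2.5), hermite_map(2.5, 0.5)]
p = [0.5, 0.5]
a, b = 0.1, 0.9

rng = np.random.default_rng(2)
u = rng.choice(2, size=400, p=p)          # one sample path u_1, u_2, ...

def backward_image(n):
    """Endpoints of g_{u_1} o ... o g_{u_n}([a, b]) (all maps are increasing)."""
    lo, hi = a, b
    for i in u[:n][::-1]:                 # apply g_{u_n} first, g_{u_1} last
        lo, hi = g[i](lo), g[i](hi)
    return lo, hi

best = 1.0                                # running minimum of the diameter
for n in range(1, len(u) + 1):
    lo, hi = backward_image(n)
    best = min(best, hi - lo)
    if n % 80 == 0:
        print(n, "current diam =", round(hi - lo, 6), "min so far =", round(best, 6))
```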

Remark 5

The proofs of Lemmata 2, 3, and 4 do not use the fact that \(g_i\), restricted to \((\beta ,1-\beta )\), is injective. More precisely, these results remain valid if we replace the word “homeomorphisms” by “functions” in Definition 2.

4 Auxiliary results and main theorem

Recall that, on \({\mathcal {M}}_1\), the weak-\(*\) topology inherited from \(C([0,1])^*\) is induced by the Fortet-Mourier norm

$$\begin{aligned} \Vert \nu \Vert _{\mathrm{FM}} = \sup \left\{ \langle f, \nu \rangle : f\in BL_1 \right\} , \end{aligned}$$

where \(BL_1\) is the space of all functions \(f:[0,1]\rightarrow \mathbb {R}\) such that \(|f(x)|\le 1\) and \(|f(x)-f(y)|\le |x-y|\) for \(x,y\in [0,1]\).

Denote by \(d_0\) the distance between arbitrary (not necessarily admissible) iterated function systems given by

$$\begin{aligned} d_0((\{S_1,\ldots ,S_k\},p),(\{T_1,\ldots ,T_k\},q)) = \sum _{i=1}^{k} \big ( |p_i-q_i| + \Vert S_i-T_i\Vert \big ). \end{aligned}$$
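For measures with finite support, the Fortet-Mourier norm can be computed exactly as a small linear program, which may help when checking estimates such as (2) numerically. The sketch below assumes NumPy and SciPy are available; it is an illustration, not a construction used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def fm_norm(points, weights):
    """Fortet-Mourier norm of the signed measure nu = sum_j weights[j] * delta_{points[j]}:
    maximize <f, nu> over f with |f| <= 1 and Lip(f) <= 1.  On the real line it suffices
    to impose the Lipschitz constraint between consecutive support points."""
    order = np.argsort(points)
    x = np.asarray(points, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    n = len(x)
    c = -w                                   # linprog minimizes, so minimize -<f, nu>
    A = np.zeros((2 * (n - 1), n))
    b = np.zeros(2 * (n - 1))
    for j in range(n - 1):                   # |f_{j+1} - f_j| <= x_{j+1} - x_j
        A[2 * j, j], A[2 * j, j + 1] = -1.0, 1.0
        A[2 * j + 1, j], A[2 * j + 1, j + 1] = 1.0, -1.0
        b[2 * j] = b[2 * j + 1] = x[j + 1] - x[j]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(-1.0, 1.0)] * n, method="highs")
    return -res.fun

# For two Dirac measures on [0,1] one gets ||delta_a - delta_b||_FM = |a - b|:
print(fm_norm([0.2, 0.3], [1.0, -1.0]))   # approx 0.1
print(fm_norm([0.0, 1.0], [1.0, -1.0]))   # approx 1.0
```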

Lemma 6

Let \((S,p)\) and \((T,q)\) be given iterated function systems and let \(P_S\) and \(P_T\) be the corresponding Markov operators. Then for every \(\mu _1\), \(\mu _2\in {\mathcal {M}}_{fin}\), we have

$$\begin{aligned} \Vert (P_S-P_T)(\mu _1-\mu _2)\Vert _{\mathrm{FM}} \le (\mu _1([0,1])+\mu _2([0,1])) \, d_0( (S,p), (T,q) ). \end{aligned}$$
(2)

A proof of this lemma may be found in [12].

Lemma 7

Suppose that the iterated function systems \((S,p)\) and \((S_1,p)\), \((S_2,p)\), ..., are such that, for some \(M>0\) and \(\alpha \in (0,1)\),

$$\begin{aligned} \mu _1,\mu _2,\ldots \in {\mathcal {N}}_{M,\alpha }\;\;\;\text {and}\;\; \lim _{n\rightarrow \infty }d_0((S_n,p),(S,p))=0, \end{aligned}$$

where \(\mu _n\) is a stationary distribution for \((S_n,p)\). Then \((\mu _n)_n\) admits a subsequence weakly convergent to a stationary distribution \(\mu \) for \((S,p)\); moreover, \(\mu \in {\mathcal {N}}_{M,\alpha }\).

Proof

By the Prohorov theorem, there is a subsequence \(\mu _{n_k}\) converging in \(({\mathcal {M}}_1,\Vert \cdot \Vert _{\mathrm{FM}})\) to some \(\mu \in {\mathcal {M}}_1\). We denote it by \(\mu _n\) again, for convenience, so \(\Vert \mu _n-\mu \Vert _{\mathrm{FM}}\rightarrow 0\). Applying Lemma 6, we get

$$\begin{aligned} \Vert P\mu _n - \mu _n\Vert _{\mathrm{FM}} = \Vert (P-P_n)\mu _n\Vert _{\mathrm{FM}} \le \mu _n([0,1])\;d_0((S_n,p),(S,p)) \rightarrow 0, \end{aligned}$$

where P and \(P_n\) are the Markov operators corresponding to \((S,p)\) and \((S_n,p)\), respectively. The weak continuity of P implies \(P\mu _n\rightarrow P\mu \) in \(\Vert \cdot \Vert _{\mathrm{FM}}\), so that \(\Vert P\mu -\mu \Vert _{\mathrm{FM}}=0\). Now, \(\mu \) is an invariant measure for P, but we still have to check that \(\mu \in {\mathcal {N}}_{M,\alpha }\). To this end, let \(x\in (0,1)\) be arbitrary. Then

$$\begin{aligned} \mu _n([x,1]) \ge 1-Mx^\alpha \;\;\text {and}\;\; \mu _n([0,1-x]) \ge 1-Mx^\alpha \quad \;\;\text {for all}\;\; n\in {\mathbb {N}}. \end{aligned}$$

By the Alexandrov theorem applied to these closed sets, \(\mu ([x,1])\ge \limsup _n \mu _n([x,1])\ge 1-Mx^\alpha \), that is, \(\mu ([0,x))\le Mx^\alpha \). Since \(x\in (0,1)\) was arbitrary, letting \(x'\downarrow x\) in \(\mu ([0,x])\le \mu ([0,x'))\le Mx'^\alpha \) gives \(\mu ([0,x])\le Mx^\alpha \); the bound \(\mu ([1-x,1])\le Mx^\alpha \) follows in the same way, which means that \(\mu \in {\mathcal {N}}_{M,\alpha }\). \(\square \)

Let \({\mathcal {M}}_1^\varepsilon \) be the set of those \(\mu \in {\mathcal {M}}_1\) for which there is a Borel set A with Lebesgue measure \(m(A)<\varepsilon \) such that \(\mu (A)>1-\varepsilon /2\).

Remark 8

Note that, if \(\mu \in {\mathcal {M}}_1\) is singular, then for every \(\varepsilon >0\), there is \(\delta >0\) such that every \(\nu \in {\mathcal {M}}_1\) with \(\Vert \mu -\nu \Vert _{\mathrm{FM}}<\delta \) belongs to \({\mathcal {M}}_1^\varepsilon \). For a proof, see [13, Lemma 2.4.1].

For every \(n\in {\mathbb {N}},\) let \({\mathcal {F}}_n\subset {\mathcal {F}}\) be the set of all \((\Gamma ,p)\in {\mathcal {F}}\) with invariant measure \(\mu \in {\mathcal {M}}_1(0,1)\cap {\mathcal {M}}_1^{1/n}\).

Lemma 9

For every \(n\in {\mathbb {N}},\) the set \({\mathcal {F}}_n\) is dense in the space \({\mathcal {F}}\) endowed with the metric d.

Proof

Fix \((\Gamma ,p)\in {\mathcal {F}}\), \(n\in {\mathbb {N}}\), and \(\varepsilon >0\). After an arbitrarily small perturbation in the metric d, we may assume that \((\Gamma ,p)\) satisfies the requirements of Lemma 3. It is easy to construct absolutely continuous modifications of \(g_1\) (on \((\beta ,1-\beta )\)) in such a way that we obtain a sequence \((\Gamma _m,p)_{m\in {\mathbb {N}}}\subset {\mathcal {F}}\) satisfying \(d((\Gamma _m,p),(\Gamma ,p))<\varepsilon \) and

$$\begin{aligned} \lim _{m\rightarrow \infty }d_0((\Gamma _m,p),(S_0,p)) = 0 \end{aligned}$$

where \(S_0=\{g_0,g_2,\ldots ,g_k\}\) and \(g_0\) coincides with \(g_1\) outside \((\beta ,1-\beta )\) and is constant, \(g_0(x)=x_0\), on an interval \([u,v]\subset (\beta ,1-\beta )\) with \(v-u<2\varepsilon \). As we have noted in Remark 5, due to the inclusion \([u,v]\subset (\beta ,1-\beta )\), the system \((S_0,p)\) has a unique stationary distribution \(\mu _0\in {\mathcal {M}}_1(0,1)\), and \(\mu _0\) has full support in [0, 1]. Since \(\mu _0\) is invariant and has full support, \(\mu _0(\{x_0\})\ge p_1\,\mu _0(g_0^{-1}(\{x_0\}))\ge p_1\,\mu _0([u,v])>0\), so \(\mu _0\) is not absolutely continuous. As remarked earlier, the uniqueness of \(\mu _0\) implies that it is either absolutely continuous or singular.

Now, \(\mu _0\) is singular, and we would like to apply Remark 8 to find \((\Gamma _m,p)\in {\mathcal {F}}_n\) for some \(m\in {\mathbb {N}}\). In the light of Lemma 7, it suffices to show that all the systems \((\Gamma _m,p)\) have stationary distributions belonging to the same set \({\mathcal {N}}_{M,\alpha }\) for some \(M,\alpha >0\). But this follows easily from the proof of Lemma 2, taking into account that the corresponding elements of \(\Gamma _m\) are identical on \((0,\beta )\cup (1-\beta ,1)\). \(\square \)

Theorem 10

The set of all \((\Gamma ,p)\in \overline{{\mathcal {F}}}\) which have a unique singular stationary distribution in \({\mathcal {M}}_1(0,1)\) is residual in \(\overline{{\mathcal {F}}}\).

Proof

For \((\Gamma ,p)\in {\mathcal {F}}_n\), we find \(\delta _0>0\) such that

$$\begin{aligned} |x - \min _{1\le i\le k}g_i(x)|> \delta _0 \;\;\text {and}\;\; |x - \max _{1\le i\le k}g_i(x)| > \delta _0 \quad \;\;\text {on}\;\; (1/n,1-1/n). \end{aligned}$$

Moreover, to \((\Gamma ,p)\) we adjoin a compact set \(Z_{(\Gamma ,p)}\subset [0,1]\) such that

$$\begin{aligned} \mu _{(\Gamma ,p)}(Z_{(\Gamma ,p)}) > 1-\frac{1}{2n} \;\;\text {and}\;\; m(Z_{(\Gamma ,p)}) < \frac{1}{n}, \end{aligned}$$

where \(\mu _{(\Gamma ,p)}\in {\mathcal {M}}_1(0,1)\) is the invariant measure for \({(\Gamma ,p)}\). Further, due to the regularity of the Lebesgue measure, we can find a positive number \(r_{(\Gamma ,p)}\) such that

$$\begin{aligned} m( O( Z_{(\Gamma ,p)}, r_{(\Gamma ,p)} ) ) < \frac{1}{n}, \end{aligned}$$

where \(O(Z_{(\Gamma ,p)},r_{(\Gamma ,p)})\) is the open neighborhood of \(Z_{(\Gamma ,p)}\) in [0, 1] with the radius \(r_{(\Gamma ,p)}\). Denote by \(A_{(\Gamma ,p)}\) the set \([0,1]{\setminus } O( Z_{(\Gamma ,p)}, r_{(\Gamma ,p)} )\) and consider the classical Tietze function \(f_{(\Gamma ,p)}:[0,1]\rightarrow {\mathbb {R}}\) for the sets \(Z_{(\Gamma ,p)}\) and \(A_{(\Gamma ,p)}\) given by the formula

$$\begin{aligned} f_{(\Gamma ,p)}(x) = \frac{\Vert x,A_{(\Gamma ,p)}\Vert }{\Vert x,A_{(\Gamma ,p)}\Vert +\Vert x,Z_{(\Gamma ,p)}\Vert }, \end{aligned}$$

where \(\Vert x,A\Vert \) stands for the distance of the point x from the set A. We have \(f_{(\Gamma ,p)}(x)=0\) for \(x\in A_{(\Gamma ,p)}\) and \(f_{(\Gamma ,p)}(x)=1\) for \(x\in Z_{(\Gamma ,p)}\). Obviously, \(|f_{(\Gamma ,p)}|\le 1\) and \(f_{(\Gamma ,p)}\) is Lipschitz continuous with some Lipschitz constant \(l_{(\Gamma ,p)}>1\).
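A minimal sketch of this Tietze-type function, with the two sets replaced by dense grids: the particular sets Z and A below are hypothetical and serve only to show that f equals 1 on Z, 0 on A, and interpolates in between.

```python
import numpy as np

def dist(x, pts):
    """Distance ||x, A|| from x to a (finite grid approximation of a) set A."""
    return float(np.min(np.abs(np.asarray(pts) - x)))

def tietze(x, Z, A):
    """f(x) = ||x, A|| / (||x, A|| + ||x, Z||): equals 1 on Z, 0 on A, and is Lipschitz
    whenever Z and A are disjoint closed sets at positive distance."""
    dA, dZ = dist(x, A), dist(x, Z)
    return dA / (dA + dZ)

# Hypothetical sets, for illustration: Z = [0.40, 0.45] and A = [0,1] \ O(Z, 0.02).
Z = np.linspace(0.40, 0.45, 201)
A = np.concatenate([np.linspace(0.0, 0.38, 191), np.linspace(0.47, 1.0, 266)])
for x in [0.42, 0.39, 0.30]:
    print(x, round(tietze(x, Z, A), 3))
```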

By Lemma 7, and by the proof of Lemma 2, the map taking \((\Gamma ,p)\in {\mathcal {F}}\) into its invariant measure \(\mu _{(\Gamma ,p)}\in {\mathcal {M}}_1(0,1)\) is continuous with respect to \(({\mathcal {F}},d)\) and \(({\mathcal {M}}_1(0,1),\Vert \cdot \Vert _{\mathrm{FM}})\). Hence, there is \(\delta _1>0\) such that for \((\Delta ,q)\in {\mathcal {F}}\) satisfying \(d( (\Delta ,q), (\Gamma ,p) )<\delta _1\), we have

$$\begin{aligned} \Vert \mu _{(\Gamma ,p)} - \mu _{(\Delta ,q)}\Vert _{\mathrm{FM}} < \frac{1}{2n\cdot l_{(\Gamma ,p)}}, \end{aligned}$$
(3)

where \(\mu _{(\Delta ,q)}\in {\mathcal {M}}_1(0,1)\) is the invariant measure for \((\Delta ,q)\).

Additionally, since \((\Gamma ,p)\) is admissible, it follows from Definition 2 that there is \(\delta _2>0\) with the property

$$\begin{aligned} \sum _{i=1}^{k}p_i\log (g_i'(0)-\delta _2)> 0 \;\;\text {and}\;\; \sum _{i=1}^{k}p_i\log (g_i'(1)-\delta _2) > 0. \end{aligned}$$

Now, for \((\Gamma ,p)\in {\mathcal {F}}_n\), with the aid of \(\delta _{(\Gamma ,p)}=\min \{\delta _0/2,\delta _1,\delta _2\}\), we define

$$\begin{aligned} {\widehat{{\mathcal {F}}}} = \bigcap _{n=1}^\infty \bigcup _{(\Gamma ,p)\in {\mathcal {F}}_n} B_{\overline{{\mathcal {F}}}}((\Gamma ,p),\delta _{(\Gamma ,p)}), \end{aligned}$$

where \(B_{\overline{{\mathcal {F}}}}((\Gamma ,p),\delta _{(\Gamma ,p)})\) is the open ball in \((\overline{{\mathcal {F}}},d)\) with center \((\Gamma ,p)\) and radius \(\delta _{(\Gamma ,p)}\). Clearly, \(\widehat{{\mathcal {F}}}\), being an intersection of open dense subsets of \(\overline{{\mathcal {F}}}\), is residual.

We will show that if \((T,q)\in \widehat{{\mathcal {F}}}\), then \((T,q)\) has a unique stationary distribution supported on (0, 1) and this distribution is singular. Let \((T,q)\in \widehat{{\mathcal {F}}}\) be fixed. From the definition of \({\widehat{{\mathcal {F}}}}\), it follows that there exists a sequence \(((\Gamma ,p)_n)_{n\in {\mathbb {N}}}\) such that \((\Gamma ,p)_n\in {\mathcal {F}}_n\) and \((T, q)\in B_{\overline{{\mathcal {F}}}}((\Gamma , p)_n,\delta _{(\Gamma , p)_n})\) for \(n\in {\mathbb {N}}\). Note that \(T=\{h_1,\ldots ,h_k\}\) is a collection of homeomorphisms, since convergence in the metric d controls the inverse maps as well. By the same token, the functions \(h_i\) are even of class \(C^1\) on \([0,\beta ]\) and on \([1-\beta ,1]\). It is easy to check that \((T,q)\) satisfies condition (1) from Definition 2 due to the choice \(\delta _{(\Gamma ,p)}\le \delta _0/2<\delta _0\). Moreover, by the inequality \(\delta _{(\Gamma ,p)}\le \delta _2\), the Lyapunov exponents for \((T,q)\) remain positive. This means that \((T,q)\) is an admissible iterated function system. Hence it admits a unique invariant measure \(\mu _{(T,q)}\in {\mathcal {M}}_1(0,1)\). Again by Lemma 7, there is a subsequence \((\Gamma ,p)_{n_k}\) whose invariant measures converge to \(\mu _{(T,q)}\) in \(\Vert \cdot \Vert _{\mathrm{FM}}\). For brevity, let us denote the elements of this subsequence again by \((\Gamma ,p)_n\), the corresponding invariant measures by \(\mu _n\), and let \(Z_n=Z_{(\Gamma ,p)_n}\), \(O(Z_n,r_n)=O(Z_{(\Gamma ,p)_n},r_{(\Gamma ,p)_n})\) for \(n\in {\mathbb {N}}\).

Since \(|l_n^{-1}f_n|\le 1\) and \({{\,\mathrm{Lip}\,}}(l_n^{-1}f_n)\le 1\), where \(f_n=f_{(\Gamma ,p)_n}\) and \(l_n=l_{(\Gamma ,p)_n}\), we have \(l_n^{-1}f_n\in BL_1\), and hence, by (3),

$$\begin{aligned} \left| \int \limits _{[0,1]}f_n(x)\mu _n(dx) - \int \limits _{[0,1]}f_n(x)\mu _{(T,q)}(dx) \right| \le l_n\Vert \mu _n-\mu _{(T,q)}\Vert _{\mathrm{FM}} < \frac{1}{2n}. \end{aligned}$$

Since \(f_n(x)=1\) for \(x\in Z_n\) and \(f_n(x)=0\) for \(x\not \in O(Z_n,r_n)\), this yields

$$\begin{aligned} \mu _n(Z_n) - \mu _{(T,q)}(O( Z_n,r_n )) < \frac{1}{2n}. \end{aligned}$$

Thus

$$\begin{aligned} \mu _{(T,q)}( O(Z_n,r_n) )> \mu _n(Z_n)-\frac{1}{2n} > 1 -\frac{1}{2n} -\frac{1}{2n} = 1 -\frac{1}{n}. \end{aligned}$$

This, together with the inequality \(m( O(Z_n,r_n) )<\frac{1}{n}\) for \(n\in {\mathbb {N}}\), proves that \(\mu _{(T,q)}\) is singular: passing to a subsequence \((n_k)\) with \(\sum _k 1/n_k<\infty \), the set \(\bigcap _{K\in {\mathbb {N}}}\bigcup _{k\ge K}O(Z_{n_k},r_{n_k})\) has Lebesgue measure zero and full \(\mu _{(T,q)}\)-measure. The proof is complete. \(\square \)