1 Introduction

The main goal of this paper is to prove the Central Limit Theorem (CLT) for Markov operators generated by random dynamical systems. The existence of an exponentially attractive invariant measure was proved by Horbacz and Ślȩczka [19].

Random dynamical systems [15, 17] cover several important and widely studied classes of models, namely dynamical systems generated by learning systems [1, 20, 22, 29], iterated function systems with an infinite family of transformations [37, 38], Poisson driven stochastic differential equations [16, 35, 36], random evolutions [11, 32] and irreducible Markov systems [41], used in the computer modelling of various stochastic processes.

A large class of applications of such models, both in physics and biology, is worth mentioning here: shot noise, photoconductive detectors, the growth of structured populations, the motion of relativistic particles, both fermions and bosons (see [10, 23, 26]), and the generalized stochastic process introduced in the recent model of gene expression by Lipniacki et al. [30]; see also [3, 14, 18]. The results yield information of biological relevance. On the other hand, it should be noted that many Markov chains, appearing among other places in statistical physics, may be represented as iterated function systems (see [24]); for example, iterated function systems have been used to study invariant measures for the Ważewska partial differential equation, which describes the reproduction of red blood cells [27].

In our paper we rely on coupling methods introduced by Hairer [12]. In the same spirit, the Central Limit Theorem was proved by Hille, Horbacz, Szarek and Wojewódka [13] for a stochastic model of an autoregulated gene. Komorowski and Walczuk [25] studied Markov processes whose transfer operator has a spectral gap in the Wasserstein metric and proved the CLT in the non-stationary case.

A properly constructed coupling measure, combined with the results for stationary ergodic Markov chains given by Maxwell and Woodroofe [31], is also crucial in the proof of the CLT. Once the coupling measure is constructed, the proof of the CLT is brief and less technical than typical proofs based on Gordin's martingale approximation.

The aim of this paper is to study stochastic processes whose paths follow deterministic dynamics between random times, the jump times, at which they change their position randomly. Hence, we analyse stochastic processes in which randomness appears at times \(\tau _0< \tau _1<\tau _2<\ldots \) We assume that a point \( x_0 \in Y \) moves according to one of the dynamical systems \( T_i : {\mathbb {R}}_+ \times Y \rightarrow Y \) from some set \(\{ T_1, \ldots , T_N \}\). The motion of the process is governed by the equation \( X(t) = T_i (t, x_0) \) until the first jump time \(\tau _1\). Then we choose a transformation \(q_{\theta } : Y \rightarrow Y\) from a set \(\{q_1, \ldots , q_K \} \) and define \(x_1 = q_{\theta }(T_i (\tau _1, x_0))\). The process restarts from the new point \(x_1\) and continues as before. This gives the stochastic process \(\{X(t)\}_{t \ge 0}\) with jump times \(\{\tau _1, \tau _2, \ldots \}\) and post-jump positions \(\{x_1, x_2, \ldots \}\). The frequency with which the dynamical systems \(T_i\) are chosen is described by a matrix of probabilities \({[p_{ij}]}_{i,j=1}^N \), \(p_{ij} : Y \rightarrow [0, 1]\). The maps \(q_{\theta }\) are randomly chosen with place-dependent distribution.

The existence of an exponentially attractive invariant measure and the strong law of large numbers for the Markov operator generated by discrete-time random dynamical systems were proved by Horbacz and Ślȩczka in [19]. Our model is similar to the so-called piecewise-deterministic Markov process introduced by Davis [5]. There is a substantial literature devoted to the problem of the existence of an exponentially attractive invariant measure for piecewise-deterministic Markov processes. In [2] the authors consider the particular case of random dynamical systems without jumps (i.e. \(q_{\theta }(x) = x\)) when \(Y = \mathbb {R}^d\). Under Hörmander-type bracket conditions, they prove that there exists a unique invariant measure and that the process converges to equilibrium in the total variation norm. We consider random dynamical systems with randomly chosen jumps acting on a given Polish space \((Y,\varrho )\). In fact, it is difficult to ensure that the process under consideration satisfies all the ergodic properties on a compact set. In [4] the authors consider a Markov process with two components: the first component evolves according to one of finitely many underlying Markovian dynamics, with the choice of dynamics changing at the jump times of the second component, again without jumps.

Given a Lipschitz function \(g:X \rightarrow \mathbb {R}\) we define

$$\begin{aligned} S_n(g)= g(x_0) + \cdots + g(x_{n-1})\quad {\text {and}} \quad S_t(g) = \int _0^tg(X(s))ds. \end{aligned}$$

Our aim is to find conditions under which \(S_n(g)\) and \(S_t(g)\) satisfy the CLT.

The organization of the paper goes as follows. Section 2 introduces basic notation and definitions that are needed throughout the paper. Random dynamical systems are introduced in Sect. 3. The main theorem (CLT) is also formulated there. Section 4 is devoted to the construction of a coupling measure for random dynamical systems. Auxiliary theorems are proved in Sect. 5. The CLT for discrete and continuous time processes is established in Sect. 6. In Sect. 7 we illustrate the usefulness of our criteria for the CLT for Markov chains associated with iterated function systems with place-dependent probabilities and Poisson driven stochastic differential equations.

2 Notation and Basic Definitions

Let \((X,\varrho _X)\) be a Polish space. We denote by \(B_X\) the family of all Borel subsets of X. Let B(X) be the space of all bounded measurable functions \(f:X\rightarrow \mathbb {R}\) with the supremum norm. Further, C(X) is the space of all bounded continuous functions and \(Lip_b(X)\) is the space of all bounded Lipschitz functions, both with the supremum norm.

We denote by M(X) the family of all non-negative Borel measures on X and by \(M_{fin}(X)\) and \(M_1(X)\) its subfamilies of measures such that \(\mu (X)<\infty \) and \(\mu (X)=1\), respectively. Elements of \(M_{fin}(X)\) which satisfy \(\mu (X)\le 1\) are called sub-probability measures. To simplify notation, we write

$$\begin{aligned} \langle f,\mu \rangle =\int _X f(x)\mu (dx)\quad \text {for}\, f\in B(X),\, \mu \in M(X). \end{aligned}$$

Let \(\mu \in M(X)\); by \(L^2(\mu )\) we denote the space of square integrable functions \(g: X \rightarrow \mathbb {R}\), for which \(\Vert g\Vert ^2=\int _X g^2\,d\mu < \infty \), and by \(L^2_0(\mu )\) the set of \( g\in L^2(\mu )\) for which \(\langle g,\mu \rangle =0\).

An operator \(P:M_{fin}(X)\rightarrow M_{fin}(X)\) is called a Markov operator if

$$\begin{aligned}&P(\lambda _1\mu _1+\lambda _2\mu _2)=\lambda _1 P\mu _1+\lambda _2 P\mu _2\quad \text { for }\,\lambda _1,\lambda _2\ge 0,\; \mu _1, \mu _2\in M_{fin}(X),\\&P\mu (X)=\mu (X)\quad \text { for }\, \mu \in M_{fin}(X). \end{aligned}$$

A Markov operator \(P: M_{fin}(X)\rightarrow M_{fin}(X)\) for which there exists a linear operator \(U:B(X)\rightarrow B(X)\) such that

$$\begin{aligned} \langle Uf,\mu \rangle =\langle f,P\mu \rangle \quad \text {for}\, f\in B(X),\,\mu \in M_{fin}(X) \end{aligned}$$

is called a regular operator. We say that a regular Markov operator P is Feller if \(U(C(X))\subset C(X)\). Every Markov operator P may be extended to the space of signed measures on X denoted by \(M_{sig}(X)=\{\mu _1-\mu _2:\; \mu _1,\mu _2\in M_{fin}(X)\}\).
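On a finite state space the duality \(\langle Uf,\mu \rangle =\langle f,P\mu \rangle \) can be checked directly: a regular Markov operator is then a stochastic matrix acting on measures (row vectors), and its dual acts on functions (column vectors). The following sketch is purely illustrative; the three-point space and the matrix entries are toy choices of ours, not part of the formal development.

```python
import numpy as np

# Toy transition function on a 3-point space: Pi[x, y] = Pi(x, {y}).
Pi = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.6, 0.3],
               [0.4, 0.4, 0.2]])

mu = np.array([0.2, 0.5, 0.3])   # a probability measure (row vector)
f  = np.array([1.0, -2.0, 0.5])  # a bounded function (column vector)

P_mu = mu @ Pi   # Markov operator on measures: (P mu)({y}) = sum_x mu({x}) Pi(x, {y})
U_f  = Pi @ f    # dual operator on functions: (U f)(x) = sum_y Pi(x, {y}) f(y)

# the duality <U f, mu> = <f, P mu>
assert np.isclose(mu @ U_f, P_mu @ f)
```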

By \(\{\Pi (x, \cdot )\, : \, x\in X\}\) we denote a transition probability function for P, i.e. a family of measures \(\Pi (x, \cdot ) \in \mathcal {M}_1(X)\) for \(x\in X\), such that the map \(x\mapsto \Pi ( x, A)\) is measurable for every \(A\in \mathcal {B}_X\) and

$$\begin{aligned} P\mu (A) = \int _X\Pi (x, A)\mu (dx) \quad \text {for} \quad A \in B_X\quad \text {and}\quad \mu \in M_{fin}(X), \end{aligned}$$

or equivalently

$$\begin{aligned} Uf(x)=\int _X f(y) \Pi (x, dy)\qquad \text {for}\qquad x\in X\quad \text {and}\quad f\in B(X). \end{aligned}$$

Distributions \(\Pi ^n(x,\cdot )\), \(n\in \mathbb {N}\), are defined by induction on n:

$$\begin{aligned}&\Pi ^0(x,A)=\delta _{x}(A), \; \Pi ^1(x,A)=\Pi (x, A) = P \delta _{x}(A), \nonumber \\&\Pi ^n(x,A)=\int _X \Pi ^1(y,A)\Pi ^{n-1}(x,dy), \end{aligned}$$
(2.1)

for \(x\in X\), \(A\in B_X\).

A coupling for \(\{\Pi ^1(x, \cdot )\, : \, x\in X\}\) is a family \(\{C^1((x,y),\cdot ): x, y\in X\}\) of probability measures on \(X^2\) such that

$$\begin{aligned} C^1((x,y),A\times X)=\Pi ^1(x,A),\quad C^1((x,y),X\times B)=\Pi ^1(y,B) \end{aligned}$$

for \(A, B\in B_X\) and \(x, y \in X\).

In the following we assume that there exists a subcoupling for \(\{\Pi ^1(x, \cdot )\, : \, x\in X\}\), i.e. a family \(\{Q^1((x,y),\cdot )\, : \, x, y \in X\}\) of sub-probability measures on \(X^2\) such that the mapping \((x,y)\mapsto Q^1((x,y),A\times B)\) is measurable for every \(A,B\in B_{X}\) and

$$\begin{aligned} Q^1((x, y),A\times X)\le \Pi ^1(x,A),\quad Q^1((x, y),X\times B)\le \Pi ^1(y,B) \end{aligned}$$

for \(A, B \in B_X\). Measures \(\{Q^1((x,y),\cdot )\, : \, x, y \in X\}\) allow us to construct a coupling for \(\{\Pi ^1(x, \cdot )\, : \, x\in X\}\). Define \(\{R^1((x,y),\cdot )\, : \, x, y \in X\}\) by

$$\begin{aligned}&R^{1}((x,y),A\times B)\\&= \frac{(\Pi ^1(x,A)-Q^1((x,y),A\times X))(\Pi ^1(y,B)-Q^1((x,y),X\times B))}{1-Q^1((x,y),X^2)} \end{aligned}$$

if \(Q^1((x,y),X^2)<1\) and \(R^1((x,y),A\times B)=0\) if \(Q^1((x,y),X^2)=1\) for \(A,B\in B_{X}\).

A simple computation shows that the family \(\{C^1((x,y),\cdot ): x, y\in X\}\) of probability measures on \(X\times X\) defined by

$$\begin{aligned} C^1((x,y),\cdot )=Q^1((x,y),\cdot )+R^1((x,y),\cdot )\quad \text { for } x,y \in X \end{aligned}$$

is a coupling for \(\{\Pi ^1(x, \cdot )\, : \, x\in X\}\).
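The construction \(C^1=Q^1+R^1\) is easy to carry out numerically. The sketch below does so on a two-point space with hand-picked, hypothetical transition probabilities, taking as subcoupling the diagonal measure weighted by the pointwise minimum, and then checks the marginal conditions; it is an illustration only.

```python
import numpy as np

p = np.array([0.7, 0.3])   # Pi^1(x, .)  (hypothetical)
q = np.array([0.4, 0.6])   # Pi^1(y, .)  (hypothetical)

# Subcoupling supported on the diagonal: Q({(s, s)}) = min(p(s), q(s)).
Q = np.diag(np.minimum(p, q))

mass  = Q.sum()                      # Q((x, y), X^2) < 1 here
res_p = p - Q.sum(axis=1)            # Pi^1(x, .) - Q(. x X)
res_q = q - Q.sum(axis=0)            # Pi^1(y, .) - Q(X x .)
R = np.outer(res_p, res_q) / (1.0 - mass)   # the correction term R^1

C = Q + R                            # the coupling C^1 = Q^1 + R^1
assert np.allclose(C.sum(axis=1), p) # first marginal is Pi^1(x, .)
assert np.allclose(C.sum(axis=0), q) # second marginal is Pi^1(y, .)
assert np.isclose(C.sum(), 1.0)
```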

For fixed \(\bar{x}\in X\) we consider the space \(M_1^1(X)\) of all probability measures with the first moment finite, i.e.,

$$\begin{aligned} M_1^1(X)=\left\{ \mu \in M_1(X):\,\int _X \varrho _X(x,\bar{x})\mu (dx)<\infty \right\} \end{aligned}$$

and the space \(M_1^2(X)\) of all probability measures with finite second moment, i.e.,

$$\begin{aligned} M_1^2(X) = \left\{ \mu \in M_1(X):\, \int _X \varrho _X^2 (x, \overline{x}) \mu (dx) < \infty \right\} . \end{aligned}$$

Both families are independent of the choice of \(\bar{x}\in X\).

Fix probability measures \(\mu ,\nu \in M_1^1(X)\). We consider \(b\in M_1(X^2)\) such that, for all Borel sets \(A,B\in B_X\),

$$\begin{aligned} b(A\times X)=\mu (A)\text {,}\qquad b(X\times B)=\nu (B) \end{aligned}$$
(2.2)

and \(b^n\in M_1(X^2)\) such that, for every \(n\in \mathbb {N}\),

$$\begin{aligned} b^n(A\times X)=P^n\mu (A)\text {,}\qquad b^n(X\times B)=P^n\nu (B)\text {,} \end{aligned}$$
(2.3)

where \(P : M_1(X)\rightarrow M_1(X)\) is a given Markov operator.

For measures \(b\in M_{fin}^1(X^2)\), i.e. finite measures on \(X^2\) with finite first moment, we define the linear functional

$$\begin{aligned} \phi (b)=\int _{X^2}\varrho _X(x,y)b(dx\times dy). \end{aligned}$$
(2.4)

A continuous function \(V: X\rightarrow [0, \infty )\) which is bounded on bounded sets and satisfies \(\lim _{x\rightarrow \infty }V(x)=+\infty \) is called a Lyapunov function.

We call \(\mu _*\in M_{fin}(X)\) an invariant measure of P if \(P\mu _*=\mu _*\). An invariant measure \(\mu _*\) is attractive if

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\langle f, P^n\mu \rangle = \langle f, \mu _*\rangle \quad \text {for}\quad f\in C(X),\, \mu \in \mathcal {M}_1(X). \end{aligned}$$

For \(\mu \in M_{fin}(X)\), we define the support of \(\mu \) by

$$\begin{aligned} \mathrm{supp}\,\mu =\{x\in X:\; \mu (B(x,r))> 0 \quad \text {for all }\,r>0\}, \end{aligned}$$

where \(B(x,r)\) is the open ball in X with center at \(x\in X\) and radius \(r>0\).

In \(M_{sig}(X)\), we introduce the Fortet-Mourier norm

$$\begin{aligned} \Vert \mu \Vert _{\mathcal {FM}}=\sup _{f\in \mathcal {F}}|\langle f,\mu \rangle |, \end{aligned}$$

where \(\mathcal {F}=\{f\in C(X):\;|f(x)-f(y)|\le \varrho _X(x,y),\;|f(x)|\le 1\;\text { for }\,x,y\in X\}\). The space \(M_1(X)\) with the metric \(\Vert \mu _1-\mu _2\Vert _{\mathcal {FM}}\) is complete (see [9, 33] or [39]). It is known (see Theorem 11.3.3 in [7]) that the following conditions are equivalent:

  1. (i)

    \(\lim _{n\rightarrow \infty }\langle f,\mu _n\rangle =\langle f,\mu \rangle \) for all \(f\in \mathcal {F}\),

  2. (ii)

    \(\lim _{n\rightarrow \infty }\Vert \mu _n-\mu \Vert _{\mathcal {FM}}=0\),

where \((\mu _n)_{n\in \mathbb {N}}\subset M_1(X)\) and \(\mu \in M_1(X)\).
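On a finite metric space the Fortet–Mourier norm is the value of a small linear program: one maximizes \(\langle f,\mu -\nu \rangle \) over the polytope of functions f satisfying the Lipschitz and boundedness constraints defining \(\mathcal {F}\) (by the symmetry \(f\mapsto -f\) this already equals the supremum of \(|\langle f,\mu -\nu \rangle |\)). A minimal sketch, assuming SciPy is available; the three-point space and the two measures are toy choices of ours.

```python
import numpy as np
from scipy.optimize import linprog

# Toy metric space {0, 1, 3} on the real line and two probability measures.
pts = np.array([0.0, 1.0, 3.0])
mu  = np.array([0.5, 0.5, 0.0])
nu  = np.array([0.2, 0.3, 0.5])

n = len(pts)
# Constraints f(x) - f(y) <= rho(x, y) for every ordered pair (x, y).
A, b = [], []
for x in range(n):
    for y in range(n):
        if x != y:
            row = np.zeros(n); row[x], row[y] = 1.0, -1.0
            A.append(row); b.append(abs(pts[x] - pts[y]))

# Maximize <f, mu - nu>, i.e. minimize <-(mu - nu), f>, with |f| <= 1.
res = linprog(c=-(mu - nu), A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(-1.0, 1.0)] * n)
print("||mu - nu||_FM ~", -res.fun)
```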

3 Random Dynamical Systems

Let \((Y, \varrho )\) be a Polish space, \(\mathbb {R}_+=[0,+\infty )\) and \(I = \{1, \dots ,N\}\), \( \Theta = \{1, \ldots , K\}\), where N and K are given positive integers.

We are given a family of continuous functions \(q_{\theta } : Y \rightarrow Y\), \({\theta } \in \Theta \), and a finite sequence of semidynamical systems \(T_{i}:\mathbb {R}_+\times Y\rightarrow Y\), \(i \in I\), i.e.

$$\begin{aligned} T_i(s+t,x)=T_i(s,T_i(t,x)), \quad T_i(0, x) = x \quad \text {for }\quad s,t \in \mathbb {R}_+, \,\,i\in I \,\,\,\text {and}\,\,\, x\in Y, \end{aligned}$$

where the transformations \(T_{i}:\mathbb {R}_+\times Y\rightarrow Y\), \(i \in I\), are continuous.

Let \(p_i :Y \rightarrow [0,1]\), \(i \in I\), and \(\tilde{p}_{\theta } :Y \rightarrow [0,1]\), \({\theta } \in \Theta \), be probability vectors, i.e. \(\sum _{i=1}^N p_i(x) = 1\) and \(\sum _{\theta =1} ^K\tilde{p}_{\theta }(x) = 1\) for \(x\in Y\), and let \([p_{ij}]_{i, j \in I}\), \( p_{ij}:Y\rightarrow [0, 1]\), \(i,j \in I\), be a matrix of probabilities, i.e. \(\sum _{j=1}^N p_{ij}(x) = 1\) for \(x\in Y\), \(i \in I\). In the sequel we denote the system by \((T, q, p)\).

Finally, let \((\Omega , \Sigma , \mathbb {P} )\) be a probability space and \(\{\tau _n\}_{n\ge 0}\) be an increasing sequence of random variables \(\tau _n :\Omega \rightarrow \mathbb {R}_+\) with \(\tau _0 =0\) and such that the increments \(\Delta \tau _n=\tau _n-\tau _{n-1}\), \(n \in \mathbb {N} \), are independent and have the same density \(g(t)=\lambda e^{-\lambda t}\), \( t \ge 0 \).

The intuitive description of the random dynamical system corresponding to \((T, q, p)\) is the following.

For an initial point \(x_0 \in Y \) we randomly select a transformation \(T_{i_0}\) from the set \(\{T_1 , \ldots , T_N \}\) in such a way that the probability of choosing \(T_{i_0}\) is equal to \(p_{i_0}(x_0)\), and we define

$$\begin{aligned} X(t) = T_{i_0}(t, x_0) \quad \text {for}\quad 0\le t < \tau _1. \end{aligned}$$

Next, at the random moment \(\tau _1\), at the point \(T_{i_0}(\tau _1, x_0)\) we choose a jump \(q_{\theta }\) from the set \(\{q_1, \ldots ,q_K\}\) with probability \(\tilde{p}_{\theta }(T_{i_0}(\tau _1, x_0 ))\) and we define

$$\begin{aligned} x_1 = q_{\theta } (T_{i_0} (\tau _1, x_0)). \end{aligned}$$

Finally, given \(x_n\), \(n\ge 1 \), we choose \( T_{i_n} \) in such a way that the probability of choosing \( T_{i_n} \) is equal to \(p_{i_{n-1}i_n}(x_n)\) and we define

$$\begin{aligned} X(t) = T_{i_n} (t - \tau _n, x_n )\quad \text {for}\quad \tau _n\le t <\tau _{n+1}. \end{aligned}$$

At the point \( T_{i_n}(\Delta \tau _{n+1}, x_n ) \) we choose \(q_{{\theta }_n}\) with probability \(\tilde{p}_{{\theta }_n}(T_{i_n}(\Delta \tau _{n+1}, x_n))\). Then we define

$$\begin{aligned} x_{n+1} = q_{{\theta }_n}(T_{i_n} (\Delta \tau _{n+1}, x_n )). \end{aligned}$$

The above considerations may be reformulated as follows. Let \(\{\xi _n\}_{n \ge 0}\) and \(\{\gamma _n\}_{n \ge 1}\) be sequences of random variables, \(\xi _n :\Omega \rightarrow I\) and \( \gamma _n :\Omega \rightarrow \Theta \), such that

$$\begin{aligned}&\mathbb {P} (\xi _0 = i | x_0 = x ) = p_i (x),\nonumber \\&\mathbb {P} (\xi _n = k | x_{n} = x \quad \text {and} \quad \xi _{n-1} = i ) = p_{ik}(x),\quad \text {for}\quad n\ge 1 \nonumber \\&\mathbb {P} (\gamma _n = {\theta } | T_{\xi _{n-1}} (\Delta \tau _n , x_{n-1}) = y ) = \tilde{p}_{\theta } (y). \end{aligned}$$
(3.1)

Assume that \(\{\xi _n\}_{n \ge 0}\) and \(\{\gamma _n\}_{n \ge 1} \) are independent of \(\{\tau _n\}_{n \ge 0}\).

Given an initial random variable \(\xi _0\), the sequence of random variables \(\{x_n\}_{n\ge 0}\), \(x_n : \Omega \rightarrow Y \), is given by

$$\begin{aligned} x_n =q_{\gamma _n} \big (T_{\xi _{n-1}}(\Delta \tau _n, x_{n-1})\big ) \quad \text {for}\quad n=1,2, \dots \end{aligned}$$
(3.2)

and the stochastic process \(\{X(t)\}_{t \ge 0}\), \(X(t) : \Omega \rightarrow Y\), is given by

$$\begin{aligned} X(t) = T_{\xi _{n-1}}(t - \tau _{n-1}, x_{n-1}) \quad \text {for} \quad \tau _{n-1} \le t < \tau _n,\quad n = 1,2, \ldots \end{aligned}$$
(3.3)

We obtain a piecewise deterministic trajectory for \(\{X(t)\}_{t \ge 0}\) with jump times \(\{\tau _1, \tau _2, \ldots \}\) and post jump locations \(\{x_1, x_2, \ldots \}\).
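The construction (3.2)–(3.3) is, in effect, a simulation algorithm: flow for an exponential time, apply a randomly chosen jump, re-select the flow. The sketch below implements it for a hypothetical system with \(N=K=2\) on \(Y=\mathbb {R}\); all concrete flows, jump maps and probabilities are illustrative choices of ours, not taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                                         # jump intensity lambda

# Hypothetical ingredients of a system (T, q, p) on Y = R with N = K = 2.
T = [lambda t, x: x * np.exp(-t),                 # T_1: contracting flow
     lambda t, x: x * np.exp(0.2 * t)]            # T_2: mildly expanding flow
q = [lambda x: 0.5 * x, lambda x: 0.5 * x + 1.0]  # jump maps q_1, q_2
p_mat = lambda x: np.array([[0.7, 0.3],           # p_{ij}(x): constant here,
                            [0.4, 0.6]])          # place-dependent in general
p_tilde = lambda y: np.array([0.5, 0.5])          # jump probabilities

def simulate(x0, i0, n_jumps):
    """Return post-jump positions x_1, ..., x_n of the chain (3.2)."""
    x, i, xs = x0, i0, []                         # i0 plays the role of xi_0
    for _ in range(n_jumps):
        dt = rng.exponential(1.0 / lam)           # Delta tau_n ~ Exp(lambda)
        y = T[i](dt, x)                           # flow until the jump time
        theta = rng.choice(2, p=p_tilde(y))       # choose a jump map q_theta
        x = q[theta](y)                           # post-jump position (3.2)
        i = rng.choice(2, p=p_mat(x)[i])          # next flow index via p_{ij}(x)
        xs.append(x)
    return np.array(xs)

print(simulate(1.0, 0, 5))
```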

Now define a stochastic process \(\{\xi (t) \}_{t \ge 0}\), \(\xi (t): \Omega \rightarrow I \), by

$$\begin{aligned} \xi (t) = \xi _{n-1} \quad \text {for} \quad \tau _{n-1} \le t < \tau _{n},\quad n=1, 2, \ldots \end{aligned}$$
(3.4)

It is easy to see that \(\{X(t)\}_{t \ge 0}\) and \(\{x_n\}_{n \ge 0}\) are not Markov processes. In order to use the theory of Markov operators we must redefine the processes \(\{X(t)\}_{t \ge 0}\) and \(\{x_n\}_{n \ge 0}\) in such a way that the redefined processes become Markov.

To this end, consider the space \( X = Y\times I \) endowed with the metric \(\varrho _X\) given by

$$\begin{aligned} \varrho _X \big ((x, i), (y, j)\big )=\varrho (x,y) + \varrho _c(i, j)\quad \text {for}\quad x, y\in Y, \,\,i, j\in I, \end{aligned}$$
(3.5)

where \(\varrho _c\) is the discrete metric in I scaled by a constant \(c>0\), which will be chosen later (see the proof of Theorem 5).

We will study the Markov chain \(\{(x_n, \xi _n) \}_{n\ge 0}\) , \((x_n, \xi _n) : \Omega \rightarrow X \) and the Markov process \( \{(X(t), \xi (t))\}_{t \ge 0}\), \((X(t), \xi (t)) : \Omega \rightarrow X \).

Now consider the sequence of distributions

$$\begin{aligned} \overline{\mu }_n(A) = \mathbb {P} \big ((x_n, \xi _n) \in A \big ) \quad \text {for} \quad A \in {B}_X, \, n \ge 0. \end{aligned}$$

It is easy to see that

$$\begin{aligned} \overline{\mu }_{n+1} = P \overline{\mu }_n \quad \text {for } \quad n \ge 0, \end{aligned}$$

where \(P : \mathcal {M}_1(X)\rightarrow \mathcal {M}_1(X)\) is the Markov operator given by

$$\begin{aligned} P\mu (A) = \sum _{j\in I} \sum _{{\theta } \in \Theta } \int _{X} \int _0^{+\infty } \lambda e^{-\lambda t} 1_A\big (q_{\theta }\big ( T_j(t,x)\big ),j \big ) p_{ij}(x)\tilde{p}_{\theta }\big (T_j (t, x)\big ) \, dt\, \mu (dx, di) \end{aligned}$$
(3.6)

and its dual operator \(U:B(X)\rightarrow B(X)\) by

$$\begin{aligned} Uf(x, i) = \sum _{j\in I} \sum _{{\theta } \in \Theta } \int _0^{+\infty } \lambda e^{-\lambda t} f\big (q_{\theta }\big ( T_j(t, x )\big ),j\big )p_{ij}(x)\tilde{p}_{\theta }\big (T_j (t, x)\big ) \,dt. \end{aligned}$$
(3.7)
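The dual operator (3.7) can be evaluated by Monte Carlo: sample \(t\sim \mathrm{Exp}(\lambda )\), a flow index j from \(p_{i\,\cdot }(x)\) and a jump index \(\theta \) from \(\tilde{p}_{\cdot }(T_j(t,x))\), and average \(f(q_{\theta }(T_j(t,x)),j)\). A sketch for the same hypothetical toy system as above (all ingredients are assumptions of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0
T = [lambda t, x: x * np.exp(-t), lambda t, x: x * np.exp(0.2 * t)]
q = [lambda x: 0.5 * x, lambda x: 0.5 * x + 1.0]

def U_estimate(f, x, i, n_samples=20_000):
    """Monte Carlo estimate of Uf(x, i) in (3.7)."""
    total = 0.0
    for _ in range(n_samples):
        t = rng.exponential(1.0 / lam)                              # t ~ Exp(lambda)
        j = rng.choice(2, p=[0.7, 0.3] if i == 0 else [0.4, 0.6])   # j ~ p_{i.}(x)
        y = T[j](t, x)
        theta = rng.choice(2, p=[0.5, 0.5])                         # theta ~ p_tilde(y)
        total += f(q[theta](y), j)
    return total / n_samples

print(U_estimate(lambda x, j: x, 1.0, 0))   # ~ E[x_1 | x_0 = 1, xi_0 = 0]
```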

The semigroup \(\{P^t\}_{t \ge 0}\) generated by the process \( \{(X(t), \xi (t))\}_{t \ge 0}\), \((X(t), \xi (t)) : \Omega \rightarrow X \) is given by

$$\begin{aligned} \langle P^t \mu , f \rangle = \langle \mu , T^t f \rangle \quad \text {for } f \in C({X}) ,\; \mu \in {\mathcal {M}}_1(X) \; \text { and } t \ge 0, \end{aligned}$$
(3.8)

where

$$\begin{aligned} T^t f(x, i) = E_{(x, i)}(f(X(t), {\xi }(t))) \quad \text {for } f \in C({X}). \end{aligned}$$
(3.9)

(Here E denotes expectation with respect to \((\Omega , \Sigma , \mathbb {P})\).)

A measure \(\mu _0\) is called invariant with respect to \(P^t\) if \(P^t\mu _0 = \mu _0\) for every \(t \ge 0\).

We make the following assumptions on the system \((T, q, p)\).

There are three constants \(L\ge 1\), \(\alpha \in \mathbb {R} \) and \(L_q > 0\) such that

$$\begin{aligned} \sum _{j\in I} p_{ij}(y)\varrho (T_j(t,x) ,T_j(t,y)) \le Le^{ \alpha t}\varrho (x,y) \quad \text {for}\quad x,y \in Y, \,\, i \in I, \,\, t \ge 0 \end{aligned}$$
(3.10)

and

$$\begin{aligned} \sum _{{\theta } \in \Theta } \tilde{p}_{\theta }(x)\varrho (q_{\theta }(x),q_{\theta }(y)) \le L_q \varrho (x,y) \quad \text {for} \quad x,y \in Y. \end{aligned}$$
(3.11)

Assume that there exists \(x_* \in Y\) such that

$$\begin{aligned} \int _{\mathbb {R}_+}e^{-\lambda t} \varrho (q_{\theta } (T_j(t,x_*)) , q_{\theta }(x_*))\ dt < \infty \quad \text {for} \quad j \in I, \quad {\theta } \in \Theta . \end{aligned}$$
(3.12)

We also assume that the functions \(\tilde{p}_{\theta }\), \({\theta } \in \Theta \), and \(p_{ij}\), \(i,j \in I\), satisfy the following conditions:

$$\begin{aligned} \sum _{j\in I} |p_{ij}(x) - p_{ij}(y)|&\le \overline{\gamma }_1\varrho (x,y) \quad \text {for}\quad x,y \in Y, \,\, i \in I,\nonumber \\ \sum _{{\theta }\in \Theta } |\tilde{p}_{{\theta }}(x) - \tilde{p}_{{\theta }}(y)|&\le \overline{\gamma }_2\varrho (x,y) \quad \text {for}\quad x,y \in Y, \end{aligned}$$
(3.13)

where \(\overline{\gamma }_1, \overline{\gamma }_2 > 0\).

Moreover, we assume that there are \(i_0 \in I, {\theta }_0 \in \Theta \) such that

$$\begin{aligned}&\varrho (T_{i_0}(t,x) , T_{i_0}(t,y)) \le Le^{ \alpha t}\varrho (x,y) \quad \text {for}\quad x,y \in Y, \,\, t \ge 0,\nonumber \\&\varrho (q_{{\theta }_0}(x),q_{{\theta }_0}(y)) \le L_q \varrho (x,y) \quad \text {for} \quad x,y \in Y, \end{aligned}$$
(3.14)

and

$$\begin{aligned} \delta _1 = \inf _{i\in I} \inf _{x \in Y} p_{i i_0}(x)> 0,\quad \delta _2 = \inf _{x \in Y} \tilde{p}_{{\theta }_0}(x) > 0. \end{aligned}$$
(3.15)

Let \(\{(x_n, \xi _n)\}_{n\in \mathbb {N}}\) be the Markov chain given by (3.1) and (3.2). The existence of an exponentially attractive invariant measure for the Markov operator generated by random dynamical systems was proved by Horbacz and Ślȩczka in [19].

Theorem 1

[19] Assume that the system \((T, q, p)\) satisfies conditions (3.10)–(3.15). If

$$\begin{aligned} LL_q + \frac{\alpha }{\lambda } < 1, \end{aligned}$$
(3.16)

then

  1. (i)

    there exists a unique invariant measure \(\mu _*\in \mathcal {M}_1^1(X)\) for the chain \(\{(x_n,\xi _n)\}_{n\ge 0}\), which is attractive in \(\mathcal {M}_1(X)\).

  2. (ii)

there exists \(q\in (0,1)\) such that for every \(\mu \in \mathcal {M}_1^1(X)\) there exists a constant \(C = C(\mu )>0\) such that

    $$\begin{aligned} ||P^{n}\mu -\mu _* ||_{FM}\le q^n C(\mu ),\quad \text {for}\quad n\in \mathbb {N}, \end{aligned}$$

    where \(x_*\) is given by (3.12),

  3. (iii)

    the strong law of large numbers holds for the chain \(\{(x_n,\xi _n)\}_{n\ge 0}\) starting from \((x_0,\xi _0 )\in X\), i.e. for every bounded Lipschitz function \(f:X\rightarrow \mathbb {R}\) and every \(x_0\in Y\) and \(\xi _0\in I\) we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{k=0}^{n-1} f(x_k,\xi _k)=\int _{X} f(x,\xi )\, \mu _*(dx,d\xi ) \end{aligned}$$

\(\mathbb {P}_{x_0,\xi _0}\) almost surely.

Remark 1

Condition (3.16) means that a large jump rate \(\lambda \) and good contraction of the jumps can compensate for expanding semiflows (\(\alpha > 0\)).
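For instance (with illustrative numbers of our own), take \(L=1\), \(L_q=\tfrac{1}{2}\), \(\alpha =\tfrac{1}{2}\) and \(\lambda =2\); then

$$\begin{aligned} LL_q + \frac{\alpha }{\lambda } = \frac{1}{2}+\frac{1}{4} = \frac{3}{4} < 1, \end{aligned}$$

so the expansion of the semiflows at exponential rate \(\alpha =\tfrac{1}{2}\) is compensated by the contraction \(L_q=\tfrac{1}{2}\) of the jumps occurring at rate \(\lambda =2\).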

Let \(\{(x_n, \xi _n)\}_{n\in \mathbb {N}}\) be the Markov chain given by (3.1), (3.2) with initial distribution \(\mu \in M_1^2(X)\) and let \(\mu _*\in \mathcal {M}_1^1(X)\) be the unique invariant measure for the process \(\{(x_n,\xi _n)\}_{n\ge 0}\). Now, choose an arbitrary function \(g:X\rightarrow \mathbb {R} \) which is Lipschitz and satisfies \(\langle g,\mu _*\rangle =0\). For every \(n\in \mathbb {N}\), put

$$\begin{aligned} S_n^{\mu }:=\frac{g(x_1, \xi _1)+\cdots +g(x_n, \xi _n)}{\sqrt{n}}. \end{aligned}$$

Now we formulate the main result of this paper. Its proof is given in Sect. 6.

Theorem 2

Assume that all assumptions of Theorem 1 are fulfilled and that the unique invariant measure has finite second moment. Then \(S_n^{\mu }\) converges in distribution to a random variable with normal distribution \(N(0, \sigma ^2)\) as \(n\rightarrow \infty \), where \(\sigma ^2 = \lim _{n \rightarrow \infty }E_{\mu _*}(S_n^{\mu _*})^2\).
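As an empirical illustration of Theorem 2 (not a substitute for the proof), one can simulate many independent copies of the hypothetical toy chain used above, center the observable empirically, and inspect the variance of \(S_n^{\mu }\); for large n it should stabilize near \(\sigma ^2\). All ingredients are again assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 2.0, 400, 2000
T = [lambda t, x: x * np.exp(-t), lambda t, x: x * np.exp(0.2 * t)]
q = [lambda x: 0.5 * x, lambda x: 0.5 * x + 1.0]

def path_sum():
    """Sum g(x_1) + ... + g(x_n) for g = identity (centered afterwards)."""
    x, i, s = 1.0, 0, 0.0
    for _ in range(n):
        y = T[i](rng.exponential(1.0 / lam), x)
        x = q[rng.integers(2)](y)                       # p_tilde = (1/2, 1/2)
        i = rng.choice(2, p=[0.7, 0.3] if i == 0 else [0.4, 0.6])
        s += x
    return s

S = np.array([path_sum() for _ in range(reps)])
S = (S - S.mean()) / np.sqrt(n)    # empirical centering of g and CLT scaling
print("empirical Var(S_n) ~ sigma^2:", S.var())
```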

Checking that the invariant measure has finite second moment can be difficult if we have no a priori information about it. We therefore strengthen assumption (3.12) to the following condition:

$$\begin{aligned} \int _{\mathbb {R}_+}e^{-\lambda t} \varrho ^2 (T_j(t,x_*) , x_*)dt < \infty \quad \text {for} \quad j \in I. \end{aligned}$$
(3.17)

Theorem 3

Assume that the system \((T, q, p)\) satisfies condition (3.17) and that, instead of (3.14)–(3.15) holding for some \(i_0 \in I\), \({\theta }_0 \in \Theta \), conditions (3.14)–(3.15) are satisfied for all \(i_0 \in I\), \({\theta }_0 \in \Theta \). If

$$\begin{aligned} \quad (LL_q)^2 + \frac{\alpha }{\lambda } < \frac{1}{2}, \end{aligned}$$
(3.18)

then the invariant measure \(\mu _*\) for the process \(\{(x_n,\xi _n)\}_{n\ge 0}\) has finite second moment.

Note that (3.18) implies (3.16). Assuming (3.18) instead of (3.16) allows us to show that \(\mu _* \in M_1^2(X)\), which is essential for establishing the CLT in the way presented in this paper.
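For completeness, let us record why (3.18) implies (3.16). If \(LL_q \ge 1\), then \(LL_q \le (LL_q)^2\), so \(LL_q + \alpha /\lambda \le (LL_q)^2 + \alpha /\lambda < \tfrac{1}{2}\). If \(LL_q < 1\), then \(\alpha /\lambda < \tfrac{1}{2} - (LL_q)^2\) and hence

$$\begin{aligned} LL_q + \frac{\alpha }{\lambda }< LL_q - (LL_q)^2 + \frac{1}{2} \le \max _{0\le s\le 1}(s-s^2) + \frac{1}{2} = \frac{3}{4} < 1. \end{aligned}$$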

The next result, describing the CLT for the process \(\{x_n\}_{n\ge 0}\) on Y, is an obvious consequence of Theorem 2.

Remark 2

Choose an arbitrary function \(f : Y \rightarrow \mathbb {R} \) which is Lipschitz and satisfies \(\langle f,\tilde{\mu }_*\rangle =0\), where \(\tilde{\mu }_*(A) = \mu _*(A\times I),\,A \in B_Y\). Let \(\tilde{\mu } \in M_1^2(Y)\) be an initial distribution of \(\{x_n\}_{n \in \mathbb {N}}\). Under the hypotheses of Theorem 2 the distribution of

$$\begin{aligned} S_n^{\tilde{\mu }}= \frac{f(x_1) + \cdots + f(x_n)}{\sqrt{n}} \end{aligned}$$

converges weakly to the normal distribution \(N(0, \sigma ^2)\), as \(n\rightarrow \infty \), where \(\sigma ^2 = \lim _{n \rightarrow \infty }E_{\tilde{\mu }_*}(S_n^{\tilde{\mu }_*})^2\).

Let \(\{(X(t), \xi (t))\}_{t \ge 0}\) be the Markov process given by (3.3) and (3.4). The relationship between invariant measures for the Markov operator P given by (3.6) and invariant measures for \( \{ P^t \}_{t \ge 0} \) given by (3.8) was established by Horbacz [17]. Similar results were proved by Davis [5, Proposition 34.36]. The problem has also been studied in [28].

The existence of an invariant measure for \(\{P^t\}_{t \ge 0}\) follows from Theorem 1 and Theorem 5.3.1 in [17]. If \(\mu _* \in {M}_1(X) \) is an invariant measure for the Markov operator P, then \( {\mu }_0 = G\mu _*\), where

$$\begin{aligned} G\mu (A) = \sum _{i\in I} \int \limits _{X} \int \limits _0^{+\infty } 1_A(T_i(t, x), i) p_{ki}(x) \lambda e^{-\lambda t} dt \mu (dx, dk),\,\, A \in {B}_X,\, \mu \in {M}_1(X), \end{aligned}$$

is an invariant measure for the Markov semigroup \(\{P^t\}_{t \ge 0}\).

The next theorem is partially inspired by the reasoning which can be found in Lemma 2.5 of [2]. Since the Markov process \(\{(X(t), \xi (t))\}_{t \ge 0}\) is defined with the help of the Markov chain \(\{(x_n, \xi _n)\}_{n\in \mathbb {N}}\) given by (3.1), (3.2), we use Theorems 1 and 2 in the proof of the following theorem.

Theorem 4

Assume that all assumptions of Theorems 1 and 2 are fulfilled and that the unique invariant measure \(\mu _0\) has finite second moment. Then

  1. (1)

    the strong law of large numbers holds for the process \(\{(X(t),\xi (t))\}_{t\ge 0}\) starting from \((x_0, i_0 )\in X\), i.e. for every bounded Lipschitz function \(f:X\rightarrow \mathbb {R}\) and every \(x_0\in Y\) and \(i_0\in I\) we have

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{1}{t}\int _0^t f(X(s),\xi (s)) ds=\int _{X} f(x,i)\, \mu _0(dx,di) \end{aligned}$$

    \(\mathbb {P}_{x_0,i_0}\) almost surely,

  2. (2)

    the Central Limit Theorem holds for the process \(\{(X(t),\xi (t))\}_{t\ge 0}\) i.e. for every bounded Lipschitz function \(f:X\rightarrow \mathbb {R}\) such that \(\langle f, \mu _0 \rangle = 0\)

    $$\begin{aligned} \frac{1}{\sqrt{t}}\int _0^t f(X(s),\xi (s)) ds \end{aligned}$$

converges in distribution to a random variable with normal distribution \(N(0, \tilde{\sigma }^2)\), as \(t\rightarrow \infty \), where \(\tilde{\sigma } ^2 = \lim _{n \rightarrow \infty }E_{\mu _*}(S_n^{\mu _*})^2 + \langle Hf - \tilde{K}^2f, \mu _* \rangle \) and

$$\begin{aligned}&Hf(x, i) = \sum _{j=1}^N\int _0^{\infty } \lambda e^{-\lambda s}\Bigg (\int _0^sf(T_j(v, x),j)dv\Bigg )^2p_{ij}(x)ds,\nonumber \\&\tilde{K}f(x, i) = \sum _{j=1}^N\int _0^{\infty } e^{-\lambda s} f(T_j(s, x),j)p_{ij}(x)ds\quad \text {for} \quad f \in B(X). \end{aligned}$$
(3.19)
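Part (1) of Theorem 4 is also easy to probe numerically: integrate a bounded Lipschitz f along a simulated trajectory (3.3) over a long horizon and form the time average. A sketch for the same hypothetical toy system as before, with the segment integrals approximated by averaging over a grid (for simplicity f depends only on the Y-coordinate):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t_max = 2.0, 2000.0
T = [lambda t, x: x * np.exp(-t), lambda t, x: x * np.exp(0.2 * t)]
q = [lambda x: 0.5 * x, lambda x: 0.5 * x + 1.0]
f = lambda x: np.tanh(x)                 # a bounded Lipschitz observable on Y

x, i, t, integral = 1.0, 0, 0.0, 0.0
while t < t_max:
    dt = rng.exponential(1.0 / lam)      # length of the current flow segment
    s = np.linspace(0.0, dt, 50)         # grid on [0, dt] for the quadrature
    integral += f(T[i](s, x)).mean() * dt
    x = q[rng.integers(2)](T[i](dt, x))  # jump at the end of the segment
    i = rng.choice(2, p=[0.7, 0.3] if i == 0 else [0.4, 0.6])
    t += dt

print("time average (1/t) * int_0^t f(X(s)) ds ~", integral / t)
```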

4 Coupling for Random Dynamical Systems

Let \(P:M_{fin}(X) \rightarrow M_{fin}(X)\) be the transition Markov operator for the random dynamical system \((T, q, p)\), where \(X= Y\times I\).

Distributions \(\Pi ^n((x, i),\cdot )\), \(n\in \mathbb {N}\), are given by

$$\begin{aligned}&\Pi ^0((x, i),A)=\delta _{(x, i)}(A), \nonumber \\&\Pi ^1((x, i),A)=\Pi ((x, i),A) =P \delta _{(x, i)}(A) \nonumber \\&= \sum _{j\in I} \sum _{{\theta } \in \Theta } \int _0^{+\infty } \lambda e^{-\lambda t} 1_A\big (q_{\theta }\big ( T_j(t,x)\big ),j \big ) p_{ij}(x)\tilde{p}_{\theta }\big (T_j (t, x)\big ) \, dt,\nonumber \\&\Pi ^n((x, i),A)=\int _X \Pi ^1((y, j),A)\Pi ^{n-1}((x, i),d(y, j)), \end{aligned}$$
(4.1)

for \((x, i) \in X\), \(A\in B_X\). For \((x, i)\in X\), let \(\bar{\Pi }^{n}((x, i),\cdot )\) denote the measure on \(X^n\) generated by the sequence \((\Pi ^k((x, i),\cdot ))_{k\in \mathbb {N}}\); then

$$\begin{aligned} \bar{\Pi }^{n+1}((x, i),A\times B)=\int _A\Pi ^1((z_n,i_n),B)\bar{\Pi }^{n}((x, i),dz), \end{aligned}$$
(4.2)

where \(z=((z_1,i_1),\ldots ,(z_n,i_n))\), \(A\in B_{X^n}\) and \(B\in B_X\); this defines a measure on \(X^{n+1}\). Note that \(\Pi ^1((x, i),\cdot ), \ldots ,\Pi ^n((x, i),\cdot )\), given by (4.1), are the marginal distributions of \(\bar{\Pi }^{n}((x, i),\cdot )\), for every \((x, i)\in X\). Finally, we obtain a family \(\{\Pi ^{\infty }((x, i),\cdot ):(x, i)\in X\}\) of sub-probability measures on \(X^{\infty }\). This construction is motivated by Hairer [12].

Denote by

$$\begin{aligned}&({q}\circ T)_n(\mathbf{t_n},{\varvec{\theta _n}}, \mathbf{i_n}, x) = q_{\theta _n}\big (T_{i_n} \big (t_{n},q_{\theta _{n-1}}\big ( T_{i_{n-1}}(t_{n-1}, \ldots , q_{\theta _1}(T_{i_1}(t_1,x))\ldots )\big )\big )\big ) \end{aligned}$$
(4.3)

and consider the probabilities \( \mathcal {P}_n : Y \times I^{n+1}\times \mathbb {R}_+^{n-1} \times \Theta ^{n-1} \rightarrow [0, 1]\) and \(\overline{\mathcal {P}}_n : Y \times I^n \times \mathbb {R}^n_+ \times \Theta ^n \rightarrow [0, 1]\) given by

$$\begin{aligned}&\mathcal {P}_1 (x, i, i_1) = p_{ii_1}(x),\\&\mathcal {P}_n (x, i, \mathbf{i_n}, \mathbf{t_{n-1}}, {\varvec{\theta _{n-1}}} )\\&= p_{ii_1}(x) p_{i_1i_2}\big ( q_{\theta _1} (T_{i_1}(t_1, x))\big ) \cdot \ldots \cdot p_{i_{n-1}i_n} \big (({q}\circ T)_{n-1}(\mathbf{t_{n-1}},{\varvec{\theta _{n-1}}},\mathbf{i_{n-1}}, x)\big ), \end{aligned}$$

for \(n \ge 2\), and

$$\begin{aligned}&\overline{\mathcal {P}}_1 (x, i_1, t_1, \theta _1) = \tilde{p}_{\theta _1}(T_{i_1}(t_1, x)),\\&\overline{\mathcal {P}}_n (x, \mathbf {i_n}, \mathbf {t_n}, {\varvec{\theta _n}}) = \tilde{p}_{\theta _1}\big (T_{i_1}(t_1, x )\big ) \tilde{p}_{\theta _2} \big (T_{i_2}(t_2, q_{\theta _1} (T_{i_1}(t_1, x)))\big ) \cdot \ldots \cdot \\&\cdot \tilde{p}_{\theta _n}\big ( T_{i_n}(t_n, ({q}\circ T)_{n-1}(\mathbf{t_{n-1}},{\varvec{\theta _{n-1}}}, \mathbf{i_{n-1}}, x)) \big ), \end{aligned}$$

for \(n \ge 2\), where

$$\begin{aligned} \mathbf{t_n}= (t_n,t_{n-1}, \ldots , t_1), \quad {\varvec{\theta _n}} =(\theta _n, \theta _{n-1}, \ldots , \theta _1),\quad \mathbf{i_n} = (i_n, i_{n-1}, \ldots ,i_1). \end{aligned}$$

Then \(P^n\) is given by

$$\begin{aligned}&P^{n} \mu (A) = \sum _{{\mathbf {j_n}}=(j_n, \ldots ,j_1)\in I^n} \int _{X} \int _{\mathbb {R}_+^n} \sum _{ {\varvec{\theta _n}} =(\theta _n, \ldots ,\theta _1 ) \in \Theta ^n} 1_A\big (({q}\circ T)_n(\mathbf{t_n},{\varvec{\theta _n}}, \mathbf{j_n}, x) ,j_n \big )\\&\cdot \mathcal {P}_n\big (x, i, { \mathbf {j_n}}, \mathbf{t_{n-1}}, {\varvec{\theta _{n-1}}}\big ) \cdot \mathcal {\overline{P}}_n \big (x, { \mathbf {j_n}}, \mathbf{t_n}, {\varvec{\theta _n}}\big ) \lambda ^n e^{-\lambda (t_1 +\ldots + t_n)}\, {d{ \mathbf {t_n}}} \, \mu (dx, di). \end{aligned}$$

Fix \( x_*\in Y\) for which assumption (3.12) holds. We define \(V:X \rightarrow [0,\infty )\) by

$$\begin{aligned} V(x, i)=\varrho (x, x_*)\quad \text {for}\quad (x, i) \in X. \end{aligned}$$

Lemma 1

Assume that the system \((T, q, p)\) satisfies conditions (3.10)–(3.12) and (3.16). If \(\mu \in M^1_1(X)\), then \(P^n\mu \in M^1_1(X)\) for every \(n\in \mathbb {N}\). Moreover, there are constants \(a< 1\) and \(c >0\) such that

$$\begin{aligned} \langle V,P^n\mu \rangle \le a^n\langle V,\mu \rangle +\frac{1}{1-a}c \quad \text {for}\quad n\in \mathbb {N}. \end{aligned}$$

Proof

$$\begin{aligned} UV(x,i)\le&\sum _{j\in I}\sum _{\theta \in \Theta } \int _0^{+\infty } \varrho (q_{\theta }\big (T_j(t,x)\big ) , q_{\theta }\big (T_j(t,x_*)\big )) \lambda e^{-\lambda t} p_{ij}(x)\tilde{p}_{\theta }\big (T_j(t,x)\big )\, dt \\&+\sum _{j\in I}\sum _{{\theta }\in \Theta } \int _0^{+\infty } \varrho (q_{\theta }\big (T_j(t,x_*)\big ) , q_{\theta }(x_*)) \lambda e^{-\lambda t} p_{ij}(x)\tilde{p}_{\theta }\big (T_j(t,x)\big )\, dt \\&+\sum _{j\in I}\sum _{{\theta }\in \Theta } \int _0^{+\infty } \varrho (q_{\theta }(x_*),x_*) \lambda e^{-\lambda t} p_{ij}(x)\tilde{p}_{\theta }\big (T_j(t,x)\big )\, dt . \end{aligned}$$

Further, using (3.10)–(3.12) and (3.16) we obtain

$$\begin{aligned} UV(x,i)\le aV(x,i)+c, \end{aligned}$$
(4.4)

where

$$\begin{aligned}&a=\frac{\lambda L L_q}{\lambda -\alpha } < 1, \nonumber \\&c=\sum _{j\in I}\sum _{{\theta }\in \Theta }\int _0^{+\infty }\lambda e^{-\lambda t}\varrho (q_{\theta }\big (T_j(t,x_*)\big ),q_{\theta }(x_*))\, dt +\sum _{{\theta }\in \Theta }\varrho (q_{\theta }(x_*),x_*), \end{aligned}$$
(4.5)

so V is a Lyapunov function for P. Iterating (4.4) gives \(\langle V,P^n\mu \rangle \le a^n\langle V,\mu \rangle +c(1+a+\cdots +a^{n-1})\le a^n\langle V,\mu \rangle +\frac{1}{1-a}c\), which proves the lemma. \(\square \)

Furthermore, we define \(\bar{V}:X^2\rightarrow [0,\infty )\) by

$$\begin{aligned} \bar{V}((x, i),(y, j))=V(x,i)+V(y, j)\quad \text {for}\quad (x, i),(y, j)\in X. \end{aligned}$$

Note that, for every \(n\in \mathbb {N}\),

$$\begin{aligned} \langle \bar{V},b^n \rangle \,\le \,a\langle \bar{V},b^{n-1}\rangle +2c\,\le \,a^n\langle \bar{V},b\rangle +\frac{2}{1-a}c, \end{aligned}$$
(4.6)

where b and \(b^n\) are given by (2.2) and (2.3). Since the measure \(b\in M_{fin}^1(X^2)\) is finite on \(X^2\) and has finite first moment, we may define the linear functional (cf. (2.4))

$$\begin{aligned} \phi (b)=\int _{X^2}\varrho _X((x, i),(y, j))b(d(x,i)\times d(y,j)). \end{aligned}$$

Following the above definitions, we easily obtain

$$\begin{aligned} \phi (b)\,\le \,\langle \bar{V},b\rangle . \end{aligned}$$
(4.7)

Set \(F=X\times X\) and define

$$\begin{aligned}&{\mathbf Q}^1((x_1,i_1)(x_2,i_2),A\times B)= \nonumber \\&\sum _{j\in I}\sum _{{\theta }\in \Theta }\int _0^{+\infty }\lambda e^{-\lambda t}\{ p_{i_1 j}(x_1) \tilde{p}_{\theta }\big ( T_j(t,x_1)\big )\wedge p_{i_2 j}(x_2)\tilde{p}_{\theta }\big (T_j(t,x_2)\big )\} \nonumber \\&\times 1_{A\times B}\big (\big ( q_{\theta }\big (T_j(t,x_1)\big ),j),(q_{\theta }\big (T_j(t,x_2)\big ),j\big )\big )\, dt \end{aligned}$$
(4.8)

for \(A, B\in B_X\), where \(a\wedge b\) stands for the minimum of a and b, and

$$\begin{aligned}&Q^n((x_1, i_1)(x_2,i_2),A\times B) \nonumber \\&=\int _{X^2}Q^1((u,i)(v, j),A\times B)Q^{n-1}( (x_1, i_1)(x_2,i_2),d(u, i)\times d(v, j)),\quad n\in \mathbb {N}. \end{aligned}$$
(4.9)

It is easy to check that

$$\begin{aligned}&Q^1((x_1, i_1)(x_2,i_2),A\times X)\\&\le \sum _{j\in I}\sum _{{\theta }\in \Theta }\int _0^{+\infty }\lambda e^{-\lambda t}\, p_{i_1 j}(x_1) \tilde{p}_{\theta }\big ( T_j(t,x_1)\big )\, 1_{A}\big (q_{\theta }\big (T_j(t,x_1)\big ),j\big )\, dt\\&=\Pi ^1((x_1, i_1),A) \end{aligned}$$

and analogously \(Q^1((x_1, i_1)(x_2,i_2),X\times B)\le \Pi ^1((x_2, i_2),B)\). Similarly, for \(n\in \mathbb {N}\),

$$\begin{aligned}&Q^n((x_1, i_1)(x_2,i_2),A\times X) \le \Pi ^n((x_1, i_1),A),\\&Q^n((x_1, i_1)(x_2,i_2),X\times B)\le \Pi ^n((x_2, i_2),B). \end{aligned}$$

For \(b\in M_{fin}(X^2)\), let \(Q^nb\) denote the measure

$$\begin{aligned} (Q^nb)(A\times B)=\int _{X^2}Q^n((x, i)(y, j),A\times B)b(d(x, i)\times d(y,j)) \end{aligned}$$
(4.10)

for \(A,B\in B_{X}, n\in \mathbb {N}\). Note that, for every \(A,B\in B_{X}\) and \(n\in \mathbb {N}\), we obtain

$$\begin{aligned}&(Q^{n+1}b)(A\times B) =\int _{X^2}Q^{n+1}((x,i)(y,j),A\times B)\,b(d(x,i)\times d(y,j)) \nonumber \\&\quad =\int _{X^2}\int _{X^2}Q^1((u,l)(v,k),A\times B)\,Q^n((x,i)(y,j),d(u,l)\times d(v,k))\,b(d(x,i)\times d(y,j)) \nonumber \\&\quad =\int _{X^2}Q^1((u,l)(v,k),A\times B)\,(Q^nb)(d(u,l)\times d(v,k)) =(Q^1(Q^nb))(A\times B). \end{aligned}$$
(4.11)

Again, following (4.1) and (4.2), we are able to construct measures on products and, as a consequence, a measure \(Q^{\infty }b\) on \(X^{\infty }\), for every \(b\in M_{fin}(X^2)\). Now, we check that, for \(n\in \mathbb {N}\) and \(b\in M_{fin}(X^2)\),

$$\begin{aligned} \phi (Q^nb)\le a^n\phi (b). \end{aligned}$$
(4.12)

Let us observe that

$$\begin{aligned} \phi (Q^nb)&=\int _{X^2}\int _{X^2}\varrho _X((x,i_1),(y,i_2))\,Q^n((u,l)(v,k),d(x,i_1)\times d(y,i_2))\,b(d(u,l)\times d(v,k))\\&=\int _{X^2}\int _{X^2}\int _{X^2}\varrho _X((x,i_1), (y,i_2))\, {\mathbf Q}^1((u_1,l_1)(v_1,k_1),d(x,i_1)\times d(y,i_2))\\&\cdot Q^{n-1}((u,l)(v,k),d(u_1,l_1)\times d(v_1,k_1))\,b(d(u,l)\times d(v,k))\\&=\int _{X^2}\int _{X^2}\sum _{j\in I}\sum _{{\theta }\in \Theta }\int _0^{+\infty }\lambda e^{-\lambda t}\{ p_{l_1 j}(u_1) \tilde{p}_{\theta }\big ( T_j(t,u_1)\big )\wedge p_{k_1 j}(v_1)\tilde{p}_{\theta }\big (T_j(t,v_1)\big )\}\\&\cdot \varrho _X\big ((q_{\theta }\big (T_j(t,u_1)\big ),j),(q_{\theta }\big (T_j(t,v_1)\big ),j)\big )\, dt\\&\cdot Q^{n-1}((u,l)(v,k),d(u_1,l_1)\times d(v_1,k_1))\,b(d(u,l)\times d(v,k))\\&\le \int _{X^2}\int _{X^2}\sum _{j\in I}\sum _{{\theta }\in \Theta }\int _0^{+\infty }\lambda e^{-\lambda t}\, p_{l_1 j}(u_1) \tilde{p}_{\theta }\big ( T_j(t,u_1)\big )\\&\cdot \varrho _X\big ((q_{\theta }\big (T_j(t,u_1)\big ),j),(q_{\theta }\big (T_j(t,v_1)\big ),j)\big )\, dt\\&\cdot Q^{n-1}((u,l)(v,k),d(u_1,l_1)\times d(v_1,k_1))\,b(d(u,l)\times d(v,k)). \end{aligned}$$

Following (3.10) and (3.11), we obtain

$$\begin{aligned} \phi (Q^nb)&\le \int _{X^2}\int _{X^2}\int _0^{+\infty }\lambda e^{-\lambda t} LL_q e^{\alpha t}\varrho _X((u_1,l_1), (v_1,k_1))\,dt \\&\cdot Q^{n-1}((u,l)(v,k),d(u_1,l_1)\times d(v_1,k_1))\,b(d(u,l)\times d(v,k))\\&= \frac{\lambda LL_q}{\lambda - \alpha } \int _{X^2}\int _{X^2}\varrho _X((u_1,l_1), (v_1,k_1))\\ {}&\cdot Q^{n-1}((u,l)(v,k),d(u_1,l_1)\times d(v_1,k_1))\,b(d(u,l)\times d(v,k))\\&\le \ldots \le \Big (\frac{\lambda LL_q}{\lambda - \alpha }\Big )^n \phi (b) = a^n \phi (b). \end{aligned}$$

We may now construct the coupling \(\{C^1(((x, i),(y, j)),\cdot ):(x, i),(y, j)\in X\}\) for \(\{\Pi ^1((x, i),\cdot ):(x, i)\in X\}\) such that \(Q^1(((x, i),(y, j)),\cdot )\le C^1(((x, i),(y, j)),\cdot )\), since the measures \(R^1(((x, i),(y, j)),\cdot )\) are non-negative. Following the rule given in (4.2), we easily obtain the family of probability measures

$$\begin{aligned} \{C^{\infty }(((x, i),(y, j)),\cdot ):(x, i),(y, j)\in X\} \end{aligned}$$

on \((X^2)^{\infty }\) with marginals \(\Pi ^{\infty }((x, i),\cdot )\) and \(\Pi ^{\infty }((y, j),\cdot )\). This construction appears in [12].
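In simulation terms, the subcoupling (4.8) says: let both copies flow for a common exponential time t and try to make the same discrete choices \((j,\theta )\) for both, which succeeds with probability given by the minimum in the integrand; with the complementary probability the copies choose independently (the \(R^1\) part). A heuristic sketch of the discrete part via a maximal coupling, with toy probability vectors of our choosing:

```python
import numpy as np

rng = np.random.default_rng(4)

def maximal_coupling(p, q):
    """Sample a pair (a, b) with marginals p and q, maximizing P(a == b)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    common = np.minimum(p, q)                  # the Q^1-type ("diagonal") mass
    if rng.random() < common.sum():
        a = rng.choice(len(p), p=common / common.sum())
        return a, a                            # both copies make the same choice
    rp = (p - common) / (1.0 - common.sum())   # residuals: the R^1-type part
    rq = (q - common) / (1.0 - common.sum())
    return rng.choice(len(p), p=rp), rng.choice(len(q), p=rq)

print([maximal_coupling([0.7, 0.3], [0.4, 0.6]) for _ in range(5)])
```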

We may also consider a sequence of distributions \((\{C^n(((x, i),(y, j)),\cdot )\})_{n\in \mathbb {N}}\), constructed by induction on n, as is done in (4.1). Note that \(C^n(((x, i),(y, j)),\cdot )\) is the n-th marginal of \(C^{\infty }(((x, i),(y, j)),\cdot )\), for \((x, i),(y, j)\in X\). Additionally, \(\{C^n(((x, i),(y, j)),\cdot )\}\) serves as a coupling for \(\{\Pi ^n((x, i),\cdot ):(x, i)\in X\}\). Indeed, for \(A\in B_X\),

$$\begin{aligned} C^n(((x, i),(y, j)),A\times X)&=\int _{X^2}C^1((u,v),A\times X)C^{n-1}(((x, i),(y, j)),du\times dv)\\&=\int _{X^2}\Pi ^1(u,A)C^{n-1}(((x, i),(y, j)),du\times dv)\\&=\ldots =\Pi ^n((x, i),A) \end{aligned}$$

and, similarly, \(C^n(((x, i),(y, j)),X\times B)=\Pi ^n((y, j),B)\).

Fix \(((x_0, i_0),(y_0, j_0))\in X^2\). The sequence of transition probability functions \(\Big (\{C^n(((x, i),(y, j)),\cdot ):(x, i),(y, j)\in X\}\Big )_{n\in \mathbb {N}}\) defines the Markov chain \(\mathcal {Z}\) on \(X^2\) with starting point \(((x_0, i_0),(y_0, j_0))\), while the sequence of transition probability functions

$$\begin{aligned} \Big (\left\{ \hat{C}^n(((x, i),(y, j),k),\cdot ):(x, i),(y, j)\in X, k\in \{0,1\}\right\} \Big )_{n\ge 1} \end{aligned}$$

defines the Markov chain \(\hat{\mathcal{{Z}}}\) on the augmented space \(X^2\times \{0,1\}\) with initial distribution \(\hat{C}^0(((x_0, i_0),(y_0, j_0)),\cdot )=\delta _{((x_0, i_0),(y_0, j_0),1)}(\cdot )\). If \(\hat{\mathcal{{Z}}}_n=((x, i),(y, j),k)\), where \((x, i),(y, j)\in X\), \(k\in \{0,1\}\), then

$$\begin{aligned} \mathbb {P}(\hat{\mathcal{{Z}}}_{n+1}\in A\times B\times \{1\}\,|\,\hat{\mathcal{{Z}}}_n=((x, i),(y, j),k),k\in \{0,1\})=Q^n(((x, i),(y, j)),A\times B), \end{aligned}$$
$$\begin{aligned} \mathbb {P}(\hat{\mathcal{{Z}}}_{n+1}\in A\times B\times \{0\}\,|\,\hat{\mathcal{{Z}}}_n=((x, i),(y, j),k),k\in \{0,1\})=R^n(((x, i),(y, j)),A\times B), \end{aligned}$$

where \(A,B\in B_X\). Once again, we refer to (4.1) and the Kolmogorov theorem to obtain the measure \(\hat{C}^{\infty }(((x_0, i_0),(y_0, j_0)),\cdot )\) on \((X^2\times \{0,1\})^{\infty }\) which is associated with the Markov chain \(\hat{\mathcal{{Z}}}\).

From now on, we assume that the processes \(\mathcal{{Z}}\) and \(\hat{\mathcal{{Z}}}\), taking values in \(X^2\) and \(X^2\times \{0,1\}\), respectively, are defined on \((\Omega , \Sigma , \mathbb {P})\). Expectation with respect to the measure \(C^{\infty }(((x_0, i_0),(y_0, j_0)),\cdot )\) or \(\hat{C}^{\infty }(((x_0, i_0),(y_0, j_0)),\cdot )\) is denoted by \(E_{(x_0, i_0),(y_0, j_0)}\).

5 Auxiliary Theorems

Before proceeding to the proof of Theorem 2 we formulate two lemmas and two theorems, which are interesting in their own right. The first one is inspired by the reasoning which can be found in [13].

Fix \(\tilde{a}\in (0,1-a)\) and set

$$\begin{aligned} K_{\tilde{a}}=\{((x, i),(y, j))\in X^2: \, \bar{V}((x, i),(y, j))<\tilde{a}^{-1}2c\}, \end{aligned}$$

where a and c are given by (4.5). Let \(\tau _{K_{\tilde{a}}}:(X^2)^{\infty }\rightarrow \mathbb {N}\) denote the time of the first visit to \(K_{\tilde{a}}\), i.e.

$$\begin{aligned} \tau _{K_{\tilde{a}}}(((x_n, i_n),(y_n, j_n))_{n\in \mathbb {N}})=\inf \{n\in \mathbb {N}:\, ((x_n, i_n),(y_n, j_n))\in K_{\tilde{a}}\}. \end{aligned}$$

As a convention, we put \(\tau _{K_{\tilde{a}}}(((x_n, i_n),(y_n, j_n))_{n\in \mathbb {N}})=\infty \), if there is no \(n\in \mathbb {N}\) such that \(((x_n, i_n),(y_n, j_n))\in K_{\tilde{a}}\).

Since

$$\begin{aligned} \langle \bar{V},b^n\rangle \,\le \,a^n\langle \bar{V},b\rangle +\frac{2}{1-a}c, \end{aligned}$$

by Lemma 2.2 in [21] or Theorem 7 in [13], we obtain

Lemma 2

For every \(\zeta \in (0,1)\) there exist positive constants \(D_1,D_2\) such that

$$\begin{aligned} E_{(x_0, i_0),(y_0, j_0)}\left[ (a+\tilde{a})^{-\zeta \tau _{K_{\tilde{a}}}}\right] \le D_1\bar{V}((x_0, i_0),(y_0, j_0))+D_2. \end{aligned}$$

For every \(r>0\), we define the set

$$\begin{aligned} C_r=\left\{ ((x, i),(y, j))\in X^2:\; \varrho _X((x, i),(y, j))<r\right\} . \end{aligned}$$

Lemma 3

Assume that the system \((T, q, p)\) satisfies conditions (3.10)–(3.11) and (3.15)–(3.16). Fix \(a_1\in (a,1)\). Let \(C_r\) be the set defined above and suppose that \(b\in M_{fin}(X^2)\) is such that \(\mathrm{supp}\,b\subset C_r\). Then there exists \(\bar{\gamma }>0\) such that

$$\begin{aligned} (Q^n b)(C_{a_1^nr})\ge \bar{\gamma }^n\Vert b\Vert . \end{aligned}$$

Proof

By (4.3), (4.8) and (4.9), we obtain

$$\begin{aligned}&Q^n((x, i)(y, j), C_{a_1^nr})= \sum _{(i_1, \ldots , i_n)} \sum _{({\theta }_1, \ldots , {\theta }_n)} \int _{\mathbb {R}_+^n} \lambda ^n e^{-\lambda (t_1+\ldots + t_n)}\\&\cdot \prod _{k=2}^n \big [p_{i_{k-1}i_k}\big (({q}\circ T)_{k-1}(\mathbf{t_{k-1}}, {\varvec{\theta _{k-1}}}, \mathbf{i_{k-1}}, x)\big ) \tilde{p}_{{\theta }_k}\big (T_{i_k}( t_k, ({q}\circ T)_{k-1}(\mathbf{t_{k-1}},{\varvec{\theta _{k-1}}}, \mathbf{i_{k-1}}, x))\big )\\&\wedge p_{i_{k-1}i_k}\big (({q}\circ T)_{k-1}(\mathbf{t_{k-1}}, {\varvec{\theta _{k-1}}}, \mathbf{i_{k-1}}, y)\big ) \tilde{p}_{{\theta }_k}\big (T_{i_k}( t_k, ({q}\circ T)_{k-1}(\mathbf{t_{k-1}}, {\varvec{\theta _{k-1}}}, \mathbf{i_{k-1}}, y))\big )\big ]\\&\cdot \big [p_{ii_1}(x)\tilde{p}_{{\theta }_1}(T_{i_1}(t_1, x)) \wedge p_{ii_1}(y)\tilde{p}_{{\theta }_1}(T_{i_1}(t_1, y))\big ]\\&\cdot 1_{C_{a_1^nr}}\big (({q}\circ T)_n( \mathbf{t_n}, {\varvec{\theta _n}}, \mathbf{i_n}, x), ({q}\circ T)_n( \mathbf{t_n}, {\varvec{\theta _n}}, \mathbf{i_n}, y)\big )\,dt_1\ldots dt_n. \end{aligned}$$

Directly from (4.9) and (4.10) we obtain

$$\begin{aligned}&(Q^n b)(C_{a_1^nr}) = \int _{X^2}Q^n ((x, i)(y, j),C_{a_1^nr})\,b(d(x, i)\times d(y, j)). \end{aligned}$$

Set

$$\begin{aligned}&\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n \\&=\{(\mathbf{t_n},{\varvec{\theta _n}}, \mathbf{i_n}) \in \mathbb {R}_+^n\times \Theta ^n \times I^n :\varrho (({q}\circ T)_n( \mathbf{t_n}, {\varvec{\theta _n}}, \mathbf{i_n}, x), ({q}\circ T)_n( \mathbf{t_n}, {\varvec{\theta _n}}, \mathbf{i_n}, y)) <a_1^nr\}. \end{aligned}$$

Note that \( 1_{C_{a_1^nr}}(({q}\circ T)_n( \mathbf{t_n}, {\varvec{\theta _n}}, \mathbf{i_n}, x), ({q}\circ T)_n( \mathbf{t_n}, {\varvec{\theta _n}}, \mathbf{i_n}, y))=1\) if and only if \( (\mathbf{t_n}, {\varvec{\theta _n}}, \mathbf{i_n}) \in \mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n\). Set \((\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n)':=\mathbb {R}_+^n\times \Theta ^n \times I^n\setminus (\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n)\). According to assumptions (3.10) and (3.11), we have

$$\begin{aligned}&\int _{(\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n )'}\lambda ^n e^{-\lambda (t_1+\ldots + t_n)} \varrho (({q}\circ T)_n( \mathbf{t_n}, {\varvec{\theta _n}}, \mathbf{i_n}, x), ({q}\circ T)_n( \mathbf{t_n}, {\varvec{\theta _n}}, \mathbf{i_n}, y))\\&\cdot p_{i_{n-1}i_n}\big (({q}\circ T)_{n-1}(\mathbf{t_{n-1}}, {\varvec{\theta _{n-1}}}, \mathbf{i_{n-1}}, x)\big )\\&\cdot \tilde{p}_{{\theta }_n}\big (T_{i_n}( t_n, ({q}\circ T)_{n-1}(\mathbf{t_{n-1}},{\varvec{\theta _{n-1}}}, \mathbf{i_{n-1}}, x))\big )\cdot \ldots \cdot p_{ii_1}(x)\tilde{p}_{{\theta }_1}(T_{i_1}(t_1, x))\,dt_1\ldots dt_n\\&\le \int _{\mathbb {R}_+^n} L_q^n L^n {\lambda }^n e^{-\lambda (t_1+\ldots + t_n)}e^{\alpha (t_1+\ldots + t_n)}\varrho (x, y)\, dt_1\ldots dt_n\le a^nr \end{aligned}$$

for \(((x,i),(y, j))\in C_r\), where \(a = \frac{\lambda L L_q}{\lambda - \alpha }\). Comparing this with the definition of \((\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n )'\), we obtain

$$\begin{aligned}&a_1^n r \int _{(\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n )'}\lambda ^n e^{-\lambda (t_1+\ldots + t_n)}\, p_{i_{n-1}i_n}\big (({q}\circ T)_{n-1}(\mathbf{t_{n-1}}, {\varvec{\theta _{n-1}}}, \mathbf{i_{n-1}}, x)\big )\\&\cdot \tilde{p}_{{\theta }_n}\big (T_{i_n}( t_n, ({q}\circ T)_{n-1}(\mathbf{t_{n-1}},{\varvec{\theta _{n-1}}}, \mathbf{i_{n-1}}, x))\big )\cdot \ldots \cdot p_{ii_1}(x)\tilde{p}_{{\theta }_1}(T_{i_1}(t_1, x))\,dt_1\ldots dt_n \\&< a^nr, \end{aligned}$$

which implies

$$\begin{aligned}&\int _{(\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n )'}\lambda ^n e^{-\lambda (t_1+\ldots + t_n)}\, p_{i_{n-1}i_n}\big (({q}\circ T)_{n-1}(\mathbf{t_{n-1}}, {\varvec{\theta _{n-1}}}, \mathbf{i_{n-1}}, x)\big )\\&\cdot \tilde{p}_{{\theta }_n}\big (T_{i_n}( t_n, ({q}\circ T)_{n-1}(\mathbf{t_{n-1}},{\varvec{\theta _{n-1}}}, \mathbf{i_{n-1}}, x))\big )\cdot \ldots \cdot p_{ii_1}(x)\tilde{p}_{{\theta }_1}(T_{i_1}(t_1, x))\,dt_1\ldots dt_n \\&< \frac{a^n}{a_1^n}<1. \end{aligned}$$

We then obtain that the corresponding integral over \(\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n\) is not less than \(1-\big (\frac{a}{a_1}\big )^n\ge \big (1-\frac{a}{a_1}\big )^n=:\gamma ^n\), for sufficiently large \(n\in \mathbb {N}\).

Using the bounds \(M_1, M_2\) from (5.1) below, we obtain

$$\begin{aligned} \int _{\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n }\lambda ^n e^{-\lambda (t_1+\ldots + t_n)}\,dt_1\ldots dt_n \ge \frac{\gamma ^n}{M_1^nM_2^n}, \end{aligned}$$

where

$$\begin{aligned} M_1 = \sup _{i\in I} \sup _{x \in Y} p_{i i_0}(x),\quad M_2 = \sup _{x \in Y} \tilde{p}_{{\theta }_0}(x). \end{aligned}$$
(5.1)

Finally,

$$\begin{aligned} (Q^n b)(C_{a_1^nr})&\ge \int _{X^2}\delta _1^{n} \delta _2 ^{n} \int _{\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n }\lambda ^n e^{-\lambda (t_1+\ldots + t_n)}\,dt_1\ldots dt_n\, b(d(x, i)\times d(y, j))\\&\ge \delta _1^{n} \delta _2 ^{n}\frac{\gamma ^n}{M_1^{n}M_2^{n}}\Vert b\Vert . \end{aligned}$$

If we set \(\bar{\gamma }:= \frac{ \delta _1 \delta _2 }{M_1M_2} \gamma \), the proof is complete. \(\square \)

Theorem 5

Assume that the system \((T, q, p)\) satisfies conditions (3.10)–(3.16). For every \(\tilde{a}\in (0,1-a)\), there exists \(n_0\in \mathbb {N}\) such that

$$\begin{aligned} \Vert Q^{\infty }(((x, i),(y, j)),\cdot )\Vert \ge \frac{1}{2}\bar{\gamma }^{n_0}\quad \text {for }((x, i),(y, j))\in K_{\tilde{a}}, \end{aligned}$$

where \(\bar{\gamma }>0\) is given in Lemma 3.

Proof

Note that, for all real numbers \(u, v\in \mathbb {R}\), we have \(\min \{u,v\}+|u-v|-u\ge 0\). Hence, for every \((x_1,i_1),(x_2,i_2)\in X\), we obtain

$$\begin{aligned}&\sum _{j\in I}\sum _{\theta \in \Theta }\int _0^{+\infty } \lambda e^{-\lambda t}\big ( \min \{p_{i_1j}(x_1)\tilde{p}_{\theta }(T_j(t, x_1)), p_{i_2j}(x_2)\tilde{p}_{\theta }(T_j(t, x_2))\}\\&+ | p_{i_1j}(x_1)\tilde{p}_{\theta }(T_j(t, x_1)) - p_{i_2j}(x_2)\tilde{p}_{\theta }(T_j(t, x_2))| - p_{i_1j}(x_1)\tilde{p}_{\theta }(T_j(t, x_1)) \big ) dt \ge 0 \end{aligned}$$

and therefore, due to (4.8),

$$\begin{aligned}&\Vert Q^1((x_1,i_1)(x_2, i_2),\cdot )\Vert \\&+\sum _{j \in I}\sum _{\theta \in \Theta } \int _0^{+\infty }\lambda e^{-\lambda t}| p_{i_1j}(x_1)\tilde{p}_{\theta }(T_j(t, x_1)) - p_{i_2j}(x_2)\tilde{p}_{\theta }(T_j(t, x_2))| dt\ge 1. \end{aligned}$$

For every \(b\in M_{\text {fin}}(X^2)\), we get

$$\begin{aligned} \Vert Q^1b\Vert&=\int _{X^2}Q^1((x_1,i_1)(x_2, i_2),X^2)\,b(d(x_1, i_1)\times d(x_2, i_2))\\&=\int _{X^2}\Vert Q^1((x_1,i_1)(x_2, i_2),\cdot )\Vert \,b(d(x_1, i_1)\times d(x_2, i_2))\\&\ge \Vert b\Vert -\int _{X^2}\sum _{j\in I} \sum _{\theta \in \Theta } \int _0^{+\infty }\lambda e^{-\lambda t}| p_{i_1j}(x_1)\tilde{p}_{\theta }(T_j(t, x_1)) - p_{i_2j}(x_2)\tilde{p}_{\theta }(T_j(t, x_2))|\, dt\, b(d(x_1, i_1)\times d(x_2, i_2)). \end{aligned}$$

We consider two cases: \(i_1 = i_2 = i \) and \(i_1 \ne i_2\). From (3.10) and (3.13), we obtain for \(i_1 = i_2 = i \)

$$\begin{aligned}&\int _0^{\infty }\lambda e^{-\lambda t}\sum _{j\in I} \sum _{\theta \in \Theta } | p_{ij}(x_1)\tilde{p}_{\theta }(T_j(t, x_1)) - p_{ij}(x_2)\tilde{p}_{\theta }(T_j(t, x_2))|\,dt\\&\le \int _0^{\infty }\lambda e^{-\lambda t}\sum _{j\in I} \sum _{\theta \in \Theta } | p_{ij}(x_1) - p_{ij}(x_2)|\,\tilde{p}_{\theta }(T_j(t, x_1 ))\,dt\\&+ \int _0^{\infty }\lambda e^{-\lambda t}\sum _{j\in I} \sum _{\theta \in \Theta } p_{ij}(x_2)\,|\tilde{p}_{\theta }(T_j(t, x_1)) - \tilde{p}_{\theta }(T_j(t, x_2))|\,dt \\&\le \overline{\gamma }_1\varrho (x_1, x_2) + \int _0^{\infty }\lambda e^{-\lambda t}\overline{\gamma }_2L e^{\alpha t}\varrho (x_1, x_2)\,dt \\&\le (\overline{\gamma }_1 + \gamma _2)\varrho (x_1, x_2) \le (\overline{\gamma }_1 + \gamma _2)\, \varrho _X ((x_1,i_1),(x_2, i_2)), \end{aligned}$$

where \(\gamma _2 = \overline{\gamma }_2\frac{L\lambda }{\lambda - \alpha }\).

Suppose now that \(i_1 \ne i_2\); then \(\varrho _c(i_1, i_2)= c\), and we choose the constant c in (3.5) so that \(c > (\overline{\gamma }_1 + \gamma _2)^{-1}\). In this case, we obtain

$$\begin{aligned} 1 -(\overline{\gamma }_1 + \gamma _2)\varrho _X ((x_1,i_1),(x_2, i_2)) = 1 - (\overline{\gamma }_1 + \gamma _2)(\varrho (x_1, x_2) + c) \le 1 - (\overline{\gamma }_1 + \gamma _2)c \le 0. \end{aligned}$$

Thus

$$\begin{aligned} Q^1((x_1, i_1)(x_2, i_2), X^2) \ge 0 \ge 1 - (\overline{\gamma }_1 + \gamma _2)\varrho _X ((x_1,i_1),(x_2, i_2)). \end{aligned}$$

Hence,

$$\begin{aligned} \Vert Q^1b\Vert&\ge \Vert b\Vert - \int _{X^2} (\overline{\gamma }_1 + {\gamma }_2)\varrho _X((x_1,i_1),( x_2, i_2))\,b(d(x_1, i_1)\times d(x_2, i_2)) \\&= \Vert b\Vert - (\overline{\gamma }_1 + {\gamma }_2)\phi (b). \end{aligned}$$

By (4.11) and

$$\begin{aligned} \phi (Q^nb) \le a^n\phi (b), \end{aligned}$$

we obtain

$$\begin{aligned} \Vert Q^nb\Vert =&\int _{X^2}Q^1((x,i)(y,j),X^2)\,(Q^{n-1}b)(d(x, i)\times d(y, j))\\&\ge \Vert Q^{n-1}b\Vert - (\overline{\gamma }_1 + {\gamma }_2) \phi (Q^{n-1}b)\\&\ge \Vert b\Vert - (\overline{\gamma }_1 + {\gamma }_2)\sum _{k=0}^{n-1} \phi (Q^kb) \ge \Vert b\Vert - (\overline{\gamma }_1 + {\gamma }_2)\phi (b)\sum _{k=0}^{n-1}a^k \\&\ge \Vert b\Vert - (\overline{\gamma }_1 + {\gamma }_2)\frac{1}{1-a}\phi (b), \end{aligned}$$

where \( a = \frac{\lambda LL_q}{\lambda - \alpha } \) and \(Q^0b=b\).

We may choose \(r>0\) such that

$$\begin{aligned} (\overline{\gamma }_1 + {\gamma }_2)\frac{r}{1-a} < \frac{1}{2}. \end{aligned}$$

Since

$$\begin{aligned} \phi (b) \le r\Vert b\Vert \end{aligned}$$

whenever \(\mathrm{supp}\,b\subset C_r\), we obtain

$$\begin{aligned} \Vert Q^{\infty }b\Vert \ge \frac{\Vert b\Vert }{2}. \end{aligned}$$
(5.2)

Fix \(\tilde{a}\in (0,1-a)\). It is clear that \(K_{\tilde{a}}\subset C_{\tilde{a}^{-1} 2c}\). If we define \(n_0:=\min \{n\in \mathbb {N}:\, a^n\tilde{a}^{-1}2c<r\}\), then \(C_{a^{n_0}\tilde{a}^{-1} 2c}\subset C_r\). Remembering that \(Q^{n+m}((x,i)(y,j),\cdot )=Q^m(Q^n((x,i)(y,j),\cdot ))\) and using the Markov property, we obtain

$$\begin{aligned} Q^{\infty }((x,i)(y,j),\cdot )= Q^{\infty }\big ( Q^{n_0}((x,i)(y,j),\cdot )\big ). \end{aligned}$$

Then, according to (5.2) and Lemma 3, we obtain

$$\begin{aligned} \left\| Q^{\infty }((x,i)(y,j),\cdot )\right\|&=\left\| (Q^{\infty } Q^{n_0})((x,i)(y,j),\cdot )\right\| \ge \frac{\left\| Q^{n_0}((x,i)(y,j),\cdot )|_{C_r}\right\| }{2}\\&=\frac{Q^{n_0}((x,i)(y,j),C_r)}{2} \ge \frac{Q^{n_0}((x,i)(y,j),C_{a^{n_0}\tilde{a}^{-1} 2c})}{2} \ge \frac{\bar{\gamma }^{n_0}}{2} \end{aligned}$$

for \(((x,i),(y,j))\in K_{\tilde{a}}\). This finishes the proof. \(\square \)

The next theorem is partially inspired by the reasoning which can be found in Lemma 2.1 of [21].

Theorem 6

Under the hypothesis of Theorem 1, there exist \(\tilde{q}\in (0,1)\) and \(D_3>0\) such that

$$\begin{aligned} E_{(x, i),(y, j)}[\tilde{q}^{-\tau }]\le D_3(1+\bar{V}((x, i),(y, j)))\quad \text {for }((x, i),(y, j)) \in X^2. \end{aligned}$$

Proof

Fix \(\tilde{a}\in (0,1-a)\) and \(((x, i),(y, j))\in X^2\). To simplify notation, we write \(\alpha =(a+\tilde{a})^{-\frac{1}{2}}\) (within this proof \(\alpha \) denotes this constant, not the exponent in (3.10)). Let s be the random moment of the first visit to \(K_{\tilde{a}}\). Suppose that

$$\begin{aligned} s_1=s,\quad s_{n+1}=s_n+s\circ \vartheta _{s_n}, \end{aligned}$$

where \(n\in \mathbb {N}\) and \(\vartheta _n\) are shift operators on \((X^2\times \{0,1\})^{\infty }\), i.e.

$$\begin{aligned} \vartheta _n(((x_k, i_k),(y_k, j_k),\theta _k)_{k\in \mathbb {N}})=((x_{k+n}, i_{k+n}),(y_{k+n}, j_{k+n}),\theta _{k+n})_{k\in \mathbb {N}}. \end{aligned}$$

Theorem 5 implies that every \(s_n\) is \(C^{\infty }(((x, i),(y, j)),\cdot )\)-a.s. finite. The strong Markov property shows that

$$\begin{aligned} E_{(x, i),(y, j)}\left[ \alpha ^s\circ \vartheta _{s_n}|F_{s_n}\right] =E_{(x_{s_n}, i_{s_n}),(y_{s_n}, j_{s_n})}[\alpha ^s]\quad \text {for }n\in \mathbb {N}, \end{aligned}$$

where \(F_{s_n}\) denotes the \(\sigma \)-algebra generated by the process up to time \(s_n\) and \(\mathcal{{Z}}=((x_n, i_n),(y_n, j_n))_{n\in \mathbb {N}}\) is the Markov chain with the sequence of transition probability functions \((\{C^{1}(((x, i),(y, j)),\cdot ):(x, i),(y, j)\in X\})_{n\in \mathbb {N}}\). By Theorem 5 and the definition of \(K_{\tilde{a}}\), we obtain

$$\begin{aligned} E_{(x, i),(y, j)}[\alpha ^{s_{n+1}}]&=E_{(x, i),(y, j)}\Bigg [\alpha ^{s_n}E_{(x_{s_n}, i_{s_n}),(y_{s_n}, j_{s_n})}[\alpha ^s]\Bigg ]\\&\le E_{(x, i),(y, j)}\left[ \alpha ^{s_n}\right] (D_1\tilde{a}^{-1} 2c+D_2). \end{aligned}$$

Fix \(\eta =D_1\tilde{a}^{-1} 2c+D_2\). Consequently,

$$\begin{aligned} E_{(x, i),(y, j)}\left[ \alpha ^{s_{n+1}}\right] \le \eta ^n E_{(x, i),(y, j)}\left[ \alpha ^s\right] \le \eta ^n\left[ D_1\bar{V}((x, i),(y, j))+D_2\right] . \end{aligned}$$
(5.3)

We define \(\hat{\tau }(((x_n, i_n), (y_n, j_n),\theta _n)_{n\in \mathbb {N}})=\inf \{n\in \mathbb {N}:\; ((x_n, i_n), (y_n, j_n))\in K_{\tilde{a}}, \ \;\theta _k=1\,\text { for }k\ge n\}\) and \(\sigma =\inf \{n\in \mathbb {N}:\;\hat{\tau }=s_n\}\). By Theorem 5, there is \(n_0\in \mathbb {N}\) such that

$$\begin{aligned} \hat{C}^{\infty }(((x, i),(y, j)),\{\sigma >n\})\le \left( 1-\frac{\bar{\gamma }^{n_0}}{2}\right) ^n \quad \text {for }n\in \mathbb {N}. \end{aligned}$$
(5.4)

Let \(p>1\). By the Hölder inequality, (5.3) and (5.4), we obtain

$$\begin{aligned}&E_{(x, i),(y, j)}\left[ \alpha ^{\frac{\hat{\tau }}{p}}\right] \le \sum _{k=1}^{\infty }E_{(x, i),(y, j)}\left[ \alpha ^{\frac{s_k}{p}}1_{\sigma =k}\right] \\&\le \sum _{k=1}^{\infty }\Big (E_{(x, i),(y, j)}\left[ \alpha ^{s_k}\right] \Big )^{\frac{1}{p}}\Big (\hat{C}^{\infty }(((x, i),(y, j)),\{\sigma =k\})\Big )^{1-\frac{1}{p}}\\&\le \left[ D_1\bar{V}((x, i),(y, j))+D_2\right] ^{\frac{1}{p}}\eta ^{-\frac{1}{p}}\sum _{k=1}^{\infty }\eta ^{\frac{k}{p}}\left( 1-\frac{1}{2}\bar{\gamma }^{n_0}\right) ^{(k-1)\left( 1-\frac{1}{p}\right) }\\&=\left[ D_1\bar{V}((x, i),(y, j))+D_2\right] ^{\frac{1}{p}}\eta ^{-\frac{1}{p}}\left( 1-\frac{1}{2}\bar{\gamma }^{n_0}\right) ^{-\left( 1-\frac{1}{p}\right) }\sum _{k=1}^{\infty }\Bigg [\Bigg (\frac{\eta }{1-\frac{1}{2}\bar{\gamma }^{n_0}}\Bigg )^{\frac{1}{p}}\left( 1-\frac{1}{2}\bar{\gamma }^{n_0}\right) \Bigg ]^k. \end{aligned}$$

For \(p\) sufficiently large the series above converges; setting \(\tilde{q}=\alpha ^{-\frac{1}{p}}\), we get

$$\begin{aligned} E_{(x, i),(y, j)}\left[ \tilde{q}^{-\hat{\tau }}\right] =E_{(x, i),(y, j)}\left[ \alpha ^{\frac{\hat{\tau }}{p}}\right] \le \left( 1+\bar{V}((x, i),(y, j))\right) D_3 \end{aligned}$$

for some \(D_3\). Since \(\tau \le \hat{\tau }\), the proof is complete. \(\square \)

6 Central Limit Theorem: Proof of Theorems 2, 3 and 4

Let \(\{(x_n, \xi _n)\}_{n\in \mathbb {N}}\) be the Markov chain given by (3.1) and (3.2) with initial distribution \(\mu \in M_1^2(X)\), \( X = Y \times I\). Let \(g \in L^2_0(\mu )\). Define

$$\begin{aligned} S_n^{\mu } = \frac{g(x_1, \xi _1)+ \cdots + g(x_n, \xi _n)}{\sqrt{n}}, \quad \text {for} \quad n \ge 1 \end{aligned}$$
(6.1)

and let \(\Phi S_n^\mu \) denote its distribution.

Denote by \(\mu _*\in \mathcal {M}_1^1(X)\) an invariant measure for the process \(\{(x_n,\xi _n)\}_{n\ge 0}\).

Central Limit Theorems for ergodic stationary Markov chains have already been proven in many papers. See, for example, Theorem 1 and the subsequent Corollary 1 in Maxwell and Woodroofe [31].

Theorem 7

[31] Let \(g\in L^2_0(\mu _*)\). If the following condition is satisfied

$$\begin{aligned} \sum _{n=1}^\infty n^{-3/2}\Bigg (\int _X(\sum _{k=0}^{n-1}\int _Xg(y)\Pi ^k(x, dy))^2\mu _*(dx)\Bigg )^{1/2} < \infty , \end{aligned}$$
(6.2)

then there exists

$$\begin{aligned} \sigma ^2 = \sigma ^2(g) = \lim _{n \rightarrow \infty } E_{\mu _*}(S_n^{\mu _*})^2 < \infty , \end{aligned}$$

and the sequence of distributions of \((S_n^{\mu _*})_{n\ge 0}\) converges weakly to the normal distribution \(N(0, \sigma ^2)\).
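
Although it plays no role in the proofs below, the limit variance \(\sigma ^2\) in Theorem 7 can be approximated numerically once a sampler for the chain is available. The following sketch is ours and purely illustrative: the functions step (a transition sampler) and g (the centred observable) are hypothetical placeholders standing in for the chain given by (3.1) and (3.2), and a long burn-in replaces exact sampling from \(\mu _*\).

```python
import numpy as np

def estimate_sigma2(step, g, x0, n, n_paths, burn_in=1_000, seed=0):
    """Crude Monte Carlo estimate of sigma^2 = lim_n E (S_n)^2.

    step(x, rng) draws the next state of the chain; g is an observable
    with <g, mu_*> = 0.  Both are placeholders, not objects defined in
    the paper.
    """
    rng = np.random.default_rng(seed)
    vals = np.empty(n_paths)
    for j in range(n_paths):
        x = x0
        for _ in range(burn_in):          # approach stationarity
            x = step(x, rng)
        s = 0.0
        for _ in range(n):                # accumulate g along the path
            x = step(x, rng)
            s += g(x)
        vals[j] = (s / np.sqrt(n)) ** 2   # (S_n)^2 for one path
    return vals.mean()
```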

Proof of Theorem 2

We shall split the proof into four steps.

Step 1 Let \(f\in \mathcal {F}\). Then there exist \(q\in (0,1)\) and \(D_5>0\) such that

$$\begin{aligned}&\int _{X^2}|f(u_1, i_1)-f(v_1, j_1)|(\Pi _{X^2}^*\Pi ^*_n\hat{C}^{\infty }(((x, i),(y, j)),\cdot ))(d(u_1, i_1)\times d(v_1, j_1))\\&\le q^nD_5(1+\bar{V}((x, i),(y, j))) \end{aligned}$$

for every \((x, i),(y, j)\in X\), \(n\in \mathbb {N}\), where \(\Pi ^*_n:(X^2\times \{0,1\})^{\infty }\rightarrow X^2\times \{0,1\}\) is the projection on the \(n\)-th component and \(\Pi ^*_{X^2}:X^2\times \{0,1\}\rightarrow X^2\) is the projection on \(X^2\).

For \(n\in \mathbb {N}\) we define sets

$$\begin{aligned} A_{\frac{n}{2}}=\{t\in (X^2\times \{0,1\})^{\infty }:\,\tau (t)\le \frac{n}{2}\},\quad B_{\frac{n}{2}}= (X^2\times \{0,1\})^{\infty } \backslash A_{\frac{n}{2}}. \end{aligned}$$

Thus, we have for \(n\in \mathbb {N}\)

$$\begin{aligned} \hat{C}^{\infty }(((x, i),(y, j)),\cdot )=\hat{C}^{\infty }(((x, i),(y, j)),\cdot )|_{A_{\frac{n}{2}}}+\hat{C}^{\infty }(((x, i),(y, j)),\cdot )|_{B_{\frac{n}{2}}}. \end{aligned}$$
Hence

$$\begin{aligned}&\Bigg |\int _{X^2}(f(z_1, i_1)-f(z_2, i_2))\left( \Pi _{X^2}^*\Pi _n^*\hat{C}^{\infty }(((x, i),(y, j)),\cdot )|_{A_{\frac{n}{2}}}\right) (d(z_1, i_1)\times d(z_2, i_2))\\&+\int _{X^2}(f(z_1, i_1)-f(z_2, i_2))\left( \Pi _{X^2}^*\Pi _n^*\hat{C}^{\infty }(((x, i),(y, j)),\cdot )|_{B_{\frac{n}{2}}}\right) (d(z_1, i_1)\times d(z_2, i_2))\Bigg |\\&\le \int _{X^2}\varrho _X((z_1, i_1), (z_2, i_2))\left( \Pi _{X^2}^*\Pi _n^*\hat{C}^{\infty }(((x, i),(y, j)),\cdot )|_{A_{\frac{n}{2}}}\right) (d(z_1, i_1)\times d(z_2, i_2))\\&+2\hat{C}^{\infty }\left( ((x, i),(y, j)),B_{\frac{n}{2}}\right) . \end{aligned}$$

Note that, by iterative application of (4.12), we obtain

$$\begin{aligned}&\int _{X^2}\varrho _X((z_1, i_1), (z_2, i_2))\left( \Pi _{X^2}^*\Pi _n^*\hat{C}^{\infty }(((x, i),(y, j)),\cdot )|_{A_{\frac{n}{2}}}\right) (d(z_1, i_1)\times d(z_2, i_2))\\&=\phi \left( \Pi _{X^2}^*\Pi _n^*\left( \hat{C}^{\infty }(((x, i),(y, j)),\cdot )|_{A_{\frac{n}{2}}}\right) \right) \\&\le a^{\lfloor \frac{n}{2}\rfloor }\phi \left( \Pi _{X^2}^*\Pi _{\lfloor \frac{n+1}{2}\rfloor }^*\left( \hat{C}^{\infty }(((x, i),(y, j)),\cdot )|_{A_{\frac{n}{2}}}\right) \right) . \end{aligned}$$

Then it follows from (4.6) and (4.7) that

$$\begin{aligned} \phi \left( \Pi _{X^2}^*\Pi _{\lfloor \frac{n+1}{2}\rfloor }^*\left( \hat{C}^{\infty }(((x, i),(y, j)),\cdot )|_{A_{\frac{n}{2}}}\right) \right) \le a^{\lfloor \frac{n+1}{2}\rfloor }\bar{V}((x, i),(y, j))+\frac{2c}{1-a}. \end{aligned}$$

We obtain the coupling inequality

$$\begin{aligned}&\int _{X^2}\left| f(z_1, i_1)-f(z_2, i_2)\right| \left( \Pi _{X^2}^*\Pi ^*_n\hat{C}^{\infty }(((x, i),(y, j)),\cdot )\right) (d(z_1, i_1)\times d(z_2, i_2))\\&\le a^{\lfloor \frac{n}{2}\rfloor }\left[ a^{\lfloor \frac{n+1}{2}\rfloor }\bar{V}((x, i),(y, j))+\frac{2c}{1-a}\right] +2\hat{C}^{\infty }\left( ((x, i),(y, j)),B_{\frac{n}{2}}\right) . \end{aligned}$$

It follows from Theorem 6 and the Chebyshev inequality that

$$\begin{aligned}&\hat{C}^{\infty }\left( ((x, i),(y, j)),B_{\frac{n}{2}}\right) =\hat{C}^{\infty }(((x, i),(y, j)),\left\{ \tau >\frac{n}{2}\right\} )\\&=\hat{C}^{\infty }(((x, i),(y, j)),\{\tilde{q}^{-\tau }\ge \tilde{q}^{-\frac{n}{2}}\}) \le \frac{E_{(x,i),(y,j)}[\tilde{q}^{-\tau }]}{\tilde{q}^{-\frac{n}{2}}}\\&\le \tilde{q}^{\frac{n}{2}}D_3(1+\bar{V}((x, i),(y, j))) \end{aligned}$$

for some \(\tilde{q}\in (0,1)\) and \(D_3>0\). Finally,

$$\begin{aligned}&\int _{X^2}|f(z_1, i_1)-f(z_2, i_2)|(\Pi _{X^2}^*\Pi ^*_n\hat{C}^{\infty }(((x, i),(y, j)),\cdot ))(d(z_1, i_1)\times d(z_2, i_2))\\&\le a^{\lfloor \frac{n}{2}\rfloor }D_4(1+\bar{V}((x, i),(y, j)))+2\tilde{q}^{\frac{n}{2}}D_3(1+\bar{V}((x, i),(y, j))), \end{aligned}$$

where \(D_4=\max \{a^{\frac{1}{2}},(1-a)^{-1}2c\}\). Setting \(q:=\max \{a^{\frac{1}{2}},\tilde{q}^{\frac{1}{2}}\}\) and \(D_5:=D_4+2D_3\) gives our claim.

Step 2 If \(g:X\rightarrow \mathbb {R}\) is an arbitrary bounded Lipschitz function with constant \(C_g\), then there are \(q\in (0,1)\) and \(D_5>0\), exactly the same as in Step 1, for which we obtain

$$\begin{aligned}&\int _{X^2}|g(z_1, i_1)-g(z_2, i_2)|(\Pi _{X^2}^*\Pi ^*_n\hat{C}^{\infty }(((x, i),(y, j)),\cdot ))(d(z_1, i_1)\times d(z_2, i_2))\\&\le Gq^nD_5(1+\bar{V}((x, i),(y, j))) \end{aligned}$$

for every \((x, i),(y, j)\in X\), \(n\in \mathbb {N}\), where \(G:=\max \{C_g,\sup _{x\in X}|g(x)|\}\).

Let \(S_n^{\mu }\) and \(\Phi S_n^\mu \) be given by (6.1). In particular, \(S_n^{\mu _*}\) and \(S_n^{(x,i)}\) are defined for the Markov chains with the same transition probability function \(\Pi \) and initial distributions \(\mu _*\) and \(\delta _{(x, i)}\), respectively. Further, let \(g:X\rightarrow \mathbb {R}\) be a bounded and Lipschitz continuous function, with constant \(C_g\), which satisfies \(\langle g,\mu _*\rangle =0\).

Step 3 Let \(g:X\rightarrow \mathbb {R}\) be a bounded and Lipschitz continuous function with constant \(C_g\). Additionally, \(\langle g,\mu _*\rangle =0\). Then,

$$\begin{aligned} \sum _{n=1}^{\infty }n^{-3/2}\Big [\int _X\Big (\sum _{k=0}^{n-1}\langle g,P^{k}\delta _{(x, i)}\rangle \Big )^2\mu _*(d(x, i))\Big ]^{1/2}<\infty . \end{aligned}$$
(6.3)

Note that, by Step 1 and Step 2,

$$\begin{aligned}&\sum _{k=0}^{n-1}\left\langle g,P^{k}\delta _{(x, i)}\right\rangle =\sum _{k=0}^{n-1}\Big (\left\langle g,P^{k}\delta _{(x, i)}\right\rangle -\left\langle g,\mu _*\right\rangle \Big )\\&=\sum _{k=0}^{n-1}\int _X\Big [\int _Xg(z, m)(\Pi ^k((x, i),\cdot )-\Pi ^k((y, j),\cdot ))(d(z, m))\Big ]\mu _*(d(y, j))\\&=\sum _{k=0}^{n-1}\int _X\Big [\int _{X^2}(g(z_1, i_1)-g(z_2, i_2))(\Pi ^*_{X^2}\Pi ^*_k\hat{C}^{\infty }(((x, i),(y, j)),\cdot ))\\&\quad (d(z_1, i_1)\times d(z_2, i_2))\Big ]\mu _*(d(y, j))\\&\le \sum _{k=0}^{n-1}Gq^kD_5\int _{X^2}(1+\bar{V}((x, i),(y, j)))\mu _*(d(y, j)). \end{aligned}$$

Then, for every \((x, i)\in X\), \(n\in \mathbb {N}\),

$$\begin{aligned} \sum _{k=0}^{n-1}\langle g,P^{k}\delta _{(x, i)}\rangle&\le GD_5\frac{1-q^n}{1-q}\int _{X^2}(1+\bar{V}((x, i),(y, j)))\mu _*(d(y, j))\\&\le D_9(1+V((x, i))), \end{aligned}$$

where \(D_9:=GD_5(1-q)^{-1}(1+\int _XV((y, j))\mu _*(d(y, j)))\). Since \(\mu _*\) has a finite second moment, the series in (6.3) is bounded by

$$\begin{aligned} \sum _{n=1}^{\infty }n^{-3/2}\left[ D_9^2\left\langle 1+2V+V^2,\mu _*\right\rangle \right] ^{1/2}<\infty . \end{aligned}$$

Hence, the assumptions of Theorem 7 are satisfied.

Step 4 Hence, by applying Theorem 7, we obtain that \(\Phi {S_n^{\mu _*}}\) converges to the normal distribution in the Lévy metric, as \(n\rightarrow \infty \), which is equivalent to weak convergence (see [8]).

Note that, to complete the proof of Theorem 2, it is enough to establish that \(\Phi {S_n^{\mu }}\) and \(\Phi {S_n^{\mu _*}}\) have the same weak limit. Equivalently, it is enough to show that \(\lim _{n\rightarrow \infty }\Vert \Phi {S_n^{\mu }}-\Phi {S_n^{\mu _*}}\Vert _{\mathcal {FM}}=0\), since weak convergence is metrised by the Fortet–Mourier norm.

Fix \((x, i),(y, j)\in X\) and choose an arbitrary \(f\in \mathcal {F}\). Suppose for the moment that the following convergence holds, as \(n\rightarrow \infty \),

$$\begin{aligned} \Big |\int _{\mathbb {R}}f(u)\Phi S_n^{(x, i)}(du)-\int _{\mathbb {R}}f(v)\Phi S_n^{(y, j)}(dv)\Big |\rightarrow 0. \end{aligned}$$
(6.4)

Then, by the Dominated Convergence Theorem, we obtain

$$\begin{aligned}&\Big |\int _{\mathbb {R}}f(u)\Phi S_n^{\mu }(du)-\int _{\mathbb {R}}f(v)\Phi S_n^{\mu _*}(dv)\Big | \nonumber \\&\le \int _X\int _X\Big |\int _{\mathbb {R}}f(u)\Phi S_n^{(x, i)}(du)-\int _{\mathbb {R}}f(v)\Phi S_n^{(y, j)}(dv)\Big |\mu (d(x, i))\mu _*(d(y, j)) \rightarrow 0, \end{aligned}$$
(6.5)

as \(n\rightarrow \infty \). Note that, by Theorem 11.3.3 in [7], (6.5) implies that \(\Phi {S_n^{\mu }}\) converges weakly to \(\Phi {S_n^{\mu _*}}\), as \(n\rightarrow \infty \), which completes the proof of the CLT in the model. Now, it remains to show (6.4). Note that

$$\begin{aligned}&\Big |\int _{\mathbb {R}}f(u)\Phi S_n^{(x, i)}(du)-\int _{\mathbb {R}}f(v)\Phi S_n^{(y, j)}(dv)\Big | \nonumber \\&=\Big |\int _{X^n}f\Big (\frac{g(u_1, i_1)+\cdots +g(u_n, i_n)}{\sqrt{n}}\Big )\Pi ^{n}((x, i),d(u_1, i_1)\times \cdots \times d(u_n, i_n)) \nonumber \\&- \int _{X^n}f\Big (\frac{g(u_1, i_1)+\cdots +g(u_n, i_n)}{\sqrt{n}}\Big )\Pi ^{n}((y, j),d(u_1, i_1)\times \cdots \times d(u_n, i_n)) \Big |. \end{aligned}$$
(6.6)

Since the difference of the two integrals equals the integral of \(f(u)-f(v)\) with respect to any measure with marginals \(\Pi ^{n}((x, i),\cdot )\) and \(\Pi ^{n}((y, j),\cdot )\), we may estimate (6.6) by

$$\begin{aligned}&\Big |\int _{X^n}\int _{X^n}\Big [f\Big (\frac{g(u_1, i_1)+\cdots +g(u_n, i_n)}{\sqrt{n}}\Big ) - f\Big (\frac{g(v_1, j_1)+\cdots +g(v_n, j_n)}{\sqrt{n}}\Big )\Big ] \nonumber \\&\Pi ^{n}((x, i),d(u_1, i_1)\times \cdots \times d(u_n, i_n))\Pi ^{n}((y, j),d(v_1, j_1)\times \cdots \times d(v_n, j_n)) \Big | \nonumber \\&\le \int _{(X^2)^n}\Big | f\Big (\frac{g(u_1, i_1)+\cdots +g(u_n, i_n)}{\sqrt{n}}\Big ) - f\Big (\frac{g(v_1, j_1)+\cdots +g(v_n, j_n)}{\sqrt{n}}\Big ) \Big | \nonumber \\&\Big (\Pi ^*_{X^{2n}}\Pi ^*_{1,\ldots ,n}\hat{C}^{\infty }(((x, i),(y, j)),\cdot )\Big )(d(u_1, i_1)\times \cdots \times d(u_n, i_n) \nonumber \\&\times d(v_1, j_1)\times \cdots \times d(v_n, j_n)), \end{aligned}$$
(6.7)

where \(\Pi ^*_{1,\ldots ,n}:(X^2\times \{0,1\})^{\infty }\rightarrow (X^2\times \{0,1\})^n\) is the projection on the first \(n\) components and \(\Pi ^*_{X^{2n}}:(X^2\times \{0,1\})^n\rightarrow X^{2n}\) is the projection on \(X^{2n}\). Since \(f\) is Lipschitz with constant \(C_f\), we may further estimate (6.7) as follows

$$\begin{aligned}&\frac{C_f}{\sqrt{n}}\int _{X^{2n}}\Big [|g(u_1, i_1)-g(v_1, j_1)|+\cdots +|g(u_n, i_n)-g(v_n, j_n)|\Big ]\\&\quad \Big (\Pi ^*_{X^{2n}}\Pi ^*_{1,\ldots ,n}\hat{C}^{\infty }(((x, i),(y, j)),\cdot )\Big )((d(u_k, i_k)\times d(v_k, j_k))_{k=1}^n)\\&=\frac{C_f}{\sqrt{n}}\sum _{k=1}^n\int _{X^2}|g(u_k,i_k)-g(v_k,j_k)| (\Pi ^*_{X^{2}}\Pi ^*_{k}\hat{C}^{\infty }(((x,i),(y,j)),\cdot )) \\&\quad \times (d(u_k,i_k)\times d(v_k,j_k)). \end{aligned}$$

Now, for every \(1\le k\le n\), we refer to Step 1 and Step 2 to observe that (6.7) is not bigger than

$$\begin{aligned} \frac{C_f G}{\sqrt{n}}\sum _{k=1}^nq^kD_5(1+\bar{V}((x, i),(y, j)))=n^{-\frac{1}{2}}C_fGD_5q\frac{1-q^n}{1-q}(1+\bar{V}((x, i),(y, j))), \end{aligned}$$

which, by (6.7), is an upper bound for (6.6). Letting \(n\) tend to infinity, we obtain (6.4). The proof is complete. \(\square \)

Proof of Theorem 3

Let \(\mu \in M_1^2(X)\) and fix \((x, i)\in X \). Then

$$\begin{aligned} UV^2(x,i)\le&\sum _{j\in I}\sum _{\theta \in \Theta } \int _0^{+\infty } 2\varrho ^2 (q_{\theta }\big (T_j(t,x)\big ) , q_{\theta }\big (T_j(t,x_*)\big )) \lambda e^{-\lambda t} p_{ij}(x)\tilde{p}_{\theta }\big (T_j(t,x)\big )\, dt \\&+\sum _{j\in I}\sum _{{\theta }\in \Theta } \int _0^{+\infty } 4\varrho ^2 (q_{\theta }\big (T_j(t,x_*)\big ) , q_{\theta }(x_*)) \lambda e^{-\lambda t} p_{ij}(x)\tilde{p}_{\theta }\big (T_j(t,x)\big )\, dt \\&+\sum _{j\in I}\sum _{{\theta }\in \Theta } \int _0^{+\infty } 4\varrho ^2 (q_{\theta }(x_*),x_*) \lambda e^{-\lambda t} p_{ij}(x)\tilde{p}_{\theta }\big (T_j(t,x)\big )\, dt . \end{aligned}$$

Further, using (3.17), (3.18) and (3.14) for all \(i_0\in I\) and \(\theta _0 \in \Theta \), we obtain

$$\begin{aligned} UV^2(x,i)\le \gamma V^2(x,i)+\beta , \end{aligned}$$

where

$$\begin{aligned}&\gamma =\frac{2\lambda (L L_q)^2}{\lambda -2\alpha } < 1, \\&\beta =4L_q^2\sum _{j\in I}\sum _{{\theta }\in \Theta }\int _0^{+\infty }\lambda e^{-\lambda t}\varrho ^2 (T_j(t,x_*), x_*)\, dt +4\sum _{{\theta }\in \Theta }\varrho ^2(q_{\theta }(x_*),x_*). \end{aligned}$$

Since

$$\begin{aligned} \left\langle V^2,P\mu \right\rangle \le \gamma \left\langle V^2,\mu \right\rangle + \beta , \end{aligned}$$

we obtain, by iterating this inequality \(n\) times and summing the geometric series,

$$\begin{aligned} \left\langle V^2,P^n\mu \right\rangle \le \gamma ^n \left\langle V^2,\mu \right\rangle + \frac{\beta }{1- \gamma }. \end{aligned}$$

We take the non-decreasing sequence \((V^2_k)_{k\in \mathbb {N}}\) defined by \(V^2_k(y)=\min \{k,V^2(y)\}\) for every \(k\in \mathbb {N}\) and \(y\in X\). Each \(V^2_k\) is bounded and continuous on \(X\). Since \(P^n\mu \) converges weakly to \(\mu _*\), for all \(k\in \mathbb {N}\) we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\left\langle V^2_k,P^n\mu \right\rangle =\left\langle V^2_k,\mu _*\right\rangle \,\le \frac{\beta }{1- \gamma } \end{aligned}$$

so the sequence \((\left\langle V^2_k,\mu _*\right\rangle )_{k\in \mathbb {N}}\) is bounded. Because \((V^2_k)_{k\in \mathbb {N}}\) is non-negative and non-decreasing, we may use the Monotone Convergence Theorem to obtain

$$\begin{aligned} \left\langle V^2,\mu _*\right\rangle =\lim _{k\rightarrow \infty }\left\langle V^2_k,\mu _*\right\rangle \le \frac{\beta }{1-\gamma }<\infty , \end{aligned}$$

so, indeed, \(\mu _*\) has a finite second moment. \(\square \)

Proof of Theorem 4

Theorem 1 implies that there exists an invariant measure \(\mu _* \in {M}_1(X) \) for the Markov operator \(P\) given by (3.6). By Theorem 5.3.1 of [17], \( {\mu }_0 = G\mu _*, \) where

$$\begin{aligned} G\mu (A) = \sum _{i\in I} \int \limits _{Y\times I} \int \limits _0^{+\infty } 1_A(T_i(t, x), i) p_{ki}(x) \lambda e^{-\lambda t} dt \mu (dx, dk),\,A \in {B}_X,\, \mu \in {M}_1(X), \end{aligned}$$

is an invariant measure for the Markov semigroup \(\{P^t\}_{t \ge 0}\) given by (3.8). Define

$$\begin{aligned} \tilde{G}f(x, k) = \sum _{i\in I} \int \limits _0^{+\infty } f(T_i(t, x), i) p_{ki}(x) \lambda e^{-\lambda t}\, dt \quad \text {for}\quad f \in B(X); \end{aligned}$$

then \( \left\langle f, G\mu \right\rangle = \left\langle \tilde{G}f, \mu \right\rangle .\) For every \(f \in B(X)\), we set

$$\begin{aligned} \tilde{U}_nf = \sum _{k=0}^{n-1}f(x_k, \xi _k),\quad U_t f = \int _0^t f(X(s), \xi (s)) ds \end{aligned}$$

and

$$\begin{aligned} \tilde{K}f = \frac{1}{\lambda } \tilde{G} f, \quad N_t = \sum _{i=1}^{+\infty } 1_{\{t \ge \tau _i\}}. \end{aligned}$$

Decomposing [0, t] along the jump times, we obtain

$$\begin{aligned} \frac{1}{t^a}U_tf = \left( \frac{N_t}{t}\right) ^a\Bigg [ \frac{1}{(N_t)^a} \sum _{i=0}^{N_t-1} \int _{\tau _i}^{\tau _{i+1}} f( X(s), \xi (s)) ds - R_t\Bigg ], \quad \text {for} \,\, a=1 \,\text {or}\, a=\frac{1}{2}, \end{aligned}$$
(6.8)

where \(\Vert R_t\Vert \le \Vert f\Vert \frac{\tau _{N_t+1} - \tau _{N_t}}{N_t}.\)

For \(n\in \mathbb {N}\) and \(f \in B(X)\) we define

$$\begin{aligned}&M_n = \sum _{i=0}^{n-1} \Bigg (\int _{\tau _i}^{\tau _{i+1}} f(X(s), \xi (s))ds - \tilde{K}f(x_i, \xi _i)\Bigg ) \nonumber \\&\text {and} \quad \mathcal {F}_n = \sigma ((\Delta \tau _i, x_i, \xi _i) : i \le n). \end{aligned}$$
(6.9)

Using (6.8), it is easy to check that

$$\begin{aligned}&\frac{1}{t^a}U_tf - \left( \frac{1}{N_t}\right) ^a\tilde{U}_{N_t}\tilde{G}f \nonumber \\&= \left( \frac{N_t}{t}\right) ^a\Bigg ( \frac{1}{(N_t)^a}\sum _{i=0}^{N_t-1} \Bigg (\int _{\tau _i}^{\tau _{i+1}} f(X(s), \xi (s))ds - \tilde{K}f(x_i, \xi _i)\Bigg )\Bigg ) \nonumber \\&- \left( \lambda ^a - \left( \frac{N_t}{t}\right) ^a\right) \left( \frac{1}{(N_t)^a}\sum _{i=0}^{N_t-1}\tilde{K}f(x_i, \xi _i)\right) + \left( \frac{N_t}{t}\right) ^a R_t \nonumber \\&= \left( \frac{N_t}{t}\right) ^a\left( \frac{1}{(N_t)^a}M_{N_t}\right) - \left( \lambda ^a - \left( \frac{N_t}{t}\right) ^a\right) \left( \frac{1}{(N_t)^a}\sum _{i=0}^{N_t-1}\tilde{K}f(x_i, \xi _i)\right) + \left( \frac{N_t}{t}\right) ^a R_t. \end{aligned}$$
(6.10)

Since

$$\begin{aligned} \int _{\tau _i}^{\tau _{i+1}} f(X(s), \xi (s))ds = \int _0^{\Delta \tau _i} f( T_{\xi _{i+1}}(s, x_i), \xi _{i+1})ds\quad \text {for}\quad i\ge 0, \end{aligned}$$

\( (M_n, \mathcal {F}_n)_{n\in \mathbb {N}} \) is a martingale with increments in \(L^2\): \(E((M_{n+1} - M_n)^2) \le \frac{6\Vert f\Vert ^2}{\lambda ^2}.\) Therefore, by the strong law of large numbers for martingales,

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{M_n}{n} = 0 \quad \text { almost surely}. \end{aligned}$$
(6.11)

Observe that \( \lim _{t \rightarrow \infty } \frac{N_t}{t} = \lambda \) and \( \lim _{t \rightarrow \infty }R_t = 0 \) almost surely. Hence, from (6.10) with \(a =1\) and (6.11), we have

$$\begin{aligned} \lim _{t \rightarrow \infty } \frac{1}{t}U_tf - \frac{1}{N_t}\tilde{U}_{N_t}\tilde{G}f = 0 \quad \text {for}\quad f \in B(X) \end{aligned}$$
(6.12)

with probability one.

Note that \(\tilde{G}f\in Lip_b(X) \) for \( f \in Lip_b(X)\). Applying Theorem 1, we obtain

$$\begin{aligned} \lim _{t \rightarrow \infty } \frac{1}{N_t}\tilde{U}_{N_t}\tilde{G}f =\lim _{t\rightarrow \infty } \frac{1}{N_t} \sum _{k=0}^{N_t-1}\tilde{G}f(x_k, \xi _k) = \langle \tilde{G}f, \mu _*\rangle = \langle f, G\mu _*\rangle = \langle f, \mu _0\rangle , \end{aligned}$$

\(\mathbb {P}_{x_0,\xi _0}\) almost surely. Therefore, by (6.12), we have

$$\begin{aligned} \lim _{t \rightarrow \infty }\frac{1}{t}\int _0^t f(X(s), \xi (s)) ds = \langle f, \mu _0\rangle . \end{aligned}$$

The proof of Theorem 4 (i) is complete. \(\square \)

Moreover, for \(f \in Lip_b(X)\)

$$\begin{aligned} \frac{1}{n}\sum _{k=1}^n E((M_{k+1} - M_k)^2 | \mathcal {F}_k) = \frac{1}{n} \sum _{k=1}^n (Hf - \tilde{K}^2f)(x_k, \xi _k), \end{aligned}$$

where the operators H and \(\tilde{K}\) are given by (3.19). Since \(Hf - \tilde{K}^2f \in Lip_b(X)\) for \( f \in Lip_b(X)\), by Theorem 1

$$\begin{aligned} \lim _{n \rightarrow \infty }\frac{1}{n} \sum _{k=1}^n (Hf - \tilde{K}^2f)(x_k, \xi _k) = \langle Hf- \tilde{K}^2f, \mu _* \rangle = \sigma _1^2. \end{aligned}$$

Thus, all assumptions of Theorem A.1 of [40] are satisfied. By the Central Limit Theorem for martingales, \(\frac{M_n}{\sqrt{n}}\) converges in distribution to the normal distribution \(N(0, \sigma _1^2)\), as \(n\rightarrow \infty .\)

Furthermore, from (6.10) for \(a = \frac{1}{2}\) and Theorem A.1 of [40], we obtain

$$\begin{aligned} \frac{1}{\sqrt{t}}U_tf - \frac{1}{\sqrt{N_t}}\tilde{U}_{N_t}\tilde{ G}f \end{aligned}$$
(6.13)

converges in distribution to the normal distribution \(N(0, \sigma _1^2)\), as \(t\rightarrow \infty .\)

Finally, let \( f : X \rightarrow \mathbb {R}\) be a bounded and Lipschitz continuous function such that \(\langle f, \mu _0\rangle = 0 \); then \( \langle \tilde{G}f, \mu _* \rangle = 0\). By (6.13) and Theorem 2 we obtain the CLT for the process \(\{(X(t), \xi (t))\}_{t \ge 0}\). \(\square \)

7 Applications

Example 1

Poisson driven stochastic differential equation.

Poisson driven stochastic differential equations are quite important in applications. For example, the whole book [34] is devoted to the applications of these equations in physics and engineering. Applications in biomathematics (population dynamics) can be found in [6]. Consider stochastic differential equations driven by jump-type processes [28]. They are typically of the form

$$\begin{aligned} dX(t) = a(X(t),\xi (t)) dt + \int _{\Theta } b(X(t), \theta ) \mathcal {N}_p(dt, d\theta ) \qquad \text {for} \quad t \ge 0 \end{aligned}$$
(7.1)

with the initial condition

$$\begin{aligned} X(0) = x_0, \end{aligned}$$
(7.2)

where \(\{X(t)\}_{t \ge 0}\) is a stochastic process with values in a separable Banach space \((Y, \Vert \cdot \Vert )\), or more explicitly

$$\begin{aligned} X(t) = x_0 + \int _0^t a(X(s), \xi (s)) ds + \int _0^t \int _{\Theta } b(X(s-), \theta ) \mathcal {N}_p (ds, d\theta ) \qquad \text{ for } \quad t\ge 0 \end{aligned}$$
(7.3)

with probability one. Here \(\mathcal {N}_p\) is a Poisson random counting measure, \(\{\xi (t)\}_{t \ge 0}\) is a stochastic process with values in a finite set \(I = \{1, \ldots , N\}\), the solution \(\{X(t)\}_{t \ge 0}\) has values in Y and is right-continuous with left-hand limits, i.e. \(X(t) = X(t+) = \lim _{s\rightarrow t^+} X(s) \), for all \(t \ge 0\) and the left-hand limits \(X(t-) = \lim _{s\rightarrow t^-} X(s) \) exist and are finite for all \(t >0\) (equalities here mean equalities with probability one).

In our study we make the following assumptions:

On a probability space \((\Omega , \Sigma , \mathbb {P})\) there is a sequence of random variables \(\{\tau _n\}_{n \ge 0}\) such that the variables \( \Delta \tau _n = \tau _n - \tau _{n-1}\), where \( \tau _0 = 0 \), are nonnegative, independent, and identically distributed with density \(g(t) = \lambda e^{-\lambda t}\) for \(t \ge 0. \)

Let \( \{\eta _n\}_{n\in \mathbb {N}} \) be a sequence of independent, identically distributed random elements with values in \(\Theta = \{1, \ldots , K\}\); their distribution will be denoted by \(\nu \). We assume that the sequences \(\{\tau _n\}_{n \ge 0}\) and \(\{\eta _n\}_{n \ge 0 }\) are independent, which implies that the mapping \(\omega \rightarrow p(\omega ) = (\tau _n(\omega ), \eta _n (\omega ))_{n \ge 0 }\) defines a stationary Poisson point process. Then for every measurable set \( Z \subset \Theta \) the random variable

$$\begin{aligned} \mathcal {N}_p((0, t]\times Z) = \# \{ i : (\tau _i, \eta _i ) \in (0, t]\times Z \} \end{aligned}$$

is Poisson distributed with parameter \(\lambda t \nu (Z)\). \(\mathcal {N}_p\) is called a Poisson random counting measure.

The coefficient \(a : Y \times I \rightarrow Y\), \( I = \{1, \ldots ,N \}\), is Lipschitz continuous with respect to the first variable.

We define \(q_{\theta }:Y \rightarrow Y\) by \(q_{\theta }(x) = x + b(x, \theta )\) for \( x \in Y,\, \theta \in \Theta .\)

For every \(i \in I\), denote by \( v_i(t) = T_i(t,x) \) the solution of the unperturbed Cauchy problem

$$\begin{aligned} v'_i (t) = a(v_i (t), i) \qquad \text {and}\quad v_i (0) = x ,\qquad x \in Y. \end{aligned}$$
(7.4)

Suppose that \([p_{ij}]_{i,j\in I}\), \(p_{ij} : Y \rightarrow [0, 1]\), is a probability matrix for which there exists \(\overline{\gamma }_1 > 0\) such that

$$\begin{aligned} \sum _{j = 1}^N |p_{ij}(x) - p_{ij}(y)|\le \overline{\gamma }_1\Vert x-y\Vert \quad \text {for }\quad x, y \in Y, \end{aligned}$$

and \([p_i]_{i\in I}\), \(p_i: Y \rightarrow [0, 1]\) is a probability vector.

Consider a sequence of random variables \(\{x_n\}_{n \ge 0} \), \( x_n : \Omega \rightarrow Y \) and a stochastic process \(\{\xi (t)\}_{t \ge 0}\), \( \xi (t) : \Omega \rightarrow I \) (describing random switching at random moments \(\tau _n\)) such that

$$\begin{aligned}&x_n = q_{\eta _n}(T_{\xi (\tau _{n-1})} (\tau _n - \tau _{n-1}, x_{n-1})) ,\nonumber \\&\mathbb {P} \{\xi (0) = k | x_0 = x \} = p_k (x),\nonumber \\&{\mathbb {P}} \{\xi (\tau _n) = s | x_{n} = y, \,\, \xi (\tau _{n-1}) = i \} = p_{is} (y), \quad \text {for}\quad n = 1,\ldots \nonumber \\&\text {and} \nonumber \\&\xi (t) = \xi (\tau _{n-1}) \qquad \text {for} \quad \tau _{n-1} \le t < \tau _{n},\quad n=1, 2, \ldots \end{aligned}$$
(7.5)

The solution of (7.3) is now given by

$$\begin{aligned} X(t) = T_{\xi (\tau _{n-1})}(t - \tau _{n-1}, x_{n-1}) \qquad \text {for} \quad \tau _{n-1} \le t < \tau _n,\quad n = 1, 2, \ldots \end{aligned}$$
(7.6)

The stochastic process \(\{(X(t), {\xi }(t))\}_{t \ge 0}\), \((X(t), {\xi } (t)) : \Omega \rightarrow Y \times I \), is a Markov process and it generates the semigroup \( \{ T^t \}_{t \ge 0} \) defined by

$$\begin{aligned} T^t f(x, i) = E_{(x, i)}(f(X(t), {\xi }(t))) \qquad \text {for} \quad f \in C({Y \times I}), \end{aligned}$$

with the corresponding semigroup of Markov operators \(\{P^t\}_{t\ge 0}\) , \( P^t: {{M}}_1(Y \times I) \rightarrow {{M}}_1 (Y \times I) \) satisfying

$$\begin{aligned} \langle P^t \mu , f \rangle = \langle \mu , T^t f \rangle \qquad \text {for} \quad f \in B({Y \times I}) ,\ \ \mu \in {{M}}_1 (Y \times I) \quad \text {and} \ \ t \ge 0. \end{aligned}$$
(7.7)

In the case when the coefficient \(a : \mathbb {R}^d \times I \rightarrow \mathbb {R}^d \) does not depend on the second variable, we obtain the stochastic equation considered by Traple [36] and by Szarek and Wȩdrychowicz [35].

In many applications we are mostly interested in values of the solution X(t) at the switching points \(\tau _n \). Setting

$$\begin{aligned} \overline{\mu }_n (A) = \mathbb {P} ((X(\tau _n), \xi (\tau _n)) \in A )\quad \text {for}\quad A \in {B}_{Y\times I}, \end{aligned}$$

we obtain \(\overline{\mu }_{n+1} =P \overline{\mu }_n\), \(n \in \mathbb {N} \), where P is given by

$$\begin{aligned} P\mu (A) = \sum _{j \in I}\int _{\Theta }\int _{ Y \times I} \int _{\mathbb {R}_+} \lambda e^{-\lambda t} 1_A(q_{\theta }(T_j (t,x)), j)p_{ij}(x)\,dt\,d\nu (\theta )\, d\mu (x, i) \end{aligned}$$
(7.8)

for \( A \in {B}_{Y\times I}\) and \( \mu \in {M}_1(Y\times I)\).

Assume that there exist positive constants \(L_q \), L, \(\alpha \) and \(x_* \in Y\) such that

$$\begin{aligned}&\Vert q_{\theta }(x) - q_{\theta }(y)\Vert \le L_q \Vert x - y\Vert \quad \text {for}\quad {\theta }\in \Theta , \, x, y \in Y,\\&\Vert T_{j} (t,x) - T_{j}(t, y)\Vert \le L e^{\alpha t} \Vert x - y\Vert \quad \text {for}\quad j \in I, \, t \ge 0, \, x, y \in Y,\\&\inf _{i\in I} \inf _{x \in Y} p_{ij}(x)> 0\quad \text {for}\quad j \in I,\\&\int _0^{+\infty }e^{-\lambda t}\Vert T_j(t,x_*) - x_*\Vert ^2dt < \infty \quad \text {for}\quad j \in I. \end{aligned}$$

If

$$\begin{aligned} (LL_q)^2 + \frac{\alpha }{\lambda }< \frac{1}{2}, \end{aligned}$$

then there exists a unique invariant measure \(\mu _*\in {M}_1^2(Y{\times } I)\) for the chain \(\{(X(\tau _n),\xi (\tau _n))\}_{n\ge 0}\), which is exponentially attractive in \({M}_1^1(Y\times I)\) and the Central Limit Theorem for the processes \(\{(X(\tau _n),\xi (\tau _n))\}_{n\ge 0}\) and \(\{(X(t), \xi (t))\}_{t \ge 0}\) holds.
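
To make Example 1 concrete, the sketch below simulates the post-jump chain \(\{(X(\tau _n), \xi (\tau _n))\}_{n\ge 0}\) of (7.5) for a hypothetical one-dimensional choice of the data: the drifts \(a(\cdot ,i)\), jump sizes \(b(\cdot ,\theta )\), measure \(\nu \) and matrix \([p_{ij}]\) used here are ours, chosen only so the code runs, and the Euler loop is a crude stand-in for the exact flow \(T_i\) of (7.4).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data on Y = R with I = {0, 1} and Theta = {0, 1};
# none of these concrete functions come from the paper.
lam = 2.0                                    # jump intensity lambda
a = [lambda x: -x, lambda x: 1.0 - x]        # drifts a(., i) in (7.4)
b = [lambda x: 0.5 * x, lambda x: -0.3 * x]  # jump sizes b(., theta)
nu = np.array([0.5, 0.5])                    # distribution of eta_n
P = lambda x: np.array([[0.7, 0.3],          # probability matrix
                        [0.4, 0.6]])         # [p_ij](x), constant here

def flow(i, t, x, h=1e-3):
    """Euler approximation of T_i(t, x), the solution of (7.4)."""
    s = 0.0
    while s < t:
        dt = min(h, t - s)
        x += dt * a[i](x)
        s += dt
    return x

def post_jump_chain(x0, i0, n_jumps):
    """Positions x_n = q_{eta_n}(T_{xi}(Delta tau_n, x_{n-1})) of (7.5)."""
    xs, x, i = [x0], x0, i0
    for _ in range(n_jumps):
        dtau = rng.exponential(1.0 / lam)  # Delta tau_n ~ Exp(lambda)
        y = flow(i, dtau, x)               # deterministic motion
        theta = rng.choice(2, p=nu)        # draw eta_n from nu
        x = y + b[theta](y)                # jump: q_theta(y) = y + b(y, theta)
        i = rng.choice(2, p=P(x)[i])       # switch state with [p_ij](x)
        xs.append(x)
    return np.array(xs)

print(post_jump_chain(0.0, 0, 5))
```

Replacing flow by an exact or higher-order ODE solver does not change the structure of the simulation.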

Example 2

Iterated Function Systems.

Let \((Y, \varrho )\) be a Polish space. An iterated function system (IFS) consists of a sequence of continuous transformations

$$\begin{aligned} q_{\theta } : Y \rightarrow Y , \quad \theta = 1, \ldots , K \end{aligned}$$

and a probability vector

$$\begin{aligned} \tilde{p}_{\theta }: Y \rightarrow [0, 1] , \quad \theta = 1, \ldots , K . \end{aligned}$$

Such a system is briefly denoted by \((q, \tilde{p} )_{K} = (q_1, \ldots ,q_K , \tilde{p}_1,\ldots , \tilde{p}_K )\). The action of an IFS can be roughly described as follows. We choose an initial point \(x_0\) and we randomly select from the set \( \Theta = \{1, \ldots , K\}\) an integer \(\theta _0\) in such a way that the probability of choosing \(\theta _0\) is \(\tilde{p}_{\theta _0}(x_0)\). If a number \(\theta _0\) is drawn, we define \(x_1 = q_{\theta _0}(x_0)\). Having \(x_1\) we select \(\theta _1\) in such a way that the probability of choosing \(\theta _1\) is \(\tilde{ p}_{\theta _1}(x_1)\). Now we define \(x_2 = q_{\theta _1}(x_1)\) and so on.
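
The iteration just described is immediate to simulate. In the sketch below the two maps and the place-dependent probabilities are hypothetical choices of ours on \(Y=[0,1]\); any Lipschitz data satisfying the assumptions would do.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical IFS (q, p~)_2 on Y = [0, 1].
q = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]
p_tilde = lambda x: [0.5 + 0.25 * np.sin(2 * np.pi * x),
                     0.5 - 0.25 * np.sin(2 * np.pi * x)]

def ifs_orbit(x0, n):
    """Iterate x_{k+1} = q_theta(x_k) with theta ~ p~_theta(x_k)."""
    xs = np.empty(n + 1)
    xs[0] = x0
    for k in range(n):
        theta = rng.choice(2, p=p_tilde(xs[k]))
        xs[k + 1] = q[theta](xs[k])
    return xs

orbit = ifs_orbit(0.3, 10_000)
# When L_q < 1, the empirical distribution of the orbit approximates
# the unique invariant measure mu_*.
```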

An IFS is a particular example of a random dynamical system with randomly chosen jumps. Consider a dynamical system of the form \(I =\{1\}\) and \(T_1(t, x) = x \) for \(x \in Y \), \( t \in \mathbb {R}_+\). Moreover assume that \(p_1(x) = 1 \) and \(p_{11}(x) = 1 \) for \(x \in Y\). Then we obtain an IFS \((q, \tilde{ p} )_{K}\).

Denoting by \(\tilde{\mu }_n\), \(n \in \mathbb {N} \), the distribution of \(x_n\), i.e. \(\tilde{\mu }_n (A) = \mathbb {P}(x_n \in A)\) for \( A \in {B}_Y\), we define \(\widetilde{P}\) as the transition operator such that \(\tilde{\mu }_{n+1} = \widetilde{P}\tilde{\mu }_n\) for \(n \in \mathbb {N} \). The transition operator corresponding to the iterated function system \((q, \tilde{p})_K\) is given by

$$\begin{aligned} \widetilde{P}\mu (A) = \sum _{\theta \in \Theta } \int _Y 1_A \big (q_{\theta }(x)\big )\tilde{p}_{\theta }(x) \,\mu (dx)\quad \text {for}\quad A \in {B}_Y ,\,\,\mu \in {M}_1(Y). \end{aligned}$$
(7.9)

If there exist positive constants \(L_q \) and \(\gamma \) such that

$$\begin{aligned}&\sum _{{\theta } \in \Theta } \tilde{p}_{\theta }(x)\varrho (q_{\theta }(x),q_{\theta }(y)) \le L_q \varrho (x,y) \quad \text {for} \quad x,y \in Y,\\&\sum _{{\theta }\in \Theta } |\tilde{p}_{{\theta }}(x) - \tilde{p}_{{\theta }}(y)| \le \gamma \varrho (x,y) \quad \text {for}\quad x,y \in Y \end{aligned}$$

with \(L_q<1\), then from Theorem 1 we obtain the existence of an invariant measure \(\mu _*\in {M}_1^1(Y)\) for the Markov operator \(\widetilde{P}\), which is attractive in \({M}_1(Y)\) and exponentially attractive in \({M}_1^1(Y)\). If \(L_q < \frac{\sqrt{2}}{2}\), then by Theorem 3 the invariant measure \(\mu _*\) has a finite second moment and by Theorem 2 the Central Limit Theorem for the iterated function system \((q,\tilde{p})_K\) holds.

Example 3

Let \(q_1\) and \(q_2\) be two maps from [0, 1] into itself defined by

$$\begin{aligned} q_1(x) = \beta x\quad \text { and} \quad q_2(x) = \beta x +(1- \beta ) \end{aligned}$$

where \(0< \beta < 1\) is a constant parameter. Consider the Markov chain with the transition probability

$$\begin{aligned} \Pi (x, A) = p(x)1_A(q_1(x)) + (1 - p(x))1_A(q_2(x)),\quad x \in [0, 1],\, A \in B_{[0, 1]}, \end{aligned}$$

where \(p: [0, 1] \rightarrow [0, 1]\) is a Lipschitz function.

Two important particular cases of this model are: the case \(p(x)= \frac{1}{2}\) for \(x \in [0, 1]\) and \(\beta = \frac{1}{2}\), in which the uniform distribution on [0, 1] is the unique stationary distribution, and the case \(p(x)= \frac{1}{2}\) for \(x \in [0, 1]\) and \(\beta = \frac{1}{3}\), in which the uniform distribution on the (middle third) Cantor set is the unique stationary distribution.
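
For illustration, this chain can be simulated directly; in the following sketch (the helper names are ours) the empirical distribution of the trajectory approximates the uniform distribution on [0, 1] for \(\beta =\frac{1}{2}\) and the uniform distribution on the Cantor set for \(\beta =\frac{1}{3}\), in accordance with the two cases above.

```python
import numpy as np

rng = np.random.default_rng(0)

def beta_chain(beta, p, x0, n):
    """Simulate the chain with transition probability Pi(x, .):
    go to q_1(x) = beta*x with probability p(x) and to
    q_2(x) = beta*x + (1 - beta) otherwise."""
    x, xs = x0, np.empty(n)
    for k in range(n):
        if rng.random() < p(x):
            x = beta * x                 # apply q_1
        else:
            x = beta * x + (1.0 - beta)  # apply q_2
        xs[k] = x
    return xs

half = lambda x: 0.5
u = beta_chain(0.5, half, x0=0.3, n=100_000)  # ~ uniform on [0, 1]
c = beta_chain(1/3, half, x0=0.3, n=100_000)  # ~ Cantor-set distribution
print(u.mean(), c.mean())  # both empirical means are close to 1/2
```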