Abstract
We consider random dynamical systems with randomly chosen jumps. The choice of the deterministic dynamical system and of the jumps depends on the position. The Central Limit Theorem for random dynamical systems is established.
1 Introduction
The main goal of the paper is to prove the Central Limit Theorem (CLT) for the Markov operator generated by random dynamical systems. The existence of an exponentially attractive invariant measure was proven by Horbacz and Ślȩczka [19].
Random dynamical systems [15, 17] take into consideration some very important and widely studied cases, namely dynamical systems generated by learning systems [1, 20, 22, 29], iterated function systems with an infinite family of transformations [37, 38], Poisson driven stochastic differential equations [16, 35, 36], random evolutions [11, 32] and irreducible Markov systems [41], used for the computer modelling of different stochastic processes.
A large class of applications of such models, both in physics and biology, is worth mentioning here: the shot noise, the photoconductive detectors, the growth of the size of structural populations, the motion of relativistic particles, both fermions and bosons (see [10, 23, 26]), and the generalized stochastic process introduced in the recent model of gene expression by Lipniacki et al. [30]; see also [3, 14, 18]. The results provide information important from a biological point of view. On the other hand, it should be noted that many Markov chains appearing, among other places, in statistical physics may be represented as iterated function systems (see [24]); for example, iterated function systems have been used to study invariant measures for the Ważewska partial differential equation, which describes the process of the reproduction of red blood cells [27].
In our paper we rely on coupling methods introduced by Hairer [12]. In the same spirit, the Central Limit Theorem was proven by Hille, Horbacz, Szarek and Wojewódka [13] for a stochastic model for an autoregulated gene. Komorowski and Walczuk studied Markov processes whose transfer operator has a spectral gap in the Wasserstein metric and proved the CLT in the non-stationary case [25].
A properly constructed coupling measure, combined with the results for stationary ergodic Markov chains given by Maxwell and Woodroofe [31], is also crucial in the proof of the CLT. Once the coupling measure is constructed, the proof of the CLT is brief and less technical than typical proofs based on Gordin’s martingale approximation.
The aim of this paper is to study stochastic processes whose paths follow deterministic dynamics between random times, jump times, at which they change their position randomly. Hence, we analyse stochastic processes in which randomness appears at times \(\tau _0< \tau _1<\tau _2<\ldots \) We assume that a point \( x_0 \in Y \) moves according to one of the dynamical systems \( T_i : {\mathbb {R}}_+ \times Y \rightarrow Y \) from some set \(\{ T_1, \ldots , T_N \}\). The motion of the process is governed by the equation \( X(t) = T_i (t, x_0) \) until the first jump time \(\tau _1\). Then we choose a transformation \(q_{\theta } : Y \rightarrow Y\) from a set \(\{q_1, \ldots , q_K \} \) and define \(x_1 = q_{\theta }(T_i (\tau _1, x_0))\). The process restarts from that new point \(x_1\) and continues as before. This gives the stochastic process \(\{X(t)\}_{t \ge 0}\) with jump times \(\{\tau _1, \tau _2, \ldots \}\) and post jump positions \(\{x_1, x_2, \ldots \}\). The probability determining the frequency with which the dynamical systems \(T_i\) are chosen is described by a matrix of probabilities \({[p_{ij}]}_{i,j=1}^N \), \(p_{ij} : Y \rightarrow [0, 1]\). The maps \(q_{\theta }\) are randomly chosen with place dependent distribution.
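To make the mechanism concrete, here is a minimal simulation sketch (not taken from the paper): \(Y=\mathbb{R}\), two hypothetical semiflows \(T_1,T_2\), two hypothetical jump maps \(q_1,q_2\), exponential inter-jump times, and, for simplicity, uniform choices in place of the place-dependent probabilities \(p_i\), \(p_{ij}\) and \(\tilde p_{\theta}\).

```python
import math
import random

# Hypothetical semiflows: T_1 contracts toward 0, T_2 expands mildly.
def T(i, t, x):
    rate = -1.0 if i == 1 else 0.3   # illustrative flow rates, not from the paper
    return x * math.exp(rate * t)

# Hypothetical jump maps with Lipschitz constant L_q = 1/2.
def q(theta, x):
    return 0.5 * x if theta == 1 else 0.5 * x + 1.0

def simulate(x0, horizon, lam=2.0, rng=random):
    """Sample the post-jump positions x_0, x_1, ... up to time `horizon`
    (jump rate lam); flow and jump indices are chosen uniformly here,
    a simplification of the place-dependent probabilities."""
    t, x, i = 0.0, x0, rng.choice([1, 2])
    path = [x0]
    while True:
        dt = rng.expovariate(lam)          # Delta tau_n ~ Exp(lam)
        if t + dt > horizon:
            return path
        y = T(i, dt, x)                    # flow until the jump time
        theta = rng.choice([1, 2])         # choice of jump map q_theta
        x = q(theta, y)                    # post-jump position x_n
        i = rng.choice([1, 2])             # next flow (in place of p_{ij})
        t += dt
        path.append(x)
```

For instance, `simulate(1.0, 10.0)` returns the sequence of post-jump positions of one trajectory on the time window \([0,10]\).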
The existence of an exponentially attractive invariant measure and the strong law of large numbers for the Markov operator generated by discrete-time random dynamical systems were proven by Horbacz and Ślȩczka in [19]. Our model is similar to the so-called piecewise-deterministic Markov process introduced by Davis [5]. There is a substantial literature devoted to the problem of the existence of an exponentially attractive invariant measure for piecewise-deterministic Markov processes. In [2] the authors consider the particular situation of random dynamical systems without jumps (i.e. \(q_{\theta }(x) = x\)), when \(Y = \mathbb {R}^d\). Under Hörmander-type bracket conditions, they prove that there exists a unique invariant measure and that the process converges to equilibrium in the total variation norm. We consider random dynamical systems with randomly chosen jumps acting on a given Polish space \((Y,\varrho )\). In fact, it is difficult to ensure that the process under consideration satisfies all the ergodic properties on a compact set. In [4] the authors consider a Markov process with two components: the first component evolves according to one of finitely many underlying Markovian dynamics, with a choice of dynamics that changes at the jump times of the second component, but also without jumps.
Given a Lipschitz function \(g:X \rightarrow \mathbb {R}\) we define
Our aim is to find conditions under which \(S_n(g)\) and \(S_t(g)\) satisfy the CLT.
The organization of the paper is as follows. Section 2 introduces basic notation and definitions that are needed throughout the paper. The random dynamical system is introduced in Sect. 3, where the main theorem (CLT) is also formulated. Section 4 is devoted to the construction of a coupling measure for random dynamical systems. Auxiliary theorems are proved in Sect. 5. The CLT for discrete and continuous time processes is established in Sect. 6. In Sect. 7 we illustrate the usefulness of our criteria for the CLT for Markov chains associated with iterated function systems with place-dependent probabilities and for Poisson driven stochastic differential equations.
2 Notation and Basic Definitions
Let \((X,\varrho _X)\) be a Polish space. We denote by \(B_X\) the family of all Borel subsets of X. Let B(X) be the space of all bounded and measurable functions \(f:X\rightarrow \mathbb {R}\) with the supremum norm. Then, C(X) is the space of all bounded and continuous functions and \(Lip_b(X)\) is the space of all bounded Lipschitz functions, also with the supremum norm.
We denote by M(X) the family of all non-negative Borel measures on X and by \(M_{fin}(X)\) and \(M_1(X)\) its subfamilies such that \(\mu (X)<\infty \) and \(\mu (X)=1\), respectively. Elements of \(M_{fin}(X)\) which satisfy \(\mu (X)\le 1\) are called sub-probability measures. To simplify notation, we write
Let \(\mu \in M(X)\); by \(L^2(\mu )\) we denote the space of square integrable functions \(g: X \rightarrow \mathbb {R}\) for which \(\Vert g\Vert ^2=\int _Xg^2d\mu < \infty \), and let \(L^2_0(\mu )\) denote the set of \( g\in L^2(\mu )\) for which \(\langle g,\mu \rangle =0\).
An operator \(P:M_{fin}(X)\rightarrow M_{fin}(X)\) is called a Markov operator if
A Markov operator \(P: M_{fin}(X)\rightarrow M_{fin}(X)\) for which there exists a linear operator \(U:B(X)\rightarrow B(X)\) such that
is called a regular operator. We say that a regular Markov operator P is Feller if \(U(C(X))\subset C(X)\). Every Markov operator P may be extended to the space of signed measures on X denoted by \(M_{sig}(X)=\{\mu _1-\mu _2:\; \mu _1,\mu _2\in M_{fin}(X)\}\).
By \(\{\Pi (x, \cdot )\, : \, x\in X\}\) we denote a transition probability function for P, i.e. a family of measures \(\Pi (x, \cdot ) \in \mathcal {M}_1(X)\) for \(x\in X\), such that the map \(x\mapsto \Pi ( x, A)\) is measurable for every \(A\in \mathcal {B}_X\) and
or equivalently
Distributions \(\Pi ^n(x,\cdot )\), \({n\in \mathbb {N}}\), are defined by induction on n
for \(x\in X\), \(A\in B_X\) .
A coupling for \(\{\Pi ^1(x, \cdot )\, : \, x\in X\}\) is a family \(\{C^1((x,y),\cdot ): x, y\in X\}\) of probability measures on \(X^2\) such that
for \(A, B\in B_X\) and \(x, y \in X\).
In the following we assume that there exists a subcoupling for \(\{\Pi ^1(x, \cdot )\, : \, x\in X\}\), i.e. a family \(\{Q^1((x,y),\cdot )\, : \, x, y \in X\}\) of subprobability measures on \(X^2\) such that the mapping \((x,y)\mapsto Q^1((x,y),A\times B)\) is measurable for every \(A,B\in B_{X}\) and
for \(A, B \in B_X\). Measures \(\{Q^1((x,y),\cdot )\, : \, x, y \in X\}\) allow us to construct a coupling for \(\{\Pi ^1(x, \cdot )\, : \, x\in X\}\). Define \(\{R^1((x,y),\cdot )\, : \, x, y \in X\}\) by
if \(Q^1((x,y),X^2)<1\) and \(R^1((x,y),A\times B)=0\) if \(Q^1((x,y),X^2)=1\) for \(A,B\in B_{X}\).
A simple computation shows that the family \(\{C^1((x,y),\cdot ): x, y\in X\}\) of probability measures on \(X\times X\) defined by
is a coupling for \(\{\Pi ^1(x, \cdot )\, : \, x\in X\}\).
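For completeness, the standard form of this construction, consistent with Hairer’s approach [12], can be summarized as follows (a sketch in the usual notation, to be compared with the displayed formulas of the paper):

```latex
R^1((x,y),A\times B)
  =\frac{\bigl(\Pi^1(x,A)-Q^1((x,y),A\times X)\bigr)
         \bigl(\Pi^1(y,B)-Q^1((x,y),X\times B)\bigr)}
        {1-Q^1((x,y),X^2)},
\qquad
C^1((x,y),\cdot)=Q^1((x,y),\cdot)+R^1((x,y),\cdot).
```

Indeed, since \(\Pi^1(y,X)=1\), the first marginal satisfies \(C^1((x,y),A\times X)=Q^1((x,y),A\times X)+\Pi^1(x,A)-Q^1((x,y),A\times X)=\Pi^1(x,A)\), and symmetrically for the second marginal.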
For fixed \(\bar{x}\in X\) we consider the space \(M_1^1(X)\) of all probability measures with the first moment finite, i.e.,
and the space \(M_1^2(X)\) of all probability measures with finite second moment, i.e.,
Both families are independent of the choice of \(\bar{x}\in X\).
Fix probability measures \(\mu ,\nu \in M_1^1(X)\) and Borel sets \(A,B\in B_X\). We consider \(b\in M_1(X^2)\) such that
and \(b^n\in M_1(X^2)\) such that, for every \(n\in \mathbb {N}\),
where \(P : M_1(X)\rightarrow M_1(X)\) is given Markov operator.
For measures \(b\in M_{fin}^1(X^2)\), i.e. finite on \(X^2\) and with finite first moment, we define the linear functional
A continuous function \(V: X\rightarrow [0, \infty )\) such that V is bounded on bounded sets and \(\lim _{x\rightarrow \infty }V(x)=+\infty \) is called a Lyapunov function.
We call \(\mu _*\in M_{fin}(X)\) an invariant measure of P if \(P\mu _*=\mu _*\). An invariant measure \(\mu _*\) is attractive if
For \(\mu \in M_{fin}(X)\), we define the support of \(\mu \) by
where B(x, r) is an open ball in X with center at \(x\in X\) and radius \(r>0\).
In \(M_{sig}(X)\), we introduce the Fortet-Mourier norm
where \(\mathcal {F}=\{f\in C(X):\;|f(x)-f(y)|\le \varrho _X(x,y),\;|f(x)|\le 1\;\text { for }\,x,y\in X\}\). The space \(M_1(X)\) with the metric \(\Vert \mu _1-\mu _2\Vert _{\mathcal {FM}}\) is complete (see [9, 33] or [39]). It is known (see Theorem 11.3.3, [7]) that the following conditions are equivalent:
(i) \(\lim _{n\rightarrow \infty }\langle f,\mu _n\rangle =\langle f,\mu \rangle \) for all \(f\in \mathcal {F}\),
(ii) \(\lim _{n\rightarrow \infty }\Vert \mu _n-\mu \Vert _{\mathcal {FM}}=0\),
where \((\mu _n)_{n\in \mathbb {N}}\subset M_1(X)\) and \(\mu \in M_1(X)\).
3 Random Dynamical Systems
Let \((Y, \varrho )\) be a Polish space, \(\mathbb {R}_+=[0,+\infty )\) and \(I = \{1, \dots ,N\}\), \( \Theta = \{1, \ldots , K\}\), where N and K are given positive integers.
We are given a family of continuous functions \(q_{\theta } : Y \rightarrow Y , {\theta } \in \Theta \) and a finite sequence of semidynamical systems \(T_{i}:\mathbb {R}_+\times Y\rightarrow Y\), \(i \in I\), i.e.
the transformations \(T_{i}:\mathbb {R}_+\times Y\rightarrow Y\), \(i \in I\) are continuous.
Let \(p_i :Y \rightarrow [0,1]\), \(i \in I\), and \(\tilde{p}_{\theta } :Y \rightarrow [0,1]\), \({\theta } \in \Theta \), be probability vectors, i.e. \(\sum _{i=1}^N p_i(x) = 1\) and \(\sum _{\theta =1} ^K\tilde{p}_{\theta }(x) = 1\) for \(x\in Y\), and let \([p_{ij}]_{i, j \in I}\), \( p_{ij}:Y\rightarrow [0, 1]\), be a matrix of probabilities, \(\sum _{j=1}^N p_{ij}(x) = 1\) for \(x\in Y\), \(i \in I\). In the sequel we denote the system by (T, q, p).
Finally, let \((\Omega , \Sigma , \mathbb {P} )\) be a probability space and \(\{\tau _n\}_{n\ge 0}\) be an increasing sequence of random variables \(\tau _n :\Omega \rightarrow \mathbb {R}_+\) with \(\tau _0 =0\) and such that the increments \(\Delta \tau _n=\tau _n-\tau _{n-1}\), \(n \in \mathbb {N} \), are independent and have the same density \(g(t)=\lambda e^{-\lambda t}\), \( t \ge 0 \).
The intuitive description of the random dynamical system corresponding to the system (T, q, p) is the following.
For an initial point \(x_0 \in Y \) we randomly select a transformation \(T_{i_0}\) from the set \(\{T_1 , \ldots , T_N \}\) in such a way that the probability of choosing \(T_{i_0}\) is equal to \(p_{i_0}(x_0)\), and we define
Next, at the random moment \(\tau _1\), at the point \(T_{i_0}(\tau _1, x_0)\) we choose a jump \(q_{\theta }\) from the set \(\{q_1, \ldots ,q_K\}\) with probability \(\tilde{p}_{\theta }(T_{i_0}(\tau _1, x_0 ))\) and we define
Finally, given \(x_n\), \(n\ge 1 \), we choose \( T_{i_n} \) in such a way that the probability of choosing \( T_{i_n} \) is equal to \(p_{i_{n-1}i_n}(x_n)\) and we define
At the point \( T_{i_n}(\Delta \tau _{n+1}, x_n ) \) we choose \(q_{{\theta }_n}\) with probability \(\tilde{p}_{{\theta }_n}(T_{i_n}(\Delta \tau _{n+1}, x_n))\). Then we define
The above considerations may be reformulated as follows. Let \(\{\xi _n\}_{n \ge 1}\) and \(\{\gamma _n\}_{n \ge 1}\) be sequences of random variables, \(\xi _n :\Omega \rightarrow I\) and \( \gamma _n :\Omega \rightarrow \Theta \), such that
Assume that \(\{\xi _n\}_{n \ge 0}\) and \(\{\gamma _n\}_{n \ge 1} \) are independent of \(\{\tau _n\}_{n \ge 0}\) and that for every \(n \in \mathbb {N}\).
Given an initial random variable \(\xi _1\) the sequence of the random variables \(\{x_n\}_{n\ge 0}\), \(x_n : \Omega \rightarrow Y \), is given by
and the stochastic process \(\{X(t)\}_{t \ge 0}\), \(X(t) : \Omega \rightarrow Y\), is given by
We obtain a piecewise deterministic trajectory for \(\{X(t)\}_{t \ge 0}\) with jump times \(\{\tau _1, \tau _2, \ldots \}\) and post jump locations \(\{x_1, x_2, \ldots \}\).
Now define a stochastic process \(\{\xi (t) \}_{t \ge 0}\), \(\xi (t): \Omega \rightarrow I \), by
It is easy to see that \(\{X(t)\}_{t \ge 0}\) and \(\{x_n\}_{n \ge 0}\) are not Markov processes. In order to use the theory of Markov operators we must redefine the processes \(\{X(t)\}_{t \ge 0}\) and \(\{x_n\}_{n \ge 0}\) in such a way that the redefined processes become Markov.
To this end, consider the space \( X = Y\times I \) endowed with the metric \(\varrho _X\) given by
where \(\varrho _c\) is the discrete metric in I. The constant c will be chosen later.
We will study the Markov chain \(\{(x_n, \xi _n) \}_{n\ge 0}\) , \((x_n, \xi _n) : \Omega \rightarrow X \) and the Markov process \( \{(X(t), \xi (t))\}_{t \ge 0}\), \((X(t), \xi (t)) : \Omega \rightarrow X \).
Now consider the sequence of distributions
It is easy to see that
where \(P : \mathcal {M}_1(X)\rightarrow \mathcal {M}_1(X)\) is the Markov operator given by
and its dual operator \(U:B(X)\rightarrow B(X)\) by
The semigroup \(\{P^t\}_{t \ge 0}\) generated by the process \( \{(X(t), \xi (t))\}_{t \ge 0}\), \((X(t), \xi (t)) : \Omega \rightarrow X \) is given by
where
(E denotes the mathematical expectation on \((\Omega , \Sigma , \mathbb {P})\).)
A measure \(\mu _0\) is called invariant with respect to \(P^t\) if \(P^t\mu _0 = \mu _0\) for every \(t \ge 0\).
We make the following assumptions on the system (T, q, p).
There are three constants \(L\ge 1\), \(\alpha \in \mathbb {R} \) and \(L_q > 0\) such that
and
Assume that there exists \(x_* \in Y\) such that
We also assume that the functions \(\tilde{p}_{\theta }\), \({\theta } \in \Theta \), and \(p_{ij}\), \(i,j \in I\), satisfy the following conditions
where \(\overline{\gamma }_1, \overline{\gamma }_2 > 0\).
Moreover, we assume that there are \(i_0 \in I, {\theta }_0 \in \Theta \) such that
and
Let \(\{(x_n, \xi _n)\}_{n\in \mathbb {N}}\) be the Markov chain given by (3.1) and (3.2). The existence of an exponentially attractive invariant measure for the Markov operator generated by random dynamical systems was proven by Horbacz and Ślȩczka in [19].
Theorem 1
[19] Assume that system (T, q, p) satisfies conditions (3.10)–(3.15). If
then
(i) there exists a unique invariant measure \(\mu _*\in \mathcal {M}_1^1(X)\) for the chain \(\{(x_n,\xi _n)\}_{n\ge 0}\), which is attractive in \(\mathcal {M}_1(X)\),
(ii) there exists \(q\in (0,1)\) such that for every \(\mu \in \mathcal {M}_1^1(X)\) there exists \(C = C(\mu )>0\) such that
$$\begin{aligned} ||P^{n}\mu -\mu _* ||_{FM}\le q^n C(\mu ),\quad \text {for}\quad n\in \mathbb {N}, \end{aligned}$$where \(x_*\) is given by (3.12),
(iii) the strong law of large numbers holds for the chain \(\{(x_n,\xi _n)\}_{n\ge 0}\) starting from \((x_0,\xi _0 )\in X\), i.e. for every bounded Lipschitz function \(f:X\rightarrow \mathbb {R}\) and every \(x_0\in Y\) and \(\xi _0\in I\) we have
$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{k=0}^{n-1} f(x_k,\xi _k)=\int _{X} f(x,\xi )\, \mu _*(dx,d\xi ) \end{aligned}$$
\(\mathbb {P}_{x_0,\xi _0}\)-almost surely.
Remark 1
Condition (3.16) means that a large jump rate and good contraction properties of the jumps can compensate for expanding semiflows (\(\alpha > 0\)).
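For instance, the compensation expressed by condition (3.16) can be checked numerically via the constant \(a=\lambda L L_q/(\lambda -\alpha )\) appearing in the proofs of Sect. 5; the parameter values below are hypothetical.

```python
# Contraction constant a = lam * L * L_q / (lam - alpha) from Sect. 5;
# condition (3.16) requires lam > alpha and a < 1.
def contraction_constant(lam, L, L_q, alpha):
    if lam <= alpha:
        raise ValueError("need lam > alpha")
    return lam * L * L_q / (lam - alpha)

# A hypothetical expanding semiflow (alpha = 0.5 > 0) compensated by frequent,
# strongly contracting jumps (lam = 2, L = 1, L_q = 0.3):
a = contraction_constant(lam=2.0, L=1.0, L_q=0.3, alpha=0.5)  # a = 0.4 < 1
```

With these values \(a = 2\cdot 1\cdot 0.3/1.5 = 0.4 < 1\), so the jumps dominate the expansion of the flow.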
Let \(\{(x_n, \xi _n)\}_{n\in \mathbb {N}}\) be the Markov chain given by (3.1), (3.2) with initial distribution \(\mu \in M_1^2(X)\) and let \(\mu _*\in \mathcal {M}_1^1(X)\) be the unique invariant measure for the process \(\{(x_n,\xi _n)\}_{n\ge 0}\). Now, choose an arbitrary function \(g:X\rightarrow \mathbb {R} \) which is Lipschitz and satisfies \(\langle g,\mu _*\rangle =0\). For every \(n\in \mathbb {N}\), put
Now we formulate the main result of this paper. Its proof is given in Sect. 6.
Theorem 2
If all assumptions of Theorem 1 are fulfilled and the unique invariant measure has a finite second moment, then \(S_n^{\mu }\) converges in distribution, as \(n\rightarrow \infty \), to a random variable with normal distribution \(N(0, \sigma ^2)\), where \(\sigma ^2 = \lim _{n \rightarrow \infty }E_{\mu _*}(S_n^{\mu _*})^2\).
Checking that the invariant measure has a finite second moment can be difficult if we have no a priori information about the invariant measure. We therefore strengthen assumption (3.12) to the following condition:
Theorem 3
Assume that system (T, q, p) satisfies condition (3.17) and that conditions (3.14)–(3.15) are satisfied for all \(i_0 \in I\), \({\theta }_0 \in \Theta \), rather than just for some \(i_0 \in I\), \({\theta }_0 \in \Theta \). If
then the invariant measure \(\mu _*\) for the process \(\{(x_n,\xi _n)\}_{n\ge 0}\) has finite second moment.
Note that (3.18) implies (3.16). Assuming (3.18) instead of (3.16) allows us to show that \(\mu _* \in M_1^2(X)\), which is essential to establish the CLT in the way presented in this paper.
The next result, describing the CLT for the process \(\{x_n\}_{n\ge 0}\) on Y, is an immediate consequence of Theorem 2.
Remark 2
Choose an arbitrary function \(f : Y \rightarrow \mathbb {R} \) which is Lipschitz and satisfies \(\langle f,\tilde{\mu }_*\rangle =0\), where \(\tilde{\mu }_*(A) = \mu _*(A\times I),\,A \in B_Y\). Let \(\tilde{\mu } \in M_1^2(Y)\) be an initial distribution of \(\{x_n\}_{n \in \mathbb {N}}\). Under the hypotheses of Theorem 2 the distribution of
converges weakly, as \(n\rightarrow \infty \), to the normal distribution \(N(0, \sigma ^2)\), where \(\sigma ^2 = \lim _{n \rightarrow \infty }E_{\tilde{\mu }_*}(S_n^{\tilde{\mu }_*})^2\).
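As a sanity check of the discrete-time CLT, one can run a toy Monte Carlo experiment; the contracting affine chain below is a hypothetical stand-in for \(\{x_n\}\), not the model of the paper. For this chain the long-run variance works out to \(\sigma ^2=1\).

```python
import random

# Toy chain (illustration only): x_{n+1} = x_n/2 + eps_n, eps_n uniform on {0, 1}.
# The stationary mean is 1, so g(x) = x - 1 is centered, and
# S_n = n^{-1/2} * sum_{k<n} g(x_k) should be approximately N(0, sigma^2)
# with sigma^2 = Var(eps)/(1 - 1/2)^2 = 1.
def clt_samples(n_steps=200, n_runs=2000, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(n_runs):
        x, s = 1.0, 0.0                           # start at the stationary mean
        for _ in range(n_steps):
            x = 0.5 * x + rng.choice((0.0, 1.0))  # one step of the chain
            s += x - 1.0                          # accumulate g(x_k)
        samples.append(s / n_steps ** 0.5)
    return samples

samples = clt_samples()
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean should be near 0 and var near 1, matching N(0, 1).
```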
Let \(\{(X(t), \xi (t))\}_{t \ge 0}\) be the Markov process given by (3.3) and (3.4). Relationships between an invariant measure for the Markov operator P given by (3.6) and invariant measures for \( \{ P^t \}_{t \ge 0} \) given by (3.8) were established by Horbacz [17]. Similar results have been proved by Davis [5, Proposition 34.36]. The problem has also been studied in [28].
The existence of an invariant measure for \(\{P^t\}_{t \ge 0}\) follows from Theorem 1 and Theorem 5.3.1 of [17]. If \(\mu _* \in {M}_1(X) \) is an invariant measure for the Markov operator P, then \( {\mu }_0 = G\mu _*, \) where
is an invariant measure for the Markov semigroup \(\{P^t\}_{t \ge 0}\).
The next theorem is partially inspired by the reasoning which can be found in Lemma 2.5 of [2]. Since the Markov process \(\{(X(t), \xi (t))\}_{t \ge 0}\) is defined with the help of the Markov chain \(\{(x_n, \xi _n)\}_{n\in \mathbb {N}}\) given by (3.1), (3.2), we use Theorems 1 and 2 in the proof of the following theorem.
Theorem 4
If all assumptions of Theorems 1 and 2 are fulfilled and the unique invariant measure \(\mu _0\) has a finite second moment, then
(1) the strong law of large numbers holds for the process \(\{(X(t),\xi (t))\}_{t\ge 0}\) starting from \((x_0, i_0 )\in X\), i.e. for every bounded Lipschitz function \(f:X\rightarrow \mathbb {R}\) and every \(x_0\in Y\) and \(i_0\in I\) we have
$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{1}{t}\int _0^t f(X(s),\xi (s)) ds=\int _{X} f(x,i)\, \mu _0(dx,di) \end{aligned}$$\(\mathbb {P}_{x_0,i_0}\) almost surely,
(2) the Central Limit Theorem holds for the process \(\{(X(t),\xi (t))\}_{t\ge 0}\), i.e. for every bounded Lipschitz function \(f:X\rightarrow \mathbb {R}\) such that \(\langle f, \mu _0 \rangle = 0\)
$$\begin{aligned} \frac{1}{\sqrt{t}}\int _0^t f(X(s),\xi (s)) ds \end{aligned}$$
converges in distribution, as \(t\rightarrow \infty \), to a random variable with normal distribution \(N(0, \tilde{\sigma }^2)\), where \(\tilde{\sigma } ^2 = \lim _{n \rightarrow \infty }E_{\mu _*}(S_n^{\mu _*})^2 + \langle Hf - \tilde{K}^2f, \mu _* \rangle \) and
4 Coupling for Random Dynamical Systems
Let \(P:M_{fin}(X) \rightarrow M_{fin}(X)\) be the transition Markov operator for the random dynamical system (T, q, p), where \(X= Y\times I\).
Distributions \(\Pi ^n((x, i),\cdot )\), \({n\in \mathbb {N}}\), are given by
for \((x, i) \in X\), \(A\in B_X.\) Assume that, for \((x, i)\in X\), \(\bar{\Pi }^{n}((x, i),\cdot )\) is the measure on \(X^n\) generated by the sequence \((\Pi ^k((x, i),\cdot ))_{k\in \mathbb {N}}\); then
where \(z=((z_1,i_1),\ldots ,(z_n,i_n))\) and \(A\in B_{X^n}\), \(B\in B_X\), is a measure on \(X^{n+1}\). Note that \(\Pi ^1((x, i),\cdot ), \ldots ,\Pi ^n((x, i),\cdot )\), given by (4.1), are marginal distributions of \(\bar{\Pi }^{n}((x, i),\cdot )\), for every \((x, i)\in X\). Finally, we obtain a family \(\{\Pi ^{\infty }((x, i),\cdot ):(x, i)\in X\}\) of sub-probability measures on \(X^{\infty }\). This construction is motivated by Hairer [12].
Denote by
and consider the probabilities \( \mathcal {P}_n : Y \times I^{n+1}\times \mathbb {R}_+^{n-1} \times \Theta ^{n-1} \rightarrow [0, 1]\) and \(\overline{\mathcal {P}}_n : Y \times I^n \times \mathbb {R}^n_+ \times \Theta ^n \rightarrow [0, 1]\) given by
for \(n \ge 2\), and
for \(n \ge 2\), where
Then \(P^n\) is given by
Fix \( x_*\in Y\) for which assumption (3.12) holds. We define \(V:X \rightarrow [0,\infty )\), by
Lemma 1
Assume that the system (T, q, p) satisfies conditions (3.10)–(3.12) and (3.16). If \(\mu \in M^1_1(X)\), then \(P^n\mu \in M^1_1(X)\) for every \(n\in \mathbb {N}\). Moreover, there are constants \(a< 1\) and \(c >0\) such that
Proof
Further, using (3.10)–(3.12) and (3.16) we obtain
where
so V is a Lyapunov function for P. \(\square \)
Furthermore, we define \(\bar{V}:X^2\rightarrow [0,\infty )\)
Note that, for every \(n\in \mathbb {N}\),
where b and \(b^n\) are given by (2.2) and (2.3). Since the measure \(b\in M_{fin}^1(X^2)\) is finite on \(X^2\) and with the first moment finite we define the linear functional
Following the above definitions, we easily obtain
Set \(F=X\times X\) and define
for \(A, B\subset X\), where \(a\wedge b\) stands for the minimum of a and b, and
It is easy to check that
and analogously \(Q^1((x_1, i_1)(x_2,i_2),X\times B)\le \Pi ^1((x_2, i_2),B)\). Similarly, for \(n\in \mathbb {N}\),
For \(b\in M_{fin}(X^2)\), let \(Q^nb\) denote the measure
for \(A,B\in B_{X}, n\in \mathbb {N}\). Note that, for every \(A,B\in B_{X}\) and \(n\in \mathbb {N}\), we obtain
Again, following (4.1) and (4.2), we are able to construct measures on products and, as a consequence, a measure \(Q^{\infty }b\) on \(X^{\infty }\), for every \(b\in M_{fin}(X^2)\). Now, we check that, for \(n\in \mathbb {N}\) and \(b\in M_{fin}(X^2)\),
Let us observe that
Following (3.10) and (3.11), we obtain
We may construct the coupling \(\{C^1(((x, i),(y, j)),\cdot ):(x, i),(y, j)\in X\}\) for \(\{\Pi ^1((x, i),\cdot ):(x, i)\in X\}\) such that \(Q^1(((x, i),(y, j)),\cdot )\le C^1(((x, i),(y, j)),\cdot )\), whereas measures \(R^1(((x, i),(y, j)),\cdot )\) are non-negative. Following the rule given in (4.2), we easily obtain the family of probability measures
on \((X^2)^{\infty }\) with marginals \(\Pi ^{\infty }((x, i),\cdot )\) and \(\Pi ^{\infty }((y, j),\cdot )\). This construction appears in [12].
We may also consider a sequence of distributions \((\{C^n(((x, i),(y, j)),\cdot )\})_{n\in \mathbb {N}}\), constructed by induction on n, as it is done in (4.1). Note that \(C^n(((x, i),(y, j)),\cdot )\) is the n-th marginal of \(C^{\infty }(((x, i),(y, j)),\cdot )\), for \((x, i),(y, j)\in X\). Additionally, \(\{C^n(((x, i),(y, j)),\cdot )\}\) fulfills the role of coupling for \(\{\Pi ^n((x, i),\cdot ):(x, i)\in X\}\). Indeed, for \(A\in B_Y\),
and, similarly, \(C^n(((x, i),(y, j)),X\times B)=\Pi ^n((y, j),B)\).
Fix \(((x_0, i_0),(y_0, j_0))\in X^2\). The sequence of transition probability functions \(\Big (\{C^n(((x, i),(y, j)),\cdot ):(x, i),(y, j)\in X\}\Big )_{n\in \mathbb {N}}\) defines the Markov chain \(\mathcal {Z}\) on \(X^2\) with starting point \(((x_0, i_0),(y_0, j_0))\), while the sequence of transition probability functions
defines the Markov chain \(\hat{\mathcal{{Z}}}\) on the augmented space \(X^2\times \{0,1\}\) with initial distribution \(\hat{C}^0(((x_0, i_0),(y_0, j_0)),\cdot )=\delta _{((x_0, i_0),(y_0, j_0),1)}(\cdot )\). If \(\hat{\mathcal{{Z}}}_n=((x, i),(y, j),k)\), where \((x, i),(y, j)\in X\), \(k\in \{0,1\}\), then
where \(A,B\in B_Y\). Once again, we refer to (4.1) and the Kolmogorov theorem to obtain the measure \(\hat{C}^{\infty }(((x_0, i_0),(y_0, j_0)),\cdot )\) on \((X^2\times \{0,1\})^{\infty }\) which is associated with the Markov chain \(\hat{\mathcal{{Z}}}\).
From now on, we assume that processes \(\mathcal{{Z}}\) and \(\hat{\mathcal{{Z}}}\) taking values in \(X^2\) and \(X^2\times \{0,1\}\), respectively, are defined on \((\Omega , \Sigma , \mathbb {P})\). The expected value of the measures \(C^{\infty }(((x_0, i_0),(y_0, j_0)),\cdot )\) or \(\hat{C}^{\infty }(((x_0, i_0),(y_0, j_0)),\cdot )\) is denoted by \(E_{(x_0, i_0),(y_0, j_0)}\).
5 Auxiliary Theorems
Before proceeding to the proof of Theorem 2 we formulate two lemmas and two theorems, which are interesting in their own right. The first one is inspired by the reasoning which can be found in [13].
Fix \(\tilde{a}\in (0,1-a)\) and set
where a and c are given by (4.5). Let \(\tau _{K_{\tilde{a}}}:(X^2)^{\infty }\rightarrow \mathbb {N}\) denote the time of the first visit in \(K_{\tilde{a}}\), i.e.
As a convention, we put \(\tau _{K_{\tilde{a}}}(((x_n, i_n),(y_n, j_n))_{n\in \mathbb {N}})=\infty \), if there is no \(n\in \mathbb {N}\) such that \(((x_n, i_n),(y_n, j_n))\in K_{\tilde{a}}\).
Since
by Lemma 2.2 in [21] or Theorem 7 in [13], we obtain
Lemma 2
For every \(\zeta \in (0,1)\) there exist positive constants \(D_1,D_2\) such that
For every \(r>0\), we define the set
Lemma 3
Assume that the system (T, q, p) satisfies conditions (3.10)–(3.11) and (3.15)–(3.16). Fix \(a_1\in (a,1)\). Let \(C_r\) be the set defined above and suppose that \(b\in M_{fin}(X^2)\) is such that \(\mathrm{supp}b\subset C_r\). There exists \(\bar{\gamma }>0\) such that
Proof
By (4.3), (4.8) and (4.9), we obtain
Directly from (4.9) and (4.10) we obtain
Set
Note that \( 1_{C_{a_1^nr}}(({q}\circ T)_n( \mathbf{{}t_n}, \mathbf{{}{\theta }_n}, \mathbf{{}i_n}, x), ({q}\circ T)_n( \mathbf{{}t_n}, \mathbf{{}{\theta }_n}, \mathbf{{}i_n}, y))=1\) if and only if \( (\mathbf{{}t_n}, \mathbf{{}{\theta }_n}, \mathbf{{}i_n}) \in \mathcal {T}_n\times S_n \times I_n\). Set \((\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n)^{'}:=\mathbb {R}_+^n\times \Theta ^n \times I^n\backslash \mathcal {T}_n\times S_n \times I_n\). According to assumptions (3.10) and (3.11), we have
for \((x,i),(y, j)\in C_r\), where \(a = \frac{\lambda L L_q}{\lambda - \alpha }. \) Comparing this with the definition of \((\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n )'\), we obtain
which implies
We then obtain that the integral over \(\mathcal {T}_n\times \mathcal {S}_n \times \mathcal {I}_n\) is not less than \(1-\Big (\frac{a}{a_1}\Big )^n\ge (1-\frac{a}{a_1})^n=:\gamma ^n\), for every \(n\in \mathbb {N}\).
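The elementary inequality used here holds for every \(n\ge 1\): writing \(x=a/a_1\in (0,1)\), we have

```latex
(1-x)^n \;\le\; 1-x \;\le\; 1-x^n,
\qquad x\in(0,1),\ n\ge 1,
```

since \(0\le 1-x\le 1\) gives \((1-x)^n\le 1-x\), while \(x^n\le x\) gives \(1-x\le 1-x^n\); hence \(1-(a/a_1)^n\ge \gamma ^n\) with \(\gamma =1-a/a_1\).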
Using (3.15) we obtain
where
Finally,
If we set \(\bar{\gamma }:= \frac{ \delta _1 \delta _2 }{M_1M_2} \gamma \), the proof is complete. \(\square \)
Theorem 5
Assume that the system (T, q, p) satisfies conditions (3.10)–(3.16). For every \(\tilde{a}\in (0,1-a)\), there exists \(n_0\in \mathbb {N}\) such that
where \(\bar{\gamma }>0\) is given in Lemma 3.
Proof
Note that, for all real numbers \(u, v\in \mathbb {R}\), we have \(\;\min \{u,v\}+|u-v|-u\ge 0\). Hence, for every \((x_1,i_1),(x_2,i_2)\in X\), we obtain
and therefore, due to (4.8),
For every \(b\in M_{\text {fin}}(X^2)\), we get
We consider two cases: \(i_1 = i_2 = i \) and \(i_1 \ne i_2\). From (3.10) and (3.13), we obtain for \(i_1 = i_2 = i \)
where \(\gamma _2 = \overline{\gamma }_2\frac{L\lambda }{\lambda - \alpha }\).
Suppose now that \(i_1 \ne i_2\), then \(\varrho _c(i_1, i_2)= c > (\overline{\gamma }_1 + \gamma _2)^{-1}\). In this case, we obtain
Thus
Hence,
By (4.11) and
we obtain
where \( a = \frac{LL_q\lambda }{\lambda - \alpha }. \)
We may choose \(r>0\) such that \(d((x_1, i_1),(x_2, i_2))<r\) and
Since
and \(\mathrm{supp}b\subset C_r\), then we obtain
Fix \(\tilde{a}\in (0,1-a)\). It is clear that \(K_{\tilde{a}}\subset C_{\tilde{a}^{-1} 2c}\). If we define \(n_0:=\min \{n\in \mathbb {N}:\, a^n(\tilde{a})^{-1}2c<r\}\), then \(C_{a^{n_0}\tilde{a}^{-1} 2c}\subset C_r\). Remembering that \(Q^{n+m}(((x,i),(y,j)),\cdot )=Q^m(Q^n(((x,i),(y,j)),\cdot ))\) and using the Markov property, we obtain
Then, according to (5.2) and Lemma 3, we obtain
for \(((x,i),(y,j))\in K_{\tilde{a}}\). This finishes the proof. \(\square \)
The next theorem is partially inspired by the reasoning which can be found in Lemma 2.1 of [21].
Theorem 6
Under the hypothesis of Theorem 1, there exist \(\tilde{q}\in (0,1)\) and \(D_3>0\) such that
Proof
Fix \(\tilde{a}\in (0,1-a)\) and \(((x, i),(y, j))\in X^2\). To simplify notation, we write \(\alpha =(a+\tilde{a})^{-\frac{1}{2}}\). Let s be the random moment of the first visit to \(K_{\tilde{a}}\). Suppose that
where \(n\in \mathbb {N}\) and \(\vartheta _n\) are shift operators on \((X^2\times \{0,1\})^{\infty }\), i.e.
Theorem 5 implies that every \(s_n\) is \(C^{\infty }(((x, i),(y, j)),\cdot )\)-a.s. finite. The strong Markov property shows that
where \(F_{s_n}\) denotes the \(\sigma \)-algebra on \((X^2\times \{0,1\})\) generated by \(s_n\) and \(\mathcal{{Z}}=((x_n, i_n),(y_n, j_n))_{n\in \mathbb {N}}\) is the Markov chain with sequence of transition probability functions \((\{C^{1}(((x, i),(y, j))\cdot ):(x, i),(y, j)\in X\})_{i\in \mathbb {N}}\). By Theorem 5 and the definition of \(K_{\tilde{a}}\), we obtain
Fix \(\eta =D_1\tilde{a}^{-1} 2c+D_2\). Consequently,
We define \(\hat{\tau }(((x_n, i_n), (y_n, j_n),\theta _n)_{n\in \mathbb {N}})=\inf \{n\in \mathbb {N}:\; ((x_n, i_n), (y_n, j_n))\in K_{\tilde{a}}, \ \;\theta _k=1\,\text { for }k\ge n\}\) and \(\sigma =\inf \{n\in \mathbb {N}:\;\hat{\tau }=s_n\}\). By Theorem 5, there is \(n_0\in \mathbb {N}\) such that
Let \(p>1\). By the Hölder inequality, (5.3) and (5.4), we obtain
For p sufficiently large and \(\tilde{q}=\alpha ^{-\frac{1}{p}}\), we get
for some \(D_3\). Since \(\tau \le \hat{\tau }\), we finish the proof. \(\square \)
6 Central Limit Theorem: Proof of Theorems 2, 3 and 4
Let \(\{(x_n, \xi _n)\}_{n\in \mathbb {N}}\) be the Markov chain given by (3.1) and (3.2) with initial distribution \(\mu \in M_1^2(X)\), \( X = Y \times I\). Let \(g \in L^2_0(\mu )\). Define
and let \(\Phi S_n^\mu \) denote its distribution.
Denote by \(\mu _*\in \mathcal {M}_1^1(X)\) an invariant measure for the process \(\{(x_n,\xi _n)\}_{n\ge 0}\).
Central Limit Theorems for ergodic stationary Markov chains have already been proven in many papers; see, for example, Theorem 1 and the subsequent Corollary 1 in Maxwell and Woodroofe [31].
Theorem 7
[31] Let \(g\in L^2_0(\mu _*)\). If the following condition is satisfied
then there exists
and the sequence of distributions of \((S_n^{\mu _*})_{n\ge 0}\) converges weakly to the normal distribution \(N(0, \sigma ^2)\).
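Theorem 7 is easy to visualise numerically. The following sketch is only an illustration, not part of the argument: the two-state chain, its transition matrix and the centred observable \(g\) are our own toy choices, unrelated to the system studied in this paper.

```python
import numpy as np

# Toy illustration of the Markov-chain CLT (Theorem 7): for an ergodic
# two-state chain started from its stationary distribution pi and a
# centred observable g (so that <g, pi> = 0), the normalised sums
# S_n = n^{-1/2} * sum_{k<n} g(x_k) are approximately N(0, sigma^2).
rng = np.random.default_rng(0)

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])     # toy transition matrix
pi = np.array([4 / 7, 3 / 7])  # its stationary distribution (pi @ P == pi)
g = lambda x: x - 3 / 7        # centred observable: <g, pi> = 0

def sample_S_n(n):
    x = 0 if rng.random() < pi[0] else 1   # start in stationarity
    total = 0.0
    for _ in range(n):
        total += g(x)
        x = 0 if rng.random() < P[x, 0] else 1
    return total / np.sqrt(n)

samples = np.array([sample_S_n(400) for _ in range(1000)])
print(samples.mean(), samples.var())   # mean near 0, variance near sigma^2
```

The empirical variance approximates the limit variance \(\sigma ^2\) of Theorem 7, which for a dependent chain also accounts for the autocovariances, not just \(\mathrm{Var}_\pi (g)\).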
Proof of Theorem 2
We split the proof into four steps.
Step 1 Let \(f\in \mathcal {F}\). Then, there exist \(q\in (0,1)\) and \(D_5>0\) such that
for every \((x, i),(y, j)\in X\), \(n\in \mathbb {N}\), where \(\Pi ^*_n:(X^2\times \{0,1\})^{\infty }\rightarrow X^2\times \{0,1\}\) is the projection onto the \(n\)-th component and \(\Pi ^*_{X^2}:X^2\times \{0,1\}\rightarrow X^2\) is the projection onto \(X^2\).
For \(n\in \mathbb {N}\) we define sets
Thus, we have for \(n\in \mathbb {N}\)
Note that, by iterative application of (4.12), we obtain
Then it follows from (4.6) and (4.7) that
We obtain the coupling inequality
It follows from Theorem 6 and the Chebyshev inequality that
for some \(\tilde{q}\in (0,1)\) and \(D_3>0\). Finally,
where \(D_4=\max \{a^{\frac{1}{2}},(1-a)^{-1}2c\}\). Setting \(q:=\max \{a^{\frac{1}{2}},\tilde{q}^{\frac{1}{2}}\}\) and \(D_5:=D_4+2D_3\), gives our claim.
Step 2 If \(g:X\rightarrow \mathbb {R}\) is an arbitrary bounded Lipschitz function with constant \(C_g\), then there are \(q\in (0,1)\) and \(D_5>0\), exactly the same as in Step 1, such that
for every \((x, i),(y, j)\in X\), \(n\in \mathbb {N}\), where \(G:=\max \{C_g,\sup _{x\in X}|g(x)|\}\).
Let \(S_n^{\mu }\) and \(\Phi S_n^\mu \) be given by (6.1). In particular, \(S_n^{\mu _*}\) and \(S_n^{(x,i)}\) are defined for the Markov chains with the same transition probability function \(\Pi \) and initial distributions \(\mu _*\) and \(\delta _{(x, i)}\), respectively. Further, let \(g:X\rightarrow \mathbb {R}\) be a bounded and Lipschitz continuous function, with constant \(C_g\), which satisfies \(\langle g,\mu _*\rangle =0\).
Step 3 Let \(g:X\rightarrow \mathbb {R}\) be a bounded and Lipschitz continuous function with constant \(C_g\) such that \(\langle g,\mu _*\rangle =0\). Then,
Note that, by Step 1 and Step 2,
Then, for every \((x, i)\in X\), \(n\in \mathbb {N}\),
where \(C_9:=GD_5(1-q)^{-1}(1+\int _XV((y, j))\mu _*(d(y, j)))\). Since \(\mu _*\) has a finite second moment, we obtain that (6.3) is no larger than
Hence, the assumptions of Theorem 7 are satisfied.
Step 4 Hence, by applying Theorem 7, we obtain that \(\Phi {S_n^{\mu _*}}\) converges to the normal distribution in the Lévy metric, as \(n\rightarrow \infty \), which is equivalent to weak convergence of the corresponding distributions (see [8]).
Note that, to complete the proof of Theorem 2, it is enough to establish that \(\Phi {S_n^{\mu }}\) converges weakly to \(\Phi {S_n^{\mu _*}}\), as \(n\rightarrow \infty \). Equivalently, it is enough to show that \(\lim _{n\rightarrow \infty }\Vert \Phi {S_n^{\mu }}-\Phi {S_n^{\mu _*}}\Vert _{\mathcal {FM}}=0\), since weak convergence is metrised by the Fortet–Mourier norm.
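For the reader's convenience, recall the Fortet–Mourier norm of a signed measure \(\mu \) in its standard bounded Lipschitz normalisation (cf. [9]); the class of test functions below is our reconstruction of \(\mathcal {F}\), which the excerpt does not display explicitly:
\[
\Vert \mu \Vert _{\mathcal {FM}}
   = \sup \Big \{ \Big | \int _X f\, d\mu \Big | \;:\;
     f:X\rightarrow \mathbb {R},\ \sup _{x\in X}|f(x)|\le 1,\
     |f(x)-f(y)|\le \varrho (x,y)\ \text {for all }x,y\in X \Big \}.
\]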
Fix \((x, i),(y, j)\in X\) and choose an arbitrary \(f\in \mathcal {F}\). Suppose for the moment that the following convergence holds, as \(n\rightarrow \infty \),
Then, by the Dominated Convergence Theorem, we obtain
as \(n\rightarrow \infty \). Note that, by Theorem 11.3.3 in [7], (6.5) implies that \(\Phi {S_n^{\mu }}\) converges weakly to \(\Phi {S_n^{\mu _*}}\), as \(n\rightarrow \infty \), which completes the proof of the CLT in the model. Now, it remains to show (6.4). Note that
We may write
where \(\Pi ^*_{n}:(X^2\times \{0,1\})^{\infty }\rightarrow (X^2\times \{0,1\})^n\) is the projection onto the first \(n\) components and \(\Pi ^*_{X^{2n}}:(X^2\times \{0,1\})^n\rightarrow X^{2n}\) is the projection onto \(X^{2n}\). Since \(f\) is Lipschitz with constant \(C_f\), we may further estimate (6.7) as follows
Now, for every \(1\le i\le n\), we refer to Step 1 and Step 2 to observe that (6.7) is no larger than
thanks to the upper bound in (6.6). Letting \(n\) tend to infinity, we obtain (6.4). The proof is complete. \(\square \)
Proof of Theorem 3
Let \(\mu \in M_1^2(X)\). Fix \((x, i)\in X \).
Further, using (3.17), (3.18) and (3.14) for all \(i_0\in I\) and \(\theta _0 \in \Theta \), we obtain
where
Since
it follows that
We take a non-decreasing sequence \((V^2_k)_{k\in \mathbb {N}}\) such that \(V^2_k(y)=\min \{k,V^2(y)\}\), for every \(k\in \mathbb {N}\) and \(y\in Y\). We know that \(P^n\mu \) converges weakly to \(\mu _*\). Hence, for all \(k\in \mathbb {N}\), \(V^2_k\in C(X)\) and
so the sequence \((\left\langle V^2_k,\mu _*\right\rangle )_{k\in \mathbb {N}}\) is bounded. Because \((V^2_k)_{k\in \mathbb {N}}\) is non-negative and non-decreasing, we may use the Monotone Convergence Theorem to obtain
so, indeed, \(\mu _*\) has a finite second moment. \(\square \)
Proof of Theorem 4
Theorem 1 implies that there exists an invariant measure \(\mu _* \in {M}_1(X) \) for the Markov operator P given by (3.6). By Theorem 5.3.1 of [17], \( {\mu }_0 = G\mu _*, \) where
is an invariant measure for the Markov semigroup \(\{P^t\}_{t \ge 0}\) given by (3.8). Define
then \( \left\langle f, G\mu \right\rangle = \left\langle \tilde{G}f, \mu \right\rangle .\) For every \(f \in B(X)\), we set
and
Decomposing \([0, t]\) along the jumps, we obtain
where \(\Vert R_t\Vert \le \Vert f\Vert \frac{\tau _{N_t+1} - \tau _{N_{t}}}{N_t}.\)
For \(n\in \mathbb {N}\) and \(f \in B(X)\) we define
Using (6.8), it is easy to check that
Since
\( (M_n, \mathcal {F}_n)_{n\in \mathbb {N}} \) is a martingale with increments in \(L^2\): \(E((M_{n+1} - M_n)^2) \le \frac{6\Vert f\Vert ^2}{\lambda ^2}.\) Therefore, by the strong law of large numbers for martingales
Observe that \( \lim _{t \rightarrow \infty } \frac{N_t}{t} = \lambda \) and \( \lim _{t \rightarrow \infty }R_t = 0 \) almost surely. Hence, from (6.10) with \(a =1\) and (6.11), we have
with probability one.
Note that \(\tilde{G}f\in Lip_b(X) \) for \( f \in Lip_b(X)\). By applying Theorem 1, we obtain
\(\mathbb {P}_{x_0,\xi _0}\) almost surely. Therefore, by (6.12), we have
The proof of Theorem 4 (i) is complete. \(\square \)
Moreover, for \(f \in Lip_b(X)\)
where the operators H and \(\tilde{K}\) are given by (3.19). Since \(Hf - \tilde{K}^2f \in Lip_b(X)\) for \( f \in Lip_b(X)\), by Theorem 1
Thus, all assumptions of Theorem A.1 of [40] are satisfied. By the Central Limit Theorem for martingales, \(\frac{M_n}{\sqrt{n}}\) converges in distribution to some random variable with normal distribution \(N(0, \sigma _1^2)\), as \(n\rightarrow \infty .\)
Furthermore, from (6.10) for \(a = \frac{1}{2}\) and Theorem A.1 [40], we obtain
converges in distribution to some random variable with normal distribution \(N(0, \sigma _1^2)\), as \(n\rightarrow \infty .\)
Finally, let \( f : Y \rightarrow \mathbb {R}\) be a bounded and Lipschitz continuous function such that \(\langle f, \mu _0\rangle = 0 \); then \( \langle \tilde{G}f, \mu _* \rangle = 0\). By (6.13) and Theorem 2, we obtain the CLT for the process \((X(t), \xi (t))_{t \ge 0}\).
7 Applications
Example 1
Poisson driven stochastic differential equation.
Poisson driven stochastic differential equations are quite important in applications. For example, the entire book [34] is devoted to the applications of these equations in physics and engineering. Applications in biomathematics (population dynamics) can be found in [6]. Consider stochastic differential equations driven by jump-type processes [28]. They are typically of the form
with the initial condition
where \(\{X(t)\}_{t \ge 0}\) is a stochastic process with values in a separable Banach space \((Y, \Vert \cdot \Vert )\), or more explicitly
with probability one. Here \(\mathcal {N}_p\) is a Poisson random counting measure, \(\{\xi (t)\}_{t \ge 0}\) is a stochastic process with values in a finite set \(I = \{1, \ldots , N\}\), the solution \(\{X(t)\}_{t \ge 0}\) has values in Y and is right-continuous with left-hand limits, i.e. \(X(t) = X(t+) = \lim _{s\rightarrow t^+} X(s) \), for all \(t \ge 0\) and the left-hand limits \(X(t-) = \lim _{s\rightarrow t^-} X(s) \) exist and are finite for all \(t >0\) (equalities here mean equalities with probability one).
In our study we make the following assumptions:
On a probability space \((\Omega , \Sigma , \mathbb {P})\) there is a sequence of random variables \(\{\tau _n\}_{n \ge 0}\) such that the variables \( \Delta \tau _n = \tau _n - \tau _{n-1}\), where \( \tau _0 = 0 \), are nonnegative, independent, and identically distributed with density \(g(t) = \lambda e^{-\lambda t}\) for \(t \ge 0. \)
Let \( \{\eta _n\}_{n\in \mathbb {N}} \) be a sequence of independent identically distributed random elements with values in \(\Theta = \{1, \ldots , K\}\); their distribution will be denoted by \(\nu \). We assume that the sequences \(\{\tau _n\}_{n \ge 0}\) and \(\{\eta _n\}_{n \ge 0 }\) are independent, which implies that the mapping \(\omega \mapsto p(\omega ) = (\tau _n(\omega ), \eta _n (\omega ))_{n \ge 0 }\) defines a stationary Poisson point process. Then for every measurable set \( Z \subset \Theta \) the random variable
is Poisson distributed with parameter \(\lambda t \nu (Z)\). \(\mathcal {N}_p\) is called a Poisson random counting measure.
The coefficient \(a : Y \times I \rightarrow Y\), \( I = \{1, \ldots ,N \}\), is Lipschitz continuous with respect to the first variable.
We define \(q_{\theta }:Y \rightarrow Y\) by \(q_{\theta }(x) = x + b(x, \theta )\) for \( x \in Y,\, \theta \in \Theta .\)
For every \(i \in I\), denote by \( v_i(t) = T_i(t,x) \) the solution of the unperturbed Cauchy problem
Suppose that \([p_{ij}]_{i,j\in I}\), \(p_{ij} : Y \rightarrow [0, 1]\), is a probability matrix and that there exists \(\overline{\gamma }_1 > 0\) such that
and \([p_i]_{i\in I}\), \(p_i: Y \rightarrow [0, 1]\) is a probability vector.
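The point-process construction in the assumptions above is easy to simulate. In the sketch below the rate \(\lambda \), the horizon \(t\), and the mark distribution \(\nu \) are arbitrary illustrative values of ours; we only check empirically that \(\mathcal {N}_p([0,t]\times Z)\) has mean \(\lambda t \nu (Z)\).

```python
import numpy as np

# Sketch of the marked point process (tau_n, eta_n): exponential
# interarrival times Delta tau_n with rate lam, and i.i.d. marks eta_n
# with law nu on Theta = {1, ..., K}.  The number of marks in Z over
# [0, t] should be Poisson with mean lam * t * nu(Z).
rng = np.random.default_rng(1)
lam, t, K = 2.0, 5.0, 3
nu = np.array([0.5, 0.3, 0.2])   # illustrative mark distribution on Theta
Z = {1}                          # count only the marks equal to 1

def count_in_Z():
    tau, n = 0.0, 0
    while True:
        tau += rng.exponential(1.0 / lam)    # Delta tau_n ~ Exp(lam)
        if tau > t:
            return n
        if (rng.choice(K, p=nu) + 1) in Z:   # eta_n ~ nu, values 1..K
            n += 1

counts = np.array([count_in_Z() for _ in range(3000)])
print(counts.mean(), counts.var())   # both near lam * t * nu(Z) = 5
```

That the empirical mean and variance agree is the Poisson signature: \(\mathcal {N}_p([0,t]\times Z)\) is Poisson distributed with parameter \(\lambda t \nu (Z)\), as stated above.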
Consider a sequence of random variables \(\{x_n\}_{n \ge 0} \), \( x_n : \Omega \rightarrow Y \) and a stochastic process \(\{\xi (t)\}_{t \ge 0}\), \( \xi (t) : \Omega \rightarrow I \) (describing random switching at random moments \(\tau _n\)) such that
The solution of (7.3) is now given by
The stochastic process \(\{(X(t), {\xi }(t))\}_{t \ge 0}\), \((X(t), {\xi } (t)) : \Omega \rightarrow Y \times I \), is a Markov process and generates the semigroup \( \{ T^t \}_{t \ge 0} \) defined by
with the corresponding semigroup of Markov operators \(\{P^t\}_{t\ge 0}\) , \( P^t: {{M}}_1(Y \times I) \rightarrow {{M}}_1 (Y \times I) \) satisfying
In the case when the coefficient \(a : \mathbb {R}^d \times I \rightarrow \mathbb {R}^d \) does not depend on the second variable, we obtain the stochastic equation considered by Traple [36] and by Szarek and Wȩdrychowicz [35].
In many applications we are mostly interested in the values of the solution \(X(t)\) at the jump times \(\tau _n \). Setting
we obtain \(\overline{\mu }_{n+1} =P \overline{\mu }_n\), \(n \in \mathbb {N} \), where P is given by
for \( A \in {B}_{Y\times I}\) and \( \mu \in {M}_1(Y\times I)\).
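To make the post-jump chain \(\{(X(\tau _n),\xi (\tau _n))\}_{n\ge 0}\) concrete, here is a hedged toy realisation. Every ingredient below is our own illustrative choice, not taken from the paper: linear flows \(T_i(t,x)=xe^{-c_i t}\) (solving \(x'=-c_i x\)), additive jumps \(q_\theta (x)=x+b_\theta \), a uniform mark law \(\nu \), and a place-independent switching matrix \([p_{ij}]\).

```python
import math
import random

# Toy simulation of the chain (X(tau_n), xi(tau_n)).  Assumed toy data:
#   flow:      T_i(t, x) = x * exp(-c_i * t)  (solution of x' = -c_i x),
#   jump:      q_theta(x) = x + b_theta,
#   switching: a constant probability matrix [p_ij].
random.seed(2)
lam = 1.0                                  # jump intensity
c = {1: 0.5, 2: 1.5}                       # decay rates of the two flows
b = {1: 0.3, 2: -0.2}                      # jump shifts; marks ~ uniform nu
p_switch = {1: [0.8, 0.2], 2: [0.3, 0.7]}  # rows of the switching matrix

def step(x, i):
    dt = random.expovariate(lam)           # Delta tau_n ~ Exp(lam)
    y = x * math.exp(-c[i] * dt)           # follow the flow T_i until the jump
    x_new = y + b[random.choice([1, 2])]   # jump q_theta with theta ~ nu
    i_new = 1 if random.random() < p_switch[i][0] else 2
    return x_new, i_new

x, i = 0.0, 1
traj = []
for _ in range(5000):
    x, i = step(x, i)
    traj.append(x)
print(max(abs(v) for v in traj))   # contractive flows plus bounded jumps keep the chain bounded
```

With contractive flows and bounded jumps the simulated chain settles quickly into a statistical steady state, in line with the exponential attractivity of \(\mu _*\) stated below.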
Assume that there exist positive constants \(L_q \), L, \(\alpha \) and \(x_* \in Y\) such that
If
then there exists a unique invariant measure \(\mu _*\in {M}_1^2(Y{\times } I)\) for the chain \(\{(X(\tau _n),\xi (\tau _n))\}_{n\ge 0}\), which is exponentially attractive in \({M}_1^1(Y\times I)\) and the Central Limit Theorem for the processes \(\{(X(\tau _n),\xi (\tau _n))\}_{n\ge 0}\) and \(\{(X(t), \xi (t))\}_{t \ge 0}\) holds.
Example 2
Iterated Function Systems.
Let \((Y, \varrho )\) be a Polish space. An iterated function system (IFS) consists of a sequence of continuous transformations
and a probability vector
Such a system is briefly denoted by \((q, \tilde{p} )_{K} = (q_1, \ldots ,q_K , \tilde{p}_1,\ldots , \tilde{p}_K )\). The action of an IFS can be roughly described as follows. We choose an initial point \(x_0\) and we randomly select from the set \( \Theta = \{1, \ldots , K\}\) an integer \(\theta _0\) in such a way that the probability of choosing \(\theta _0\) is \(\tilde{p}_{\theta _0}(x_0)\). If a number \(\theta _0\) is drawn, we define \(x_1 = q_{\theta _0}(x_0)\). Having \(x_1\) we select \(\theta _1\) in such a way that the probability of choosing \(\theta _1\) is \(\tilde{ p}_{\theta _1}(x_1)\). Now we define \(x_2 = q_{\theta _1}(x_1)\) and so on.
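The verbal description of the iteration translates directly into code. In the sketch below the two affine contractions and the place-dependent probability \(\tilde{p}_1\) are illustrative choices of ours, with \(\tilde{p}_1(x)+\tilde{p}_2(x)=1\).

```python
import random

# The IFS iteration described above: at x, draw an index theta with
# probability p_tilde_theta(x), then move to q_theta(x).  The two
# contractions and the place-dependent probability are illustrative.
random.seed(3)
q = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]  # contractions of [0, 1]
p1 = lambda x: 0.25 + 0.5 * x                     # p_tilde_1(x); p_tilde_2 = 1 - p_tilde_1

x = random.random()
orbit = []
for _ in range(10000):
    theta = 0 if random.random() < p1(x) else 1   # index drawn at the current point
    x = q[theta](x)
    orbit.append(x)
print(min(orbit), max(orbit))   # the orbit never leaves [0, 1]
```

Note that the probability of each map is re-evaluated at the current point on every step; this place dependence is exactly what distinguishes such systems from IFS with constant probabilities.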
An IFS is a particular example of a random dynamical system with randomly chosen jumps: consider the trivial dynamical system with \(I =\{1\}\) and \(T_1(t, x) = x \) for \(x \in Y \), \( t \in \mathbb {R}_+\), and assume that \(p_1(x) = 1 \) and \(p_{11}(x) = 1 \) for \(x \in Y\). Then we obtain an IFS \((q, \tilde{ p} )_{K}\).
Denoting by \(\tilde{\mu }_n\), \(n \in \mathbb {N} \), the distribution of \(x_n\), i.e., \(\,\tilde{\mu }_n (A) = \mathbb {P}(x_n \in A)\) for \( A \in {B}_Y\), we define \(\widetilde{P}\) as the transition operator such that \(\tilde{\mu }_{n+1} = \widetilde{P}\tilde{\mu }_n\) for \(n \in \mathbb {N} \). The transition operator corresponding to the iterated function system \((q, \tilde{p})_K\) is given by
If there exist positive constants \(L_q \) and \(\gamma \) such that
with \(L_q<1\), then from Theorem 1 we obtain the existence of an invariant measure \(\mu _*\in {M}_1^1(Y)\) for the Markov operator \(\widetilde{P}\), which is attractive in \({M}_1(Y)\) and exponentially attractive in \({M}_1^1(Y)\). If \(L_q < \frac{\sqrt{2}}{2}\), then by Theorem 3 the invariant measure \(\mu _*\) has a finite second moment and by Theorem 2 the Central Limit Theorem holds for the iterated function system \((q,\tilde{p})_K\).
Example 3
Let \(q_1\) and \(q_2\) be two maps from \([0, 1]\) into itself defined by
where \(0< \beta < 1\) is a constant parameter. Consider the Markov chain with the transition probability
where \(p: [0, 1] \rightarrow [0, 1]\) is a Lipschitz function.
Two important particular cases of this model are the following: for \(p(x)= \frac{1}{2}\), \(x \in [0, 1]\), and \(\beta = \frac{1}{2}\), the uniform distribution on \([0, 1]\) is the unique stationary distribution; for \(p(x)= \frac{1}{2}\), \(x \in [0, 1]\), and \(\beta = \frac{1}{3}\), the uniform distribution on the (middle-third) Cantor set is the unique stationary distribution.
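Since the displayed formulas for \(q_1\) and \(q_2\) are not reproduced in this excerpt, the sketch below assumes the standard pair \(q_1(x)=\beta x\), \(q_2(x)=\beta x + 1-\beta \), which is consistent with the Cantor-set statement for \(\beta =\frac{1}{3}\). For \(\beta =\frac{1}{2}\) and \(p\equiv \frac{1}{2}\) the stationary distribution should then be uniform on \([0,1]\):

```python
import random

# Empirical check of the beta = 1/2, p = 1/2 case under the ASSUMED maps
# q_1(x) = beta * x and q_2(x) = beta * x + (1 - beta): the stationary
# distribution is uniform on [0, 1] (mean 1/2, variance 1/12).
random.seed(4)
beta = 0.5
x = random.random()
orbit = []
for n in range(200000):
    x = beta * x if random.random() < 0.5 else beta * x + (1 - beta)
    if n >= 1000:                     # discard a burn-in prefix
        orbit.append(x)

mean = sum(orbit) / len(orbit)
var = sum((v - mean) ** 2 for v in orbit) / len(orbit)
print(mean, var)   # approximately 0.5 and 1/12 = 0.0833...
```

For \(\beta =\frac{1}{2}\) the iteration builds the binary expansion of a uniform random number digit by digit, which explains the uniform stationary law.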
References
Barnsley, M.F., Demko, S.G., Elton, J.H., Geronimo, J.S.: Invariant measures arising from iterated function systems with place dependent probabilities. Ann. Inst. H. Poincaré 24, 367–394 (1988)
Benaim, M., Le Borgne, S., Malrieu, F., Zitt, P.-A.: Qualitative properties of certain piecewise deterministic Markov processes. Ann. Inst. H. Poincaré Probab. Stat. 51(3), 1040–1075 (2015)
Bobrowski, A.: Degenerate convergence of semigroups related to a model of eukaryotic gene expression. Semigroup Forum 73, 343–366 (2006)
Cloez, B., Hairer, M.: Exponential ergodicity for Markov processes with random switching. Bernoulli 21(1), 505–536 (2015)
Davis, M.H.A.: Markov Models and Optimization. Chapman and Hall, London (1993)
Diekmann, O., Heijmans, H.J., Thieme, H.R.: On the stability of the cell size distribution. J. Math. Biol. 19, 227–248 (1984)
Dudley, R.M.: Real Analysis and Probability. Cambridge University Press, Cambridge (2004)
Ethier, S.N., Kurtz, T.G.: Markov Processes. Characterization and Convergence. Wiley, New York (1986)
Fortet, R., Mourier, B.: Convergence de la répartition empirique vers la répartition théorique. Ann. Sci. École Norm. Sup. 70, 267–285 (1953)
Frisch, K.U.: Wave propagation in random media, stability. In: Bharucha-Reid, A.T. (ed.) Probabilistic Methods in Applied Mathematics. Academic Press, New York (1986)
Griego, R.J., Hersh, R.: Random evolutions, Markov chains and systems of partial differential equations. Proc. Natl. Acad. Sci. USA 62, 305–308 (1969)
Hairer, M.: Exponential mixing properties of stochastic PDEs through asymptotic coupling. Probab. Theory Relat. Fields 124, 345–380 (2002)
Hille, S.C., Horbacz, K., Szarek, T., Wojewódka, H.: Limit theorems for some Markov operators. J. Math. Anal. Appl. 443, 385–408 (2016)
Hille, S., Horbacz, K., Szarek, T.: Existence of a unique invariant measure for a class of equicontinuous Markov operators with application to a stochastic model for an autoregulated gene. Submitted for publication
Horbacz, K.: Random dynamical systems with jumps. J. Appl. Probab. 41, 890–910 (2004)
Horbacz, K.: Asymptotic stability of a semigroup generated by randomly connected Poisson driven differential equations. Boll. Unione Mat. Ital. 9–B(8), 545–566 (2006)
Horbacz, K.: Invariant measures for random dynamical systems. Dissertationes Math. Vol. 451 (2008)
Horbacz, K.: Continuous random dynamical systems. J. Math. Anal. Appl. 408, 623–637 (2013)
Horbacz, K., Ślȩczka, M.: Law of large numbers for random dynamical systems. J. Stat. Phys. 162, 671–684 (2016)
Iosifescu, M., Theodorescu, R.: Random Processes and Learning. Springer, New York (1969)
Kapica, R., Ślȩczka, M.: Random iteration with place dependent probabilities, arXiv:1107.0707 [math.PR] (2012)
Karlin, S.: Some random walks arising in learning models. Pac. J. Math. 3, 725–756 (1953)
Keller, J.B.: Stochastic equations and wave propagation in random media. Proc. Symp. Appl. Math. 16, 1456–1470 (1964)
Kifer, Y.: Ergodic Theory of Random Transformations (Progress in Probability). Birkhäuser, Boston (1986)
Komorowski, T., Walczuk, A.: Central limit theorem for Markov processes with spectral gap in the Wasserstein metric. Stoch. Proc. Appl. 122, 2155–2184 (2012)
Kudo, T., Ohba, I.: Derivation of relativistic wave equation from the Poisson process. Pramana J. Phys. 59, 413–416 (2002)
Lasota, A., Szarek, T.: Dimension of measures invariant with respect to Ważewska partial differential equations. J. Differ. Equ. 196(2), 448–465 (2004)
Lasota, A., Traple, J.: Invariant measures related with Poisson driven stochastic differential equation. Stoch. Process. Appl. 106(1), 81–93 (2003)
Lasota, A., Yorke, J.A.: Lower bound technique for Markov operators and iterated function systems. Random Comput. Dyn. 2, 41–77 (1994)
Lipniacki, T., Paszek, P., Marciniak-Czochra, A., Brasier, A.R., Kimmel, M.: Transcriptional stochasticity in gene expression. J. Theor. Biol. 238, 348–367 (2006)
Maxwell, M., Woodroofe, M.: Central limit theorems for additive functionals of Markov chains. Ann. Probab. 28(2), 713–724 (2000)
Pinsky, M.A.: Lectures on Random Evolution. World Scientific, Singapore (1991)
Rachev, S.T.: Probability Metrics and the Stability of Stochastic Models. Wiley, New York (1991)
Snyder, D.: Random Point Processes. Wiley, New York (1975)
Szarek, T., Wȩdrychowicz, S.: Markov semigroups generated by a Poisson driven differential equation. Nonlinear Anal. 50, 41–54 (2002)
Traple, J.: Markov semigroup generated by Poisson driven differential equations. Bull. Pol. Acad. Sci. Math. 44, 161–182 (1996)
Tyrcha, J.: Asymptotic stability in a generalized probabilistic/deterministic model of the cell cycle. J. Math. Biol. 26, 465–475 (1988)
Tyson, J.J., Hannsgen, K.B.: Cell growth and division: a deterministic /probabilistic model of the cell cycle. J. Math. Biol. 23, 231–246 (1986)
Villani, C.: Optimal Transport: Old and New, Grundlehren der mathematischen Wissenschaften, vol. 338. Springer, Berlin (2009)
Walczuk, A.: Central limit theorem for an additive functional of a Markov process, stable in the Wasserstein metric. Ann. Univ. Mariae Curie Sklodowska Sect. A 62, 149–159 (2008)
Werner, I.: Contractive Markov systems. J. Lond. Math. Soc. 71(2), 236–258 (2005)
Horbacz, K. The Central Limit Theorem for Random Dynamical Systems. J Stat Phys 164, 1261–1291 (2016). https://doi.org/10.1007/s10955-016-1601-1