Law of Large Numbers for Random Dynamical Systems

We consider random dynamical systems with randomly chosen jumps. The choice of the deterministic dynamical system and of the jumps depends on the position. We prove the existence of an exponentially attractive invariant measure and the strong law of large numbers.


Introduction
In the present paper we are concerned with the problem of proving the law of large numbers (LLN) for random dynamical systems.
The question of establishing the LLN for an additive functional of a Markov process is one of the most fundamental in probability theory, and there exists a rich literature on the subject; see e.g. the monograph of Meyn and Tweedie [17] and the citations therein. However, in most of the existing results it is assumed that the process under consideration is stationary and that its equilibrium state is stable in some sense, usually in the L^2 or total variation norm. Our stability condition is formulated in a metric weaker than the total variation distance.
The law of large numbers we study in this note was also considered in many papers. Our results are based on a version of the law of large numbers due to Shirikyan (see [19], [20]). Recently Komorowski, Peszat and Szarek [12] obtained the weak law of large numbers for the passive tracer model in a compressible environment, and Walczuk proved the LLN in the non-stationary case for Markov processes whose transfer operator has a spectral gap in the Wasserstein metric [25].
A large class of applications of such models, both in physics and biology, is worth mentioning here: shot noise, photoconductive detectors, the growth of the size of structured populations, the motion of relativistic particles, both fermions and bosons (see [2], [11], [13]), and the generalized stochastic process introduced in the recent model of gene expression by Lipniacki et al. [7].
A number of results establish the existence of a unique, asymptotically stable invariant measure for Markov processes generated by random dynamical systems whose state space need not be locally compact. We consider random dynamical systems with randomly chosen jumps acting on a given Polish space (Y, ̺).
The aim of this paper is to study stochastic processes whose paths follow deterministic dynamics between random times, the jump times, at which they change their position randomly. Hence we analyse stochastic processes in which randomness appears at times t_0 < t_1 < t_2 < . . .. We assume that a point x_0 ∈ Y moves according to one of the dynamical systems Π_i; the motion of the process is governed by the equation X(t) = Π_i(t, x_0) until the first jump time t_1. Then we choose a transformation q_s : Y → Y from a family {q_s : s ∈ S = {1, . . . , K}} and define x_1 = q_s(Π_i(t_1, x_0)). The process restarts from the new point x_1 and continues as before. This gives the stochastic process {X(t)}_{t≥0} with jump times {t_1, t_2, . . .} and post-jump positions {x_1, x_2, . . .}. The frequency with which the dynamical systems Π_i are chosen is described by a matrix of probabilities [p_ij]_{i,j∈I}, and the maps q_s are randomly chosen with place-dependent distribution. Given a Lipschitz function ψ : X → R we define the averages S_n(ψ) = (1/n) Σ_{k=1}^{n} ψ(x_k). Our aim is to find conditions under which S_n(ψ) satisfies the law of large numbers. Our results are based on an exponential convergence theorem due to Ślȩczka and Kapica (see [9]) and a version of the law of large numbers due to Shirikyan (see [19], [20]).

Notation and basic definitions
Let (X, d) be a Polish space, i.e. a complete and separable metric space, and denote by B_X the σ-algebra of Borel subsets of X. By B_b(X) we denote the space of bounded Borel-measurable functions equipped with the supremum norm; C_b(X) stands for the subspace of bounded continuous functions. Let M_fin(X) and M_1(X) be the sets of Borel measures on X such that µ(X) < ∞ for µ ∈ M_fin(X) and µ(X) = 1 for µ ∈ M_1(X). The elements of M_1(X) are called probability measures.
The elements of M_fin(X) for which µ(X) ≤ 1 are called subprobability measures. By supp µ we denote the support of the measure µ. We also define M_1^L(X) = {µ ∈ M_1(X) : ∫_X L(x) µ(dx) < ∞}, where L : X → [0, ∞) is an arbitrary Borel measurable function, and M_1^1(X) = {µ ∈ M_1(X) : ∫_X d(x̄, x) µ(dx) < ∞}, where x̄ ∈ X is fixed. By the triangle inequality the family M_1^1(X) is independent of the choice of x̄. The space M_1(X) is equipped with the Fortet–Mourier metric
‖µ − ν‖_FM = sup{ |∫_X f dµ − ∫_X f dν| : f ∈ F },
where F = {f ∈ C_b(X) : |f(x) − f(y)| ≤ d(x, y) and |f(x)| ≤ 1 for x, y ∈ X}.
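The Fortet–Mourier distance rarely admits a closed form, but the supremum over F can be bounded from below by maximizing over a finite subfamily of admissible test functions. The sketch below (plain NumPy; the Gaussian samples and the grid of test functions f_c(x) = clip(x − c, −1, 1) are illustrative assumptions, not taken from this paper) estimates such a lower bound for two empirical measures on X = R with d(x, y) = |x − y|.

```python
import numpy as np

def fm_lower_bound(xs, ys, centers):
    """Lower bound on the Fortet-Mourier distance between the empirical
    measures of the samples xs and ys, obtained by restricting the
    supremum to the 1-Lipschitz, [-1, 1]-valued test functions
    f_c(x) = clip(x - c, -1, 1) for c in `centers`."""
    best = 0.0
    for c in centers:
        fx = np.clip(xs - c, -1.0, 1.0).mean()  # empirical integral of f_c d(mu)
        fy = np.clip(ys - c, -1.0, 1.0).mean()  # empirical integral of f_c d(nu)
        best = max(best, abs(fx - fy))
    return best

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, 10_000)   # sample from mu (illustrative)
ys = rng.normal(0.5, 1.0, 10_000)   # sample from nu (illustrative)
centers = np.linspace(-3.0, 3.0, 61)
print(fm_lower_bound(xs, ys, centers))
```

Since each f_c is 1-Lipschitz and bounded by 1, the maximum over the grid never exceeds the true distance; refining the grid of centers tightens the bound.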
Let P : B_b(X) → B_b(X) be a Markov operator, i.e. a linear operator satisfying P1_X = 1_X and Pf(x) ≥ 0 if f ≥ 0. Denote by P* the dual operator, i.e. the operator P* : M_fin(X) → M_fin(X) defined by P*µ(A) = ∫_X P1_A(x) µ(dx) for A ∈ B_X. We say that a measure µ_* ∈ M_1(X) is invariant for P if P*µ_* = µ_*. By {P_x : x ∈ X} we denote a transition probability function for P, i.e. a family of measures P_x ∈ M_1(X), x ∈ X, such that the map x ↦ P_x(A) is measurable and Pf(x) = ∫_X f(y) P_x(dy) for every x ∈ X, f ∈ B_b(X) and A ∈ B_X.
In the following we assume that there exists a subcoupling for {P_x : x ∈ X}, i.e. a family {Q_{x,y} : x, y ∈ X} of subprobability measures on X² such that the map (x, y) ↦ Q_{x,y}(B) is measurable for every Borel B ⊂ X², and
Q_{x,y}(A × X) ≤ P_x(A), Q_{x,y}(X × A) ≤ P_y(A)
for every x, y ∈ X and Borel A ⊂ X.
The measures {Q_{x,y} : x, y ∈ X} allow us to construct a coupling for {P_x : x ∈ X}.
Define on X² the family of measures {R_{x,y} : x, y ∈ X} which on rectangles A × B are given by
R_{x,y}(A × B) = (1 − Q_{x,y}(X²))⁻¹ (P_x(A) − Q_{x,y}(A × X)) (P_y(B) − Q_{x,y}(X × B))
when Q_{x,y}(X²) < 1, and R_{x,y}(A × B) = 0 otherwise. A simple computation shows that the family {B_{x,y} : x, y ∈ X} of measures on X² defined by B_{x,y} = Q_{x,y} + R_{x,y} is a coupling for {P_x : x ∈ X}. The following theorem due to M. Ślȩczka and R. Kapica (see [9]) will be used in the proof of Theorem 4.1 in Section 4.
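On a finite state space the passage from a subcoupling Q_{x,y} to B_{x,y} = Q_{x,y} + R_{x,y} can be checked by direct computation. The following sketch (NumPy; the diagonal subcoupling is one simple admissible choice, not the construction used in [9]) builds R from the normalized product of the residual masses and verifies that B has the required marginals.

```python
import numpy as np

def coupling(p, q):
    """Given probability vectors p, q (the measures P_x, P_y on a finite
    space), return B = Q + R, where Q is the diagonal subcoupling
    Q[i, i] = min(p[i], q[i]) and R is the normalized product of the
    leftover masses, mirroring the construction in the text."""
    Q = np.diag(np.minimum(p, q))
    s = Q.sum()                      # total subcoupled mass, Q(X^2)
    if s < 1.0:
        rp = p - Q.sum(axis=1)       # residual of the first marginal
        rq = q - Q.sum(axis=0)       # residual of the second marginal
        R = np.outer(rp, rq) / (1.0 - s)
    else:
        R = np.zeros_like(Q)         # Q is already a coupling
    return Q + R

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.2, 0.6])
B = coupling(p, q)
# B is a probability measure on X^2; its marginals recover p and q.
print(B.sum(axis=1), B.sum(axis=0))
```

The row sums of B equal p and the column sums equal q, which is exactly the coupling property of B_{x,y} stated above.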
Theorem 2.1. Assume that a Markov operator P and transition probabilities {P_x : x ∈ X} satisfy the following conditions:
A2. There exist F ⊂ X² and α ∈ (0, 1) such that supp Q_{x,y} ⊂ F and …
A3. There exist δ > 0, l > 0 and ν ∈ (0, 1] such that …
where E_{x,y} denotes the expectation with respect to the chain starting from (x, y) and with transition function {B_{x,y} : x, y ∈ X}. Then the operator P possesses a unique invariant measure µ_* ∈ M_1^L(X), which is attractive in M_1(X). Moreover, there exist q ∈ (0, 1) and C > 0 such that … for µ ∈ M_1^L(X) and n ∈ N.
We will also need a version of the strong law of large numbers due to A. Shirikyan ([19], [20]). It is originally formulated for Markov chains on a Hilbert space, but an analysis of the proof shows that it remains true on Polish spaces.
Theorem 2.2. Let (Ω, F, P) be a probability space and let X be a Polish space. Suppose that for a family of Markov chains ((X_n^x)_{n≥0}, P_x)_{x∈X} on X with Markov operator P : B_b(X) → B_b(X) there exist a unique invariant measure µ_* ∈ M_1(X), a continuous function v : X → R_+ and a sequence (γ_n)_{n∈N} of positive numbers such that γ_n → 0 as n → ∞ and …, and that there exists a continuous function h such that …, where E_x is the expectation with respect to P_x. Then for any x ∈ X and any bounded Lipschitz function f : X → R we have …

Random Dynamical Systems
Let (Y, ̺) be a Polish space and let Π_i : R_+ × Y → Y, i ∈ I = {1, . . . , N}, be a finite family of dynamical systems. We are given probability vectors p_i : Y → [0, 1], i ∈ I, p_s : Y → [0, 1], s ∈ S = {1, . . . , K}, a matrix of probabilities [p_ij]_{i,j∈I}, p_ij : Y → [0, 1], i, j ∈ I, and a family of continuous functions q_s : Y → Y, s ∈ S. In the sequel we denote the system by (Π, q, p).
Finally, let (Ω, Σ, P) be a probability space and {t_n}_{n≥0} be an increasing sequence of random variables t_n : Ω → R_+ with t_0 = 0 and such that the increments ∆t_n = t_n − t_{n−1}, n ∈ N, are independent and have the same density g(t) = λe^{−λt}, t ≥ 0.
The action of randomly chosen dynamical systems, with randomly chosen jumps, at random moments t k corresponding to the system (Π, q, p) can be roughly described as follows.
We choose an initial point x_0 ∈ Y and randomly select a transformation Π_i from the set {Π_1, . . . , Π_N} in such a way that the probability of choosing Π_i is equal to p_i(x_0), and we define
X(t) = Π_i(t, x_0) for 0 ≤ t < t_1.
Next, at the random moment t_1, at the point Π_i(t_1, x_0) we choose a jump q_s from the set {q_1, . . . , q_K} with probability p_s(Π_i(t_1, x_0)). Then we define
x_1 = q_s(Π_i(t_1, x_0)).
After that we choose Π_{i_1} with probability p_{i i_1}(x_1), define
X(t) = Π_{i_1}(t − t_1, x_1) for t_1 ≤ t < t_2,
and at the point Π_{i_1}(t_2 − t_1, x_1) we choose q_{s_1} with probability p_{s_1}(Π_{i_1}(t_2 − t_1, x_1)). Then we define
x_2 = q_{s_1}(Π_{i_1}(t_2 − t_1, x_1)).
Finally, given x_n, n ≥ 2, we choose Π_{i_n} in such a way that the probability of choosing Π_{i_n} is equal to p_{i_{n−1} i_n}(x_n), and we define
X(t) = Π_{i_n}(t − t_n, x_n) for t_n ≤ t < t_{n+1}.
At the point Π_{i_n}(∆t_{n+1}, x_n) we choose q_{s_n} with probability p_{s_n}(Π_{i_n}(∆t_{n+1}, x_n)). Then we define x_{n+1} = q_{s_n}(Π_{i_n}(∆t_{n+1}, x_n)).
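To make the construction above concrete, the following sketch simulates the post-jump positions {x_n} for a toy system on Y = R. All specifics (the two flows Π_i, the jumps q_s, the place-dependent probabilities, and drawing the first dynamical system from the matrix row of the initial index rather than from the vector (p_i)) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 1.0  # intensity of the exponential waiting times Delta t_n

# Two illustrative flows Pi_i(t, x) on Y = R (solutions of x' = -x/2 and x' = 1 - x).
flows = [lambda t, x: x * np.exp(-0.5 * t),
         lambda t, x: x * np.exp(-t) + 1.0 - np.exp(-t)]
# Two illustrative jumps q_s.
jumps = [lambda x: 0.5 * x, lambda x: 0.5 * x + 1.0]

def p_matrix(x):
    """Place-dependent matrix [p_ij(x)]; rows sum to 1 (an assumption)."""
    a = 0.5 + 0.25 * np.tanh(x)
    return np.array([[a, 1 - a], [1 - a, a]])

def p_jump(y):
    """Place-dependent distribution (p_s(y))_s of the jump choice."""
    b = 0.5 + 0.25 * np.tanh(y)
    return np.array([b, 1 - b])

def simulate(x0, i0, n):
    """Post-jump positions x_0, x_1, ..., x_n of the toy system."""
    xs, x, i = [x0], x0, i0
    for _ in range(n):
        dt = rng.exponential(1.0 / lam)        # waiting time Delta t_{n+1}
        i = rng.choice(2, p=p_matrix(x)[i])    # next dynamical system Pi_i
        y = flows[i](dt, x)                    # position just before the jump
        s = rng.choice(2, p=p_jump(y))         # jump q_s chosen at that point
        x = jumps[s](y)                        # post-jump position x_{n+1}
        xs.append(x)
    return np.array(xs)

traj = simulate(x0=0.0, i0=0, n=1000)
print(traj[:5])
```

Because both flows and both jumps contract toward a bounded region, the simulated post-jump positions stay in a bounded set, in line with the stability assumptions imposed on (Π, q, p) below.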
It is easy to see that {X(t)}_{t≥0} and {x_n}_{n≥0} are not Markov processes. In order to use the theory of Markov operators we must redefine the processes {X(t)}_{t≥0} and {x_n}_{n≥0} in such a way that the redefined processes become Markov.
For this purpose, consider the space Y × I endowed with the metric d given by
d((x, i), (y, j)) = ̺(x, y) + ̺_d(i, j) for (x, i), (y, j) ∈ Y × I,
where ̺_d is the discrete metric on I. Now define a stochastic process {ξ(t)}_{t≥0}, ξ(t) : Ω → I, by ξ(t) = ξ_{n−1} for t_{n−1} ≤ t < t_n, n = 1, 2, . . .
Then the stochastic process {(X(t), ξ(t))}_{t≥0}, (X(t), ξ(t)) : Ω → Y × I, has the required Markov property. In many applications we are mostly interested in the values of the process X(t) at the switching points t_n. Therefore we will also study the discrete-time process of post-jump locations {(x_n, ξ_n)}_{n≥0}, (x_n, ξ_n) : Ω → Y × I. Clearly {(x_n, ξ_n)}_{n≥0} is a Markov process too.
We consider the stochastic process {(x_n, ξ_n)}_{n≥0}, (x_n, ξ_n) : Ω → Y × I, defined by (3.1)–(3.3) with the help of the system (Π, q, p). We will need the following assumptions. The transformations Π_i : R_+ × Y → Y, i ∈ I, and q_s : Y → Y, s ∈ S, are continuous and there exists x_* ∈ Y such that … The functions p_s, s ∈ S, and p_ij, i, j ∈ I, satisfy the Lipschitz-type conditions … with positive Lipschitz constants.
We also assume that for the system (Π, q, p) there are three constants L ≥ 1, α ∈ R and L_q > 0 such that … Moreover, assume that there are p_0 > 0, q_0 > 0 such that for every i_1, i_2 ∈ I, x, y ∈ Y and t ≥ 0 we have condition (3.11): …
Remark 3.1. Condition (3.11) is satisfied if there are i_0 ∈ I, s_0 ∈ S such that …
To begin our study of the stochastic process {(x_n, ξ_n)}_{n≥0}, consider the sequence of distributions
µ_n(A) = P((x_n, ξ_n) ∈ A) for A ∈ B(Y × I), n ≥ 0.
It is easy to see that there exists a Markov–Feller operator P : M → M such that µ_{n+1} = Pµ_n for n ≥ 0.
The operator P is given by the formula (3.14) and its dual operator U by …, where λ is the intensity of the Poisson process which governs the increments ∆t_n of the random variables {t_n}_{n≥0}. The operator P given by (3.14) is called the transition operator for this system.

The main theorem
Theorem 4.1. Assume that the system (Π, q, p) satisfies conditions (3.6)–(3.11). If …, then:
(i) the operator P possesses a unique invariant measure µ_* ∈ M_1^1(Y × I);
(ii) there exist q ∈ (0, 1) and C > 0 such that … for µ ∈ M_1^1(Y × I) and n ∈ N, where x_* is given by (3.6);
(iii) the strong law of large numbers holds for the process (x_n, ξ_n)_{n≥0} starting from (x_0, ξ_0) ∈ Y × I, i.e. for every bounded Lipschitz function f : Y × I → R and every x_0 ∈ Y and ξ_0 ∈ I we have …
Proof of Theorem 4.1. We are going to verify the assumptions of Theorem 2.1. Set X = Y × I, F = X × X and define …
A0. The continuity of the functions p_ij, p_s and q_s implies that the operator P defined in (3.14) is a Feller operator.
Application of Theorem 2.2 ends the proof.
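The strong law of large numbers in (iii) can be illustrated numerically: for an exponentially ergodic chain, time averages of a bounded Lipschitz function computed from two different starting points should approach a common limit. The sketch below does this for a toy place-independent iterated function system (a special case of the setting with trivial dynamics Π; all specifics are illustrative assumptions, not taken from the paper).

```python
import numpy as np

def trajectory(x0, n, rng):
    """Chain x_{k+1} = q_s(x_k) with q_0(x) = x/2 and q_1(x) = (x + 1)/2,
    each chosen with probability 1/2 (a toy contracting system whose
    invariant measure is uniform on [0, 1])."""
    xs = np.empty(n)
    x = x0
    for k in range(n):
        s = rng.integers(2)          # jump index, chosen uniformly
        x = 0.5 * x + 0.5 * s        # post-jump position
        xs[k] = x
    return xs

def time_average(f, x0, n, seed):
    """Birkhoff-type average (1/n) sum f(x_k) along one sample path."""
    rng = np.random.default_rng(seed)
    return f(trajectory(x0, n, rng)).mean()

f = lambda x: np.minimum(np.abs(x), 1.0)   # bounded Lipschitz test function
a = time_average(f, x0=0.0, n=200_000, seed=2)
b = time_average(f, x0=10.0, n=200_000, seed=3)
print(a, b)
```

Here the invariant measure is the uniform distribution on [0, 1], so both averages approach ∫_0^1 min(|x|, 1) dx = 1/2 regardless of the starting point, as conclusion (iii) predicts.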
The next result, describing the asymptotic behavior of the process (x_n)_{n≥0} on Y, is an obvious consequence of Theorem 4.1. Let µ̄_0 be the distribution of the initial random vector x_0 and µ̄_n the distribution of x_n, i.e. µ̄_n(A) = P(x_n ∈ A) for A ∈ B_Y, n ≥ 1.