On the Existence and Uniqueness of Stationary Distributions for Some Piecewise Deterministic Markov Processes With State-Dependent Jump Intensity

In this paper, we consider a subclass of piecewise deterministic Markov processes with a Polish state space that involve a deterministic motion punctuated by random jumps, occurring in a Poisson-like fashion with a state-dependent rate; between the jumps, the trajectory is driven by one of a given family of semiflows. We prove that there is a one-to-one correspondence between stationary distributions of such processes and those of the Markov chains given by their post-jump locations. Using this result, we further establish a criterion guaranteeing the existence and uniqueness of the stationary distribution in a particular case, where the post-jump locations result from the action of a random iterated function system with an arbitrary set of transformations.

The transition law of Φ, further denoted by P, will be defined using an arbitrary stochastic kernel J on Y and a state-dependent stochastic matrix {π_ij : i, j ∈ I}, consisting of continuous functions from Y to [0, 1], so that J(y, B) is the probability that Y_n enters a Borel set B ⊂ Y given Y(τ_n−) = y, while π_ij(y) specifies the probability of the transition from ξ_{n−1} = i to ξ_n = j given Y_n = y.
The main goal of the paper is to prove that there is a one-to-one correspondence between the families of stationary (invariant) distributions of the process Ψ and the chain Φ, assuming that the kernel J enjoys a strengthened form of the Feller property (Assumption 4.4). This result (i.e., Theorem 4.5) generalizes [5, Theorem 5.1], which was established in the case where the ∆τ_n are exponentially distributed with a constant rate λ and the π_ij are constant. The latter, in turn, was intended to extend the scope of [4, Theorem 4.4], which refers to a model with a specific form of the kernel J.
Originally, PDMPs were introduced by Davis in [6] (see also [7]) as a general class of non-diffusive stochastic processes that combine a deterministic motion with random jumps. In this traditional framework, such processes evolve on the union of a countable indexed family of open sets in Euclidean spaces. Associated with each component of the state space is a flow generated by a locally Lipschitz continuous vector field, which determines the trajectory's evolution within this component between the jumps. The process Ψ investigated here is defined in a similar manner, but it evolves on a general Polish space, and the semiflows are not generated by vector fields but given a priori. At the same time, let us emphasize that our model does not involve the so-called active boundaries (forcing the jumps), which occur in [6, 7]. Nevertheless, there is every indication that the techniques employed in this paper should also prove effective in their presence, and it is highly likely that we will present such results in another article.
For the classical PDMPs, results analogous to those we aim to achieve here were established in [2, Theorems 1–3] and [7, Ch. 34]. Their proofs are based on the concept of the extended generator of a Markov semigroup (see [7, Definition 14.15]), defined as an extension of its strong generator (see [10, Ch. 1]) using a martingale interpretation of the Dynkin identity. Specifically, the key observation (resulting from [7, Proposition 34.7]) is as follows: if, for some finite Borel measure µ, the extended (or strong) generator A of a given Markov semigroup satisfies

⟨Af, µ⟩ = 0 for every f ∈ D(A), (1.2)

with D(A) standing for the domain of A, then µ is invariant for this semigroup, provided that D(A) intersected with the space of bounded Borel measurable functions separates probability measures. On the other hand, Davis (in [6, Theorem 5.5]; cf. also [7, Theorem 26.14]) has completely characterized the extended generators of classical PDMPs and shown that their domains enjoy the separating property (see [7, Proposition 34.11]), which enabled the use of the aforementioned fact in the reasoning presented in [2, 7]. In our framework, establishing such a property would be rather challenging, if not impossible, as the state space is not Euclidean, and the deterministic dynamics do not correspond to systems of differential equations, which is essential for the discussion in [7, §31–32], leading to [7, Proposition 34.11].
A result similar to those discussed above, but in a much simpler setting, where the state space is compact and the randomness of the examined process arises solely from the switching between semiflows, which occurs with a constant intensity, was established in [1, Proposition 2.4]. When proving this proposition, the authors utilize the compactness of the state space to show that the Markov semigroup under consideration (which enjoys the Feller property) is strongly continuous on the space of (bounded) continuous functions. According to the Hille–Yosida theorem ([10, Theorem 1.2.6]), this guarantees that the domain of its strong generator is dense in this space and, therefore, separates the measures. Obviously, in our framework, this argument fails due to the lack of compactness.
For the purposes of our study, we employ the concept of the weak generator ([8, 9]), similarly to the approach in [4]. This choice enables us to use an argument resembling the one utilized in [1], but in the context of the weak-star (w*-) topology. More precisely, by showing that the transition semigroup of Ψ is Feller (which follows from the corresponding property of J) and continuous in the weak sense on the space of bounded continuous functions, one can conclude that the w*-closure of the domain of its weak generator contains that space. This, in turn, allows one to argue the Ψ-invariance of measures by verifying (1.2) for such a generator. Compared to the results of [4, 5], the primary contribution of the present paper lies in Lemmas 5.3 and 6.2, which allow overcoming the difficulty arising from the fact that the jump intensity λ is not constant. In particular, the latter enables us to establish (in Lemma 5.1) a suitable generalization of [5, Lemma 5.1], which underlies the arguments used to prove [5, Theorem 5.1] (itself based on the proof of [4, Theorem 4.4]).
We finalize the paper by applying our main result to a special case of the process Ψ, where J is the transition law of the Markov chain arising from a random iterated function system involving an arbitrary family {w_θ : θ ∈ Θ} of continuous transformations from Y into itself (see, e.g., [12]). Specifically, we provide a set of user-friendly conditions on the component functions guaranteeing that Ψ has a unique stationary distribution with finite first moment (see Theorem 7.10), which leads to a generalization of [4, Corollary 4.5] (cf. also [11, Theorem 5.3.1]). In an upcoming paper, under similar assumptions, we also plan to prove the exponential ergodicity of Ψ in the bounded Lipschitz distance (and thus generalize [5, Proposition 7.2]).
Eventually, it is worth emphasizing that considering jumps occurring with a state-dependent intensity is often significant in applications. For example, the above-mentioned special case of Ψ with E = R_+, Θ being a compact interval, and w_θ(y) = y + θ proves to be useful in analysing the stochastic dynamics of gene expression in the presence of transcriptional bursting (see, e.g., [13] or [4, §5.1]). In short, {Y(t)}_{t∈R_+} then describes the concentration of a protein encoded by some gene of a prokaryotic cell. The protein molecules undergo a degradation process, which is interrupted by production appearing in so-called bursts at random times τ_n. From a biological point of view, it is known that the intensity of these bursts depends on the current number of molecules, and thus taking into account the non-constancy of λ makes the model more accurate.
The outline of the paper is as follows. In Section 2, we introduce notation and review several basic concepts related to Markov processes and weak generators of contraction semigroups. Section 3 presents a formal construction of the model under consideration. The main result is formulated in Section 4. Here, we also provide a simple observation regarding the finiteness of the first moments of the invariant measures under consideration. The proof of the main theorem, along with the statements of all necessary auxiliary facts, is given in Section 5. The latter are proved in Section 6. Finally, Section 7 discusses an application of the main result to the aforementioned specific case of Ψ.

Preliminaries
Consider an arbitrary metric space E, and let B(E) stand for its Borel σ-field. By B_b(E) we will denote the space of all bounded Borel measurable functions from E to R, endowed with the supremum norm ‖f‖_∞ := sup_{x∈E} |f(x)|, whilst by C_b(E) we will mean the subspace of B_b(E) consisting of all continuous functions. Further, let M_sig(E) be the family of all finite signed Borel measures on E (that is, all σ-additive real-valued set functions on B(E)), and let M(E), M_1(E) stand for its subsets containing all non-negative measures and all probability measures, respectively. Additionally, for any given Borel measurable function V : E → [0, ∞), M_1^V(E) will denote the set of all µ ∈ M_1(E) with finite first moment w.r.t. V, i.e., such that ∫_E V dµ < ∞. Moreover, for notational brevity, given f ∈ B_b(E) and µ ∈ M(E), we will often write ⟨f, µ⟩ for the Lebesgue integral ∫_E f dµ.

Markov processes and their transition laws
Let us now recall several basic concepts from the theory of Markov processes, which we will refer to throughout the paper.
A function K : E × B(E) → [0, ∞] is said to be a (transition) kernel on E whenever, for every A ∈ B(E), the map E ∋ x ↦ K(x, A) is Borel measurable, and, for every x ∈ E, B(E) ∋ A ↦ K(x, A) is a non-negative Borel measure. If, additionally, sup_{x∈E} K(x, E) < ∞, then K is said to be bounded. In the case where K(x, E) = 1 for every x ∈ E, K is called a stochastic kernel (or a Markov kernel) and usually denoted by P rather than K.
For any two kernels K_1 and K_2, we can consider their composition K_1K_2 of the form

(K_1K_2)(x, A) := ∫_E K_2(y, A) K_1(x, dy) for x ∈ E and A ∈ B(E).

The iterates of a kernel K are defined as usual by K^1 := K and K^{n+1} := KK^n for every n ∈ N. Obviously, the composition of any two bounded kernels is again bounded.
Given a kernel K on E, for any non-negative Borel measure µ on E and any bounded below Borel measurable function f : E → R, we can consider the measure µK and the function Kf, defined as

µK(A) := ∫_E K(x, A) µ(dx) for A ∈ B(E), (2.1)

and

Kf(x) := ∫_E f(y) K(x, dy) for x ∈ E, (2.2)

respectively. They are related to each other in such a way that ⟨f, µK⟩ = ⟨Kf, µ⟩. Obviously, if K is bounded, then the operator µ ↦ µK transforms M(E) into itself, whilst f ↦ Kf maps B_b(E) into itself. In the case where K is stochastic, the operator defined by (2.1) also leaves the set M_1(E) invariant.
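For a quick sanity check of these definitions, note that when E is finite, a kernel is just a matrix whose rows are the measures K(x, ·); the operations (2.1) and (2.2) become vector–matrix and matrix–vector products, and the duality ⟨f, µK⟩ = ⟨Kf, µ⟩ is an associativity identity. A minimal numerical illustration (all concrete numbers below are arbitrary choices, not taken from the paper):

```python
import numpy as np

# On a finite space E = {0, 1, 2}, a stochastic kernel is a row-stochastic
# matrix: row x is the probability measure K(x, .).
K = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.2, 0.2, 0.6]])

mu = np.array([0.3, 0.3, 0.4])   # a probability measure on E
f = np.array([1.0, -2.0, 0.5])   # a bounded function on E

muK = mu @ K                     # the measure muK from (2.1)
Kf = K @ f                       # the function Kf from (2.2)

# The duality <f, muK> = <Kf, mu> is just associativity of the products.
assert np.isclose(muK @ f, mu @ Kf)

# Iterates K^{n+1} := K K^n; the composition of bounded kernels stays bounded.
K3 = np.linalg.matrix_power(K, 3)
assert np.allclose(K @ (K @ K), K3)
```

Since K is stochastic and µ is a probability measure, µK is again a probability measure, in line with the invariance of M_1(E) noted above.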
Let us stress here that the notation consistent with (2.1) and (2.2) will be used for all the kernels considered in the paper, without further emphasis.
A kernel K on E is said to be Feller if Kf ∈ C_b(E) for every function f ∈ C_b(E). Furthermore, a non-negative Borel measure µ is called invariant for a kernel K (or for the operator on measures induced by this kernel) whenever µK = µ. These two concepts can also be used in reference to any family of transition kernels. A family of this kind is said to be Feller if all its members are Feller. A measure µ is called invariant for such a family if µ is invariant for each of its members.
For a given E-valued time-homogeneous Markov chain Φ = {Φ_n}_{n∈N_0}, defined on a probability space (Ω, F, P), a stochastic kernel P on E is called the transition law of this chain if

P(Φ_{n+1} ∈ A | Φ_0, . . ., Φ_n) = P(Φ_n, A) a.s. for every A ∈ B(E) and n ∈ N_0.

Letting µ_n denote the distribution of Φ_n for each n ∈ N_0, we then have µ_{n+1} = µ_nP for every n ∈ N_0. The operator (·)P on M_1(E) is thus referred to as the transition operator of Φ, and the invariant probability measures of P are simply the stationary distributions of Φ. Moreover, in fact, P(Φ_{n+k} ∈ A | Φ_k = x) = P^n(x, A) for any x ∈ E, A ∈ B(E) and k, n ∈ N_0, which implies that E_x[f(Φ_n)] = P^nf(x) for any f ∈ B_b(E), where E_x stands for the expectation with respect to P_x := P(· | Φ_0 = x).
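In the finite-state case, a stationary distribution of Φ is simply a left fixed point of the transition matrix, µP = µ, i.e., a normalized left eigenvector of P at eigenvalue 1. A small illustrative computation (the particular matrix is an arbitrary choice):

```python
import numpy as np

# A stationary distribution of a finite-state chain with transition matrix P
# is a left fixed point mu P = mu, normalized to sum to 1.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Take the left eigenvector of P at eigenvalue 1 (i.e. an eigenvector of P^T).
vals, vecs = np.linalg.eig(P.T)
mu = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
mu = mu / mu.sum()               # normalize (the Perron vector has one sign)

assert np.allclose(mu @ P, mu)   # mu is invariant: a stationary distribution
```

For this two-state example the fixed point can be checked by hand: it is µ = (0.8, 0.2).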
On the other hand, it is well-known (see, e.g., [14]) that, for every stochastic kernel P on a separable metric space E, there exists an E-valued Markov chain with transition law P. More precisely, putting Ω := E^{N_0}, F := B(E^{N_0}) and, for each n ∈ N_0, defining Φ_n as the n-th coordinate projection, i.e.,

Φ_n((x_k)_{k∈N_0}) := x_n for (x_k)_{k∈N_0} ∈ Ω, (2.3)

one can show that, for every x ∈ E, there exists a probability measure P_x on F such that, for any n ∈ N_0, A_0, . . ., A_n ∈ B(E) and F := {Φ_0 ∈ A_0, . . ., Φ_n ∈ A_n}, we have

P_x(F) = 1_{A_0}(x) ∫_{A_1} · · · ∫_{A_n} P(x_{n−1}, dx_n) · · · P(x, dx_1). (2.4)

Then Φ = {Φ_n}_{n∈N_0}, given by (2.3), is a time-homogeneous Markov chain on (Ω, F, P_x) with initial state x and transition law P. The Markov chain constructed in this way is called the canonical one. Finally, recall that a family {P_t}_{t∈R_+} of stochastic kernels on E (or the corresponding family of operators on M_1(E) or B_b(E)) is called a Markov transition semigroup whenever P_{s+t} = P_sP_t for all s, t ≥ 0 and P_0(x, ·) = δ_x for every x ∈ E, where δ_x stands for the Dirac measure at x. While using this term in the context of a time-homogeneous Markov process Ψ = {Ψ(t)}_{t≥0}, defined on some probability space (Ω, F, P), we will mean that

P(Ψ(s + t) ∈ A | Ψ(s) = x) = P_t(x, A) for any x ∈ E, A ∈ B(E) and s, t ≥ 0.

Clearly, if µ_t denotes the distribution of Ψ(t) for every t ≥ 0, then µ_{s+t} = µ_sP_t for any s, t ≥ 0, which, in particular, shows that the invariant probability measures of {P_t}_{t∈R_+} are, in fact, the stationary distributions of the process Ψ. Moreover, analogously to the discrete case, we have E_x[f(Ψ(t))] = P_tf(x) for any f ∈ B_b(E), x ∈ E and t ≥ 0, where E_x stands for the expectation with respect to P_x := P(· | Ψ(0) = x).

Weak infinitesimal generators
As mentioned in the introduction, we shall use certain tools relying on the concept of a weak infinitesimal generator, which generally pertains to contraction semigroups on subspaces of Banach spaces. In our study, we adopt this concept from [8, 9]. Let us first recall that, given a normed space (L, ‖·‖_L), a family {H_t}_{t∈R_+} of linear operators from L to itself is called a contraction semigroup on L whenever H_0 = id_L, H_{s+t} = H_sH_t for all s, t ≥ 0, and ‖H_tf‖_L ≤ ‖f‖_L for any f ∈ L and t ≥ 0. In what follows, we will only focus on the case where L is a subspace of (B_b(E), ‖·‖_∞). Note that any Markov transition semigroup, regarded as a family of operators on B_b(E), is a contraction semigroup of linear operators. Obviously, if such a semigroup enjoys the Feller property, then it also forms a contraction semigroup on C_b(E).
To introduce the notion of the weak generator (adapted to our purposes), let us first consider the Banach space (M_sig(E), ‖·‖_TV) with the total variation norm, defined by ‖µ‖_TV := |µ|(E) for µ ∈ M_sig(E), and let M_sig(E)* denote its dual space. Further, for every f ∈ B_b(E), define the bounded linear functional ℓ_f by ℓ_f(µ) := ⟨f, µ⟩ for µ ∈ M_sig(E); it is easy to check that ‖ℓ_f‖ = ‖f‖_∞, where ‖·‖ denotes the operator norm. Therefore, B_b(E) can be regarded as a subspace of M_sig(E)*, and thus it can be endowed with the weak w*-topology inherited from the latter. In view of the above, a sequence {f_n}_{n∈N} ⊂ B_b(E) is said to be w*-convergent to some f ∈ B_b(E), which we then write as f = w*-lim_{n→∞} f_n, whenever

lim_{n→∞} ⟨f_n, µ⟩ = ⟨f, µ⟩ for every µ ∈ M_sig(E). (2.6)

On the other hand, it is well-known that (2.6) is equivalent to the pointwise convergence of {f_n}_{n∈N} to f in conjunction with the boundedness of the sequence {‖f_n‖_∞}_{n∈N}. Given a subspace L of B_b(E) and a contraction semigroup {H_t}_{t∈R_+} of linear operators on L, by the weak (infinitesimal) generator of this semigroup we will mean (following [8, Ch. 1 §6]) the operator A_H : D(A_H) → L_{0,H} given by

A_Hf := w*-lim_{t→0^+} (H_tf − f)/t for f ∈ D(A_H),

where

L_{0,H} := {f ∈ L : w*-lim_{t→0^+} H_tf = f}, (2.7)

and D(A_H) consists of all f ∈ L_{0,H} for which the above limit exists and belongs to L_{0,H}. At the end of this section, let us quote several basic properties of weak generators that will be useful in the further course of the paper.
Remark 2.1 (see [8, p. 40] or [9, pp. 437–448]) Let {H_t}_{t≥0} be a contraction semigroup of linear operators on a subspace L of B_b(E), and let A_H stand for the weak generator of this semigroup. Then: (i) for every f ∈ D(A_H) and t ≥ 0, we have H_tf ∈ D(A_H) and A_HH_tf = H_tA_Hf, and the map t ↦ H_tf is bounded and w*-continuous from the right; consequently, so is t ↦ H_tA_Hf. (ii) Moreover, the Dynkin formula holds, i.e., H_tf − f = ∫_0^t H_sA_Hf ds for all t ≥ 0.

Definition of the model
Let (Y, ρ_Y) be a non-empty complete separable metric space, and let I stand for an arbitrary non-empty finite set endowed with the discrete topology. Moreover, let us introduce X := Y × I and X̄ := X × R_+, both equipped with the product topologies, upon assuming that R_+ := [0, ∞) is supplied with the Euclidean topology.
Consider a family {S i : i ∈ I} of jointly continuous semiflows acting from R + × Y to Y .By calling S i a semiflow we mean, as usual, that S i (s, S i (t, y)) = S i (s + t, y) and S i (0, y) = y for any s, t ∈ R + , y ∈ Y.
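As a concrete (and entirely illustrative) instance of these identities, one may take Y = R and the semiflow generated by the linear ODE y' = −ay, namely S(t, y) = e^{−at}y; this choice is an assumption of the sketch below, which simply checks the two semiflow properties numerically:

```python
import numpy as np

# An illustrative semiflow on Y = R (an assumed concrete choice): the flow of
# the linear ODE y' = -a*y, i.e. S(t, y) = exp(-a*t) * y.
def S(t, y, a=1.0):
    return np.exp(-a * t) * y

y, s, t = 2.5, 0.7, 1.3
assert np.isclose(S(s, S(t, y)), S(s + t, y))   # S(s, S(t, y)) = S(s + t, y)
assert S(0.0, y) == y                            # S(0, y) = y
```

Joint continuity of (t, y) ↦ S(t, y) is evident here; in the paper's setting it is imposed as a standing hypothesis rather than derived from a vector field.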
Further, let λ : Y → (0, ∞) be a continuous function satisfying

λ ≤ λ(y) ≤ λ̄ for every y ∈ Y, (3.1)

with certain constants λ, λ̄ > 0, and put

Λ(t, y, i) := ∫_0^t λ(S_i(h, y)) dh for any t ∈ R_+ and (y, i) ∈ X.

Finally, suppose that we are given an arbitrary stochastic kernel J : Y × B(Y) → [0, 1]. Let us now define a stochastic kernel P̄ : X̄ × B(X̄) → [0, 1] by setting

P̄((y, i, s), Ā) := ∫_0^∞ λ(S_i(t, y)) e^{−Λ(t,y,i)} ∫_Y Σ_{j∈I} π_ij(u) 1_Ā(u, j, s + t) J(S_i(t, y), du) dt (3.2)

for any y ∈ Y, i ∈ I, s ∈ R_+ and Ā ∈ B(X̄). Furthermore, let P : X × B(X) → [0, 1] be given by

P(x, A) := P̄((x, 0), A × R_+) for any x ∈ X and A ∈ B(X). (3.3)

Remark 3.1 Taking into account the continuity of the maps X ∋ (y, i) ↦ S_i(t, y) for t ≥ 0, (y, i) ↦ π_ij(y) for j ∈ I, and (y, i) ↦ λ(y), it is easy to see that P is Feller whenever so is the kernel J.
By Φ̄ := {(Y_n, ξ_n, τ_n)}_{n∈N_0} we will denote a time-homogeneous Markov chain with state space X̄ and transition law P̄, wherein Y_n, ξ_n, τ_n take values in Y, I, R_+, respectively. For simplicity of analysis, we shall regard Φ̄ as the canonical Markov chain (starting from some point of X̄), constructed on the coordinate space Ω := X̄^{N_0}, equipped with the σ-field F := B(X̄^{N_0}) and a suitable probability measure P on F. Obviously, Φ := {(Y_n, ξ_n)}_{n∈N_0} is then a Markov chain with respect to its own natural filtration, governed by the transition law P, given by (3.3). Moreover, for every n ∈ N_0, we have

P(Y_{n+1} ∈ B, ξ_{n+1} = j, ∆τ_{n+1} ≤ t | Φ̄_n) = ∫_0^t λ(S_{ξ_n}(h, Y_n)) e^{−Λ(h,Y_n,ξ_n)} ∫_B π_{ξ_n j}(u) J(S_{ξ_n}(h, Y_n), du) dh, (3.4)

where ∆τ_{n+1} := τ_{n+1} − τ_n. In particular, (3.4) implies that the conditional distributions of ∆τ_{n+1} given Φ̄_n are of the form

P(∆τ_{n+1} ≤ t | Φ̄_n) = 1 − e^{−Λ(t, Y_n, ξ_n)} for t ∈ R_+.

This yields that ∆τ_n > 0 a.s. for all n ∈ N (so {τ_n}_{n∈N_0} is a.s. strictly increasing), and, together with the Markov property of Φ̄, shows that ∆τ_1, ∆τ_2, . . . are mutually independent. Further, it follows that, for any n, r ∈ N,

E[(∆τ_n)^r | Φ̄_{n−1}] = ∫_0^∞ t^r λ(S_{ξ_{n−1}}(t, Y_{n−1})) e^{−Λ(t,Y_{n−1},ξ_{n−1})} dt,

whence, in view of (3.1), we get E(∆τ_n)^r ≤ λ̄ r!/λ^{r+1}. Consequently, Kolmogorov's criterion guarantees that (∆τ_n − E∆τ_n)_{n∈N} obeys the strong law of large numbers. Hence, writing τ_n = Σ_{k=1}^n ∆τ_k, we can conclude that τ_n ↑ ∞ a.s. The main focus of our study will be the PDMP Ψ := {Ψ(t)}_{t∈R_+} defined via interpolation of Φ̄ according to formula (1.1), i.e.,

Ψ(t) := (S_{ξ_n}(t − τ_n, Y_n), ξ_n) for t ∈ [τ_n, τ_{n+1}), n ∈ N_0. (1.1)

Clearly, this definition is well-posed since τ_n ↑ ∞ a.s., and Φ describes the post-jump locations of the process Ψ, that is, Φ_n = Ψ(τ_n) for every n ∈ N_0. In what follows, the Markov transition semigroup of Ψ will be denoted by {P_t}_{t≥0}.
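Since Λ(t, y, i) is generally not available in closed form, a standard way to sample the inter-jump times ∆τ_{n+1}, whose survival function is e^{−Λ(t,Y_n,ξ_n)}, is thinning: candidate jumps are drawn from a Poisson stream with the dominating rate λ̄ from (3.1) and accepted with probability λ(Y(t))/λ̄. The sketch below simulates a few post-jump locations for a toy model; the specific semiflows, the additive-burst kernel J and the uniform switching matrix are assumptions of this illustration, not components fixed by the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (illustrative assumptions): Y = R_+, I = {0, 1},
# semiflows S_i(t, y) = y * exp(-(i + 1) * t), and a jump rate
# lam(y) confined to [lam_lo, lam_hi], as required by (3.1).
lam_lo, lam_hi = 0.5, 2.0

def lam(y):
    return lam_lo + (lam_hi - lam_lo) / (1.0 + y)

def S(i, t, y):
    return y * np.exp(-(i + 1) * t)

def next_jump_time(i, y):
    """Sample Delta-tau with survival function exp(-Lambda(t, y, i)) by thinning."""
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam_hi)            # candidate from a rate-lam_hi stream
        if rng.uniform() < lam(S(i, t, y)) / lam_hi:  # accept with prob lam(Y(t))/lam_hi
            return t

def step(y, i):
    """One transition of the post-jump chain (Y_n, xi_n)."""
    dt = next_jump_time(i, y)
    y_pre = S(i, dt, y)                   # location Y(tau-) just before the jump
    y_new = y_pre + rng.exponential(1.0)  # toy kernel J: an additive random burst
    i_new = int(rng.integers(0, 2))       # toy switching: pi_ij = 1/2 for all i, j
    return y_new, i_new, dt

y, i = 1.0, 0
for _ in range(5):
    y, i, dt = step(y, i)
    assert dt > 0.0 and y >= 0.0
```

The acceptance probability is well defined precisely because (3.1) bounds λ by λ̄; this is the Lewis–Shedler thinning scheme for inhomogeneous rates.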

Main results
In this section, we shall formulate the main result of this paper, concerning a one-to-one correspondence between invariant probability measures of the transition semigroup {P t } t∈R+ of the process Ψ, determined by (1.1), and those of the transition operator P of the chain Φ, given by (3.3).
To this end, let us consider two stochastic kernels G, W : X × B(X) → [0, 1], given by

G(x, A) := ∫_0^∞ λ(S_i(t, y)) e^{−Λ(t,y,i)} 1_A(S_i(t, y), i) dt (4.1)

and

W(x, A) := ∫_Y Σ_{j∈I} π_ij(u) 1_A(u, j) J(y, du) (4.2)

for all x = (y, i) ∈ X and A ∈ B(X), where J stands for the kernel involved in (3.2). Further, define

λ(y, i) := λ(y) for any y ∈ Y, i ∈ I, (4.3)

and introduce two (generally non-stochastic) kernels G̃, W̃ : X × B(X) → [0, ∞) given by

G̃(x, A) := ∫_0^∞ e^{−Λ(t,y,i)} 1_A(S_i(t, y), i) dt and W̃(x, A) := λ(x)W1_A(x)

for all x = (y, i) ∈ X and A ∈ B(X).

Remark 4.2 According to (3.1), for any x ∈ X and A ∈ B(X), we have G̃(x, A) ≤ 1/λ and W̃(x, A) ≤ λ̄. Consequently, the kernels G̃ and W̃ are bounded, and thus the sets M(X) and B_b(X) are invariant under the operators induced by these kernels according to (2.1) and (2.2), respectively. Moreover, note that, if µ ∈ M(X) is a non-zero measure, then the measures µG̃ and µW̃ are non-zero as well.

Remark 4.3
The kernels G and G̃ are Feller. Moreover, if the kernel J is Feller, then so are the kernels W and W̃.
Essentially, apart from the conditions imposed on the model components in Section 3, the only assumption that we need to make in our main theorem is the following strengthened version of the Feller property for the kernel J:

Assumption 4.4 For every g ∈ C_b(Y × R_+), the map Y × R_+ ∋ (y, s) ↦ ∫_Y g(u, s) J(y, du) is jointly continuous.

Although the above assumption might appear somewhat technical, it is crucial to ensure the joint continuity of a certain specific function used to obtain an explicit form of the semigroup {P_t}_{t∈R_+} in the upcoming Lemma 5.1(ii). This continuity, in turn, will be needed to prove the subsequent Lemma 5.2, which plays a key role in our approach.
The main result of the paper reads as follows:

Theorem 4.5 Suppose that the kernel J satisfies Assumption 4.4. Then the map µ^Φ_* ↦ µ^Ψ_* defined by (4.4) establishes a one-to-one correspondence between the invariant probability measures of the operator P and those of the semigroup {P_t}_{t∈R_+}; in other words: (a) if µ^Φ_* ∈ M_1(X) is invariant for P, then µ^Ψ_*, given by (4.4), is invariant for {P_t}_{t∈R_+}; (b) conversely, every invariant probability measure of {P_t}_{t∈R_+} is of the form (4.4) for some P-invariant measure µ^Φ_* ∈ M_1(X).
Let us finish this section with a simple observation concerning the finiteness of the first moments of the measures featured in Theorem 4.5. We are interested in the moments with respect to the function V : X → [0, ∞) given by

V(y, i) := ρ_Y(y, y*) for (y, i) ∈ X, (4.6)

where y* is an arbitrarily fixed point of Y. To state an appropriate result, let us introduce additionally the following two assumptions, which will also be utilized in Section 7:

Assumption 4.9 There exist constants L > 0 and α < λ such that ρ_Y(S_i(t, y), S_i(t, u)) ≤ Le^{αt}ρ_Y(y, u) for any t ∈ R_+, y, u ∈ Y and i ∈ I.

It should be noted here that, if Assumptions 4.8 and 4.9 are fulfilled, then β(y*) < ∞ for every y* ∈ Y.
Proposition 4.10 Let V be the function given by (4.6). Then, the following holds: (i) Under Assumptions 4.8 and 4.9, for any µ^Φ_* ∈ M^V_1(X), the measure µ^Ψ_* defined by (4.4) also belongs to M^V_1(X). (ii) Suppose that there exist constants ã, b̃ ≥ 0 such that, for Ṽ := ρ_Y(·, y*), we have

Then, for any

Proof To see (i) it suffices to observe that, for every (y, i) ∈ X,

Statement (ii) follows from the fact that, for any (y, i) ∈ X,

Proof of the main theorem
Before we proceed to prove Theorem 4.5, let us first formulate certain auxiliary results (specifically, Lemmas 5.1–5.3), whose proofs are given in Section 6.
The first result (proved in Section 6.1) collects certain properties of the transition semigroup {P_t}_{t∈R_+}, which are essential for establishing the forthcoming Lemma 5.2, concerning the weak generator of this semigroup. It is also worth noting that this result extends the scope of [4, Lemma 5.1], which was previously established only for constant λ and π_ij.
Lemma 5.1 The following statements hold for the transition semigroup {P_t}_{t∈R_+} of the process Ψ, specified by (1.1):

for any x = (y, i) ∈ X and T ∈ R_+, and that the following conditions hold:

with λ given by (4.3), and, if Assumption 4.4 is fulfilled, then D_ψ ∋ (t, T) ↦ ψ_f(x, t, T) is jointly continuous for every x ∈ X whenever f ∈ C_b(X).
Obviously, according to statement (i) of Lemma 5.1, if J is Feller (or, in particular, if Assumption 4.4 is fulfilled), then {P_t}_{t∈R_+} can be viewed as a contraction semigroup of linear operators on C_b(X), and thus we can consider its weak generator (in the sense specified in Section 2.2). In what follows, this generator will be denoted by A_P. Apart from this, we also employ the weak generator A_Q of the semigroup {Q_t}_{t∈R_+} defined by

Q_tf(y, i) := f(S_i(t, y), i) for f ∈ C_b(X), (y, i) ∈ X and t ≥ 0. (5.5)

Clearly, D(A_P) and D(A_Q) are then subsets of C_b(X). Furthermore, having in mind (2.7), we see that L_{0,P} = C_b(X) by statement (iii) of Lemma 5.1, and also L_{0,Q} = C_b(X), due to the continuity of S_i(·, y), y ∈ Y. The main idea underlying the proof of our main theorem is that the generator A_P can be expressed using A_Q and the operator W, determined by (4.2) (similarly as in [6, Theorem 5.5]), which is exactly the statement of the lemma below. The proof of this result (given in Section 6.2) is founded on assertion (ii) of Lemma 5.1, which, incidentally, is the reason why we require Assumption 4.4 rather than just the Feller property of J. It should also be noted that, according to Remark 4.3, the kernel W is Feller under this assumption, which makes the formula below meaningful.

Lemma 5.2 Suppose that Assumption 4.4 holds. Then D(A_P) = D(A_Q) and, for every f ∈ D(A_P), we have

A_Pf = A_Qf + λ(Wf − f), (5.6)

where λ is given by (4.3), and W is the operator on C_b(X) induced by the kernel specified in (4.2).
Another fact playing a significant role in the proof of Theorem 4.5 is related to the invertibility of the operator G on C_b(X) (cf. Remark 4.3), induced by the kernel given by (4.1). Clearly, if λ is constant, then G/λ is simply the resolvent of the semigroup {Q_t}_{t∈R_+}. As is well known (see [8, Theorem 1.7]), in this case, one has G = λ(λ·id − A_Q)^{−1}, which is a key observation in [4]. Since this argument fails in the present framework, we will prove (in Section 6.2) the following:

Lemma 5.3 Let id − λ^{−1}A_Q : D(A_Q) → C_b(X) be the operator defined by

(id − λ^{−1}A_Q)f := f − (1/λ)A_Qf for f ∈ D(A_Q),

with λ given by (4.3). Then the operator id − λ^{−1}A_Q is the two-sided inverse of G viewed as an operator on C_b(X); namely,

(id − λ^{−1}A_Q)(Gf) = f for every f ∈ C_b(X), (5.7)

and

G(f − λ^{−1}A_Qf) = f for every f ∈ D(A_Q). (5.8)

Armed with the results above, we are now prepared to prove Theorem 4.5. It is worth highlighting here that, although the reasoning below explicitly utilizes only Lemmas 5.2 and 5.3, it fundamentally relies on Remark 2.1, which is valid for {P_t}_{t∈R_+} with L_{0,P} = C_b(X) owing to Lemma 5.1.

Proof of Theorem 4.5 (a):
To prove statement (a), suppose that µ^Φ_* ∈ M_1(X) is invariant for P, and let µ^Ψ_* be given by (4.4). We will first show that

⟨A_Pf, µ^Φ_*G⟩ = 0 for every f ∈ D(A_P). (5.9)

To this end, let f ∈ D(A_P) and define ν_* := µ^Φ_*G. Taking into account that GW = P (cf. Remark 4.1), we see that ν_*WG = µ^Φ_*(GW)G = µ^Φ_*PG = µ^Φ_*G = ν_*, which means that ν_* is an invariant probability measure of WG. Further, from Lemma 5.2 it follows that f ∈ D(A_Q), and that A_Pf = A_Qf + λ(Wf − f). Using the WG-invariance of ν_* and identity (5.8), resulting from Lemma 5.3, we further obtain

To do this, let f ∈ C_b(X) and define g := Gf. Clearly, Lemmas 5.2 and 5.3 guarantee that g ∈ D(A_Q) = D(A_P). Taking into account statement (ii) of Remark 2.1 and the fact that µ^Ψ_* is invariant for {P_t}_{t∈R_+}, we infer that

Hence, referring to Lemma 5.2, we get

Eventually, having in mind that g = Gf and using (5.7), following from Lemma 5.3, we obtain

Proof of Lemma 5.1

Let us begin this section with defining

η(t) := sup{n ∈ N_0 : τ_n ≤ t} for t ∈ R_+. (6.1)

Obviously, {η(t) = n} = {τ_n ≤ t < τ_{n+1}} for any t ∈ R_+ and n ∈ N_0. Although (6.1) generally does not define a Poisson process (unless λ is constant), using assumption (3.1), one can find an upper bound for the probability of {η(t) = n} close to the corresponding probability in such a process, which is crucial in the proof of Lemma 5.1. This is done in Lemma 6.2, which relies on the following observation:

Proof The inequality is obvious for n = 0.
According to (3.4), for every n ∈ N_0 and each (y, i, s) ∈ X̄, the conditional probability density function of τ_{n+1} given Φ̄_n = (y, i, s) is of the form

Let x = (y, i) ∈ X and put x̄ := (x, 0) ∈ X̄. We shall proceed by induction. Taking into account (3.1), for n = 1 we have

Now, suppose inductively that (6.2) holds for some arbitrarily fixed n ∈ N and all t ∈ R_+. Then, for every T ∈ R_+, we can write

Taking the expectation of both sides of this inequality and, further, using the inductive hypothesis gives

which ends the proof.

Lemma 6.2 For any x ∈ X, t ∈ R_+ and n ∈ N_0,

are continuous. Moreover, all these maps are bounded by λ̄e^{−λt}‖ϕ‖_∞. Hence, using the Lebesgue dominated convergence theorem, we can conclude that the function

is continuous, as claimed.
In light of the observation above, all the maps X ∋ x ↦ P^n(g_TPh_T)(x), n ∈ N_0, are continuous. Furthermore, from (6.5) and Lemma 6.2 it follows that

for all x ∈ X and n ∈ N_0. Hence, the continuity of P_Tf can now be deduced by applying the discrete version of the Lebesgue dominated convergence theorem to (6.6).
(ii): Let us define

Obviously, u_f(·, T) and ψ_f(·, t, T) are Borel measurable for any T ∈ R_+ and 0 ≤ t ≤ T. Now, fix T ∈ R_+. Then, according to (6.6), we have

Keeping in mind (3.2) and (6.7), we see that

for any x = (y, i) ∈ X and s ∈ R_+, which, in particular, gives

Appealing to (6.10), we can also conclude that

which, together with (6.9) and (6.11), implies (5.1). Further, referring to (6.8), we obtain

which shows that u_f fulfills the conditions specified in (5.2). In turn, the properties of the function ψ_f stated in (5.3) and (5.4) follow directly from its definition and (3.1). Now, suppose that J satisfies Assumption 4.4, f ∈ C_b(X), and let x = (y, i) ∈ X. To prove that the map D_ψ ∋ (t, T) ↦ ψ_f(x, t, T) is jointly continuous, define

Taking into account (3.1), for every j ∈ I and any (u, t), (u_0, t_0) ∈ Y × R_+, we have

Hence, in view of the continuity of λ and S_j(h, ·) for any h ≥ 0, we can conclude (by applying the Lebesgue dominated convergence theorem) that (u, t) ↦ Λ(t, u, j) is jointly continuous for each j ∈ I. This, together with the continuity of f, S_j and π_ij, j ∈ I, shows that g is jointly continuous as well, which, in turn, guarantees that g ∈ C_b(Y × R_+). Eventually, it now follows from Assumption 4.4 and the continuity of S_i(·, y) that the map D_ψ ∋ (t, T) ↦ Jg(·, T − t)(S_i(t, y)) is jointly continuous, and thus so is the map (t, T) ↦ ψ_f(x, t, T).

(iii): Statement (iii) follows immediately from (ii).

Proofs of Lemmas 5.2 and 5.3
Before we proceed, let us recall that A_P and A_Q stand for the weak generators of {P_t}_{t∈R_+} and {Q_t}_{t∈R_+}, respectively, considered as contraction semigroups on C_b(X), where {Q_t}_{t∈R_+} is defined by (5.5). Also, keep in mind that G and W are the kernels specified in (4.1) and (4.2), respectively, while λ is given by (4.3).
Proof of Lemma 5.2 Let f ∈ C_b(X) and define

for any x ∈ X and T > 0, where ψ_f is the function specified in statement (ii) of Lemma 5.1.
Recall that, according to this lemma, the map D_ψ ∋ (t, T) ↦ ψ_f(x, t, T) is jointly continuous for every x ∈ X (due to Assumption 4.4). We will first show that

To do this, let x ∈ X. Since the maps [0, T] ∋ t ↦ ψ_f(x, t, T), T > 0, are continuous, the mean value theorem for integrals guarantees that, for every T > 0, the first component on the right-hand side of (6.13) is equal to ψ_f(x, t_x(T), T) for some t_x(T) ∈ [0, T]. Consequently, taking into account the continuity of T ↦ ψ_f(x, t_x(T), T) and (5.4), we see that

This, together with the fact that

resulting from (5.3), implies that w*-lim_{T→0} T^{−1}∫_0^T ψ_f(·, t, T) dt = λWf. Further, from l'Hospital's rule it follows that

This all proves that (6.14) holds. Now, referring to (5.1), we can write

which, due to (5.5) and (6.13), gives

In view of (6.14) and the fact that w*-lim_{T→0} e^{−Λ(T,·)} = 1, this observation shows that f ∈ D(A_P) iff f ∈ D(A_Q), and that (5.6) is satisfied for all f ∈ D(A_P) = D(A_Q). The proof of the lemma is therefore complete.
Proof of Lemma 5.3 Obviously, it suffices to show that (5.7) and (5.8) hold.
To this end, let f ∈ C_b(X). Then, using the flow property, for any x = (y, i) ∈ X and any T > 0, we obtain

Making the substitutions t ↦ t + T and h ↦ h + T therefore gives

Further, since ∫_T^t λ(S_i(h, y)) dh = Λ(t, y, i) − Λ(T, y, i) for t ≥ T, it follows that

Hence, for all x = (y, i) ∈ X and T > 0, we have

− e^{Λ(T,y,i)} (1/T) ∫_0^T λ(S_i(t, y)) e^{−Λ(t,y,i)} f(S_i(t, y), i) dt.
Now, using l'Hospital's rule and taking into account the continuity of the integrand on the right-hand side of this equality, we can conclude that

lim_{T→0^+} T^{−1}(Q_T(Gf)(x) − Gf(x)) = λ(x)(Gf(x) − f(x)) for every x ∈ X,

and λ(Gf − f) ∈ C_b(X), since G is Feller. Moreover, bearing in mind (3.1), we see that

for every T ∈ (0, δ) with sufficiently small δ > 0. We have therefore shown that

This obviously means that Gf ∈ D(A_Q) and A_Q(Gf) = λ(Gf − f), which immediately implies (5.7).
What is left is to prove (5.8). To this end, let f ∈ D(A_Q) and x = (y, i) ∈ X. Then

According to statement (ii) of Remark 2.1, we have

whence it finally follows that

which clearly yields (5.8). The proof is now complete.
Application to a particular subclass of PDMPs

In this section, we shall use Theorem 4.5 and [3, Theorem 4.1] (cf. also [4, Theorem 4.1] for the case of constant λ) to provide a set of tractable conditions guaranteeing the existence and uniqueness of a stationary distribution for some particular PDMP, where J is the transition law of a random iterated function system with an arbitrary family of transformations and state-dependent probabilities of selecting them.
Let Θ be a topological space, and suppose that ϑ is a non-negative Borel measure on Θ. Further, consider an arbitrary set {w_θ : θ ∈ Θ} of continuous transformations from Y to itself and an associated collection {p_θ : θ ∈ Θ} of continuous maps from Y to R_+ such that, for every y ∈ Y, θ ↦ p_θ(y) is a probability density function with respect to ϑ. Moreover, assume that (y, θ) ↦ w_θ(y) and (y, θ) ↦ p_θ(y) are product measurable. Given this framework, we are concerned with the kernel J defined by

J(y, B) = ∫_Θ 1_B(w_θ(y)) p_θ(y) ϑ(dθ) for all y ∈ Y and B ∈ B(Y). (7.1)

The transition law P, specified by (3.3), can then be expressed exactly as in [3], i.e.,

for any y ∈ Y, i ∈ I and A ∈ B(X). Moreover, note that, in this case, the first coordinate of Φ satisfies the recursive formula

Y_{n+1} = w_{η_{n+1}}(S_{ξ_n}(∆τ_{n+1}, Y_n)) for n ∈ N_0,

where {η_n}_{n∈N} is a sequence of random variables with values in Θ, such that

P(η_{n+1} ∈ D | Y(τ_{n+1}−) = y) = ∫_D p_θ(y) ϑ(dθ) for D ∈ B(Θ).

The aforementioned [3, Theorem 4.1] (whose proof relies on [12, Theorem 2.1]) provides certain conditions under which the transition operator P, induced by (7.2), is exponentially ergodic (in the so-called bounded Lipschitz distance) and, notably, possesses a unique invariant distribution, which belongs to M^V_1(X). To establish the main result of this section, we shall therefore incorporate several additional requirements, which, along with Assumptions 4.8 and 4.9, will enable us to apply this theorem and, simultaneously, ensure the fulfillment of Assumption 4.4, involved in Theorem 4.5.
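Sampling from a kernel of the form (7.1) amounts to drawing θ from the density p_θ(y) (w.r.t. ϑ) and applying w_θ. The sketch below does this for a toy system with Θ = {0, 1} and ϑ the counting measure; the particular maps w_θ and probabilities p_θ are illustrative assumptions, not choices made in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative choices: Theta = {0, 1} with vartheta the counting measure,
# two contractive maps w_theta, and state-dependent selection densities p_theta.
w = [lambda y: 0.5 * y, lambda y: 0.5 * y + 1.0]

def p(y):
    q = 1.0 / (1.0 + np.exp(-y))   # p_0(y); note p_0(y) + p_1(y) = 1 for every y
    return np.array([q, 1.0 - q])

def J_sample(y):
    """One draw from J(y, .): pick theta with density p_theta(y), then apply w_theta."""
    theta = rng.choice(2, p=p(y))
    return w[theta](y)

# Iterating Y_{n+1} = w_{eta_{n+1}}(Y_n): the interval [0, 2] is invariant here.
y = 0.3
for _ in range(200):
    y = J_sample(y)
    assert 0.0 <= y <= 2.0
```

Both maps contract with ratio 1/2 and keep [0, 2] invariant, so the iterated system stays in a bounded set, which is the kind of behaviour the contractivity-type conditions below are designed to enforce on average.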
In connection with the above, we first make one more assumption on the semiflows S_i:

Assumption 7.1 There exist a Lebesgue measurable function ϕ : R_+ → R_+ satisfying (7.3) and a function L : Y → R_+, bounded on bounded sets, such that

ρ_Y(S_i(t, y), S_j(t, y)) ≤ ϕ(t)L(y) for any t ≥ 0, y ∈ Y and i, j ∈ I.
Further, we employ the following assumption on the probabilities π_ij, associated with the switching of the semiflows:

Assumption 7.2 There exist positive constants L_π and δ_π such that

Finally, let us impose certain conditions on the components of the kernel J, defined by (7.1).
In addition to that, we will require that the constants L, α in Assumption 4.9 and L_w in Assumption 7.4 are interrelated by the inequality

LL_wλ + α < λ. (7.6)

It is worth stressing here that conditions similar in form to those gathered in Assumptions 7.
(ii) Condition (A2) is equivalent to the conjunction of Assumptions 4.9 and 7.1 with ϕ(t) = t. Nevertheless, a simple analysis of the proof of [3, Theorem 4.1] shows that it remains valid if ϕ : R_+ → R_+ is an arbitrary Lebesgue measurable function satisfying (7.3).
(iv) Hypothesis (A4) just corresponds to the assumed Lipschitz continuity of λ. As noted in Remark 7.6, Assumptions 4.8 and 7.3 do not generally imply [3, (A1)]. However, it is easy to check (using the triangle inequality) that such an implication does hold in at least three special cases: first, when all the w_θ are Lipschitz continuous (thus strengthening (7.4)); second, when all the maps p_θ are constant; and third, if there exists a point y* ∈ Y such that S_i(t, y*) = y* for all t ≥ 0 and i ∈ I (in this case, Assumption 4.8 holds trivially).
The following two lemmas justify statement (i) of Remark 7.5. In particular, if J is of the form (7.1), then Assumptions 4.8, 4.9, 7.3 and condition (7.4) yield that (7.7) holds with the constant a given by (7.8). The second part of the assertion follows directly from Lemma 7.7.

Proof
What is now left to establish the main theorem of this section is to show that the model under consideration fulfills Assumption 4.4. It now suffices to observe that both terms on the right-hand side of this estimate tend to 0 as (t, y) → (t_0, y_0). The convergence of the first term follows from the Lebesgue dominated convergence theorem, since w_θ and g are continuous (and the latter is also bounded). The second one converges by condition (7.5) and the boundedness of g, which enables estimating it from above by ‖g‖_∞ L_p ρ_Y(y_0, y).

Theorem 7.10 Let J be of the form (7.1). Further, suppose that Assumptions 4.8, 4.9 and 7.1–7.4 are fulfilled, and that the constants α, L and L_w can be chosen so that (7.6) holds. Moreover, assume that the function λ is Lipschitz continuous. Then the transition semigroup {P_t}_{t∈R_+} of the process Ψ possesses a unique invariant probability measure, which belongs to M^V_1(X).

Remark 4.1

It is easily seen that GW = G̃W̃ = P, where P is given by (3.3).
Remark 7.6 Regarding Remark 7.5(i), the sole reason we employ Assumptions 4.8 and 7.3 in this paper instead of hypothesis [3, (A1)] is to obtain the conclusion of Proposition 4.10(i), which ensures that the invariant measures of {P_t}_{t∈R_+} inherit the property of having finite first moments w.r.t. V from the invariant measures of P. It is worth emphasizing that Assumptions 4.8 and 7.3 do not generally imply hypothesis [3, (A1)].