Abstract
Given a probability space \((\Omega ,\mathcal {A},\mathbb {P})\), a complete separable Banach space X with the \(\sigma \)-algebra \(\mathcal B(X)\) of all its Borel subsets, an operator \(\Lambda :\Omega \rightarrow L(X,X)\) and \(\xi :\Omega \rightarrow X\) we consider the \(\mathcal {B}(X)\otimes \mathcal A\)-measurable function \(f:X\times \Omega \rightarrow X\) given by \(f(x,\omega )=\Lambda (\omega )x+\xi (\omega )\) and investigate the continuous dependence of a weak limit \(\pi ^f\) of the sequence of iterates \((f^n(x,\cdot ))_{n\in \mathbb {N}}\) of f, defined by \(f^0(x,\omega )=x, f^{n+1}(x,\omega )=f(f^n(x,\omega ),\omega _{n+1})\) for \(x\in X\) and \(\omega =(\omega _1,\omega _2,\dots )\). Moreover for X taken as a Hilbert space we characterize \(\pi ^f\) via the functional equation
with the aid of its characteristic function \(\varphi ^f\). We also indicate the continuous dependence of a solution of that equation.
1 Introduction
Fix a probability space \((\Omega ,\mathcal {A},\mathbb {P})\) and a separable Banach space X. By \(\mathcal {B}(X)\) we denote the family of all Borel subsets of X. A map \(f:X\times \Omega \rightarrow X\) measurable with respect to the product algebra \(\mathcal {B}(X)\otimes \mathcal {A}\) (shortly: \(\mathcal {B}(X)\otimes \mathcal {A}\)-measurable) is called a random-valued function or an rv-function. By \(f^n\) we denote the n-th iterate of f, given by
for \(n\in \mathbb N, x\in X\) and \(\omega =(\omega _1,\omega _2,\dots )\) from \(\Omega ^\infty \) defined as \(\Omega ^\mathbb N\). Note that the map \(f^n:X\times \Omega ^{\infty }\rightarrow X\) is \(\mathcal {B}(X)\otimes \mathcal {A}_n\)-measurable, where \(\mathcal {A}_n\) denotes the \(\sigma \)-algebra of all the sets \(\left\{ (\omega _1,\omega _2\ldots ):(\omega _1,\ldots ,\omega _n)\in A\right\} \) and A belongs to the product \(\sigma \)-algebra \(\mathcal {A}^n\). Since \(f^n\) depends only on the first n coordinates of \(\omega \), we can identify \(f^n(\cdot ,\omega _1,\dots ,\omega _n)\) with \(f^n(\cdot ,\omega )\). So \(f^n\) is an rv-function on the probability space \((\Omega ^{\infty },\mathcal {A}_n,\mathbb {P}^{\infty })\) and also on \((\Omega ^{n},\mathcal {A}^n,\mathbb {P}^{n})\). These iterates were defined by K. Baron and M. Kuczma [2], and independently by Ph. Diamond [7] to solve iterative functional equations. In particular they form forward type iterations and are the prototype of random dynamical systems. A result on almost sure (a.s., for short) convergence of \((f^n(x,\cdot ))_{n\in \mathbb N}\) for \(X=[0,1]\) can be found in [17, Sec. 1.4 B]. A simple and useful criterion for a weak convergence of distributions of \(f^n(x,\cdot ), n\in \mathbb N\) to a probability Borel measure \(\pi ^f\) independent of \(x\in X\) for X being a Polish space was proved in [1] and applied to some linear inhomogeneous functional equation.
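The forward iteration scheme above is easy to simulate. The following sketch (a hypothetical illustration, not taken from the paper) uses the rv-function \(f(x,\omega )=x/2+\omega \) on \(X=\mathbb R\) with \(\omega \) uniform on [0, 1]; driving two starting points with the same noise sequence shows the dependence on x dying out geometrically, which is the mechanism behind the weak limit being independent of x.

```python
import random

def f(x, w):
    # A sample rv-function on X = R (hypothetical choice): f(x, w) = x/2 + w.
    return x / 2.0 + w

def iterate(f, x, ws):
    # Forward iteration: f^{n+1}(x, w) = f(f^n(x, w), w_{n+1}).
    for w in ws:
        x = f(x, w)
    return x

random.seed(0)
n = 60
ws = [random.random() for _ in range(n)]  # one noise sequence omega_1, ..., omega_n
# Two different starting points, driven by the SAME noise, end up together:
a = iterate(f, 0.0, ws)
b = iterate(f, 100.0, ws)
print(abs(a - b))  # of order 100 * (1/2)**60, i.e. numerically zero
```

The difference of the two trajectories halves at every step because the noise term cancels, exactly as in the contraction estimate of condition (H\(_f\)) below.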
One of the most important cases of rv-functions is the so called random affine map (see e.g. [11]), which is given by
where \(\eta :\Omega \rightarrow \mathbb R, \xi :\Omega \rightarrow X\) are \(\mathcal {A}\)-measurable. These maps are related to perpetuities (see for instance [11, 12, 16]); they are also applied to refinement type equations [15]. Replacing the random variable \(\eta \) by a random operator, we will consider rv-functions of the form
where \(\Lambda (\omega ):X\rightarrow X\) is a bounded linear (thus continuous) operator for \(\omega \in \Omega \). A function of the form (1.2) will be called a generalized random affine map or GRAM, for short. The main motivation to study such rv-functions is the work of K. Baron [5], where a special case of map (1.2) with the same operator \(\Lambda (\omega )\) for every \(\omega \) was examined.
The first aim of the present paper is to give some natural conditions under which the sequence of iterates of a GRAM f converges in law to \(\pi ^f\), and to establish the continuity of the operator \( f\longmapsto \pi ^f \) by showing how \(\pi ^f\) changes if \(\Lambda \) and \(\xi \) do. This extends the main result of [3] as well as [4, Theorem 1] and [14, Theorem 5.2].
In the case when X is a real Hilbert space a characterization of the limit distribution \(\pi ^f\) by its characteristic function \(\varphi ^f\) via the linear functional equation \(\varphi ^f(u)=\varphi ^f(\Lambda ^*(u))\cdot \varphi ^{\xi }(u)\) was established in [5]. Referring to that paper we will show that the function \(\varphi ^f\) for a GRAM f is the only solution of the equation
in a class of characteristic functions. Moreover, we will indicate continuous dependence in such a characterisation of the limit distribution.
2 Notions and basic facts
Throughout the paper \((X,\Vert \cdot \Vert )\) is a separable Banach space and \((\Omega , \mathcal {A},\mathbb {P})\) is a given probability space. We write B(X) for the space of all bounded Borel functions on X endowed with the supremum norm \(\Vert \cdot \Vert _{\infty }\), and C(X) for its subspace of all continuous (and bounded) functions. The space of all continuous linear operators \(\Lambda :X\rightarrow X\) will be denoted by L(X, X). We use the symbol \(\mathcal {M}_1(X)\) to denote the space of all probability measures defined on \(\mathcal {B}(X)\). For short, we will write \(\int \varphi d\mu \) instead of \(\int _X \varphi (x) \mu (dx)\) for Bochner integrable \(\varphi \) and \(\mu \in \mathcal M_1(X)\) if there is no confusion. We also consider the family of all measures with finite first moment given by
for some \(x_0\in X\). (Clearly \(\mathcal {M}_1^1(X)\) does not depend on \(x_0\).) Recall that a measure \(\mu *\nu \) is a convolution of measures \(\mu \) and \(\nu \) if
We write \(\mu _{\chi }\) to denote a probability distribution of the random variable \(\chi \). Random variables \(\chi :\Omega \rightarrow X\), \(\zeta :\Omega \rightarrow Y\) are called independent if
where \(\mu _{(\chi ,\zeta )}\) is their joint probability distribution. We say that a sequence \((\mu _n)\) of measures from \(\mathcal {M}_1(X)\) converges weakly to \(\mu \) if \(\int fd\mu _n\xrightarrow [n\rightarrow \infty ]{} \int f d\mu \) for every \(f\in C(X)\). We introduce the symbol \(d_{FM}\) to denote the Fortet–Mourier metric (also known as the bounded Lipschitz distance) given by
and additionally \(d_H\) to denote the Hutchinson metric given by
where
Note that the distance between some measures in the Hutchinson metric may be infinite. It is known (see [9, Theorem 11.3.3]) that weak convergence is metrizable by the Fortet–Mourier metric.
With an rv-function \(f:X\times \Omega \rightarrow X\) we may associate a linear operator \(P:\mathcal {M}_1(X)\rightarrow \mathcal {M}_1(X)\) by the formula
which will be used in this paper. It can be shown that P is the Markovian transition operator for the distribution \(\pi _n\) of \(f^n\) given by
i.e.
By the convergence in distribution or in law of the sequence of iterates \((f^n(x, \cdot ))_{n\in \mathbb {N}}\) we mean that the sequence \((\pi _n(x,\cdot ))_{n\in \mathbb {N}}\) converges weakly to a probability distribution.
Following [1] and [13] we consider a family of rv-functions \(f:X\times \Omega \rightarrow X\) which satisfy:
- (H\(_f\)):
-
There exists \(\lambda _f\in (0,1)\) such that
$$\begin{aligned} \int _\Omega \Vert f(x,\omega )-f(y,\omega )\Vert \mathbb P(d\omega )\le \lambda _f \Vert x-y\Vert \quad \textrm{for}\quad x,y\in X \end{aligned}$$and
$$\begin{aligned} \int _\Omega \Vert f(x,\omega )-x\Vert \mathbb P(d\omega )<\infty \quad \mathrm{for\; some \;(thus \;all)} \quad x\in X. \end{aligned}$$
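For a GRAM \(f(x,\omega )=\Lambda (\omega )x+\xi (\omega )\) the first inequality in (H\(_f\)) holds with \(\lambda _f=\mathbb E\Vert \Lambda (\cdot )\Vert \), since the \(\xi \) term cancels in \(f(x,\omega )-f(y,\omega )\). A quick Monte Carlo sanity check of this (the random 2×2 matrices below are an arbitrary illustrative choice, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_Lambda():
    # Hypothetical random operator on X = R^2: a scaled Gaussian 2x2 matrix.
    # The factor 0.3 keeps the expected operator norm below 1.
    return 0.3 * rng.standard_normal((2, 2))

x = np.array([1.0, -2.0])
y = np.array([0.5, 3.0])
Ls = [sample_Lambda() for _ in range(20000)]

# The xi term cancels in f(x, .) - f(y, .), so the contraction condition
# reduces to E||Lambda (x - y)|| <= E||Lambda|| * ||x - y||:
lhs = np.mean([np.linalg.norm(L @ (x - y)) for L in Ls])
lam = np.mean([np.linalg.norm(L, 2) for L in Ls])   # estimate of E||Lambda||
print(lhs, "<=", lam * np.linalg.norm(x - y), "with lambda_f =", lam)
```

The inequality even holds samplewise, since \(\Vert \Lambda (\omega )(x-y)\Vert \le \Vert \Lambda (\omega )\Vert \,\Vert x-y\Vert \) for every \(\omega \).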
A simple criterion [13, Corollary 5.6], cf. [1, Theorem 3.1], for the convergence in distribution of iterates of rv-functions reads as follows:
Proposition 2.1
Assume that an rv-function \(f:X\times \Omega \rightarrow X\) satisfies (H\(_f\)). Then for every \(x\in X\) the sequence of iterates \((f^n(x, \cdot ))_{n\in \mathbb {N}}\) converges in distribution and the limit \(\pi ^f\) does not depend on x. Moreover \(\pi ^f\in \mathcal M_1^1(X)\) and
for \(n\in \mathbb N\) and \(x\in X\).
The geometric rate of convergence allows us to formulate a result concerning the continuity of \(f\longmapsto \pi ^f\). We cite a part of [14, Theorem 4.1] that will be useful in the next section.
Proposition 2.2
Assume that rv-functions f, g satisfy (H\(_f\)) and (H\(_g\)), respectively. Then for limit distributions \(\pi ^f\) and \(\pi ^g\), occurring in Proposition 2.1, we have
where
for \(h\in \{f,g\}\).
Remark 2.3
By condition (H\(_g\)) we mean (H\(_f\)) with every occurrence of f replaced by g. A similar convention will be used for condition (U\(_g\)) in the next section.
3 Continuous dependence of the limit distribution of generalized random affine maps
Fix \(\Lambda :\Omega \rightarrow L(X,X)\) and \(\mathcal A\)-measurable \(\xi :\Omega \rightarrow X\). Since X is separable, the weak, strong (in Bochner's sense) and Borel measurability of the random variable \(\xi \) coincide. To obtain results concerning the convergence in law of GRAM's (1.2) we need to show that (1.2) is an rv-function. To this end we introduce the following:
Definition 3.1
We call a map \(\Lambda :\Omega \rightarrow L(X,X)\) a random operator, if it is \(\mathcal {A}\)-measurable, i.e. \(\Lambda ^{-1}(B)\in \mathcal A\) for every Borel subset B of L(X, X).
Proposition 3.2
If \(\Lambda :\Omega \rightarrow L(X,X)\) is a random operator, then a function \(\Lambda (\cdot )x:\Omega \rightarrow X\) is \(\mathcal {A}\)-measurable for every \(x\in X\).
Proof
Fix \(x\in X\) and define \(\varphi _x:L(X,X)\rightarrow X\) by \(\varphi _x(T)=Tx\). It is obvious that \(\varphi _x\) is linear, and since
it is bounded (thus continuous). Now fix \(B\in \mathcal {B}(X)\); then we have
\(\square \)
Remark 3.3
One can show that for a separable space X if \(\Lambda (\cdot )x:\Omega \rightarrow X\) is \(\mathcal {A}\)-measurable for every \(x\in X\) and \(\Lambda (\omega ):X\rightarrow X\) is continuous for every \(\omega \in \Omega \), then the map \(\Lambda :\Omega \times X\rightarrow X\) with \(\Lambda (x,\omega )=\Lambda (\omega )x\) is \(\mathcal {A}\otimes \mathcal {B}(X)\)-measurable. Moreover, \(\xi \) extended to \(\xi :X\times \Omega \rightarrow X\) by \(\xi (x,\omega )=\xi (\omega )\) is \(\mathcal {A}\otimes \mathcal {B}(X)\)-measurable. Since the sum of \(\mathcal {A}\otimes \mathcal {B}(X)\)-measurable functions with values in a separable space is also \(\mathcal {A}\otimes \mathcal {B}(X)\)-measurable, it follows that \(f:\Omega \times X\rightarrow X\) given by (1.2) is an rv-function.
The main result of this section concerns the continuous dependence of the limit of iterates of GRAM’s. We will formulate it for a family of rv-functions \(f:X\times \Omega \rightarrow X\) which satisfy:
- (U\(_f\)):
-
The function \(f:X\times \Omega \rightarrow X\) has the form \( f(x,\omega )=\Lambda _f(\omega )x+\xi _f(\omega )\), where \(\xi _f:\Omega \rightarrow X\) is \(\mathcal A\)-measurable,
$$\begin{aligned} \mathbb E\Vert \xi _f\Vert =\int _\Omega \Vert \xi _f(\omega )\Vert \mathbb P(d\omega )<\infty , \end{aligned}$$
and \(\Lambda _f:\Omega \rightarrow L(X,X)\) is a random operator satisfying
where \(\Vert \Lambda _f(\omega )\Vert \) is the operator norm of \(\Lambda _f(\omega )\).
Theorem 3.4
Assume that rv-functions f, g satisfy (U\(_f\)) and (U\(_g\)), respectively. Then the sequences of iterates \((f^n(x, \cdot ))_{n\in \mathbb {N}}, (g^n(x, \cdot ))_{n\in \mathbb {N}}\) are convergent in law to the probability distributions \(\pi ^f, \pi ^g\in \mathcal {M}_1^1(X)\), respectively, the limits do not depend on \(x\in X\), and
where \(\alpha =\mathbb {E}\Vert \Lambda _f(\cdot )-\Lambda _g(\cdot )\Vert , \ \beta =\mathbb {E}\Vert \xi _f-\xi _g\Vert .\)
Proof
First, let us observe that (U\(_f\)) implies (H\(_f\)). Indeed,
and
By Proposition 2.1 we infer that there exist probability distributions \(\pi ^f, \ \pi ^g\in \mathcal {M}_1^1(X)\) such that for every \(x\in X\) the sequences \((f^n(x, \cdot ))_{n\in \mathbb {N}}\), \((g^n(x, \cdot ))_{n\in \mathbb {N}}\) are convergent in law to \(\pi ^f, \pi ^g\), respectively.
The rest of the proof runs similarly to the proof of [14, Theorem 5.2] which concerns (1.1). For the convenience of the reader we repeat the relevant computations after appropriate changes for the case of GRAM’s, thus making our exposition self-contained. So fix \(k\in \mathbb {N}\) and let us define \(\Lambda _k:\Omega ^{\infty }\rightarrow L(X,X)\) and \(\xi _k:\Omega ^{\infty }\rightarrow X\) by \(\Lambda _k (\omega )=\Lambda _f(\omega _{k}), \ \xi _k (\omega )=\xi _f(\omega _{k})\), where \(\omega =(\omega _1,\omega _2\ldots )\in \Omega ^{\infty }\), and observe that for \(\omega \in \Omega ^{\infty }\) and \(x\in X\)
where
and \(\circ \) is a composition. From that
Then
and from the inequality
we have
Since \(\Vert \xi _{k-1}\Vert , \Vert \Lambda _k(\cdot )\Vert , \ldots , \Vert \Lambda _n(\cdot )\Vert \) are independent it follows that
Therefore for the function \(\alpha _f(x)\) given by (2.3) we obtain
A similar inequality holds for \(\alpha _g(x)\). Taking \(\lambda _f=\mathbb {E}\Vert \Lambda _{f}(\cdot )\Vert , \lambda _g=\mathbb {E}\Vert \Lambda _{g}(\cdot )\Vert \) and applying Proposition 2.2 we finish the proof. \(\square \)
Corollary 3.5
Assume that rv-functions f, g have the form
with \(\Lambda _f,\Lambda _g\in L(X,X)\) such that \(\Vert \Lambda _f\Vert <1\), \(\Vert \Lambda _g\Vert <1\) and \(\xi _f, \xi _g:\Omega \rightarrow X\) such that \(\mathbb {E}\Vert \xi _f\Vert <\infty \), \(\mathbb {E}\Vert \xi _g\Vert <\infty \). Then the sequences of iterates \((f^n(x, \cdot ))_{n\in \mathbb {N}}\), \((g^n(x, \cdot ))_{n\in \mathbb {N}}\) are convergent in law to the probability distributions \(\pi ^f\), \(\pi ^g\in \mathcal {M}_1^1(X)\), respectively, the limits do not depend on \(x\in X\), and
where \(\alpha =\Vert \Lambda _f-\Lambda _g\Vert , \ \beta =\mathbb {E}\Vert \xi _f-\xi _g\Vert .\)
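The continuity asserted in Corollary 3.5 can be observed in simulation by coupling f and g through the same noise: when \(\alpha \) and \(\beta \) are small, the coupled trajectories, and hence the limit laws, stay close. The scalar parameters below are hypothetical choices for illustration only.

```python
import random

random.seed(4)
lam_f, lam_g = 0.50, 0.52   # deterministic Lambda's, so alpha = 0.02
shift = 0.01                # xi_g = xi_f + 0.01, so beta = 0.01

x_f = x_g = 0.0
for _ in range(500):
    w = random.gauss(0.0, 1.0)        # shared noise couples the two chains
    x_f = lam_f * x_f + w             # one step of f
    x_g = lam_g * x_g + (w + shift)   # one step of g
# x_f, x_g are (approximate) samples of pi^f, pi^g; the coupling keeps them
# within an error of order (alpha + beta) / (1 - lambda).
print(abs(x_f - x_g))
```

This coupling argument is the informal content of the bound in Theorem 3.4: the distance between the limit laws is controlled by \(\alpha \) and \(\beta \).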
Corollary 3.5 given above extends the main result of [3] as well as [4, Theorem 1]. Due to this result we can generalize [4, Theorem 3] and [5, Theorem 3.1]; see Theorems 4.10, 4.22.
4 Characterisation of the limit distribution
Let \((\Omega ,\mathcal {A},\mathbb {P})\) be a probability space. In this section X is a separable real Hilbert space with the inner product \((\cdot |\cdot )\); however, whenever the Hilbert structure is not needed, we will emphasize it by formulating the statement for a Banach space. We define a characteristic function \(\varphi ^f\) of the rv-function f, assuming that the iterates \((f^n(x, \cdot ))_{n\in \mathbb {N}}\) converge in law and the limit does not depend on x; in such a case we denote by \(\pi ^f\) the distribution of the limit, i.e.
Definition 4.1
A function \(\varphi ^{\chi }:X\rightarrow \mathbb {C}\) given by
is called a characteristic function of the X-valued random variable \(\chi \) with distribution \(\mu _{\chi }\).
Definition 4.2
A function \(\varphi ^{f}:X\rightarrow \mathbb {C}\) given by
is called a characteristic function of the rv-function f.
The problem of characterization of the limit distribution \(\pi ^f\) via a functional equation for its characteristic function \(\varphi ^f\) was considered in [5]. The author showed that for the rv-function f given by
with \(\Lambda \in L(X,X)\) such that \(\Vert \Lambda \Vert <1\) and a random variable \(\xi :\Omega \rightarrow X\) such that \(\mathbb {E}\Vert \xi \Vert <\infty \) its characteristic function \(\varphi ^f\) is the only solution of the equation
where \(\Lambda ^*\) stands for the adjoint operator to \(\Lambda \), which satisfies \((\Lambda ^*u|z)=(u|\Lambda z)\) for every \(u,z\in X\). Our goal is to generalize this result to GRAM's. First we give some preliminary facts, which will be needed in the general setting.
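Before moving to the general case, the fixed-operator equation above can be verified in the simplest scalar setting (an illustrative assumption, not part of the paper): take \(X=\mathbb R\), a deterministic \(\Lambda =\lambda \in (-1,1)\) and \(\xi \sim N(0,1)\). Then \(\pi ^f\) is the law of \(\sum _{k}\lambda ^k\xi _k\), i.e. \(N(0,1/(1-\lambda ^2))\), and the equation \(\varphi ^f(u)=\varphi ^f(\lambda u)\cdot \varphi ^{\xi }(u)\) reduces to an identity of Gaussian exponents, since \(\lambda ^2/(1-\lambda ^2)+1=1/(1-\lambda ^2)\):

```python
import math

lam = 0.6   # a deterministic Lambda with |lam| < 1 (illustrative choice)

def phi_xi(u):
    # Characteristic function of xi ~ N(0, 1) (real-valued, mean zero).
    return math.exp(-u * u / 2.0)

def phi_f(u):
    # pi^f is the law of sum_k lam^k xi_k, i.e. N(0, 1 / (1 - lam^2)).
    return math.exp(-u * u / (2.0 * (1.0 - lam * lam)))

# Check the functional equation phi_f(u) = phi_f(lam * u) * phi_xi(u):
for u in (0.0, 0.5, 1.0, 2.0, -3.0):
    assert abs(phi_f(u) - phi_f(lam * u) * phi_xi(u)) < 1e-12
print("functional equation verified on the grid")
```
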
Lemma 4.3
Let X be a Banach space. Assume that a random operator \(\Lambda :\Omega \rightarrow L(X,X)\) and a random variable \(\xi :\Omega \rightarrow X\) are independent. If \(x\in X\), then \(\Lambda (\cdot ) x:\Omega \rightarrow X\) and \(\xi :\Omega \rightarrow X\) are independent.
Proof
Fix \(x\in X\). Let us define \(\tau _x:L(X,X)\times X\rightarrow X^2\) by
Observe that \(\tau _x\) is well defined and continuous in the product topology (since \(\Vert Tx-Sx\Vert \le \Vert T-S\Vert \,\Vert x\Vert \) for \(T,S\in L(X,X)\)), and thus \(\mathcal {B}(L(X,X))\otimes \mathcal {B}(X)\)-measurable. Denote the distribution of \(\Lambda (\cdot )x\) by \(\mu _{\Lambda x}\). We claim that \(\mu _{(\Lambda x,\xi )}(B)=\mu _{(\Lambda ,\xi )}(\tau _x^{-1}(B))\) for \(B\in \mathcal {B}({X^2})\). Indeed we have
It remains to show that \(\mu _{\Lambda x}\otimes \mu _{\xi }(B)=\mu _{\Lambda }\otimes \mu _{\xi }(\tau _x^{-1}(B))\) for \(B\in \mathcal {B}({X^2})\). Define \(B_y=\left\{ \overline{x}:(\overline{x},y)\in B\right\} \) and now we have the following
Finally by the assumption of independence we obtain
which ends the proof. \(\square \)
Lemma 4.4
Let X be a Banach space and \(n\in \mathbb {N}\). Assume that \(\Lambda :\Omega \rightarrow L(X,X)\) is a random operator and \(\psi :\Omega ^{n}\rightarrow X\), \(\xi :\Omega \rightarrow X\) are random variables. Define \(\psi _n:\Omega ^{\infty }\rightarrow X\), \(\Lambda _{n+1}:\Omega ^{\infty }\rightarrow L(X,X)\), \(\xi _{n+1}:\Omega ^{\infty }\rightarrow X\) by
and \(\Lambda \psi _{n+1}:\Omega ^{\infty }\rightarrow X\) by
where \(\omega =(\omega _1,\omega _2,\ldots )\in \Omega ^{\infty }. \) If \(\Lambda _{n+1}\) and \(\xi _{n+1}\) are independent, then \(\Lambda \psi _{n+1}\) and \(\xi _{n+1}\) are also independent.
Proof
Fix \(B\in \mathcal {B}(X^2)\). Put
and
for \(\omega _1,\dots ,\omega _{n+1}\in \Omega \). Then
where the last equality holds due to Lemma 4.3. Therefore
which ends the proof. \(\square \)
Corollary 4.5
Let X be a separable Banach space. Assume that an rv-function \(f:X\times \Omega \rightarrow X\) is given by (1.2), where \(\Lambda :\Omega \rightarrow L(X,X)\) is a random operator and \(\xi :\Omega \rightarrow X\) is a random variable. If \(\Lambda \) and \(\xi \) are independent, \(x\in X\) and \(n\in \mathbb {N}\), then \(\Lambda _{n+1}(\cdot )f^n(x,\cdot ):\Omega ^{\infty }\rightarrow X\) with
and \(\xi _{n+1}:\Omega ^\infty \rightarrow X \) with \(\xi _{n+1}(\omega )=\xi (\omega _{n+1})\) are independent.
Having proved independence we also have to characterise the probability distribution of the sum of independent random variables. It is well known that such a distribution can be described as the convolution of the distributions of the summands. More precisely, we have:
Theorem 4.6
Let X be a separable Banach space. If \(\eta :\Omega \rightarrow X\), \(\xi :\Omega \rightarrow X\) are independent random variables, then
Definition 4.7
If \(\Lambda :\Omega \rightarrow L(X,X)\) is a random operator, then a map \(\Lambda ^*:\Omega \rightarrow L(X,X)\) satisfying
is called an adjoint random operator to \(\Lambda \).
Lemma 4.8
A function \(\Lambda ^*:X\times \Omega \rightarrow X\) given by \(\Lambda ^*(x,\omega )=\Lambda ^*(\omega )x\) is \(\mathcal {B}(X)\otimes \mathcal {A}\)-measurable.
Proof
According to Remark 3.3 it is enough to show that \(\Lambda ^*(\cdot )x:\Omega \rightarrow X\) is \(\mathcal {A}\)-measurable for every \(x\in X\). Fix \(x\in X\) and observe that \((x|\Lambda (\omega )y):\Omega \rightarrow \mathbb {R}\) is \(\mathcal {A}\)-measurable for every \(y\in X\). By the Riesz Representation Theorem for every linear functional \(y^*:X\rightarrow \mathbb {R}\) there exists y such that
Therefore from the \(\mathcal {A}\)-measurability of \((x|\Lambda (\cdot )y):\Omega \rightarrow \mathbb {R}\) we conclude that \(\Lambda ^*(\cdot )x\) is weakly measurable. Since X is separable, we may conclude that \(\Lambda ^*(\cdot )x\) is strongly measurable and consequently \(\mathcal {A}\)-measurable. \(\square \)
Remark 4.9
Note that \(\Vert \Lambda ^*(\cdot )\Vert :\Omega \rightarrow [0,\infty )\) is \(\mathcal {A}\)-measurable due to the equality
The following theorem characterizes the limit distribution of GRAM’s and it generalizes [5, Theorem 3.1] (see Remark 4.12).
Theorem 4.10
Assume that an rv-function f has the form (1.2) with a random operator \(\Lambda :\Omega \rightarrow L(X,X)\) and a random variable \(\xi :\Omega \rightarrow X\) such that \(\mathbb {E}\Vert \Lambda (\cdot )\Vert <1\), \(\mathbb {E}\Vert \xi \Vert <\infty \). Moreover, assume that \(\Lambda \) and \(\xi \) are independent. Then the characteristic function \(\varphi ^f\) of f is the only solution of the equation
which is continuous at zero, bounded and fulfills \(\varphi ^f(0)=1\).
Lemma 4.11
Let \((\Omega ,\mathcal {A},\mathbb {P})\) be an arbitrary probability space. Suppose that the independent and identically distributed random variables \(\zeta _i:\Omega \rightarrow \mathbb {R}\), \(i\in \mathbb {N}\) fulfil the following properties
-
1.
\(\zeta _i \ge 0\)
-
2.
\(0\le \mathbb {E}\zeta _i <1\).
Then the sequence \((\prod _{i=1}^{n}\zeta _i)_{n\in \mathbb {N}}\) converges a.s. to zero.
Proof
To show convergence we will consider three cases:
I. If \(\mathbb {E}\zeta _i=0=\int _{\Omega }\zeta _i(\omega )\mathbb {P}(d\omega )\), then \(\zeta _i=0\) a.s., and hence so is \(\prod _{i=1}^{n}\zeta _i\).
II. Assume that \(0<\mathbb {E}\zeta _i <1\) and \(\mathbb {P}(\zeta _i=0)=p>0\). Then
Define a set \(A_n=\left\{ \omega \in \Omega : \ \prod _{i=1}^{n}\zeta _i(\omega )\ne 0\right\} \) and observe that \(A_{n+1}\subset A_n\), and
By the continuity of the measure it follows that
III. Now assume that \(0<\mathbb {E}\zeta _i <1\), and \(\mathbb {P}(\zeta _i=0)=0\). From Jensen’s inequality we have \(\mathbb {E}\log \zeta _i\le \log \mathbb {E}\zeta _i<0\). Observe that
If \(-\infty <\mathbb {E}\log \zeta _1\) then by the independence of \(\zeta _i's\) we can apply the Strong Law of Large Numbers, hence for \(0<\epsilon <|\mathbb {E}\log \zeta _1|\) there exists \(N_{\epsilon }\in \mathbb {N}\) such that
Therefore for \(n>N_{\epsilon }\) it holds that
Letting \(n\rightarrow \infty \) we obtain
If \(\mathbb {E}\log \zeta _1=-\infty \), then we can apply [10, Theorem 2.4.5], from which we conclude that
Hence
Summarizing, we obtain the convergence in all cases. \(\square \)
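Lemma 4.11 is also easy to observe numerically. In the sketch below the \(\zeta _i\) are uniform on [0, 1.8] (a hypothetical choice), so \(\mathbb E\zeta _i=0.9<1\); every sampled product path collapses to zero:

```python
import random

random.seed(2)

def product_path(n):
    # zeta_i i.i.d. uniform on [0, 1.8]: nonnegative with E zeta_i = 0.9 < 1
    # (and E log zeta_i = log 1.8 - 1 < 0, so the product dies geometrically,
    # in line with case III of the proof).
    p = 1.0
    for _ in range(n):
        p *= random.uniform(0.0, 1.8)
    return p

paths = [product_path(300) for _ in range(1000)]
print(max(paths))  # even the largest of 1000 product paths is essentially zero
```

Note that individual factors exceed 1 quite often; it is the negativity of \(\mathbb E\log \zeta _i\), guaranteed by Jensen's inequality, that drives the product to zero.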
Proof of Theorem 4.10
A random operator \(\Lambda :\Omega \rightarrow L(X,X)\) can be considered as an rv-function \(\Lambda :X\times \Omega \rightarrow X\) due to its measurability (see Sect. 3) and consequently we can associate it with a linear operator Q given by
Now let us define \(\pi ^{\Lambda f}_n:X\times \mathcal {B}(X)\rightarrow [0,1]\) by
and observe that
Indeed, for fixed \(x\in X\), \(B\in \mathcal {B}(X)\) it holds that
So now, by Corollary 4.5 and Theorem 4.6 we see that
It can be easily shown that the Markov operator Q has the Feller property. To this end, first observe that
For a fixed \(\psi \in C(X)\) take an arbitrary \(x_0\in X\) and note that for every \((x_n)_{n\in \mathbb {N}}\) such that \(x_n\xrightarrow []{n\rightarrow \infty }x_0\) we have \(\psi (\Lambda (\omega )x_n)\xrightarrow []{n\rightarrow \infty }\psi (\Lambda (\omega )x_0)\) for every \(\omega \in \Omega \). Let us define \(\varphi _n(\omega )=\psi (\Lambda (\omega )x_n)\) and \(\varphi _0(\omega )=\psi (\Lambda (\omega )x_0)\). Since \(|\varphi _n(\omega )|\le \Vert \psi \Vert _{\infty }\) for \(\omega \in \Omega \), \(n\in \mathbb {N}\) we can apply the Lebesgue Dominated Convergence theorem and hence
Because \(x_0\), \((x_n)_{n\in \mathbb {N}}\) and \(\psi \) are arbitrary, we have \(Q^*(C(X))\subset C(X)\). From that and [18, Theorem 1.1, Ch. III] we can pass to the limit with n and obtain
Now from the definition of the characteristic function we make the following computations
This shows that \(\varphi ^f\) satisfies (4.1).
It remains to show the uniqueness of the solution of (4.1). To do this, let us assume that \(\varphi \) is a bounded, continuous at zero solution of (4.1) and \(\varphi (0)=1\). Then observe that
where
It follows that for every \(n\in \mathbb {N}\) we can write
Since \(\Vert \Lambda ^*(\omega ) \Vert =\Vert \Lambda (\omega ) \Vert \) for every \(\omega \in \Omega \), we have \(\mathbb {E} \Vert \Lambda ^*(\cdot )\Vert =\mathbb {E} \Vert \Lambda (\cdot )\Vert <1\). Taking \(\zeta _i(\omega )= \Vert \Lambda ^*(\omega _i)\Vert \) for \(\omega =(\omega _1,\omega _2,\ldots )\in \Omega ^{\infty }\) we see that
By Lemma 4.11 we conclude that the sequence \(\big (\Vert (\Lambda ^*)^n(\cdot )(u)\Vert \big )_{n\in \mathbb N}\) converges a.s. to zero.
Fix \(n\in \mathbb {N}\) and let us define random variables \(\eta _n, \theta _n:\Omega ^{\infty }\rightarrow \mathbb {C}\), respectively, by
Hence we can rewrite (4.3) as
and thus we obtain
Observe that \(|\theta _n(\omega )-1|\le \Vert \varphi \Vert _{\infty }+1\) and \((\theta _n)_{n\in \mathbb N}\) converges a.s. to 1, by the continuity of \(\varphi \) at zero. Therefore, from the Lebesgue dominated convergence theorem it can be concluded that
Hence, letting \(n\rightarrow \infty \) we obtain
which completes the proof. \(\square \)
Remark 4.12
Note that under the assumptions of Theorem 4.10 the following statements hold:
-
(i)
The characteristic function \(\varphi ^f\) is the only solution of the equation (4.1) which is Lipschitz, continuous at zero and fulfills \(\varphi (0)=1\).
-
(ii)
If \(\Lambda \) does not depend on \(\omega \), i.e. \(\Lambda (\omega )\) is the same operator for all \(\omega \in \Omega \), then \(\varphi ^f\) is the only solution of the equation (4.1) which is continuous at zero and fulfills \(\varphi (0)=1\).
To show assertion (i) observe that for a function \(\varphi \) which is a solution of (4.1) and a Lipschitz constant \(M>0\) of \(\varphi \), the following inequalities hold,
which yields (4.4).
When (ii) holds, the formula (4.3) reduces to
for any \(n\in \mathbb N\). Letting \(n\rightarrow \infty \) we obtain
\(\square \)
Remark 4.13
Note that the expression (4.4) is in fact a formula for the unique solution \(\varphi \) of (4.1). In particular, when \(\Lambda \) does not depend on \(\omega \), this solution takes the form (4.5); it can also be found in [5, Theorem 3.1].
We now give an example of a GRAM which satisfies the assumptions of Theorem 4.10.
Example 4.14
Let us consider random variables \(\xi :\Omega \rightarrow X\) and \(\kappa :\Omega \rightarrow \mathbb {N}\). Take a countable family of linear bounded operators \(T_i:X\rightarrow X\), \(i\in \mathbb {N}\). We define \(\Lambda :\Omega \rightarrow L(X,X)\) as
Then the following statements hold:
-
(i)
\(\Lambda \) is a random operator.
-
(ii)
If \(\xi \) and \(\kappa \) are independent, then so are \(\xi \) and \(\Lambda \).
-
(iii)
The expected value of \(\Vert \Lambda (\cdot )\Vert \) is equal to
$$\begin{aligned} \mathbb {E}\Vert \Lambda (\cdot )\Vert =\sum _{i\in \mathbb {N}}\mu _{\kappa }(\left\{ i\right\} )\Vert T_i\Vert . \end{aligned}$$ -
(iv)
The adjoint random operator \(\Lambda ^*\) has the form
$$\begin{aligned} \Lambda ^*(\omega )=T^*_{\kappa (\omega )}. \end{aligned}$$
Assertion (i) follows from the fact that \(\Lambda \) can be rewritten in the form
Hence it can be easily seen that \(\Lambda \) is \(\mathcal {A}\)-measurable. To show statement (ii) assume that \(\xi \) and \(\kappa \) are independent and observe that \(\mu _{\Lambda }\) has the form
and
From that
Now fix \(B\in \mathcal {B}(L(X,X))\otimes \mathcal {B}(X)\), define \(B_T\in \mathcal {B}(\mathbb {N})\otimes \mathcal {B}(X)\) as
and observe that
where \(B^x=\left\{ y\in X:(x,y)\in B\right\} \), \(x\in L(X,X)\). An easy computation shows that
Statement (iii) is obvious. Finally to show (iv) fix \(i\in \mathbb {N}\) and observe that for \(\omega \in \kappa ^{-1}(\left\{ i\right\} )\) we have
Therefore \(\Lambda ^*(\omega )=T_i^*\) for \(\omega \in \kappa ^{-1}(\left\{ i\right\} )\). From that we obtain
By statements (i)–(iv) we can consider an rv-function f of the form
and if we assume additionally that
then Theorem 4.10 allows us to claim that (provided that \(\kappa \) and \(\xi \) are independent) the characteristic function \(\varphi ^f\) is the only solution of the equation
which is bounded, continuous at zero and \(\varphi (0)=1\). \(\square \)
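A construction of the type in Example 4.14 can be sketched numerically; the two 2×2 matrices and the law of \(\kappa \) below are hypothetical choices, picked only so that \(\mathbb E\Vert \Lambda (\cdot )\Vert =\sum _i\mu _{\kappa }(\{i\})\Vert T_i\Vert <1\):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical operators T_1, T_2 on X = R^2 and a law for kappa:
T = [np.array([[0.5, 0.1], [0.0, 0.4]]),
     np.array([[0.2, 0.0], [0.3, 0.6]])]
p = [0.5, 0.5]                      # mu_kappa({1}) = mu_kappa({2}) = 1/2

# Statement (iii): E||Lambda(.)|| = sum_i mu_kappa({i}) ||T_i||, here < 1:
E_norm = sum(pi * np.linalg.norm(Ti, 2) for pi, Ti in zip(p, T))

def f(x):
    # One GRAM step f(x, omega) = T_{kappa(omega)} x + xi(omega),
    # with xi standard Gaussian, drawn independently of kappa.
    i = rng.choice(len(T), p=p)
    return T[i] @ x + rng.standard_normal(2)

x = np.array([50.0, -50.0])
for _ in range(200):
    x = f(x)
# x is now an (approximate) sample from the limit distribution pi^f.
print(E_norm, x)
```
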
It is worth pointing out that if we consider the class of solutions \(\varphi \) of the equation (4.1) (or in particular of (4.6)) which need not be bounded or Lipschitz, then such a class can contain more than one solution, as is shown in the example given below.
Example 4.15
Fix \(a\in \mathbb {R}\) such that \(|a|>1\) and \(p\in \left( 0,\frac{1}{1+|a|}\right) \) and let \(X=\mathbb {R}\). Let operators \(T_i:\mathbb {R}\rightarrow \mathbb {R}, i\in \left\{ 1,2\right\} \) be given, respectively, by
Set a random variable \(\kappa :\Omega \rightarrow \mathbb {N}\) with the following distribution
It can be easily seen that for a random operator \(\Lambda \) given by
we have
Observe furthermore that \(\Lambda \) and \(\Lambda ^*\) have the same distribution.
Now consider a random variable \(\xi :\Omega \rightarrow \mathbb {R}\), independent of \(\kappa \), with \(\mu _{\xi }=\delta _0\). Then \(\varphi ^\xi \equiv 1\). It is easy to check that \(\varphi ^f\equiv 1\) and it is a solution of the equation
However, it is not unique in the family of functions \(\varphi \) continuous at zero which satisfy \(\varphi (0)=1\). To see this, take a function \(\varphi _0:\mathbb {R}\rightarrow \mathbb {R}\) with
Let us see that \(\varphi _0\) is continuous on its domain, \(\varphi _0(0)=1\) and
so \(\varphi ^f\) is not the unique continuous solution of the equation (4.7) having value 1 at zero. \(\square \)
For GRAM’s f given above, the natural question arises whether the operator \( (\Lambda ,\xi ) \longmapsto \varphi ^f \) is continuous and in what sense. Before we formulate an appropriate result, we present some additional facts in which \((X,\rho )\) is a metric space and
for \(\alpha \in (0,\infty )\), and B(X, Y) is a set of all bounded functions acting on X into Y.
Definition 4.16
Let \((X,\rho )\) be a separable and complete metric space and let \((Y,\Vert \cdot \Vert )\) be a Banach space. We define a metric \(d_H^{X,Y}\) on \(\mathcal M_1(X)\) by the formula
Proposition 4.17
Assume that the spaces X and Y are nontrivial. Then the metric \(d^{X,Y}_H\) is independent of the choice of the spaces X and Y; moreover, \(d^{X,Y}_H(\mu ,\nu )=d_H(\mu ,\nu )\) for every \(\mu ,\nu \in \mathcal {M}_1(X)\).
Proof
Fix \(u\in Lip_1(X)\) and \(x_0\in Y\) such that \(\Vert x_0\Vert =1\). Put \(\varphi _0 (x)=u(x)\cdot x_0\) for \(x\in X\), then \(\varphi _0\in Lip_1 (X,Y)\) and it is integrable in Bochner’s sense with respect to any probability measure, so we have
Since u is arbitrary, we can take the supremum on the left hand side of the inequality and as a consequence we obtain \(d_H\le d^{X,Y}_H.\)
Now fix \(\varphi \in Lip_1(X,Y)\) and \(\mu , \nu \in \mathcal {M}_1(X)\). Then there exists \(y^*\in Y^*\) such that \(\Vert y^*\Vert =1\) and
by the Hahn–Banach theorem. Applying the Hille Theorem (see e.g. [8, Theorem 6 Ch. II]) we deduce that
and since \( y^*\circ \varphi \in Lip_1(X)\) we finally obtain \(d_H\ge d^{X,Y}_H.\) \(\square \)
Lemma 4.18
If \(u\in X\setminus \{0\}\) and a function \(\psi :X\rightarrow \mathbb {C}\) is given by \(\psi (z)= e^{i(u|z)}\), then \(\psi \in Lip_{\Vert u\Vert }(X,\mathbb {C})\).
Proof
Since \((u|z)\in \mathbb {R}\) for every \(u,z\in X\), it follows that
This completes the proof. \(\square \)
Proposition 4.19
Let \(f,g:X\times \Omega \rightarrow X\) be rv-functions. Assume that the iterates \((f^n(x, \cdot ))_{n\in \mathbb {N}}\), \((g^n(x, \cdot ))_{n\in \mathbb {N}}\) converge in law to \(\pi ^f\) and \(\pi ^g\), respectively, and the limits \(\pi ^f, \pi ^g\) do not depend on x. Then the following inequality for the characteristic functions \(\varphi ^f\) and \(\varphi ^g\) holds
for every \(u\in X\).
Proof
Fix \(u\in X\setminus \{0\}\) and define \(\psi :X\rightarrow \mathbb {C}\) as \(\psi (z)=e^{i(u|z)}\). Then \(\frac{1}{\Vert u\Vert }\psi \in Lip_{1}(X,\mathbb {C})\), by Lemma 4.18. Using Proposition 4.17 we see that
This ends the proof. \(\square \)
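In short, the proof combines Lemma 4.18 with Proposition 4.17 (applied with \(Y=\mathbb {C}\)); a sketch of the chain, with (4.8) as its final estimate:

```latex
\[
|\varphi ^f(u)-\varphi ^g(u)|
=\Big|\int _X \psi \,d\pi ^f-\int _X \psi \,d\pi ^g\Big|
=\Vert u\Vert \,\Big|\int _X \tfrac{1}{\Vert u\Vert }\psi \,d\pi ^f
 -\int _X \tfrac{1}{\Vert u\Vert }\psi \,d\pi ^g\Big|
\le \Vert u\Vert \,d_H(\pi ^f,\pi ^g).
\]
```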
Remark 4.20
Inequality (4.8) cannot be strengthened to
as the example given below shows.
Example 4.21
Fix \(a\in \mathbb {R}\). For \(n\in \mathbb {N}\) let \(\xi _n:\Omega \rightarrow X\) be a random variable uniformly distributed on the interval \(\left[ a,a+\frac{1}{n}\right] \). (We assume, of course, that such \(\xi _n\)'s can be constructed; this is possible, for instance, when \((\Omega ,\mathcal A,\mathbb P)\) is the unit interval with the Lebesgue measure.) Define rv-functions \(f_n,g:X\times \Omega \rightarrow X\) by
Observe that the k-th iterate of \(f_n\) satisfies \(f_n^k(x,\omega _1,\ldots ,\omega _k)=\xi _n(\omega _k)\) and \(g^k(x,\omega )=a\). So we can write
Additionally let us see that
The characteristic functions of the above distributions have the following forms
For every \(c\in Lip_1(\mathbb {R})\) we compute
Taking the supremum over all \(c\in Lip_1(\mathbb {R})\) we obtain
It is easily seen that \(\varphi ^{f_n}(u)\xrightarrow {n\rightarrow \infty }\varphi ^g(u)\) for every \(u\in X\), but
for every \(n\in \mathbb {N}\). Hence
Therefore the sequence \(\big (\varphi ^{f_n}\big )_{n\in \mathbb N}\) is not convergent to \(\varphi ^g \) in the supremum norm \(\Vert \cdot \Vert _{\infty }\). \(\square \)
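The computations of Example 4.21 can be checked numerically for \(X=\mathbb {R}\). The sketch below (variable names and the sample points are our own choices) evaluates the characteristic function of the uniform law on \([a,a+\frac{1}{n}]\) and of \(\delta _a\), confirming the pointwise convergence \(\varphi ^{f_n}(u)\rightarrow \varphi ^g(u)\) and the persistent sup-norm gap:

```python
import numpy as np

def phi_fn(u, a, n):
    # Characteristic function of the uniform distribution on [a, a + 1/n]:
    # phi(u) = exp(i*a*u) * (exp(i*u/n) - 1) / (i*u/n) for u != 0.
    if u == 0:
        return 1.0 + 0.0j
    theta = u / n
    return np.exp(1j * a * u) * (np.exp(1j * theta) - 1.0) / (1j * theta)

def phi_g(u, a):
    # Characteristic function of the Dirac measure concentrated at a.
    return np.exp(1j * a * u)

a = 1.0

# Pointwise convergence: at a fixed u the difference tends to 0 as n grows.
u = 3.0
diffs = [abs(phi_fn(u, a, n) - phi_g(u, a)) for n in (1, 10, 100, 1000)]

# No uniform convergence: at u = 2*pi*n the uniform factor vanishes, so
# |phi^{f_n}(u) - phi^g(u)| = 1 for every n (checked here for n = 10).
gap = abs(phi_fn(2.0 * np.pi * 10, a, 10) - phi_g(2.0 * np.pi * 10, a))
```

Here `diffs` decreases towards 0 while `gap` equals 1 up to floating-point error, matching the conclusion of the example.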
Now we turn to the second theorem of this section, which extends [4, Theorem 3]. In this theorem X is a real separable Hilbert space and \(\varphi ^f,\varphi ^g\) denote the characteristic functions of \(\pi ^f, \pi ^g\), which exist by Theorem 3.4. The announced theorem is a straightforward consequence of Theorem 3.4 and Proposition 4.19, and reads as follows.
Theorem 4.22
Assume that rv-functions f, g satisfy (U\(_f\)) and (U\(_g\)), respectively. Then
where \(\alpha =\mathbb {E}\Vert \Lambda _f(\cdot )-\Lambda _g(\cdot )\Vert , \ \beta =\mathbb {E}\Vert \xi _f-\xi _g\Vert .\)
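As a purely numerical illustration of this continuity for \(X=\mathbb {R}\) (all parameters, sample sizes and the bound checked below are our own choices, not taken from the theorem), one can simulate the affine iteration \(x\mapsto \Lambda x+\xi \) for two nearby noise terms and compare the empirical characteristic functions of the limit laws:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_limit_law(lam, shift, n_steps=60, n_paths=20000):
    # Iterate x_{k+1} = lam * x_k + xi_k with xi_k ~ U(shift, shift + 1);
    # for |lam| < 1 the law of x_n converges weakly to the law of the
    # perpetuity sum_k lam^k xi_k, independently of the starting point.
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        x = lam * x + rng.uniform(shift, shift + 1.0, n_paths)
    return x

lam = 0.5
xf = sample_limit_law(lam, 0.0)  # plays the role of f
xg = sample_limit_law(lam, 0.1)  # xi_g = xi_f + 0.1: alpha = 0, beta = 0.1

def ecf(samples, u):
    # Empirical characteristic function at the point u.
    return np.mean(np.exp(1j * u * samples))

u = 2.0
diff = abs(ecf(xf, u) - ecf(xg, u))
# The two exact limit laws differ by the deterministic shift
# beta / (1 - lam) = 0.2, so diff stays of the order of |u| * beta.
```

With these choices `diff` stays below \(\Vert u\Vert \beta /(1-\lambda )=0.4\), in the spirit of the continuity expressed by the theorem.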
Remark 4.23
The main results of [4, 5] concern rv-functions of the form \(f(x,\omega )=\Lambda x+\xi _f(\omega )\) with a fixed \(\Lambda \in L(X,X)\). In particular the author examines a kind of continuity of the map \(\xi _f\longmapsto \varphi ^f\). Note that this is the special case \(\alpha =0\) of our results. Under appropriate assumptions we have
as well as
Availability of data and materials
Not applicable.
References
Baron, K.: On the convergence in law of iterates of random-valued functions. Aust. J. Math. Anal. Appl. 6, 1–9 (2009)
Baron, K., Kuczma, M.: Iteration of random-valued functions on the unit interval. Colloq. Math. 37, 263–269 (1977)
Baron, K.: On the continuous dependence in a problem of convergence of iterates of random-valued functions. Grazer Math. Ber. 363, 1–6 (2015)
Baron, K.: Remarks connected with the weak limit of iterates of some random-valued functions and iterative functional equations. Ann. Math. Silesianae 34, 36–44 (2020)
Baron, K.: Weak limit of iterates of some random-valued functions and its application. Aequat. Math. 94, 415–425 (2020)
Czapla, D., Hille, S.C., Horbacz, K., Wojewódka-Ścia̧żko, H.: Continuous dependence of an invariant measure on the jump rate of a piecewise-deterministic Markov process. Math. Biosci. Eng. 17, 1059–1073 (2020)
Diamond, P.: A stochastic functional equation. Aequat. Math. 15, 225–233 (1977)
Diestel, J., Uhl, J.J., Jr.: Vector Measures. American Mathematical Society Providence, Rhode Island (1977)
Dudley, R.M.: Real Analysis and Probability. Cambridge Studies in Advanced Mathematics, vol. 74, Cambridge University Press, Cambridge (2002)
Durrett, R.: Probability Theory and Examples. Cambridge Series in Statistical and Probabilistic Mathematics, 5th edn., Cambridge University Press (2019)
Goldie, C.M., Maller, R.A.: Stability of perpetuities. Ann. Probab. 28, 1195–1218 (2000)
Iksanov, A.: Renewal theory for perturbed random walks and similar processes. Probability and Its Applications, Birkhäuser (2016)
Kapica, R.: The geometric rate of convergence of random iteration in the Hutchinson distance. Aequat. Math. 93, 149–160 (2019)
Kapica, R., Komorek, D.: Continuous dependence in a problem of convergence of random iteration. Qual. Theory Dyn. Syst. 22(2), 65 (2023)
Kapica, R., Morawiec, J.: Refinement equations and distributional fixed points. Appl. Math. Comput. 218, 7741–7746 (2012)
Kesten, H.: Random difference equations and renewal theory for products of random matrices. Acta Math. 131, 207–248 (1973)
Kuczma, M., Choczewski, B., Ger, R.: Iterative functional equations. Encyclopedia of Mathematics and Its Applications, vol. 32, Cambridge University Press, Cambridge (1990)
Parthasarathy, K.R.: Probability measures on metric spaces. Probability and Mathematical Statistics, vol. 3, Academic Press, Inc., New York (1967)
Acknowledgements
The research was supported by the Faculty of Applied Mathematics AGH UST statutory tasks within subsidy of Ministry of Education and Science.
Funding
No funding received.
Contributions
Dawid Komorek wrote the whole manuscript.
Ethics declarations
Conflict of interest
No interests of a financial or personal nature.
Ethical approval
Not applicable.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Komorek, D. Continuous dependence of the weak limit of iterates of some random-valued vector functions. Aequat. Math. 97, 753–776 (2023). https://doi.org/10.1007/s00010-023-00959-w
Keywords
- Random iteration
- Iterative equation
- Invariant measure
- Hutchinson distance
- Fortet–Mourier distance
- Continuous dependence on the given function
- Random-valued function
- Random affine map