1 Introduction

Fix a probability space \((\Omega ,\mathcal {A},\mathbb {P})\) and a separable Banach space X. By \(\mathcal {B}(X)\) we denote the family of all Borel subsets of X. A map \(f:X\times \Omega \rightarrow X\) measurable with respect to the product \(\sigma \)-algebra \(\mathcal {B}(X)\otimes \mathcal {A}\) (briefly: \(\mathcal {B}(X)\otimes \mathcal {A}\)-measurable) is called a random-valued function or an rv-function. By \(f^n\) we denote the n-th iterate of f, given by

$$\begin{aligned} f^0(x,\omega )=x\ \ \text {and} \ \ f^n(x,\omega _1,\ldots ,\omega _n)=f(f^{n-1}(x,\omega _1,\ldots ,\omega _{n-1}),\omega _n) \end{aligned}$$

for \(n\in \mathbb N, x\in X\) and \(\omega =(\omega _1,\omega _2,\dots )\in \Omega ^\infty \), where \(\Omega ^\infty \) stands for \(\Omega ^\mathbb N\). Note that the map \(f^n:X\times \Omega ^{\infty }\rightarrow X\) is \(\mathcal {B}(X)\otimes \mathcal {A}_n\)-measurable, where \(\mathcal {A}_n\) denotes the \(\sigma \)-algebra of all sets \(\left\{ (\omega _1,\omega _2,\ldots ):(\omega _1,\ldots ,\omega _n)\in A\right\} \) with A belonging to the product \(\sigma \)-algebra \(\mathcal {A}^n\). Since \(f^n\) depends only on the first n coordinates of \(\omega \), we can identify \(f^n(\cdot ,\omega _1,\dots ,\omega _n)\) with \(f^n(\cdot ,\omega )\). Thus \(f^n\) is an rv-function on the probability space \((\Omega ^{\infty },\mathcal {A}_n,\mathbb {P}^{\infty })\) and also on \((\Omega ^{n},\mathcal {A}^n,\mathbb {P}^{n})\). These iterates were defined by K. Baron and M. Kuczma [2], and independently by Ph. Diamond [7], to solve iterative functional equations. In particular, they form forward-type iterations and are a prototype of random dynamical systems. A result on almost sure (a.s., for short) convergence of \((f^n(x,\cdot ))_{n\in \mathbb N}\) for \(X=[0,1]\) can be found in [17, Sec. 1.4 B]. A simple and useful criterion for the weak convergence of the distributions of \(f^n(x,\cdot )\), \(n\in \mathbb N\), to a Borel probability measure \(\pi ^f\) independent of \(x\in X\), in the case where X is a Polish space, was proved in [1] and applied to a linear inhomogeneous functional equation.
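The forward iteration above is straightforward to realise numerically. The following sketch is purely illustrative and assumes a hypothetical rv-function \(f(x,\omega )=\tfrac{1}{2}x+\omega \) on \(X=\mathbb R\) with standard normal \(\omega \); it is not part of the results of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, omega):
    # hypothetical rv-function on X = R: f(x, omega) = x/2 + omega
    return 0.5 * x + omega

def iterate(f, x, omegas):
    # forward iteration: f^n(x, omega_1, ..., omega_n)
    #   = f(f^{n-1}(x, omega_1, ..., omega_{n-1}), omega_n)
    for omega in omegas:
        x = f(x, omega)
    return x

# one realisation of f^5(x, .) driven by omega = (omega_1, ..., omega_5)
omegas = rng.standard_normal(5)
print(iterate(f, x=1.0, omegas=omegas))
```

Each call with a fresh vector of \(\omega \)'s produces an independent sample of \(f^n(x,\cdot )\), which is how the distributions \(\pi _n(x,\cdot )\) introduced in Sect. 2 can be sampled.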

One of the most important cases of rv-functions is the so-called random affine map (see e.g. [11]), which is given by

$$\begin{aligned} (x,\omega )\longmapsto \eta (\omega )x+\xi (\omega ), \end{aligned}$$
(1.1)

where \(\eta :\Omega \rightarrow \mathbb R, \xi :\Omega \rightarrow X\) are \(\mathcal {A}\)-measurable. These maps are related to perpetuities (see for instance [11, 12, 16]); they are also applied to refinement type equations [15]. Replacing the random coefficient \(\eta \) by a random operator, we will consider rv-functions of the form

$$\begin{aligned} (x,\omega ) \longmapsto \Lambda (\omega ) x+\xi (\omega ), \end{aligned}$$
(1.2)

where \(\Lambda (\omega ):X\rightarrow X\) is a continuous linear operator for every \(\omega \in \Omega \). A function of the form (1.2) will be called a generalized random affine map or GRAM, for short. The main motivation to study such rv-functions is the work of K. Baron [5], where a special case of (1.2), with an operator \(\Lambda (\omega )\) that does not depend on \(\omega \), was examined.

The first aim of the present paper is to give some natural conditions under which the sequence of iterates of a GRAM f converges in law to \(\pi ^f\), and to establish the continuity of the operator \( f\longmapsto \pi ^f \) by showing how \(\pi ^f\) changes when \(\Lambda \) and \(\xi \) do. This extends the main result of [3] as well as [4, Theorem 1] and [14, Theorem 5.2].

In the case when X is a real Hilbert space, a characterization of the limit distribution \(\pi ^f\) by its characteristic function \(\varphi ^f\) via the linear functional equation \(\varphi ^f(u)=\varphi ^f(\Lambda ^*(u))\cdot \varphi ^{\xi }(u)\) was established in [5]. Referring to that paper, we will show that for a GRAM f the function \(\varphi ^f\) is the only solution of the equation

$$\begin{aligned} \varphi ^f(u)= \varphi ^{\xi }(u)\int _{\Omega }\varphi ^f(\Lambda ^*(\omega )u)\mathbb {P}(d\omega ) \end{aligned}$$

in a suitable class of functions. Moreover, we will show that the limit distribution in this characterisation depends continuously on the data.

2 Notions and basic facts

Throughout the paper \((X,\Vert \cdot \Vert )\) is a separable Banach space and \((\Omega , \mathcal {A},\mathbb {P})\) is a given probability space. We write B(X) for the space of all bounded Borel functions on X endowed with the supremum norm \(\Vert \cdot \Vert _{\infty }\), and C(X) for its subspace consisting of all continuous (and bounded) functions. The space of all linear and continuous operators \(\Lambda :X\rightarrow X\) will be denoted by L(X,X). We use the symbol \(\mathcal {M}_1(X)\) to denote the space of all probability measures defined on \(\mathcal {B}(X)\). For short, we will write \(\int \varphi d\mu \) instead of \(\int _X \varphi (x) \mu (dx)\) for Bochner integrable \(\varphi \) and \(\mu \in \mathcal M_1(X)\) whenever there is no risk of confusion. We also consider the family of all measures with finite first moment, given by

$$\begin{aligned} \mathcal {M}_1^1(X)=\left\{ \mu \in \mathcal {M}_1(X):\int \Vert x-x_0\Vert \mu (dx)<\infty \right\} \end{aligned}$$

for some \(x_0\in X\). (Clearly \(\mathcal {M}_1^1(X)\) does not depend on the choice of \(x_0\).) Recall that a measure \(\mu *\nu \) is called the convolution of the measures \(\mu \) and \(\nu \) if

$$\begin{aligned} \mu *\nu (B)=\int \mu (B-x)\nu (dx)\qquad \text {for every} \ B\in \mathcal {B}(X). \end{aligned}$$

We write \(\mu _{\chi }\) to denote the probability distribution of a random variable \(\chi \). Random variables \(\chi :\Omega \rightarrow X\), \(\zeta :\Omega \rightarrow Y\) are called independent if

$$\begin{aligned} \mu _{(\chi ,\zeta )}=\mu _{\chi }\otimes \mu _{\zeta }, \end{aligned}$$

where \(\mu _{(\chi ,\zeta )}\) is their joint probability distribution. We say that a sequence \((\mu _n)\) of measures from \(\mathcal {M}_1(X)\) converges weakly to \(\mu \) if \(\int fd\mu _n\xrightarrow [n\rightarrow \infty ]{} \int f d\mu \) for every \(f\in C(X)\). We introduce the symbol \(d_{FM}\) to denote the Fortet–Mourier metric (also known as the bounded Lipschitz distance) given by

$$\begin{aligned} d_{FM}(\mu ,\nu )=\sup \left\{ \left| \int fd\mu -\int fd\nu \right| :f\in Lip_1(X), \Vert f\Vert _{\infty }\le 1 \right\} , \end{aligned}$$

and additionally \(d_H\) to denote the Hutchinson metric given by

$$\begin{aligned} d_{H}(\mu ,\nu )=\sup \left\{ \left| \int fd\mu -\int fd\nu \right| :f\in Lip_1(X) \right\} , \end{aligned}$$

where

$$\begin{aligned} Lip_1(X)=\left\{ f\in B(X):\ |f(x)-f(y)|\le \Vert x-y\Vert \ \text {for} \ x,y\in X\right\} . \end{aligned}$$

Note that the distance between some measures in the Hutchinson metric may be infinite. It is known (see [9, Theorem 11.3.3]) that weak convergence is metrized by the Fortet–Mourier metric.
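As a quick illustration of the difference between the two metrics (a simple check, not taken from the cited sources), consider Dirac measures \(\delta _x,\delta _y\) for \(x,y\in X\). Since \(\int f d\delta _x-\int f d\delta _y=f(x)-f(y)\), one gets

$$\begin{aligned} d_{FM}(\delta _x,\delta _y)=\min \left\{ \Vert x-y\Vert ,2\right\} \qquad \text {and}\qquad d_{H}(\delta _x,\delta _y)=\Vert x-y\Vert , \end{aligned}$$

the suprema being attained, respectively, at \(f(z)=\max \left\{ -c,c-\Vert z-x\Vert \right\} \) with \(c=\frac{1}{2}\min \left\{ \Vert x-y\Vert ,2\right\} \) and at \(f(z)=\min \left\{ \Vert z-y\Vert ,\Vert x-y\Vert \right\} \).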

With an rv-function \(f:X\times \Omega \rightarrow X\) we may associate a linear operator \(P:\mathcal {M}_1(X)\rightarrow \mathcal {M}_1(X)\) by the formula

$$\begin{aligned} P\mu (A)=\int _{X}\int _{\Omega }\mathbbm {1}_A(f(x,\omega ))\mathbb {P}(d\omega )\mu (dx), \end{aligned}$$
(2.1)

which will be used in this paper. It can be shown that P is the Markov transition operator for the distributions \(\pi _n\) of the iterates \(f^n\), given by

$$\begin{aligned} \pi _n(x,A)=\mathbb {P}^{\infty }(\left\{ \omega \in \Omega ^{\infty }:f^n(x,\omega )\in A\right\} ), \end{aligned}$$

i.e.

$$\begin{aligned} P\pi _n(x,A)=\pi _{n+1}(x,A)\quad {\text {for}}\; x\in X, \ A\in \mathcal {B}(X). \end{aligned}$$

By the convergence in distribution or in law of the sequence of iterates \((f^n(x, \cdot ))_{n\in \mathbb {N}}\) we mean that the sequence \((\pi _n(x,\cdot ))_{n\in \mathbb {N}}\) converges weakly to a probability distribution.

Following [1] and [13] we consider a family of rv-functions \(f:X\times \Omega \rightarrow X\) which satisfy:

(H\(_f\)):

There exists \(\lambda _f\in (0,1)\) such that

$$\begin{aligned} \int _\Omega \Vert f(x,\omega )-f(y,\omega )\Vert \mathbb P(d\omega )\le \lambda _f \Vert x-y\Vert \quad \textrm{for}\quad x,y\in X \end{aligned}$$

and

$$\begin{aligned} \int _\Omega \Vert f(x,\omega )-x\Vert \mathbb P(d\omega )<\infty \quad \mathrm{for\; some \;(thus \;all)} \quad x\in X. \end{aligned}$$

A simple criterion [13, Corollary 5.6], cf. [1, Theorem 3.1], for the convergence in distribution of iterates of rv-functions reads as follows:

Proposition 2.1

Assume that an rv-function \(f:X\times \Omega \rightarrow X\) satisfies (H\(_f\)). Then for every \(x\in X\) the sequence of iterates \((f^n(x, \cdot ))_{n\in \mathbb {N}}\) converges in distribution and the limit \(\pi ^f\) does not depend on x. Moreover \(\pi ^f\in \mathcal M_1^1(X)\) and

$$\begin{aligned} d_H(\pi _n(x,\cdot ),\pi ^f) \le \frac{{\lambda _f}^n}{1-\lambda _f}\int _\Omega \Vert f(x,\omega ) -x\Vert \mathbb P(d\omega ) \end{aligned}$$

for \(n\in \mathbb N\) and \(x\in X\).
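On \(\mathcal M_1^1(X)\) the Hutchinson metric coincides (by a standard truncation argument) with the Kantorovich–Rubinstein (Wasserstein-1) distance, so for \(X=\mathbb R\) the above estimate can be checked by simulation. The sketch below is only illustrative: the affine map \(f(x,\omega )=\lambda x+\omega \) with Gaussian \(\omega \), as well as all numerical parameters, are hypothetical choices, and the limit \(\pi ^f=N\big (0,\sigma ^2/(1-\lambda ^2)\big )\) is known explicitly in this case.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
lam, sigma, x0 = 0.5, 2.0, 1.0      # hypothetical f(x, w) = lam * x + w, w ~ N(0, sigma^2)
N = 200_000                         # Monte Carlo sample size

# samples from the exact limit distribution pi^f = N(0, sigma^2 / (1 - lam^2))
limit = rng.normal(0.0, sigma / np.sqrt(1.0 - lam**2), N)

# the constant int ||f(x0, w) - x0|| P(dw), estimated by Monte Carlo
c = np.mean(np.abs((lam - 1.0) * x0 + rng.normal(0.0, sigma, N)))

x = np.full(N, x0)
for n in range(1, 7):
    x = lam * x + rng.normal(0.0, sigma, N)       # N parallel samples of f^n(x0, .)
    d_n = wasserstein_distance(x, limit)          # d_H = Wasserstein-1 on the real line
    print(n, round(d_n, 4), "<=", round(lam**n / (1.0 - lam) * c, 4))
```

Each printed line compares an empirical value of \(d_H(\pi _n(x_0,\cdot ),\pi ^f)\) with the geometric bound of Proposition 2.1; in this example the empirical distances stay below the bound, up to Monte Carlo error.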

The geometric rate of convergence allows us to formulate a result concerning the continuity of \(f\longmapsto \pi ^f\). We cite a part of [14, Theorem 4.1] that will be useful in the next section.

Proposition 2.2

Assume that rv-functions f, g satisfy (H\(_f\)) and (H\(_g\)), respectively. Then the limit distributions \(\pi ^f\) and \(\pi ^g\) occurring in Proposition 2.1 satisfy

$$\begin{aligned} d_H(\pi ^f,\pi ^g)\le \min \left\{ \frac{1}{1-\lambda _f}\inf _{x\in X}{\alpha _g(x)},\frac{1}{1-\lambda _g}\inf _{x\in X}{\alpha _f(x)}\right\} , \end{aligned}$$
(2.2)

where

$$\begin{aligned} \alpha _h(x)=\sup _{n\in \mathbb {N}_0}\int _{\Omega ^{\infty }}\int _{\Omega }\Vert f(h^n(x,\omega ),\varpi )-g(h^n(x,\omega ),\varpi )\Vert \mathbb {P}(d\varpi )\mathbb {P}^{\infty }(d\omega ) \end{aligned}$$
(2.3)

for \(h\in \{f,g\}\).

Remark 2.3

By condition (H\(_g\)) we mean (H\(_f\)) with f replaced by g throughout. A similar convention will be used for condition (U\(_g\)) in the next section.

3 Continuous dependence of the limit distribution of generalized random affine maps

Fix \(\Lambda :\Omega \rightarrow L(X,X)\) and an \(\mathcal A\)-measurable \(\xi :\Omega \rightarrow X\). Since X is separable, the weak, strong (in Bochner's sense) and Borel measurability of the random variable \(\xi \) coincide. To obtain results concerning the convergence in law of GRAMs (1.2) we need to show that (1.2) is an rv-function. To this end we introduce the following:

Definition 3.1

We call a map \(\Lambda :\Omega \rightarrow L(X,X)\) a random operator if it is \(\mathcal {A}\)-measurable, i.e. \(\Lambda ^{-1}(B)\in \mathcal A\) for every Borel subset B of L(X,X).

Proposition 3.2

If \(\Lambda :\Omega \rightarrow L(X,X)\) is a random operator, then a function \(\Lambda (\cdot )x:\Omega \rightarrow X\) is \(\mathcal {A}\)-measurable for every \(x\in X\).

Proof

Fix \(x\in X\) and define \(\varphi _x:L(X,X)\rightarrow X\) by \(\varphi _x(T)=Tx\). It is obvious that \(\varphi _x\) is linear, and since

$$\begin{aligned} \Vert \varphi _x(T)\Vert =\Vert Tx\Vert \le \Vert x \Vert \cdot \Vert T\Vert \end{aligned}$$

it is bounded (thus continuous). Now fix \(B\in \mathcal {B}(X)\); then we have

$$\begin{aligned} \left\{ \omega \in \Omega :\Lambda (\omega )x\in B\right\}&=\left\{ \omega \in \Omega :\varphi _x(\Lambda (\omega ))\in B\right\} \\ {}&=\left\{ \omega \in \Omega :\Lambda (\omega )\in \varphi ^{-1}_x(B)\right\} \in \mathcal {A}. \end{aligned}$$

\(\square \)

Remark 3.3

One can show that for a separable space X, if \(\Lambda (\cdot )x:\Omega \rightarrow X\) is \(\mathcal {A}\)-measurable for every \(x\in X\) and \(\Lambda (\omega ):X\rightarrow X\) is continuous for every \(\omega \in \Omega \), then the map \(\Lambda :X\times \Omega \rightarrow X\) with \(\Lambda (x,\omega )=\Lambda (\omega )x\) is \(\mathcal {B}(X)\otimes \mathcal {A}\)-measurable. Moreover, \(\xi \) extended to \(\xi :X\times \Omega \rightarrow X\) by \(\xi (x,\omega )=\xi (\omega )\) is \(\mathcal {B}(X)\otimes \mathcal {A}\)-measurable. Since the sum of \(\mathcal {B}(X)\otimes \mathcal {A}\)-measurable functions with values in a separable Banach space is also \(\mathcal {B}(X)\otimes \mathcal {A}\)-measurable, it follows that \(f:X\times \Omega \rightarrow X\) given by (1.2) is an rv-function.

The main result of this section concerns the continuous dependence of the limit of iterates of GRAMs. We will formulate it for the family of rv-functions \(f:X\times \Omega \rightarrow X\) which satisfy:

(U\(_f\)):

The function \(f:X\times \Omega \rightarrow X\) has the form \( f(x,\omega )=\Lambda _f(\omega )x+\xi _f(\omega )\), where \(\xi _f:\Omega \rightarrow X\) is \(\mathcal A\)-measurable,

$$\begin{aligned} \mathbb E\Vert \xi _f\Vert =\int _\Omega \Vert \xi _f(\omega )\Vert \mathbb P(d\omega )<\infty , \end{aligned}$$

and \(\Lambda _f:\Omega \rightarrow L(X,X)\) is a random operator satisfying

$$\begin{aligned} \mathbb {E}\Vert \Lambda _f(\cdot )\Vert =\int _{\Omega }\Vert \Lambda _f(\omega )\Vert \mathbb {P}(d\omega )<1, \end{aligned}$$

where \(\Vert \Lambda _f(\omega )\Vert \) is the operator norm of \(\Lambda _f(\omega )\).

Theorem 3.4

Assume that rv-functions f, g satisfy (U\(_f\)) and (U\(_g\)), respectively. Then the sequences of iterates \((f^n(x, \cdot ))_{n\in \mathbb {N}}, (g^n(x, \cdot ))_{n\in \mathbb {N}}\) are convergent in law to probability distributions \(\pi ^f, \pi ^g\in \mathcal {M}_1^1(X)\), respectively, the limits do not depend on \(x\in X\), and

$$\begin{aligned} d_H(\pi ^f,\pi ^g)\le \min&\left\{ \frac{1}{1-\mathbb {E}\Vert \Lambda _f(\cdot )\Vert }\left( \frac{\mathbb {E}\Vert \xi _g\Vert }{1-\mathbb {E}\Vert \Lambda _g(\cdot )\Vert }\alpha +\beta \right) ,\right. \\&\left. \frac{1}{1-\mathbb {E}\Vert \Lambda _g(\cdot )\Vert }\left( \frac{\mathbb {E}\Vert \xi _f\Vert }{1-\mathbb {E}\Vert \Lambda _f(\cdot )\Vert }\alpha +\beta \right) \right\} ,\end{aligned}$$

where \(\alpha =\mathbb {E}\Vert \Lambda _f(\cdot )-\Lambda _g(\cdot )\Vert , \ \beta =\mathbb {E}\Vert \xi _f-\xi _g\Vert .\)

Proof

First let us observe that (U\(_f\)) implies (H\(_f\)). Indeed,

$$\begin{aligned} \int _\Omega \Vert f(x,\omega )-f(y,\omega )\Vert \mathbb P(d\omega ) \le \Vert x-y\Vert \int _{\Omega }\Vert \Lambda _f(\omega )\Vert \mathbb {P}(d\omega ) \quad \textrm{for}\quad x,y\in X \end{aligned}$$

and

$$\begin{aligned} \int _\Omega \Vert f(0,\omega )\Vert \mathbb P(d\omega )= \int _\Omega \Vert \xi _f(\omega )\Vert \mathbb P(d\omega )<\infty . \end{aligned}$$

By Proposition 2.1 we infer that there exist probability distributions \(\pi ^f, \ \pi ^g\in \mathcal {M}_1^1(X)\) such that for every \(x\in X\) the sequences \((f^n(x, \cdot ))_{n\in \mathbb {N}}\), \((g^n(x, \cdot ))_{n\in \mathbb {N}}\) are convergent in law to \(\pi ^f, \pi ^g\), respectively.

The rest of the proof runs similarly to the proof of [14, Theorem 5.2], which concerns (1.1). For the convenience of the reader we repeat the relevant computations, after appropriate changes for the case of GRAMs, thus making our exposition self-contained. For \(k\in \mathbb {N}\) define \(\Lambda _k:\Omega ^{\infty }\rightarrow L(X,X)\) and \(\xi _k:\Omega ^{\infty }\rightarrow X\) by \(\Lambda _k (\omega )=\Lambda _f(\omega _{k}), \ \xi _k (\omega )=\xi _f(\omega _{k})\), where \(\omega =(\omega _1,\omega _2,\ldots )\in \Omega ^{\infty }\), and observe that for \(\omega \in \Omega ^{\infty }\) and \(x\in X\)

$$\begin{aligned} f^n(x,\omega )&=\bigodot _{i=0}^{n-1}\Lambda _{n-i}(\omega )x+\bigodot _{i=0}^{n-2}\Lambda _{n-i}(\omega )\xi _{1}(\omega )+\\&\quad +\bigodot _{i=0}^{n-3}\Lambda _{n-i}(\omega )\xi _{2}(\omega )+\ldots +\Lambda _{n}(\omega )\xi _{n-1}(\omega )+\xi _{n}(\omega ),\end{aligned}$$

where

$$\begin{aligned} \bigodot _{i=0}^{n-k}\Lambda _{n-i}(\omega )=\Lambda _{n}(\omega )\circ \Lambda _{n-1}(\omega )\circ \Lambda _{n-2}(\omega )\circ \ldots \circ \Lambda _{k}(\omega ) \end{aligned}$$

and \(\circ \) denotes the composition of operators. Hence

$$\begin{aligned} f^n(0,\omega )=\sum _{k=2}^{n}\bigodot _{i=0}^{n-k}\Lambda _{n-i}(\omega )\xi _{k-1}(\omega )+\xi _n(\omega ). \end{aligned}$$

Then

$$\begin{aligned}&\Vert g(f^n(0,\omega ),\overline{\omega })-f(f^n(0,\omega ),\overline{\omega })\Vert \\&\le \Vert \Lambda _g(\overline{\omega })-\Lambda _f(\overline{\omega })\Vert \times \left( \sum _{k=2}^{n}\left\| \bigodot _{i=0}^{n-k}\Lambda _{n-i}(\omega )\xi _{k-1}(\omega ) \right\| +\Vert \xi _n(\omega )\Vert \right) \\&\quad +\Vert \xi _g(\overline{\omega })-\xi _f(\overline{\omega })\Vert \end{aligned}$$

and from the inequality

$$\begin{aligned} \left\| \bigodot _{i=0}^{n-k}\Lambda _{n-i}(\omega )\xi _{k-1}(\omega ) \right\| \le \Vert \xi _{k-1}(\omega )\Vert \prod _{i=k}^{n}\Vert \Lambda _i(\omega )\Vert \end{aligned}$$

we have

$$\begin{aligned} \Vert g(f^n(0,\omega ),\overline{\omega })&-f(f^n(0,\omega ),\overline{\omega })\Vert \le \Vert \Lambda _g(\overline{\omega })-\Lambda _f(\overline{\omega })\Vert \\&\times \left( \sum _{k=2}^{n}\Vert \xi _{k-1}(\omega )\Vert \prod _{i=k}^{n}\Vert \Lambda _i(\omega )\Vert +\Vert \xi _n(\omega )\Vert \right) +\Vert \xi _g(\overline{\omega })-\xi _f(\overline{\omega })\Vert .\end{aligned}$$

Since \(\Vert \xi _{k-1}\Vert , \Vert \Lambda _k(\cdot )\Vert , \ldots , \Vert \Lambda _n(\cdot )\Vert \) are independent, it follows that

$$\begin{aligned}\int _{\Omega ^{\infty }}\int _{\Omega }&\Vert g(f^n(0,\omega ),\overline{\omega })-f(f^n(0,\omega ),\overline{\omega })\Vert \mathbb {P}^{\infty }(d\omega )\mathbb {P}(d\overline{\omega }) \\&\le \int _{\Omega }\Vert \Lambda _g(\overline{\omega })-\Lambda _f(\overline{\omega })\Vert \mathbb {P}(d\overline{\omega })\int _{\Omega ^{\infty }}\Bigg (\sum _{k=2}^{n}\Vert \xi _{k-1}(\omega )\Vert \\&\quad \times \prod _{i=k}^{n}\Vert \Lambda _i(\omega )\Vert +\Vert \xi _n(\omega )\Vert \Bigg )\mathbb {P}^{\infty }(d\omega )+\int _{\Omega }\Vert \xi _g(\overline{\omega })-\xi _f(\overline{\omega })\Vert \mathbb {P}(d\overline{\omega })\\&=\alpha \left( \sum _{k=2}^{n}\mathbb {E}\Vert \xi _{k-1}\Vert \prod _{i=k}^{n}\mathbb {E}\Vert \Lambda _{i}(\cdot )\Vert +\mathbb {E}\Vert \xi _n\Vert \right) +\beta \\&=\alpha \sum _{k=2}^{n+1}\mathbb {E}\Vert \xi _{f}\Vert \cdot ( \mathbb {E}\Vert \Lambda _{f}(\cdot )\Vert )^{n-k+1}+\beta \\&=\alpha \mathbb {E}\Vert \xi _{f}\Vert \frac{1-\left( \mathbb {E}\Vert \Lambda _{f}(\cdot )\Vert \right) ^{n}}{1- \mathbb {E}\Vert \Lambda _{f}(\cdot )\Vert }+\beta .\end{aligned}$$

Therefore for the function \(\alpha _f(x)\) given by (2.3) we obtain

$$\begin{aligned} \inf _{x\in X}\alpha _f(x)\le \alpha _f(0)\le \alpha \frac{\mathbb {E}\Vert \xi _{f}\Vert }{1- \mathbb {E}\Vert \Lambda _{f}(\cdot )\Vert }+\beta . \end{aligned}$$

A similar inequality holds for \(\alpha _g(x)\). Taking \(\lambda _f=\mathbb {E}\Vert \Lambda _{f}(\cdot )\Vert , \lambda _g=\mathbb {E}\Vert \Lambda _{g}(\cdot )\Vert \) and applying Proposition 2.2 we finish the proof. \(\square \)

Corollary 3.5

Assume that rv-functions f, g have the form

$$\begin{aligned} f(x,\omega )=\Lambda _f x+\xi _f(\omega ), \qquad g(x,\omega )=\Lambda _g x+\xi _g(\omega ) \end{aligned}$$

with \(\Lambda _f,\Lambda _g\in L(X,X)\) such that \(\Vert \Lambda _f\Vert <1\), \(\Vert \Lambda _g\Vert <1\) and \(\xi _f, \xi _g:\Omega \rightarrow X\) such that \(\mathbb {E}\Vert \xi _f\Vert <\infty \), \(\mathbb {E}\Vert \xi _g\Vert <\infty \). Then the sequences of iterates \((f^n(x, \cdot ))_{n\in \mathbb {N}}\), \((g^n(x, \cdot ))_{n\in \mathbb {N}}\) are convergent in law to the probability distributions \(\pi ^f\), \(\pi ^g\in \mathcal {M}_1^1(X)\), respectively, the limits do not depend on \(x\in X\), and

$$\begin{aligned} d_H(\pi ^f,\pi ^g)\le \min&\left\{ \frac{1}{1-\Vert \Lambda _f\Vert }\left( \frac{\mathbb {E}\Vert \xi _g\Vert }{1-\Vert \Lambda _g\Vert }\alpha +\beta \right) ,\right. \\&\left. \frac{1}{1-\Vert \Lambda _g\Vert }\left( \frac{\mathbb {E}\Vert \xi _f\Vert }{1-\Vert \Lambda _f\Vert }\alpha +\beta \right) \right\} ,\end{aligned}$$

where \(\alpha =\Vert \Lambda _f-\Lambda _g\Vert , \ \beta =\mathbb {E}\Vert \xi _f-\xi _g\Vert .\)
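To see that the above estimate can be attained (a simple instance, not contained in the cited results), take \(X=\mathbb {R}\), \(\Lambda _f=\Lambda _g=\lambda \in [0,1)\) and \(\xi _g=\xi _f+c\) for a constant \(c\in \mathbb {R}\) with \(\mathbb {E}|\xi _f|<\infty \). Then \(\alpha =0\), \(\beta =|c|\) and both terms under the minimum equal \(\frac{|c|}{1-\lambda }\). On the other hand,

$$\begin{aligned} g^n(x,\omega )=f^n(x,\omega )+c\sum _{k=0}^{n-1}\lambda ^k \qquad \text {for} \ n\in \mathbb {N}, \end{aligned}$$

so \(\pi ^g\) is the translate of \(\pi ^f\) by \(\frac{c}{1-\lambda }\), and since \(d_H\) between a measure from \(\mathcal {M}_1^1(\mathbb {R})\) and its translate by t equals |t|, we get \(d_H(\pi ^f,\pi ^g)=\frac{|c|}{1-\lambda }\), i.e. equality in Corollary 3.5.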

Corollary 3.5 given above extends the main result of [3] as well as [4, Theorem 1]. Due to this result we can generalize [4, Theorem 3] and [5, Theorem 3.1]; see Theorems 4.10, 4.22.

4 Characterisation of the limit distribution

Let \((\Omega ,\mathcal {A},\mathbb {P})\) be a probability space. In this section X is a separable real Hilbert space with the inner product \((\cdot |\cdot )\); in the statements where the Hilbert space structure is not needed we will say so explicitly. We define the characteristic function \(\varphi ^f\) of an rv-function f, assuming that the iterates \((f^n(x, \cdot ))_{n\in \mathbb {N}}\) converge in law and the limit does not depend on x; in such a case we denote by \(\pi ^f\) the limit distribution, i.e.

$$\begin{aligned} \pi ^f_n(x,\cdot )\xrightarrow [n\rightarrow \infty ]{w}\pi ^f. \end{aligned}$$

Definition 4.1

A function \(\varphi ^{\chi }:X\rightarrow \mathbb {C}\) given by

$$\begin{aligned} \varphi ^{\chi }(u)=\int _Xe^{i(u|z)}\mu _{\chi }(dz) \end{aligned}$$

is called the characteristic function of the X-valued random variable \(\chi \) with distribution \(\mu _{\chi }\).

Definition 4.2

A function \(\varphi ^{f}:X\rightarrow \mathbb {C}\) given by

$$\begin{aligned} \varphi ^{f}(u)=\int _Xe^{i(u|z)}\pi ^f(dz) \end{aligned}$$

is called the characteristic function of the rv-function f.

The problem of characterization of the limit distribution \(\pi ^f\) via a functional equation for its characteristic function \(\varphi ^f\) was considered in [5]. The author showed that for the rv-function f given by

$$\begin{aligned} f(x,\omega )=\Lambda x+\xi (\omega ) \end{aligned}$$

with \(\Lambda \in L(X,X)\) such that \(\Vert \Lambda \Vert <1\) and a random variable \(\xi :\Omega \rightarrow X\) such that \(\mathbb {E}\Vert \xi \Vert <\infty \) its characteristic function \(\varphi ^f\) is the only solution of the equation

$$\begin{aligned} \varphi ^f(u)=\varphi ^f(\Lambda ^*(u))\cdot \varphi ^{\xi }(u), \end{aligned}$$

where \(\Lambda ^*\) stands for the adjoint operator of \(\Lambda \), which satisfies \((\Lambda ^*u|z)=(u|\Lambda z)\) for every \(u,z\in X\). Our goal is to generalize this result to GRAMs. First we give some preliminary facts which will be needed in the general setting.
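Before doing so, let us illustrate the above equation in the simplest scalar case (a standard computation, added here only as an illustration): for \(X=\mathbb R\), \(\Lambda =\lambda \,\mathrm {id}\) with \(|\lambda |<1\) and \(\xi \) centered Gaussian with variance \(\sigma ^2\), the limit distribution is \(\pi ^f=N\big (0,\tfrac{\sigma ^2}{1-\lambda ^2}\big )\) and indeed

$$\begin{aligned} \varphi ^f(\Lambda ^*u)\cdot \varphi ^{\xi }(u)=\exp \left( -\frac{\sigma ^2\lambda ^2u^2}{2(1-\lambda ^2)}\right) \exp \left( -\frac{\sigma ^2u^2}{2}\right) =\exp \left( -\frac{\sigma ^2u^2}{2(1-\lambda ^2)}\right) =\varphi ^f(u). \end{aligned}$$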

Lemma 4.3

Let X be a Banach space. Assume that a random operator \(\Lambda :\Omega \rightarrow L(X,X)\) and a random variable \(\xi :\Omega \rightarrow X\) are independent. If \(x\in X\), then \(\Lambda (\cdot ) x:\Omega \rightarrow X\) and \(\xi :\Omega \rightarrow X\) are independent.

Proof

Fix \(x\in X\). Let us define \(\tau _x:L(X,X)\times X\rightarrow X^2\) by

$$\begin{aligned} \tau _x(T,y)=(Tx,y). \end{aligned}$$

Observe that \(\tau _x\) is well defined and continuous in the product topology (since \(\Vert Tx-Sx\Vert \le \Vert T-S\Vert \cdot \Vert x\Vert \) for \(T,S\in L(X,X)\)), and thus \(\mathcal {B}(L(X,X))\otimes \mathcal {B}(X)\)-measurable. Denote the distribution of \(\Lambda (\cdot )x\) by \(\mu _{\Lambda x}\). We claim that \(\mu _{(\Lambda x,\xi )}(B)=\mu _{(\Lambda ,\xi )}(\tau _x^{-1}(B))\) for \(B\in \mathcal {B}({X^2})\). Indeed, we have

$$\begin{aligned}\mu _{(\Lambda x,\xi )}(B)&=\mathbb {P}(\left\{ \omega : (\Lambda (\omega )x,\xi (\omega ))\in B\right\} )= \mathbb {P}(\left\{ \omega : \tau _x(\Lambda (\omega ),\xi (\omega ))\in B\right\} )\\&=\mathbb {P}(\left\{ \omega : (\Lambda (\omega ),\xi (\omega ))\in \tau _x^{-1}(B)\right\} )= \mu _{(\Lambda ,\xi )}(\tau _x^{-1}(B)).\end{aligned}$$

It remains to show that \(\mu _{\Lambda x}\otimes \mu _{\xi }(B)=\mu _{\Lambda }\otimes \mu _{\xi }(\tau _x^{-1}(B))\) for \(B\in \mathcal {B}({X^2})\). Defining \(B_y=\left\{ \overline{x}:(\overline{x},y)\in B\right\} \) we have

$$\begin{aligned}\mu _{\Lambda x}\otimes \mu _{\xi }(B)&=\int _X\mu _{\Lambda x}(B_y)\mu _{\xi }(dy)=\int _X\mu _{\Lambda x}(\left\{ \overline{x}: (\overline{x},y)\in B\right\} )\mu _{\xi }(dy)\\ {}&=\int _X\mathbb {P}(\left\{ \omega :(\Lambda (\omega ) x,y)\in B\right\} )\mu _{\xi }(dy)\\ {}&=\int _X\mathbb {P}(\left\{ \omega :(\Lambda (\omega ),y)\in \tau _x^{-1}(B)\right\} )\mu _{\xi }(dy)=\mu _{\Lambda }\otimes \mu _{\xi }(\tau _x^{-1}(B)).\end{aligned}$$

Finally by the assumption of independence we obtain

$$\begin{aligned} \mu _{(\Lambda x,\xi )}(B)=\mu _{(\Lambda ,\xi )}(\tau _x^{-1}(B))=\mu _{\Lambda }\otimes \mu _{\xi }(\tau _x^{-1}(B))=\mu _{\Lambda x}\otimes \mu _{\xi }(B), \end{aligned}$$

which ends the proof. \(\square \)

Lemma 4.4

Let X be a Banach space and \(n\in \mathbb {N}\). Assume that \(\Lambda :\Omega \rightarrow L(X,X)\) is a random operator and \(\psi :\Omega ^{n}\rightarrow X\), \(\xi :\Omega \rightarrow X\) are random variables. Define \(\psi _n:\Omega ^{\infty }\rightarrow X\), \(\Lambda _{n+1}:\Omega ^{\infty }\rightarrow L(X,X)\), \(\xi _{n+1}:\Omega ^{\infty }\rightarrow X\) by

$$\begin{aligned} \psi _n(\omega )=\psi (\omega _1,\ldots ,\omega _n), \qquad \Lambda _{n+1}(\omega )=\Lambda (\omega _{n+1}), \qquad \xi _{n+1}(\omega )=\xi (\omega _{n+1}) \end{aligned}$$

and \(\Lambda \psi _{n+1}:\Omega ^{\infty }\rightarrow X\) by

$$\begin{aligned} \Lambda \psi _{n+1}(\omega )=\Lambda _{n+1}(\omega )\psi _n(\omega )=\Lambda (\omega _{n+1})\psi (\omega _1,\ldots ,\omega _n), \end{aligned}$$

where \(\omega =(\omega _1,\omega _2,\ldots )\in \Omega ^{\infty }. \) If \(\Lambda _{n+1}\) and \(\xi _{n+1}\) are independent, then \(\Lambda \psi _{n+1}\) and \(\xi _{n+1}\) are also independent.

Proof

Fix \(B\in \mathcal {B}(X^2)\). Put

$$\begin{aligned} \eta (\omega _1,\ldots ,\omega _{n+1})= \Lambda (\omega _{n+1})\psi (\omega _1,\ldots ,\omega _n) \end{aligned}$$

and

$$\begin{aligned} \zeta (\omega _1,\ldots ,\omega _{n+1})= (\eta (\omega _1,\ldots ,\omega _{n+1}),\xi (\omega _{n+1})) \end{aligned}$$

for \(\omega _1,\dots ,\omega _{n+1}\in \Omega \). Then

$$\begin{aligned} \mu _{(\Lambda \psi _{n+1},\xi _{n+1})}(B)&=\mathbb {P}^{\infty }\Big (\Big \{(\omega _1, \omega _2,\ldots ): \zeta (\omega _1,\ldots ,\omega _{n+1})\in B\Big \}\Big )\\&=\mathbb {P}^{n+1}\Big (\Big \{(\omega _1,\ldots ,\omega _{n+1}): \zeta (\omega _1,\ldots ,\omega _{n+1})\in B\Big \}\Big )\\&=\mathbb {P}^{n}\otimes \mathbb {P}\Big (\Big \{(\omega _1,\ldots ,\omega _{n+1}): \zeta (\omega _1,\ldots ,\omega _{n+1})\in B\Big \}\Big )\\&=\int _{\Omega ^n}\mathbb {P}\Big (\Big \{\omega _{n+1}: \zeta (\omega _1,\ldots ,\omega _{n+1})\in B\Big \}\Big )\mathbb {P}^n(d(\omega _1,\ldots ,\omega _n))\\&=\int _{\Omega ^n}\mu _{(\Lambda \psi (\omega _1,\ldots ,\omega _n),\xi )}(B)\mathbb {P}^n(d(\omega _1,\ldots ,\omega _n))\\&=\int _{\Omega ^n}\mu _{\Lambda \psi (\omega _1,\ldots ,\omega _n)}\otimes \mu _{\xi }(B)\mathbb {P}^n(d(\omega _1,\ldots ,\omega _n)), \end{aligned}$$

where the last equality holds due to Lemma 4.3. Therefore

$$\begin{aligned} \mu _{(\Lambda \psi _{n+1},\xi _{n+1})}(B)&=\int _{\Omega ^n}\int _X\mu _{\Lambda \psi (\omega _1,\ldots ,\omega _n)}(B_y) \mu _{\xi }(dy)\mathbb {P}^n(d(\omega _1,\ldots ,\omega _n))\\&=\int _X\int _{\Omega ^n}\mu _{\Lambda \psi (\omega _1,\ldots ,\omega _n)}(B_y) \mathbb {P}^n(d(\omega _1,\ldots ,\omega _n))\mu _{\xi }(dy)\\&=\int _X \mathbb {P}^n\otimes \mathbb {P}\Big (\Big \{(\omega _1,\ldots ,\omega _{n+1}):\eta (\omega _1,\ldots ,\omega _{n+1})\in B_y\Big \}\Big )\mu _{\xi }(dy)\\&=\int _X\mathbb {P}^{\infty }\Big (\Big \{(\omega _1,\omega _2,\ldots ): \eta (\omega _1,\ldots ,\omega _{n+1})\in B_y\Big \}\Big )\mu _{\xi }(dy)\\&=\int _X\mathbb {P}^{\infty }\Big (\Big \{\omega : \Lambda \psi _{n+1}(\omega )\in B_y\Big \}\Big )\mu _{\xi }(dy)=\mu _{\Lambda \psi _{n+1}}\otimes \mu _{\xi _{n+1}}(B), \end{aligned}$$

which ends the proof. \(\square \)

Corollary 4.5

Let X be a separable Banach space. Assume that an rv-function \(f:X\times \Omega \rightarrow X\) is given by (1.2), where \(\Lambda :\Omega \rightarrow L(X,X)\) is a random operator and \(\xi :\Omega \rightarrow X\) is a random variable. If \(\Lambda \) and \(\xi \) are independent, \(x\in X\) and \(n\in \mathbb {N}\), then \(\Lambda _{n+1}(\cdot )f^n(x,\cdot ):\Omega ^{\infty }\rightarrow X\) with

$$\begin{aligned} \Lambda _{n+1}(\omega )f^n(x,\omega )=\Lambda (\omega _{n+1})f^n(x,\omega _1,\ldots ,\omega _n) \end{aligned}$$

and \(\xi _{n+1}:\Omega ^\infty \rightarrow X \) with \(\xi _{n+1}(\omega )=\xi (\omega _{n+1})\) are independent.

Having proved independence, we also need to describe the probability distribution of a sum of independent random variables. It is well known that such a distribution is the convolution of the distributions of the summands. More precisely, we have:

Theorem 4.6

Let X be a separable Banach space. If \(\eta :\Omega \rightarrow X\), \(\xi :\Omega \rightarrow X\) are independent random variables, then

$$\begin{aligned} \mu _{\eta +\xi }=\mu _{\eta }*\mu _{\xi }. \end{aligned}$$

Definition 4.7

If \(\Lambda :\Omega \rightarrow L(X,X)\) is a random operator, then a map \(\Lambda ^*:\Omega \rightarrow L(X,X)\) satisfying

$$\begin{aligned} (\Lambda ^*(\omega )x|y)=(x|\Lambda (\omega )y)\qquad \text {for every} \ \omega \in \Omega , x,y\in X \end{aligned}$$

is called an adjoint random operator to \(\Lambda \).

Lemma 4.8

A function \(\Lambda ^*:X\times \Omega \rightarrow X\) given by \(\Lambda ^*(x,\omega )=\Lambda ^*(\omega )x\) is \(\mathcal {B}(X)\otimes \mathcal {A}\)-measurable.

Proof

According to Remark 3.3 it is enough to show that \(\Lambda ^*(\cdot )x:\Omega \rightarrow X\) is \(\mathcal {A}\)-measurable for every \(x\in X\). Fix \(x\in X\) and observe that \(\omega \longmapsto (x|\Lambda (\omega )y)\) is \(\mathcal {A}\)-measurable for every \(y\in X\). By the Riesz Representation Theorem, for every continuous linear functional \(y^*:X\rightarrow \mathbb {R}\) there exists \(y\in X\) such that

$$\begin{aligned} y^*\big (\Lambda ^*(\omega )x\big )=(\Lambda ^*(\omega )x|y) \qquad \text {for every} \ \omega \in \Omega . \end{aligned}$$

Therefore, from the \(\mathcal {A}\)-measurability of \((x|\Lambda (\cdot )y):\Omega \rightarrow \mathbb {R}\) we conclude that \(\Lambda ^*(\cdot )x\) is weakly measurable. Since X is separable, weak and strong measurability coincide, hence \(\Lambda ^*(\cdot )x\) is strongly measurable and consequently \(\mathcal {A}\)-measurable. \(\square \)

Remark 4.9

Note that \(\Vert \Lambda ^*(\cdot )\Vert :\Omega \rightarrow [0,\infty )\) is \(\mathcal {A}\)-measurable due to the equality

$$\begin{aligned} \Vert \Lambda (\omega )\Vert =\Vert \Lambda ^*(\omega )\Vert \ \ \text {for every} \ \omega \in \Omega . \end{aligned}$$

The following theorem characterizes the limit distribution of GRAMs and generalizes [5, Theorem 3.1] (see Remark 4.12).

Theorem 4.10

Assume that an rv-function f has the form (1.2) with a random operator \(\Lambda :\Omega \rightarrow L(X,X)\) and a random variable \(\xi :\Omega \rightarrow X\) such that \(\mathbb {E}\Vert \Lambda (\cdot )\Vert <1\), \(\mathbb {E}\Vert \xi \Vert <\infty \). Moreover, assume that \(\Lambda \) and \(\xi \) are independent. Then the characteristic function \(\varphi ^f\) of f is the only solution of the equation

$$\begin{aligned} \varphi ^f(u)= \varphi ^{\xi }(u)\int _{\Omega }\varphi ^f(\Lambda ^*(\omega )u)\mathbb {P}(d\omega ), \end{aligned}$$
(4.1)

which is continuous at zero, bounded and fulfills \(\varphi ^f(0)=1\).

Lemma 4.11

Let \((\Omega ,\mathcal {A},\mathbb {P})\) be an arbitrary probability space. Suppose that the independent and identically distributed random variables \(\zeta _i:\Omega \rightarrow \mathbb {R}\), \(i\in \mathbb {N}\), satisfy the following conditions:

  1. \(\zeta _i \ge 0\),

  2. \(\mathbb {E}\zeta _i <1\).

Then the sequence \((\prod _{i=1}^{n}\zeta _i)_{n\in \mathbb {N}}\) converges a.s. to zero.

Proof

To show convergence we will consider three cases:

I. If \(\mathbb {E}\zeta _i=0=\int _{\Omega }\zeta _i(\omega )\mathbb {P}(d\omega )\), then \(\zeta _i=0\) a.s., and so \(\prod _{i=1}^{n}\zeta _i=0\) a.s. for every \(n\in \mathbb {N}\).

II. Assume that \(0<\mathbb {E}\zeta _i <1\) and \(\mathbb {P}(\zeta _i=0)=p>0\). Then

$$\begin{aligned}\mathbb {P}\Bigg (\Bigg \{\omega \in \Omega : \ {}&\prod _{i=1}^{n}\zeta _i(\omega )\ne 0\Bigg \}\Bigg )\\ {}&=\mathbb {P}\Big (\Big \{\omega \in \Omega : \zeta _i(\omega )\ne 0, \ \text{ for } \text{ every } \ i\in \left\{ 1,\ldots , n\right\} \Big \}\Big ) \\ {}&= \prod _{i=1}^{n}\mathbb {P}\Big (\Big \{\omega \in \Omega : \zeta _i(\omega )\ne 0 \Big \}\Big )=(1-p)^n. \end{aligned}$$

Define \(A_n=\left\{ \omega \in \Omega : \ \prod _{i=1}^{n}\zeta _i(\omega )\ne 0\right\} \) and observe that \(A_{n+1}\subset A_n\) and

$$\begin{aligned} A=\bigcap _{n=1}^{\infty }A_n=\left\{ \omega \in \Omega : \ \prod _{i=1}^{n}\zeta _i(\omega )\ne 0 \ \text {for every} \ n\in \mathbb {N}\right\} . \end{aligned}$$

By the continuity of the measure it follows that

$$\begin{aligned} \mathbb {P}(A)=\lim _{n\rightarrow \infty }\mathbb {P}(A_n)=\lim _{n\rightarrow \infty }(1-p)^n=0, \end{aligned}$$

and for every \(\omega \notin A\) the product \(\prod _{i=1}^{n}\zeta _i(\omega )\) equals zero for all sufficiently large n.

III. Now assume that \(0<\mathbb {E}\zeta _i <1\), and \(\mathbb {P}(\zeta _i=0)=0\). From Jensen’s inequality we have \(\mathbb {E}\log \zeta _i\le \log \mathbb {E}\zeta _i<0\). Observe that

$$\begin{aligned} \prod _{i=1}^{n}\zeta _i=e^{\log \prod _{i=1}^{n}\zeta _i}=\left( e^{\frac{1}{n}\sum _{i=1}^{n}\log \zeta _i}\right) ^n. \end{aligned}$$

If \(-\infty <\mathbb {E}\log \zeta _1\), then by the independence of the \(\zeta _i\)'s we can apply the Strong Law of Large Numbers; hence, for \(0<\epsilon <|\mathbb {E}\log \zeta _1|\) and almost every \(\omega \in \Omega \), there exists \(N_{\epsilon }=N_{\epsilon }(\omega )\in \mathbb {N}\) such that

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^{n}\log \zeta _i<\mathbb {E}\log \zeta _1+\epsilon \qquad \text {for every} \ n>N_{\epsilon }. \end{aligned}$$

Therefore, for \(n>N_{\epsilon }\), it holds that

$$\begin{aligned} \left( e^{\frac{1}{n}\sum _{i=1}^{n}\log \zeta _i}\right) ^n<e^{n(\mathbb {E}\log \zeta _1+\epsilon )}. \end{aligned}$$

Passing with n to the limit we obtain

$$\begin{aligned} \prod _{i=1}^{n}\zeta _i\xrightarrow {n\rightarrow \infty } 0\qquad \text {a.s.} \end{aligned}$$
(4.2)

If \(\mathbb {E}\log \zeta _1=-\infty \), then we can apply [10, Theorem 2.4.5], from which we conclude that

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^{n}\log \zeta _i\xrightarrow {n\rightarrow \infty }-\infty \qquad \text {a.s.} \end{aligned}$$

Hence

$$\begin{aligned} \left( \prod _{i=1}^{n}\zeta _i\right) ^{\frac{1}{n}}=e^{\frac{1}{n}\sum _{i=1}^n\log \zeta _i}\xrightarrow {n\rightarrow \infty } 0\qquad \text {a.s.} \end{aligned}$$

and consequently \(\prod _{i=1}^{n}\zeta _i\xrightarrow {n\rightarrow \infty }0\) a.s. as well. Summarizing, we get the convergence in all three cases. \(\square \)
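The phenomenon described in Lemma 4.11 is easy to observe numerically. The following sketch is only illustrative: the lognormal choice of the \(\zeta _i\)'s is hypothetical, with \(\mathbb {E}\zeta _i=e^{-0.18}<1\), although each single factor exceeds 1 with positive probability.

```python
import numpy as np

rng = np.random.default_rng(2)

# i.i.d. nonnegative factors with mean below one: zeta_i lognormal,
# E[zeta_i] = exp(-0.5 + 0.8**2 / 2) = exp(-0.18) < 1
n, paths = 200, 5
zeta = rng.lognormal(mean=-0.5, sigma=0.8, size=(paths, n))

partial_products = np.cumprod(zeta, axis=1)      # prod_{i<=k} zeta_i along each path
print(partial_products[:, [9, 49, 199]])         # every path collapses towards zero
print("empirical mean of a single factor:", zeta.mean())
```

In accordance with case III of the proof, the decay is driven by \(\mathbb {E}\log \zeta _i=-0.5<0\), even though many individual factors are larger than one.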

Proof of Theorem 4.10

A random operator \(\Lambda :\Omega \rightarrow L(X,X)\) can be considered as an rv-function \(\Lambda :X\times \Omega \rightarrow X\) due to its measurability (see Sect. 3), and consequently we can associate with it a linear operator Q given by

$$\begin{aligned} Q\mu (B)=\int _{X}\int _{\Omega }\mathbbm {1}_B(\Lambda (\omega )x)\mathbb {P}(d\omega )\mu (dx), \ \ \text {for} \ B\in \mathcal {B}(X). \end{aligned}$$

Now let us define \(\pi ^{\Lambda f}_n:X\times \mathcal {B}(X)\rightarrow [0,1]\) by

$$\begin{aligned} \pi ^{\Lambda f}_n(x,B)=\mathbb {P}^{\infty }(\{(\omega _1,\omega _2\ldots ):\Lambda (\omega _{n+1})f^n(x,\omega _1,\ldots ,\omega _n)\in B\}) \end{aligned}$$

and observe that

$$\begin{aligned} \pi ^{\Lambda f}_n(x,\cdot )=Q\pi _{n}^f(x,\cdot ) \quad \text {for every} \ x\in X. \end{aligned}$$

Indeed, for fixed \(x\in X\), \(B\in \mathcal {B}(X)\) it holds that

$$\begin{aligned} \pi ^{\Lambda f}_n(x,B)&=\mathbb {P}^{\infty }(\{(\omega _1,\omega _2,\ldots ):\Lambda (\omega _{n+1})f^n(x,\omega _1,\ldots ,\omega _n)\in B\})\\&=\int _{\Omega ^{\infty }}\mathbbm {1}_B(\Lambda (\omega _{n+1})f^n(x,\omega _1,\ldots ,\omega _n))\mathbb {P}^{\infty }(d(\omega _1,\omega _2,\ldots ))\\&=\int _{\Omega }\int _{\Omega ^{\infty }}\mathbbm {1}_B(\Lambda (\overline{\omega })f^n(x,\omega _1,\ldots ,\omega _n))\mathbb {P}^{\infty }(d(\omega _1,\omega _2,\ldots ))\mathbb {P}(d\overline{\omega })\\&=\int _{\Omega }\int _X\mathbbm {1}_B(\Lambda (\overline{\omega })y)\pi ^f_n(x,dy)\mathbb {P}(d\overline{\omega })=Q\pi _n^f(x,B). \end{aligned}$$

So now, by Corollary 4.5 and Theorem 4.6 we see that

$$\begin{aligned} \pi ^{ f}_{n+1}(x,\cdot )=\pi ^{\Lambda f}_{n}(x,\cdot )*\mu _{\xi } =Q\pi _{n}^f(x,\cdot )*\mu _{\xi }. \end{aligned}$$

It can easily be shown that the Markov operator Q has the Feller property. To this end, note first that its dual operator \(Q^*\), acting on bounded Borel functions, is given by

$$\begin{aligned} Q^*\psi (x)=\int _{\Omega }\psi (\Lambda (\omega )x)\mathbb {P}(d\omega ). \end{aligned}$$

For a fixed \(\psi \in C(X)\) take an arbitrary \(x_0\in X\) and note that for every \((x_n)_{n\in \mathbb {N}}\) such that \(x_n\xrightarrow []{n\rightarrow \infty }x_0\) we have \(\psi (\Lambda (\omega )x_n)\xrightarrow []{n\rightarrow \infty }\psi (\Lambda (\omega )x_0)\) for every \(\omega \in \Omega \). Let us define \(\varphi _n(\omega )=\psi (\Lambda (\omega )x_n)\) and \(\varphi _0(\omega )=\psi (\Lambda (\omega )x_0)\). Since \(|\varphi _n(\omega )|\le \Vert \psi \Vert _{\infty }\) for \(\omega \in \Omega \), \(n\in \mathbb {N}\) we can apply the Lebesgue Dominated Convergence theorem and hence

$$\begin{aligned} Q^*\psi (x_n)=\int _{\Omega } \varphi _n(\omega )\mathbb {P}(d\omega )\xrightarrow []{n\rightarrow \infty }\int _{\Omega } \varphi _0(\omega )\mathbb {P}(d\omega )=Q^*\psi (x_0). \end{aligned}$$

Because \(x_0\), \((x_n)_{n\in \mathbb {N}}\) and \(\psi \) are arbitrary, we have \(Q^*(C(X))\subset C(X)\). From this and [18, Theorem 1.1, Ch. III] we may pass to the limit as \(n\rightarrow \infty \) and obtain

$$\begin{aligned} \pi ^f=Q\pi ^f*\mu _{\xi }. \end{aligned}$$

Now, using the definition of the characteristic function and the duality between Q and \(Q^*\), we compute

$$\begin{aligned}\varphi ^f(u)&=\int _Xe^{i(u|z)}\pi ^f(dz) =\int _Xe^{i(u|z)}Q\pi ^f*\mu _{\xi }(dz)\\&=\int _X\int _Xe^{i(u|x+y)}Q\pi ^f(dx)\mu _{\xi } (dy)\\&=\int _X\int _Xe^{i(u|x)}\cdot e^{i(u|y)}Q\pi ^f(dx)\mu _{\xi } (dy)\\&=\int _X\int _XQ^*e^{i(u|x)}\cdot e^{i(u|y)}\pi ^f(dx)\mu _{\xi } (dy)\\&=\int _X\int _X\left[ \int _{\Omega }e^{i(u|\Lambda (\omega )x)}\mathbb {P}(d\omega )\right] \cdot e^{i(u|y)}\pi ^f(dx)\mu _{\xi } (dy)\\&=\int _X e^{i(u|y)}\mu _{\xi }(dy)\cdot \int _{\Omega }\int _X e^{i(u|\Lambda (\omega )x)}\pi ^f(dx)\mathbb {P}(d\omega )\\&=\varphi ^{\xi }(u)\int _{\Omega }\int _X e^{i(\Lambda ^*(\omega )u|x)}\pi ^f(dx)\mathbb {P}(d\omega ) =\varphi ^{\xi }(u)\int _{\Omega }\varphi ^f(\Lambda ^*(\omega )u)\mathbb {P}(d\omega ). \end{aligned}$$

This shows that \(\varphi ^f\) satisfies (4.1).

It remains to show the uniqueness of the solution of (4.1). To do this, let us assume that \(\varphi \) is a bounded solution of (4.1) which is continuous at zero and satisfies \(\varphi (0)=1\). Iterating (4.1) we observe that

$$\begin{aligned} \varphi (u)&=\int _{\Omega }\ldots \int _{\Omega }\varphi ^{\xi }(u)\prod _{i=2}^{n}\varphi ^{\xi }((\Lambda ^*)^{i-1}(\omega _1,\ldots ,\omega _{i-1})u)\times \\&\quad \times \varphi ((\Lambda ^*)^n(\omega _1,\ldots ,\omega _n )u)\mathbb {P}(d\omega _1)\ldots \mathbb {P}(d\omega _n), \end{aligned}$$

where

$$\begin{aligned} (\Lambda ^*)^i(\omega _1,\ldots ,\omega _i)u=\Lambda ^*(\omega _i)\circ \ldots \circ \Lambda ^*(\omega _1)u. \end{aligned}$$

It follows that for every \(n\in \mathbb {N}\) we can write

$$\begin{aligned} \varphi (u) =\int _{\Omega ^{\infty }}\prod _{i=1}^{n}\varphi ^{\xi }((\Lambda ^*)^{i-1}(\omega )u)\varphi ((\Lambda ^*)^n(\omega )u)\mathbb {P}^{\infty }(d\omega ). \end{aligned}$$
(4.3)

Since \(\Vert \Lambda ^*(\omega ) \Vert =\Vert \Lambda (\omega ) \Vert \) for every \(\omega \in \Omega \), we have \(\mathbb {E} \Vert \Lambda ^*(\cdot )\Vert =\mathbb {E} \Vert \Lambda (\cdot )\Vert <1\). Taking \(\zeta _i(\omega )= \Vert \Lambda ^*(\omega _i)\Vert \) for \(\omega =(\omega _1,\omega _2,\ldots )\in \Omega ^{\infty }\) we see that

$$\begin{aligned} \Vert (\Lambda ^*)^n(\omega )u\Vert \le \Vert u\Vert \prod _{i=1}^{n}\zeta _i(\omega ). \end{aligned}$$

By Lemma 4.11 we conclude that the sequence \(\big (\Vert (\Lambda ^*)^n(\cdot )(u)\Vert \big )_{n\in \mathbb N}\) converges a.s. to zero.

Fix \(n\in \mathbb {N}\) and let us define random variables \(\eta _n, \theta _n:\Omega ^{\infty }\rightarrow \mathbb {C}\), respectively, by

$$\begin{aligned} \eta _n(\omega )=\prod _{i=1}^n\varphi ^{\xi }((\Lambda ^*)^{i-1}(\omega )u) \quad \text {and}\quad \theta _n(\omega )=\varphi ((\Lambda ^*)^n(\omega )u). \end{aligned}$$

Hence we can rewrite (4.3) as

$$\begin{aligned} \varphi (u)=\int _{\Omega ^{\infty }}\theta _n(\omega )\eta _n(\omega )\mathbb {P}^{\infty }(d\omega ), \ \ n\in \mathbb {N}, u\in X \end{aligned}$$

and thus we obtain

$$\begin{aligned}\Bigg |\int _{\Omega ^{\infty }}\theta _n(\omega )\eta _n(\omega )\mathbb {P}^{\infty }(d\omega )&-\int _{\Omega ^{\infty }}\eta _n(\omega )\mathbb {P}^{\infty }(d\omega )\Bigg |\\&\le \int _{\Omega ^{\infty }}|\theta _n(\omega )-1|\cdot |\eta _n(\omega )|\mathbb {P}^{\infty }(d\omega )\\&\le \int _{\Omega ^{\infty }}|\theta _n(\omega )-1|\mathbb {P}^{\infty }(d\omega ). \end{aligned}$$

Observe that \(|\theta _n(\omega )-1|\le \Vert \varphi \Vert _{\infty }+1\) and, since \(\Vert (\Lambda ^*)^n(\cdot )u\Vert \) converges a.s. to zero, the continuity of \(\varphi \) at zero together with \(\varphi (0)=1\) implies that \((\theta _n)_{n\in \mathbb N}\) converges a.s. to 1. Therefore, from the Lebesgue dominated convergence theorem it can be concluded that

$$\begin{aligned} \int _{\Omega ^{\infty }}|\theta _n(\omega )-1|\mathbb {P}^{\infty }(d\omega )\xrightarrow []{n\rightarrow \infty }0. \end{aligned}$$

Hence passing with n to the limit we obtain

$$\begin{aligned} \varphi (u)=\lim _{n\rightarrow \infty }\int _{\Omega ^{\infty }}\prod _{i=1}^n\varphi ^{\xi }((\Lambda ^*)^{i-1}(\omega )u)\mathbb {P}^{\infty }(d\omega ),\end{aligned}$$
(4.4)

Since the right-hand side of (4.4) does not depend on \(\varphi \), any two solutions of (4.1) from the considered class coincide; as \(\varphi ^f\) is one of them, this completes the proof. \(\square \)

Remark 4.12

Note that under the assumptions of Theorem 4.10 the following statements hold:

  (i) The characteristic function \(\varphi ^f\) is the only solution of equation (4.1) which is Lipschitz, continuous at zero and satisfies \(\varphi (0)=1\).

  (ii) If \(\Lambda \) does not depend on \(\omega \), i.e. \(\Lambda (\omega )\) is the same operator for every \(\omega \), then \(\varphi ^f\) is the only solution of equation (4.1) which is continuous at zero and satisfies \(\varphi (0)=1\).

To show assertion (i), observe that if \(\varphi \) is a solution of (4.1) with \(\varphi (0)=1\) and \(M>0\) is a Lipschitz constant of \(\varphi \), then the following inequalities hold:

$$\begin{aligned} \int _{\Omega ^{\infty }}|\varphi ((\Lambda ^*)^n(\omega )u)-1|\mathbb {P}^{\infty }(d\omega )&\le \int _{\Omega ^{\infty }}M\Vert (\Lambda ^*)^n(\omega )u\Vert \mathbb {P}^{\infty }(d\omega ) \\&\le \Vert u\Vert M(\mathbb {E}\Vert \Lambda ^*(\cdot )\Vert )^n, \end{aligned}$$

which yields (4.4).

When (ii) holds, the formula (4.3) reduces to

$$\begin{aligned} \varphi (u)=\prod _{i=1}^{n}\varphi ^{\xi }((\Lambda ^*)^{i-1}u)\varphi ((\Lambda ^*)^nu) \end{aligned}$$

for any \(n\in \mathbb N\). Letting \(n\rightarrow \infty \) and using the continuity of \(\varphi \) at zero together with \(\varphi (0)=1\), we obtain

$$\begin{aligned} \varphi (u)=\prod _{i=1}^{\infty }\varphi ^{\xi }((\Lambda ^*)^{i-1}u). \end{aligned}$$
(4.5)

\(\square \)

Remark 4.13

Note that the expression (4.4) in fact provides a formula for the unique solution \(\varphi \) of (4.1) in the considered class. In particular, when \(\Lambda \) does not depend on \(\omega \), this solution takes the form (4.5), which can also be found in [5, Theorem 3.1].
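As a simple check of (4.5) (a standard computation, added only as an illustration): for \(X=\mathbb R\), \(\Lambda =\lambda \,\mathrm {id}\) with \(|\lambda |<1\) and \(\xi \) centered Gaussian with variance \(\sigma ^2\), formula (4.5) gives

$$\begin{aligned} \varphi (u)=\prod _{i=1}^{\infty }\exp \left( -\frac{\sigma ^2\lambda ^{2(i-1)}u^2}{2}\right) =\exp \left( -\frac{\sigma ^2u^2}{2(1-\lambda ^2)}\right) , \end{aligned}$$

which is the characteristic function of \(N\big (0,\tfrac{\sigma ^2}{1-\lambda ^2}\big )\), i.e. of the limit distribution \(\pi ^f\) in this case.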

We now give an example of a GRAM which satisfies the assumptions of Theorem 4.10.

Example 4.14

Let us consider random variables \(\xi :\Omega \rightarrow X\) and \(\kappa :\Omega \rightarrow \mathbb {N}\). Take a countable family of bounded linear operators \(T_i:X\rightarrow X\), \(i\in \mathbb {N}\). We define \(\Lambda :\Omega \rightarrow L(X,X)\) as

$$\begin{aligned} \Lambda (\omega )=T_{\kappa {(\omega )}}, \ \ \text {for} \ \omega \in \Omega . \end{aligned}$$

Then the following statements hold:

  (i) \(\Lambda \) is a random operator.

  (ii) If \(\xi \) and \(\kappa \) are independent, then so are \(\xi \) and \(\Lambda \).

  (iii) The expected value of \(\Vert \Lambda (\cdot )\Vert \) is equal to

    $$\begin{aligned} \mathbb {E}\Vert \Lambda (\cdot )\Vert =\sum _{i\in \mathbb {N}}\mu _{\kappa }(\left\{ i\right\} )\Vert T_i\Vert . \end{aligned}$$

  (iv) The adjoint random operator \(\Lambda ^*\) has the form

    $$\begin{aligned} \Lambda ^*(\omega )=T^*_{\kappa (\omega )}. \end{aligned}$$

Assertion (i) follows from the fact that \(\Lambda \) can be rewritten in the form

$$\begin{aligned} \Lambda (\omega )=\sum _{i\in \mathbb {N}}\mathbbm {1}_{\kappa ^{-1}(\left\{ i\right\} )}(\omega )T_i, \ \ \text {for} \ \omega \in \Omega . \end{aligned}$$

Hence it can be easily seen that \(\Lambda \) is \(\mathcal {A}\)-measurable. To show statement (ii) assume that \(\xi \) and \(\kappa \) are independent and observe that \(\mu _{\Lambda }\) has the form

$$\begin{aligned}\mu _{\Lambda }(A)&=\mathbb {P}\left( \bigcup _{i\in \mathbb {N}}\left\{ \omega :\kappa (\omega )=i\right\} \cap \left\{ \omega :T_i\in A\right\} \right) \\&=\sum _{i\in \mathbb {N}}\mathbb {P}(\left\{ \omega :\kappa (\omega )=i\right\} \cap \left\{ \omega :T_i\in A\right\} )\end{aligned}$$

and

$$\begin{aligned}\mathbb {P}(\left\{ \omega :\kappa (\omega )=i\right\} \cap \left\{ \omega :T_i\in A\right\} )&={\left\{ \begin{array}{ll} \mathbb {P}(\left\{ \omega :\kappa (\omega )=i\right\} ), &{} T_i\in A \\ 0, &{} T_i\notin A \end{array}\right. }\\&=\mu _{\kappa }(\left\{ i\right\} )\delta _{T_i}(A).\end{aligned}$$

From that

$$\begin{aligned}\mu _{\Lambda }(A)&=\sum _{i\in \mathbb {N}}\mu _{\kappa }(\left\{ i\right\} )\delta _{T_i}(A).\end{aligned}$$

Now fix \(B\in \mathcal {B}(L(X,X))\otimes \mathcal {B}(X)\), define \(B_T\in \mathcal {B}(\mathbb {N})\otimes \mathcal {B}(X)\) as

$$\begin{aligned} B_T=\left\{ (i,y)\in \mathbb {N}\times X:(T_i,y)\in B \right\} \end{aligned}$$

and observe that

$$\begin{aligned} B^{T_i}=\left\{ y\in X:(T_i,y)\in B \right\} =(B_T)^i, \end{aligned}$$

where \(B^x=\left\{ y\in X:(x,y)\in B\right\} \), \(x\in L(X,X)\). An easy computation shows that

$$\begin{aligned} \mu _{\Lambda }\otimes \mu _{\xi }(B)&=\int _{L(X,X)}\mu _{\xi }(B^x)\mu _{\Lambda }(dx)=\sum _{i\in \mathbb {N}}\mu _{\xi }(B^{T_i})\cdot \mu _{\kappa }(\left\{ i\right\} )\\&=\int _{\mathbb {N}}\mu _{\xi }((B_T)^i)\mu _{\kappa }(di)=\mu _{\kappa }\otimes \mu _{\xi }(B_T)=\mu _{(\kappa ,\xi )}(B_T)\\&=\mathbb {P}(\omega :(\kappa (\omega ),\xi (\omega ))\in B_T)=\mathbb {P}(\omega :(T_{\kappa (\omega )},\xi (\omega ))\in B)\\&=\mu _{(\Lambda ,\xi )}(B). \end{aligned}$$

Statement (iii) is obvious. Finally to show (iv) fix \(i\in \mathbb {N}\) and observe that for \(\omega \in \kappa ^{-1}(\left\{ i\right\} )\) we have

$$\begin{aligned} (\Lambda ^*(\omega )x|y)=(x|T_iy)=(T^*_ix|y) \ \ \text {for every} \ x,y\in X. \end{aligned}$$

Therefore \(\Lambda ^*(\omega )=T_i^*\) for \(\omega \in \kappa ^{-1}(\left\{ i\right\} )\). From that we obtain

$$\begin{aligned} \Lambda ^*(\omega )=\sum _{i\in \mathbb {N}}\mathbbm {1}_{\kappa ^{-1}(\left\{ i\right\} )}(\omega )T^*_i=T^*_{\kappa (\omega )}, \ \ \text {for} \ \omega \in \Omega . \end{aligned}$$

By statements (i)–(iv) we can consider an rv-function f of the form

$$\begin{aligned} f(x,\omega )=T_{\kappa (\omega )}x+\xi (\omega ) \end{aligned}$$

and if we assume additionally that

$$\begin{aligned} \sum _{i\in \mathbb {N}}\mu _{\kappa }(\left\{ i\right\} )\Vert T_i\Vert<1\quad \textrm{and}\quad \mathbb {E}\Vert \xi \Vert <\infty , \end{aligned}$$

then, provided that \(\kappa \) and \(\xi \) are independent, Theorem 4.10 allows us to claim that the characteristic function \(\varphi ^f\) is the only solution of the equation

$$\begin{aligned} \varphi (u)=\varphi ^{\xi }(u)\sum _{i\in \mathbb {N}}\mu _{\kappa }(\left\{ i\right\} )\varphi (T^*_iu), \ \ u\in X,\end{aligned}$$
(4.6)

which is bounded, continuous at zero and satisfies \(\varphi (0)=1\). \(\square \)
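Equation (4.6) can also be checked by simulation. The sketch below is only illustrative: the matrices \(T_1,T_2\) acting on \(X=\mathbb {R}^2\), the distribution of \(\kappa \) and the standard Gaussian \(\xi \) are hypothetical choices satisfying the assumptions above, and the characteristic function of \(\pi ^f\) is replaced by its empirical counterpart.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical data for Example 4.14 on X = R^2
T = [np.array([[0.4, 0.2], [0.0, 0.3]]),
     np.array([[0.1, 0.0], [0.3, 0.2]])]
p = np.array([0.6, 0.4])                       # mu_kappa({1}), mu_kappa({2})
assert sum(p[i] * np.linalg.norm(T[i], 2) for i in range(2)) < 1

M, n = 100_000, 60                             # number of paths and of iterations
z = np.zeros((M, 2))                           # M parallel copies of f^n(0, .)
for _ in range(n):
    k = rng.choice(2, size=M, p=p)             # kappa(omega)
    xi = rng.standard_normal((M, 2))           # xi(omega) ~ N(0, I), independent of kappa
    z = np.einsum('kij,kj->ki', np.stack(T)[k], z) + xi

def phi_hat(u):
    # empirical characteristic function of the (approximate) limit distribution pi^f
    return np.exp(1j * z @ u).mean()

u = np.array([0.7, -0.4])
phi_xi = np.exp(-u @ u / 2)                    # characteristic function of N(0, I)
lhs = phi_hat(u)
rhs = phi_xi * sum(p[i] * phi_hat(T[i].T @ u) for i in range(2))
print(abs(lhs - rhs))                          # small, up to Monte Carlo error
```

Here \(T_i^*\) is realised as the transpose \(T_i^{\mathsf T}\), and the printed discrepancy between the two sides of (4.6) is of the order of the Monte Carlo error.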

It is worth pointing out that if we consider solutions \(\varphi \) of equation (4.1) (or, in particular, of (4.6)) which are not required to be bounded or Lipschitz, then the equation may have more than one solution, as the following example shows.

Example 4.15

Fix \(a\in \mathbb {R}\) such that \(|a|>1\) and \(p\in \left( 0,\frac{1}{1+|a|}\right) \) and let \(X=\mathbb {R}\). Let operators \(T_i:\mathbb {R}\rightarrow \mathbb {R}, i\in \left\{ 1,2\right\} \) be given, respectively, by

$$\begin{aligned} T_1x=ax, \qquad T_2x=\frac{1}{a}x. \end{aligned}$$

Set a random variable \(\kappa :\Omega \rightarrow \mathbb {N}\) with the following distribution

$$\begin{aligned} \mu _{\kappa }(\left\{ 1\right\} )=p, \qquad \mu _{\kappa }(\left\{ 2\right\} )=1-p. \end{aligned}$$

It can be easily seen that for a random operator \(\Lambda \) given by

$$\begin{aligned} \Lambda (\omega )=T_{\kappa (\omega )}=\mathbbm {1}_{\kappa ^{-1}(\left\{ 1\right\} )}(\omega )T_1+\mathbbm {1}_{\kappa ^{-1}(\left\{ 2\right\} )}(\omega )T_2 \end{aligned}$$

we have

$$\begin{aligned} \mathbb {E}\Vert \Lambda (\cdot )\Vert =|a|\cdot p +\left| \frac{1}{a}\right| (1-p)<\frac{|a|^2-1}{|a|^2+|a|}+\frac{1}{|a|}=1. \end{aligned}$$

Observe furthermore that \(\Lambda \) and \(\Lambda ^*\) have the same distribution.

Now consider a random variable \(\xi :\Omega \rightarrow \mathbb {R}\), independent of \(\kappa \), with \(\mu _{\xi }=\delta _0\). Then \(\varphi ^\xi \equiv 1\). It is easy to check that \(\varphi ^f\equiv 1\) and it is a solution of the equation

$$\begin{aligned} \varphi (u)=p\varphi (au)+(1-p)\varphi \left( \frac{u}{a}\right) .\end{aligned}$$
(4.7)

However, it is not the unique solution in the family of functions which are continuous at zero and satisfy \(\varphi (0)=1\). To see this, take the function \(\varphi _0:\mathbb {R}\rightarrow \mathbb {R}\) given by

$$\begin{aligned} \varphi _0(u)=|u|^{\log _{|a|}\left( \frac{1-p}{p}\right) }+1. \end{aligned}$$

Observe that \(\varphi _0\) is continuous (since \(\log _{|a|}\left( \frac{1-p}{p}\right) >0\), because \(p<\frac{1}{1+|a|}<\frac{1}{2}\)), \(\varphi _0(0)=1\) and

$$\begin{aligned}p\varphi _0(au)&+(1-p)\varphi _0\left( \frac{u}{a}\right) =p|u|^{\log _{|a|}\left( \frac{1-p}{p}\right) }\cdot |a|^{\log _{|a|}\left( \frac{1-p}{p}\right) }\\&+(1-p)|u|^{\log _{|a|}\left( \frac{1-p}{p}\right) }\cdot |a|^{-\log _{|a|}\left( \frac{1-p}{p}\right) }+1\\&=|u|^{\log _{|a|}\left( \frac{1-p}{p}\right) }+1=\varphi _0(u),\end{aligned}$$

so \(\varphi ^f\) is not the unique continuous solution of the equation (4.7) having value 1 at zero. \(\square \)

For GRAMs f as above, a natural question arises: is the operator \( (\Lambda ,\xi ) \longmapsto \varphi ^f \) continuous, and in what sense? Before we formulate an appropriate result, we present some additional facts, in which \((X,\rho )\) is a metric space and

$$\begin{aligned} \ Lip_\alpha (X,Y)=\left\{ \varphi \in B(X,Y) :\Vert \varphi (x)-\varphi (y)\Vert \le \alpha \rho (x,y), \ x,y\in X \right\} \end{aligned}$$

for \(\alpha \in (0,\infty )\), where B(X,Y) is the set of all bounded functions from X into Y.

Definition 4.16

Let \((X,\rho )\) be a separable and complete metric space and let \((Y,\Vert \cdot \Vert )\) be a Banach space. We define a metric \(d_H^{X,Y}\) on \(\mathcal M_1(X)\) by the formula

$$\begin{aligned} d_H^{X,Y}(\mu ,\nu )=\sup {\left\{ \left\| \int _X\varphi (x)\mu (dx)-\int _X \varphi (x) \nu (dx)\right\| :\varphi \in Lip_1(X,Y) \right\} }. \end{aligned}$$

Proposition 4.17

Assume that the spaces X and Y are nontrivial. Then the metric \(d^{X,Y}_H\) does not depend on the choice of the space Y; moreover, \(d^{X,Y}_H(\mu ,\nu )=d_H(\mu ,\nu )\) for every \(\mu ,\nu \in \mathcal {M}_1(X)\).

Proof

Fix \(u\in Lip_1(X)\) and \(x_0\in Y\) such that \(\Vert x_0\Vert =1\). Put \(\varphi _0 (x)=u(x)\cdot x_0\) for \(x\in X\); then \(\varphi _0\in Lip_1 (X,Y)\) and it is integrable in Bochner's sense with respect to any probability measure, so we have

$$\begin{aligned}&\left| \int _X u(x)\mu (dx)-\int _X u(x)\nu (dx)\right| \\&\qquad \qquad \qquad \ =\frac{1}{\Vert x_0\Vert }\cdot \left\| x_0\left( \int _X u(x)\mu (dx)- \int _X u(x)\nu (dx) \right) \right\| \\&\qquad \qquad \qquad \ =\left\| \int _X\varphi _0 (x)\mu (dx)-\int _X \varphi _0 (x)\nu (dx)\right\| \le d^{X,Y}_H(\mu ,\nu ).\end{aligned}$$

Since u is arbitrary, we can take the supremum on the left hand side of the inequality and as a consequence we obtain \(d_H\le d^{X,Y}_H.\)

Now fix \(\varphi \in Lip_1(X,Y)\) and \(\mu , \nu \in \mathcal {M}_1(X)\). Then there exists \(y^*\in Y^*\) such that \(\Vert y^*\Vert =1\) and

$$\begin{aligned} \left\| \int _X\varphi (x) \mu (dx)-\int _X\varphi (x) \nu (dx)\right\| = \left| y^*\left( \int _X\varphi (x) \mu (dx)-\int _X\varphi (x) \nu (dx)\right) \right| \end{aligned}$$

by the Hahn–Banach theorem. Applying the Hille Theorem (see e.g. [8, Theorem 6 Ch. II]) we deduce that

$$\begin{aligned}&\left| y^*\left( \int _X\varphi (x) \mu (dx)-\int _X\varphi (x) \nu (dx)\right) \right| \\&\qquad \qquad \qquad \qquad \ =\left| \int _X y^*\circ \varphi (x) \mu (dx)-\int _X y^*\circ \varphi (x) \nu (dx)\right| \le d_H(\mu ,\nu ),\end{aligned}$$

and since \( y^*\circ \varphi \in Lip_1(X)\) we finally obtain \(d_H\ge d^{X,Y}_H.\) \(\square \)

Lemma 4.18

If \(u\in X\setminus \{0\}\) and a function \(\psi :X\rightarrow \mathbb {C}\) is given by \(\psi (z)= e^{i(u|z)}\), then \(\psi \in Lip_{\Vert u\Vert }(X,\mathbb {C})\).

Proof

Since \((u|z)\in \mathbb {R}\) for every \(u,z\in X\), it follows that

$$\begin{aligned}\left| \psi (z)-\psi (y)\right|&=\left| e^{i(u|z)}-e^{i(u|y)}\right| =\sqrt{2-2\cos {((u|z)-(u|y))}}\\&=2\left| \sin {\frac{(u|z-y)}{2}}\right| \le \left| 2\cdot \frac{(u|z-y)}{2}\right| \le \Vert u\Vert \cdot \Vert z-y\Vert .\end{aligned}$$

This completes the proof. \(\square \)

Proposition 4.19

Let \(f,g:X\times \Omega \rightarrow X\) be rv-functions. Assume that the iterates \((f^n(x, \cdot ))_{n\in \mathbb {N}}\), \((g^n(x, \cdot ))_{n\in \mathbb {N}}\) converge in law to \(\pi ^f\) and \(\pi ^g\), respectively, and the limits \(\pi ^f, \pi ^g\) do not depend on x. Then the following inequality for the characteristic functions \(\varphi ^f\) and \(\varphi ^g\) holds:

$$\begin{aligned} \left| \varphi ^f(u)-\varphi ^g(u)\right| \le \Vert u\Vert \cdot d_H(\pi ^f,\pi ^g),\end{aligned}$$
(4.8)

for every \(u\in X\).

Proof

Fix \(u\in X\setminus \{0\}\) and define \(\psi :X\rightarrow \mathbb {C}\) as \(\psi (z)=e^{i(u|z)}\). Then \(\frac{1}{\Vert u\Vert }\psi \in Lip_{1}(X,\mathbb {C})\), by Lemma 4.18. Using Proposition 4.17 we see that

$$\begin{aligned}\frac{1}{\Vert u\Vert }\left| \varphi ^f(u)-\varphi ^g(u)\right|&=\left| \int \frac{1}{\Vert u\Vert }e^{i(u|z)}\pi ^f(dz)-\int \frac{1}{\Vert u\Vert }e^{i(u|z)}\pi ^g(dz)\right| \\&\le d^{X,\mathbb {C}}_H(\pi ^f,\pi ^g)=d_H(\pi ^f,\pi ^g).\end{aligned}$$

This ends the proof. \(\square \)

Remark 4.20

Inequality (4.8) cannot be strengthened to

$$\begin{aligned} \left\| \varphi ^f-\varphi ^g\right\| _{\infty }\le d_H(\pi ^f,\pi ^g),\end{aligned}$$
(4.9)

which is shown in the example given below.

Example 4.21

Fix \(a\in \mathbb {R}\) and let \(X=\mathbb {R}\). For \(n\in \mathbb {N}\) let \(\xi _n:\Omega \rightarrow X\) be a random variable with the uniform distribution on the interval \(\left[ a,a+\frac{1}{n}\right] \). (We assume that such \(\xi _n\)'s can be constructed; this is possible, for instance, if \((\Omega ,\mathcal A,\mathbb P)\) is the unit interval with the Lebesgue measure.) Define rv-functions \(f_n,g:X\times \Omega \rightarrow X\) by

$$\begin{aligned} f_n(x,\omega )=\xi _n(\omega ),\quad g(x,\omega )=a. \end{aligned}$$

Observe that the k-th iterate of \(f_n\) satisfies \(f_n^k(x,\omega _1,\ldots ,\omega _k)=\xi _n(\omega _k)\) and \(g^k(x,\omega )=a\). So we can write

$$\begin{aligned}\pi _k^{f_n}(A)&=\mathbb {P}^{\infty }\left( \left\{ (\omega _1,\omega _2,\ldots )\in \Omega ^{\infty }:f_n^{k}(x,\omega _1,\ldots ,\omega _k)\in A \right\} \right) \\&=\mathbb {P}^{\infty }\left( \left\{ (\omega _1,\omega _2,\ldots )\in \Omega ^{\infty }:\xi _n(\omega _k)\in A\right\} \right) \\&=\mathbb {P}\left( \left\{ \omega _k\in \Omega :\xi _n(\omega _k)\in A\right\} \right) =\int _A n\mathbbm {1}_{[a,a+\frac{1}{n}]}(x)dx =\pi ^{f_n}(A).\end{aligned}$$

Additionally let us see that

$$\begin{aligned} \pi _k^g(A)=\delta _a(A)=\pi ^g(A). \end{aligned}$$

The characteristic functions of the above distributions have the following forms

$$\begin{aligned}\varphi ^{f_n}(u)&=\int _{\mathbb {R}}e^{iux}\pi ^{f_n}(dx)=\frac{n}{iu}e^{iua}\left( e^{iu\frac{1}{n}}-1\right) ,\\ \varphi ^g(u)&=\int _{\mathbb {R}}e^{iux}\pi ^{g}(dx)=e^{iua}.\end{aligned}$$

For every \(c\in Lip_1(\mathbb {R})\) we have the following computation

$$\begin{aligned}\Bigg |\int _{\mathbb {R}}c(x)\pi ^{f_n}(dx)&-\int _{\mathbb {R}}c(x)\pi ^g(dx)\Bigg |\\&=\Bigg |n\int _{\mathbb {R}}c(x)\cdot \mathbbm {1}_{[a,a+\frac{1}{n}]} dx-c(a)\Bigg |\\&=\Bigg |n\int _{\mathbb {R}}c(x)\cdot \mathbbm {1}_{[a,a+\frac{1}{n}]} dx-n\int _{\mathbb {R}}c(a)\cdot \mathbbm {1}_{[a,a+\frac{1}{n}]} dx\Bigg |\\&\le n\int _{\mathbb {R}}|x-a|\mathbbm {1}_{[a,a+\frac{1}{n}]}dx=\frac{1}{2n}.\end{aligned}$$

Taking supremum over all \(c\in Lip_1(\mathbb {R})\) we obtain

$$\begin{aligned} d_H(\pi ^{f_n},\pi ^g)\le \frac{1}{2n}\xrightarrow {n\rightarrow \infty }0. \end{aligned}$$

It is easily seen that \(\varphi ^{f_n}(u)\xrightarrow {n\rightarrow \infty }\varphi ^g(u)\) for every \(u\in X\), but

$$\begin{aligned} \left| \varphi ^{f_n}(u)-\varphi ^g(u)\right| \xrightarrow {u\rightarrow +\infty }1 \end{aligned}$$

for every \(n\in \mathbb {N}\). From that

$$\begin{aligned} \left\| \varphi ^{f_n}-\varphi ^g\right\| _{\infty }\ge 1 \ \ \text {for every} \ n\in \mathbb {N}. \end{aligned}$$

Therefore the sequence \(\big (\varphi ^{f_n}\big )_{n\in \mathbb N}\) does not converge to \(\varphi ^g \) in the supremum norm \(\Vert \cdot \Vert _{\infty }\), although \(d_H(\pi ^{f_n},\pi ^g)\rightarrow 0\); hence (4.9) fails. \(\square \)

Now we turn to the second theorem of this section, which extends [4, Theorem 3]. We note that in this theorem a real separable Hilbert space X is considered and \(\varphi ^f,\varphi ^g\) denote the characteristic functions of \(\pi ^f, \pi ^g\), which result from Theorem 3.4. The announced theorem is a straightforward consequence of Theorem 3.4 and Proposition 4.19, and reads as follows.

Theorem 4.22

Assume that rv-functions f, g satisfy (U\(_f\)) and (U\(_g\)), respectively. Then

$$\begin{aligned}\left| \varphi ^f(u)-\varphi ^g(u)\right| \le \Vert u\Vert \cdot \min&\left\{ \frac{1}{1-\mathbb {E}\Vert \Lambda _f(\cdot )\Vert }\left( \frac{\mathbb {E}\Vert \xi _g\Vert }{1-\mathbb {E}\Vert \Lambda _g(\cdot )\Vert }\alpha +\beta \right) ,\right. \\&\left. \frac{1}{1-\mathbb {E}\Vert \Lambda _g(\cdot )\Vert }\left( \frac{\mathbb {E}\Vert \xi _f\Vert }{1-\mathbb {E}\Vert \Lambda _f(\cdot )\Vert }\alpha +\beta \right) \right\} ,\end{aligned}$$

where \(\alpha =\mathbb {E}\Vert \Lambda _f(\cdot )-\Lambda _g(\cdot )\Vert , \ \beta =\mathbb {E}\Vert \xi _f-\xi _g\Vert .\)

Remark 4.23

The main results of [4, 5] concern rv-functions of the form \(f(x,\omega )=\Lambda x+\xi _f(\omega )\) with \(\Lambda \in L(X,X)\). In particular, the author examines a kind of continuity of the operator \(\xi _f\longmapsto \varphi ^f\). Note that this corresponds to the case \(\alpha =0\) in our results. Under appropriate assumptions we have

$$\begin{aligned} d_H(\pi ^f,\pi ^g)\le \frac{\mathbb E\Vert \xi _f-\xi _g\Vert }{1-\Vert \Lambda \Vert } \end{aligned}$$

as well as

$$\begin{aligned} \left| \varphi ^f(u)-\varphi ^g(u)\right| \le \frac{ \Vert u\Vert }{1-\Vert \Lambda \Vert }\mathbb E\Vert \xi _f-\xi _g\Vert . \end{aligned}$$