1 Introduction

In 2013, Khan [1] introduced the Picard-Mann hybrid iterative process, which can be seen as a hybrid of the Picard and Mann iterative processes. He proved that it converges faster than the Picard, Mann, and Ishikawa iterative processes in the sense of Berinde [2]. Our purpose is to introduce the random Picard-Mann hybrid iterative process, which can be seen as the stochastic version of the Picard-Mann hybrid iterative process. A finer concept of almost stability for fixed point iteration procedures was introduced by Berinde [3], who proved that the Kirk, Mann, and Ishikawa iteration procedures, which are known to be almost stable and stable with respect to some classes of contractive operators, are also summably almost stable. We study the summable almost T-stability and strong convergence of the random Picard-Mann hybrid iterative process and the random Mann-type iterative process for a generalized class of random operators in separable Banach spaces. Our results are stochastic generalizations and improvements of several well-known deterministic stability results.

Real world problems come with uncertainties and ambiguities. To deal with probabilistic models, probabilistic functional analysis has emerged as a momentous mathematical discipline and has attracted the attention of several mathematicians over the years, in view of its applications in diverse areas from pure mathematics to the applied sciences. Random nonlinear analysis, an important branch of probabilistic functional analysis, deals with the solution of various classes of random operator equations and related problems. Of course, the development of random methods has revolutionized the financial markets. Random fixed point theorems are stochastic generalizations of classical or deterministic fixed point theorems and are required for the theory of random equations, random matrices, random partial differential equations, and various classes of random operators arising in physical systems (see [4, 5]). Random fixed point theory was initiated in the 1950s by the Prague school of probabilists. Spacek [6] and Hans [7] established a stochastic analog of the Banach fixed point theorem in a separable complete metric space. In 1979, Itoh [8] generalized and extended the theorem of Spacek and Hans to a multi-valued contraction random operator. The survey article by Bharucha-Reid [9] in 1976, in which he studied sufficient conditions for a stochastic analog of Schauder’s fixed point theorem for random operators, gave wings to random fixed point theory. This area has since become a full-fledged field of research, and many interesting techniques for obtaining solutions of nonlinear random systems have appeared in the literature (see [4–6, 8, 10–19]).

Papageorgiou [16] established the existence of a random fixed point of measurable closed and nonclosed valued multi-functions satisfying general continuity conditions and hence improved the results in [8, 20] and [21]. Xu [18] extended the results of Itoh to a nonself random operator T, where T satisfies the weakly inward or the Leray-Schauder condition. Shahzad and Latif [17] proved a general random fixed point theorem for continuous random operators. As applications, they derived a number of random fixed point theorems for various classes of 1-set and 1-ball contractive random operators. Arunchai and Plubtieng [10] obtained some random fixed point results for the sum of a weakly-strongly continuous random operator and a nonexpansive random operator in Banach spaces.

Mann [22] introduced an iterative scheme and employed it to approximate the solution of a fixed point problem defined by a nonexpansive mapping for which the Picard iterative scheme fails to converge. Later, in 1974, Ishikawa [23] introduced an iterative scheme to obtain the convergence of a Lipschitzian pseudocontractive operator when the Mann iterative scheme is not applicable. Many authors have studied convergence theorems and stability problems in Banach spaces and metric spaces (see [24–28]).

The study of the convergence of different random iterative processes constructed for various random operators is a recent development (see [11–14] and the references mentioned therein). Recently, Zhang et al. [19] studied the almost sure T-stability and convergence of Ishikawa-type and Mann-type random algorithms for certain ϕ-weakly contractive type random operators in a separable Banach space.

They also established the Bochner integrability of the random fixed points of such random operators. Beg et al. [29] recently studied the almost sure T-stability and strong convergence of the random Halpern iteration scheme and the random Xu-Mann iteration scheme for a general class of random operators in a separable Banach space. Their results generalize well-known deterministic stability results in a stochastic setting (see [30, 31]).

2 Preliminaries

Let \((\Omega,\Sigma ,\mu)\) be a complete probability measure space, where \((\Omega,\Sigma )\) is a measurable space (Σ a sigma-algebra of subsets of Ω) and μ is a probability measure on Σ, that is, a measure with total measure one. Let \((E,B(E))\) be a measurable space, where E is a separable Banach space and \(B(E)\) is the Borel sigma-algebra of E.

A mapping \(\xi:\Omega\rightarrow E\) is called an E-valued random variable if ξ is \((\Sigma ,B(E))\)-measurable. A mapping \(\xi:\Omega\rightarrow E\) is called strongly μ-measurable if there exists a sequence \(\{\xi_{n}\}\) of μ-simple functions converging to ξ, μ-almost everywhere. Due to the separability of a Banach space E, the sum of two E-valued random variables is an E-valued random variable.

A mapping \(T:\Omega\times E\to E\) is called a random operator if for each fixed e in E, the mapping \(T(\cdot,e):\Omega\to E\) is measurable.

Throughout this paper, we assume that \((\Omega,\Sigma ,\mu)\) is a complete probability measure space and E is a nonempty subset of a separable Banach space X.

Definition 2.1

[19]

Let \((\Omega,\Sigma ,\mu)\) be a complete probability measure space and E be a nonempty subset of a separable Banach space X. Let \(T:\Omega\times E\to E\) be a random operator. Denote by

$$F(T)=\bigl\{ x^{\ast}(\omega)\in E:T\bigl(\omega,x^{\ast}(\omega) \bigr)=x^{\ast}(\omega), \omega\in\Omega\bigr\} $$

the set of random fixed points of T. For any given random variable \(x_{0}(\omega)\in E\), define the iterative scheme \(\{x_{n}(\omega)\} _{n=0}^{\infty}\subset E\) by

$$ x_{n+1}(\omega)=f\bigl(T, x_{n}(\omega)\bigr), \quad n=0,1,2, \ldots, $$
(2.1)

where f is a measurable function in the second variable.

Let \(x^{\ast}(\omega)\) be a random fixed point of T. Let \(\{y_{n}(\omega )\}_{n=0}^{\infty}\subset E\) be an arbitrary sequence of random variables. Denote

$$ \varepsilon_{n}(\omega)=\bigl\Vert y_{n+1}(\omega)-f \bigl(T, y_{n}(\omega)\bigr)\bigr\Vert . $$
(2.2)

Then the iterative scheme (2.1) is said to be T-stable almost surely (a.s.), or stable with respect to T almost surely, if and only if, for \(\omega\in\Omega\), \(\varepsilon_{n}(\omega)\to0\) as \(n\to\infty\) implies that \(y_{n}(\omega)\to x^{\ast}(\omega)\in E\) almost surely.

Definition 2.2

[3]

Let \((X,d)\) be a metric space, \(T:X\to X\) be a self-map, and \(x_{0}\in X\). Assume that the iteration procedure

$$ x_{n+1}=f(T,x_{n}),\quad n=0,1,2,\ldots, $$
(2.3)

converges to a fixed point p of T. Let \(\{y_{n}\}_{n=0}^{\infty}\) be an arbitrary sequence in X and

$$\varepsilon_{n}=d\bigl(y_{n+1},f(T,y_{n})\bigr),\quad n=0,1,2,\ldots. $$

The iteration procedure (2.3) is said to be summably almost T-stable or summably almost stable with respect to T if and only if

$$\sum_{n=0}^{\infty}\varepsilon_{n}< \infty \quad \text{implies that}\quad \sum_{n=0}^{\infty}d(y_{n},p)< \infty. $$

The following remarks were made by Berinde [3].

Remark 2.1

  (1)

    It is obvious that every summably almost stable iteration procedure is also almost stable, since

    $$\sum_{n=0}^{\infty}d(y_{n},p)< \infty \quad \text{implies that}\quad \lim_{n\to \infty}y_{n}=p. $$

    But the converse is not true: there exist fixed point iteration procedures which are not summably almost stable (see Example 1 of [3]).

  (2)

    The summable almost stability of a fixed point iteration procedure actually expresses a very important property regarding the rate of convergence of the sequence \(\{y_{n}\}_{n=0}^{\infty}\), converging to the fixed point p, i.e., the fact that the ‘displacements’ \(d(y_{n},p)\) converge fast enough to 0 to ensure the convergence of the series \(\sum_{n=0}^{\infty}d(y_{n},p)\).

Example 2.1

[3]

Let \(T:[0,1]\to[0,1]\) be defined by \(Tx=x\), for each \(x\in[0,1]\), where [0,1] has the usual metric. Then T is continuous, nonexpansive and \(F(T)=[0,1]\). It is well known that the Picard iteration is neither T-stable nor almost T-stable. We shall show that the Picard iteration is not summably almost T-stable either. Indeed, let \(p=0\). Take \(y_{n}=\frac{1}{n}\), for all \(n\geq1\). Then \(\lim_{n\to\infty}y_{n}=0\),

$$\varepsilon_{n}=\vert y_{n+1}-Ty_{n}\vert = \frac{1}{n(n+1)}, \quad \text{and hence } \sum_{n=1}^{\infty}\varepsilon_{n}< \infty, $$

but

$$\sum_{n=1}^{\infty} \vert y_{n}-p \vert =\sum_{n=1}^{\infty}\frac{1}{n}= \infty. $$

Therefore, the Picard iteration is not summably almost T-stable.
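
The divergence in this example can be checked numerically. The following minimal Python sketch (illustrative only, not part of the source material) compares the partial sums of \(\varepsilon_{n}\) and of \(\vert y_{n}-p\vert \) for \(T=\mathrm{id}\), \(y_{n}=1/n\), and \(p=0\).

```python
# Numerical illustration of Example 2.1: T is the identity on [0, 1], y_n = 1/n, p = 0.
# The series of residuals eps_n converges, while the series of displacements |y_n - p| diverges.
N = 100_000
y = [1.0 / n for n in range(1, N + 2)]                 # y_1, ..., y_{N+1}
eps = [abs(y[n] - y[n - 1]) for n in range(1, N + 1)]  # eps_n = |y_{n+1} - T y_n| = 1/(n(n+1))

print(sum(eps))     # close to 1: the partial sums of eps_n stay bounded
print(sum(y[:N]))   # about log(N) + 0.577: the partial sums of |y_n - p| grow without bound
```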

Motivated by the above facts, we now give the stochastic version of the concept of summable almost T-stability.

Definition 2.3

Let \((\Omega,\Sigma ,\mu)\) be a complete probability measure space and E be a nonempty subset of a separable Banach space X. Let \(T:\Omega \times E\to E\) be a random operator. Denote by \(F(T)=\{x^{\ast}(\omega )\in E:T(\omega,x^{\ast}(\omega))=x^{\ast}(\omega), \omega\in\Omega\}\) the set of random fixed points of T. For any given random variable \(x_{0}(\omega)\in E\), define an iterative scheme \(\{x_{n}(\omega)\} _{n=0}^{\infty}\subset E\) by

$$ x_{n+1}(\omega)=f\bigl(T, x_{n}(\omega)\bigr), \quad n=0,1,2, \ldots, $$
(2.4)

where f is a measurable function in the second variable.

Let \(x^{\ast}(\omega)\) be a random fixed point of T. Let \(\{y_{n}(\omega )\}_{n=0}^{\infty}\subset E\) be an arbitrary sequence of random variables. Set

$$ \varepsilon_{n}(\omega)=\bigl\Vert y_{n+1}(\omega)-f \bigl(T, y_{n}(\omega)\bigr)\bigr\Vert . $$
(2.5)

Then the iterative scheme (2.4) is said to be summably almost T-stable almost surely (a.s.) or summably almost stable with respect to T almost surely if and only if

$$ \omega\in\Omega, \sum_{n=0}^{\infty}\varepsilon_{n}(\omega)< \infty \quad \text{implies that}\quad \sum _{n=0}^{\infty}\bigl\Vert y_{n}(\omega) -x^{\ast}(\omega)\bigr\Vert < \infty $$
(2.6)

almost surely.

The concept of summable almost T-stability was introduced by Berinde [3]. Clearly, every summably almost stable iteration process is also almost stable, since

$$\sum_{n=0}^{\infty}\bigl\Vert y_{n}( \omega)-x^{\ast}(\omega)\bigr\Vert < \infty \quad \text{implies that}\quad \lim _{n\to\infty}y_{n}(\omega)=x^{\ast}(\omega), $$

but the converse is not true (see Example 1 of Berinde [3]).

It is well known that the summable almost stability of a fixed point iteration process actually expresses a very important property regarding the rate of convergence of the sequence \(\{y_{n}(\omega)\} _{n=0}^{\infty}\), converging to a fixed point \(x^{\ast}(\omega)\), i.e., the fact that the displacements \(\Vert y_{n}(\omega)-x^{\ast}(\omega)\Vert \) converge fast enough to 0 to ensure the convergence of the series \(\sum_{n=0}^{\infty} \Vert y_{n}(\omega)-x^{\ast}(\omega)\Vert \) (see Berinde [3]).

Osilike [32] introduced a contractive condition and established some interesting deterministic stability results for contractive operators T satisfying

$$ d(Tx,Ty)\leq Ld(x,Tx)+ad(x,y) $$
(2.7)

for all x and y, where \(L\geq0\) and \(0\leq a<1\). He proved some T-stability results for maps satisfying (2.7) with respect to the Picard, Kirk, Mann, and Ishikawa iterative processes. Imoru and Olatinwo [33] generalized the results of Osilike [32] by proving some deterministic stability results for maps satisfying the following contractive condition:

$$ d(Tx,Ty)\leq\varphi\bigl(d(x,Tx)\bigr)+ad(x,y), $$
(2.8)

where \(0\leq a<1\) and \(\varphi:\mathbb{R}^{+}\to\mathbb{R}^{+}\) is monotone increasing with \(\varphi(0)=0\). Bosede and Rhoades [34] considered a map T having a fixed point p and satisfying the contractive condition:

$$ d(p,Ty)\leq ad(p,y), $$
(2.9)

for some \(0\leq a<1\) and for each \(y\in X\), where X is a complete metric space.
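
For orientation, these conditions are nested in the following sense (an elementary observation recorded here for the reader's convenience): taking \(\varphi(t)=Lt\) in (2.8) recovers condition (2.7), while evaluating (2.8) at a fixed point \(x=p\) (so that \(d(p,Tp)=0\) and \(\varphi(d(p,Tp))=\varphi(0)=0\)) yields \(d(p,Ty)\leq ad(p,y)\), which is condition (2.9).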

Recently, Beg et al. [29] considered a general class of random mappings which generalizes, in a stochastic setting, the contractive operators due to Osilike [32], Imoru and Olatinwo [33], and Bosede and Rhoades [34]. This class of random operators is defined as follows:

$$ \bigl\Vert x^{\ast}(\omega)-T(\omega,y)\bigr\Vert \leq a(\omega)\bigl\Vert x^{\ast}(\omega)-y(\omega)\bigr\Vert , $$
(2.10)

where \(0\leq a(\omega)<1\) and \(y(\omega)\in E\) is arbitrary.

From the above results, we now introduce the following generalized class of random operators.

Definition 2.4

Let \((\Omega,\Sigma ,\mu)\) be a complete probability measure space and E be a nonempty subset of a separable Banach space X. A random operator \(T:\Omega\times E\to E\) is said to be a generalized random φ-contractive type operator if there exists a continuous and nondecreasing function \(\varphi:\mathbb{R}^{+}\to\mathbb{R}^{+}\) with \(\varphi(t)>0\) for all \(t\in(0,\infty)\) and \(\varphi(0)=0\) such that, for each \(x^{\ast}(\omega)\in F(T)\), \(y\in E\), and \(\omega\in\Omega\), we have

$$ \bigl\Vert x^{\ast}(\omega)-T(\omega,y)\bigr\Vert \leq\theta(\omega) \bigl\Vert x^{\ast}(\omega )-y(\omega)\bigr\Vert -\varphi \bigl(\bigl\Vert x^{\ast}(\omega)-y(\omega)\bigr\Vert \bigr), $$
(2.11)

where \(0\leq\theta(\omega)<1\).
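
A simple illustration of such an operator (constructed here only to fix ideas, not taken from the sources above): let \(E=X=\mathbb{R}\), \(T(\omega,y)=\frac{y}{2}\) for every \(\omega\in\Omega\), \(\theta(\omega)\equiv\frac{3}{4}\), and \(\varphi(t)=\frac{t}{4}\). Then \(x^{\ast}(\omega)\equiv0\) is a random fixed point of T and (2.11) holds with equality, since

$$\bigl\Vert x^{\ast}(\omega)-T(\omega,y)\bigr\Vert =\frac{\vert y\vert }{2}=\frac{3}{4}\vert y\vert -\frac{1}{4}\vert y\vert =\theta(\omega)\bigl\Vert x^{\ast}(\omega)-y(\omega)\bigr\Vert -\varphi\bigl(\bigl\Vert x^{\ast}(\omega)-y(\omega)\bigr\Vert \bigr). $$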

For our main theorems, we define the following random analogues of the well-known Picard, Mann, and Ishikawa iteration processes.

The random Picard-type iterative scheme is a sequence of functions \(\{\xi _{n}\}\) defined by

$$ \textstyle\begin{cases} \xi_{1}(\omega)\in E, \\ \xi_{n+1}(\omega)=T(\omega,\xi_{n}(\omega)). \end{cases} $$
(2.12)

The random Mann-type iterative scheme is a sequence of functions \(\{\xi _{n}\}\) defined by

$$ \textstyle\begin{cases} \xi_{0}(\omega)\in E, \\ \xi_{n+1}(\omega)=(1-a_{n})\xi_{n}(\omega)+a_{n}T(\omega,\xi _{n}(\omega)), \end{cases} $$
(2.13)

where \(0\leq a_{n} \leq1\) and \(\xi_{0}:\Omega\rightarrow E\) is an arbitrary measurable mapping.

The random Ishikawa-type iterative scheme is a sequence of functions \(\{\xi _{n}\}\) and \(\{\eta_{n}\}\) defined by

$$ \textstyle\begin{cases} \xi_{0}(\omega)\in E, \\ \xi_{n+1}(\omega)=(1-a_{n})\xi_{n}(\omega)+a_{n}T(\omega,\eta _{n}(\omega)), \\ \eta_{n}(\omega)=(1-c_{n})\xi_{n}(\omega)+c_{n}T(\omega,\xi _{n}(\omega )), \end{cases} $$
(2.14)

where \(0\leq a_{n},c_{n}\leq1\) and \(\xi_{0}:\Omega\rightarrow E\) is an arbitrary measurable mapping.
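
For concreteness, the following Python sketch simulates the random Mann-type scheme (2.13) and the random Ishikawa-type scheme (2.14) for a single sampled ω; the operator \(T(\omega,x)=(x+b(\omega))/2\), the constant step sizes, and the number of steps are illustrative choices only (such an operator satisfies (2.11) with \(\theta(\omega)\equiv3/4\) and \(\varphi(t)=t/4\), and its random fixed point is \(x^{\ast}(\omega)=b(\omega)\)).

```python
# Illustrative simulation (not from the paper): one sampled omega is represented by the draw b,
# and T(omega, x) = (x + b)/2 has the random fixed point x*(omega) = b.
import random

random.seed(0)
b = random.uniform(-1.0, 1.0)       # plays the role of omega through b(omega)
T = lambda x: 0.5 * (x + b)         # T(omega, .) for this sampled omega

a_n = 0.5                           # constant step sizes, so that sum a_n = infinity
c_n = 0.5

def mann(x0, steps):
    """Random Mann-type scheme (2.13) for the sampled omega."""
    x = x0
    for _ in range(steps):
        x = (1 - a_n) * x + a_n * T(x)
    return x

def ishikawa(x0, steps):
    """Random Ishikawa-type scheme (2.14) for the sampled omega."""
    x = x0
    for _ in range(steps):
        y = (1 - c_n) * x + c_n * T(x)
        x = (1 - a_n) * x + a_n * T(y)
    return x

print(abs(mann(5.0, 100) - b), abs(ishikawa(5.0, 100) - b))   # both errors are essentially zero
```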

As noted in Section 1, Khan [1] proved that the Picard-Mann hybrid iterative process, a hybrid of the Picard and Mann iterative processes, converges faster than the Picard, Mann, and Ishikawa iterative processes in the sense of Berinde [2].

Now, we introduce the random Picard-Mann hybrid iterative process as follows:

The random Picard-Mann hybrid iterative process is a sequence of functions \(\{x_{n}\}\) defined by

$$ \textstyle\begin{cases} x _{1}(\omega)\in E, \\ x _{n+1}(\omega)=T(\omega,y_{n}(\omega)), \\ y _{n}(\omega)=(1-\alpha_{n})x_{n}(\omega)+\alpha_{n}T(\omega,x _{n}(\omega)), \quad n\in\mathbb{N}, \end{cases} $$
(2.15)

where \(\alpha_{n}\in(0,1)\) for each \(n\in\mathbb{N}\) and \(x_{1}:\Omega\to E\) is an arbitrary measurable mapping.
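
The next Python sketch (again purely illustrative; the operator and the constant \(\alpha_{n}\) are assumptions made for the demonstration) runs the random Picard-Mann hybrid scheme (2.15) alongside the random Mann-type scheme (2.13) for one sampled ω and the toy operator \(T(\omega,x)=(x+b(\omega))/2\). For this operator the hybrid scheme contracts the error by the factor \(3/8\) per step versus \(3/4\) for the Mann-type scheme, in line with the faster convergence reported by Khan [1] in the deterministic case.

```python
# Illustrative comparison (not from the paper) of the Picard-Mann hybrid scheme (2.15)
# with the Mann-type scheme (2.13) for one sampled omega; T(omega, x) = (x + b)/2.
import random

random.seed(1)
b = random.uniform(-1.0, 1.0)       # random fixed point x*(omega) = b for this omega
T = lambda x: 0.5 * (x + b)
alpha = 0.5                         # illustrative constant alpha_n in (0, 1)

def picard_mann(x0, steps):
    """Random Picard-Mann hybrid scheme (2.15) for the sampled omega."""
    x = x0
    for _ in range(steps):
        y = (1 - alpha) * x + alpha * T(x)
        x = T(y)
    return x

def mann(x0, steps):
    """Random Mann-type scheme (2.13) with a_n = alpha."""
    x = x0
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * T(x)
    return x

for k in (5, 10, 20):
    # the hybrid error shrinks like (3/8)^k, the Mann error like (3/4)^k
    print(k, abs(picard_mann(2.0, k) - b), abs(mann(2.0, k) - b))
```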

The purpose of this paper is to prove that the random Picard-Mann hybrid iterative process (2.15) is summably almost T-stable a.s., where T is the generalized random φ-contractive type operator defined in (2.11). We also establish some convergence results for the random Picard-Mann hybrid iterative process (2.15) and the random Mann iterative process (2.13). Our results are improvements and generalizations of several well-known results in the literature.

The following definitions are needed in this study and can be found in Beg et al. [29].

Definition 2.5

A mapping \(x:\Omega\to E\) is said to be a finitely valued random variable if it is constant on each of a finite number of disjoint sets \(A_{i}\in\Sigma \) and is equal to 0 on \(\Omega-(\bigcup_{i=1}^{n} A_{i})\). A mapping x is called a simple random variable if it is finitely valued and

$$\mu \bigl(\bigl\{ \omega:\bigl\Vert x(\omega)\bigr\Vert >0\bigr\} \bigr)< \infty. $$

Definition 2.6

A mapping \(x:\Omega\to E\) is said to be an E-valued random variable if the inverse image under the mapping x of every Borel subset β of E belongs to Σ, that is, \(x^{-1}(\beta)\in\Sigma \) for all \(\beta\in B(E)\).

Definition 2.7

A mapping \(x:\Omega\to E\) is said to be a strong random variable if there exists a sequence \(\{x_{n}(\omega)\}\) of simple random variables which converges to \(x(\omega)\) almost surely, i.e., there exists a set \(A_{0}\in\Sigma \) with \(\mu(A_{0})=0\) such that

$$\lim_{n\to\infty}x_{n}(\omega)=x(\omega),\quad \omega\in \Omega-A_{0}. $$

Definition 2.8

A mapping \(x:\Omega\to E\) is said to be a weak random variable if the function \(x^{\ast}(x(\omega))\) is a real valued random variable for each \(x^{\ast}\in E^{\ast}\), where \(E^{\ast}\) is the dual space of E.

In a separable Banach space X, the notions of strong and weak random variables \(x:\Omega\to X\) coincide, and for such a space X, x is simply called a random variable (see Joshi and Bose [4], Corollary 1).

Let Y be another Banach space. Joshi and Bose [4] introduced the following definitions, which will be needed in this study.

Definition 2.9

A mapping \(F:\Omega\times X\to Y\) is said to be a continuous random mapping if the set of all \(\omega\in\Omega\) for which \(F(\omega,x)\) is a continuous function of x has measure one.

Definition 2.10

A mapping \(F:\Omega\times X\to Y\) is said to be a random mapping if \(F(\omega,x)=y(\omega)\) is a Y-valued random variable, for every \(x\in X\).

Definition 2.11

A random mapping \(F:\Omega\times X\to Y\) is said to be demi-continuous at \(x\in X\) if

$$\Vert x_{n}-x\Vert \to0 \quad \text{implies}\quad F(\omega,x_{n}) \rightharpoonup F(\omega,x) \quad \text{a.s.} $$

Definition 2.12

An equation of the type \(F(\omega,x(\omega))=x(\omega)\) is called a random fixed point equation, where \(F:\Omega\times X\to X\) is a random mapping.

Definition 2.13

A mapping \(x:\Omega\to X\) which satisfies the random fixed point equation \(F(\omega,x(\omega))=x(\omega)\) almost surely is said to be a wide sense solution of the fixed point equation.

Definition 2.14

An X-valued random variable \(x(\omega)\) is said to be a random solution of the fixed point equation, or a random fixed point of F, if

$$\mu \bigl(\bigl\{ \omega:F\bigl(\omega,x(\omega)\bigr)=x(\omega)\bigr\} \bigr)=1. $$

Remark 2.2

It is well known that a random solution is a wide sense solution of the fixed point equation. The converse is not true, as demonstrated in the following example given by Joshi and Bose [4].

Example 2.2

Let X be the set of all real numbers and let E be a non-measurable subset of X. Let \(F:\Omega\times X\to X\) be a random mapping defined as \(F(\omega,x)=x^{2}+x-1\) for all \(\omega\in\Omega\). In this case, the real valued function \(x(\omega)\), defined as \(x(\omega)=1\) for all \(\omega\in\Omega\), is a random fixed point of F. However, the real valued function \(y(\omega)\) defined as

$$ y(\omega)= \textstyle\begin{cases} -1, &\omega\notin E,\\ 1, &\omega\in E, \end{cases} $$
(2.16)

is a wide sense solution of the fixed point equation \(F(\omega,x(\omega ))=x(\omega)\) without being a random fixed point of F, since y is not measurable (E being non-measurable).

The following lemmas play important roles in the proofs of the main theorems.

Lemma 2.1

[35]

Let \(\{\gamma_{n}\}\) and \(\{\lambda _{n}\} \) be two sequences of nonnegative real numbers and \(\{\sigma _{n}\}\) be a sequence of positive numbers satisfying the inequality:

$$\lambda_{n+1}\leq\lambda_{n}-\sigma_{n}\varphi ( \lambda_{n})+\gamma_{n}, $$

for each \(n\geq1\), where \(\varphi:\mathbb{R}^{+}\to\mathbb{R}^{+}\) is a continuous and strictly increasing function with \(\varphi(0)=0\). If

$$\sum_{n=1}^{\infty }\sigma_{n}=\infty \quad \textit{and} \quad \lim_{n\to\infty}\frac {\gamma_{n}}{\sigma_{n}}=0, $$

then \(\{\lambda_{n}\}\) converges to 0 as \(n\rightarrow\infty\).

Lemma 2.2

[2]

Let \(\{a_{n}\}_{n=0}^{\infty}\) and \(\{b_{n}\}_{n=0}^{\infty}\) be sequences of nonnegative numbers and \(0\leq q<1\) such that, for all \(n\geq0\),

$$a_{n+1}\leq qa_{n}+b_{n}. $$

Then we have the following statements:

  (i)

    If \(\lim_{n\to\infty}b_{n}=0\), then \(\lim_{n\to\infty}a_{n}=0\).

  (ii)

    If \(\sum_{n=0}^{\infty}b_{n}<\infty\), then \(\sum_{n=0}^{\infty}a_{n}<\infty\).
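
For completeness, here is a brief sketch of the standard argument behind part (ii): iterating the inequality gives \(a_{n+1}\leq q^{n+1}a_{0}+\sum_{k=0}^{n}q^{n-k}b_{k}\), and summing over n and interchanging the order of summation yields

$$\sum_{n=0}^{\infty}a_{n+1}\leq\frac{q}{1-q}a_{0}+\frac{1}{1-q}\sum_{k=0}^{\infty}b_{k}< \infty. $$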

3 Convergence theorems

Now we are in a position to prove the convergence theorems of the iterative schemes. First, we give the convergence theorem for the random Picard-Mann hybrid iterative process.

Theorem 3.1

Let \((E,\Vert\cdot\Vert)\) be a separable Banach space and \(T:\Omega\times E\rightarrow E\) be a continuous generalized random φ-contractive type operator with a random fixed point \(x^{\ast}(\omega)\in F(T)\) satisfying (2.11). If \(\sum_{n=1}^{\infty}\alpha _{n}=\infty\), then the sequence \(\{x_{n}(\omega)\}_{n=1}^{\infty}\) of functions defined by the random Picard-Mann hybrid iterative process (2.15) converges strongly to \(x^{\ast}(\omega)\) almost surely.

Proof

Let \(A=\{\omega\in\Omega: 0\leq\theta(\omega)<1\}\) and

$$C_{x^{\ast},y}= \bigl\{ \omega\in\Omega: \bigl\Vert x^{\ast}( \omega)-T(\omega,y)\bigr\Vert \leq \theta(\omega)\bigl\Vert x^{\ast}( \omega)-y(\omega)\bigr\Vert -\varphi\bigl(\bigl\Vert x^{\ast}(\omega )-y(\omega)\bigr\Vert \bigr) \bigr\} . $$

Let S be a countable dense subset of E (such a set exists since E is a subset of the separable Banach space X) and let \(s\in S\). We have to show that

$$ \bigcap_{x^{\ast},y\in E}(C_{x^{\ast},y}\cap A)=\bigcap _{x^{\ast},s\in S}(C_{x^{\ast},s}\cap A). $$
(3.1)

Let \(\omega\in\bigcap_{x^{\ast},s\in S}(C_{x^{\ast},s}\cap A)\). Then, by using (2.11) and the triangle inequality, we have

$$ \begin{aligned}[b] \bigl\Vert x^{\ast}(\omega)-T(\omega,y) \bigr\Vert \leq{}&\bigl\Vert x^{\ast}(\omega)-T(\omega,s)\bigr\Vert + \bigl\Vert T(\omega,s)-T(\omega,y)\bigr\Vert \\ \leq{}&\theta(\omega)\bigl\Vert x^{\ast}(\omega)-s(\omega)\bigr\Vert -\varphi\bigl(\bigl\Vert x^{\ast}(\omega )-s(\omega)\bigr\Vert \bigr) \\ &{} +\bigl\Vert T(\omega,s)-T(\omega,y)\bigr\Vert \\ \leq{}&\theta(\omega) \bigl[\bigl\Vert x^{\ast}(\omega)-y(\omega)\bigr\Vert +\bigl\Vert y(\omega )-s(\omega)\bigr\Vert \bigr] \\ &{}-\varphi\bigl(\bigl\Vert x^{\ast}(\omega)-s(\omega)\bigr\Vert \bigr)+ \bigl\Vert T(\omega,s)-T(\omega,y)\bigr\Vert .\end{aligned} $$
(3.2)

Now, for any \(\epsilon>0\), we can find a \(\delta(y)>0\) such that

$$\bigl\Vert T(\omega,s)-T(\omega,y)\bigr\Vert < \epsilon, $$

whenever \(\Vert s(\omega)-y(\omega)\Vert <\delta\), by the continuity of \(T(\omega,\cdot)\). Since S is dense in E, letting \(\Vert s(\omega)-y(\omega)\Vert \to0\) in (3.2) and using the continuity of φ, we obtain

$$ \bigl\Vert x^{\ast}(\omega)-T(\omega,y)\bigr\Vert \leq\theta(\omega) \bigl\Vert x^{\ast}(\omega )-y(\omega)\bigr\Vert -\varphi\bigl(\bigl\Vert x^{\ast}(\omega)-y(\omega)\bigr\Vert \bigr). $$
(3.3)

Hence, we have \(\omega\in\bigcap_{x^{\ast},y\in E}(C_{x^{\ast},y}\cap A)\). So we see that

$$ \bigcap_{x^{\ast},s\in S}(C_{x^{\ast},s}\cap A)\subset\bigcap _{x^{\ast},y\in E}(C_{x^{\ast},y}\cap A). $$
(3.4)

Clearly, we see that

$$ \bigcap_{x^{\ast},y\in E}(C_{x^{\ast},y}\cap A)\subset\bigcap _{x^{\ast},s\in S}(C_{x^{\ast},s}\cap A). $$
(3.5)

Hence, from (3.4) and (3.5) we have

$$ \bigcap_{x^{\ast},y\in E}(C_{x^{\ast},y}\cap A)= \bigcap _{x^{\ast},s\in S}(C_{x^{\ast},s}\cap A). $$
(3.6)

Let \(N=\bigcap_{x^{\ast},y\in E}(C_{x^{\ast},y}\cap A)\). Then \(\mu(N)=1\). Take \(\omega\in N\) and \(n\geq1\). Using (2.15), (3.3), and the fact that \(0\leq\theta(\omega)<1\), we have

$$\begin{aligned} \bigl\Vert x_{n+1}(\omega)-x^{\ast}( \omega)\bigr\Vert ={}&\bigl\Vert T\bigl(\omega,y_{n}(\omega) \bigr)-x^{\ast}(\omega)\bigr\Vert \\ ={}&\bigl\Vert x^{\ast}(\omega)-T\bigl(\omega,y_{n}(\omega) \bigr)\bigr\Vert \\ \leq{}&\theta(\omega)\bigl\Vert x^{\ast}(\omega)-y_{n}( \omega)\bigr\Vert -\varphi\bigl(\bigl\Vert x^{\ast}( \omega)-y_{n}(\omega)\bigr\Vert \bigr) \\ ={}&\theta(\omega)\bigl\Vert x^{\ast}(\omega)-\bigl[(1- \alpha_{n})x_{n}(\omega)+\alpha _{n}T\bigl( \omega,x_{n}(\omega)\bigr)\bigr]\bigr\Vert \\ &{}-\varphi\bigl(\bigl\Vert x^{\ast}(\omega)-y_{n}(\omega) \bigr\Vert \bigr) \\ \leq{}&\theta(\omega) (1-\alpha_{n})\bigl\Vert x^{\ast}( \omega)-x_{n}(\omega)\bigr\Vert \\ &{}+\theta(\omega)\alpha_{n}\bigl\Vert x^{\ast}(\omega)-T\bigl( \omega,x_{n}(\omega)\bigr)\bigr\Vert \\ \leq{}&(1-\alpha_{n})\bigl\Vert x^{\ast}( \omega)-x_{n}(\omega)\bigr\Vert +\alpha_{n}\theta(\omega ) \bigl\Vert x^{\ast}(\omega)- x_{n}(\omega)\bigr\Vert \\ &{}-\alpha_{n}\varphi\bigl(\bigl\Vert x^{\ast}( \omega)-x_{n}(\omega)\bigr\Vert \bigr) \\ \leq{}&(1-\alpha_{n})\bigl\Vert x^{\ast}( \omega)-x_{n}(\omega)\bigr\Vert +\alpha_{n}\bigl\Vert x^{\ast}(\omega )-x_{n}(\omega)\bigr\Vert \\ &{}-\alpha_{n}\varphi\bigl(\bigl\Vert x^{\ast}( \omega)-x_{n}(\omega)\bigr\Vert \bigr) \\ ={}&\bigl\Vert x^{\ast}(\omega)-x_{n}(\omega)\bigr\Vert - \alpha_{n}\varphi\bigl(\bigl\Vert x^{\ast}(\omega )-x_{n}(\omega)\bigr\Vert \bigr). \end{aligned}$$
(3.7)

Hence, applying Lemma 2.1 with \(\lambda_{n}=\Vert x_{n}(\omega)-x^{\ast}(\omega)\Vert \), \(\sigma_{n}=\alpha_{n}\), and \(\gamma_{n}=0\), and using the assumption \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\), we obtain

$$\lim_{n\to\infty}\bigl\Vert x_{n}(\omega)-x^{\ast}( \omega)\bigr\Vert =0. $$

Consequently, \(\{x_{n}(\omega)\}_{n=1}^{\infty}\) as defined by the iterative process (2.15) converges strongly to \(x^{\ast}(\omega)\) almost surely. This completes the proof. □

Next, we establish the convergence result for the random Mann iterative process (2.13). This result improves, generalizes, and unifies the results of Beg et al. [29] and several other well-known results in the literature.

Theorem 3.2

Let \((E,\Vert\cdot\Vert)\) be a separable Banach space and \(T:\Omega\times E\rightarrow E\) be a continuous generalized random φ-contractive type operator with a random fixed point \(x^{\ast}(\omega)\in F(T)\) satisfying (2.11). If \(\sum_{n=1}^{\infty}a_{n}=\infty\), then the sequence \(\{\xi_{n}(\omega)\}_{n=1}^{\infty}\) of functions defined by the random Mann iterative process (2.13) converges strongly to \(x^{\ast}(\omega)\) almost surely.

Proof

Using (2.13), the measure-one set N constructed in (3.1)-(3.6) of the proof of Theorem 3.1, and the condition \(0\leq\theta(\omega)<1\), we have, for \(\omega\in N\) and \(n\geq1\),

$$\begin{aligned} \bigl\Vert \xi_{n+1}( \omega)-x^{\ast}(\omega)\bigr\Vert ={}&\bigl\Vert (1-a_{n}) \xi_{n}(\omega )+a_{n}T\bigl(\omega,\xi_{n}(\omega) \bigr)-x^{\ast}(\omega)\bigr\Vert \\ \leq{}&(1-a_{n})\bigl\Vert \xi_{n}(\omega)-x^{\ast}( \omega)\bigr\Vert +a_{n}\bigl\Vert x^{\ast}(\omega )-T\bigl( \omega,\xi_{n}(\omega)\bigr)\bigr\Vert \\ \leq{}&(1-a_{n})\bigl\Vert \xi_{n}(\omega)-x^{\ast}( \omega)\bigr\Vert +a_{n}\bigl[\theta(\omega)\bigl\Vert x^{\ast}(\omega)-\xi_{n}(\omega)\bigr\Vert \\ &{}-\varphi\bigl(\bigl\Vert x^{\ast}(\omega)-\xi_{n}(\omega) \bigr\Vert \bigr)\bigr] \\ \leq{}&(1-a_{n})\bigl\Vert \xi_{n}(\omega)-x^{\ast}( \omega)\bigr\Vert +a_{n}\bigl\Vert x^{\ast}(\omega)-\xi _{n}(\omega)\bigr\Vert \\ &{}-a_{n}\varphi\bigl(\bigl\Vert x^{\ast}(\omega)- \xi_{n}(\omega)\bigr\Vert \bigr) \\ ={}&\bigl\Vert \xi_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert -a_{n}\varphi\bigl(\bigl\Vert \xi_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert \bigr). \end{aligned}$$
(3.8)

Hence, applying Lemma 2.1 with \(\lambda_{n}=\Vert \xi_{n}(\omega)-x^{\ast}(\omega)\Vert \), \(\sigma_{n}=a_{n}\), and \(\gamma_{n}=0\), and using the assumption \(\sum_{n=1}^{\infty}a_{n}=\infty\), we obtain

$$\lim_{n\to\infty}\bigl\Vert \xi_{n}(\omega)-x^{\ast}( \omega)\bigr\Vert =0. $$

Consequently, the sequence \(\{\xi_{n}(\omega)\}_{n=0}^{\infty}\) as defined by the iterative process (2.13) converges strongly to \(x^{\ast}(\omega)\) almost surely. This completes the proof. □

4 Summable almost stability

Next, we prove some stability results for the random Picard-Mann hybrid iterative process (2.15).

Theorem 4.1

Let \((E,\Vert\cdot\Vert)\) be a separable Banach space and \(T:\Omega\times E\rightarrow E\) be a continuous generalized random φ-contractive type operator with \(F(T)\neq\emptyset\). Let \(x^{\ast}(\omega)\) be a random fixed point of T. Let \(\{x_{n}\} _{n=0}^{\infty}\) be the sequence of functions defined by the random Picard-Mann hybrid iterative process (2.15) converging strongly to \(x^{\ast}(\omega)\) almost surely, where \(\alpha_{n}\in(0,1)\) and \(\sum_{n=1}^{\infty}\alpha_{n}=\infty\). Then \(\{x_{n}\}_{n=0}^{\infty}\) is summably almost stable with respect to T a.s.

Proof

Let \(\{k_{n}(\omega)\}_{n=0}^{\infty}\) be any sequence of random variables in E and

$$ \varepsilon_{n}(\omega)=\bigl\Vert k_{n+1}(\omega)-f \bigl(T,k_{n}(\omega)\bigr)\bigr\Vert =\bigl\Vert k_{n+1}(\omega)-T\bigl(\omega,l_{n}(\omega)\bigr)\bigr\Vert ,\quad \text{where } l_{n}(\omega)=(1-\alpha_{n})k_{n}(\omega)+\alpha_{n}T\bigl(\omega,k_{n}(\omega)\bigr). $$
(4.1)

Then we want to show that the implication (2.6) holds. Now, using the proof of Theorem 3.1, (2.11), and (2.15), and recalling the condition that \(0\leq\theta(\omega)<1\), we have

$$\begin{aligned} \bigl\Vert k_{n+1}(\omega)-x^{\ast}(\omega)\bigr\Vert \leq{}&\bigl\Vert k_{n+1}(\omega)-T\bigl(\omega,l_{n}(\omega)\bigr)\bigr\Vert +\bigl\Vert T\bigl(\omega,l_{n}(\omega)\bigr)-x^{\ast}(\omega)\bigr\Vert \\ ={}&\varepsilon_{n}(\omega)+\bigl\Vert x^{\ast}(\omega)-T\bigl(\omega,l_{n}(\omega)\bigr)\bigr\Vert \\ \leq{}&\varepsilon_{n}(\omega)+\theta(\omega)\bigl\Vert x^{\ast}(\omega)-l_{n}(\omega)\bigr\Vert -\varphi\bigl(\bigl\Vert x^{\ast}(\omega)-l_{n}(\omega)\bigr\Vert \bigr) \\ \leq{}&\varepsilon_{n}(\omega)+\theta(\omega)\bigl\Vert (1-\alpha_{n})k_{n}(\omega)+\alpha_{n}T\bigl(\omega,k_{n}(\omega)\bigr)-x^{\ast}(\omega)\bigr\Vert \\ \leq{}&\varepsilon_{n}(\omega)+\theta(\omega) (1-\alpha_{n})\bigl\Vert k_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert \\ &{}+\theta(\omega)\alpha_{n}\bigl\Vert T\bigl(\omega,k_{n}(\omega)\bigr)-x^{\ast}(\omega)\bigr\Vert \\ \leq{}&\varepsilon_{n}(\omega)+\theta(\omega) (1-\alpha_{n})\bigl\Vert k_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert +\alpha_{n}\theta(\omega)\bigl\Vert x^{\ast}(\omega)-k_{n}(\omega)\bigr\Vert \\ &{}-\alpha_{n}\varphi\bigl(\bigl\Vert x^{\ast}(\omega)-k_{n}(\omega)\bigr\Vert \bigr) \\ \leq{}&\varepsilon_{n}(\omega)+\theta(\omega)\bigl[(1-\alpha_{n})\bigl\Vert k_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert +\alpha_{n}\bigl\Vert x^{\ast}(\omega)-k_{n}(\omega)\bigr\Vert \bigr] \\ ={}&\varepsilon_{n}(\omega)+\theta(\omega)\bigl\Vert k_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert . \end{aligned}$$
(4.2)

Now, if \(\sum_{n=0}^{\infty}\varepsilon_{n}(\omega)<\infty\), then, applying Lemma 2.2(ii) to (4.2) with the roles of \(a_{n}\), \(b_{n}\), and q played by \(\Vert k_{n}(\omega)-x^{\ast}(\omega)\Vert \), \(\varepsilon_{n}(\omega)\), and \(\theta(\omega)<1\), respectively, we obtain

$$\sum_{n=0}^{\infty}\bigl\Vert k_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert < \infty, $$

which means that the random Picard-Mann hybrid iteration process (2.15) is summably almost stable with respect to T almost surely. This completes the proof. □

Next, we prove some summable almost T-stability results for the random Mann iterative process (2.13).

Theorem 4.2

Let \((E,\Vert\cdot\Vert)\) be a separable Banach space, \(T:\Omega\times E\rightarrow E\) be a continuous generalized random φ-contractive type operator with \(F(T)\neq\emptyset\), and \(x^{\ast}(\omega)\) be a random fixed point of T. Let \(\{\xi_{n}\} _{n=0}^{\infty}\) be the sequence of functions defined by the random Mann iterative process (2.13) converging strongly to \(x^{\ast}(\omega)\) almost surely, where \(a_{n}\in(0,1)\) and \(\sum_{n=1}^{\infty}a_{n}=\infty\). Then \(\{\xi_{n}\}_{n=0}^{\infty}\) is summably almost stable with respect to T a.s.

Proof

Let \(\{y_{n}(\omega)\}_{n=0}^{\infty}\) be a sequence of random variables in E and

$$ \varepsilon_{n}(\omega)=\bigl\Vert y_{n+1}(\omega)-f \bigl(T,y_{n}(\omega)\bigr)\bigr\Vert =\bigl\Vert y_{n+1}( \omega)-(1-a_{n})y_{n}(\omega)-a_{n}T\bigl( \omega,y_{n}(\omega)\bigr)\bigr\Vert . $$
(4.3)

Then we want to show that the implication (2.6) holds. Now, using the proof of Theorem 3.1, (2.11), and (2.13), and recalling the condition \(0\leq\theta(\omega)<1\), we have

$$\begin{aligned} \bigl\Vert y_{n+1}( \omega)-x^{\ast}(\omega)\bigr\Vert ={}&\bigl\Vert y_{n+1}( \omega)-(1-a_{n})y_{n}(\omega )-a_{n}T\bigl( \omega,y_{n}(\omega)\bigr) \\ &{}+\bigl[(1-a_{n})y_{n}(\omega)+a_{n}T\bigl( \omega,y_{n}(\omega)\bigr)\bigr]-x^{\ast}(\omega)\bigr\Vert \\ \leq{}&\bigl\Vert y_{n+1}(\omega)-(1-a_{n})y_{n}( \omega)-a_{n}T\bigl(\omega,y_{n}(\omega)\bigr)\bigr\Vert \\ &{}+\bigl\Vert (1-a_{n})y_{n}(\omega)+a_{n}T \bigl(\omega,y_{n}(\omega)\bigr)-x^{\ast}(\omega)\bigr\Vert \\ ={}&\varepsilon_{n}(\omega)+\bigl\Vert (1-a_{n})y_{n}( \omega)+a_{n}T\bigl(\omega,y_{n}(\omega )\bigr)-x^{\ast}( \omega)\bigr\Vert \\ \leq{}&\varepsilon_{n}(\omega)+(1-a_{n})\bigl\Vert y_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert +a_{n}\bigl\Vert T\bigl(\omega,y_{n}(\omega)\bigr)-x^{\ast}(\omega)\bigr\Vert \\ ={}&\varepsilon_{n}(\omega)+(1-a_{n})\bigl\Vert y_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert +a_{n}\bigl\Vert x^{\ast}(\omega)-T\bigl(\omega,y_{n}(\omega)\bigr)\bigr\Vert \\ \leq{}&\varepsilon_{n}(\omega)+(1-a_{n})\bigl\Vert y_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert +a_{n} \bigl[\theta(\omega)\bigl\Vert x^{\ast}(\omega)-y_{n}(\omega) \bigr\Vert \\ &{}-\varphi\bigl(\bigl\Vert x^{\ast}(\omega)-y_{n}(\omega) \bigr\Vert \bigr)\bigr] \\ \leq{}&\varepsilon_{n}(\omega)+(1-a_{n})\bigl\Vert y_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert +a_{n} \theta(\omega)\bigl\Vert y_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert \\ ={}&\varepsilon_{n}(\omega)+\bigl(1-a_{n}+a_{n} \theta(\omega)\bigr)\bigl\Vert y_{n}(\omega)-x^{\ast}(\omega) \bigr\Vert \\ ={}&\varepsilon_{n}(\omega)+\bigl[1-\bigl(1-\theta(\omega) \bigr)a_{n}\bigr]\bigl\Vert y_{n}(\omega)-x^{\ast}( \omega)\bigr\Vert . \end{aligned}$$
(4.4)

Now, if \(\sum_{n=0}^{\infty}\varepsilon_{n}(\omega)<\infty\), then, since \(0\leq1-(1-\theta(\omega))a_{n}<1\) for each n, it follows from (4.4) and Lemma 2.2 that

$$\sum_{n=0}^{\infty}\bigl\Vert y_{n}(\omega)-x^{\ast}(\omega)\bigr\Vert < \infty, $$

which means that the random Mann iteration process (2.13) is summably almost stable with respect to T almost surely. This completes the proof. □