Abstract
In this paper, we establish strong and \(\Delta\)-convergence theorems for a relatively new iteration process generated by generalized nonexpansive mappings in uniformly convex hyperbolic spaces. The theorems presented in this paper generalize the corresponding theorems for uniformly convex normed spaces of Kadioglu and Yildirim (Approximating fixed points of nonexpansive mappings by faster iteration process, arXiv:1402.6530v1 [math.FA], 2014), for CAT(0) spaces of Abbas et al. (J Inequal Appl 2014:212, 2014), and many others in this direction.
Introduction
In this paper, \(\mathbb {N}\) denotes the set of all positive integers, while F(T) denotes the set of all fixed points of T, i.e., \(F(T)= \{ x \in C : Tx = x\}\).
Let C be a nonempty subset of a normed space X. A mapping \(T:C \rightarrow C\) is said to be

(i)
nonexpansive, if \(\Vert Tx - Ty \Vert \le \Vert x - y \Vert\) for all \(x, y \in C\),

(ii)
quasi-nonexpansive, if \(\Vert Tx - p \Vert \le \Vert x - p\Vert\) for all \(x\in C\) and \(p\in F(T)\).
Many nonlinear equations are naturally formulated as fixed point problems,
where T, the fixed point mapping, may be nonlinear. A solution \(x^{*}\) of the problem (1.1) is called a fixed point of the mapping T. Consider a fixed point iteration, which is given by
The iterative method (1.2) is also known as the Picard iteration or the method of successive substitution. By the Banach contraction mapping theorem, the Picard iteration converges to the unique fixed point of a contraction T, but it may fail to approximate a fixed point of a nonexpansive mapping, even when the existence of a fixed point of T is guaranteed.
Example 1.1
Consider the self-mapping T on [0, 1] defined by \(Tx = 1-x\) for \(0 \le x\le 1\). Then T is nonexpansive with unique fixed point \(x= \frac{1}{2}\). If we choose a starting value \(x_1=a \ne \frac{1}{2}\), then successive iterations of T yield the sequence \(\{1-a, a, 1-a, \ldots \}\).
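The oscillation in Example 1.1 is easy to reproduce numerically. The following sketch (with an arbitrary starting value chosen for illustration) shows the Picard iterates bouncing between \(a\) and \(1-a\):

```python
def T(x):
    # Nonexpansive self-map of [0, 1]; unique fixed point x = 1/2
    return 1.0 - x

# Picard iteration from a starting value a != 1/2
a = 0.25  # illustrative choice, exactly representable in binary
orbit = [a]
for _ in range(6):
    orbit.append(T(orbit[-1]))

print(orbit)  # [0.25, 0.75, 0.25, 0.75, 0.25, 0.75, 0.25] -- never nears 1/2
```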
Thus, when a fixed point of a nonexpansive mapping exists, other approximation techniques are needed to approximate it. In the last fifty years, numerous researchers have been attracted to this direction and have developed iterative processes for approximating fixed points, not only of nonexpansive mappings but also of some wider classes of nonexpansive-type mappings (see, e.g., Agarwal et al. [3], Ishikawa [9], Krasnosel’skiǐ [12], Mann [18], Noor [19], Schaefer [23]), and have compared which of them converges faster.
Sahu [21] introduced the normal S-iteration process, whose rate of convergence is similar to that of the Picard iteration process and faster than those of other fixed point iteration processes (see [21, Theorem 3.6]).
\(\mathrm{(NS)}\) The normal S-iteration process (see Sahu [21]) is defined as follows:
Let C be a convex subset of a normed space X and T a nonlinear mapping of C into itself. For each \(x_{1}\in C\), the sequence \(\{ x_{n}\}\) in C is defined by
where \(\{\alpha _{n}\}\) is a real sequence in (0, 1).
This raises the following natural question.
Question 1.1
Does there exist an iteration process whose rate of convergence is faster than that of the normal S-iteration process for contraction mappings?
This question was resolved in the affirmative by Abbas et al. [2], Kadioglu and Yildirim [11, Theorem 5], and Thakur et al. [26, Theorem 2.3], who developed new iteration processes that approximate fixed points faster than the normal S-iteration process. The following iteration process was developed by Kadioglu and Yildirim [11] for approximating fixed points of nonexpansive mappings; they established some strong and weak convergence theorems in uniformly convex Banach spaces.
\(\mathrm{(PNS)}\) The Picard normal S-iteration process (see Kadioglu and Yildirim [11]) is defined as follows: with C, X and T as in (NS), for each \(x_{1}\in C\), the sequence \(\{ x_{n} \}\) in C is defined by
where \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) are real sequences in (0, 1).
Remark 1.1
If \(\beta _{n}=0\), the process (1.4) reduces to the normal S-iteration process (1.3); if \(\alpha _{n}= \beta _{n}=0\), it reduces to the Picard iteration process (1.2).
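Since the displayed recursions (1.2)–(1.4) are not reproduced above, the sketch below assumes their standard forms from [21] and [11]: Picard \(x_{n+1}=Tx_n\); normal S-iteration \(x_{n+1}=T((1-\alpha_n)x_n+\alpha_n Tx_n)\); and PNS \(z_n=(1-\beta_n)x_n+\beta_n Tx_n\), \(y_n=(1-\alpha_n)z_n+\alpha_n Tz_n\), \(x_{n+1}=Ty_n\). For a simple contraction, the error after a fixed number of steps illustrates the speed comparison discussed above:

```python
def T(x):
    # Contraction on R with Lipschitz constant 1/2 and fixed point x* = 2
    return 0.5 * x + 1.0

def picard(x, n):
    # x_{n+1} = T x_n  (assumed form of (1.2))
    for _ in range(n):
        x = T(x)
    return x

def normal_s(x, n, a=0.5):
    # x_{n+1} = T((1-a) x_n + a T x_n)  (assumed form of (1.3))
    for _ in range(n):
        x = T((1 - a) * x + a * T(x))
    return x

def pns(x, n, a=0.5, b=0.5):
    # z = (1-b) x + b T x,  y = (1-a) z + a T z,  x_{n+1} = T y
    # (assumed form of (1.4))
    for _ in range(n):
        z = (1 - b) * x + b * T(x)
        y = (1 - a) * z + a * T(z)
        x = T(y)
    return x

x0, n = 10.0, 8
errs = {name: abs(f(x0, n) - 2.0)
        for name, f in [("Picard", picard), ("NS", normal_s), ("PNS", pns)]}
print(errs)  # error(PNS) < error(NS) < error(Picard)
```

For this map each scheme contracts the error by a constant factor per step (1/2 for Picard, 3/8 for NS, 9/32 for PNS with the parameters above), which is the pattern the cited rate-of-convergence results describe.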
The purpose of this paper is to establish strong and \(\Delta\)-convergence theorems for a new iteration process generated by generalized nonexpansive mappings in uniformly convex hyperbolic spaces. The theorems presented in this paper generalize the corresponding theorems for uniformly convex normed spaces of Kadioglu and Yildirim [11], for CAT(0) spaces of Abbas et al. [1], and many others in this direction (see Itoh [8], Kim et al. [14], Sahu [21], etc.).
Preliminaries
Let (X, d) be a metric space and C be a nonempty subset of X. Suzuki [24] introduced a class of single-valued mappings, called Suzuki generalized nonexpansive mappings (or mappings satisfying condition (C)), defined by the condition
which is weaker than nonexpansiveness and stronger than quasi-nonexpansiveness. The following examples make this fact obvious.
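The display defining condition (C) is missing above; for reference, condition (C) as introduced by Suzuki [24] reads: for all \(x, y \in C\),

```latex
\frac{1}{2}\, d(x, Tx) \le d(x, y) \;\Longrightarrow\; d(Tx, Ty) \le d(x, y).
```

Every nonexpansive mapping satisfies condition (C), and every mapping satisfying condition (C) with a fixed point is quasi-nonexpansive, which is the pair of inclusions asserted above.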
Example 2.1
[24] Define a mapping T on [0, 3] by
Then T satisfies condition (C), but T is not nonexpansive.
Example 2.2
[24] Define a mapping T on [0, 3] by
Then \(F(T)= \{0\} \ne \varnothing\) and T is quasi-nonexpansive, but T does not satisfy condition (C).
In [10], Karapinar and Tas introduced some new definitions which are modifications of Suzuki’s generalized nonexpansive mappings (or condition (C)), as follows.
Definition 2.1
Let C be a nonempty subset of a metric space X. The mapping \(T:C \rightarrow C\) is said to be

(i)
Suzuki–Ciric mapping (SCC) [10] if
$$\begin{aligned} & \frac{1}{2} d(T x, T y)\le d( x, y) \Longrightarrow d( Tx , Ty) \le M(x, y)\\ & \text {where} ~~ M(x, y)= \max \{d(x, y) , d( x, Tx) , d( y, Ty), d( x, Ty), d( y, Tx) \} \end{aligned}$$for all \(x, y \in C;\)

(ii)
Suzuki–KC mapping (SKC) if
$$\begin{aligned} \frac{1}{2} d(T x, T y)\le & {} d( x, y)\; \Longrightarrow \; d( Tx , Ty) \le N(x, y) \\ \text {where}~~ N( x, y)= & {} \max \bigg \{ d(x, y) ,\frac{ d( x, Tx) + d( y, Ty)}{2}, \frac{d( x, Ty)+ d( y, Tx)}{2} \bigg \} \end{aligned}$$for all \(x, y \in C;\)

(iii)
Kannan–Suzuki mapping (KSC) if
$$\begin{aligned} \frac{1}{2} d(T x, T y)\le & {} d( x, y) \; \Longrightarrow \; d( Tx , Ty) \le \frac{d( x, Tx) + d( y, Ty)}{2} \end{aligned}$$for all \(x, y \in C;\)

(iv)
Chatterjea–Suzuki mappings (CSC) if
$$\begin{aligned} \frac{1}{2} d(T x, T y)\le & {} d( x, y) \; \Longrightarrow \; d( Tx , Ty) \le \frac{d( y, Tx) + d( x, Ty)}{2} \end{aligned}$$for all \(x, y \in C;\)
Theorem 2.1
[10] Let T be a mapping on a closed subset C of a metric space X satisfying the condition SKC. Then \(d( x, Ty) \le 5 d( Tx, x) + d( x, y)\) holds for all \(x, y \in C\).
Remark 2.1
Theorem 2.1 holds if one replaces condition SKC by one of the conditions KSC, SCC, and CSC.
Recently, García-Falset et al. [7] introduced two generalizations of nonexpansive mappings, which in turn include the Suzuki generalized nonexpansive mappings of [24].
Definition 2.2
Let T be a mapping defined on a subset C of a metric space X and \(\mu \ge 1\). Then T is said to satisfy condition \((E_\mu )\) if, for all \(x, y \in C\),
Often, T is said to satisfy condition (E) whenever T satisfies condition \((E_\mu )\) for some \(\mu \ge 1\).
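The display defining condition \((E_\mu)\) is missing above; in the formulation of García-Falset et al. [7] it reads: for all \(x, y \in C\),

```latex
d(x, Ty) \le \mu\, d(x, Tx) + d(x, y).
```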
Remark 2.2
If T satisfies one of the conditions SKC, KSC, SCC, and CSC, then T satisfies condition \(E_\mu\) for \(\mu =5\).
Definition 2.3
Let T be a mapping defined on a subset C of a metric space X and \(\lambda \in (0, 1)\). Then T is said to satisfy the condition \((C_\lambda )\) if for all \(x, y \in C\)
For \(0< \lambda _1< \lambda _2 < 1\), the condition \((C_{\lambda _1})\) implies the condition \((C_{\lambda _2})\).
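The display defining condition \((C_\lambda)\) is missing above; in the formulation of [7] it reads: for all \(x, y \in C\),

```latex
\lambda\, d(x, Tx) \le d(x, y) \;\Longrightarrow\; d(Tx, Ty) \le d(x, y).
```

In particular, \((C_{1/2})\) is exactly condition (C); a smaller \(\lambda\) weakens the premise and hence strengthens the condition, which is why \((C_{\lambda_1})\) implies \((C_{\lambda_2})\) for \(\lambda_1 < \lambda_2\).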
The following example shows that the class of mappings satisfying conditions (E) and \((C_\lambda )\) for some \(\lambda \in (0, 1)\) is larger than the class of mappings satisfying the condition (C).
Example 2.3
[7] For a given \(\lambda \in (0, 1)\), define a mapping T on [0, 1] by
The mapping T satisfies the condition \((C_\lambda )\) but it fails the condition \((C_{\lambda _1})\), whenever \(0< \lambda _1 < \lambda\). Moreover, T satisfies the condition \((E_\mu )\) for \(\mu = \frac{2 + \lambda }{2}\).
Throughout this paper, we work in the setting of hyperbolic spaces introduced by Kohlenbach [15].
A hyperbolic space (X, d, W) is a metric space (X, d) together with a convexity mapping \(W: X^{2} \times [0,1] \rightarrow X\) satisfying

\((W_{1})\) \(d( u, W(x, y, \alpha )) \le \alpha d( u, x) + ( 1-\alpha ) d( u, y);\)

\((W_{2})\) \(d(W(x, y, \alpha ), W(x, y,\beta )) = \vert \alpha - \beta \vert d( x, y);\)

\((W_{3})\) \(W(x, y, \alpha ) = W( y, x, 1-\alpha );\)

\((W_{4})\) \(d( W(x, z, \alpha ), W( y, w, \alpha ))\le (1-\alpha ) d( x, y) + \alpha d( z, w),\)

for all \(x, y, z, w \in X\) and \(\alpha , \beta \in [0,1]\).
A triple (X, d, W) satisfying only \((W_{1})\) is a convex metric space in the sense of Takahashi [25]. The concept of hyperbolic space in [15] is more restrictive than the hyperbolic type introduced by Goebel and Kirk [5], since \((W_{1})\) and \((W_{2})\) together are equivalent to (X, d, W) being a space of hyperbolic type in [5]; but it is slightly more general than the hyperbolic space defined by Reich and Shafrir [20] (see [15]). The class of metric spaces in [15] covers all normed linear spaces, \(\mathbb {R}\)-trees in the sense of Tits, the Hilbert ball with the hyperbolic metric (see [6]), Cartesian products of Hilbert balls, Hadamard manifolds (see [20]), and CAT(0) spaces in the sense of Gromov (see [4]). A thorough discussion of hyperbolic spaces and a detailed treatment of examples can be found in [15] (see also [5, 6, 20]).
If \(x, y \in X\) and \(\lambda \in [0,1],\) then we use the notation \((1-\lambda )x \oplus \lambda y\) for \(W(x, y, \lambda )\). The following holds even in the more general setting of convex metric spaces [25]: for all \(x, y\in X\) and \(\lambda \in [0,1],\)
As a consequence,
and
A hyperbolic space (X, d, W) is uniformly convex [16] if for any \(r > 0\) and \(\varepsilon \in (0, 2],\) there exists \(\delta \in (0, 1]\) such that for all \(a, x, y \in X,\)
provided \(d(x, a) \le r,~ d(y, a) \le r\) and \(d(x, y) \ge \varepsilon r.\)
A mapping \(\eta :(0, \infty ) \times (0, 2] \rightarrow (0,1]\) providing such a \(\delta = \eta (r, \varepsilon )\) for given \(r>0\) and \(\varepsilon \in (0, 2]\) is called a modulus of uniform convexity. We call the function \(\eta\) monotone if it decreases with r (for a fixed \(\varepsilon\)), that is, \(\eta (r_{2} , \varepsilon ) \le \eta (r_{1}, \varepsilon )\) for all \(r_{2} \ge r_{1} >0.\)
In [16], Leuştean proved that CAT(0) spaces are uniformly convex hyperbolic spaces with modulus of uniform convexity \(\eta (r, \varepsilon ) = \frac{\varepsilon ^{2}}{8}\), quadratic in \(\varepsilon\). Thus, the class of uniformly convex hyperbolic spaces is a natural generalization of both uniformly convex Banach spaces and CAT(0) spaces.
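The uniform convexity inequality itself is not reproduced above; in the standard formulation (see [16]), the requirement on \(\delta\) is that, under the stated constraints \(d(x, a) \le r\), \(d(y, a) \le r\) and \(d(x, y) \ge \varepsilon r\),

```latex
d\!\left( \tfrac{1}{2}x \oplus \tfrac{1}{2}y,\; a \right) \le (1 - \delta)\, r.
```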
Now, we give the concept of \(\Delta\)convergence and some of its basic properties.
Let C be a nonempty subset of a metric space (X, d) and \(\{x_{n}\}\) be any bounded sequence in X, while diam(C) denotes the diameter of C. Consider the continuous functional \(r_{a}(\cdot , \{x_{n}\}): X \rightarrow \mathbb {R^{+}}\) defined by
The infimum of \(r_{a} (\cdot , \{x_{n}\})\) over C is said to be the asymptotic radius of \(\{x_{n}\}\) with respect to C and is denoted by \(r_{a}(C, \{x_{n}\})\).
A point \(z \in C\) is said to be an asymptotic center of the sequence \(\{x_{n}\}\) with respect to C if
The set of all asymptotic centers of \(\{x_{n}\}\) with respect to C is denoted by \(AC(C, \{x_{n}\})\). This set may be empty, a singleton, or may contain infinitely many points.
If the asymptotic radius and the asymptotic center are taken with respect to X, then they are simply denoted by \(r_{a}( X, \{x_{n}\}) = r_{a}( \{x_{n}\})\) and \(AC(X, \{x_{n}\})= AC(\{x_{n}\}),\) respectively. We know that, for \(x \in X\), \(r_{a}( x, \{x_{n}\}) = 0\) if and only if \(\lim _{n \rightarrow \infty } x_{n} = x\). It is known that every bounded sequence has a unique asymptotic center with respect to each closed convex subset of a uniformly convex Banach space, and even of a CAT(0) space.
The following lemma is due to Leuştean [17] and ensures that this property also holds in complete uniformly convex hyperbolic spaces.
Lemma 2.1
[17, Proposition 3.3] Let (X, d, W) be a complete uniformly convex hyperbolic space with monotone modulus of uniform convexity \(\eta\). Then every bounded sequence \(\{x_{n}\}\) in X has a unique asymptotic center with respect to any nonempty closed convex subset C of X.
Recall that a sequence \(\{x_{n}\}\) in X is said to \(\Delta\)-converge to \(x \in X\) if x is the unique asymptotic center of \(\{u_{n}\}\) for every subsequence \(\{u_{n}\}\) of \(\{x_{n}\}\). In this case, we write \(\Delta\)-\(\lim \nolimits _{n} x_{n} = x\) and call x the \(\Delta\)-limit of \(\{x_{n}\}\).
Lemma 2.2
[13] Let (X, d, W) be a uniformly convex hyperbolic space with monotone modulus of uniform convexity \(\eta\). Let \(x \in X\) and \(\{t_{n}\}\) be a sequence in [a, b] for some \(a, b \in (0,1)\). If \(\{x_{n}\}\) and \(\{y_{n}\}\) are sequences in X such that
for some \(c \ge 0,\) then \(\displaystyle \lim \nolimits _{ n \rightarrow \infty } d( x_{n}, y_{n} ) =0.\)
Lemma 2.3
Let (X, d) be a complete uniformly convex hyperbolic space with monotone modulus of uniform convexity \(\eta\), C be a nonempty closed convex subset of X, and \(T : C \rightarrow C\) be a mapping which satisfies conditions \((C_\lambda )\) (for some \(\lambda \in (0, 1)\)) and (E) on C. Suppose \(\{x_{n}\}\) is a bounded sequence in C such that
then T has a fixed point.
Proof
Since \(\{x_{n}\}\) is a bounded sequence in X, by Lemma 2.1 it has a unique asymptotic center in C, i.e., \(AC(C, \{x_{n}\}) = \{x\}\) is a singleton, and by hypothesis \(\lim _{ n \rightarrow \infty } d( x_{n} , Tx_{n})=0\). Since T satisfies the condition \((E_\mu )\) on C, there exists \(\mu \ge 1\) such that
Taking \(\limsup\) as \(n \rightarrow \infty\) on both sides, we have
By the uniqueness of the asymptotic center, \(Tx = x\), so x is a fixed point of T. Hence, F(T) is nonempty. \(\square\)
Main results
We begin with the definition of Fejér monotone sequences:
Definition 3.1
Let C be a nonempty subset of a hyperbolic space X and \(\{x_n\}\) be a sequence in X. Then \(\{x_n\}\) is said to be Fejér monotone with respect to C if, for all \(x \in C\) and \(n \in \mathbb {N}\),
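The defining inequality, missing from the display above, is the standard one:

```latex
d(x_{n+1}, x) \le d(x_n, x).
```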
Example 3.1
Let C be a nonempty subset of X, and let \(T :C \rightarrow C\) be a quasi-nonexpansive (in particular, nonexpansive) mapping such that \(F(T) \ne \varnothing\) and \(x_0 \in C\). Then the sequence \(\{x_n\}\) of Picard iterates is Fejér monotone with respect to F(T).
We can easily prove the following proposition.
Proposition 3.1
Let \(\{x_n\}\) be a sequence in X and C be a nonempty subset of X. If \(\{x_n\}\) is Fejér monotone with respect to C, then the following hold:

(1)
\(\{x_n\}\) is bounded.

(2)
The sequence \(\{d(x_n, p)\}\) is decreasing and convergent for all \(p \in C\).
We now define the Picard normal S-iteration process (PNS) in hyperbolic spaces:
\(\mathrm{(PNS)}\) Picard normal S-iteration process: Let C be a nonempty closed convex subset of a hyperbolic space X and \(T :C \rightarrow C\) be a mapping which satisfies the condition \((C_\lambda )\) for some \(\lambda \in (0, 1)\). For any \(x_1 \in C\), the sequence \(\{x_{n}\}\) is defined by
where \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) are in \([\epsilon , 1-\epsilon ]\) for all \(n \in \mathbb {N}\) and some \(\epsilon \in (0,1)\).
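As an illustration, take the real line with \(d(x, y) = |x - y|\) and the convexity mapping \(W(x, y, \alpha) = (1-\alpha)x + \alpha y\), which is a (trivially) uniformly convex hyperbolic space. The sketch below assumes (3.1) is the hyperbolic analogue of (1.4), written with \(\oplus\) in place of convex combinations, and checks numerically that the iterates are Fejér monotone with respect to F(T), as Lemma 3.1 asserts:

```python
def W(x, y, alpha):
    # Convexity mapping of the hyperbolic space (R, |.|): W(x, y, a) = (1-a)x + a y
    return (1 - alpha) * x + alpha * y

def pns_step(T, x, alpha, beta):
    # One step of (3.1), assuming the hyperbolic analogue of (1.4):
    # z_n = W(x_n, T x_n, beta), y_n = W(z_n, T z_n, alpha), x_{n+1} = T y_n
    z = W(x, T(x), beta)
    y = W(z, T(z), alpha)
    return T(y)

T = lambda x: x / 2.0        # nonexpansive on R, so it satisfies (C_lambda); F(T) = {0}
p, x = 0.0, 1.0
dists = [abs(x - p)]
for _ in range(10):
    x = pns_step(T, x, alpha=0.3, beta=0.7)
    dists.append(abs(x - p))

# Fejer monotonicity with respect to F(T) (Lemma 3.1): distances never increase
assert all(dists[i + 1] <= dists[i] for i in range(len(dists) - 1))
print(dists[-1])
```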
Lemma 3.1
Let C be a nonempty closed convex subset of a hyperbolic space X and \(T :C \rightarrow C\) be a mapping which satisfies the condition \((C_\lambda )\) for some \(\lambda \in (0, 1)\). If \(\{x_n\}\) is a sequence defined by (3.1), then \(\{x_n\}\) is Fejér monotone with respect to F(T).
Proof
Since T satisfies the condition \((C_ \lambda )\) for some \(\lambda \in (0, 1)\) and \(p \in F(T),\) we have
and
for all \(n \in \mathbb {N}\), so that we have
and
Using (3.1), we have
Again, using (3.2) and (3.3), we have
that is, \(d( x_{n+1}, p ) \le d( x_{n}, p )\) for all \(p \in F(T).\) Thus, \(\{x_n\}\) is Fejér monotone with respect to F(T). \(\square\)
Lemma 3.2
Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be a mapping which satisfies the condition \((C_\lambda )\) for some \(\lambda \in (0, 1)\). If \(\{x_n\}\) is a sequence defined by (3.1), then F(T) is nonempty if and only if the sequence \(\{ x_{n} \}\) is bounded and \(\displaystyle \lim \nolimits _{ n \rightarrow \infty } d( x_{n} , Tx_{n}) =0\).
Proof
Suppose that the fixed point set F(T) is nonempty and \(p \in F(T).\) Then by Lemma 3.1, \(\{x_n\}\) is Fejér monotone with respect to F(T), and hence by Proposition 3.1, \(\{x_n\}\) is bounded and \(\displaystyle \lim \nolimits _{n \rightarrow \infty }d(x_n, p)\) exists; let \(\displaystyle \lim \nolimits _{n \rightarrow \infty }d(x_n, p) = c \ge 0.\)

(i)
If \(c =0\), we obviously have
$$\begin{aligned} d( x_{n} , Tx_{n}) &\le \,d( x_{n}, p) + d( Tx_{n} , p) \\ &\le \,2 d( x_{n}, p), \end{aligned}$$and taking the limit as \(n \rightarrow \infty\) on both sides of the above inequality, we have
$$\begin{aligned} \lim _ {n \rightarrow \infty } d( x_{n} , Tx_{n}) = 0. \end{aligned}$$ 
(ii)
If \(c > 0\), since T satisfies the condition \((C_ \lambda )\) for some \(\lambda \in (0, 1)\) and \(p \in F(T),\) we have
$$\begin{aligned} d( Tx_{n} , p) \le d( x_{n} , p), \end{aligned}$$and taking \(\limsup\) as \(n \rightarrow \infty\) on both sides, we get
$$\begin{aligned} \displaystyle \limsup _{ n \rightarrow \infty } d( Tx_{n}, p) \le c. \end{aligned}$$Taking \(\limsup\) as \(n \rightarrow \infty\) on both sides in (3.2), we have
$$\begin{aligned} \displaystyle \limsup _{ n \rightarrow \infty } d( z_{n}, p) \le c. \end{aligned}$$(3.5)
Since
taking \(\liminf\) as \(n \rightarrow \infty\) on both sides, we get
which implies that
Hence, it follows from Lemma 2.2 that
Conversely, suppose that the sequence \(\{ x_{n} \}\) is bounded and \(\lim _{ n \rightarrow \infty } d( x_{n} , Tx_{n}) =0.\) Then all the assumptions of Lemma 2.3 hold, so \(T x=x\) for some \(x \in C\), i.e., F(T) is nonempty. \(\square\)
Theorem 3.1
Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space X with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be a mapping which satisfies conditions \((C_\lambda )\) (for some \(\lambda \in (0, 1)\)) and (E) on C with \(F(T) \ne \varnothing\). If \(\{x_n\}\) is the sequence defined by (3.1), then the sequence \(\{x_n\}\) \(\Delta\)converges to a fixed point of T.
Proof
From Lemma 3.2, we observe that \(\{x_n\}\) is a bounded sequence; therefore, \(\{x_{n}\}\) has a \(\Delta\)-convergent subsequence. We now prove that every \(\Delta\)-convergent subsequence of \(\{x_{n}\}\) has its unique \(\Delta\)-limit in F(T). To this end, let u and v be the \(\Delta\)-limits of subsequences \(\{u_{n}\}\) and \(\{v_{n}\}\) of \(\{x_{n}\}\), respectively. By Lemma 2.1, \(AC(C, \{u_{n}\})= \{u\}\) and \(AC(C, \{v_{n}\})= \{v\}\). By Lemma 3.2, we have \(\displaystyle \lim \nolimits _{ n \rightarrow \infty } d( u_{n}, Tu_{n}) = 0\).
We claim that u and v are fixed points of T and that they coincide. By Lemma 2.3, u and v are fixed points of T. Now we show that \(u =v\). If not, then by the uniqueness of asymptotic centers,
which is a contradiction. Hence \(u =v\), and the sequence \(\{x_n\}\) \(\Delta\)-converges to a fixed point of T. \(\square\)
Theorem 3.2
Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space X with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be a mapping which satisfies conditions \((C_\lambda )\) (for some \(\lambda \in (0, 1))\) and (E) on C with \(F(T) \ne \varnothing\). Then the sequence \(\{x_n\}\) which is defined by (3.1), converges strongly to some fixed point of T if and only if \(\displaystyle \liminf \nolimits_{n \rightarrow \infty } D(x_n, F(T)) = 0\), where \(D(x_{n} , F(T))= \inf _{x \in F(T)} d( x_{n} , x)\).
Proof
Necessity is obvious, so we only prove sufficiency. First, we show that the fixed point set F(T) is closed. Let \(\{x_n\}\) be a sequence in F(T) which converges to some point \(z \in C\). As
in view of the condition \((C_\lambda )\), we have
By taking the limit of both sides we obtain
In view of the uniqueness of the limit, we have \(z = Tz\), so that F(T) is closed. Suppose
From (3.4)
it follows from Lemma 3.1 and Proposition 3.1 that \(\displaystyle \lim \nolimits _{n \rightarrow \infty }D(x_n, F(T))\) exists. Hence \(\displaystyle \lim \nolimits _{ n \rightarrow \infty } D( x_{n} , F(T))=0\).
Consider a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) such that
for all \(k\ge 1\) where \(\{p_k\}\) is in F(T). By Lemma 3.1, we have
which implies that
This shows that \(\{p_k\}\) is a Cauchy sequence. Since F(T) is closed, \(\{p_k\}\) converges; let \(\displaystyle \lim \nolimits _{k \rightarrow \infty }p_k = p\). We now show that \(\{x_n\}\) converges to p. In fact, since
we have \(\displaystyle \lim \nolimits _{k \rightarrow \infty } d(x_{n_k} , p) = 0\). Since \(\displaystyle \lim \nolimits _{n \rightarrow \infty } d(x_{n} , p)\) exists, the sequence \(\{x_{n}\}\) converges to p. \(\square\)
We recall the definition of condition (I) due to Senter and Dotson [22], defined as follows:
Definition 3.2
[22] Let C be a nonempty subset of a metric space X. A mapping \(T :C \rightarrow C\) with nonempty fixed point set F(T) in C is said to satisfy condition (I) if there is a nondecreasing function \(f:[0, \infty ) \rightarrow [0, \infty )\) with \(f(0)=0\) and \(f(t)>0\) for all \(t \in (0, \infty )\), such that \(d(x, Tx) \ge f(D(x, F(T)))\) for all \(x \in C\), where \(D(x, F(T)) = \inf \{d(x, p) : p \in F(T)\}\).
Theorem 3.3
Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space X with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be a mapping which satisfies conditions \((C_\lambda )\) (for some \(\lambda \in (0, 1)\)) and (E) on C. Moreover, T satisfies the condition (I) with \(F(T) \ne \varnothing\). If \(\{x_n\}\) is the sequence defined by (3.1), then the sequence \(\{x_n\}\) converges strongly to some fixed point of T.
Proof
As in the proof of Theorem 3.2, it can be shown that F(T) is closed. Observe that by Lemma 3.2, we have \(\displaystyle \lim \nolimits _{n \rightarrow \infty }d(x_n, Tx_n) = 0\). It follows from the condition (I) that
Therefore, we have
Since \(f :[0, \infty ) \rightarrow [0, \infty )\) is a nondecreasing mapping satisfying \(f (0) = 0\) and \(f(t)>0\) for all \(t \in (0, \infty )\), we have \(\displaystyle \lim \nolimits _{n \rightarrow \infty }D(x_n, F(T)) = 0\). The rest of the proof follows along the lines of Theorem 3.2. \(\square\)
In view of Remark 1.1, the following corollaries are trivially true.
Corollary 3.1
Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space X with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be a mapping which satisfies conditions \((C_\lambda )\) (for some \(\lambda \in (0, 1)\)) and (E) on C with \(F(T) \ne \varnothing\). If \(\{x_n\}\) is the sequence defined by (for each \(x_{1} \in C\) )
then the sequence \(\{x_n\}\) \(\Delta\)converges to a fixed point of T.
Corollary 3.2
Under the assumptions of Corollary 3.1 with \(F(T) \ne \varnothing\), the sequence \(\{x_n\}\) defined by (3.7) converges strongly to some fixed point of T if and only if \(\displaystyle \liminf \nolimits _{n \rightarrow \infty } D(x_n, F(T)) = 0,\) where \(D(x_{n} , F(T))= \displaystyle \inf \nolimits _{x \in F(T)} d( x_{n} , x).\)
Corollary 3.3
Under the assumptions of Corollary 3.1 with \(F(T) \ne \varnothing\), if T satisfies the condition (I), then the sequence \(\{x_n\}\) defined by (3.7) converges strongly to some fixed point of T.
In view of Remark 2.2, we have the following corollaries:
Corollary 3.4
Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space X with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be an SKC mapping with \(F(T) \ne \varnothing\). Then the sequence \(\{x_n\}\) defined by (3.1) \(\Delta\)-converges to a fixed point of T.
Corollary 3.5
Under the assumptions of Corollary 3.4 with \(F(T) \ne \varnothing\), the sequence \(\{x_n\}\) defined by (3.1) converges strongly to some fixed point of T if and only if \(\displaystyle \liminf \nolimits_{n \rightarrow \infty } D(x_n, F(T)) = 0\), where \(D(x_{n} , F(T))= \displaystyle \inf \nolimits _{x \in F(T)} d( x_{n} , x).\)
Corollary 3.6
Under the assumptions of Corollary 3.4 with \(F(T) \ne \varnothing\), if T satisfies the condition (I), then the sequence \(\{x_n\}\) defined by (3.1) converges strongly to some fixed point of T.
Corollary 3.7
Let C be a nonempty closed convex subset of a complete uniformly convex hyperbolic space X with monotone modulus of uniform convexity \(\eta\) and \(T :C \rightarrow C\) be an SKC mapping with \(F(T) \ne \varnothing\). Then the sequence \(\{x_n\}\) defined by (3.7) \(\Delta\)-converges to a fixed point of T.
Corollary 3.8
Under the assumptions of Corollary 3.7 with \(F(T) \ne \varnothing\), the sequence \(\{x_n\}\) defined by (3.7) converges strongly to some fixed point of T if and only if \(\displaystyle \liminf \nolimits _{n \rightarrow \infty } D(x_n, F(T)) = 0,\) where \(D(x_{n} , F(T))= \displaystyle \inf \nolimits _{x \in F(T)} d( x_{n} , x).\)
Corollary 3.9
Under the assumptions of Corollary 3.7 with \(F(T) \ne \varnothing\), if T satisfies the condition (I), then the sequence \(\{x_n\}\) defined by (3.7) converges strongly to some fixed point of T.
References
Abbas, M., Khan, S.H., Postolache, M.: Existence and approximation results for SKC mappings in CAT(0) spaces. J. Inequal. Appl. 2014, 212 (2014)
Abbas, M., Nazir, T.: A new faster iteration process applied to constrained minimization and feasibility problems. Mat. Vesn. 66, 223–234 (2014)
Agarwal, R.P., O’Regan, D., Sahu, D.R.: Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J. Convex Anal. 8(1), 61–79 (2007)
Bridson, M.R., Haefliger, A.: Metric Spaces of Non-Positive Curvature. Springer, Berlin (1999)
Goebel, K., Kirk, W.A.: Iteration processes for nonexpansive mappings. In: Singh, S.P., Thomeier, S., Watson, B. (eds) Topological Methods in Nonlinear Functional Analysis (Toronto, 1982), pp. 115–123. Contemporary Mathematics, vol 21. American Mathematical Society, New York (1983)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)
García-Falset, J., Llorens-Fuster, E., Suzuki, T.: Fixed point theory for a class of generalized nonexpansive mappings. J. Math. Anal. Appl. 375, 185–195 (2011)
Itoh, S.: Some fixed point theorems in metric spaces. Fund. Math. 102, 109–117 (1979)
Ishikawa, S.: Fixed points by a new iteration method. Proc. Am. Math. Soc. 44, 147–150 (1974)
Karapinar, E., Tas, K.: Generalized (\(C\))conditions and related fixed point theorems. Comput. Math. Appl. 61, 3370–3380 (2011)
Kadioglu, N., Yildirim, I.: Approximating fixed points of nonexpansive mappings by faster iteration process. arXiv:1402.6530v1 [math.FA] (2014)
Krasnosel’skiǐ, M.A.: Two remarks on the method of successive approximations. Usp. Mat. Nauk. 10, 123–127 (1955)
Khan, A.R., Fukhar-ud-din, H., Khan, M.A.: An implicit algorithm for two finite families of nonexpansive maps in hyperbolic spaces. Fixed Point Theory Appl. 2012, 54 (2012)
Kim, J.K., Pathak, R.P., Dashputre, S., Diwan, S.D., Gupta, R.: Fixed point approximation of generalized nonexpansive mappings in hyperbolic spaces. Int. J. Math. Math. Sci. 2015, 6 (2015)
Kohlenbach, U.: Some logical metatheorems with applications in functional analysis. Trans. Am. Math. Soc. 357(1), 89–128 (2005)
Leuştean, L.: A quadratic rate of asymptotic regularity for CAT(0) spaces. J. Math. Anal. Appl. 325(1), 386–399 (2007)
Leuştean, L.: Nonexpansive iteration in uniformly convex \(W\)hyperbolic spaces. In: Leizarowitz, A., Mordukhovich, B.S., Shafrir, I., Zaslavski, A. (eds) Nonlinear Analysis and Optimization I. Nonlinear Analysis. Contemporary Mathematics, vol. 513, pp. 193–210. Ramat Gan American Mathematical Society, Bar Ilan University, Providence (2010)
Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510 (1953)
Noor, M.A.: New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 251, 217–229 (2000)
Reich, S., Shafrir, I.: Nonexpansive iterations in hyperbolic spaces. Nonlinear Anal. 15, 537–558 (1990)
Sahu, D.R.: Application of the \(S\)iteration process to constrained minimization problem and split feasibility problems. Fixed Point Theory 12, 187–204 (2011)
Senter, H.F., Dotson Jr., W.G.: Approximating fixed points of nonexpansive mappings. Proc. Am. Math. Soc. 44(2), 375–380 (1974)
Schaefer, H.: Über die methode sukzessiver approximationen. Jber. Dtsch. Math. Ver. 59, 131–140 (1957)
Suzuki, T.: Fixed point theorems and convergence theorems for some generalized nonexpansive mappings. J. Math. Anal. Appl. 340, 1088–1095 (2008)
Takahashi, W.: A convexity in metric space and nonexpansive mappings I. Kodai Math. Sem. Rep. 22, 142–149 (1970)
Thakur, D., Thakur, B.S., Postolache, M.: New iteration scheme for numerical reckoning fixed points of nonexpansive mappings. J. Inequal. Appl. 2014, 328 (2014)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Imdad, M., Dashputre, S. Fixed point approximation of Picard normal S-iteration process for generalized nonexpansive mappings in hyperbolic spaces. Math Sci 10, 131–138 (2016). https://doi.org/10.1007/s40096-016-0187-8
Keywords
 Generalized nonexpansive mappings
 Strong and Δ-convergence
 Uniformly convex hyperbolic spaces
 Picard normal S-iteration process