1 Introduction

Let C be a nonempty subset of a metric space (X,d) and T : C → C be a mapping. Denote the set of fixed points of T by F(T). Then T is (i) nonexpansive if d(Tx,Ty) ≤ d(x,y) for all x, y ∈ C; (ii) quasi-nonexpansive if F(T) ≠ ∅ and d(Tx,y) ≤ d(x,y) for all x ∈ C and y ∈ F(T). For an initial value x_1 ∈ C, Das and Debata [1] studied the strong convergence of the Ishikawa iterates {x_n} defined by

x_{n+1} = α_n S(β_n T x_n + (1 − β_n) x_n) + (1 − α_n) x_n
(1.1)

for two quasi-nonexpansive mappings S, T on a nonempty closed and convex subset of a strictly convex Banach space. Takahashi and Tamura [2] proved weak convergence of (1.1) to a common fixed point of two nonexpansive mappings in a uniformly convex Banach space which satisfies Opial’s condition or whose norm is Fréchet differentiable, as well as strong convergence in a strictly convex Banach space (see also [3, 4]). The Mann and Ishikawa iterative procedures are well defined in a vector space thanks to its built-in convexity. In the literature, several authors have introduced notions of convexity in metric spaces; see, for example, [5–8]. In this work, we follow the original metric convexity introduced by Menger [9] and used by many authors such as Kirk [5, 6] and Takahashi [8]. Note that Mann iterative procedures have also been investigated in hyperbolic metric spaces [10, 11].
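For orientation, we recall a standard one-dimensional example (not taken from [1] or [2]) separating the two notions: on X = ℝ with the usual metric, the map T defined by

T x = (x/2) sin(1/x) for x ≠ 0, T 0 = 0,

has F(T) = {0} and satisfies d(Tx, 0) = |x| |sin(1/x)|/2 ≤ d(x, 0), so T is quasi-nonexpansive; it fails to be nonexpansive, since |Tx − Ty| can exceed |x − y| for suitable x, y near 0. Every nonexpansive mapping with at least one fixed point is, of course, quasi-nonexpansive.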

In this paper we investigate the results published in [2] and generalize them to uniformly convex hyperbolic spaces. Particular examples of such metric spaces are the CAT(0) spaces (in the sense of Gromov) and ℝ-trees (in the sense of Tits). The heavy use of the linear structure of Banach spaces in [2] presents some difficulties when extending these results to metric spaces. For example, a key assumption in many of their results is the smoothness of the norm, which has no obvious analogue in metric spaces.

2 Menger convexity in metric spaces

Let (X,d) be a metric space. Assume that for any x and y in X, there exists a unique metric segment [x,y], which is an isometric copy of the real line interval [0, d(x,y)]. Denote by ℱ the family of metric segments in X. For any β ∈ [0,1], there exists a unique point z ∈ [x,y] such that

d(x,z) = (1 − β) d(x,y) and d(z,y) = β d(x,y).

Throughout this paper we will denote such a point by βx ⊕ (1 − β)y. Such metric spaces are usually called convex metric spaces [9]. Moreover, if we have

d(αp ⊕ (1 − α)x, αq ⊕ (1 − α)y) ≤ α d(p,q) + (1 − α) d(x,y)

for all p, q, x, y in X and α ∈ [0,1], then X is said to be a hyperbolic metric space (see [11–13]). For q = y, the hyperbolic inequality reduces to the convex structure inequality [8]. Throughout this paper, we will assume

αx ⊕ (1 − α)y = (1 − α)y ⊕ αx

for any α ∈ [0,1] and any x, y ∈ X.
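As a simple check (a standard computation, recorded here only for illustration), any normed vector space (X, ‖·‖) becomes a hyperbolic metric space when one sets αx ⊕ (1 − α)y := αx + (1 − α)y: for p, q, x, y ∈ X and α ∈ [0,1],

‖(αp + (1 − α)x) − (αq + (1 − α)y)‖ = ‖α(p − q) + (1 − α)(x − y)‖ ≤ α‖p − q‖ + (1 − α)‖x − y‖,

which is precisely the hyperbolic inequality above.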

Examples of hyperbolic spaces include all normed (in particular Banach) vector spaces. Hadamard manifolds [14], the Hilbert open unit ball equipped with the hyperbolic metric [15], and the CAT(0) spaces [6, 16–20] (see Example 2.1) are examples of nonlinear structures which play a major role in recent research in metric fixed point theory. A subset C of a hyperbolic space X is said to be convex if [x,y] ⊆ C whenever x, y ∈ C (see also [21]).

Definition 2.1 [22, 23]

Let (X,d) be a hyperbolic metric space. For any r>0 and ε>0, set

δ(r,ε) = inf { 1 − (1/r) d(½x ⊕ ½y, a) : d(x,a) ≤ r, d(y,a) ≤ r, d(x,y) ≥ rε }

for any a ∈ X. X is said to be uniformly convex whenever δ(r,ε) > 0 for any r > 0 and ε > 0.

Throughout this paper we assume that if X is a uniformly convex hyperbolic space, then for every s ≥ 0 and ε > 0, there exists η(s,ε) > 0 such that

δ(r,ε) > η(s,ε) > 0 for any r > s.
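For instance (a standard Hilbert space computation, included only for illustration), in a Hilbert space H with αx ⊕ (1 − α)y = αx + (1 − α)y, the parallelogram identity gives, whenever d(x,a) ≤ r, d(y,a) ≤ r and d(x,y) ≥ rε,

‖½(x + y) − a‖² = ½‖x − a‖² + ½‖y − a‖² − ¼‖x − y‖² ≤ r²(1 − ε²/4),

so δ(r,ε) ≥ 1 − √(1 − ε²/4) > 0 for ε ∈ (0,2]. In particular, Hilbert spaces are uniformly convex in the above sense, and since this lower bound does not depend on r, the assumption on η(s,ε) is easily satisfied (take, e.g., η(s,ε) = ½(1 − √(1 − ε²/4))).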

Remark 2.1

(i) We have δ(r,0) = 0. Moreover, δ(r,ε) is an increasing function of ε.

(ii) For r_1 ≤ r_2, we have

1 − (r_2/r_1)(1 − δ(r_2, ε r_1/r_2)) ≤ δ(r_1, ε).
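For the reader's convenience, we indicate the short verification of (ii): if d(x,a) ≤ r_1, d(y,a) ≤ r_1 and d(x,y) ≥ r_1 ε, then also d(x,a) ≤ r_2, d(y,a) ≤ r_2 and d(x,y) ≥ r_2 (ε r_1/r_2), so that

d(½x ⊕ ½y, a) ≤ r_2 (1 − δ(r_2, ε r_1/r_2)),

and hence 1 − (1/r_1) d(½x ⊕ ½y, a) ≥ 1 − (r_2/r_1)(1 − δ(r_2, ε r_1/r_2)). Taking the infimum over all such x and y gives (ii).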

Next we give an important example of a uniformly convex hyperbolic metric space.

Example 2.1 [16]

Let (X,d) be a metric space. A geodesic from x to y in X is a mapping c from a closed interval [0,l] ⊆ ℝ to X such that c(0) = x, c(l) = y, and d(c(t), c(t′)) = |t − t′| for all t, t′ ∈ [0,l]. In particular, c is an isometry and d(x,y) = l. The image α of c is called a geodesic (or metric) segment joining x and y. The space (X,d) is said to be a geodesic space if every two points of X are joined by a geodesic, and X is said to be uniquely geodesic if there is exactly one geodesic joining x and y for each x, y ∈ X; this geodesic segment will be denoted by [x,y] and called the segment joining x to y.

A geodesic triangle Δ(x_1, x_2, x_3) in a geodesic metric space X consists of three points x_1, x_2, x_3 in X (the vertices of Δ) and a geodesic segment between each pair of vertices (the edges of Δ). A comparison triangle for the geodesic triangle Δ(x_1, x_2, x_3) in (X,d) is a triangle Δ̄(x_1, x_2, x_3) := Δ(x̄_1, x̄_2, x̄_3) in ℝ² such that d_{ℝ²}(x̄_i, x̄_j) = d(x_i, x_j) for i, j ∈ {1,2,3}. Such a triangle always exists (see [18]).

A geodesic metric space is said to be a CAT(0) space if all geodesic triangles of appropriate size satisfy the following CAT(0) comparison axiom.

Let Δ be a geodesic triangle in X and let Δ̄ ⊆ ℝ² be a comparison triangle for Δ. Then Δ is said to satisfy the CAT(0) inequality if for all x, y ∈ Δ and all comparison points x̄, ȳ ∈ Δ̄,

d(x,y) ≤ d(x̄, ȳ).

Complete CAT(0) spaces are often called Hadamard spaces (see [16]). If x, y_1, y_2 are points of a CAT(0) space, then the CAT(0) inequality implies

d²(x, ½y_1 ⊕ ½y_2) ≤ ½ d²(x, y_1) + ½ d²(x, y_2) − ¼ d²(y_1, y_2).

The above inequality is known as the (CN) inequality of Bruhat and Tits [24]. The (CN) inequality implies that CAT(0) spaces are uniformly convex with

δ(r,ε) = 1 − √(1 − ε²/4).
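Indeed (a short verification included for completeness), if d(x, y_1) ≤ r, d(x, y_2) ≤ r and d(y_1, y_2) ≥ rε, then the (CN) inequality gives

d²(x, ½y_1 ⊕ ½y_2) ≤ ½ r² + ½ r² − ¼ (rε)² = r²(1 − ε²/4),

so 1 − (1/r) d(x, ½y_1 ⊕ ½y_2) ≥ 1 − √(1 − ε²/4) > 0 for ε ∈ (0,2]. In particular, CAT(0) spaces are uniformly convex, and since this bound does not depend on r, the assumption on η(s,ε) made above is automatically available in this setting.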

In a hyperbolic space X, (1.1) is written as

x_{n+1} = α_n S(β_n T x_n ⊕ (1 − β_n) x_n) ⊕ (1 − α_n) x_n,
(2.1)

where α_n, β_n ∈ [0,1]. If S = T in (2.1), it reduces to the following Ishikawa iteration process for a single mapping:

x_{n+1} = α_n T(β_n T x_n ⊕ (1 − β_n) x_n) ⊕ (1 − α_n) x_n, n ≥ 1,
(2.2)

where α_n, β_n ∈ [0,1]. Let {x_n} be a bounded sequence in a metric space X and C be a nonempty subset of X. Define r(·, {x_n}) : C → [0,∞) by

r(x, {x_n}) = lim sup_{n→∞} d(x, x_n).

The asymptotic radius ρ_C of {x_n} with respect to C is given by

ρ_C = inf { r(x, {x_n}) : x ∈ C }.

ρ will denote the asymptotic radius of {x_n} with respect to X. A point ξ ∈ C is said to be an asymptotic center of {x_n} with respect to C if r(ξ, {x_n}) = r(C, {x_n}) = min{ r(x, {x_n}) : x ∈ C }. We denote by A(C, {x_n}) the set of asymptotic centers of {x_n} with respect to C. When C = X, we call ξ an asymptotic center of {x_n} and we use the notation A({x_n}) instead of A(X, {x_n}). In general, the set A(C, {x_n}) of asymptotic centers of a bounded sequence {x_n} may be empty or may even contain infinitely many points. Note that in the study of the geometry of Banach spaces, the function r(·, {x_n}) is also known as a type. For more on types in metric spaces, we refer to [25].
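A simple illustration (included only to fix ideas): in X = ℝ, the bounded sequence x_n = (−1)ⁿ satisfies r(x, {x_n}) = lim sup_{n→∞} |x − (−1)ⁿ| = max{|x − 1|, |x + 1|}, which is minimized exactly at x = 0; hence ρ = 1 and A({x_n}) = {0}. With respect to C = [1,2], the same sequence has A(C, {x_n}) = {1} and ρ_C = 2.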

Δ-convergence, introduced independently by Kuczumow [26] and Lim [27], has been shown in CAT(0) spaces to behave much like weak convergence in Banach spaces.

Definition 2.2 A bounded sequence {x_n} in X is said to Δ-converge to x ∈ X if x is the unique asymptotic center of every subsequence {u_n} of {x_n}. We write x_n →_Δ x ({x_n} Δ-converges to x).
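For instance (a toy example), the sequence x_n = (−1)ⁿ in ℝ does not Δ-converge: its asymptotic center is 0, while the subsequence of even-indexed terms is constantly 1 and therefore has asymptotic center 1. On the other hand, any sequence converging strongly to x clearly Δ-converges to x, since every subsequence has x as its unique asymptotic center.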

In this paper, we study the iteration schemes (2.1)-(2.2) for nonexpansive mappings. We prove strong convergence of these iterates in strictly convex hyperbolic spaces and Δ-convergence results in uniformly convex hyperbolic spaces, without requiring any condition similar to Fréchet differentiability of the norm.

In the sequel, the following results will be needed.

Lemma 2.1 [25, 28]

Let X be a hyperbolic metric space. Assume that X is uniformly convex. Let C be a nonempty, closed and convex subset of X. Then every bounded sequence {x_n} ⊆ X has a unique asymptotic center with respect to C.

Lemma 2.2 [25, 28]

Let X be a hyperbolic metric space. Assume that X is uniformly convex. Let C be a nonempty, closed and convex subset of X, and let {x_n} be a bounded sequence in C such that A({x_n}) = {y} and r({x_n}) = ρ. If {y_m} is a sequence in C such that lim_{m→∞} r(y_m, {x_n}) = ρ, then lim_{m→∞} y_m = y.

The following lemma [29] will be useful in studying the sequence generated by (2.1) in uniformly convex metric spaces. Here we give a proof based on the ideas developed in [25].

Lemma 2.3 Let X be a uniformly convex hyperbolic space. Then for arbitrary positive numbers ε > 0 and r > 0, and α ∈ [0,1], we have

d(a, αx ⊕ (1 − α)y) ≤ r (1 − δ(r, 2 min{α, 1 − α} ε))

for all a, x, y ∈ X such that d(a,x) ≤ r, d(a,y) ≤ r, and d(x,y) ≥ rε.

Proof Without loss of generality, we may assume α < ½; in this case min{α, 1 − α} = α. Let a ∈ X be fixed and x, y ∈ X. Set x̄ = 2αx ⊕ (1 − 2α)y. Since

d(½x̄ ⊕ ½y, x) ≤ (1 − α) d(x,y) and d(½x̄ ⊕ ½y, y) = α d(x,y),

the uniform convexity of X will imply ½x̄ ⊕ ½y = αx ⊕ (1 − α)y. Using the uniform convexity of X (note that d(x̄, a) ≤ r and d(x̄, y) = 2α d(x,y) ≥ r(2αε)), we get

d(a, ½x̄ ⊕ ½y) ≤ r (1 − δ(r, 2αε)).

Hence

d(a, αx ⊕ (1 − α)y) ≤ r (1 − δ(r, 2 min{α, 1 − α} ε)).

 □

Remark 2.2 If (X,d) is uniformly convex, then (X,d) is strictly convex, i.e., whenever

d(αx ⊕ (1 − α)y, a) = d(x,a) = d(y,a)

for some α ∈ (0,1) and x, y, a ∈ X, we must have x = y.

The following result is very useful.

Lemma 2.4 [25]

Let (X,d) be a uniformly convex hyperbolic space. Let a ∈ X, let {x_n} and {y_n} be sequences in X, and let R ∈ [0,+∞) be such that

lim sup_{n→∞} d(x_n, a) ≤ R, lim sup_{n→∞} d(y_n, a) ≤ R and lim_{n→∞} d(a, ½x_n ⊕ ½y_n) = R.

Then we have

lim_{n→∞} d(x_n, y_n) = 0.

But since we use convex combinations other than the middle point, we will need the following generalization obtained by using Lemma 2.3.

Lemma 2.5 Let (X,d) be a uniformly convex hyperbolic space. Let p ∈ X, let {x_n} and {y_n} be sequences in X, and let R ∈ [0,+∞) be such that lim sup_{n→∞} d(x_n, p) ≤ R, lim sup_{n→∞} d(y_n, p) ≤ R, and

lim_{n→∞} d(p, α_n x_n ⊕ (1 − α_n) y_n) = R,

where α_n ∈ [a,b], with 0 < a ≤ b < 1. Then we have

lim_{n→∞} d(x_n, y_n) = 0.

A subset C of a metric space X is Chebyshev if for every x ∈ X, there exists c_0 ∈ C such that d(c_0, x) < d(c, x) for all c ∈ C, c ≠ c_0. In other words, for each point of the space, there is a well-defined nearest point of C. We can then define the nearest point projection P : X → C by sending x to c_0. We have the following result.

Lemma 2.6 [25]

Let (X,d) be a complete uniformly convex hyperbolic space. Let C be a nonempty, convex and closed subset of X. Let x ∈ X be such that d(x,C) < ∞. Then there exists a unique best approximant of x in C, i.e., there exists a unique c_0 ∈ C such that

d(x, c_0) = d(x, C) = inf { d(x, c) : c ∈ C },

i.e., C is Chebyshev.
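A concrete instance (for illustration only): in X = ℝ with C = [u,v] a closed bounded interval, the nearest point projection is P(x) = min{v, max{u, x}}, i.e., points of C are left fixed, points below u are sent to u, and points above v are sent to v.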

3 Convergence in strictly convex hyperbolic spaces

Let (X,d) be a hyperbolic metric space. Let C be a nonempty closed convex subset of X. Let S, T : C → C be two nonexpansive mappings. Throughout the paper, assume that F = F(S) ∩ F(T). Let x_1 ∈ C and p ∈ F (assuming F is not empty). Set r = d(x_1, p). Then

C(x_1) = C ∩ B(p, r) = { x ∈ C : d(p,x) ≤ r }

is nonempty and invariant under both S and T. Therefore one may always assume that C is bounded, provided that S and T have a common fixed point. Moreover, if {x_n} is the sequence generated by (2.1), then we have

d(x_{n+1}, p) = d(α_n S y_n ⊕ (1 − α_n) x_n, p)
≤ α_n d(S y_n, p) + (1 − α_n) d(x_n, p)
≤ α_n d(y_n, p) + (1 − α_n) d(x_n, p)
= α_n d(β_n T x_n ⊕ (1 − β_n) x_n, p) + (1 − α_n) d(x_n, p)
≤ α_n [β_n d(T x_n, p) + (1 − β_n) d(x_n, p)] + (1 − α_n) d(x_n, p)
≤ d(x_n, p),

where y_n = β_n T x_n ⊕ (1 − β_n) x_n. This proves that {d(x_n, p)} is decreasing, which implies that lim_{n→∞} d(x_n, p) exists. Using the above inequalities, we get

lim_{n→∞} d(x_n, p) = lim_{n→∞} d(α_n S y_n ⊕ (1 − α_n) x_n, p)
= lim_{n→∞} [α_n d(S y_n, p) + (1 − α_n) d(x_n, p)]
= lim_{n→∞} [α_n d(y_n, p) + (1 − α_n) d(x_n, p)]
= lim_{n→∞} [α_n d(β_n T x_n ⊕ (1 − β_n) x_n, p) + (1 − α_n) d(x_n, p)]
= lim_{n→∞} [α_n (β_n d(T x_n, p) + (1 − β_n) d(x_n, p)) + (1 − α_n) d(x_n, p)].
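As a concrete illustration (a toy example in X = ℝ, not part of the general argument), take C = [−1,1], S x = x/2 and T x = −x, so that F = {0}. With α_n = β_n = ½, one has y_n = ½ T x_n ⊕ ½ x_n = 0, hence

x_{n+1} = ½ S y_n ⊕ ½ x_n = ½ x_n,

and d(x_n, 0) = |x_1|/2^{n−1} decreases to 0, in agreement with the monotonicity just established.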

The first result of this work discusses the convergence behavior of the sequence generated by (2.1).

Theorem 3.1 Let X be a strictly convex hyperbolic space. Let C be a nonempty, bounded, closed and convex subset of X. Let S, T : C → C be two nonexpansive mappings. Assume that F ≠ ∅. Let x_1 ∈ C and let {x_n} be given by (2.1). Then the following hold:

(i) if α_n ∈ [a,b] and β_n ∈ [0,b], with 0 < a ≤ b < 1, then x_{n_i} → y implies y ∈ F(S);

(ii) if α_n ∈ [a,1] and β_n ∈ [a,b], with 0 < a ≤ b < 1, then x_{n_i} → y implies y ∈ F(T);

(iii) if α_n, β_n ∈ [a,b], with 0 < a ≤ b < 1, then x_{n_i} → y implies y ∈ F. In this case, we have x_n → y.

Proof Assume that x_{n_i} → y. Let p ∈ F. Without loss of generality, we may assume lim_{i→∞} α_{n_i} = α and lim_{i→∞} β_{n_i} = β. Since {d(x_n, p)} is decreasing, we get

lim_{n→∞} d(x_n, p) = lim_{i→∞} d(x_{n_i}, p) = d(y, p).

The above inequalities imply

d(y, p) = d(α S(β T y ⊕ (1 − β) y) ⊕ (1 − α) y, p)
= α d(S(β T y ⊕ (1 − β) y), p) + (1 − α) d(y, p)
= α d(β T y ⊕ (1 − β) y, p) + (1 − α) d(y, p)
= α [β d(T y, p) + (1 − β) d(y, p)] + (1 − α) d(y, p).

Set r = d(y, p). Without loss of generality, we may assume r > 0; otherwise most of the conclusions of the theorem are trivial. Assume that lim inf_{n→∞} α_n > 0. Then α ≠ 0. Hence

d(S(β T y ⊕ (1 − β) y), p) = d(β T y ⊕ (1 − β) y, p) = β d(T y, p) + (1 − β) r = r,

which implies β d(T y, p) = β r. If we assume that lim inf_{n→∞} β_n > 0, then β ≠ 0, which implies d(T y, p) = r.

(1) If α ∈ (0,1) and β > 0, then

d(p, S(β T y ⊕ (1 − β) y)) = d(α S(β T y ⊕ (1 − β) y) ⊕ (1 − α) y, p) = r.

The strict convexity of X will imply S(β T y ⊕ (1 − β) y) = y.

(2) If α ∈ (0,1) and β = 0, then

d(p, y) = d(p, S(y)) = d(α S(y) ⊕ (1 − α) y, p).

The strict convexity of X will imply S(y) = y.

(3) If β ∈ (0,1) and α > 0, then

d(p, y) = d(p, T(y)) = d(p, β T y ⊕ (1 − β) y).

The strict convexity of X will imply T(y) = y.

(4) If α, β ∈ (0,1), then T(y) = y and S(β T y ⊕ (1 − β) y) = y. Hence S(y) = y.

Let us finish the proof of Theorem 3.1. Note that in (i) we have α ∈ [a,b] and β ∈ [0,b]. If β = 0, then conclusion (2) above implies y ∈ F(S). Otherwise, conclusion (4) will imply y ∈ F. This proves (i).

For (ii), notice that α ∈ [a,1] and β ∈ [a,b]. Hence conclusion (3) will imply y ∈ F(T), which proves (ii).

For (iii), notice that α, β ∈ [a,b]. Hence conclusion (4) will imply y ∈ F. Since y ∈ F, the sequence {d(x_n, y)} is decreasing, and since x_{n_i} → y,

lim_{n→∞} d(x_n, y) = lim_{i→∞} d(x_{n_i}, y) = 0,

so x_n → y, which completes the proof of (iii). □

If we assume compactness, Theorem 3.1 will imply the following result.

Theorem 3.2 Let X be a strictly convex hyperbolic space. Let C be a nonempty, bounded, closed and convex subset of X. Let S, T : C → C be two nonexpansive mappings. Assume that F ≠ ∅. Fix x_1 ∈ C. Assume that the closed convex hull of {x_1} ∪ S(C) ∪ T(C) is a compact subset of C. Define {x_n} as in (2.1), where α_n, β_n ∈ [a,b], with 0 < a ≤ b < 1, and x_1 is the initial element of the sequence. Then {x_n} converges strongly to a common fixed point of S and T.

Proof We have x_n in the closed convex hull of {x_1} ∪ S(C) ∪ T(C) for every n. Since this set is compact, {x_n} has a convergent subsequence {x_{n_i}}, i.e., x_{n_i} → z. By Theorem 3.1, we have z ∈ F and x_n → z. □

The existence of a common fixed point of T and S is crucial. If one assumes that T and S commute, i.e., ST = TS, then a common fixed point exists under the assumptions of Theorem 3.2. Indeed, fix x_0 ∈ C and define

T_n x = (1/n) x_0 ⊕ (1 − 1/n) T x

for x ∈ C and n ≥ 1. Then

d(T_n x, T_n y) = d((1/n) x_0 ⊕ (1 − 1/n) T x, (1/n) x_0 ⊕ (1 − 1/n) T y)
≤ (1 − 1/n) d(T x, T y)
≤ (1 − 1/n) d(x, y).

That is, T_n is a contraction. By the Banach contraction principle (BCP), T_n has a unique fixed point u_n in C. Since the closure of T(C) is compact, there exists a subsequence {T u_{n_i}} of {T u_n} such that T u_{n_i} → u. Since T(C) is bounded and

d(u_n, T u_n) = d((1/n) x_0 ⊕ (1 − 1/n) T u_n, T u_n) ≤ (1/n) d(x_0, T u_n),

we have d(u_n, T u_n) → 0. In particular, we have u_{n_i} → u. Continuity of T implies T u = u. Since X is strictly convex, F(T) is a nonempty convex subset of X. Since T and S commute, we have S(F(T)) ⊆ F(T). Moreover, since the closure of T(C) is compact, we see that F(T) is compact. The above proof shows that S has a fixed point in F(T), i.e., F = F(S) ∩ F(T) ≠ ∅.
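To see this approximation at work (a toy example, included only for illustration), take X = ℝ, C = [0,1], T x = 1 − x and x_0 = 0. Then T_n x = (1 − 1/n)(1 − x), whose unique fixed point is u_n = (n − 1)/(2n − 1); one checks that d(u_n, T u_n) = 1/(2n − 1) → 0 and u_n → 1/2, which is the fixed point of T.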

The case S=T gives the following result.

Theorem 3.3 Let C be a nonempty closed and convex subset of a complete strictly convex hyperbolic space X. Let T : C → C be a nonexpansive mapping such that the closed convex hull of {c_0} ∪ T(C) is a compact subset of C, where c_0 ∈ C. Define {x_n} by (2.2), where x_1 = c_0, α_n ∈ [a,b] and β_n ∈ [0,b], or α_n ∈ [a,1] and β_n ∈ [a,b], with 0 < a ≤ b < 1. Then {x_n} converges strongly to a fixed point of T.

Proof We saw that in this case we have F(T) ≠ ∅. Since x_n lies in the closed convex hull of {x_1} ∪ T(C), which is compact, there exists a subsequence {x_{n_i}} of {x_n} such that x_{n_i} → z ∈ C. By Theorem 3.1, we have Tz = z and x_n → z. □

4 Convergence in uniformly convex hyperbolic spaces

The following result is similar to the well-known demi-closedness principle discovered by Göhde in uniformly convex Banach spaces [30].

Lemma 4.1 Let C be a nonempty, closed and convex subset of a complete uniformly convex hyperbolic space X. Let T : C → C be a nonexpansive mapping. Let {x_n} ⊆ C be an approximate fixed point sequence of T, i.e., lim_{n→∞} d(x_n, T x_n) = 0. If x ∈ C is the asymptotic center of {x_n} with respect to C, then x is a fixed point of T, i.e., x ∈ F(T). In particular, if {x_n} ⊆ C is an approximate fixed point sequence of T such that x_n →_Δ x, then x ∈ F(T).

Proof Let {x_n} be an approximate fixed point sequence of T. Let x ∈ C be the unique asymptotic center of {x_n} with respect to C. Since

d(Tx, x_n) ≤ d(Tx, T x_n) + d(T x_n, x_n) ≤ d(x, x_n) + d(T x_n, x_n),

we get

r(Tx, {x_n}) = lim sup_{n→∞} d(Tx, x_n) ≤ lim sup_{n→∞} [d(x, x_n) + d(T x_n, x_n)] = r(x, {x_n}).

By the uniqueness of the asymptotic center, we get Tx=x. □

The following theorem is necessary to discuss the behavior of the iterates in (2.1).

Theorem 4.1 Let C be a nonempty, closed and convex subset of a complete uniformly convex hyperbolic space X. Let S, T : C → C be nonexpansive mappings such that F ≠ ∅. Fix x_1 ∈ C and generate {x_n} by (2.1). Set

y_n = β_n T x_n ⊕ (1 − β_n) x_n

for any n ≥ 1.

(i) If α_n ∈ [a,b], where 0 < a ≤ b < 1, then

lim_{n→∞} d(x_n, S y_n) = 0.

(ii) If lim inf_{n→∞} α_n > 0 and β_n ∈ [a,b], with 0 < a ≤ b < 1, then

lim_{n→∞} d(x_n, T x_n) = 0.

(iii) If α_n, β_n ∈ [a,b], with 0 < a ≤ b < 1, then

lim_{n→∞} d(x_n, S x_n) = 0 and lim_{n→∞} d(x_n, T x_n) = 0.

Proof Let p ∈ F. Then the sequence {d(x_n, p)} is decreasing. Set c = lim_{n→∞} d(x_n, p). If c = 0, then all the conclusions are trivial. Therefore we will assume that c > 0. Note that we have

d(x_{n+1}, p) ≤ α_n d(S y_n, p) + (1 − α_n) d(x_n, p)
(4.1)

and

d(S y_n, p) ≤ d(y_n, p) ≤ β_n d(T x_n, p) + (1 − β_n) d(x_n, p) ≤ d(x_n, p)
(4.2)

for any n ≥ 1. In order to prove (i), assume that α_n ∈ [a,b], where 0 < a ≤ b < 1. From the inequalities (4.1) and (4.2), we get

d(x_{n+1}, p) = d(α_n S y_n ⊕ (1 − α_n) x_n, p) ≤ α_n d(S y_n, p) + (1 − α_n) d(x_n, p) ≤ d(x_n, p),

which implies lim_{n→∞} d(S y_n, p) = c. Indeed, let U be a nontrivial ultrafilter over ℕ. Then we have lim_U α_n = α ∈ [a,b] and lim_U d(x_n, p) = lim_U d(x_{n+1}, p) = c. Hence

c ≤ α lim_U d(S y_n, p) + (1 − α) c ≤ c.

Since α ≠ 0, we get lim_U d(S y_n, p) = c. Since U was an arbitrary nontrivial ultrafilter, we get lim_{n→∞} d(S y_n, p) = c as claimed. Therefore we have

lim_{n→∞} d(x_n, p) = lim_{n→∞} d(S y_n, p) = lim_{n→∞} d(α_n S y_n ⊕ (1 − α_n) x_n, p) = c.

Using Lemma 2.5, we get lim_{n→∞} d(S y_n, x_n) = 0.

Next we prove (ii). Assume lim inf_{n→∞} α_n > 0 and β_n ∈ [a,b], with 0 < a ≤ b < 1. First note that from (4.1) and (4.2), we get

d(x_{n+1}, p) ≤ α_n d(y_n, p) + (1 − α_n) d(x_n, p) ≤ d(x_n, p),

which implies lim_{n→∞} [α_n d(y_n, p) + (1 − α_n) d(x_n, p)] = c. Since lim inf_{n→∞} α_n > 0, we conclude that lim_{n→∞} d(y_n, p) = c. Since β_n ≥ a > 0, we get in a similar fashion lim_{n→∞} d(T x_n, p) = c. Therefore we have

lim_{n→∞} d(x_n, p) = lim_{n→∞} d(T x_n, p) = lim_{n→∞} d(β_n T x_n ⊕ (1 − β_n) x_n, p) = c.

Using Lemma 2.5, we get lim_{n→∞} d(T x_n, x_n) = 0.

Finally, let us prove (iii). Assume that α_n, β_n ∈ [a,b], with 0 < a ≤ b < 1. Then from (i) and (ii), we know that

lim_{n→∞} d(x_n, S y_n) = 0 and lim_{n→∞} d(x_n, T x_n) = 0.

Since

d(x_n, S x_n) ≤ d(x_n, S y_n) + d(S y_n, S x_n)
≤ d(x_n, S y_n) + d(y_n, x_n)
= d(x_n, S y_n) + β_n d(T x_n, x_n)
≤ d(x_n, S y_n) + d(T x_n, x_n),

we conclude that lim_{n→∞} d(x_n, S x_n) = 0. □

The conclusion of Theorem 4.1(iii) is remarkable because the sequence generated by (2.1) gives an approximate fixed point sequence for both S and T without assuming that these mappings commute.

Remark 4.1 If we assume that 0 ≤ β_n ≤ b < 1 and lim inf_{n→∞} α_n > 0, then

lim_{n→∞} β_n d(x_n, T x_n) = 0.

Indeed, suppose this is not the case. Then there exist a subsequence {x_{n_k}} and δ > 0 such that

β_{n_k} d(x_{n_k}, T x_{n_k}) ≥ δ

for any k ≥ 1. Since {d(x_n, T x_n)} is bounded, this forces lim inf_{k→∞} β_{n_k} > 0. Without loss of generality, we may assume that β_{n_k} ≥ a > 0 for k ≥ 1. The proof of (ii), applied along this subsequence, implies

lim_{k→∞} d(x_{n_k}, T x_{n_k}) = 0,

which contradicts β_{n_k} d(x_{n_k}, T x_{n_k}) ≥ δ, since {β_{n_k}} is bounded. Therefore we must have

lim_{n→∞} β_n d(x_n, T x_n) = 0.

In particular, if we assume α_n ∈ [a,b], then, combining this with Theorem 4.1(i) and the estimate d(x_n, S x_n) ≤ d(x_n, S y_n) + β_n d(T x_n, x_n), we get

lim_{n→∞} d(x_n, S x_n) = 0.

As a direct consequence of Theorem 4.1 and Remark 4.1, we get the following result, which discusses the Δ-convergence of the iterative sequence defined by (2.1).

Theorem 4.2 Let C be a nonempty, closed and convex subset of a complete uniformly convex hyperbolic space X. Let S, T : C → C be two nonexpansive mappings such that F ≠ ∅. Fix x_1 ∈ C and generate {x_n} by (2.1). Then

(i) if α_n ∈ [a,b] and β_n ∈ [0,b], with 0 < a ≤ b < 1, then x_n →_Δ y and y ∈ F(S);

(ii) if α_n ∈ [a,1] and β_n ∈ [a,b], with 0 < a ≤ b < 1, then x_n →_Δ y and y ∈ F(T);

(iii) if α_n, β_n ∈ [a,b], with 0 < a ≤ b < 1, then x_n →_Δ y and y ∈ F.

Proof Let us prove (i). Assume α_n ∈ [a,b] and β_n ∈ [0,b], with 0 < a ≤ b < 1. Theorem 4.1 and Remark 4.1 imply that {x_n} is an approximate fixed point sequence of S, i.e.,

lim_{n→∞} d(x_n, S x_n) = 0.

Let y be the unique asymptotic center of {x_n}. Then Lemma 4.1 implies that y ∈ F(S). Let us prove that in fact {x_n} Δ-converges to y. Let {x_{n_i}} be a subsequence of {x_n}. Let z be the unique asymptotic center of {x_{n_i}}. Again, since {x_{n_i}} is an approximate fixed point sequence of S, we get z ∈ F(S). Hence

lim sup_{i→∞} d(x_{n_i}, z) ≤ lim sup_{i→∞} d(x_{n_i}, y).

Since y, z ∈ F(S), we get

lim sup_{i→∞} d(x_{n_i}, z) = lim_{n→∞} d(x_n, z) and lim sup_{i→∞} d(x_{n_i}, y) = lim_{n→∞} d(x_n, y).

Since y is the unique asymptotic center of {x_n}, we get y = z. This proves that {x_n} Δ-converges to y.

Next we prove (ii). Assume α_n ∈ [a,1] and β_n ∈ [a,b], with 0 < a ≤ b < 1. Then Theorem 4.1 implies that {x_n} is an approximate fixed point sequence of T, i.e.,

lim_{n→∞} d(x_n, T x_n) = 0.

Following the same proof as given above for (i), we get that {x_n} Δ-converges to its unique asymptotic center, which is a fixed point of T.

The conclusion (iii) follows easily from (i) and (ii). □

As a corollary to Theorem 4.2, we get the following result when S=T.

Corollary 4.1 Let C be a nonempty, closed and convex subset of a complete uniformly convex hyperbolic space X. Let T : C → C be a nonexpansive mapping with a fixed point. Suppose that {x_n} is given by (2.2), where α_n ∈ [a,b] and β_n ∈ [0,b], or α_n ∈ [a,1] and β_n ∈ [a,b], with 0 < a ≤ b < 1. Then x_n →_Δ p, with p ∈ F(T).

Using the nearest point projection, we establish the following strong convergence result.

Theorem 4.3 Let C be a nonempty, closed and convex subset of a complete uniformly convex hyperbolic space X. Let S, T : C → C be nonexpansive mappings such that F ≠ ∅. Let P be the nearest point projection of C onto F. For an initial value x_1 ∈ C, define {x_n} as given in (2.1), where α_n, β_n ∈ [a,b], with 0 < a ≤ b < 1. Then {P x_n} converges strongly to the asymptotic center of {x_n}.

Proof First, we claim that

d(P x_n, x_{n+m}) ≤ d(P x_n, x_n) for m ≥ 1, n ≥ 1.
(4.3)

We prove (4.3) by induction on m ≥ 1. For m = 1, we have

d(P x_n, x_{n+1}) = d(P x_n, α_n S y_n ⊕ (1 − α_n) x_n)
≤ α_n d(P x_n, S y_n) + (1 − α_n) d(P x_n, x_n)
≤ α_n d(P x_n, y_n) + (1 − α_n) d(P x_n, x_n)
= α_n d(P x_n, β_n T x_n ⊕ (1 − β_n) x_n) + (1 − α_n) d(P x_n, x_n)
≤ α_n [β_n d(P x_n, T x_n) + (1 − β_n) d(P x_n, x_n)] + (1 − α_n) d(P x_n, x_n)
≤ α_n [β_n d(P x_n, x_n) + (1 − β_n) d(P x_n, x_n)] + (1 − α_n) d(P x_n, x_n)
= d(P x_n, x_n).

That is,

d(P x_n, x_{n+1}) ≤ d(P x_n, x_n)

for n ≥ 1. Assume that (4.3) is true for m = k. That is,

d(P x_n, x_{n+k}) ≤ d(P x_n, x_n)

for n ≥ 1. Hence

d(P x_n, x_{n+k+1}) = d(P x_n, α_{n+k} S y_{n+k} ⊕ (1 − α_{n+k}) x_{n+k})
≤ α_{n+k} d(P x_n, S y_{n+k}) + (1 − α_{n+k}) d(P x_n, x_{n+k})
≤ α_{n+k} d(P x_n, y_{n+k}) + (1 − α_{n+k}) d(P x_n, x_{n+k})
= α_{n+k} d(P x_n, β_{n+k} T x_{n+k} ⊕ (1 − β_{n+k}) x_{n+k}) + (1 − α_{n+k}) d(P x_n, x_{n+k})
≤ α_{n+k} [β_{n+k} d(P x_n, T x_{n+k}) + (1 − β_{n+k}) d(P x_n, x_{n+k})] + (1 − α_{n+k}) d(P x_n, x_{n+k})
≤ α_{n+k} [β_{n+k} d(P x_n, x_{n+k}) + (1 − β_{n+k}) d(P x_n, x_{n+k})] + (1 − α_{n+k}) d(P x_n, x_{n+k})
= d(P x_n, x_{n+k})
≤ d(P x_n, x_n).

This completes the proof of (4.3). Let us complete the proof of Theorem 4.3. We know from Theorem 4.2(iii) that {x_n} Δ-converges to its unique asymptotic center y, which is in F. Let us prove that {P x_n} converges strongly to y. Assume not, i.e., there exist ε > 0 and a subsequence {P x_{n_i}} such that d(P x_{n_i}, y) ≥ ε for any i ≥ 1. It is clear that we must have R = d(x_1, y) > 0; otherwise {x_n} is a constant sequence. Since

d(x_{n_i}, y) ≤ d(x_{n_i}, y),
d(x_{n_i}, P x_{n_i}) ≤ d(x_{n_i}, y),
d(P x_{n_i}, y) ≥ ε = d(x_{n_i}, y) · (ε / d(x_{n_i}, y)) ≥ d(x_{n_i}, y) · (ε / R),

we get

d(x_{n_i}, ½ P x_{n_i} ⊕ ½ y) ≤ d(x_{n_i}, y) (1 − δ(d(x_{n_i}, y), ε/R))

for any i ≥ 1. Using the properties of the modulus of uniform convexity (note that d(x_{n_i}, y) ≥ d(P x_{n_i}, y)/2 ≥ ε/2 > 0, so the standing assumption on η(s, ε/R) applies, for instance with s = ε/4), there exists η > 0 such that

δ(d(x_{n_i}, y), ε/R) ≥ η

for any i ≥ 1. Hence

d(x_{n_i}, ½ P x_{n_i} ⊕ ½ y) ≤ d(x_{n_i}, y)(1 − η)

for any i ≥ 1. Since F is convex, ½ P x_{n_i} ⊕ ½ y ∈ F; using the definition of the nearest point projection P, we get

d(x_{n_i}, P x_{n_i}) ≤ d(x_{n_i}, y)(1 − η)

for any i ≥ 1. Using the inequality (4.3) above, we get

d(x_{n_i + m}, P x_{n_i}) ≤ d(x_{n_i}, y)(1 − η)

for any i ≥ 1 and m ≥ 1. Since P x_{n_i} ∈ F, we know that {d(x_n, P x_{n_i})} is decreasing (in n, for fixed n_i). Hence

lim sup_{m→∞} d(x_{n_i + m}, P x_{n_i}) = lim_{n→∞} d(x_n, P x_{n_i}) ≤ d(x_{n_i}, y)(1 − η)

for any i ≥ 1. Since y is the asymptotic center of {x_n}, we get

lim_{n→∞} d(x_n, y) ≤ lim_{n→∞} d(x_n, P x_{n_i}) ≤ d(x_{n_i}, y)(1 − η)

for any i ≥ 1. Finally, since y ∈ F, if we let n_i → ∞, we get

lim_{n→∞} d(x_n, y) ≤ lim_{n→∞} d(x_n, y)(1 − η).

Since ε ≤ d(P x_{n_i}, y) ≤ d(P x_{n_i}, x_{n_i}) + d(x_{n_i}, y) ≤ 2 d(x_{n_i}, y), we conclude that ε/2 ≤ lim_{n→∞} d(x_n, y), which implies 1 ≤ 1 − η, our desired contradiction. Therefore {P x_n} converges strongly to y. □