1 Introduction

In this paper, unless otherwise specified, let R be the set of real numbers and C a nonempty, closed and convex subset of a real reflexive Banach space E with dual space E*. The norm and the duality pairing between E and E* are denoted by ‖·‖ and ⟨·,·⟩, respectively. Let f : E → (−∞,+∞] be a proper, convex and lower semicontinuous function. Denote the domain of f by dom f, i.e., dom f = {x ∈ E : f(x) < +∞}. The Fenchel conjugate of f is the function f* : E* → (−∞,+∞] defined by f*(ξ) = sup{⟨ξ, x⟩ − f(x) : x ∈ E}. Let T : C → C be a nonlinear mapping. Denote by F(T) = {x ∈ C : Tx = x} the set of fixed points of T. A mapping T is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C.
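For intuition, the Fenchel conjugate can be approximated numerically by maximizing ⟨ξ, x⟩ − f(x) over a grid. The following Python sketch is purely illustrative (the grid, the test function f(x) = ½x² and its known conjugate f*(ξ) = ½ξ² are assumptions for E = R, not part of the paper):

```python
# Grid approximation of the Fenchel conjugate f*(xi) = sup{ xi*x - f(x) : x in E }
# on E = R.  For f(x) = x**2 / 2 the conjugate is known in closed form,
# f*(xi) = xi**2 / 2, which the brute-force maximization should approximate.

def fenchel_conjugate(f, xi, grid):
    """Approximate f*(xi) by maximizing xi*x - f(x) over a finite grid."""
    return max(xi * x - f(x) for x in grid)

f = lambda x: 0.5 * x * x
grid = [i / 1000.0 for i in range(-5000, 5001)]  # [-5, 5] with step 1e-3

for xi in (-2.0, 0.0, 1.5):
    approx = fenchel_conjugate(f, xi, grid)
    exact = 0.5 * xi * xi
    assert abs(approx - exact) < 1e-3
```

The supremum is attained at x = ξ here, so the grid maximum is accurate up to the grid resolution.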

In 1994, Blum and Oettli [1] first studied the equilibrium problem: find x̄ ∈ C such that

H(x̄, y) ≥ 0, ∀y ∈ C,
(1.1)

where H : C×C → R is a bifunction. Denote the set of solutions of problem (1.1) by EP(H). Since then, various equilibrium problems have been investigated. It is well known that equilibrium problems and their generalizations are important tools for solving problems arising in linear and nonlinear programming, variational inequalities, complementarity problems, optimization problems and fixed point problems, and they have been widely applied in physics, structural analysis, management science, economics, etc. One of the most important and interesting topics in the theory of equilibria is the development of efficient and implementable algorithms for solving equilibrium problems and their generalizations (see, e.g., [2–8] and the references therein). Since equilibrium problems are closely connected with both fixed point problems and variational inequality problems, finding common elements of the solution sets of these problems has attracted much attention and has become a very active topic in the related fields in the past few years (see, e.g., [9–16] and the references therein).

In 1967, Bregman [17] discovered an elegant and effective technique based on the so-called Bregman distance function D_f (see Section 2, Definition 2.1) for designing and analyzing feasibility and optimization algorithms. This opened a growing area of research in which Bregman's technique has been applied in various ways to design and analyze not only iterative algorithms for solving feasibility and optimization problems, but also algorithms for solving variational inequalities, approximating equilibria, computing fixed points of nonlinear mappings and so on (see, e.g., [18–24] and the references therein). In 2005, Butnariu and Resmerita [25] presented Bregman-type iterative algorithms and studied their convergence for solving certain nonlinear operator equations.

Recently, by using the Bregman projection, Reich and Sabach [26] presented the following two algorithms for finding common zeroes of maximal monotone operators A_i : E → 2^{E*} (i = 1, 2, …, N) in a reflexive Banach space E:

x_0 ∈ E,
y_n^i = Res^f_{λ_n^i A_i}(x_n + e_n^i),
C_n^i = {z ∈ E : D_f(z, y_n^i) ≤ D_f(z, x_n + e_n^i)},
C_n = ⋂_{i=1}^{N} C_n^i,
Q_n = {z ∈ E : ⟨∇f(x_0) − ∇f(x_n), z − x_n⟩ ≤ 0},
x_{n+1} = proj^f_{C_n ∩ Q_n}(x_0), n ≥ 0

and

x_0 ∈ E,
η_n^i = ξ_n^i + (1/λ_n^i)(∇f(y_n^i) − ∇f(x_n)), ξ_n^i ∈ A_i y_n^i,
ω_n^i = ∇f*(λ_n^i η_n^i + ∇f(x_n)),
C_n^i = {z ∈ E : D_f(z, y_n^i) ≤ D_f(z, ω_n^i)},
C_n = ⋂_{i=1}^{N} C_n^i,
Q_n = {z ∈ E : ⟨∇f(x_0) − ∇f(x_n), z − x_n⟩ ≤ 0},
x_{n+1} = proj^f_{C_n ∩ Q_n}(x_0), n ≥ 0,

where {λ_n^i}_{i=1}^{N} ⊂ (0,+∞), {e_n^i}_{i=1}^{N} is an error sequence in E with e_n^i → 0 and proj^f_C is the Bregman projection with respect to f from E onto a closed and convex subset C. Further, under some suitable conditions, they obtained two strong convergence theorems for maximal monotone operators in a reflexive Banach space. Reich and Sabach [7] also studied the convergence of two iterative algorithms for finitely many Bregman strongly nonexpansive operators in a Banach space. In [15], Reich and Sabach proposed the following algorithm for finding common fixed points of finitely many Bregman firmly nonexpansive operators T_i : C → C (i = 1, 2, …, N) in a reflexive Banach space E, provided that ⋂_{i=1}^{N} F(T_i) ≠ ∅:

x_0 ∈ E, Q_0^i = E, i = 1, 2, …, N,
y_n^i = T_i(x_n + e_n^i),
Q_{n+1}^i = {z ∈ Q_n^i : ⟨∇f(x_n + e_n^i) − ∇f(y_n^i), z − y_n^i⟩ ≤ 0},
Q_{n+1} = ⋂_{i=1}^{N} Q_{n+1}^i,
x_{n+1} = proj^f_{Q_{n+1}}(x_0), n ≥ 0.
(1.2)

Under some suitable conditions, they proved that the sequence {x_n} generated by (1.2) converges strongly to a point of ⋂_{i=1}^{N} F(T_i) and applied the result to the solution of convex feasibility and equilibrium problems.

Very recently, Chen et al. [27] introduced the concept of weak Bregman relatively nonexpansive mappings in a reflexive Banach space and gave an example to illustrate the existence of a weak Bregman relatively nonexpansive mapping and the difference between a weak Bregman relatively nonexpansive mapping and a Bregman relatively nonexpansive mapping. They also proved the strong convergence of the sequences generated by the constructed algorithms with errors for finding a fixed point of weak Bregman relatively nonexpansive mappings and Bregman relatively nonexpansive mappings under some suitable conditions.

This paper is devoted to investigating the shrinking projection algorithm based on the prediction correction method for finding a common element of solutions to the equilibrium problem (1.1) and fixed points to weak Bregman relatively nonexpansive mappings in Banach spaces, and then the strong convergence of the sequence generated by the proposed algorithm is derived under some suitable assumptions.

2 Preliminaries

Let T : C → C be a nonlinear mapping. A point ω ∈ C is called an asymptotic fixed point of T (see [28]) if C contains a sequence {x_n} which converges weakly to ω such that lim_{n→∞} ‖T x_n − x_n‖ = 0. A point ω ∈ C is called a strong asymptotic fixed point of T (see [28]) if C contains a sequence {x_n} which converges strongly to ω such that lim_{n→∞} ‖T x_n − x_n‖ = 0. We denote the sets of asymptotic fixed points and strong asymptotic fixed points of T by F̂(T) and F̃(T), respectively.

Let {x_n} be a sequence in E; we denote the strong convergence of {x_n} to x ∈ E by x_n → x. For any x ∈ int(dom f), the right-hand derivative of f at x in the direction y ∈ E is defined by

f°(x, y) := lim_{t→0⁺} (f(x + ty) − f(x)) / t.

f is called Gâteaux differentiable at x if, for all y ∈ E, lim_{t→0} (f(x + ty) − f(x)) / t exists. In this case, f°(x, y) coincides with ⟨∇f(x), y⟩, where ∇f(x) is the value of the gradient ∇f of f at x. f is called Gâteaux differentiable if it is Gâteaux differentiable at every x ∈ int(dom f). f is called Fréchet differentiable at x if this limit is attained uniformly for ‖y‖ = 1. We say that f is uniformly Fréchet differentiable on a subset C of E if the limit is attained uniformly for x ∈ C and ‖y‖ = 1.
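The right-hand derivative above can be checked against a finite-difference quotient; a minimal one-dimensional sketch (the test function f(x) = x⁴ is an illustrative assumption):

```python
# Finite-difference check of the directional derivative
# f'(x; y) = lim_{t -> 0+} (f(x + t*y) - f(x)) / t.
# For Gateaux-differentiable f one has f'(x; y) = <grad f(x), y>; here
# f(x) = x**4 on R, so grad f(x) = 4*x**3.

def directional_derivative(f, x, y, t=1e-6):
    """One-sided difference quotient approximating f'(x; y)."""
    return (f(x + t * y) - f(x)) / t

f = lambda x: x ** 4
x, y = 1.5, -2.0
approx = directional_derivative(f, x, y)
exact = 4 * x ** 3 * y          # <grad f(x), y> in one dimension
assert abs(approx - exact) < 1e-3
```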

The notion of a Legendre function f : E → (−∞,+∞] is defined in [18]. From [18], if E is a reflexive Banach space, then f is Legendre if and only if it satisfies the following conditions (L1) and (L2):

(L1) The interior of the domain of f, int(dom f), is nonempty, f is Gâteaux differentiable on int(dom f), and dom ∇f = int(dom f);

(L2) The interior of the domain of f*, int(dom f*), is nonempty, f* is Gâteaux differentiable on int(dom f*), and dom ∇f* = int(dom f*).

Since E is reflexive, we know that (∂f)^{−1} = ∂f* (see [29]). This, together with (L1) and (L2), implies the following equalities:

∇f = (∇f*)^{−1}, ran ∇f = dom ∇f* = int(dom f*)

and

ran ∇f* = dom ∇f = int(dom f).

By Bauschke et al. [[18], Theorem 5.4], the conditions (L1) and (L2) also yield that the functions f and f* are strictly convex on the interiors of their respective domains. From now on we assume that the convex function f : E → (−∞,+∞] is Legendre.

We first recall some definitions and lemmas which are needed in our main results.

Assumption 2.1 Let C be a nonempty, closed and convex subset of a uniformly convex and uniformly smooth Banach space E, and let H : C×C → R satisfy the following conditions (C1)–(C4):

(C1) H(x, x) = 0 for all x ∈ C;

(C2) H is monotone, i.e., H(x, y) + H(y, x) ≤ 0 for all x, y ∈ C;

(C3) for all x, y, z ∈ C,

lim sup_{t→0⁺} H(tz + (1 − t)x, y) ≤ H(x, y);

(C4) for all x ∈ C, H(x, ·) is convex and lower semicontinuous.

Definition 2.1 [3, 17]

Let f : E → (−∞,+∞] be a Gâteaux differentiable and convex function. The function D_f : dom f × int(dom f) → [0,+∞) defined by

D_f(y, x) := f(y) − f(x) − ⟨∇f(x), y − x⟩

is called the Bregman distance with respect to f.

Remark 2.1 [15]

The Bregman distance has the following properties:

(1) the three point identity: for any x ∈ dom f and y, z ∈ int(dom f),

D_f(x, y) + D_f(y, z) − D_f(x, z) = ⟨∇f(z) − ∇f(y), x − y⟩;

(2) the four point identity: for any y, ω ∈ dom f and x, z ∈ int(dom f),

D_f(y, x) − D_f(y, z) − D_f(ω, x) + D_f(ω, z) = ⟨∇f(z) − ∇f(x), y − ω⟩.
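Both identities are straightforward to verify numerically. A sketch for E = R with the illustrative choice f(x) = x log x − x on (0,∞), where ∇f(x) = log x and D_f becomes a Kullback–Leibler-type distance (this concrete f is an assumption for illustration, not used elsewhere in the paper):

```python
import math

# Bregman distance D_f(y, x) = f(y) - f(x) - <grad f(x), y - x> on E = R,
# with f(x) = x*log(x) - x, so grad f(x) = log(x) and
# D_f(y, x) = y*log(y/x) - y + x  (a Kullback-Leibler-type distance).

def f(x):       return x * math.log(x) - x
def grad_f(x):  return math.log(x)

def D(y, x):
    return f(y) - f(x) - grad_f(x) * (y - x)

x, y, z = 0.7, 2.0, 3.5

# Three point identity:
# D(x,y) + D(y,z) - D(x,z) = <grad f(z) - grad f(y), x - y>.
lhs = D(x, y) + D(y, z) - D(x, z)
rhs = (grad_f(z) - grad_f(y)) * (x - y)
assert abs(lhs - rhs) < 1e-12

# D_f is nonnegative but, unlike a metric, not symmetric in general.
assert D(y, x) >= 0 and abs(D(y, x) - D(x, y)) > 1e-6
```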

Definition 2.2 [17]

Let f : E → (−∞,+∞] be a Gâteaux differentiable and convex function. The Bregman projection of x ∈ int(dom f) onto the nonempty, closed and convex set C ⊂ dom f is the necessarily unique vector proj^f_C(x) ∈ C satisfying

D_f(proj^f_C(x), x) = inf{D_f(y, x) : y ∈ C}.

Remark 2.2

(1) If E is a Hilbert space and f(x) = ½‖x‖² for all x ∈ E, then the Bregman projection proj^f_C(x) reduces to the metric projection of x onto C;

(2) If E is a smooth Banach space and f(x) = ½‖x‖² for all x ∈ E, then the Bregman projection proj^f_C(x) reduces to the generalized projection Π_C(x) (see [11, 28]), which is defined by

φ(Π_C(x), x) = min_{y ∈ C} φ(y, x),

where φ(y, x) = ‖y‖² − 2⟨y, J(x)⟩ + ‖x‖² and J is the normalized duality mapping from E to 2^{E*}.
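The reduction in part (1) is easy to see numerically: for f(x) = ½‖x‖² one has D_f(y, x) = ½‖y − x‖², so minimizing D_f(·, x) over C is exactly the nearest-point problem. A sketch for C a closed interval of R (the interval and test points are illustrative assumptions):

```python
# With f(x) = 0.5*||x||**2 one has D_f(y, x) = 0.5*(y - x)**2 on E = R,
# so the Bregman projection onto C = [a, b] coincides with the metric
# projection, i.e. clamping x to [a, b].

def D(y, x):
    return 0.5 * (y - x) ** 2

def bregman_proj_interval(x, a, b, step=1e-4):
    """Minimize D_f(., x) over a fine grid of C = [a, b] (brute force)."""
    n = int((b - a) / step)
    grid = [a + i * step for i in range(n + 1)]
    return min(grid, key=lambda y: D(y, x))

def metric_proj_interval(x, a, b):
    return min(max(x, a), b)

for x in (-3.0, 0.25, 7.1):
    assert abs(bregman_proj_interval(x, 0.0, 1.0)
               - metric_proj_interval(x, 0.0, 1.0)) < 1e-3
```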

Definition 2.3 [21, 26, 27]

Let C be a nonempty, closed and convex subset of dom f. The operator T : C → int(dom f) with F(T) ≠ ∅ is called:

(1) quasi-Bregman nonexpansive if

D_f(u, Tx) ≤ D_f(u, x), ∀x ∈ C, u ∈ F(T);

(2) Bregman relatively nonexpansive if F̂(T) = F(T) and

D_f(u, Tx) ≤ D_f(u, x), ∀x ∈ C, u ∈ F(T);

(3) Bregman firmly nonexpansive if

⟨∇f(Tx) − ∇f(Ty), Tx − Ty⟩ ≤ ⟨∇f(x) − ∇f(y), Tx − Ty⟩, ∀x, y ∈ C,

or, equivalently,

D_f(Tx, Ty) + D_f(Ty, Tx) + D_f(Tx, x) + D_f(Ty, y) ≤ D_f(Tx, y) + D_f(Ty, x), ∀x, y ∈ C;

(4) weak Bregman relatively nonexpansive if F̃(T) = F(T) and

D_f(u, Tx) ≤ D_f(u, x), ∀x ∈ C, u ∈ F(T).

Definition 2.4 [4]

Let H : C×C → R be a bifunction. The resolvent of H is the operator Res_H^f : E → 2^C defined by

Res_H^f(x) = {z ∈ C : H(z, y) + ⟨∇f(z) − ∇f(x), y − z⟩ ≥ 0, ∀y ∈ C}.
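For a concrete feel of this definition, take E = R, f(x) = ½x² (so ∇f is the identity), C = [a, b], and the illustrative monotone bifunction H(x, y) = c·x·(y − x) with c > 0 (an assumption chosen to satisfy (C1)–(C4), not from the paper). The defining inequality becomes (c·z + z − x)(y − z) ≥ 0 for all y ∈ [a, b], whose unique solution is z = clamp(x / (1 + c), [a, b]). A sketch verifying this closed form against the definition:

```python
# Resolvent Res_H^f(x) = { z in C : H(z,y) + <grad f(z) - grad f(x), y - z> >= 0
# for all y in C }, specialized to E = R, f(x) = 0.5*x**2 (grad f = identity),
# C = [a, b], and the illustrative bifunction H(x, y) = c*x*(y - x), c > 0.
# The inequality reads (c*z + z - x)*(y - z) >= 0 for all y in [a, b],
# so z = clamp(x / (1 + c), [a, b]).

def resolvent(x, c, a, b):
    z = x / (1.0 + c)
    return min(max(z, a), b)

def satisfies_resolvent_inequality(z, x, c, a, b, samples=1000):
    """Check H(z, y) + (z - x)*(y - z) >= 0 on a grid of y in [a, b]."""
    for i in range(samples + 1):
        y = a + (b - a) * i / samples
        if c * z * (y - z) + (z - x) * (y - z) < -1e-9:
            return False
    return True

c, a, b = 2.0, -1.0, 1.0
for x in (-5.0, 0.3, 4.0):
    z = resolvent(x, c, a, b)
    assert satisfies_resolvent_inequality(z, x, c, a, b)
```

Note that H is monotone here since H(x, y) + H(y, x) = −c(x − y)² ≤ 0.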

Definition 2.5 [21]

Let f : E → (−∞,+∞] be a convex and Gâteaux differentiable function. f is called:

(1) totally convex at x ∈ int(dom f) if its modulus of total convexity at x, that is, the function ν_f : int(dom f) × [0,+∞) → [0,+∞) defined by

ν_f(x, t) := inf{D_f(y, x) : y ∈ dom f, ‖y − x‖ = t},

is positive whenever t > 0;

(2) totally convex if it is totally convex at every point x ∈ int(dom f);

(3) totally convex on bounded sets if ν_f(B, t) is positive for any nonempty bounded subset B of E and t > 0, where the modulus of total convexity of the function f on the set B is defined by

ν_f(B, t) := inf{ν_f(x, t) : x ∈ B ∩ dom f}.

Definition 2.6 [21, 26]

The function f : E → (−∞,+∞] is called:

(1) cofinite if dom f* = E*;

(2) coercive if lim_{‖x‖→+∞} f(x)/‖x‖ = +∞;

(3) sequentially consistent if, for any two sequences {x_n} and {y_n} in E such that {x_n} is bounded,

lim_{n→∞} D_f(y_n, x_n) = 0 ⟹ lim_{n→∞} ‖y_n − x_n‖ = 0.

Lemma 2.1 [[26], Proposition 2.3]

If f : E → (−∞,+∞] is Fréchet differentiable and totally convex, then f is cofinite.

Lemma 2.2 [[25], Theorem 2.10]

Let f : E → (−∞,+∞] be a convex function whose domain contains at least two points. Then the following statements hold:

(1) f is sequentially consistent if and only if it is totally convex on bounded sets;

(2) if f is lower semicontinuous, then f is sequentially consistent if and only if it is uniformly convex on bounded sets;

(3) if f is uniformly strictly convex on bounded sets, then it is sequentially consistent, and the converse implication holds when f is lower semicontinuous, Fréchet differentiable on its domain and the Fréchet derivative ∇f is uniformly continuous on bounded sets.

Lemma 2.3 [[30], Proposition 2.1]

Let f : E → R be uniformly Fréchet differentiable and bounded on bounded subsets of E. Then ∇f is uniformly continuous on bounded subsets of E from the strong topology of E to the strong topology of E*.

Lemma 2.4 [[26], Lemma 3.1]

Let f : E → R be a Gâteaux differentiable and totally convex function. If x_0 ∈ E and the sequence {D_f(x_n, x_0)} is bounded, then the sequence {x_n} is also bounded.

Lemma 2.5 [[26], Proposition 2.2]

Let f : E → R be a Gâteaux differentiable and totally convex function, x_0 ∈ E, and let C be a nonempty, closed and convex subset of E. Suppose that the sequence {x_n} is bounded and any weak subsequential limit of {x_n} belongs to C. If D_f(x_n, x_0) ≤ D_f(proj^f_C(x_0), x_0) for any n ∈ N, then {x_n}_{n=1}^{∞} converges strongly to proj^f_C(x_0).

Lemma 2.6 [[27], Proposition 2.17]

Let f : E → (−∞,+∞] be a Legendre function. Let C be a nonempty, closed and convex subset of int(dom f) and T : C → C be a quasi-Bregman nonexpansive mapping with respect to f. Then F(T) is closed and convex.

Lemma 2.7 [[27], Lemma 2.18]

Let f : E → (−∞,+∞] be Gâteaux differentiable and proper, convex and lower semicontinuous. Then, for all z ∈ E,

D_f(z, ∇f*(∑_{i=1}^{N} t_i ∇f(x_i))) ≤ ∑_{i=1}^{N} t_i D_f(z, x_i),

where {x_i}_{i=1}^{N} ⊂ E and {t_i}_{i=1}^{N} ⊂ (0,1) with ∑_{i=1}^{N} t_i = 1.

Lemma 2.8 [[25], Corollary 4.4]

Let f : E → (−∞,+∞] be Gâteaux differentiable and totally convex on int(dom f). Let x ∈ int(dom f) and let C ⊂ int(dom f) be a nonempty, closed and convex set. If x̂ ∈ C, then the following statements are equivalent:

(1) the vector x̂ is the Bregman projection of x onto C with respect to f;

(2) the vector x̂ is the unique solution of the variational inequality

⟨∇f(x) − ∇f(x̂), x̂ − y⟩ ≥ 0, ∀y ∈ C;

(3) the vector x̂ is the unique solution of the inequality

D_f(y, x̂) + D_f(x̂, x) ≤ D_f(y, x), ∀y ∈ C.

Lemma 2.9 [[7], Lemmas 1 and 2]

Let f : E → (−∞,+∞] be a coercive Legendre function. Let C be a nonempty, closed and convex subset of int(dom f). Assume that H : C×C → R satisfies Assumption 2.1. Then the following results hold:

(1) Res_H^f is single-valued and dom(Res_H^f) = E;

(2) Res_H^f is Bregman firmly nonexpansive;

(3) EP(H) is a closed and convex subset of C and EP(H) = F(Res_H^f);

(4) for all x ∈ E and u ∈ F(Res_H^f),

D_f(u, Res_H^f(x)) + D_f(Res_H^f(x), x) ≤ D_f(u, x).

Lemma 2.10 [[31], Proposition 5]

Let f : E → R be a Legendre function such that ∇f* is bounded on bounded subsets of int(dom f*). Let x ∈ E. If {D_f(x, x_n)} is bounded, then the sequence {x_n} is bounded too.

3 Main results

In this section, we will introduce a new shrinking projection algorithm based on the prediction correction method for finding a common element of solutions to the equilibrium problem (1.1) and fixed points to weak Bregman relatively nonexpansive mappings in Banach spaces, and then the strong convergence of the sequence generated by the proposed algorithm is proved under some suitable conditions.

Let {α_n} and {β_n} be sequences in [0,1] such that lim_{n→∞} α_n = 0 and lim inf_{n→∞} (1 − α_n)β_n > 0. We propose the following shrinking projection algorithm based on the prediction correction method.

Algorithm 3.1 Step 1: Select an arbitrary starting point x_0 ∈ C. Let Q_0 = C and C_0 = {z ∈ C : D_f(z, u_0) ≤ D_f(z, x_0)}.

Step 2: Given the current iterate x n , calculate the next iterate as follows:

z_n = ∇f*(β_n ∇f(T(x_n)) + (1 − β_n)∇f(x_n)),
y_n = ∇f*(α_n ∇f(x_0) + (1 − α_n)∇f(z_n)),
u_n = Res_H^f(y_n),
C_n = {z ∈ C_{n−1} ∩ Q_{n−1} : D_f(z, u_n) ≤ α_n D_f(z, x_0) + (1 − α_n) D_f(z, x_n)},
Q_n = {z ∈ C_{n−1} ∩ Q_{n−1} : ⟨∇f(x_0) − ∇f(x_n), z − x_n⟩ ≤ 0},
x_{n+1} = proj^f_{C_n ∩ Q_n}(x_0), n ≥ 0.
(3.1)
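To see the iteration at work, here is a toy one-dimensional sketch in the Hilbert space E = R with f(x) = ½x², so that ∇f = ∇f* is the identity, D_f(y, x) = ½(y − x)², and proj^f is the metric projection. All concrete data are illustrative assumptions, not from the paper: C = [−1, 1], T(x) = x/2 (so F(T) = {0}), H(x, y) = c·x·(y − x) with c = 2 (so EP(H) = {0} and Res_H^f(x) = clamp(x/(1 + c))), α_n = 1/(n + 2), β_n = ½, and the initial set C_0 ∩ Q_0 is simplified to C. In this setting C_n and Q_n are intervals (the quadratic terms in the C_n inequality cancel), so the projection is a clamp, and the iterates should approach EP(H) ∩ F(T) = {0}:

```python
# One-dimensional toy instance of the shrinking projection iteration (3.1)
# with f(x) = 0.5*x**2: grad f = grad f* = identity, D_f(y,x) = 0.5*(y-x)**2,
# and proj^f = metric projection.  Illustrative data (assumptions): C = [-1,1],
# T(x) = x/2, H(x,y) = c*x*(y-x) with c = 2 so Res_H^f(x) = clamp(x/(1+c)),
# alpha_n = 1/(n+2), beta_n = 0.5.  Then EP(H) ∩ F(T) = {0}.

def clamp(x, lo, hi):
    return min(max(x, lo), hi)

def run(x0=0.8, c=2.0, lo=-1.0, hi=1.0, steps=5000):
    s_lo, s_hi = lo, hi              # current interval C_{n-1} ∩ Q_{n-1}
    x = x0
    for n in range(steps):
        alpha, beta = 1.0 / (n + 2), 0.5
        z = beta * (x / 2.0) + (1 - beta) * x      # z_n with T(x) = x/2
        y = alpha * x0 + (1 - alpha) * z           # y_n
        u = clamp(y / (1.0 + c), lo, hi)           # u_n = Res_H^f(y_n)
        # C_n: 0.5*(t-u)**2 <= alpha*0.5*(t-x0)**2 + (1-alpha)*0.5*(t-x)**2.
        # The t**2 terms cancel, leaving the linear condition
        # 2*t*(w - u) <= s - u*u, i.e. a half-line.
        w = alpha * x0 + (1 - alpha) * x
        s = alpha * x0 ** 2 + (1 - alpha) * x ** 2
        if w > u:
            s_hi = min(s_hi, (s - u * u) / (2 * (w - u)))
        elif w < u:
            s_lo = max(s_lo, (s - u * u) / (2 * (w - u)))
        # Q_n: (x0 - x)*(t - x) <= 0, another half-line.
        if x0 > x:
            s_hi = min(s_hi, x)
        elif x0 < x:
            s_lo = max(s_lo, x)
        x = clamp(x0, s_lo, s_hi)                  # x_{n+1} = proj(x0)
    return x

x_final = run()
assert 0 < x_final < 0.05   # iterates shrink toward EP(H) ∩ F(T) = {0}
```

The decay is slow (driven by α_n → 0, roughly like 1/√n here), which matches the quasi-static role the term α_n D_f(z, x_0) plays in C_n.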

Theorem 3.1 Let C be a nonempty, closed and convex subset of a real reflexive Banach space E, let f : E → R be a coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E, and let ∇f* be bounded on bounded subsets of E*. Let H : C×C → R satisfy Assumption 2.1 and let T : C → C be a weak Bregman relatively nonexpansive mapping such that EP(H) ∩ F(T) ≠ ∅. Then the sequence {x_n} generated by Algorithm 3.1 converges strongly to proj^f_{EP(H)∩F(T)}(x_0), the Bregman projection of x_0 onto EP(H) ∩ F(T).

To prove Theorem 3.1, we need the following lemmas.

Lemma 3.1 Assume that EP(H) ∩ F(T) ⊂ C_n ∩ Q_n for all n ≥ 0. Then the sequence {x_n} is bounded.

Proof Since ⟨∇f(x_0) − ∇f(x_n), v − x_n⟩ ≤ 0 for all v ∈ Q_n, it follows from Lemma 2.8 that x_n = proj^f_{Q_n}(x_0) and so, by x_{n+1} = proj^f_{C_n ∩ Q_n}(x_0) ∈ Q_n, we have

D_f(x_n, x_0) ≤ D_f(x_{n+1}, x_0).
(3.2)

Let ω ∈ EP(H) ∩ F(T). It follows from Lemma 2.8 that

D_f(ω, proj^f_{Q_n}(x_0)) + D_f(proj^f_{Q_n}(x_0), x_0) ≤ D_f(ω, x_0)

and so

D_f(x_n, x_0) ≤ D_f(ω, x_0) − D_f(ω, x_n) ≤ D_f(ω, x_0).

Therefore, {D_f(x_n, x_0)} is bounded. Moreover, {x_n} is bounded and so are {T(x_n)}, {y_n} and {z_n}. This completes the proof. □

Lemma 3.2 Assume that EP(H) ∩ F(T) ⊂ C_n ∩ Q_n for all n ≥ 0. Then the sequence {x_n} is a Cauchy sequence.

Proof By the proof of Lemma 3.1, {D_f(x_n, x_0)} is bounded and, by (3.2), nondecreasing, so lim_{n→∞} D_f(x_n, x_0) exists. From x_m ∈ Q_{m−1} ⊂ Q_n for all m > n and Lemma 2.8, one has

D_f(x_m, proj^f_{Q_n}(x_0)) + D_f(proj^f_{Q_n}(x_0), x_0) ≤ D_f(x_m, x_0)

and so D_f(x_m, x_n) ≤ D_f(x_m, x_0) − D_f(x_n, x_0). Therefore, we have

lim_{m,n→∞} D_f(x_m, x_n) ≤ lim_{m,n→∞} (D_f(x_m, x_0) − D_f(x_n, x_0)) = 0.
(3.3)

Since f is totally convex on bounded subsets of E, by Definition 2.6, Lemma 2.2 and (3.3), we obtain

lim_{m,n→∞} ‖x_m − x_n‖ = 0.
(3.4)

Thus {x_n} is a Cauchy sequence and, in particular, lim_{n→∞} ‖x_{n+1} − x_n‖ = 0. This completes the proof. □

Lemma 3.3 Assume that EP(H) ∩ F(T) ⊂ C_n ∩ Q_n for all n ≥ 0. Then the sequence {x_n} converges strongly to a point in EP(H) ∩ F(T).

Proof From Lemma 3.2, {x_n} is a Cauchy sequence; since C is closed, let x_n → ω̂ ∈ C. Since f is uniformly Fréchet differentiable on bounded subsets of E, it follows from Lemma 2.3 that ∇f is norm-to-norm uniformly continuous on bounded subsets of E. Hence, by (3.4), we have

lim_{m,n→∞} ‖∇f(x_m) − ∇f(x_n)‖ = 0

and so

lim_{n→∞} ‖∇f(x_{n+1}) − ∇f(x_n)‖ = 0.
(3.5)

Since x_{n+1} ∈ C_n, we have

D_f(x_{n+1}, u_n) ≤ α_n D_f(x_{n+1}, x_0) + (1 − α_n) D_f(x_{n+1}, x_n).

It follows from lim_{n→∞} α_n = 0 and lim_{n→∞} D_f(x_{n+1}, x_n) = 0 that {D_f(x_{n+1}, u_n)} is bounded and

lim_{n→∞} D_f(x_{n+1}, u_n) = 0.

By Lemma 2.10, {u_n} is bounded. Hence lim_{n→∞} ‖x_{n+1} − u_n‖ = 0 and so

lim_{n→∞} ‖∇f(x_{n+1}) − ∇f(u_n)‖ = 0.
(3.6)

Taking into account ‖x_n − u_n‖ ≤ ‖x_n − x_{n+1}‖ + ‖x_{n+1} − u_n‖, we obtain

lim_{n→∞} ‖x_n − u_n‖ = 0

and so u_n → ω̂ as n → ∞. For any ω ∈ EP(H) ∩ F(T), from Lemma 2.9(4) applied with u_n = Res_H^f(y_n), we get

D_f(u_n, y_n) ≤ D_f(ω, y_n) − D_f(ω, u_n).

Since f is a coercive Legendre function which is bounded, uniformly Fréchet differentiable and totally convex on bounded subsets of E, it follows from Lemma 2.3 that f is continuous on E and ∇f is uniformly continuous on bounded subsets of E from the strong topology of E to the strong topology of E*. Combining this with the three point identity of the Bregman distance, we deduce that lim_{n→∞} D_f(u_n, y_n) = 0. Furthermore, one has lim_{n→∞} ‖u_n − y_n‖ = 0 and thus

lim_{n→∞} ‖∇f(u_n) − ∇f(y_n)‖ = 0.

Since u_n → ω̂ as n → ∞, we have y_n → ω̂ as n → ∞. Further, in the light of u_n = Res_H^f(y_n) and Definition 2.4, it follows that, for each y ∈ C,

H(u_n, y) + ⟨∇f(u_n) − ∇f(y_n), y − u_n⟩ ≥ 0

and hence, combining this with the monotonicity of H in Assumption 2.1,

⟨∇f(u_n) − ∇f(y_n), y − u_n⟩ ≥ −H(u_n, y) ≥ H(y, u_n).

Consequently, one can conclude that

H(y, ω̂) ≤ lim inf_{n→∞} H(y, u_n) ≤ lim inf_{n→∞} ⟨∇f(u_n) − ∇f(y_n), y − u_n⟩ ≤ lim inf_{n→∞} ‖∇f(u_n) − ∇f(y_n)‖ ‖y − u_n‖ = 0.

For any y ∈ C and t ∈ (0,1], let y_t = ty + (1 − t)ω̂ ∈ C. Since H(y, ω̂) ≤ 0 for all y ∈ C, we have H(y_t, ω̂) ≤ 0 and, by Assumption 2.1,

0 = H(y_t, y_t) ≤ t H(y_t, y) + (1 − t) H(y_t, ω̂) ≤ t H(y_t, y)

and so H(y_t, y) ≥ 0. Moreover, one has

0 ≤ lim sup_{t→0⁺} H(y_t, y) = lim sup_{t→0⁺} H(ty + (1 − t)ω̂, y) ≤ H(ω̂, y), ∀y ∈ C,

which shows that ω̂ ∈ EP(H).

Next, we prove that ω̂ ∈ F(T). Note that, by the definitions of z_n and y_n in (3.1),

∇f(y_n) − ∇f(x_n) = α_n(∇f(x_0) − ∇f(x_n)) + (1 − α_n)β_n(∇f(T(x_n)) − ∇f(x_n)).

This implies that

(1 − α_n)β_n ‖∇f(x_n) − ∇f(T(x_n))‖ ≤ ‖∇f(x_n) − ∇f(y_n)‖ + α_n ‖∇f(x_n) − ∇f(x_0)‖.
(3.7)

Letting n → ∞ in (3.7), it follows from lim inf_{n→∞} (1 − α_n)β_n > 0, lim_{n→∞} α_n = 0 and lim_{n→∞} ‖∇f(x_n) − ∇f(y_n)‖ = 0 that

lim_{n→∞} ‖∇f(x_n) − ∇f(T(x_n))‖ = 0.

Moreover, we have lim_{n→∞} ‖x_n − T(x_n)‖ = 0. This, together with x_n → ω̂, implies that ω̂ ∈ F̃(T). In view of F̃(T) = F(T), one has ω̂ ∈ EP(H) ∩ F(T). Therefore, the sequence {x_n} generated by Algorithm 3.1 converges strongly to a point ω̂ of EP(H) ∩ F(T). This completes the proof. □

Now, we prove Theorem 3.1 by using these lemmas.

Proof of Theorem 3.1 From Lemmas 2.6 and 2.9, it follows that EP(H) ∩ F(T) is a nonempty, closed and convex subset of E. Clearly, C_n and Q_n are closed and convex, and so C_n ∩ Q_n is closed and convex for all n ≥ 0.

Now, we show that EP(H) ∩ F(T) ⊂ C_n ∩ Q_n for all n ≥ 0. Take ω ∈ EP(H) ∩ F(T) arbitrarily. Then

D_f(ω, u_n) = D_f(ω, Res_H^f(y_n))
≤ D_f(ω, y_n) − D_f(Res_H^f(y_n), y_n)
≤ D_f(ω, y_n)
= D_f(ω, ∇f*(α_n ∇f(x_0) + (1 − α_n)∇f(z_n)))
≤ α_n D_f(ω, x_0) + (1 − α_n) D_f(ω, z_n)
= α_n D_f(ω, x_0) + (1 − α_n) D_f(ω, ∇f*(β_n ∇f(T(x_n)) + (1 − β_n)∇f(x_n)))
≤ α_n D_f(ω, x_0) + (1 − α_n)[β_n D_f(ω, T(x_n)) + (1 − β_n) D_f(ω, x_n)]
≤ α_n D_f(ω, x_0) + (1 − α_n) D_f(ω, x_n),

which implies that ω ∈ C_n, and so EP(H) ∩ F(T) ⊂ C_n for all n ≥ 0.

Next, we prove that EP(H) ∩ F(T) ⊂ Q_n for all n ≥ 0. Obviously, EP(H) ∩ F(T) ⊂ Q_0 = C. Suppose that EP(H) ∩ F(T) ⊂ Q_k for some k ≥ 0. In view of x_{k+1} = proj^f_{C_k ∩ Q_k}(x_0), it follows from Lemma 2.8 that

⟨∇f(x_0) − ∇f(x_{k+1}), x_{k+1} − v⟩ ≥ 0, ∀v ∈ C_k ∩ Q_k.

Moreover, one has

⟨∇f(x_0) − ∇f(x_{k+1}), x_{k+1} − ω⟩ ≥ 0, ∀ω ∈ EP(H) ∩ F(T)

and so, for each ω ∈ EP(H) ∩ F(T),

⟨∇f(x_0) − ∇f(x_{k+1}), ω − x_{k+1}⟩ ≤ 0.

This implies that EP(H) ∩ F(T) ⊂ Q_{k+1}. To sum up, we have EP(H) ∩ F(T) ⊂ Q_n for all n ≥ 0 and so

EP(H) ∩ F(T) ⊂ C_n ∩ Q_n, ∀n ≥ 0.

This, together with EP(H) ∩ F(T) ≠ ∅, yields that C_n ∩ Q_n is a nonempty, closed and convex subset of C for all n ≥ 0. Thus {x_n} is well defined and, by Lemmas 3.2 and 3.3, {x_n} is a Cauchy sequence converging strongly to a point ω̂ of EP(H) ∩ F(T).

Finally, we prove that ω̂ = proj^f_{EP(H)∩F(T)}(x_0). Since proj^f_{EP(H)∩F(T)}(x_0) ∈ EP(H) ∩ F(T) ⊂ C_n ∩ Q_n, it follows from x_{n+1} = proj^f_{C_n ∩ Q_n}(x_0) that

D_f(x_{n+1}, x_0) ≤ D_f(proj^f_{EP(H)∩F(T)}(x_0), x_0).

Thus, by Lemma 2.5, we have x_n → proj^f_{EP(H)∩F(T)}(x_0) as n → ∞, that is, ω̂ = proj^f_{EP(H)∩F(T)}(x_0). This completes the proof. □

Remark 3.1 (1) If f(x) = ½‖x‖² for all x ∈ E, then a weak Bregman relatively nonexpansive mapping reduces to a weak relatively nonexpansive mapping as defined by Su et al. [32]; that is, T is called a weak relatively nonexpansive mapping if the following conditions are satisfied:

F̃(T) = F(T) and φ(u, Tx) ≤ φ(u, x), ∀x ∈ C, u ∈ F(T),

where φ(y, x) = ‖y‖² − 2⟨y, J(x)⟩ + ‖x‖² for all x, y ∈ E and J is the normalized duality mapping from E to 2^{E*};

(2) If f(x) = ½‖x‖² for all x ∈ E, then Algorithm 3.1 reduces to the following iterative algorithm.

Algorithm 3.2 Step 1: Select an arbitrary starting point x_0 ∈ C. Let Q_0 = C and C_0 = {z ∈ C : φ(z, u_0) ≤ φ(z, x_0)}.

Step 2: Given the current iterate x n , calculate the next iterate as follows:

z_n = J^{−1}(β_n J(T(x_n)) + (1 − β_n)J(x_n)),
y_n = J^{−1}(α_n J(x_0) + (1 − α_n)J(z_n)),
u_n = Res_H^f(y_n),
C_n = {z ∈ C_{n−1} ∩ Q_{n−1} : φ(z, u_n) ≤ α_n φ(z, x_0) + (1 − α_n)φ(z, x_n)},
Q_n = {z ∈ C_{n−1} ∩ Q_{n−1} : ⟨J(x_0) − J(x_n), z − x_n⟩ ≤ 0},
x_{n+1} = Π_{C_n ∩ Q_n}(x_0), n ≥ 0.
(3.8)
(3) In particular, if EP(H) = C, then Algorithm 3.2 reduces to the following iterative algorithm.

Algorithm 3.3 Step 1: Select an arbitrary starting point x_0 ∈ C. Let Q_0 = C and C_0 = {z ∈ C : φ(z, y_0) ≤ φ(z, x_0)}.

Step 2: Given the current iterate x n , calculate the next iterate as follows:

z_n = J^{−1}(β_n J(T(x_n)) + (1 − β_n)J(x_n)),
y_n = J^{−1}(α_n J(x_0) + (1 − α_n)J(z_n)),
C_n = {z ∈ C_{n−1} ∩ Q_{n−1} : φ(z, y_n) ≤ α_n φ(z, x_0) + (1 − α_n)φ(z, x_n)},
Q_n = {z ∈ C_{n−1} ∩ Q_{n−1} : ⟨J(x_0) − J(x_n), z − x_n⟩ ≤ 0},
x_{n+1} = Π_{C_n ∩ Q_n}(x_0), n ≥ 0.
(3.9)
(4) If Tx = x for all x ∈ C, then, from Algorithm 3.2, we get the following modified Mann iteration algorithm for the equilibrium problem (1.1).

Algorithm 3.4 Step 1: Select an arbitrary starting point x_0 ∈ C. Let Q_0 = C and C_0 = {z ∈ C : φ(z, u_0) ≤ φ(z, x_0)}.

Step 2: Given the current iterate x n , calculate the next iterate as follows:

y_n = J^{−1}(α_n J(x_0) + (1 − α_n)J(x_n)),
u_n = Res_H^f(y_n),
C_n = {z ∈ C_{n−1} ∩ Q_{n−1} : φ(z, u_n) ≤ α_n φ(z, x_0) + (1 − α_n)φ(z, x_n)},
Q_n = {z ∈ C_{n−1} ∩ Q_{n−1} : ⟨J(x_0) − J(x_n), z − x_n⟩ ≤ 0},
x_{n+1} = Π_{C_n ∩ Q_n}(x_0), n ≥ 0.
(3.10)

If f(x) = ½‖x‖² for all x ∈ E, then, by Theorem 3.1 and Remark 3.1, the following results hold.

Corollary 3.1 Let C be a nonempty, closed and convex subset of a real reflexive Banach space E. Suppose that H : C×C → R satisfies Assumption 2.1 and T : C → C is a weak relatively nonexpansive mapping such that EP(H) ∩ F(T) ≠ ∅. Then the sequence {x_n} generated by Algorithm 3.2 converges strongly to Π_{EP(H)∩F(T)}(x_0), the generalized projection of x_0 onto EP(H) ∩ F(T).

Corollary 3.2 Let C be a nonempty, closed and convex subset of a real reflexive Banach space E. Let T : C → C be a weak relatively nonexpansive mapping such that F(T) ≠ ∅. Then the sequence {x_n} generated by Algorithm 3.3 converges strongly to Π_{F(T)}(x_0), the generalized projection of x_0 onto F(T).

Corollary 3.3 Let C be a nonempty, closed and convex subset of a real reflexive Banach space E. Suppose that H : C×C → R satisfies Assumption 2.1 and EP(H) ≠ ∅. Then the sequence {x_n} generated by Algorithm 3.4 converges strongly to Π_{EP(H)}(x_0), the generalized projection of x_0 onto EP(H).

Remark 3.2

(1) It is well known that any closed and firmly nonexpansive-type mapping (see [11, 33]) is a weak Bregman relatively nonexpansive mapping whenever f(x) = ½‖x‖² for all x ∈ E. If β_n ≡ 1 for all n ≥ 0 and E is a uniformly convex and uniformly smooth Banach space, then Corollary 3.2 improves [[11], Corollary 3.1];

(2) If α_n ≡ 0 for all n ≥ 0 and E is a uniformly convex and uniformly smooth Banach space, then Corollary 3.2 reduces to [[32], Theorem 3.1];

(3) If β_n ≡ 1 and α_n ∈ [0,1] for all n ≥ 0, f(x) = ½‖x‖² for all x ∈ E and E is a uniformly convex and uniformly smooth Banach space, then Corollary 3.1 improves [[11], Theorem 4.1].