1 Introduction

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$ and $F_1 : C \times C \to \mathbb{R}$ be a bifunction. The equilibrium problem (in short, EP) is to find $x \in C$ such that

$$F_1(x, y) \ge 0, \quad \forall y \in C.$$
(1.1)

The solution set of EP (1.1) is denoted by $\mathrm{EP}(F_1)$.

The equilibrium problem provides a unified, natural, innovative and general framework to study a wide class of problems arising in finance, economics, network analysis, transportation, elasticity and optimization. The theory of equilibrium problems has witnessed an explosive growth in theoretical advances and applications across all disciplines of pure and applied sciences; see [1–10] and the references therein.

If $F_1(x, y) = \langle Ax,\, y - x\rangle$, where $A : C \to H$ is a nonlinear operator, then EP (1.1) is equivalent to finding a vector $x \in C$ such that

$$\langle y - x,\, Ax\rangle \ge 0, \quad \forall y \in C.$$
(1.2)

This is the well-known classical variational inequality problem. A variety of techniques are now available to suggest and analyze iterative algorithms for solving variational inequalities and related optimization problems; see [1–27] and the references therein.

The fixed point problem for a mapping $T : C \to H$ is to find $x \in C$ such that

$$Tx = x.$$
(1.3)

We denote by $F(T)$ the set of solutions of (1.3). It is well known that $F(T)$ is closed and convex, and that the projection $P_{F(T)}$ is well defined.

Let $S : C \to H$ be a nonexpansive mapping, that is, $\|Sx - Sy\| \le \|x - y\|$ for all $x, y \in C$. The hierarchical fixed point problem (in short, HFPP) is to find $x \in F(T)$ such that

$$\langle x - Sx,\, y - x\rangle \ge 0, \quad \forall y \in F(T).$$
(1.4)

It is linked with some monotone variational inequalities and convex programming problems; see [12]. Various methods have been proposed to solve HFPP (1.4); see, for example, [13–17] and the references therein. In 2010, Yao et al. [12] studied the following iterative algorithm to solve HFPP (1.4):

$$y_n = \beta_n S x_n + (1 - \beta_n) x_n, \qquad x_{n+1} = P_C\bigl[\alpha_n f(x_n) + (1 - \alpha_n) T y_n\bigr], \quad n \ge 0,$$
(1.5)

where $f : C \to H$ is a contraction mapping and $\{\alpha_n\}$ and $\{\beta_n\}$ are sequences in $(0,1)$. Under certain restrictions on the parameters, they proved that the sequence $\{x_n\}$ generated by (1.5) converges strongly to $z \in F(T)$, which is also the unique solution of the following variational inequality:

$$\bigl\langle (I - f)z,\, y - z\bigr\rangle \ge 0, \quad \forall y \in F(T).$$
(1.6)

In 2011, Ceng et al. [18] investigated the following iterative method:

$$x_{n+1} = P_C\bigl[\alpha_n \rho U(x_n) + (I - \alpha_n \mu F)\bigl(T(y_n)\bigr)\bigr], \quad n \ge 0,$$
(1.7)

where $U$ is a Lipschitzian mapping and $F$ is a Lipschitzian and strongly monotone mapping. They proved that, under some appropriate assumptions on the operators and parameters, the sequence $\{x_n\}$ generated by (1.7) converges strongly to the unique solution of the variational inequality

$$\bigl\langle \rho U(z) - \mu F(z),\, x - z\bigr\rangle \le 0, \quad \forall x \in \operatorname{Fix}(T).$$

In this paper, motivated by the work of Ceng et al. [18, 20], Yao et al. [12], Bnouhachem [19] and others, we propose an iterative method for finding an approximate element of the common set of solutions of EP (1.1) and HFPP (1.4) in the setting of real Hilbert spaces. We establish a strong convergence theorem for the sequence generated by the proposed method. The proposed method is quite general and flexible and includes several known methods for solving variational inequality problems, equilibrium problems, and hierarchical fixed point problems as special cases; see, for example, [12, 13, 15, 18, 19, 21] and the references therein.

2 Preliminaries

We present some definitions which will be used in the sequel.

Definition 2.1 A mapping $T : C \to H$ is said to be $k$-Lipschitz continuous if there exists a constant $k > 0$ such that

$$\|Tx - Ty\| \le k\|x - y\|, \quad \forall x, y \in C.$$

  • If $k = 1$, then $T$ is called nonexpansive.

  • If $k \in (0,1)$, then $T$ is called a contraction.

Definition 2.2 A mapping $T : C \to H$ is said to be

(a) monotone if

$$\langle Tx - Ty,\, x - y\rangle \ge 0, \quad \forall x, y \in C;$$

(b) strongly monotone if there exists an $\alpha > 0$ such that

$$\langle Tx - Ty,\, x - y\rangle \ge \alpha\|x - y\|^2, \quad \forall x, y \in C;$$

(c) $\alpha$-inverse strongly monotone if there exists an $\alpha > 0$ such that

$$\langle Tx - Ty,\, x - y\rangle \ge \alpha\|Tx - Ty\|^2, \quad \forall x, y \in C.$$

It is easy to observe that every $\alpha$-inverse strongly monotone mapping is monotone and Lipschitz continuous. Also, for every nonexpansive mapping $T : H \to H$, we have

$$\bigl\langle \bigl(x - T(x)\bigr) - \bigl(y - T(y)\bigr),\, T(y) - T(x)\bigr\rangle \le \frac{1}{2}\bigl\|\bigl(T(x) - x\bigr) - \bigl(T(y) - y\bigr)\bigr\|^2,$$
(2.1)

for all $(x, y) \in H \times H$. Therefore, for all $(x, y) \in H \times \operatorname{Fix}(T)$, we have

$$\bigl\langle x - T(x),\, y - T(x)\bigr\rangle \le \frac{1}{2}\bigl\|T(x) - x\bigr\|^2.$$
(2.2)

The following lemma provides some basic properties of the projection onto C.

Lemma 2.1 Let $P_C$ denote the projection of $H$ onto $C$. Then we have the following inequalities:

(a) $\langle z - P_C[z],\, P_C[z] - v\rangle \ge 0$, $\forall z \in H$, $v \in C$;

(b) $\langle u - v,\, P_C[u] - P_C[v]\rangle \ge \|P_C[u] - P_C[v]\|^2$, $\forall u, v \in H$;

(c) $\|P_C[u] - P_C[v]\| \le \|u - v\|$, $\forall u, v \in H$;

(d) $\|u - P_C[z]\|^2 \le \|z - u\|^2 - \|z - P_C[z]\|^2$, $\forall z \in H$, $u \in C$.
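
As a quick numerical illustration (ours, not from the original paper), the Python sketch below evaluates $P_C$ for a box $C = [\mathrm{lo}, \mathrm{hi}]^d$, where the projection reduces to componentwise clipping, and spot-checks inequalities (b) and (c) on random points.

```python
import numpy as np

def project_box(z, lo, hi):
    """Euclidean projection of z onto the box C = [lo, hi]^d (componentwise clipping)."""
    return np.clip(z, lo, hi)

rng = np.random.default_rng(0)
lo, hi = -1.0, 1.0
for _ in range(5):
    u, v = rng.normal(size=3) * 3, rng.normal(size=3) * 3
    Pu, Pv = project_box(u, lo, hi), project_box(v, lo, hi)
    # Lemma 2.1(b): <u - v, P_C[u] - P_C[v]> >= ||P_C[u] - P_C[v]||^2
    assert np.dot(u - v, Pu - Pv) >= np.dot(Pu - Pv, Pu - Pv) - 1e-12
    # Lemma 2.1(c): ||P_C[u] - P_C[v]|| <= ||u - v||
    assert np.linalg.norm(Pu - Pv) <= np.linalg.norm(u - v) + 1e-12
```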

Assumption 2.1 [1]

Let $F_1 : C \times C \to \mathbb{R}$ be a bifunction satisfying the following assumptions:

(A1) $F_1(x, x) = 0$, $\forall x \in C$;

(A2) $F_1$ is monotone, that is, $F_1(x, y) + F_1(y, x) \le 0$, $\forall x, y \in C$;

(A3) For each $x, y, z \in C$, $\lim_{t \downarrow 0} F_1(tz + (1 - t)x, y) \le F_1(x, y)$;

(A4) For each $x \in C$, $y \mapsto F_1(x, y)$ is convex and lower semicontinuous.

Lemma 2.2 [2]

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $F_1 : C \times C \to \mathbb{R}$ satisfy conditions (A1)-(A4). For $r > 0$ and $x \in H$, define a mapping $T_r : H \to C$ as

$$T_r(x) = \Bigl\{ z \in C : F_1(z, y) + \frac{1}{r}\langle y - z,\, z - x\rangle \ge 0,\ \forall y \in C \Bigr\}.$$

Then the following statements hold:

(i) $T_r$ is nonempty and single-valued;

(ii) $T_r$ is firmly nonexpansive, that is,

$$\|T_r(x) - T_r(y)\|^2 \le \langle T_r(x) - T_r(y),\, x - y\rangle, \quad \forall x, y \in H;$$

(iii) $F(T_r) = \mathrm{EP}(F_1)$;

(iv) $\mathrm{EP}(F_1)$ is closed and convex.
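
For intuition about $T_r$, consider the special case $F_1(z, y) = A(z)(y - z)$ on $C = \mathbb{R}$ (the setting of the variational inequality (1.2)). Since $y - z$ then ranges over all reals, the defining inequality forces $A(z) + (z - x)/r = 0$, so $T_r = (I + rA)^{-1}$ is the classical resolvent. The Python sketch below (our illustration, with a hypothetical monotone $A$) computes $T_r$ by bisection; in line with statement (iii), the fixed point of $T_r$ is the unique zero of $A$, which repeated application approaches.

```python
def resolvent_Tr(x, r, A, lo=-1e6, hi=1e6, tol=1e-12):
    """T_r(x) for F_1(z, y) = A(z)*(y - z) on C = R: the unique z with z + r*A(z) = x."""
    g = lambda z: z + r * A(z) - x           # strictly increasing because A is monotone
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

A = lambda z: z**3 + z - 2.0                 # monotone; EP(F_1) = {z : A(z) = 0} = {1}
x = 5.0
for _ in range(30):                          # Picard iteration of the firmly nonexpansive T_r
    x = resolvent_Tr(x, r=1.0, A=A)
print(round(x, 6))                           # ~1.0, the unique equilibrium point
```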

Lemma 2.3 [22]

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. If $T : C \to C$ is a nonexpansive mapping with $\operatorname{Fix}(T) \neq \emptyset$, then the mapping $I - T$ is demiclosed at 0, that is, if $\{x_n\}$ is a sequence in $C$ that converges weakly to $x$ and $\{(I - T)x_n\}$ converges strongly to 0, then $(I - T)x = 0$.

Lemma 2.4 [18]

Let $U : C \to H$ be a $\tau$-Lipschitzian mapping and let $F : C \to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone mapping. Then, for $0 \le \rho\tau < \mu\eta$, the mapping $\mu F - \rho U$ is $(\mu\eta - \rho\tau)$-strongly monotone, that is,

$$\bigl\langle (\mu F - \rho U)x - (\mu F - \rho U)y,\, x - y\bigr\rangle \ge (\mu\eta - \rho\tau)\|x - y\|^2, \quad \forall x, y \in C.$$

Lemma 2.5 [23]

Let $\lambda \in (0,1)$, $\mu > 0$, and let $F : C \to H$ be a $k$-Lipschitzian and $\eta$-strongly monotone operator. In association with a nonexpansive mapping $T : C \to C$, define a mapping $T^{\lambda} : C \to H$ by

$$T^{\lambda}x = Tx - \lambda\mu F\bigl(T(x)\bigr), \quad x \in C.$$

Then $T^{\lambda}$ is a contraction provided $\mu < \frac{2\eta}{k^2}$, that is,

$$\|T^{\lambda}x - T^{\lambda}y\| \le (1 - \lambda\nu)\|x - y\|, \quad \forall x, y \in C,$$

where $\nu = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$.
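
As a small numerical aid (ours), the contraction modulus $\nu$ of Lemma 2.5 is easy to evaluate; the constants below are the ones used later in Example 4.1.

```python
from math import sqrt

def contraction_modulus(mu, eta, k):
    """nu = 1 - sqrt(1 - mu*(2*eta - mu*k**2)) from Lemma 2.5; needs 0 < mu < 2*eta/k**2."""
    assert 0 < mu < 2 * eta / k**2
    return 1 - sqrt(1 - mu * (2 * eta - mu * k**2))

nu = contraction_modulus(mu=1/7, eta=1/7, k=1.0)   # constants of Example 4.1
print(nu)                      # ~0.0103
print(1/15 * 1/7 < nu)         # True: rho*tau < nu, as Algorithm 3.1 requires
```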

Lemma 2.6 [25]

Let $C$ be a closed convex subset of a real Hilbert space $H$ and let $\{x_n\}$ be a bounded sequence in $H$. Assume that

(i) the weak $w$-limit set satisfies $\omega_w(x_n) \subset C$, where $\omega_w(x_n) = \{x : x_{n_i} \rightharpoonup x\}$;

(ii) for each $z \in C$, $\lim_{n\to\infty}\|x_n - z\|$ exists.

Then the sequence { x n } is weakly convergent to a point in C.

Lemma 2.7 [24]

Let $\{a_n\}$ be a sequence of nonnegative real numbers such that

$$a_{n+1} \le (1 - \upsilon_n)a_n + \delta_n,$$

where $\{\upsilon_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that

(i) $\sum_{n=1}^{\infty}\upsilon_n = \infty$;

(ii) $\limsup_{n\to\infty}\delta_n/\upsilon_n \le 0$ or $\sum_{n=1}^{\infty}|\delta_n| < \infty$.

Then $\lim_{n\to\infty}a_n = 0$.
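
A tiny numerical illustration (ours) of Lemma 2.7: take $\upsilon_n = 1/n$, so that $\sum\upsilon_n = \infty$, and $\delta_n = 1/n^2$, so that $\delta_n/\upsilon_n \to 0$; the recursion then drives $a_n$ to zero.

```python
a = 10.0                                     # any nonnegative starting value
for n in range(1, 200001):
    a = (1 - 1.0 / n) * a + 1.0 / n**2       # a_{n+1} = (1 - upsilon_n)*a_n + delta_n
print(a)                                     # small and still decreasing toward 0
```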

3 An iterative method and strong convergence results

In this section, we propose and analyze an iterative method for finding a common solution of EP (1.1) and HFPP (1.4).

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F_1 : C \times C \to \mathbb{R}$ be a bifunction satisfying conditions (A1)-(A4), and let $S, T : C \to C$ be nonexpansive mappings such that $F(T) \cap \mathrm{EP}(F_1) \neq \emptyset$. Let $F : C \to C$ be a $k$-Lipschitzian and $\eta$-strongly monotone mapping, and let $U : C \to C$ be a $\tau$-Lipschitzian mapping.

Algorithm 3.1 For any given $x_0 \in C$, let the iterative sequences $\{u_n\}$, $\{x_n\}$, and $\{y_n\}$ be generated by

$$\begin{cases} F_1(u_n, y) + \dfrac{1}{r_n}\langle y - u_n,\, u_n - x_n\rangle \ge 0, \quad \forall y \in C;\\[2mm] y_n = \beta_n S x_n + (1 - \beta_n)u_n;\\[2mm] x_{n+1} = P_C\bigl[\alpha_n \rho U(x_n) + \gamma_n x_n + \bigl((1 - \gamma_n)I - \alpha_n\mu F\bigr)\bigl(T(y_n)\bigr)\bigr], \quad n \ge 0. \end{cases}$$
(3.1)

Suppose that the parameters satisfy $0 < \mu < \frac{2\eta}{k^2}$ and $0 \le \rho\tau < \nu$, where $\nu = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$. Also, $\{\gamma_n\}$, $\{\alpha_n\}$, $\{\beta_n\}$, and $\{r_n\}$ are sequences in $(0,1)$ satisfying the following conditions (a schematic implementation sketch is given after the list):

(a) $\lim_{n\to\infty}\gamma_n = 0$ and $\gamma_n + \alpha_n < 1$;

(b) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;

(c) $\lim_{n\to\infty}(\beta_n/\alpha_n) = 0$;

(d) $\sum_{n=1}^{\infty}|\alpha_{n-1} - \alpha_n| < \infty$, $\sum_{n=1}^{\infty}|\gamma_{n-1} - \gamma_n| < \infty$, and $\sum_{n=1}^{\infty}|\beta_{n-1} - \beta_n| < \infty$;

(e) $\liminf_{n\to\infty} r_n > 0$ and $\sum_{n=1}^{\infty}|r_{n-1} - r_n| < \infty$.
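
The following Python sketch (ours; a schematic template under illustrative assumptions, not the authors' Matlab code) spells out one pass of scheme (3.1). The mappings $S$, $T$, $F$, $U$, the projection $P_C$, and the resolvent step $u_n = T_{r_n}(x_n)$ are all supplied by the caller; in Examples 4.1 and 4.2 below the resolvent is available in closed form.

```python
def algorithm_3_1_step(x, T_r, S, T, F, U, P_C, rho, mu, alpha, beta, gamma, r):
    """One iteration of (3.1); alpha, beta, gamma, r are the n-th parameter values.

    Returns (x_{n+1}, u_n, y_n) given the current iterate x = x_n.
    """
    u = T_r(x, r)                                   # equilibrium (resolvent) step
    y = beta * S(x) + (1 - beta) * u                # relaxation with the nonexpansive S
    Ty = T(y)
    v = alpha * rho * U(x) + gamma * x + (1 - gamma) * Ty - alpha * mu * F(Ty)
    return P_C(v), u, y
```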

Remark 3.1 Algorithm 3.1 can be viewed as an extension and improvement for some well-known methods.

  • If $\gamma_n = 0$, then the proposed method is an extension and improvement of a method studied in [19, 26].

  • If $U = f$, $F = I$, $\rho = \mu = 1$, and $\gamma_n = 0$, then we obtain an extension and improvement of a method considered in [12].

  • The contractive mapping $f$ with a coefficient $\alpha \in [0,1)$ used in other papers [12, 21, 23] is extended to a Lipschitzian mapping $U$ with a coefficient $\gamma \in [0,\infty)$.

Lemma 3.1 Let $x^* \in F(T) \cap \mathrm{EP}(F_1)$. Then $\{x_n\}$, $\{u_n\}$, and $\{y_n\}$ are bounded.

Proof It follows from Lemma 2.2 that $u_n = T_{r_n}(x_n)$. Let $x^* \in F(T) \cap \mathrm{EP}(F_1)$; then $x^* = T_{r_n}(x^*)$. Define $V_n = \alpha_n \rho U(x_n) + \gamma_n x_n + \bigl((1 - \gamma_n)I - \alpha_n\mu F\bigr)\bigl(T(y_n)\bigr)$.

We prove that the sequence $\{x_n\}$ is bounded. Without loss of generality, we can assume that $\beta_n \le \alpha_n$ for all $n \ge 1$. From (3.1), we have

$$\begin{aligned}
\|x_{n+1} - x^*\| &= \bigl\|P_C[V_n] - P_C[x^*]\bigr\| \le \bigl\|\alpha_n\rho U(x_n) + \gamma_n x_n + \bigl((1-\gamma_n)I - \alpha_n\mu F\bigr)\bigl(T(y_n)\bigr) - x^*\bigr\|\\
&= \bigl\|\alpha_n\bigl(\rho U(x_n) - \mu F(x^*)\bigr) + \gamma_n(x_n - x^*) + \bigl((1-\gamma_n)I - \alpha_n\mu F\bigr)\bigl(T(y_n)\bigr) - \bigl((1-\gamma_n)I - \alpha_n\mu F\bigr)\bigl(T(x^*)\bigr)\bigr\|\\
&\le \alpha_n\bigl\|\rho U(x_n) - \mu F(x^*)\bigr\| + \gamma_n\|x_n - x^*\| + (1-\gamma_n)\Bigl\|\Bigl(I - \tfrac{\alpha_n\mu}{1-\gamma_n}F\Bigr)\bigl(T(y_n)\bigr) - \Bigl(I - \tfrac{\alpha_n\mu}{1-\gamma_n}F\Bigr)\bigl(T(x^*)\bigr)\Bigr\|\\
&= \alpha_n\bigl\|\rho U(x_n) - \rho U(x^*) + (\rho U - \mu F)x^*\bigr\| + \gamma_n\|x_n - x^*\| + (1-\gamma_n)\Bigl\|\Bigl(I - \tfrac{\alpha_n\mu}{1-\gamma_n}F\Bigr)\bigl(T(y_n)\bigr) - \Bigl(I - \tfrac{\alpha_n\mu}{1-\gamma_n}F\Bigr)\bigl(T(x^*)\bigr)\Bigr\|\\
&\le \alpha_n\rho\tau\|x_n - x^*\| + \alpha_n\bigl\|(\rho U - \mu F)x^*\bigr\| + \gamma_n\|x_n - x^*\| + (1-\gamma_n)\Bigl(1 - \tfrac{\alpha_n\nu}{1-\gamma_n}\Bigr)\|y_n - x^*\|\\
&= \alpha_n\rho\tau\|x_n - x^*\| + \alpha_n\bigl\|(\rho U - \mu F)x^*\bigr\| + \gamma_n\|x_n - x^*\| + (1 - \gamma_n - \alpha_n\nu)\bigl\|\beta_n S x_n + (1-\beta_n)u_n - x^*\bigr\|\\
&\le \alpha_n\rho\tau\|x_n - x^*\| + \alpha_n\bigl\|(\rho U - \mu F)x^*\bigr\| + \gamma_n\|x_n - x^*\| + (1 - \gamma_n - \alpha_n\nu)\bigl(\beta_n\|Sx_n - Sx^*\| + \beta_n\|Sx^* - x^*\| + (1-\beta_n)\|T_{r_n}(x_n) - x^*\|\bigr)\\
&\le \alpha_n\rho\tau\|x_n - x^*\| + \alpha_n\bigl\|(\rho U - \mu F)x^*\bigr\| + \gamma_n\|x_n - x^*\| + (1 - \gamma_n - \alpha_n\nu)\bigl(\beta_n\|Sx_n - Sx^*\| + \beta_n\|Sx^* - x^*\| + (1-\beta_n)\|x_n - x^*\|\bigr)\\
&\le \alpha_n\rho\tau\|x_n - x^*\| + \alpha_n\bigl\|(\rho U - \mu F)x^*\bigr\| + \gamma_n\|x_n - x^*\| + (1 - \gamma_n - \alpha_n\nu)\bigl(\beta_n\|x_n - x^*\| + \beta_n\|Sx^* - x^*\| + (1-\beta_n)\|x_n - x^*\|\bigr)\\
&= \bigl(1 - \alpha_n(\nu - \rho\tau)\bigr)\|x_n - x^*\| + \alpha_n\bigl\|(\rho U - \mu F)x^*\bigr\| + (1 - \gamma_n - \alpha_n\nu)\beta_n\|Sx^* - x^*\|\\
&\le \bigl(1 - \alpha_n(\nu - \rho\tau)\bigr)\|x_n - x^*\| + \alpha_n\bigl\|(\rho U - \mu F)x^*\bigr\| + \beta_n\|Sx^* - x^*\|\\
&\le \bigl(1 - \alpha_n(\nu - \rho\tau)\bigr)\|x_n - x^*\| + \alpha_n\bigl(\bigl\|(\rho U - \mu F)x^*\bigr\| + \|Sx^* - x^*\|\bigr)\\
&= \bigl(1 - \alpha_n(\nu - \rho\tau)\bigr)\|x_n - x^*\| + \alpha_n(\nu - \rho\tau)\,\frac{\bigl\|(\rho U - \mu F)x^*\bigr\| + \|Sx^* - x^*\|}{\nu - \rho\tau}\\
&\le \max\Bigl\{\|x_n - x^*\|,\ \frac{1}{\nu - \rho\tau}\bigl(\bigl\|(\rho U - \mu F)x^*\bigr\| + \|Sx^* - x^*\|\bigr)\Bigr\},
\end{aligned}$$

where the third inequality follows from Lemma 2.5. By induction on n, we obtain

$$\|x_n - x^*\| \le \max\Bigl\{\|x_0 - x^*\|,\ \frac{1}{\nu - \rho\tau}\bigl(\bigl\|(\rho U - \mu F)x^*\bigr\| + \|Sx^* - x^*\|\bigr)\Bigr\},$$

for all $n \ge 0$ and $x_0 \in C$. Hence $\{x_n\}$ is bounded, and consequently we deduce that $\{u_n\}$, $\{y_n\}$, $\{S(x_n)\}$, $\{T(y_n)\}$, $\{F(T(y_n))\}$, and $\{U(x_n)\}$ are bounded. □

Lemma 3.2 Let $x^* \in F(T) \cap \mathrm{EP}(F_1)$ and let $\{x_n\}$ be a sequence generated by Algorithm 3.1. Then the following statements hold.

(a) $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$.

(b) The weak $w$-limit set satisfies $\omega_w(x_n) = \{x : x_{n_i} \rightharpoonup x\} \subset F(T)$.

Proof From the definition of the sequence { y n } in Algorithm 3.1, we have

$$\begin{aligned}
\|y_n - y_{n-1}\| &\le \bigl\|\beta_n S x_n + (1-\beta_n)u_n - \bigl(\beta_{n-1}Sx_{n-1} + (1-\beta_{n-1})u_{n-1}\bigr)\bigr\|\\
&= \bigl\|\beta_n(Sx_n - Sx_{n-1}) + (\beta_n - \beta_{n-1})Sx_{n-1} + (1-\beta_n)(u_n - u_{n-1}) + (\beta_{n-1} - \beta_n)u_{n-1}\bigr\|\\
&\le \beta_n\|x_n - x_{n-1}\| + (1-\beta_n)\|u_n - u_{n-1}\| + |\beta_n - \beta_{n-1}|\bigl(\|Sx_{n-1}\| + \|u_{n-1}\|\bigr).
\end{aligned}$$
(3.2)

Since $u_n = T_{r_n}(x_n)$ and $u_{n-1} = T_{r_{n-1}}(x_{n-1})$, we have

$$F_1(u_n, y) + \frac{1}{r_n}\langle y - u_n,\, u_n - x_n\rangle \ge 0, \quad \forall y \in C,$$
(3.3)

and

$$F_1(u_{n-1}, y) + \frac{1}{r_{n-1}}\langle y - u_{n-1},\, u_{n-1} - x_{n-1}\rangle \ge 0, \quad \forall y \in C.$$
(3.4)

Taking $y = u_{n-1}$ in (3.3) and $y = u_n$ in (3.4), we get

$$F_1(u_n, u_{n-1}) + \frac{1}{r_n}\langle u_{n-1} - u_n,\, u_n - x_n\rangle \ge 0$$
(3.5)

and

$$F_1(u_{n-1}, u_n) + \frac{1}{r_{n-1}}\langle u_n - u_{n-1},\, u_{n-1} - x_{n-1}\rangle \ge 0.$$
(3.6)

Adding (3.5) and (3.6), and using the monotonicity of F 1 , we obtain

$$\Bigl\langle u_n - u_{n-1},\ \frac{u_{n-1} - x_{n-1}}{r_{n-1}} - \frac{u_n - x_n}{r_n}\Bigr\rangle \ge 0,$$

which implies that

$$\begin{aligned}
0 &\le \Bigl\langle u_n - u_{n-1},\ \frac{r_n}{r_{n-1}}(u_{n-1} - x_{n-1}) - (u_n - x_n)\Bigr\rangle\\
&= \Bigl\langle u_{n-1} - u_n,\ u_n - u_{n-1} + \Bigl(1 - \frac{r_n}{r_{n-1}}\Bigr)u_{n-1} - x_n + \frac{r_n}{r_{n-1}}x_{n-1}\Bigr\rangle\\
&= \Bigl\langle u_{n-1} - u_n,\ \Bigl(1 - \frac{r_n}{r_{n-1}}\Bigr)u_{n-1} - x_n + \frac{r_n}{r_{n-1}}x_{n-1}\Bigr\rangle - \|u_n - u_{n-1}\|^2\\
&= \Bigl\langle u_{n-1} - u_n,\ \Bigl(1 - \frac{r_n}{r_{n-1}}\Bigr)(u_{n-1} - x_{n-1}) + (x_{n-1} - x_n)\Bigr\rangle - \|u_n - u_{n-1}\|^2\\
&\le \|u_{n-1} - u_n\|\Bigl\{\Bigl|1 - \frac{r_n}{r_{n-1}}\Bigr|\,\|u_{n-1} - x_{n-1}\| + \|x_{n-1} - x_n\|\Bigr\} - \|u_n - u_{n-1}\|^2,
\end{aligned}$$

and then

$$\|u_{n-1} - u_n\| \le \Bigl|1 - \frac{r_n}{r_{n-1}}\Bigr|\,\|u_{n-1} - x_{n-1}\| + \|x_{n-1} - x_n\|.$$

Without loss of generality, assume that there exists a real number χ such that r n >χ>0 for all positive integers n. Then we get

$$\|u_{n-1} - u_n\| \le \|x_{n-1} - x_n\| + \frac{1}{\chi}|r_{n-1} - r_n|\,\|u_{n-1} - x_{n-1}\|.$$
(3.7)

It follows from (3.2) and (3.7) that

$$\begin{aligned}
\|y_n - y_{n-1}\| &\le \beta_n\|x_n - x_{n-1}\| + (1-\beta_n)\Bigl\{\|x_n - x_{n-1}\| + \frac{1}{\chi}|r_n - r_{n-1}|\,\|u_{n-1} - x_{n-1}\|\Bigr\} + |\beta_n - \beta_{n-1}|\bigl(\|Sx_{n-1}\| + \|u_{n-1}\|\bigr)\\
&= \|x_n - x_{n-1}\| + (1-\beta_n)\Bigl\{\frac{1}{\chi}|r_n - r_{n-1}|\,\|u_{n-1} - x_{n-1}\|\Bigr\} + |\beta_n - \beta_{n-1}|\bigl(\|Sx_{n-1}\| + \|u_{n-1}\|\bigr).
\end{aligned}$$
(3.8)

Next, we estimate that

$$\begin{aligned}
\|x_{n+1} - x_n\| &= \bigl\|P_C[V_n] - P_C[V_{n-1}]\bigr\|\\
&\le \Bigl\|\alpha_n\rho\bigl(U(x_n) - U(x_{n-1})\bigr) + (\alpha_n - \alpha_{n-1})\rho U(x_{n-1}) + \gamma_n(x_n - x_{n-1}) + (\gamma_n - \gamma_{n-1})x_{n-1}\\
&\qquad + (1-\gamma_n)\Bigl[\Bigl(I - \frac{\alpha_n\mu}{1-\gamma_n}F\Bigr)\bigl(T(y_n)\bigr) - \Bigl(I - \frac{\alpha_n\mu}{1-\gamma_n}F\Bigr)\bigl(T(y_{n-1})\bigr)\Bigr]\\
&\qquad + \bigl((1-\gamma_n)I - \alpha_n\mu F\bigr)\bigl(T(y_{n-1})\bigr) - \bigl((1-\gamma_{n-1})I - \alpha_{n-1}\mu F\bigr)\bigl(T(y_{n-1})\bigr)\Bigr\|\\
&\le \alpha_n\rho\tau\|x_n - x_{n-1}\| + \gamma_n\|x_n - x_{n-1}\| + (1-\gamma_n)\Bigl(1 - \frac{\alpha_n\nu}{1-\gamma_n}\Bigr)\|y_n - y_{n-1}\|\\
&\qquad + |\gamma_n - \gamma_{n-1}|\bigl(\|x_{n-1}\| + \|T(y_{n-1})\|\bigr) + |\alpha_n - \alpha_{n-1}|\bigl(\rho\|U(x_{n-1})\| + \mu\|F(T(y_{n-1}))\|\bigr),
\end{aligned}$$
(3.9)

where the second inequality follows from Lemma 2.5. From (3.8) and (3.9), we have

$$\begin{aligned}
\|x_{n+1} - x_n\| &\le \alpha_n\rho\tau\|x_n - x_{n-1}\| + \gamma_n\|x_n - x_{n-1}\|\\
&\qquad + (1 - \gamma_n - \alpha_n\nu)\Bigl\{\|x_n - x_{n-1}\| + \frac{1}{\chi}|r_n - r_{n-1}|\,\|u_{n-1} - x_{n-1}\| + |\beta_n - \beta_{n-1}|\bigl(\|Sx_{n-1}\| + \|u_{n-1}\|\bigr)\Bigr\}\\
&\qquad + |\gamma_n - \gamma_{n-1}|\bigl(\|x_{n-1}\| + \|T(y_{n-1})\|\bigr) + |\alpha_n - \alpha_{n-1}|\bigl(\rho\|U(x_{n-1})\| + \mu\|F(T(y_{n-1}))\|\bigr)\\
&\le \bigl(1 - (\nu - \rho\tau)\alpha_n\bigr)\|x_n - x_{n-1}\| + \frac{1}{\chi}|r_n - r_{n-1}|\,\|u_{n-1} - x_{n-1}\| + |\beta_n - \beta_{n-1}|\bigl(\|Sx_{n-1}\| + \|u_{n-1}\|\bigr)\\
&\qquad + |\gamma_n - \gamma_{n-1}|\bigl(\|x_{n-1}\| + \|T(y_{n-1})\|\bigr) + |\alpha_n - \alpha_{n-1}|\bigl(\rho\|U(x_{n-1})\| + \mu\|F(T(y_{n-1}))\|\bigr)\\
&\le \bigl(1 - (\nu - \rho\tau)\alpha_n\bigr)\|x_n - x_{n-1}\| + M\Bigl(\frac{1}{\chi}|r_{n-1} - r_n| + |\beta_n - \beta_{n-1}| + |\gamma_n - \gamma_{n-1}| + |\alpha_n - \alpha_{n-1}|\Bigr),
\end{aligned}$$
(3.10)

where

$$M = \max\Bigl\{\sup_{n\ge 1}\|u_{n-1} - x_{n-1}\|,\ \sup_{n\ge 1}\bigl(\|Sx_{n-1}\| + \|u_{n-1}\|\bigr),\ \sup_{n\ge 1}\bigl(\|x_{n-1}\| + \|T(y_{n-1})\|\bigr),\ \sup_{n\ge 1}\bigl(\rho\|U(x_{n-1})\| + \mu\|F(T(y_{n-1}))\|\bigr)\Bigr\}.$$

It follows from conditions (b), (d), (e) of Algorithm 3.1, and Lemma 2.7 that

$$\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0.$$

Next, we show that $\lim_{n\to\infty}\|u_n - x_n\| = 0$. Since $T_{r_n}$ is firmly nonexpansive, we have

$$\begin{aligned}
\|u_n - x^*\|^2 &= \bigl\|T_{r_n}(x_n) - T_{r_n}(x^*)\bigr\|^2 \le \bigl\langle u_n - x^*,\, x_n - x^*\bigr\rangle\\
&= \frac{1}{2}\Bigl\{\|u_n - x^*\|^2 + \|x_n - x^*\|^2 - \bigl\|u_n - x^* - (x_n - x^*)\bigr\|^2\Bigr\}.
\end{aligned}$$

Hence, we get

$$\|u_n - x^*\|^2 \le \|x_n - x^*\|^2 - \|u_n - x_n\|^2.$$

From the above inequality, we have

$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \bigl\langle P_C[V_n] - x^*,\, x_{n+1} - x^*\bigr\rangle\\
&= \bigl\langle P_C[V_n] - V_n,\, P_C[V_n] - x^*\bigr\rangle + \bigl\langle V_n - x^*,\, x_{n+1} - x^*\bigr\rangle\\
&\le \bigl\langle \alpha_n\bigl(\rho U(x_n) - \mu F(x^*)\bigr) + \gamma_n(x_n - x^*) + \bigl((1-\gamma_n)I - \alpha_n\mu F\bigr)\bigl(T(y_n)\bigr) - \bigl((1-\gamma_n)I - \alpha_n\mu F\bigr)\bigl(T(x^*)\bigr),\ x_{n+1} - x^*\bigr\rangle\\
&= \alpha_n\rho\bigl\langle U(x_n) - U(x^*),\, x_{n+1} - x^*\bigr\rangle + \alpha_n\bigl\langle \rho U(x^*) - \mu F(x^*),\, x_{n+1} - x^*\bigr\rangle + \gamma_n\bigl\langle x_n - x^*,\, x_{n+1} - x^*\bigr\rangle\\
&\qquad + (1-\gamma_n)\Bigl\langle \Bigl(I - \frac{\alpha_n\mu}{1-\gamma_n}F\Bigr)\bigl(T(y_n)\bigr) - \Bigl(I - \frac{\alpha_n\mu}{1-\gamma_n}F\Bigr)\bigl(T(x^*)\bigr),\ x_{n+1} - x^*\Bigr\rangle\\
&\le (\alpha_n\rho\tau + \gamma_n)\|x_n - x^*\|\,\|x_{n+1} - x^*\| + \alpha_n\bigl\langle \rho U(x^*) - \mu F(x^*),\, x_{n+1} - x^*\bigr\rangle + (1 - \gamma_n - \alpha_n\nu)\|y_n - x^*\|\,\|x_{n+1} - x^*\|\\
&\le \frac{\gamma_n + \alpha_n\rho\tau}{2}\bigl(\|x_n - x^*\|^2 + \|x_{n+1} - x^*\|^2\bigr) + \alpha_n\bigl\langle \rho U(x^*) - \mu F(x^*),\, x_{n+1} - x^*\bigr\rangle + \frac{1 - \gamma_n - \alpha_n\nu}{2}\bigl(\|y_n - x^*\|^2 + \|x_{n+1} - x^*\|^2\bigr)\\
&\le \frac{1 - \alpha_n(\nu - \rho\tau)}{2}\|x_{n+1} - x^*\|^2 + \frac{\gamma_n + \alpha_n\rho\tau}{2}\|x_n - x^*\|^2 + \alpha_n\bigl\langle \rho U(x^*) - \mu F(x^*),\, x_{n+1} - x^*\bigr\rangle\\
&\qquad + \frac{1 - \gamma_n - \alpha_n\nu}{2}\bigl(\beta_n\|Sx_n - x^*\|^2 + (1-\beta_n)\|u_n - x^*\|^2\bigr)\\
&\le \frac{1 - \alpha_n(\nu - \rho\tau)}{2}\|x_{n+1} - x^*\|^2 + \frac{\gamma_n + \alpha_n\rho\tau}{2}\|x_n - x^*\|^2 + \alpha_n\bigl\langle \rho U(x^*) - \mu F(x^*),\, x_{n+1} - x^*\bigr\rangle\\
&\qquad + \frac{1 - \gamma_n - \alpha_n\nu}{2}\bigl\{\beta_n\|Sx_n - x^*\|^2 + (1-\beta_n)\bigl(\|x_n - x^*\|^2 - \|u_n - x_n\|^2\bigr)\bigr\},
\end{aligned}$$
(3.11)

which implies that

$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\le \frac{\gamma_n + \alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\bigl\langle \rho U(x^*) - \mu F(x^*),\, x_{n+1} - x^*\bigr\rangle\\
&\qquad + \frac{(1 - \gamma_n - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2 + \frac{(1 - \gamma_n - \alpha_n\nu)(1 - \beta_n)}{1 + \alpha_n(\nu - \rho\tau)}\bigl\{\|x_n - x^*\|^2 - \|u_n - x_n\|^2\bigr\}\\
&\le \frac{\gamma_n + \alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\bigl\langle \rho U(x^*) - \mu F(x^*),\, x_{n+1} - x^*\bigr\rangle\\
&\qquad + \frac{(1 - \gamma_n - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2 + \|x_n - x^*\|^2 - \frac{(1 - \gamma_n - \alpha_n\nu)(1 - \beta_n)}{1 + \alpha_n(\nu - \rho\tau)}\|u_n - x_n\|^2.
\end{aligned}$$

Hence,

$$\begin{aligned}
\frac{(1 - \gamma_n - \alpha_n\nu)(1 - \beta_n)}{1 + \alpha_n(\nu - \rho\tau)}\|u_n - x_n\|^2 &\le \frac{\gamma_n + \alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\bigl\langle \rho U(x^*) - \mu F(x^*),\, x_{n+1} - x^*\bigr\rangle\\
&\qquad + \frac{(1 - \gamma_n - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2 + \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2\\
&\le \frac{\gamma_n + \alpha_n\rho\tau}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - x^*\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\bigl\langle \rho U(x^*) - \mu F(x^*),\, x_{n+1} - x^*\bigr\rangle\\
&\qquad + \frac{(1 - \gamma_n - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sx_n - x^*\|^2 + \bigl(\|x_n - x^*\| + \|x_{n+1} - x^*\|\bigr)\|x_{n+1} - x_n\|.
\end{aligned}$$

Since $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$, $\alpha_n \to 0$, $\beta_n \to 0$, and $\gamma_n \to 0$, we have

$$\lim_{n\to\infty}\|u_n - x_n\| = 0.$$
(3.12)

Since $T(x_n) \in C$, we have

$$\begin{aligned}
\|x_n - T(x_n)\| &\le \|x_n - x_{n+1}\| + \|x_{n+1} - T(x_n)\| = \|x_n - x_{n+1}\| + \bigl\|P_C[V_n] - P_C[T(x_n)]\bigr\|\\
&\le \|x_n - x_{n+1}\| + \bigl\|\alpha_n\bigl(\rho U(x_n) - \mu F(T(y_n))\bigr) + \gamma_n\bigl(x_n - T(y_n)\bigr) + T(y_n) - T(x_n)\bigr\|\\
&\le \|x_n - x_{n+1}\| + \alpha_n\bigl\|\rho U(x_n) - \mu F(T(y_n))\bigr\| + \gamma_n\bigl\|x_n - T(y_n)\bigr\| + \|y_n - x_n\|\\
&\le \|x_n - x_{n+1}\| + \alpha_n\bigl\|\rho U(x_n) - \mu F(T(y_n))\bigr\| + \gamma_n\bigl\|x_n - T(y_n)\bigr\| + \bigl\|\beta_n S x_n + (1-\beta_n)u_n - x_n\bigr\|\\
&\le \|x_n - x_{n+1}\| + \alpha_n\bigl\|\rho U(x_n) - \mu F(T(y_n))\bigr\| + \gamma_n\bigl\|x_n - T(y_n)\bigr\| + \beta_n\|Sx_n - x_n\| + (1-\beta_n)\|u_n - x_n\|.
\end{aligned}$$

Since $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$, $\gamma_n \to 0$, $\alpha_n \to 0$, $\beta_n \to 0$, the sequences $\{\rho U(x_n) - \mu F(T(y_n))\}$ and $\{Sx_n - x_n\}$ are bounded, and $\lim_{n\to\infty}\|u_n - x_n\| = 0$, we obtain

$$\lim_{n\to\infty}\bigl\|x_n - T(x_n)\bigr\| = 0.$$

Since $\{x_n\}$ is bounded, without loss of generality we can assume that $x_n \rightharpoonup w \in C$. It follows from Lemma 2.3 that $w \in F(T)$. Therefore, $\omega_w(x_n) \subset F(T)$. □

Theorem 3.1 The sequence $\{x_n\}$ generated by Algorithm 3.1 converges strongly to $z \in F(T) \cap \mathrm{EP}(F_1)$, which is also the unique solution of the variational inequality

$$\bigl\langle \rho U(z) - \mu F(z),\, x - z\bigr\rangle \le 0, \quad \forall x \in F(T) \cap \mathrm{EP}(F_1).$$
(3.13)

Proof From Lemma 3.2, we have $w \in F(T)$, since $\{x_n\}$ is bounded and $x_n \rightharpoonup w$. We show that $w \in \mathrm{EP}(F_1)$. Since $u_n = T_{r_n}(x_n)$, we have

$$F_1(u_n, y) + \frac{1}{r_n}\langle y - u_n,\, u_n - x_n\rangle \ge 0, \quad \forall y \in C.$$

It follows from the monotonicity of F 1 that

$$\frac{1}{r_n}\langle y - u_n,\, u_n - x_n\rangle \ge F_1(y, u_n), \quad \forall y \in C,$$

and

$$\Bigl\langle y - u_{n_k},\ \frac{u_{n_k} - x_{n_k}}{r_{n_k}}\Bigr\rangle \ge F_1(y, u_{n_k}), \quad \forall y \in C.$$
(3.14)

Since $\lim_{n\to\infty}\|u_n - x_n\| = 0$ and $x_n \rightharpoonup w$, it is easy to observe that $u_{n_k} \rightharpoonup w$. For any $0 < t \le 1$ and $y \in C$, let $y_t = ty + (1 - t)w$. Then we have $y_t \in C$, and from (3.14) we obtain

$$0 \ge -\Bigl\langle y_t - u_{n_k},\ \frac{u_{n_k} - x_{n_k}}{r_{n_k}}\Bigr\rangle + F_1(y_t, u_{n_k}).$$
(3.15)

Since $u_{n_k} \rightharpoonup w$, it follows from (3.15) that

$$0 \ge F_1(y_t, w).$$
(3.16)

Since F 1 satisfies (A1)-(A4), it follows from (3.16) that

$$0 = F_1(y_t, y_t) \le t\,F_1(y_t, y) + (1 - t)F_1(y_t, w) \le t\,F_1(y_t, y),$$
(3.17)

which implies that $F_1(y_t, y) \ge 0$. Letting $t \to 0^+$, we have

$$F_1(w, y) \ge 0, \quad \forall y \in C,$$

which implies that $w \in \mathrm{EP}(F_1)$. Thus, we have

$$w \in F(T) \cap \mathrm{EP}(F_1).$$

Observe that the constants satisfy $0 \le \rho\tau < \nu$ and

$$k \ge \eta\ \Longrightarrow\ k^2 \ge \eta^2\ \Longrightarrow\ 1 - 2\mu\eta + \mu^2 k^2 \ge 1 - 2\mu\eta + \mu^2\eta^2\ \Longrightarrow\ \sqrt{1 - \mu(2\eta - \mu k^2)} \ge 1 - \mu\eta\ \Longrightarrow\ \mu\eta \ge 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}\ \Longrightarrow\ \mu\eta \ge \nu;$$

therefore, from Lemma 2.4, the operator $\mu F - \rho U$ is $(\mu\eta - \rho\tau)$-strongly monotone, so the solution of the variational inequality (3.13) is unique; we denote it by $z \in F(T) \cap \mathrm{EP}(F_1)$.

Next, we claim that $\limsup_{n\to\infty}\langle \rho U(z) - \mu F(z),\, x_n - z\rangle \le 0$. Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that

$$\begin{aligned}
\limsup_{n\to\infty}\bigl\langle \rho U(z) - \mu F(z),\, x_n - z\bigr\rangle &= \limsup_{k\to\infty}\bigl\langle \rho U(z) - \mu F(z),\, x_{n_k} - z\bigr\rangle\\
&= \bigl\langle \rho U(z) - \mu F(z),\, w - z\bigr\rangle \le 0.
\end{aligned}$$

Next, we show that $x_n \to z$. We have

$$\begin{aligned}
\|x_{n+1} - z\|^2 &= \bigl\langle P_C[V_n] - z,\, x_{n+1} - z\bigr\rangle\\
&= \bigl\langle P_C[V_n] - V_n,\, P_C[V_n] - z\bigr\rangle + \bigl\langle V_n - z,\, x_{n+1} - z\bigr\rangle\\
&\le \Bigl\langle \alpha_n\bigl(\rho U(x_n) - \mu F(z)\bigr) + \gamma_n(x_n - z) + (1-\gamma_n)\Bigl[\Bigl(I - \frac{\alpha_n\mu}{1-\gamma_n}F\Bigr)\bigl(T(y_n)\bigr) - \Bigl(I - \frac{\alpha_n\mu}{1-\gamma_n}F\Bigr)\bigl(T(z)\bigr)\Bigr],\ x_{n+1} - z\Bigr\rangle\\
&= \alpha_n\rho\bigl\langle U(x_n) - U(z),\, x_{n+1} - z\bigr\rangle + \alpha_n\bigl\langle \rho U(z) - \mu F(z),\, x_{n+1} - z\bigr\rangle + \gamma_n\bigl\langle x_n - z,\, x_{n+1} - z\bigr\rangle\\
&\qquad + (1-\gamma_n)\Bigl\langle \Bigl(I - \frac{\alpha_n\mu}{1-\gamma_n}F\Bigr)\bigl(T(y_n)\bigr) - \Bigl(I - \frac{\alpha_n\mu}{1-\gamma_n}F\Bigr)\bigl(T(z)\bigr),\ x_{n+1} - z\Bigr\rangle\\
&\le (\gamma_n + \alpha_n\rho\tau)\|x_n - z\|\,\|x_{n+1} - z\| + \alpha_n\bigl\langle \rho U(z) - \mu F(z),\, x_{n+1} - z\bigr\rangle + (1 - \gamma_n - \alpha_n\nu)\|y_n - z\|\,\|x_{n+1} - z\|\\
&\le (\gamma_n + \alpha_n\rho\tau)\|x_n - z\|\,\|x_{n+1} - z\| + \alpha_n\bigl\langle \rho U(z) - \mu F(z),\, x_{n+1} - z\bigr\rangle\\
&\qquad + (1 - \gamma_n - \alpha_n\nu)\bigl\{\beta_n\|Sx_n - Sz\| + \beta_n\|Sz - z\| + (1-\beta_n)\|u_n - z\|\bigr\}\|x_{n+1} - z\|\\
&= (\gamma_n + \alpha_n\rho\tau)\|x_n - z\|\,\|x_{n+1} - z\| + \alpha_n\bigl\langle \rho U(z) - \mu F(z),\, x_{n+1} - z\bigr\rangle\\
&\qquad + (1 - \gamma_n - \alpha_n\nu)\bigl\{\beta_n\|Sx_n - Sz\| + \beta_n\|Sz - z\| + (1-\beta_n)\|T_{r_n}(x_n) - z\|\bigr\}\|x_{n+1} - z\|\\
&\le (\gamma_n + \alpha_n\rho\tau)\|x_n - z\|\,\|x_{n+1} - z\| + \alpha_n\bigl\langle \rho U(z) - \mu F(z),\, x_{n+1} - z\bigr\rangle\\
&\qquad + (1 - \gamma_n - \alpha_n\nu)\bigl\{\beta_n\|x_n - z\| + \beta_n\|Sz - z\| + (1-\beta_n)\|x_n - z\|\bigr\}\|x_{n+1} - z\|\\
&= \bigl(1 - \alpha_n(\nu - \rho\tau)\bigr)\|x_n - z\|\,\|x_{n+1} - z\| + \alpha_n\bigl\langle \rho U(z) - \mu F(z),\, x_{n+1} - z\bigr\rangle + (1 - \gamma_n - \alpha_n\nu)\beta_n\|Sz - z\|\,\|x_{n+1} - z\|\\
&\le \frac{1 - \alpha_n(\nu - \rho\tau)}{2}\bigl(\|x_n - z\|^2 + \|x_{n+1} - z\|^2\bigr) + \alpha_n\bigl\langle \rho U(z) - \mu F(z),\, x_{n+1} - z\bigr\rangle + (1 - \gamma_n - \alpha_n\nu)\beta_n\|Sz - z\|\,\|x_{n+1} - z\|,
\end{aligned}$$

which implies that

$$\begin{aligned}
\|x_{n+1} - z\|^2 &\le \frac{1 - \alpha_n(\nu - \rho\tau)}{1 + \alpha_n(\nu - \rho\tau)}\|x_n - z\|^2 + \frac{2\alpha_n}{1 + \alpha_n(\nu - \rho\tau)}\bigl\langle \rho U(z) - \mu F(z),\, x_{n+1} - z\bigr\rangle\\
&\qquad + \frac{2(1 - \gamma_n - \alpha_n\nu)\beta_n}{1 + \alpha_n(\nu - \rho\tau)}\|Sz - z\|\,\|x_{n+1} - z\|\\
&\le \bigl(1 - \alpha_n(\nu - \rho\tau)\bigr)\|x_n - z\|^2\\
&\qquad + \frac{2\alpha_n(\nu - \rho\tau)}{1 + \alpha_n(\nu - \rho\tau)}\Bigl\{\frac{1}{\nu - \rho\tau}\bigl\langle \rho U(z) - \mu F(z),\, x_{n+1} - z\bigr\rangle + \frac{(1 - \gamma_n - \alpha_n\nu)\beta_n}{\alpha_n(\nu - \rho\tau)}\|Sz - z\|\,\|x_{n+1} - z\|\Bigr\}.
\end{aligned}$$

Let $\upsilon_n = \alpha_n(\nu - \rho\tau)$ and

$$\delta_n = \frac{2\alpha_n(\nu - \rho\tau)}{1 + \alpha_n(\nu - \rho\tau)}\Bigl\{\frac{1}{\nu - \rho\tau}\bigl\langle \rho U(z) - \mu F(z),\, x_{n+1} - z\bigr\rangle + \frac{(1 - \gamma_n - \alpha_n\nu)\beta_n}{\alpha_n(\nu - \rho\tau)}\|Sz - z\|\,\|x_{n+1} - z\|\Bigr\}.$$

We have $\sum_{n=1}^{\infty}\alpha_n = \infty$ and

$$\limsup_{n\to\infty}\Bigl\{\frac{1}{\nu - \rho\tau}\bigl\langle \rho U(z) - \mu F(z),\, x_{n+1} - z\bigr\rangle + \frac{(1 - \gamma_n - \alpha_n\nu)\beta_n}{\alpha_n(\nu - \rho\tau)}\|Sz - z\|\,\|x_{n+1} - z\|\Bigr\} \le 0.$$

It follows that

$$\sum_{n=1}^{\infty}\upsilon_n = \infty \quad\text{and}\quad \limsup_{n\to\infty}\frac{\delta_n}{\upsilon_n} \le 0.$$

Thus, all the conditions of Lemma 2.7 are satisfied. Hence we deduce that $x_n \to z$. This completes the proof. □

Putting $\gamma_n = 0$ in Algorithm 3.1, we obtain the following result, which can be viewed as an extension and improvement of the method studied in [26].

Corollary 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F_1 : C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)-(A4) and let $S, T : C \to C$ be nonexpansive mappings such that $F(T) \cap \mathrm{EP}(F_1) \neq \emptyset$. Let $F : C \to C$ be a $k$-Lipschitzian and $\eta$-strongly monotone mapping, and let $U : C \to C$ be a $\tau$-Lipschitzian mapping. For a given $x_0 \in C$, let the iterative sequences $\{u_n\}$, $\{x_n\}$, and $\{y_n\}$ be generated by

$$\begin{cases} F_1(u_n, y) + \dfrac{1}{r_n}\langle y - u_n,\, u_n - x_n\rangle \ge 0, \quad \forall y \in C;\\[2mm] y_n = \beta_n S x_n + (1 - \beta_n)u_n;\\[2mm] x_{n+1} = P_C\bigl[\alpha_n \rho U(x_n) + (I - \alpha_n\mu F)\bigl(T(y_n)\bigr)\bigr], \quad n \ge 0, \end{cases}$$

where $\{r_n\}, \{\alpha_n\}, \{\beta_n\} \subset (0,1)$. Suppose that the parameters satisfy $0 < \mu < \frac{2\eta}{k^2}$ and $0 \le \rho\tau < \nu$, where $\nu = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$, and that $\{\alpha_n\}$, $\{\beta_n\}$, and $\{r_n\}$ satisfy conditions (b)-(e) of Algorithm 3.1. Then the sequence $\{x_n\}$ converges strongly to some element $z \in F(T) \cap \mathrm{EP}(F_1)$, which is also the unique solution of the variational inequality

$$\bigl\langle \rho U(z) - \mu F(z),\, x - z\bigr\rangle \le 0, \quad \forall x \in F(T) \cap \mathrm{EP}(F_1).$$

Putting $U = f$, $F = I$, $\rho = \mu = 1$, and $\gamma_n = 0$, we obtain an extension and improvement of the method considered in [12].

Corollary 3.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $F_1 : C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)-(A4) and let $S, T : C \to C$ be nonexpansive mappings such that $F(T) \cap \mathrm{EP}(F_1) \neq \emptyset$. Let $f : C \to C$ be a $\tau$-Lipschitzian mapping. For a given $x_0 \in C$, let the iterative sequences $\{u_n\}$, $\{x_n\}$, and $\{y_n\}$ be generated by

$$\begin{cases} F_1(u_n, y) + \dfrac{1}{r_n}\langle y - u_n,\, u_n - x_n\rangle \ge 0, \quad \forall y \in C;\\[2mm] y_n = \beta_n S x_n + (1 - \beta_n)u_n;\\[2mm] x_{n+1} = P_C\bigl[\alpha_n f(x_n) + (1 - \alpha_n)T(y_n)\bigr], \quad n \ge 0, \end{cases}$$

where $\{r_n\}$, $\{\alpha_n\}$, and $\{\beta_n\}$ are sequences in $(0,1)$ which satisfy conditions (b)-(e) of Algorithm 3.1. Then the sequence $\{x_n\}$ converges strongly to some element $z \in F(T) \cap \mathrm{EP}(F_1)$, which is also the unique solution of the variational inequality

$$\bigl\langle f(z) - z,\, x - z\bigr\rangle \le 0, \quad \forall x \in F(T) \cap \mathrm{EP}(F_1).$$

4 Examples

To illustrate Algorithm 3.1 and the convergence result, we consider the following examples.

Example 4.1 Let $\alpha_n = \frac{1}{2n}$, $\gamma_n = \frac{1}{2n}$, $\beta_n = \frac{1}{n^2}$, and $r_n = \frac{n}{n+1}$. Then we have $\alpha_n + \gamma_n = \frac{1}{n} < 1$,

$$\lim_{n\to\infty}\alpha_n = \lim_{n\to\infty}\gamma_n = \frac{1}{2}\lim_{n\to\infty}\frac{1}{n} = 0,$$

and

$$\sum_{n=1}^{\infty}\alpha_n = \frac{1}{2}\sum_{n=1}^{\infty}\frac{1}{n} = \infty.$$

The sequences { α n } and { γ n } satisfy conditions (a) and (b). Since

$$\lim_{n\to\infty}\frac{\beta_n}{\alpha_n} = \lim_{n\to\infty}\frac{2}{n} = 0,$$

condition (c) is satisfied. We compute

$$\alpha_{n-1} - \alpha_n = \frac{1}{2}\Bigl(\frac{1}{n-1} - \frac{1}{n}\Bigr) = \frac{1}{2n(n-1)}.$$

It is easy to show that $\sum_{n=1}^{\infty}|\alpha_{n-1} - \alpha_n| < \infty$. Similarly, we can show that $\sum_{n=1}^{\infty}|\gamma_{n-1} - \gamma_n| < \infty$ and $\sum_{n=1}^{\infty}|\beta_{n-1} - \beta_n| < \infty$, so the sequences $\{\alpha_n\}$, $\{\gamma_n\}$, and $\{\beta_n\}$ satisfy condition (d). We have

$$\liminf_{n\to\infty} r_n = \liminf_{n\to\infty}\frac{n}{n+1} = 1,$$

and

$$\sum_{n=1}^{\infty}|r_{n-1} - r_n| = \sum_{n=1}^{\infty}\Bigl|\frac{n-1}{n} - \frac{n}{n+1}\Bigr| = \sum_{n=1}^{\infty}\frac{1}{n(n+1)} \le \sum_{n=1}^{\infty}\frac{1}{n^2} < \infty.$$

Then the sequence $\{r_n\}$ satisfies condition (e).
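
A quick numerical sanity check (ours) of the summability conditions for these particular sequences; each partial sum telescopes to a finite limit.

```python
N = 100000
print(sum(abs(1/(2*(n-1)) - 1/(2*n)) for n in range(2, N)))   # ~0.5  (condition (d): alpha_n, gamma_n)
print(sum(abs(1/(n-1)**2 - 1/n**2) for n in range(2, N)))     # ~1.0  (condition (d): beta_n)
print(sum(abs((n-1)/n - n/(n+1)) for n in range(2, N)))       # ~0.5  (condition (e): r_n)
```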

Let the mappings $T, F, S, U : \mathbb{R} \to \mathbb{R}$ be defined by

$$T(x) = \frac{x}{2}, \qquad F(x) = \frac{2x + 3}{7}, \qquad S(x) = \frac{x}{3}, \qquad U(x) = \frac{x}{14}, \qquad x \in \mathbb{R},$$

and let the bifunction $F_1 : \mathbb{R}\times\mathbb{R} \to \mathbb{R}$ be defined by

$$F_1(x, y) = -3x^2 + xy + 2y^2, \quad (x, y) \in \mathbb{R}\times\mathbb{R}.$$

It is easy to show that $T$ and $S$ are nonexpansive mappings, $F$ is $1$-Lipschitzian and $\frac{1}{7}$-strongly monotone, and $U$ is $\frac{1}{7}$-Lipschitzian. It is clear that

$$\mathrm{EP}(F_1) \cap F(T) = \{0\}.$$

From the definition of F 1 , we have

$$0 \le F_1(u_n, y) + \frac{1}{r_n}\langle y - u_n,\, u_n - x_n\rangle = -3u_n^2 + u_n y + 2y^2 + \frac{1}{r_n}(y - u_n)(u_n - x_n).$$

Then

$$0 \le r_n\bigl(-3u_n^2 + u_n y + 2y^2\bigr) + \bigl(y u_n - y x_n - u_n^2 + u_n x_n\bigr) = 2r_n y^2 + (r_n u_n + u_n - x_n)y - 3r_n u_n^2 - u_n^2 + u_n x_n.$$

Let $B(y) = 2r_n y^2 + (r_n u_n + u_n - x_n)y - 3r_n u_n^2 - u_n^2 + u_n x_n$. Then $B(y)$ is a quadratic function of $y$ with coefficients $a = 2r_n$, $b = r_n u_n + u_n - x_n$, and $c = -3r_n u_n^2 - u_n^2 + u_n x_n$. We determine the discriminant $\Delta$ of $B$ as follows:

$$\begin{aligned}
\Delta &= b^2 - 4ac\\
&= (r_n u_n + u_n - x_n)^2 - 8r_n\bigl(-3r_n u_n^2 - u_n^2 + u_n x_n\bigr)\\
&= u_n^2 + 10 r_n u_n^2 + 25 r_n^2 u_n^2 - 2 x_n u_n - 10 r_n x_n u_n + x_n^2\\
&= (u_n + 5 r_n u_n)^2 - 2 x_n (u_n + 5 r_n u_n) + x_n^2\\
&= (u_n + 5 r_n u_n - x_n)^2.
\end{aligned}$$

We have $B(y) \ge 0$ for all $y \in \mathbb{R}$, so $B$ has at most one real root and hence $\Delta \le 0$; since $\Delta$ is a perfect square, $\Delta = 0$, and we obtain

$$u_n = \frac{x_n}{1 + 5 r_n}.$$
(4.1)

For every $n \ge 1$, from (4.1), we rewrite (3.1) as follows:

$$\begin{cases} y_n = \dfrac{x_n}{3n^2} + \Bigl(1 - \dfrac{1}{n^2}\Bigr)\dfrac{x_n}{1 + 5 r_n};\\[3mm] x_{n+1} = \dfrac{\rho x_n}{28 n} + \dfrac{x_n}{2n} + \Bigl(1 - \dfrac{1}{2n}\Bigr)\dfrac{y_n}{2} - \mu\,\dfrac{y_n + 3}{14 n}. \end{cases}$$

In all the tests we take $\rho = \frac{1}{15}$, $\mu = \frac{1}{7}$, and $N = 10$ for Algorithm 3.1. In this example $\eta = \frac{1}{7}$, $k = 1$, and $\tau = \frac{1}{7}$. It is easy to show that the parameters satisfy $0 < \mu < \frac{2\eta}{k^2}$ and $0 \le \rho\tau < \nu$, where $\nu = 1 - \sqrt{1 - \mu(2\eta - \mu k^2)}$.
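
The explicit recursion above is straightforward to run. The Python transcription below is ours (the paper's computations were done in Matlab), so the printed digits need not match Tables 1 and 2 exactly.

```python
rho, mu = 1/15, 1/7

def example_4_1(x1, N=10):
    """Run the explicit form of (3.1) for Example 4.1 (C = R, so P_C is the identity)."""
    x = x1
    for n in range(1, N + 1):
        r = n / (n + 1)                                  # r_n
        u = x / (1 + 5 * r)                              # u_n from (4.1)
        y = x / (3 * n**2) + (1 - 1 / n**2) * u          # y_n = beta_n*S(x_n) + (1 - beta_n)*u_n
        x = rho * x / (28 * n) + x / (2 * n) \
            + (1 - 1 / (2 * n)) * y / 2 - mu * (y + 3) / (14 * n)
        print(f"n={n:2d}  u_n={u: .6f}  y_n={y: .6f}  x_n+1={x: .6f}")
    return x

example_4_1(40.0)    # the iterates approach 0 = F(T) ∩ EP(F_1)
```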

The values of $\{u_n\}$, $\{y_n\}$, and $\{x_n\}$ for different values of $n$ are reported in Tables 1 and 2. All codes were written in Matlab.

Table 1 The values of $\{u_n\}$, $\{y_n\}$, and $\{x_n\}$ with initial values $x_1 = -40$ and $x_1 = 40$
Table 2 The values of $\{u_n\}$, $\{y_n\}$, and $\{x_n\}$ with initial value $x_1 = 30$

Remark 4.1 Table 1 and Figure 1 show that the sequences $\{u_n\}$, $\{y_n\}$, and $\{x_n\}$ converge to 0, where $\{0\} = F(T) \cap \mathrm{EP}(F_1)$.

Figure 1 The convergence of $\{u_n\}$, $\{y_n\}$, and $\{x_n\}$ with initial values $x_1 = -40$ and $x_1 = 40$.

Example 4.2 In this example we take the same mappings and parameters as in Example 4.1 except for $T$ and $F_1$.

Let $T : [1, 70] \to [1, 70]$ be defined by

$$T(x) = \frac{3x + 1}{4}, \quad x \in [1, 70],$$

and $F_1 : [1, 70] \times [1, 70] \to \mathbb{R}$ be defined by

$$F_1(x, y) = (y - x)(y + 2x - 3), \quad (x, y) \in [1, 70] \times [1, 70].$$

It is clear that

$$\mathrm{EP}(F_1) \cap F(T) = \{1\}.$$

By the definition of F 1 , we have

$$0 \le F_1(u_n, y) + \frac{1}{r_n}\langle y - u_n,\, u_n - x_n\rangle = (y - u_n)(y + 2u_n - 3) + \frac{1}{r_n}(y - u_n)(u_n - x_n).$$

Then

$$0 \le r_n(y - u_n)(y + 2u_n - 3) + \bigl(y u_n - y x_n - u_n^2 + u_n x_n\bigr) = r_n y^2 + (r_n u_n + u_n - x_n - 3r_n)y + 3r_n u_n - u_n^2 - 2r_n u_n^2 + u_n x_n.$$

Let $A(y) = r_n y^2 + (r_n u_n + u_n - x_n - 3 r_n)y + 3 r_n u_n - u_n^2 - 2 r_n u_n^2 + u_n x_n$. Then $A(y)$ is a quadratic function of $y$ with coefficients $a = r_n$, $b = r_n u_n + u_n - x_n - 3 r_n$, and $c = 3 r_n u_n - u_n^2 - 2 r_n u_n^2 + u_n x_n$. We determine the discriminant $\Delta$ of $A$ as follows:

$$\begin{aligned}
\Delta &= b^2 - 4ac\\
&= (r_n u_n + u_n - x_n - 3 r_n)^2 - 4 r_n\bigl(3 r_n u_n - u_n^2 - 2 r_n u_n^2 + u_n x_n\bigr)\\
&= 9 r_n^2 - 6 r_n u_n - 18 r_n^2 u_n + u_n^2 + 6 r_n u_n^2 + 9 r_n^2 u_n^2 + 6 r_n x_n - 2 u_n x_n - 6 r_n u_n x_n + x_n^2\\
&= (u_n - 3 r_n + 3 u_n r_n - x_n)^2.
\end{aligned}$$

We have $A(y) \ge 0$ for all $y \in \mathbb{R}$, so $A$ has at most one real root and hence $\Delta \le 0$; since $\Delta$ is a perfect square, $\Delta = 0$, and we obtain

$$u_n = \frac{x_n + 3 r_n}{1 + 3 r_n}.$$
(4.2)

For every $n \ge 1$, from (4.2), we rewrite (3.1) as follows:

$$\begin{cases} y_n = \dfrac{x_n}{3n^2} + \Bigl(1 - \dfrac{1}{n^2}\Bigr)\dfrac{x_n + 3 r_n}{1 + 3 r_n};\\[3mm] x_{n+1} = P_{[1,70]}\Bigl[\dfrac{\rho x_n}{28 n} + \dfrac{x_n}{2n} + \Bigl(1 - \dfrac{1}{2n}\Bigr)\dfrac{3 y_n + 1}{4} - \mu\,\dfrac{3 y_n + 7}{28 n}\Bigr]. \end{cases}$$
(4.3)
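
Recursion (4.3) can be transcribed in the same way (our Python sketch; the original computations were in Matlab). The projection $P_{[1,70]}$ is simple clipping to the interval.

```python
rho, mu = 1/15, 1/7
proj = lambda t: min(max(t, 1.0), 70.0)            # P_[1,70]

def example_4_2(x1, N=10):
    """Run recursion (4.3) of Example 4.2."""
    x = x1
    for n in range(1, N + 1):
        r = n / (n + 1)                              # r_n
        u = (x + 3 * r) / (1 + 3 * r)                # u_n from (4.2)
        y = x / (3 * n**2) + (1 - 1 / n**2) * u      # y_n
        x = proj(rho * x / (28 * n) + x / (2 * n)
                 + (1 - 1 / (2 * n)) * (3 * y + 1) / 4 - mu * (3 * y + 7) / (28 * n))
        print(f"n={n:2d}  u_n={u:.6f}  y_n={y:.6f}  x_n+1={x:.6f}")
    return x

example_4_2(30.0)    # the iterates approach 1 = F(T) ∩ EP(F_1)
```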

Remark 4.2 Table 2 and Figure 2 show that the sequences $\{u_n\}$, $\{y_n\}$, and $\{x_n\}$ converge to 1, where $\{1\} = F(T) \cap \mathrm{EP}(F_1)$.

Figure 2 The convergence of $\{u_n\}$, $\{y_n\}$, and $\{x_n\}$ with initial value $x_1 = 30$.

5 Conclusions

In this paper, we suggested and analyzed an iterative method for finding an element of the common set of solutions of EP (1.1) and HFPP (1.4) in real Hilbert spaces. This method can be viewed as a refinement and improvement of some existing methods for solving variational inequality problems, equilibrium problems, and hierarchical fixed point problems. Some existing methods (for example, those of [12, 13, 15, 18, 19, 21]) can be viewed as special cases of Algorithm 3.1; therefore, Algorithm 3.1 is expected to be widely applicable. In the hierarchical fixed point problem (1.4), if $S = I - (\rho U - \mu F)$, then we can obtain the variational inequality (3.13). In (3.13), if $U = 0$, then we get the variational inequality $\langle F(z), x - z\rangle \ge 0$, $\forall x \in F(T) \cap \mathrm{EP}(F_1)$, which is just the variational inequality studied by Suzuki [23].
