1 Introduction

Let $H$ be a real Hilbert space. Let $A:H\to H$ be a single-valued nonlinear mapping and $B:H\to 2^{H}$ be a set-valued mapping.

Now, we are concerned with the following variational inclusion: find a zero $x\in H$ of the sum of the two monotone operators $A$ and $B$, that is,

\[0\in Ax+Bx,\]
(1.1)

where $0$ is the zero vector in $H$.

The set of solutions of the problem (1.1) is denoted by $(A+B)^{-1}0$. If $H=\mathbb{R}^{m}$, then the problem (1.1) becomes the generalized equation introduced by Robinson [1]. If $A=0$, then the problem (1.1) becomes the inclusion problem introduced by Rockafellar [2]. It is well known that the problem (1.1) is among the most interesting and intensively studied classes of mathematical problems and has wide applications in the fields of optimization and control, economics and transportation equilibrium, engineering science, and many others. Over the past years, many existence results and iterative algorithms for various variational inequality and variational inclusion problems have been extended and generalized in various directions using novel and innovative techniques. A useful and important generalization is the general variational inclusion involving the sum of two nonlinear operators. Moudafi and Noor [3] studied the sensitivity analysis of variational inclusions by using the technique of resolvent equations. Recently, much attention has been given to developing iterative algorithms for solving variational inclusions. Dong et al. [4] analyzed the sensitivity of solutions to variational inequalities and variational inclusions by using a resolvent operator technique. By using the concept and technique of resolvent operators, Agarwal et al. [5] and Jeong [6] introduced and studied a new system of parametric generalized nonlinear mixed quasi-variational inclusions in a Hilbert space. Lan [7] introduced and studied a stable iteration procedure for a class of generalized mixed quasi-variational inclusion systems in Hilbert spaces. Recently, Zhang et al. [8] introduced a new iterative scheme for finding a common element of the set of solutions to the problem (1.1) and the set of fixed points of nonexpansive mappings in Hilbert spaces. Peng et al. [9] introduced another iterative scheme, based on the viscosity approximation method, for finding a common element of the set of solutions of a variational inclusion with a set-valued maximal monotone mapping and an inverse strongly monotone mapping, the set of solutions of an equilibrium problem, and the set of fixed points of a nonexpansive mapping. For some related work, see [9–23] and the references therein.

Recently, Takahashi et al. [24] introduced the following iterative algorithm for finding a zero of the sum of two monotone operators and a fixed point of a nonexpansive mapping:

\[x_{n+1}=\beta_n x_n+(1-\beta_n)S\bigl(\alpha_n x+(1-\alpha_n)J_{\lambda_n}^{B}(x_n-\lambda_n Ax_n)\bigr)\]
(1.2)

for all $n\ge0$. Under some assumptions, they proved that the sequence $\{x_n\}$ converges strongly to a point of $F(S)\cap(A+B)^{-1}0$.
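To fix ideas, here is a minimal numerical sketch of the scheme (1.2) on a toy problem in $\mathbb{R}^{2}$; the concrete choices of $A$, $B$, $S$, the anchor point $x$, and the parameter sequences are illustrative assumptions rather than the exact hypotheses of [24].

```python
import numpy as np

# Toy data (assumptions for illustration only):
#   A x = x - p            (1-inverse strongly monotone: gradient of (1/2)||x - p||^2)
#   B   = subdifferential of the l1-norm, so J_lambda^B is soft-thresholding
#   S   = projection onto the ball of radius 2 (a nonexpansive self-mapping)
p = np.array([1.5, -0.5])
A = lambda x: x - p
def J_B(lam, x):                         # resolvent (I + lam*B)^{-1}: soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
def S(x):                                # projection onto {x : ||x|| <= 2}
    nrm = np.linalg.norm(x)
    return x if nrm <= 2 else 2 * x / nrm

x_anchor = np.zeros(2)                   # the fixed vector x in (1.2)
x = np.array([5.0, 5.0])                 # x_0
for n in range(1, 2000):
    alpha_n, beta_n, lam_n = 1.0 / (n + 1), 0.5, 0.8
    y = J_B(lam_n, x - lam_n * A(x))     # forward (gradient) step, then backward (resolvent) step
    x = beta_n * x + (1 - beta_n) * S(alpha_n * x_anchor + (1 - alpha_n) * y)

print(x)  # approaches (0.5, 0), the unique point of F(S) ∩ (A+B)^{-1} 0 for these data
```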

Remark 1.1 We note that the algorithm (1.2) cannot be used to find the minimum-norm element, owing to the facts that $x\in C$ and that $S$ is a self-mapping of $C$. However, there exist a large number of problems for which one needs to find the minimum-norm solution (see, for example, [25–29]). A useful path to circumvent this difficulty is to use the metric projection; Bauschke and Borwein [30] and Censor and Zenios [31] provide reviews of the field. The main difficulty lies in the computation of the projection. Hence it is an interesting problem to find the minimum-norm element without using the projection.

Motivated and inspired by the works in this field, we first suggest the following two algorithms without using projection:

\[x_t=(1-\kappa)Sx_t+\kappa J_{\lambda}^{B}\bigl(t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t\bigr)\]

for all $t\in(0,1)$ and

\[x_{n+1}=(1-\kappa)Sx_n+\kappa J_{\lambda_n}^{B}\bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_n Ax_n\bigr)\]

for all $n\ge0$. Notice that these two algorithms are indeed well defined (see the next section). We show that the suggested algorithms converge strongly to a point $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$ which solves the following variational inequality:

\[\langle\gamma f(\tilde{x})-\tilde{x},\,\tilde{x}-z\rangle\ge0\]

for all $z\in F(S)\cap(A+B)^{-1}0$.

As special cases, we can approach the minimum-norm element in $F(S)\cap(A+B)^{-1}0$ without using the metric projection and give some applications.

2 Preliminaries

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Let $C$ be a nonempty closed convex subset of $H$.

(1) A mapping $S:C\to C$ is said to be nonexpansive if

\[\|Sx-Sy\|\le\|x-y\|\]

for all $x,y\in C$. We denote by $F(S)$ the set of fixed points of $S$.

(2) A mapping $A:C\to H$ is said to be $\alpha$-inverse strongly monotone if there exists $\alpha>0$ such that

\[\langle Ax-Ay,\,x-y\rangle\ge\alpha\|Ax-Ay\|^{2}\]

for all $x,y\in C$.

It is well known that, if $A$ is $\alpha$-inverse strongly monotone, then $\|Ax-Ay\|\le\frac{1}{\alpha}\|x-y\|$ for all $x,y\in C$.
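Indeed, this follows at once from the definition and the Cauchy-Schwarz inequality:

\[\alpha\|Ax-Ay\|^{2}\le\langle Ax-Ay,\,x-y\rangle\le\|Ax-Ay\|\,\|x-y\|.\]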

Let $B$ be a mapping from $H$ into $2^{H}$. The effective domain of $B$ is denoted by $\operatorname{dom}(B)$, that is, $\operatorname{dom}(B)=\{x\in H:Bx\neq\emptyset\}$.

(3) A multi-valued mapping $B$ is said to be a monotone operator on $H$ if

\[\langle x-y,\,u-v\rangle\ge0\]

for all $x,y\in\operatorname{dom}(B)$, $u\in Bx$, and $v\in By$.

(4) A monotone operator $B$ on $H$ is said to be maximal if its graph is not strictly contained in the graph of any other monotone operator on $H$.

Let $B$ be a maximal monotone operator on $H$ and $B^{-1}0=\{x\in H:0\in Bx\}$. For a maximal monotone operator $B$ on $H$ and $\lambda>0$, we may define a single-valued operator $J_{\lambda}^{B}=(I+\lambda B)^{-1}:H\to\operatorname{dom}(B)$, which is called the resolvent of $B$ for $\lambda$. It is well known that the resolvent $J_{\lambda}^{B}$ is firmly nonexpansive, i.e.,

\[\|J_{\lambda}^{B}x-J_{\lambda}^{B}y\|^{2}\le\langle J_{\lambda}^{B}x-J_{\lambda}^{B}y,\,x-y\rangle\]

for all $x,y\in H$, and that $B^{-1}0=F(J_{\lambda}^{B})$ for all $\lambda>0$. The following resolvent identity is well known: for any $\lambda>0$ and $\mu>0$,

\[J_{\lambda}^{B}x=J_{\mu}^{B}\Bigl(\frac{\mu}{\lambda}x+\Bigl(1-\frac{\mu}{\lambda}\Bigr)J_{\lambda}^{B}x\Bigr)\]
(2.1)

for all $x\in H$.
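To make the resolvent concrete, the following is a small numerical check of the identity (2.1), under the assumption that $B$ is the subdifferential of the $\ell_{1}$-norm on $\mathbb{R}^{n}$, in which case $J_{\lambda}^{B}=(I+\lambda B)^{-1}$ is componentwise soft-thresholding; the test vector and the parameters are arbitrary.

```python
import numpy as np

# Resolvent of B = subdifferential of ||.||_1 (an assumed example): soft-thresholding.
def J(lam, x):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=5)
lam, mu = 1.3, 0.4

lhs = J(lam, x)                                           # J_lambda^B x
rhs = J(mu, (mu / lam) * x + (1 - mu / lam) * J(lam, x))  # right-hand side of (2.1)
print(np.allclose(lhs, rhs))                              # True: (2.1) holds for this B
```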

We use the notation that $x_n\rightharpoonup x$ stands for the weak convergence of $\{x_n\}$ to $x$ and $x_n\to x$ stands for the strong convergence of $\{x_n\}$ to $x$, respectively.

We need the following lemmas for the next section.

Lemma 2.1 ([32])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A:C\to H$ be an $\alpha$-inverse strongly monotone mapping and $\lambda>0$ be a constant. Then we have

\[\|(I-\lambda A)x-(I-\lambda A)y\|^{2}\le\|x-y\|^{2}+\lambda(\lambda-2\alpha)\|Ax-Ay\|^{2}\]

for all $x,y\in C$. In particular, if $0\le\lambda\le2\alpha$, then $I-\lambda A$ is nonexpansive.
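For the reader's convenience, the estimate follows by expanding the square and applying the $\alpha$-inverse strong monotonicity of $A$:

\[
\begin{aligned}
\|(I-\lambda A)x-(I-\lambda A)y\|^{2} &=\|x-y\|^{2}-2\lambda\langle Ax-Ay,\,x-y\rangle+\lambda^{2}\|Ax-Ay\|^{2}\\
&\le\|x-y\|^{2}-2\lambda\alpha\|Ax-Ay\|^{2}+\lambda^{2}\|Ax-Ay\|^{2}\\
&=\|x-y\|^{2}+\lambda(\lambda-2\alpha)\|Ax-Ay\|^{2}.
\end{aligned}
\]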

Lemma 2.2 ([33])

Let $C$ be a closed convex subset of a Hilbert space $H$. Let $S:C\to C$ be a nonexpansive mapping. Then $F(S)$ is a closed convex subset of $C$ and the mapping $I-S$ is demiclosed at 0, i.e., whenever $\{x_n\}\subset C$ is such that $x_n\rightharpoonup x$ and $(I-S)x_n\to0$, then $(I-S)x=0$.

Lemma 2.3 ([1])

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Assume that the mapping $F:C\to H$ is monotone and weakly continuous along segments, that is, $F(x+ty)\rightharpoonup F(x)$ as $t\to0$. Then the variational inequality

\[x^{*}\in C,\qquad\langle Fx^{*},\,x-x^{*}\rangle\ge0\]

for all $x\in C$, is equivalent to the dual variational inequality

\[x^{*}\in C,\qquad\langle Fx,\,x-x^{*}\rangle\ge0\]

for all $x\in C$.

Lemma 2.4 ([34])

Let $\{x_n\}$, $\{y_n\}$ be bounded sequences in a Banach space $X$ and $\{\beta_n\}$ be a sequence in $[0,1]$ with

\[0<\liminf_{n\to\infty}\beta_n\le\limsup_{n\to\infty}\beta_n<1.\]

Suppose that $x_{n+1}=(1-\beta_n)y_n+\beta_n x_n$ for all $n\ge0$ and

\[\limsup_{n\to\infty}\bigl(\|y_{n+1}-y_n\|-\|x_{n+1}-x_n\|\bigr)\le0.\]

Then $\lim_{n\to\infty}\|y_n-x_n\|=0$.

Lemma 2.5 ([35])

Assume that $\{a_n\}$ is a sequence of nonnegative real numbers such that

\[a_{n+1}\le(1-\gamma_n)a_n+\delta_n\gamma_n\]

for all $n\ge1$, where $\{\gamma_n\}$ is a sequence in $(0,1)$ and $\{\delta_n\}$ is a sequence such that

(a) $\sum_{n=1}^{\infty}\gamma_n=\infty$;

(b) $\limsup_{n\to\infty}\delta_n\le0$ or $\sum_{n=1}^{\infty}|\delta_n\gamma_n|<\infty$.

Then $\lim_{n\to\infty}a_n=0$.
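As a quick sanity check (not needed for the proofs), one can iterate the recursion of Lemma 2.5 numerically with the assumed choice $\gamma_n=\delta_n=\frac{1}{n+1}$, which satisfies (a) and (b):

```python
# Illustration of Lemma 2.5 with gamma_n = delta_n = 1/(n+1):
# sum gamma_n diverges and delta_n -> 0, so a_n is forced to 0.
a = 10.0
for n in range(1, 200_000):
    gamma = 1.0 / (n + 1)
    delta = 1.0 / (n + 1)
    a = (1 - gamma) * a + delta * gamma    # recursion taken with equality
print(a)                                   # about 1e-4 and still decreasing towards 0
```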

3 Main results

In this section, we prove our main results.

Theorem 3.1 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping from $C$ into $H$. Let $f:C\to H$ be a $\rho$-contraction and $\gamma$ be a constant such that $0<\gamma<\frac{1}{\rho}$. Let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_{\lambda}^{B}=(I+\lambda B)^{-1}$ be the resolvent of $B$ for any $\lambda>0$ and let $S$ be a nonexpansive mapping from $C$ into itself such that $F(S)\cap(A+B)^{-1}0\neq\emptyset$. Let $\lambda$ and $\kappa$ be two constants satisfying $a\le\lambda\le b$, where $[a,b]\subset(0,2\alpha)$, and $\kappa\in(0,1)$. For any $t\in(0,1-\frac{\lambda}{2\alpha})$, let $\{x_t\}\subset C$ be a net generated by

\[x_t=(1-\kappa)Sx_t+\kappa J_{\lambda}^{B}\bigl(t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t\bigr).\]
(3.1)

Then the net $\{x_t\}$ converges strongly, as $t\to0^{+}$, to a point $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$, which solves the following variational inequality:

\[\langle\gamma f(\tilde{x})-\tilde{x},\,\tilde{x}-z\rangle\ge0\]

for all $z\in F(S)\cap(A+B)^{-1}0$.

Proof First, we show that the net $\{x_t\}$ is well defined. For any $t\in(0,1-\frac{\lambda}{2\alpha})$, we define a mapping $W:=(1-\kappa)S+\kappa J_{\lambda}^{B}(t\gamma f+(1-t)I-\lambda A)$. Note that $J_{\lambda}^{B}$, $S$, and $I-\frac{\lambda}{1-t}A$ (see Lemma 2.1) are nonexpansive. For any $x,y\in C$, we have

\[
\begin{aligned}
\|Wx-Wy\| &= \Bigl\|(1-\kappa)(Sx-Sy)+\kappa\Bigl(J_{\lambda}^{B}\bigl(t\gamma f(x)+(1-t)(I-\tfrac{\lambda}{1-t}A)x\bigr)-J_{\lambda}^{B}\bigl(t\gamma f(y)+(1-t)(I-\tfrac{\lambda}{1-t}A)y\bigr)\Bigr)\Bigr\|\\
&\le(1-\kappa)\|Sx-Sy\|+\kappa\bigl\|t\gamma\bigl(f(x)-f(y)\bigr)+(1-t)\bigl[(I-\tfrac{\lambda}{1-t}A)x-(I-\tfrac{\lambda}{1-t}A)y\bigr]\bigr\|\\
&\le(1-\kappa)\|x-y\|+\kappa t\gamma\|f(x)-f(y)\|+(1-t)\kappa\bigl\|(I-\tfrac{\lambda}{1-t}A)x-(I-\tfrac{\lambda}{1-t}A)y\bigr\|\\
&\le(1-\kappa)\|x-y\|+t\kappa\gamma\rho\|x-y\|+(1-t)\kappa\|x-y\|\\
&=\bigl[1-(1-\gamma\rho)\kappa t\bigr]\|x-y\|,
\end{aligned}
\]

which implies that the mapping $W$ is a contraction on $C$. We use $x_t$ to denote the unique fixed point of $W$ in $C$; therefore, $\{x_t\}$ is well defined. Set $y_t=J_{\lambda}^{B}u_t$ and $u_t=t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t$ for all $t>0$. Taking $z\in F(S)\cap(A+B)^{-1}0$, it is obvious that $z=Sz=J_{\lambda}^{B}(z-\lambda Az)$ for all $\lambda>0$ and so

\[z=Sz=J_{\lambda}^{B}(z-\lambda Az)=J_{\lambda}^{B}\bigl(tz+(1-t)(I-\tfrac{\lambda}{1-t}A)z\bigr)\]

for all $t\in(0,1-\frac{\lambda}{2\alpha})$. From (3.1), it follows that

\[
\begin{aligned}
\|x_t-z\| &=\bigl\|(1-\kappa)(Sx_t-z)+\kappa(y_t-z)\bigr\|\\
&\le(1-\kappa)\|Sx_t-z\|+\kappa\|y_t-z\|\\
&\le(1-\kappa)\|x_t-z\|+\kappa\|y_t-z\|.
\end{aligned}
\]

Hence we get $\|x_t-z\|\le\|y_t-z\|$. Since $J_{\lambda}^{B}$ is nonexpansive, we have

\[
\begin{aligned}
\|y_t-z\| &=\bigl\|J_{\lambda}^{B}\bigl(t\gamma f(x_t)+(1-t)(x_t-\tfrac{\lambda}{1-t}Ax_t)\bigr)-J_{\lambda}^{B}\bigl(tz+(1-t)(z-\tfrac{\lambda}{1-t}Az)\bigr)\bigr\|\\
&\le\bigl\|\bigl(t\gamma f(x_t)+(1-t)(x_t-\tfrac{\lambda}{1-t}Ax_t)\bigr)-\bigl(tz+(1-t)(z-\tfrac{\lambda}{1-t}Az)\bigr)\bigr\|\\
&=\bigl\|(1-t)\bigl((x_t-\tfrac{\lambda}{1-t}Ax_t)-(z-\tfrac{\lambda}{1-t}Az)\bigr)+t\bigl(\gamma f(x_t)-z\bigr)\bigr\|\\
&\le(1-t)\bigl\|(I-\tfrac{\lambda}{1-t}A)x_t-(I-\tfrac{\lambda}{1-t}A)z\bigr\|+t\gamma\|f(x_t)-f(z)\|+t\|\gamma f(z)-z\|\\
&\le(1-t)\|x_t-z\|+t\gamma\rho\|x_t-z\|+t\|\gamma f(z)-z\|.
\end{aligned}
\]
(3.2)

Thus it follows that

\[\|x_t-z\|\le\frac{1}{1-\gamma\rho}\|\gamma f(z)-z\|.\]

Therefore, $\{x_t\}$ is bounded. We deduce immediately that $\{f(x_t)\}$, $\{Ax_t\}$, $\{Sx_t\}$, $\{u_t\}$, and $\{y_t\}$ are also bounded. By using the convexity of $\|\cdot\|^{2}$ and the $\alpha$-inverse strong monotonicity of $A$, from (3.2) we derive

\[
\begin{aligned}
\|x_t-z\|^{2}\le\|y_t-z\|^{2} &\le\bigl\|(1-t)\bigl((x_t-\tfrac{\lambda}{1-t}Ax_t)-(z-\tfrac{\lambda}{1-t}Az)\bigr)+t\bigl(\gamma f(x_t)-z\bigr)\bigr\|^{2}\\
&\le(1-t)\bigl\|(x_t-\tfrac{\lambda}{1-t}Ax_t)-(z-\tfrac{\lambda}{1-t}Az)\bigr\|^{2}+t\|\gamma f(x_t)-z\|^{2}\\
&=(1-t)\bigl\|(x_t-z)-\tfrac{\lambda}{1-t}(Ax_t-Az)\bigr\|^{2}+t\|\gamma f(x_t)-z\|^{2}\\
&=(1-t)\Bigl(\|x_t-z\|^{2}-\tfrac{2\lambda}{1-t}\langle Ax_t-Az,\,x_t-z\rangle+\tfrac{\lambda^{2}}{(1-t)^{2}}\|Ax_t-Az\|^{2}\Bigr)+t\|\gamma f(x_t)-z\|^{2}\\
&\le(1-t)\Bigl(\|x_t-z\|^{2}-\tfrac{2\alpha\lambda}{1-t}\|Ax_t-Az\|^{2}+\tfrac{\lambda^{2}}{(1-t)^{2}}\|Ax_t-Az\|^{2}\Bigr)+t\|\gamma f(x_t)-z\|^{2}\\
&=(1-t)\Bigl(\|x_t-z\|^{2}+\tfrac{\lambda}{(1-t)^{2}}\bigl(\lambda-2(1-t)\alpha\bigr)\|Ax_t-Az\|^{2}\Bigr)+t\|\gamma f(x_t)-z\|^{2}\\
&=(1-t)\|x_t-z\|^{2}+\tfrac{\lambda}{1-t}\bigl(\lambda-2(1-t)\alpha\bigr)\|Ax_t-Az\|^{2}+t\|\gamma f(x_t)-z\|^{2}
\end{aligned}
\]
(3.3)

and so

\[\frac{\lambda}{1-t}\bigl(2(1-t)\alpha-\lambda\bigr)\|Ax_t-Az\|^{2}\le t\|\gamma f(x_t)-z\|^{2}-t\|x_t-z\|^{2}\to0.\]

By the assumption, we have $2(1-t)\alpha-\lambda>0$ for all $t\in(0,1-\frac{\lambda}{2\alpha})$ and so we obtain

\[\lim_{t\to0^{+}}\|Ax_t-Az\|=0.\]
(3.4)

Next, we show $\|x_t-Sx_t\|\to0$. By using the firm nonexpansiveness of $J_{\lambda}^{B}$, we have

\[
\begin{aligned}
\|y_t-z\|^{2} &=\bigl\|J_{\lambda}^{B}\bigl(t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t\bigr)-z\bigr\|^{2}\\
&=\bigl\|J_{\lambda}^{B}\bigl(t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t\bigr)-J_{\lambda}^{B}(z-\lambda Az)\bigr\|^{2}\\
&\le\bigl\langle t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t-(z-\lambda Az),\,y_t-z\bigr\rangle\\
&=\tfrac12\Bigl(\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t-(z-\lambda Az)\bigr\|^{2}+\|y_t-z\|^{2}\\
&\qquad-\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda(Ax_t-Az)-y_t\bigr\|^{2}\Bigr).
\end{aligned}
\]

Thus it follows that

\[\|y_t-z\|^{2}\le\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t-(z-\lambda Az)\bigr\|^{2}-\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda(Ax_t-Az)-y_t\bigr\|^{2}.\]

By the nonexpansiveness of $I-\frac{\lambda}{1-t}A$, we have

\[
\begin{aligned}
\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda Ax_t-(z-\lambda Az)\bigr\|^{2} &=\bigl\|(1-t)\bigl((x_t-\tfrac{\lambda}{1-t}Ax_t)-(z-\tfrac{\lambda}{1-t}Az)\bigr)+t\bigl(\gamma f(x_t)-z\bigr)\bigr\|^{2}\\
&\le(1-t)\bigl\|(x_t-\tfrac{\lambda}{1-t}Ax_t)-(z-\tfrac{\lambda}{1-t}Az)\bigr\|^{2}+t\|\gamma f(x_t)-z\|^{2}\\
&\le(1-t)\|x_t-z\|^{2}+t\|\gamma f(x_t)-z\|^{2}
\end{aligned}
\]

and thus

\[\|x_t-z\|^{2}\le\|y_t-z\|^{2}\le(1-t)\|x_t-z\|^{2}+t\|\gamma f(x_t)-z\|^{2}-\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda(Ax_t-Az)-y_t\bigr\|^{2}.\]

Hence it follows that

\[\bigl\|t\gamma f(x_t)+(1-t)x_t-\lambda(Ax_t-Az)-y_t\bigr\|^{2}\le t\|\gamma f(x_t)-z\|^{2}\to0.\]

Since $\|Ax_t-Az\|\to0$, we deduce $\lim_{t\to0^{+}}\|x_t-y_t\|=0$, which implies that

\[\lim_{t\to0^{+}}\|x_t-Sx_t\|=0.\]
(3.5)

From (3.2), we have

\[
\begin{aligned}
\|y_t-z\|^{2} &\le\bigl\|(1-t)\bigl((x_t-\tfrac{\lambda}{1-t}Ax_t)-(z-\tfrac{\lambda}{1-t}Az)\bigr)+t\bigl(\gamma f(x_t)-z\bigr)\bigr\|^{2}\\
&=(1-t)^{2}\bigl\|(x_t-\tfrac{\lambda}{1-t}Ax_t)-(z-\tfrac{\lambda}{1-t}Az)\bigr\|^{2}\\
&\quad+2t(1-t)\bigl\langle\gamma f(x_t)-z,\,(x_t-\tfrac{\lambda}{1-t}Ax_t)-(z-\tfrac{\lambda}{1-t}Az)\bigr\rangle+t^{2}\|\gamma f(x_t)-z\|^{2}\\
&\le(1-t)^{2}\|x_t-z\|^{2}+2t(1-t)\bigl\langle\gamma f(x_t)-z,\,x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\bigr\rangle+t^{2}\|\gamma f(x_t)-z\|^{2}\\
&=(1-t)^{2}\|x_t-z\|^{2}+2t(1-t)\bigl\langle\gamma f(x_t)-\gamma f(z),\,x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\bigr\rangle\\
&\quad+2t(1-t)\bigl\langle\gamma f(z)-z,\,x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\bigr\rangle+t^{2}\|\gamma f(x_t)-z\|^{2}.
\end{aligned}
\]

Note that $\|x_t-z\|\le\|y_t-z\|$. Then we obtain

\[
\begin{aligned}
\|x_t-z\|^{2} &\le(1-t)^{2}\|x_t-z\|^{2}+2t(1-t)\gamma\|f(x_t)-f(z)\|\Bigl(\|x_t-z\|+\tfrac{\lambda}{1-t}\|Ax_t-Az\|\Bigr)\\
&\quad+2t(1-t)\bigl\langle\gamma f(z)-z,\,x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\bigr\rangle+t^{2}\|\gamma f(x_t)-z\|^{2}\\
&\le(1-t)^{2}\|x_t-z\|^{2}+2t(1-t)\gamma\rho\|x_t-z\|^{2}+2t\lambda\gamma\rho\|x_t-z\|\,\|Ax_t-Az\|\\
&\quad+2t(1-t)\bigl\langle\gamma f(z)-z,\,x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\bigr\rangle+t^{2}\|\gamma f(x_t)-z\|^{2}\\
&\le\bigl[1-2(1-\gamma\rho)t\bigr]\|x_t-z\|^{2}+2t\Bigl[(1-t)\bigl\langle\gamma f(z)-z,\,x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\bigr\rangle\\
&\quad+\tfrac{t}{2}\bigl(\|\gamma f(x_t)-z\|^{2}+\|x_t-z\|^{2}\bigr)+\lambda\gamma\rho\|x_t-z\|\,\|Ax_t-Az\|\Bigr].
\end{aligned}
\]

Thus it follows that

\[
\begin{aligned}
\|x_t-z\|^{2} &\le\frac{1}{1-\gamma\rho}\Bigl(\bigl\langle\gamma f(z)-z,\,x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\bigr\rangle+\tfrac{t}{2}\bigl(\|\gamma f(x_t)-z\|^{2}+\|x_t-z\|^{2}\bigr)\\
&\qquad+t\|\gamma f(z)-z\|\,\bigl\|x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\bigr\|+\lambda\gamma\rho\|x_t-z\|\,\|Ax_t-Az\|\Bigr)\\
&\le\frac{1}{1-\gamma\rho}\bigl\langle\gamma f(z)-z,\,x_t-z\bigr\rangle+\bigl(t+\|Ax_t-Az\|\bigr)M,
\end{aligned}
\]
(3.6)

where $M$ is some constant such that

\[\frac{1}{1-\gamma\rho}\sup\Bigl\{\tfrac12\bigl(\|\gamma f(x_t)-z\|^{2}+\|x_t-z\|^{2}\bigr)+\|\gamma f(z)-z\|\,\bigl\|x_t-\tfrac{\lambda}{1-t}(Ax_t-Az)-z\bigr\|,\ \lambda\gamma\rho\|x_t-z\|:t\in\bigl(0,1-\tfrac{\lambda}{2\alpha}\bigr)\Bigr\}\le M.\]

Next, we show that $\{x_t\}$ is relatively norm-compact as $t\to0^{+}$. Assume that $\{t_n\}\subset(0,1-\frac{\lambda}{2\alpha})$ is such that $t_n\to0^{+}$ as $n\to\infty$. Put $x_n:=x_{t_n}$. From (3.6), we have

\[\|x_n-z\|^{2}\le\frac{1}{1-\gamma\rho}\bigl\langle\gamma f(z)-z,\,x_n-z\bigr\rangle+\bigl(t_n+\|Ax_n-Az\|\bigr)M.\]
(3.7)

Since $\{x_n\}$ is bounded, without loss of generality, we may assume that $x_{n_j}\rightharpoonup\tilde{x}\in C$. Hence $y_{n_j}\rightharpoonup\tilde{x}$ because $\|x_n-y_n\|\to0$. From (3.5), we have

\[\lim_{n\to\infty}\|x_n-Sx_n\|=0.\]
(3.8)

We can apply Lemma 2.2 to (3.8) to deduce $\tilde{x}\in F(S)$. Further, we show that $\tilde{x}$ is also in $(A+B)^{-1}0$. Let $v\in Bu$. Note that $y_n=J_{\lambda}^{B}(t_n\gamma f(x_n)+(1-t_n)x_n-\lambda Ax_n)$ for all $n\ge1$. Then we have

\[t_n\gamma f(x_n)+(1-t_n)x_n-\lambda Ax_n\in(I+\lambda B)y_n\quad\Longrightarrow\quad\frac{t_n\gamma f(x_n)}{\lambda}+\frac{1-t_n}{\lambda}x_n-Ax_n-\frac{y_n}{\lambda}\in By_n.\]

Since $B$ is monotone, we have, for all $(u,v)\in B$,

\[
\begin{aligned}
&\Bigl\langle\frac{t_n\gamma f(x_n)}{\lambda}+\frac{1-t_n}{\lambda}x_n-Ax_n-\frac{y_n}{\lambda}-v,\,y_n-u\Bigr\rangle\ge0\\
&\quad\Longrightarrow\quad\bigl\langle t_n\gamma f(x_n)+(1-t_n)x_n-\lambda Ax_n-y_n-\lambda v,\,y_n-u\bigr\rangle\ge0\\
&\quad\Longrightarrow\quad\langle Ax_n+v,\,y_n-u\rangle\le\frac{1}{\lambda}\langle x_n-y_n,\,y_n-u\rangle-\frac{t_n}{\lambda}\bigl\langle x_n-\gamma f(x_n),\,y_n-u\bigr\rangle\\
&\quad\Longrightarrow\quad\langle A\tilde{x}+v,\,y_n-u\rangle\le\frac{1}{\lambda}\langle x_n-y_n,\,y_n-u\rangle-\frac{t_n}{\lambda}\bigl\langle x_n-\gamma f(x_n),\,y_n-u\bigr\rangle+\langle A\tilde{x}-Ax_n,\,y_n-u\rangle\\
&\quad\Longrightarrow\quad\langle A\tilde{x}+v,\,y_n-u\rangle\le\frac{1}{\lambda}\|x_n-y_n\|\,\|y_n-u\|+\frac{t_n}{\lambda}\bigl\|x_n-\gamma f(x_n)\bigr\|\,\|y_n-u\|+\|A\tilde{x}-Ax_n\|\,\|y_n-u\|.
\end{aligned}
\]

Thus it follows that

\[\langle A\tilde{x}+v,\,\tilde{x}-u\rangle\le\frac{1}{\lambda}\|x_{n_j}-y_{n_j}\|\,\|y_{n_j}-u\|+\frac{t_{n_j}}{\lambda}\bigl\|x_{n_j}-\gamma f(x_{n_j})\bigr\|\,\|y_{n_j}-u\|+\|A\tilde{x}-Ax_{n_j}\|\,\|y_{n_j}-u\|+\langle A\tilde{x}+v,\,\tilde{x}-y_{n_j}\rangle.\]
(3.9)

Since $\langle x_{n_j}-\tilde{x},\,Ax_{n_j}-A\tilde{x}\rangle\ge\alpha\|Ax_{n_j}-A\tilde{x}\|^{2}$, $Ax_{n_j}\to Az$, and $x_{n_j}\rightharpoonup\tilde{x}$, it follows that $Ax_{n_j}\to A\tilde{x}$. We also observe that $t_n\to0$ and $\|y_n-x_n\|\to0$. Then, from (3.9), we can derive $\langle A\tilde{x}+v,\,\tilde{x}-u\rangle\le0$, that is, $\langle-A\tilde{x}-v,\,\tilde{x}-u\rangle\ge0$. Since $B$ is maximal monotone, we have $-A\tilde{x}\in B\tilde{x}$. This shows that $0\in(A+B)\tilde{x}$. Hence we have $\tilde{x}\in F(S)\cap(A+B)^{-1}0$. Therefore, substituting $\tilde{x}$ for $z$ in (3.7), we get

\[\|x_n-\tilde{x}\|^{2}\le\frac{1}{1-\gamma\rho}\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,x_n-\tilde{x}\bigr\rangle+\bigl(t_n+\|Ax_n-A\tilde{x}\|\bigr)M.\]

Consequently, the weak convergence of $\{x_{n_j}\}$ to $\tilde{x}$ actually implies that $x_{n_j}\to\tilde{x}$ strongly. This proves the relative norm-compactness of the net $\{x_t\}$ as $t\to0^{+}$.

Now, we return to (3.7) and, taking the limit as $n\to\infty$, we have

\[\|\tilde{x}-z\|^{2}\le\frac{1}{1-\gamma\rho}\bigl\langle\gamma f(z)-z,\,\tilde{x}-z\bigr\rangle\]

for all $z\in F(S)\cap(A+B)^{-1}0$. In particular, $\tilde{x}$ solves the following variational inequality:

\[\tilde{x}\in F(S)\cap(A+B)^{-1}0,\qquad\langle\gamma f(z)-z,\,\tilde{x}-z\rangle\ge0\]

for all $z\in F(S)\cap(A+B)^{-1}0$, or the equivalent dual variational inequality (see Lemma 2.3):

\[\tilde{x}\in F(S)\cap(A+B)^{-1}0,\qquad\langle\gamma f(\tilde{x})-\tilde{x},\,\tilde{x}-z\rangle\ge0\]

for all $z\in F(S)\cap(A+B)^{-1}0$. Hence $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$. Clearly, this is sufficient to conclude that the entire net $\{x_t\}$ converges to $\tilde{x}$. This completes the proof. □

Theorem 3.2 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping from $C$ into $H$. Let $f:C\to H$ be a $\rho$-contraction and $\gamma$ be a constant such that $0<\gamma<\frac{1}{\rho}$. Let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_{\lambda}^{B}=(I+\lambda B)^{-1}$ be the resolvent of $B$ for any $\lambda>0$ and let $S$ be a nonexpansive mapping from $C$ into itself such that $F(S)\cap(A+B)^{-1}0\neq\emptyset$. For any $x_0\in C$, let $\{x_n\}\subset C$ be a sequence generated by

\[x_{n+1}=(1-\kappa)Sx_n+\kappa J_{\lambda_n}^{B}\bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_nAx_n\bigr)\]
(3.10)

for all $n\ge0$, where $\kappa\in(0,1)$, $\{\lambda_n\}\subset(0,2\alpha)$, and $\{\alpha_n\}\subset(0,1)$ satisfy the following conditions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\frac{\alpha_{n+1}}{\alpha_n}=1$, and $\sum_{n}\alpha_n=\infty$;

(b) $a(1-\alpha_n)\le\lambda_n\le b(1-\alpha_n)$, where $[a,b]\subset(0,2\alpha)$, and $\lim_{n\to\infty}\frac{\lambda_{n+1}-\lambda_n}{\alpha_{n+1}}=0$.

Then the sequence $\{x_n\}$ converges strongly to a point $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$, which solves the following variational inequality:

\[\langle\gamma f(\tilde{x})-\tilde{x},\,\tilde{x}-z\rangle\ge0\]

for all $z\in F(S)\cap(A+B)^{-1}0$.
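Before turning to the proof, here is a minimal numerical sketch of the iteration (3.10); every concrete choice below ($A$, $B$, $S$, $f$, $\gamma$, $\kappa$, and the parameter sequences) is an illustrative assumption satisfying the hypotheses, not part of the theorem.

```python
import numpy as np

# Toy problem in R^2 (assumptions): A x = x - p (alpha = 1), B = subdifferential of ||.||_1
# (so J_lambda^B is soft-thresholding), S = projection onto the ball of radius 2,
# f = 0 and gamma = 1/2, so the limit is the minimum-norm element of F(S) ∩ (A+B)^{-1} 0.
p = np.array([1.5, -0.5])
A = lambda x: x - p
f = lambda x: np.zeros_like(x)                  # a rho-contraction (rho arbitrarily small)
gamma, kappa = 0.5, 0.5

def J_B(lam, x):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def S(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 2 else 2 * x / nrm

x = np.array([5.0, 5.0])                        # x_0
for n in range(1, 5000):
    a_n = 1.0 / (n + 1)                         # alpha_n: satisfies condition (a)
    lam_n = 1.0 * (1 - a_n)                     # lambda_n: satisfies condition (b) with a = b = 1
    u = a_n * gamma * f(x) + (1 - a_n) * x - lam_n * A(x)
    x = (1 - kappa) * S(x) + kappa * J_B(lam_n, u)

print(x)  # close to (0.5, 0), the unique zero of A + B (which also lies in F(S))
```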

Proof Set $y_n=J_{\lambda_n}^{B}u_n$ and $u_n=\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_nAx_n$ for all $n\ge0$. Pick $z\in F(S)\cap(A+B)^{-1}0$. It is obvious that

\[z=Sz=J_{\lambda_n}^{B}(z-\lambda_nAz)=J_{\lambda_n}^{B}\Bigl(\alpha_nz+(1-\alpha_n)\Bigl(z-\tfrac{\lambda_n}{1-\alpha_n}Az\Bigr)\Bigr)\]

for all $n\ge0$. Since $J_{\lambda_n}^{B}$, $S$, and $I-\frac{\lambda_n}{1-\alpha_n}A$ are nonexpansive for all $n\ge0$, we have

\[
\begin{aligned}
\|y_n-z\| &=\bigl\|J_{\lambda_n}^{B}\bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_nAx_n\bigr)-z\bigr\|\\
&=\bigl\|J_{\lambda_n}^{B}\bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n)\bigr)-J_{\lambda_n}^{B}\bigl(\alpha_nz+(1-\alpha_n)(z-\tfrac{\lambda_n}{1-\alpha_n}Az)\bigr)\bigr\|\\
&\le\bigl\|\bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n)\bigr)-\bigl(\alpha_nz+(1-\alpha_n)(z-\tfrac{\lambda_n}{1-\alpha_n}Az)\bigr)\bigr\|\\
&=\bigl\|(1-\alpha_n)\bigl((x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n)-(z-\tfrac{\lambda_n}{1-\alpha_n}Az)\bigr)+\alpha_n\bigl(\gamma f(x_n)-z\bigr)\bigr\|\\
&\le(1-\alpha_n)\|x_n-z\|+\alpha_n\|\gamma f(x_n)-\gamma f(z)\|+\alpha_n\|\gamma f(z)-z\|\\
&\le\bigl[1-(1-\gamma\rho)\alpha_n\bigr]\|x_n-z\|+\alpha_n\|\gamma f(z)-z\|.
\end{aligned}
\]
(3.11)

Hence we have

\[
\begin{aligned}
\|x_{n+1}-z\| &\le(1-\kappa)\|Sx_n-z\|+\kappa\|y_n-z\|\\
&\le(1-\kappa)\|x_n-z\|+\kappa\bigl[1-(1-\gamma\rho)\alpha_n\bigr]\|x_n-z\|+\kappa\alpha_n\|\gamma f(z)-z\|\\
&=\bigl[1-(1-\gamma\rho)\kappa\alpha_n\bigr]\|x_n-z\|+\kappa\alpha_n\|\gamma f(z)-z\|.
\end{aligned}
\]

By induction, we have

\[\|x_{n+1}-z\|\le\max\Bigl\{\|x_0-z\|,\ \frac{1}{1-\gamma\rho}\|\gamma f(z)-z\|\Bigr\}.\]

Therefore, $\{x_n\}$ is bounded. Since $A$ is $\alpha$-inverse strongly monotone, it is $\frac{1}{\alpha}$-Lipschitz continuous. We deduce immediately that $\{f(x_n)\}$, $\{Sx_n\}$, $\{Ax_n\}$, $\{u_n\}$, and $\{y_n\}$ are also bounded. By using the convexity of $\|\cdot\|^{2}$ and the $\alpha$-inverse strong monotonicity of $A$, it follows from (3.11) that

\[
\begin{aligned}
&\bigl\|(1-\alpha_n)\bigl((x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n)-(z-\tfrac{\lambda_n}{1-\alpha_n}Az)\bigr)+\alpha_n\bigl(\gamma f(x_n)-z\bigr)\bigr\|^{2}\\
&\quad\le(1-\alpha_n)\bigl\|(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n)-(z-\tfrac{\lambda_n}{1-\alpha_n}Az)\bigr\|^{2}+\alpha_n\|\gamma f(x_n)-z\|^{2}\\
&\quad=(1-\alpha_n)\bigl\|(x_n-z)-\tfrac{\lambda_n}{1-\alpha_n}(Ax_n-Az)\bigr\|^{2}+\alpha_n\|\gamma f(x_n)-z\|^{2}\\
&\quad=(1-\alpha_n)\Bigl(\|x_n-z\|^{2}-\tfrac{2\lambda_n}{1-\alpha_n}\langle Ax_n-Az,\,x_n-z\rangle+\tfrac{\lambda_n^{2}}{(1-\alpha_n)^{2}}\|Ax_n-Az\|^{2}\Bigr)+\alpha_n\|\gamma f(x_n)-z\|^{2}\\
&\quad\le(1-\alpha_n)\Bigl(\|x_n-z\|^{2}-\tfrac{2\alpha\lambda_n}{1-\alpha_n}\|Ax_n-Az\|^{2}+\tfrac{\lambda_n^{2}}{(1-\alpha_n)^{2}}\|Ax_n-Az\|^{2}\Bigr)+\alpha_n\|\gamma f(x_n)-z\|^{2}\\
&\quad=(1-\alpha_n)\Bigl(\|x_n-z\|^{2}+\tfrac{\lambda_n}{(1-\alpha_n)^{2}}\bigl(\lambda_n-2(1-\alpha_n)\alpha\bigr)\|Ax_n-Az\|^{2}\Bigr)+\alpha_n\|\gamma f(x_n)-z\|^{2}.
\end{aligned}
\]
(3.12)

By the condition (b), we have $\lambda_n-2(1-\alpha_n)\alpha\le0$ for all $n\ge0$. Then, from (3.11) and (3.12), we obtain

\[\|J_{\lambda_n}^{B}u_n-z\|^{2}\le(1-\alpha_n)\Bigl(\|x_n-z\|^{2}+\tfrac{\lambda_n}{(1-\alpha_n)^{2}}\bigl(\lambda_n-2(1-\alpha_n)\alpha\bigr)\|Ax_n-Az\|^{2}\Bigr)+\alpha_n\|\gamma f(x_n)-z\|^{2}.\]
(3.13)

From (3.10), it follows that

\[\|x_{n+1}-z\|^{2}=\bigl\|(1-\kappa)(Sx_n-z)+\kappa\bigl(J_{\lambda_n}^{B}u_n-z\bigr)\bigr\|^{2}\le(1-\kappa)\|x_n-z\|^{2}+\kappa\|J_{\lambda_n}^{B}u_n-z\|^{2}.\]
(3.14)

Next, we estimate $\|x_{n+1}-x_n\|$. In fact, we have

\[\|x_{n+2}-x_{n+1}\|=\bigl\|(1-\kappa)(Sx_{n+1}-Sx_n)+\kappa(y_{n+1}-y_n)\bigr\|\le(1-\kappa)\|x_{n+1}-x_n\|+\kappa\|y_{n+1}-y_n\|\]

and

\[
\begin{aligned}
\|y_{n+1}-y_n\| &=\|J_{\lambda_{n+1}}^{B}u_{n+1}-J_{\lambda_n}^{B}u_n\|\\
&\le\|J_{\lambda_{n+1}}^{B}u_{n+1}-J_{\lambda_{n+1}}^{B}u_n\|+\|J_{\lambda_{n+1}}^{B}u_n-J_{\lambda_n}^{B}u_n\|\\
&\le\bigl\|\bigl(\alpha_{n+1}\gamma f(x_{n+1})+(1-\alpha_{n+1})x_{n+1}-\lambda_{n+1}Ax_{n+1}\bigr)-\bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_nAx_n\bigr)\bigr\|\\
&\quad+\|J_{\lambda_{n+1}}^{B}u_n-J_{\lambda_n}^{B}u_n\|\\
&=\bigl\|\alpha_{n+1}\gamma\bigl(f(x_{n+1})-f(x_n)\bigr)+(\alpha_{n+1}-\alpha_n)\gamma f(x_n)+(1-\alpha_{n+1})\bigl[(I-\tfrac{\lambda_{n+1}}{1-\alpha_{n+1}}A)x_{n+1}-(I-\tfrac{\lambda_{n+1}}{1-\alpha_{n+1}}A)x_n\bigr]\\
&\quad+(\alpha_n-\alpha_{n+1})x_n+(\lambda_n-\lambda_{n+1})Ax_n\bigr\|+\|J_{\lambda_{n+1}}^{B}u_n-J_{\lambda_n}^{B}u_n\|\\
&\le\alpha_{n+1}\gamma\rho\|x_{n+1}-x_n\|+|\alpha_{n+1}-\alpha_n|\bigl(\gamma\|f(x_n)\|+\|x_n\|\bigr)+(1-\alpha_{n+1})\|x_{n+1}-x_n\|\\
&\quad+|\lambda_n-\lambda_{n+1}|\,\|Ax_n\|+\|J_{\lambda_{n+1}}^{B}u_n-J_{\lambda_n}^{B}u_n\|\\
&=\bigl[1-(1-\gamma\rho)\alpha_{n+1}\bigr]\|x_{n+1}-x_n\|+|\alpha_{n+1}-\alpha_n|\bigl(\gamma\|f(x_n)\|+\|x_n\|\bigr)+|\lambda_n-\lambda_{n+1}|\,\|Ax_n\|+\|J_{\lambda_{n+1}}^{B}u_n-J_{\lambda_n}^{B}u_n\|.
\end{aligned}
\]

By the resolvent identity (2.1), we have

\[J_{\lambda_{n+1}}^{B}u_n=J_{\lambda_n}^{B}\Bigl(\frac{\lambda_n}{\lambda_{n+1}}u_n+\Bigl(1-\frac{\lambda_n}{\lambda_{n+1}}\Bigr)J_{\lambda_{n+1}}^{B}u_n\Bigr).\]

Thus it follows that

\[
\begin{aligned}
\|J_{\lambda_{n+1}}^{B}u_n-J_{\lambda_n}^{B}u_n\| &=\Bigl\|J_{\lambda_n}^{B}\Bigl(\frac{\lambda_n}{\lambda_{n+1}}u_n+\Bigl(1-\frac{\lambda_n}{\lambda_{n+1}}\Bigr)J_{\lambda_{n+1}}^{B}u_n\Bigr)-J_{\lambda_n}^{B}u_n\Bigr\|\\
&\le\Bigl\|\frac{\lambda_n}{\lambda_{n+1}}u_n+\Bigl(1-\frac{\lambda_n}{\lambda_{n+1}}\Bigr)J_{\lambda_{n+1}}^{B}u_n-u_n\Bigr\|\\
&=\frac{|\lambda_{n+1}-\lambda_n|}{\lambda_{n+1}}\|u_n-J_{\lambda_{n+1}}^{B}u_n\|
\end{aligned}
\]

and so

\[
\begin{aligned}
\|x_{n+2}-x_{n+1}\| &\le(1-\kappa)\|x_{n+1}-x_n\|+\kappa\|y_{n+1}-y_n\|\\
&\le(1-\kappa)\|x_{n+1}-x_n\|+\kappa\bigl[1-(1-\gamma\rho)\alpha_{n+1}\bigr]\|x_{n+1}-x_n\|+\kappa|\alpha_{n+1}-\alpha_n|\bigl(\gamma\|f(x_n)\|+\|x_n\|\bigr)\\
&\quad+\kappa|\lambda_n-\lambda_{n+1}|\,\|Ax_n\|+\kappa\frac{|\lambda_{n+1}-\lambda_n|}{\lambda_{n+1}}\|u_n-J_{\lambda_{n+1}}^{B}u_n\|\\
&\le\bigl[1-(1-\gamma\rho)\kappa\alpha_{n+1}\bigr]\|x_{n+1}-x_n\|+(1-\gamma\rho)\kappa\alpha_{n+1}\Bigl[\frac{|\alpha_{n+1}-\alpha_n|}{\alpha_{n+1}}\cdot\frac{\gamma\|f(x_n)\|+\|x_n\|}{1-\gamma\rho}\\
&\quad+\frac{|\lambda_n-\lambda_{n+1}|}{\alpha_{n+1}}\cdot\frac{\|Ax_n\|}{1-\gamma\rho}+\frac{|\lambda_{n+1}-\lambda_n|}{\alpha_{n+1}\lambda_{n+1}}\cdot\frac{\|u_n-J_{\lambda_{n+1}}^{B}u_n\|}{1-\gamma\rho}\Bigr].
\end{aligned}
\]

By the assumptions, we know that $\frac{|\alpha_{n+1}-\alpha_n|}{\alpha_{n+1}}\to0$ and $\frac{|\lambda_{n+1}-\lambda_n|}{\alpha_{n+1}}\to0$. Then, from Lemma 2.5, we get

\[\lim_{n\to\infty}\|x_{n+1}-x_n\|=0.\]
(3.15)

Thus, from (3.13) and (3.14), it follows that

\[
\begin{aligned}
\|x_{n+1}-z\|^{2} &\le(1-\kappa)\|x_n-z\|^{2}+\kappa\|J_{\lambda_n}^{B}u_n-z\|^{2}\\
&\le(1-\kappa)\|x_n-z\|^{2}+\kappa\Bigl[(1-\alpha_n)\Bigl(\|x_n-z\|^{2}+\tfrac{\lambda_n}{(1-\alpha_n)^{2}}\bigl(\lambda_n-2(1-\alpha_n)\alpha\bigr)\|Ax_n-Az\|^{2}\Bigr)+\alpha_n\|\gamma f(x_n)-z\|^{2}\Bigr]\\
&=\bigl[1-\kappa\alpha_n\bigr]\|x_n-z\|^{2}+\kappa\tfrac{\lambda_n}{1-\alpha_n}\bigl(\lambda_n-2(1-\alpha_n)\alpha\bigr)\|Ax_n-Az\|^{2}+\kappa\alpha_n\|\gamma f(x_n)-z\|^{2}\\
&\le\|x_n-z\|^{2}+\kappa\tfrac{\lambda_n}{1-\alpha_n}\bigl(\lambda_n-2(1-\alpha_n)\alpha\bigr)\|Ax_n-Az\|^{2}+\kappa\alpha_n\|\gamma f(x_n)-z\|^{2}
\end{aligned}
\]

and so

\[
\begin{aligned}
\kappa\frac{\lambda_n}{1-\alpha_n}\bigl(2(1-\alpha_n)\alpha-\lambda_n\bigr)\|Ax_n-Az\|^{2} &\le\|x_n-z\|^{2}-\|x_{n+1}-z\|^{2}+\kappa\alpha_n\|\gamma f(x_n)-z\|^{2}\\
&\le\bigl(\|x_n-z\|+\|x_{n+1}-z\|\bigr)\|x_{n+1}-x_n\|+\kappa\alpha_n\|\gamma f(x_n)-z\|^{2}.
\end{aligned}
\]

Since $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\|x_{n+1}-x_n\|=0$, and $\liminf_{n\to\infty}\kappa\frac{\lambda_n}{1-\alpha_n}\bigl(2(1-\alpha_n)\alpha-\lambda_n\bigr)>0$, we have

\[\lim_{n\to\infty}\|Ax_n-Az\|=0.\]
(3.16)

Next, we show $\|x_n-Sx_n\|\to0$. By using the firm nonexpansiveness of $J_{\lambda_n}^{B}$, we have

\[
\begin{aligned}
\|J_{\lambda_n}^{B}u_n-z\|^{2} &=\bigl\|J_{\lambda_n}^{B}\bigl(\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_nAx_n\bigr)-J_{\lambda_n}^{B}(z-\lambda_nAz)\bigr\|^{2}\\
&\le\bigl\langle\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_nAx_n-(z-\lambda_nAz),\,J_{\lambda_n}^{B}u_n-z\bigr\rangle\\
&=\tfrac12\Bigl(\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_nAx_n-(z-\lambda_nAz)\bigr\|^{2}+\|J_{\lambda_n}^{B}u_n-z\|^{2}\\
&\qquad-\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_n(Ax_n-Az)-J_{\lambda_n}^{B}u_n\bigr\|^{2}\Bigr).
\end{aligned}
\]

From the condition (b) and the $\alpha$-inverse strong monotonicity of $A$, we know that $I-\frac{\lambda_n}{1-\alpha_n}A$ is nonexpansive. Hence it follows that

\[
\begin{aligned}
\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_nAx_n-(z-\lambda_nAz)\bigr\|^{2} &=\bigl\|(1-\alpha_n)\bigl((x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n)-(z-\tfrac{\lambda_n}{1-\alpha_n}Az)\bigr)+\alpha_n\bigl(\gamma f(x_n)-z\bigr)\bigr\|^{2}\\
&\le(1-\alpha_n)\bigl\|(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n)-(z-\tfrac{\lambda_n}{1-\alpha_n}Az)\bigr\|^{2}+\alpha_n\|\gamma f(x_n)-z\|^{2}\\
&\le(1-\alpha_n)\|x_n-z\|^{2}+\alpha_n\|\gamma f(x_n)-z\|^{2}
\end{aligned}
\]

and thus

\[
\begin{aligned}
\|J_{\lambda_n}^{B}u_n-z\|^{2} &\le\tfrac12\Bigl((1-\alpha_n)\|x_n-z\|^{2}+\alpha_n\|\gamma f(x_n)-z\|^{2}+\|J_{\lambda_n}^{B}u_n-z\|^{2}\\
&\qquad-\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n-\lambda_n(Ax_n-Az)\bigr\|^{2}\Bigr),
\end{aligned}
\]

that is,

\[
\begin{aligned}
\|J_{\lambda_n}^{B}u_n-z\|^{2} &\le(1-\alpha_n)\|x_n-z\|^{2}+\alpha_n\|\gamma f(x_n)-z\|^{2}-\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n-\lambda_n(Ax_n-Az)\bigr\|^{2}\\
&=(1-\alpha_n)\|x_n-z\|^{2}+\alpha_n\|\gamma f(x_n)-z\|^{2}-\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n\bigr\|^{2}\\
&\quad+2\lambda_n\bigl\langle\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n,\,Ax_n-Az\bigr\rangle-\lambda_n^{2}\|Ax_n-Az\|^{2}\\
&\le(1-\alpha_n)\|x_n-z\|^{2}+\alpha_n\|\gamma f(x_n)-z\|^{2}-\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n\bigr\|^{2}\\
&\quad+2\lambda_n\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n\bigr\|\,\|Ax_n-Az\|.
\end{aligned}
\]

This together with (3.14) implies that

\[
\begin{aligned}
\|x_{n+1}-z\|^{2} &\le(1-\kappa)\|x_n-z\|^{2}+\kappa(1-\alpha_n)\|x_n-z\|^{2}+\kappa\alpha_n\|\gamma f(x_n)-z\|^{2}-\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n\bigr\|^{2}\\
&\quad+2\lambda_n\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n\bigr\|\,\|Ax_n-Az\|\\
&=\bigl[1-\kappa\alpha_n\bigr]\|x_n-z\|^{2}+\kappa\alpha_n\|\gamma f(x_n)-z\|^{2}-\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n\bigr\|^{2}\\
&\quad+2\lambda_n\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n\bigr\|\,\|Ax_n-Az\|
\end{aligned}
\]

and hence

\[
\begin{aligned}
\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n\bigr\|^{2} &\le\|x_n-z\|^{2}-\|x_{n+1}-z\|^{2}-\kappa\alpha_n\|x_n-z\|^{2}+\kappa\alpha_n\|\gamma f(x_n)-z\|^{2}\\
&\quad+2\lambda_n\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n\bigr\|\,\|Ax_n-Az\|\\
&\le\bigl(\|x_n-z\|+\|x_{n+1}-z\|\bigr)\|x_{n+1}-x_n\|+\kappa\alpha_n\|\gamma f(x_n)-z\|^{2}\\
&\quad+2\lambda_n\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n\bigr\|\,\|Ax_n-Az\|.
\end{aligned}
\]

Since $\|x_{n+1}-x_n\|\to0$, $\alpha_n\to0$, and $\|Ax_n-Az\|\to0$ (by (3.16)), we deduce

\[\lim_{n\to\infty}\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-J_{\lambda_n}^{B}u_n\bigr\|=0.\]

This implies that

\[\lim_{n\to\infty}\|x_n-J_{\lambda_n}^{B}u_n\|=0.\]
(3.17)

Combining (3.10), (3.15), and (3.17), we get

\[\lim_{n\to\infty}\|x_n-Sx_n\|=0.\]
(3.18)

Put $\tilde{x}=\lim_{t\to0^{+}}x_t=P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$, where $\{x_t\}$ is the net defined by (3.1).

Finally, we show that $x_n\to\tilde{x}$. Taking $z=\tilde{x}$ in (3.16), we get $\|Ax_n-A\tilde{x}\|\to0$. First, we prove $\limsup_{n\to\infty}\langle\gamma f(\tilde{x})-\tilde{x},\,x_n-\tilde{x}\rangle\le0$. We take a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that

\[\limsup_{n\to\infty}\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,x_n-\tilde{x}\bigr\rangle=\lim_{i\to\infty}\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,x_{n_i}-\tilde{x}\bigr\rangle.\]

There exists a subsequence $\{x_{n_{i_j}}\}$ of $\{x_{n_i}\}$ which converges weakly to a point $w\in C$. Hence $\{y_{n_{i_j}}\}$ also converges weakly to $w$ because $\|x_{n_{i_j}}-y_{n_{i_j}}\|\to0$. By the demiclosedness principle for nonexpansive mappings (see Lemma 2.2) and (3.18), we deduce $w\in F(S)$. Furthermore, by an argument similar to that of Theorem 3.1, we can show that $w$ is also in $(A+B)^{-1}0$. Hence we have $w\in F(S)\cap(A+B)^{-1}0$. This implies that

\[\limsup_{n\to\infty}\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,x_n-\tilde{x}\bigr\rangle=\lim_{j\to\infty}\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,x_{n_{i_j}}-\tilde{x}\bigr\rangle=\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,w-\tilde{x}\bigr\rangle.\]

Note that $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(\gamma f(\tilde{x}))$. Then we have

\[\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,w-\tilde{x}\bigr\rangle\le0\]

for all $w\in F(S)\cap(A+B)^{-1}0$. Therefore, it follows that

\[\limsup_{n\to\infty}\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,x_n-\tilde{x}\bigr\rangle\le0.\]

From (3.10), we have

\[
\begin{aligned}
\|x_{n+1}-\tilde{x}\|^{2} &\le(1-\kappa)\|x_n-\tilde{x}\|^{2}+\kappa\|J_{\lambda_n}^{B}u_n-\tilde{x}\|^{2}\\
&=(1-\kappa)\|x_n-\tilde{x}\|^{2}+\kappa\bigl\|J_{\lambda_n}^{B}u_n-J_{\lambda_n}^{B}(\tilde{x}-\lambda_nA\tilde{x})\bigr\|^{2}\\
&\le(1-\kappa)\|x_n-\tilde{x}\|^{2}+\kappa\bigl\|u_n-(\tilde{x}-\lambda_nA\tilde{x})\bigr\|^{2}\\
&=(1-\kappa)\|x_n-\tilde{x}\|^{2}+\kappa\bigl\|\alpha_n\gamma f(x_n)+(1-\alpha_n)x_n-\lambda_nAx_n-(\tilde{x}-\lambda_nA\tilde{x})\bigr\|^{2}\\
&=(1-\kappa)\|x_n-\tilde{x}\|^{2}+\kappa\bigl\|(1-\alpha_n)\bigl((x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n)-(\tilde{x}-\tfrac{\lambda_n}{1-\alpha_n}A\tilde{x})\bigr)+\alpha_n\bigl(\gamma f(x_n)-\tilde{x}\bigr)\bigr\|^{2}\\
&=(1-\kappa)\|x_n-\tilde{x}\|^{2}+\kappa\Bigl((1-\alpha_n)^{2}\bigl\|(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n)-(\tilde{x}-\tfrac{\lambda_n}{1-\alpha_n}A\tilde{x})\bigr\|^{2}\\
&\quad+2\alpha_n(1-\alpha_n)\bigl\langle\gamma f(x_n)-\tilde{x},\,(x_n-\tfrac{\lambda_n}{1-\alpha_n}Ax_n)-(\tilde{x}-\tfrac{\lambda_n}{1-\alpha_n}A\tilde{x})\bigr\rangle+\alpha_n^{2}\|\gamma f(x_n)-\tilde{x}\|^{2}\Bigr)\\
&\le(1-\kappa)\|x_n-\tilde{x}\|^{2}+\kappa\Bigl((1-\alpha_n)^{2}\|x_n-\tilde{x}\|^{2}-2\alpha_n\lambda_n\bigl\langle\gamma f(x_n)-\tilde{x},\,Ax_n-A\tilde{x}\bigr\rangle\\
&\quad+2\alpha_n(1-\alpha_n)\gamma\bigl\langle f(x_n)-f(\tilde{x}),\,x_n-\tilde{x}\bigr\rangle+2\alpha_n(1-\alpha_n)\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,x_n-\tilde{x}\bigr\rangle+\alpha_n^{2}\|\gamma f(x_n)-\tilde{x}\|^{2}\Bigr)\\
&\le(1-\kappa)\|x_n-\tilde{x}\|^{2}+\kappa\Bigl((1-\alpha_n)^{2}\|x_n-\tilde{x}\|^{2}+2\alpha_n\lambda_n\|\gamma f(x_n)-\tilde{x}\|\,\|Ax_n-A\tilde{x}\|\\
&\quad+2\alpha_n(1-\alpha_n)\gamma\rho\|x_n-\tilde{x}\|^{2}+2\alpha_n(1-\alpha_n)\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,x_n-\tilde{x}\bigr\rangle+\alpha_n^{2}\|\gamma f(x_n)-\tilde{x}\|^{2}\Bigr)\\
&\le\bigl[1-2\kappa(1-\gamma\rho)\alpha_n\bigr]\|x_n-\tilde{x}\|^{2}+2\alpha_n\kappa\lambda_n\|\gamma f(x_n)-\tilde{x}\|\,\|Ax_n-A\tilde{x}\|\\
&\quad+2\alpha_n\kappa(1-\alpha_n)\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,x_n-\tilde{x}\bigr\rangle+\kappa\alpha_n^{2}\bigl(\|\gamma f(x_n)-\tilde{x}\|^{2}+\|x_n-\tilde{x}\|^{2}\bigr)\\
&=\bigl[1-2\kappa(1-\gamma\rho)\alpha_n\bigr]\|x_n-\tilde{x}\|^{2}+2\kappa(1-\gamma\rho)\alpha_n\Bigl[\frac{\lambda_n}{1-\gamma\rho}\|\gamma f(x_n)-\tilde{x}\|\,\|Ax_n-A\tilde{x}\|\\
&\quad+\frac{1-\alpha_n}{1-\gamma\rho}\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,x_n-\tilde{x}\bigr\rangle+\frac{\alpha_n}{2(1-\gamma\rho)}\bigl(\|\gamma f(x_n)-\tilde{x}\|^{2}+\|x_n-\tilde{x}\|^{2}\bigr)\Bigr].
\end{aligned}
\]

It is clear that $\sum_{n}2\kappa(1-\gamma\rho)\alpha_n=\infty$ and

\[\limsup_{n\to\infty}\Bigl\{\frac{\lambda_n}{1-\gamma\rho}\|\gamma f(x_n)-\tilde{x}\|\,\|Ax_n-A\tilde{x}\|+\frac{1-\alpha_n}{1-\gamma\rho}\bigl\langle\gamma f(\tilde{x})-\tilde{x},\,x_n-\tilde{x}\bigr\rangle+\frac{\alpha_n}{2(1-\gamma\rho)}\bigl(\|\gamma f(x_n)-\tilde{x}\|^{2}+\|x_n-\tilde{x}\|^{2}\bigr)\Bigr\}\le0.\]

Therefore, we can apply Lemma 2.5 to conclude that $x_n\to\tilde{x}$. This completes the proof. □

Remark 3.3 One quite often seeks a particular solution of a given nonlinear problem, in particular, the minimum-norm element. For instance, consider a closed convex subset $C$ of a Hilbert space $H_1$ and a bounded linear operator $W:H_1\to H_2$, where $H_2$ is another Hilbert space. The $C$-constrained pseudoinverse of $W$, $W_C$, is then defined as the minimum-norm solution of the constrained minimization problem

\[W_C(b):=\arg\min_{x\in C}\|Wx-b\|,\]

which is equivalent to the fixed point problem

\[u=\operatorname{proj}_C\bigl(u-\mu W^{*}(Wu-b)\bigr),\]

where $W^{*}$ is the adjoint of $W$, $\mu>0$ is a constant, and $b\in H_2$ is such that $P_{\overline{W(C)}}(b)\in W(C)$. From Theorems 3.1 and 3.2, we get the following corollaries, which can find the minimum-norm element in $F(S)\cap(A+B)^{-1}0$; that is, find $\tilde{x}\in F(S)\cap(A+B)^{-1}0$ such that

\[\tilde{x}=\arg\min_{x\in F(S)\cap(A+B)^{-1}0}\|x\|.\]
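As a small illustration of the fixed-point formulation above, the sketch below runs $u\mapsto\operatorname{proj}_C(u-\mu W^{*}(Wu-b))$ for a toy least-squares problem; the choice $C=$ nonnegative orthant and the data $W$, $b$, $\mu$ are assumptions made only for the example.

```python
import numpy as np

# Toy C-constrained least-squares problem: minimize ||W u - b|| over u >= 0.
rng = np.random.default_rng(1)
W = rng.normal(size=(5, 3))
b = rng.normal(size=5)
mu = 1.0 / np.linalg.norm(W.T @ W, 2)          # step size below 2 / ||W||^2

proj_C = lambda u: np.maximum(u, 0.0)          # projection onto the nonnegative orthant

u = np.zeros(3)
for _ in range(5000):
    u = proj_C(u - mu * W.T @ (W @ u - b))     # the fixed-point iteration from Remark 3.3

print(u)  # a C-constrained least-squares solution; selecting the minimum-norm one
          # (without projections onto the solution set) is what the corollaries below address
```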

Corollary 3.4 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping from $C$ into $H$. Let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_{\lambda}^{B}=(I+\lambda B)^{-1}$ be the resolvent of $B$ for any $\lambda>0$ and let $S$ be a nonexpansive mapping from $C$ into itself such that $F(S)\cap(A+B)^{-1}0\neq\emptyset$. Let $\lambda$ and $\kappa$ be two constants satisfying $a\le\lambda\le b$, where $[a,b]\subset(0,2\alpha)$, and $\kappa\in(0,1)$. For any $t\in(0,1-\frac{\lambda}{2\alpha})$, let $\{x_t\}\subset C$ be a net generated by

\[x_t=(1-\kappa)Sx_t+\kappa J_{\lambda}^{B}\bigl((1-t)x_t-\lambda Ax_t\bigr).\]

Then the net $\{x_t\}$ converges strongly, as $t\to0^{+}$, to a point $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(0)$, which is the minimum-norm element in $F(S)\cap(A+B)^{-1}0$.

Corollary 3.5 Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping from $C$ into $H$ and let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_{\lambda}^{B}=(I+\lambda B)^{-1}$ be the resolvent of $B$ for any $\lambda>0$ and assume that $(A+B)^{-1}0\neq\emptyset$. Let $\lambda$ be a constant satisfying $a\le\lambda\le b$, where $[a,b]\subset(0,2\alpha)$. For any $t\in(0,1-\frac{\lambda}{2\alpha})$, let $\{x_t\}\subset C$ be a net generated by

\[x_t=J_{\lambda}^{B}\bigl((1-t)x_t-\lambda Ax_t\bigr).\]

Then the net $\{x_t\}$ converges strongly, as $t\to0^{+}$, to a point $\tilde{x}=P_{(A+B)^{-1}0}(0)$, which is the minimum-norm element in $(A+B)^{-1}0$.

Corollary 3.6 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping from $C$ into $H$. Let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_{\lambda}^{B}=(I+\lambda B)^{-1}$ be the resolvent of $B$ for any $\lambda>0$ and let $S$ be a nonexpansive mapping from $C$ into itself such that $F(S)\cap(A+B)^{-1}0\neq\emptyset$. For any $x_0\in C$, let $\{x_n\}\subset C$ be a sequence generated by

\[x_{n+1}=(1-\kappa)Sx_n+\kappa J_{\lambda_n}^{B}\bigl((1-\alpha_n)x_n-\lambda_nAx_n\bigr)\]

for all $n\ge0$, where $\kappa\in(0,1)$, $\{\lambda_n\}\subset(0,2\alpha)$, and $\{\alpha_n\}\subset(0,1)$ satisfy the following conditions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\frac{\alpha_{n+1}}{\alpha_n}=1$, and $\sum_{n}\alpha_n=\infty$;

(b) $a(1-\alpha_n)\le\lambda_n\le b(1-\alpha_n)$, where $[a,b]\subset(0,2\alpha)$, and $\lim_{n\to\infty}\frac{\lambda_{n+1}-\lambda_n}{\alpha_n}=0$.

Then the sequence $\{x_n\}$ converges strongly to a point $\tilde{x}=P_{F(S)\cap(A+B)^{-1}0}(0)$, which is the minimum-norm element in $F(S)\cap(A+B)^{-1}0$.

Corollary 3.7 Let $C$ be a closed convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping from $C$ into $H$ and let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_{\lambda}^{B}=(I+\lambda B)^{-1}$ be the resolvent of $B$ for any $\lambda>0$ and assume that $(A+B)^{-1}0\neq\emptyset$. For any $x_0\in C$, let $\{x_n\}\subset C$ be a sequence generated by

\[x_{n+1}=(1-\kappa)x_n+\kappa J_{\lambda_n}^{B}\bigl((1-\alpha_n)x_n-\lambda_nAx_n\bigr)\]

for all $n\ge0$, where $\kappa\in(0,1)$, $\{\lambda_n\}\subset(0,2\alpha)$, and $\{\alpha_n\}\subset(0,1)$ satisfy the following conditions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\frac{\alpha_{n+1}}{\alpha_n}=1$, and $\sum_{n}\alpha_n=\infty$;

(b) $a(1-\alpha_n)\le\lambda_n\le b(1-\alpha_n)$, where $[a,b]\subset(0,2\alpha)$, and $\lim_{n\to\infty}\frac{\lambda_{n+1}-\lambda_n}{\alpha_n}=0$.

Then the sequence $\{x_n\}$ converges strongly to a point $\tilde{x}=P_{(A+B)^{-1}0}(0)$, which is the minimum-norm element in $(A+B)^{-1}0$.
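The scheme of Corollary 3.7 is easy to test numerically. In the sketch below the solution set $(A+B)^{-1}0$ is a whole segment, so the minimum-norm selection is visible; the concrete $A$, $B$, and parameter choices are assumptions made only for the example.

```python
import numpy as np

# Assumed toy problem in R^2:
#   A x = (x_1 + x_2 - 1) * (1, 1)   (gradient of (1/2)(x_1 + x_2 - 1)^2, 1/2-inverse strongly monotone)
#   B   = normal cone of the box [0, 2]^2, so J_lambda^B = projection onto the box (for every lambda)
# (A+B)^{-1} 0 = {x in [0,2]^2 : x_1 + x_2 = 1}; its minimum-norm element is (0.5, 0.5).
A = lambda x: (x[0] + x[1] - 1.0) * np.ones(2)
J_B = lambda lam, x: np.clip(x, 0.0, 2.0)       # resolvent of a normal cone = metric projection
kappa = 0.5

x = np.array([2.0, 0.0])
for n in range(1, 200_000):
    a_n = 1.0 / (n + 1)
    lam_n = 0.5 * (1 - a_n)                     # [a, b] = [0.5, 0.5] ⊂ (0, 2*alpha) with alpha = 1/2
    x = (1 - kappa) * x + kappa * J_B(lam_n, (1 - a_n) * x - lam_n * A(x))

print(x)  # approaches (0.5, 0.5), the minimum-norm zero of A + B
```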

Remark 3.8 The present paper thus provides some methods for finding minimum-norm solutions which do not use the metric projection.

4 Applications

Next, we consider the problem of finding the minimum-norm solution of a mathematical model related to equilibrium problems. Let $C$ be a nonempty closed convex subset of a Hilbert space and $G:C\times C\to\mathbb{R}$ be a bifunction satisfying the following conditions:

(E1) $G(x,x)=0$ for all $x\in C$;

(E2) $G$ is monotone, i.e., $G(x,y)+G(y,x)\le0$ for all $x,y\in C$;

(E3) for all $x,y,z\in C$, $\limsup_{t\downarrow0}G(tz+(1-t)x,y)\le G(x,y)$;

(E4) for all $x\in C$, $G(x,\cdot)$ is convex and lower semicontinuous.

Then the mathematical model related to the equilibrium problem (with respect to $C$) is as follows: find $\tilde{x}\in C$ such that

\[G(\tilde{x},y)\ge0\]
(4.1)

for all $y\in C$. The set of such solutions $\tilde{x}$ is denoted by $EP(G)$.

The following lemma appears implicitly in Blum and Oettli [36].

Lemma 4.1 Let $C$ be a nonempty closed convex subset of a Hilbert space $H$. Let $G$ be a bifunction from $C\times C$ into $\mathbb{R}$ satisfying the conditions (E1)-(E4). Then, for any $r>0$ and $x\in H$, there exists $z\in C$ such that

\[G(z,y)+\frac{1}{r}\langle y-z,\,z-x\rangle\ge0\]

for all $y\in C$.

The following lemma was given in Combettes and Hirstoaga [37].

Lemma 4.2 Assume that $G$ is a bifunction from $C\times C$ into $\mathbb{R}$ satisfying the conditions (E1)-(E4). For any $r>0$ and $x\in H$, define a mapping $T_r:H\to C$ as follows:

\[T_r(x)=\Bigl\{z\in C:G(z,y)+\frac{1}{r}\langle y-z,\,z-x\rangle\ge0,\ \forall y\in C\Bigr\}\]

for all $x\in H$. Then the following hold:

(a) $T_r$ is single-valued;

(b) $T_r$ is a firmly nonexpansive mapping, i.e., for all $x,y\in H$,

\[\|T_rx-T_ry\|^{2}\le\langle T_rx-T_ry,\,x-y\rangle;\]

(c) $F(T_r)=EP(G)$;

(d) $EP(G)$ is closed and convex.
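For a concrete instance of such a resolvent, assume the bifunction $G(x,y)=\langle Mx,\,y-x\rangle$ on $C=\mathbb{R}^{n}$ with $M$ symmetric positive semidefinite; this $G$ satisfies (E1)-(E4), and the defining inequality of $T_r(x)$ forces $Mz+\frac{1}{r}(z-x)=0$, i.e., $T_r(x)=(I+rM)^{-1}x$.

```python
import numpy as np

# Resolvent T_r for the assumed bifunction G(x, y) = <Mx, y - x> on C = R^2:
# T_r(x) = (I + r M)^{-1} x, and EP(G) = ker(M) (here the x_2-axis).
M = np.array([[2.0, 0.0], [0.0, 0.0]])          # symmetric positive semidefinite
def T(r, x):
    return np.linalg.solve(np.eye(2) + r * M, x)

print(T(1.0, np.array([3.0, -4.0])))            # (1, -4): the x_1-part shrinks, the kernel part is kept
print(T(1.0, np.array([0.0, 7.0])))             # (0, 7): points of EP(G) are exactly the fixed points, cf. (c)
```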

We call such a $T_r$ the resolvent of $G$ for any $r>0$. Using Lemmas 4.1 and 4.2, we have the following lemma (see [38] for a more general result).

Lemma 4.3 Let $C$ be a nonempty closed convex subset of a Hilbert space $H$. Let $G$ be a bifunction from $C\times C$ into $\mathbb{R}$ satisfying the conditions (E1)-(E4). Let $A_G$ be a multi-valued mapping from $H$ into itself defined by

\[A_Gx=\begin{cases}\{z\in H:G(x,y)\ge\langle y-x,\,z\rangle,\ \forall y\in C\},&x\in C,\\ \emptyset,&x\notin C.\end{cases}\]

Then $EP(G)=A_G^{-1}(0)$ and $A_G$ is a maximal monotone operator with $\operatorname{dom}(A_G)\subset C$. Further, for any $x\in H$ and $r>0$, the resolvent $T_r$ of $G$ coincides with the resolvent of $A_G$, i.e.,

\[T_rx=(I+rA_G)^{-1}x.\]

From Lemma 4.3 and Theorems 3.1 and 3.2, we have the following results.

Theorem 4.4 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $G$ be a bifunction from $C\times C$ into $\mathbb{R}$ satisfying the conditions (E1)-(E4) and let $T_r$ be the resolvent of $G$ for any $r>0$. Let $S$ be a nonexpansive mapping from $C$ into itself such that $F(S)\cap EP(G)\neq\emptyset$. For any $t\in(0,1)$, let $\{x_t\}\subset C$ be a net generated by

\[x_t=(1-\kappa)Sx_t+\kappa T_r\bigl((1-t)x_t\bigr).\]

Then the net $\{x_t\}$ converges strongly, as $t\to0^{+}$, to a point $\tilde{x}=P_{F(S)\cap EP(G)}(0)$, which is the minimum-norm element in $F(S)\cap EP(G)$.

Proof From Lemma 4.3, we know that $A_G$ is maximal monotone. Thus, in Theorem 3.1, we can set $J_{\lambda}^{B}=T_r$. At the same time, in Theorem 3.1, we can choose $f=0$ and $A=0$, and (3.1) reduces to

\[x_t=(1-\kappa)Sx_t+\kappa T_r\bigl((1-t)x_t\bigr).\]

Consequently, from Theorem 3.1, we get the desired result. This completes the proof. □

Corollary 4.5 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $G$ be a bifunction from $C\times C$ into $\mathbb{R}$ satisfying the conditions (E1)-(E4) and let $T_r$ be the resolvent of $G$ for any $r>0$. Suppose that $EP(G)\neq\emptyset$. For any $t\in(0,1)$, let $\{x_t\}\subset C$ be a net generated by

\[x_t=T_r\bigl((1-t)x_t\bigr).\]

Then the net $\{x_t\}$ converges strongly, as $t\to0^{+}$, to a point $\tilde{x}=P_{EP(G)}(0)$, which is the minimum-norm element in $EP(G)$.

Theorem 4.6 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $G$ be a bifunction from $C\times C$ into $\mathbb{R}$ satisfying the conditions (E1)-(E4) and let $T_\lambda$ be the resolvent of $G$ for any $\lambda>0$. Let $S$ be a nonexpansive mapping from $C$ into itself such that $F(S)\cap EP(G)\neq\emptyset$. For any $x_0\in C$, let $\{x_n\}\subset C$ be a sequence generated by

\[x_{n+1}=(1-\kappa)Sx_n+\kappa T_{\lambda_n}\bigl((1-\alpha_n)x_n\bigr)\]

for all $n\ge0$, where $\kappa\in(0,1)$, $\{\lambda_n\}\subset(0,\infty)$, and $\{\alpha_n\}\subset(0,1)$ satisfy the following conditions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\frac{\alpha_{n+1}}{\alpha_n}=1$, and $\sum_{n}\alpha_n=\infty$;

(b) $a\le\lambda_n\le b$, where $[a,b]\subset(0,\infty)$, and $\lim_{n\to\infty}\frac{\lambda_{n+1}-\lambda_n}{\alpha_n}=0$.

Then the sequence $\{x_n\}$ converges strongly to a point $\tilde{x}=P_{F(S)\cap EP(G)}(0)$, which is the minimum-norm element in $F(S)\cap EP(G)$.

Proof From Lemma 4.3, we know that $A_G$ is maximal monotone. Thus, in Theorem 3.2, we can set $J_{\lambda_n}^{B}=T_{\lambda_n}$. At the same time, in Theorem 3.2, we can choose $f=0$ and $A=0$, and (3.10) reduces to

\[x_{n+1}=(1-\kappa)Sx_n+\kappa T_{\lambda_n}\bigl((1-\alpha_n)x_n\bigr)\]

for all $n\ge0$. Consequently, from Theorem 3.2, we get the desired result. This completes the proof. □

Corollary 4.7 Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $G$ be a bifunction from $C\times C$ into $\mathbb{R}$ satisfying the conditions (E1)-(E4) and let $T_\lambda$ be the resolvent of $G$ for any $\lambda>0$. Suppose $EP(G)\neq\emptyset$. For any $x_0\in C$, let $\{x_n\}\subset C$ be a sequence generated by

\[x_{n+1}=(1-\kappa)x_n+\kappa T_{\lambda_n}\bigl((1-\alpha_n)x_n\bigr)\]

for all $n\ge0$, where $\kappa\in(0,1)$, $\{\lambda_n\}\subset(0,\infty)$, and $\{\alpha_n\}\subset(0,1)$ satisfy the following conditions:

(a) $\lim_{n\to\infty}\alpha_n=0$, $\lim_{n\to\infty}\frac{\alpha_{n+1}}{\alpha_n}=1$, and $\sum_{n}\alpha_n=\infty$;

(b) $a\le\lambda_n\le b$, where $[a,b]\subset(0,\infty)$, and $\lim_{n\to\infty}\frac{\lambda_{n+1}-\lambda_n}{\alpha_n}=0$.

Then the sequence $\{x_n\}$ converges strongly to a point $\tilde{x}=P_{EP(G)}(0)$, which is the minimum-norm element in $EP(G)$.
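Finally, here is a minimal numerical sketch of the scheme in Corollary 4.7, again with an assumed quadratic bifunction $G(x,y)=\langle Mx-q,\,y-x\rangle$ on $C=\mathbb{R}^{2}$, whose resolvent is $T_r(x)=(I+rM)^{-1}(x+rq)$ and whose equilibrium set $EP(G)=\{x:Mx=q\}$ is a line, so the minimum-norm selection is visible.

```python
import numpy as np

# Assumed data: EP(G) = {x : 2 x_1 = 1} is the line x_1 = 1/2; its minimum-norm element is (0.5, 0).
M = np.array([[2.0, 0.0], [0.0, 0.0]])
q = np.array([1.0, 0.0])
T = lambda r, x: np.linalg.solve(np.eye(2) + r * M, x + r * q)   # resolvent of this G

kappa = 0.5
x = np.array([4.0, 3.0])
for n in range(1, 200_000):
    a_n = 1.0 / (n + 1)                        # alpha_n: condition (a)
    lam_n = 1.0                                # constant lambda_n in [a, b] ⊂ (0, ∞): condition (b)
    x = (1 - kappa) * x + kappa * T(lam_n, (1 - a_n) * x)

print(x)  # approaches (0.5, 0), the minimum-norm equilibrium point
```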