1 Introduction

Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. The split feasibility problem (SFP) is formulated as

$$\text{to find } x^* \in C \text{ such that } Ax^* \in Q, \tag{1.1}$$

where $A: H_1 \to H_2$ is a bounded linear operator. In 1994, Censor and Elfving [1] first introduced the SFP in finite-dimensional Hilbert spaces for modeling inverse problems arising in phase retrieval and medical image reconstruction [2]. It has been found that the SFP can also be used in various disciplines such as image restoration, computed tomography, and radiation therapy treatment planning [3–5]. The SFP in infinite-dimensional real Hilbert spaces can be found in [2, 4, 6–10]. For comprehensive literature, a bibliography, and a survey on the SFP, we refer to [11].

Assuming that the SFP is consistent, it is not hard to see that $x^* \in C$ solves the SFP if and only if it solves the fixed point equation

$$x^* = P_C\big(I - \gamma A^*(I - P_Q)A\big)x^*,$$

where $P_C$ and $P_Q$ are the metric projections from $H_1$ onto $C$ and from $H_2$ onto $Q$, respectively, $\gamma > 0$ is a constant, and $A^*$ is the adjoint of $A$.

A popular algorithm for solving the SFP (1.1) is Byrne's CQ-algorithm [2]:

$$x_{k+1} = P_C\big(I - \gamma_k A^*(I - P_Q)A\big)x_k, \quad k \ge 1,$$

where $\gamma_k \in (0, 2/\lambda)$ with $\lambda$ being the spectral radius of the operator $A^*A$.
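The CQ-algorithm is straightforward to implement once the two projections are available in closed form. The following is a minimal sketch, assuming for illustration that $C$ is a box in $\mathbb{R}^n$ and $Q$ is a closed ball of radius $r$ in $\mathbb{R}^m$; the function names and the constant step size $\gamma = 1/\lambda$ are our own choices, not part of the original scheme.

```python
import numpy as np

# A minimal sketch of Byrne's CQ-algorithm for the SFP (1.1), assuming
# C = [lo, hi]^n (a box) and Q = closed ball of radius r, so that both
# projections have closed forms. These sets are illustrative assumptions.

def proj_box(x, lo, hi):
    return np.clip(x, lo, hi)

def proj_ball(y, r):
    nrm = np.linalg.norm(y)
    return y if nrm <= r else (r / nrm) * y

def cq_algorithm(A, x0, lo, hi, r, iters=500):
    lam = np.linalg.norm(A, 2) ** 2      # spectral radius of A^T A
    gamma = 1.0 / lam                    # step size in (0, 2/lambda)
    x = x0.copy()
    for _ in range(iters):
        # x_{k+1} = P_C( x_k - gamma * A^T (I - P_Q) A x_k )
        Ax = A @ x
        x = proj_box(x - gamma * A.T @ (Ax - proj_ball(Ax, r)), lo, hi)
    return x
```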

On the other hand, let $H$ be a real Hilbert space, and let $B$ be a set-valued mapping with domain $D(B) := \{x \in H : B(x) \neq \emptyset\}$. Recall that $B$ is called monotone if $\langle u - v, x - y \rangle \ge 0$ for any $u \in Bx$ and $v \in By$; $B$ is maximal monotone if its graph $\{(x,y) : x \in D(B), y \in Bx\}$ is not properly contained in the graph of any other monotone mapping. An important problem for set-valued monotone mappings is to find $x^* \in H$ such that $0 \in B(x^*)$; such an $x^*$ is called a zero point of $B$. A well-known method for approximating a zero point of a maximal monotone mapping defined on a real Hilbert space $H$ is the proximal point algorithm, first introduced by Martinet [12] and generalized by Rockafellar [13]. This iterative procedure generates $\{x_n\}$ by $x_1 = x \in H$ and

$$x_{n+1} = J_{\beta_n}^B x_n, \quad n \ge 1, \tag{1.2}$$

where $\{\beta_n\} \subset (0, \infty)$, $B$ is a maximal monotone mapping in a real Hilbert space, and $J_r^B$ is the resolvent mapping of $B$ defined by $J_r^B = (I + rB)^{-1}$ for each $r > 0$. Rockafellar [13] proved that if the solution set $B^{-1}(0)$ is nonempty and $\liminf_{n\to\infty} \beta_n > 0$, then the sequence $\{x_n\}$ in (1.2) converges weakly to an element of $B^{-1}(0)$. In particular, if $B$ is the subdifferential $\partial f$ of a proper convex and lower semicontinuous function $f: H \to \mathbb{R}$, then (1.2) reduces to

$$x_{n+1} = \operatorname*{argmin}_{y \in H}\Big\{ f(y) + \frac{1}{2\beta_n}\|y - x_n\|^2 \Big\}, \quad n \ge 1. \tag{1.3}$$

In this case, $\{x_n\}$ converges weakly to a minimizer of $f$. Later, many researchers studied the convergence of the proximal point algorithm in Hilbert spaces (see [14–21] and the references therein).
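As an illustration of (1.3), the following sketch runs the proximal point iteration for the special case $f(y) = \|y\|_1$, whose proximal map (the resolvent of $\partial f$) is the componentwise soft-thresholding operator; the helper names are hypothetical.

```python
import numpy as np

# A minimal sketch of the proximal point iteration (1.3), assuming
# f(y) = ||y||_1, so the argmin step has the soft-thresholding closed form.

def prox_l1(x, beta):
    # argmin_y { ||y||_1 + (1/(2*beta)) * ||y - x||^2 }
    return np.sign(x) * np.maximum(np.abs(x) - beta, 0.0)

def proximal_point(x0, betas):
    x = x0.copy()
    for beta in betas:          # x_{n+1} = J_{beta_n}^{\partial f} x_n
        x = prox_l1(x, beta)
    return x

x = proximal_point(np.array([3.0, -0.4, 1.2]), betas=[1.0] * 10)
# the iterates converge to 0, the unique minimizer of ||.||_1
```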

Motivated by the works in [14–17] and related literature, the purpose of this paper is to introduce and consider the following general split variational inclusion problem.

Let $H_1$ and $H_2$ be two real Hilbert spaces, $B_i: H_1 \to 2^{H_1}$ and $K_i: H_2 \to 2^{H_2}$, $i = 1, 2, \ldots$, be two families of set-valued maximal monotone mappings, $A: H_1 \to H_2$ a bounded linear operator, and $A^*$ the adjoint of $A$. The so-called general split variational inclusion problem (GSVIP) is

$$\text{to find } x^* \in H_1 \text{ such that } 0 \in \bigcap_{i=1}^{\infty} B_i(x^*) \text{ and } 0 \in \bigcap_{i=1}^{\infty} K_i(Ax^*). \tag{1.4}$$

The following examples are special cases of (GSVIP) (1.4).

Classical split variational inclusion problem. Let $B: H_1 \to 2^{H_1}$ and $K: H_2 \to 2^{H_2}$ be set-valued maximal monotone mappings. The so-called classical split variational inclusion problem (CSVIP) is

$$\text{to find } x^* \in H_1 \text{ such that } 0 \in B(x^*) \text{ and } 0 \in K(Ax^*), \tag{1.5}$$

which was introduced by Moudafi [17]. It is obvious that problem (1.5) is a special case of (GSVIP) (1.4). In [17], Moudafi proved that the iteration process

$$x_{n+1} = J_\lambda^B\big(x_n + \gamma A^*(J_\lambda^K - I)Ax_n\big)$$

converges weakly to a solution of problem (1.5), where λ and γ are given positive numbers.

Split optimization problem. Let $f: H_1 \to \mathbb{R}$ and $g: H_2 \to \mathbb{R}$ be two proper convex and lower semicontinuous functions. The so-called split optimization problem (SOP) is

$$\text{to find } x^* \in H_1 \text{ such that } f(x^*) = \min_{y \in H_1} f(y) \text{ and } g(Ax^*) = \min_{z \in H_2} g(z). \tag{1.6}$$

Set $B = \partial f$ and $K = \partial g$; then $B$ and $K$ are both maximal monotone mappings, and problem (1.6) is equivalent to the following classical split variational inclusion problem:

$$\text{to find } x^* \in H_1 \text{ such that } 0 \in \partial f(x^*) \text{ and } 0 \in \partial g(Ax^*). \tag{1.7}$$

Split feasibility problem. As in (1.1), let $C$ and $Q$ be two nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $A$ be as above. The split feasibility problem is

$$\text{to find } x^* \in C \text{ such that } Ax^* \in Q. \tag{1.8}$$

It is well known that this kind of problem was first introduced by Censor and Elfving [1] for modeling inverse problems arising in phase retrieval and medical image reconstruction [2]. It is also used in various disciplines such as image restoration, computed tomography, and radiation therapy treatment planning.

Let $i_C$ (resp. $i_Q$) be the indicator function of $C$ (resp. $Q$), i.e.,

$$i_C(x) = \begin{cases} 0, & \text{if } x \in C, \\ +\infty, & \text{if } x \notin C; \end{cases} \qquad i_Q(x) = \begin{cases} 0, & \text{if } x \in Q, \\ +\infty, & \text{if } x \notin Q. \end{cases} \tag{1.9}$$

Then $i_C$ and $i_Q$ are both proper convex and lower semicontinuous functions, and their subdifferentials $\partial i_C$ and $\partial i_Q$ are maximal monotone operators. Consequently, problem (1.8) is equivalent to the following 'split optimization problem' and 'Moudafi's classical split variational inclusion problem':

$$\text{to find } x^* \in H_1 \text{ such that } i_C(x^*) = \min_{y \in H_1} i_C(y) \text{ and } i_Q(Ax^*) = \min_{z \in H_2} i_Q(z);$$
$$\text{to find } x^* \in H_1 \text{ such that } 0 \in \partial i_C(x^*) \text{ and } 0 \in \partial i_Q(Ax^*). \tag{1.10}$$

For solving (GSVIP) (1.4), we propose in this paper the following iterative algorithm:

$$x_{n+1} = \alpha_n x_n + \xi_n f(x_n) + \sum_{i=1}^{\infty}\gamma_{n,i} J_{\beta_i}^{B_i}\big[x_n - \lambda_{n,i} A^*(I - J_{\beta_i}^{K_i})Ax_n\big], \quad n \ge 0, \tag{1.11}$$

where $f: H_1 \to H_1$ is a contraction mapping with contractive constant $k \in (0,1)$, and $\{\alpha_n\}$, $\{\xi_n\}$, $\{\gamma_{n,i}\}$ are sequences in $[0,1]$ satisfying some conditions. Under suitable conditions, we prove some strong convergence theorems for the sequence generated by (1.11) to a solution of (GSVIP) (1.4) in Hilbert spaces. As particular cases, we consider algorithms for the split feasibility problem and the split optimization problem and give some strong convergence theorems for these problems in Hilbert spaces. Our results extend and improve the related results of Censor and Elfving [1], Byrne [2], Censor et al. [3–5], Rockafellar [13], Moudafi [14, 17], Eslamian and Latif [15], Eslamian [21], and Chuang [22].
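For intuition, one step of (1.11) can be sketched as follows, assuming the infinite family is truncated to finitely many indices $i = 1, \ldots, N$ and the resolvents are supplied as callables; this is an illustrative reading of the iteration, not the authors' implementation.

```python
import numpy as np

# A minimal sketch of one step of iteration (1.11), assuming a finite
# family (the infinite sum truncated for illustration) and that the
# resolvents J_{beta_i}^{B_i} and J_{beta_i}^{K_i} are given as callables.

def gsvip_step(x, A, f, JB, JK, alpha, xi, gammas, lams):
    # x_{n+1} = alpha*x + xi*f(x)
    #           + sum_i gamma_i * JB_i( x - lam_i * A^T (I - JK_i) A x )
    Ax = A @ x
    out = alpha * x + xi * f(x)
    for JB_i, JK_i, g_i, lam_i in zip(JB, JK, gammas, lams):
        out += g_i * JB_i(x - lam_i * A.T @ (Ax - JK_i(Ax)))
    return out
```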

2 Preliminaries

Throughout the paper, we denote by $H$ a real Hilbert space and by $C$ a nonempty closed convex subset of $H$. We denote by $F(T)$ the set of fixed points of a mapping $T$. Let $\{x_n\}$ be a sequence in $H$ and $x \in H$. Strong convergence of $\{x_n\}$ to $x$ is denoted by $x_n \to x$, and weak convergence by $x_n \rightharpoonup x$. For every point $x \in H$, there exists a unique nearest point in $C$, denoted by $P_C x$, which satisfies

$$\|x - P_C x\| \le \|x - y\|, \quad \forall y \in C.$$

The operator $P_C$ is called the metric projection. It is characterized by the fact that $P_C x \in C$ and

$$\langle x - P_C x, P_C x - y \rangle \ge 0, \quad \forall x \in H, \forall y \in C.$$
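This characterization is easy to verify numerically. Below is a small check, assuming for illustration that $C$ is the closed unit ball in $\mathbb{R}^3$, so that $P_C$ has the closed form $x/\max(1, \|x\|)$.

```python
import numpy as np

# A numerical check of the variational characterization of the metric
# projection, assuming C is the closed unit ball in R^3.

def proj_unit_ball(x):
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(0)
x = rng.normal(size=3) * 5.0
px = proj_unit_ball(x)
for _ in range(1000):
    y = proj_unit_ball(rng.normal(size=3) * 5.0)  # arbitrary y in C
    assert np.dot(x - px, px - y) >= -1e-12       # <x - P_C x, P_C x - y> >= 0
```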

Recall that a mapping $T: C \to H$ is said to be nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in C$, and quasi-nonexpansive if $F(T) \neq \emptyset$ and $\|Tx - p\| \le \|x - p\|$ for every $x \in C$ and $p \in F(T)$. It is easy to see that $F(T)$ is a closed convex subset of $C$ if $T$ is quasi-nonexpansive. Besides, $T$ is said to be firmly nonexpansive if

$$\|Tx - Ty\|^2 \le \langle x - y, Tx - Ty \rangle, \quad \forall x, y \in C,$$
or, equivalently,
$$\|Tx - Ty\|^2 \le \|x - y\|^2 - \|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C.$$

Lemma 2.1 (demi-closed principle)

Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$, let $T: C \to H$ be a nonexpansive mapping, and let $\{x_n\}$ be a sequence in $C$. If $x_n \rightharpoonup w$ and $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$, then $Tw = w$.

Lemma 2.2 [23]

Let $H$ be a real Hilbert space. Then, for all $x, y \in H$,

$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle. \tag{2.1}$$

Lemma 2.3 [24]

Let $H$ be a Hilbert space and let $\{x_n\}$ be a sequence in $H$. Then, for any given sequence $\{\lambda_n\} \subset (0,1)$ with $\sum_{n=1}^{\infty}\lambda_n = 1$ and for any positive integers $i, j$ with $i < j$,

$$\Big\|\sum_{n=1}^{\infty}\lambda_n x_n\Big\|^2 \le \sum_{n=1}^{\infty}\lambda_n\|x_n\|^2 - \lambda_i\lambda_j\|x_i - x_j\|^2. \tag{2.2}$$

Lemma 2.4 Let $\{a_n\}$ be a sequence of nonnegative real numbers, $\{b_n\}$ a sequence of real numbers in $(0,1)$ with $\sum_{n=1}^{\infty} b_n = \infty$, $\{u_n\}$ a sequence of nonnegative real numbers with $\sum_{n=1}^{\infty} u_n < \infty$, and $\{t_n\}$ a sequence of real numbers with $\limsup_{n\to\infty} t_n \le 0$. If

$$a_{n+1} \le (1 - b_n)a_n + b_n t_n + u_n \quad \text{for each } n \ge 1,$$

then $\lim_{n\to\infty} a_n = 0$.

Lemma 2.5 [25]

Let $\{a_n\}$ be a sequence of real numbers such that there exists a subsequence $\{n_i\}$ of $\{n\}$ with $a_{n_i} < a_{n_i+1}$ for all $i \in \mathbb{N}$. Then there exists a nondecreasing sequence $\{m_k\} \subset \mathbb{N}$ such that $m_k \to \infty$, and the inequalities $a_{m_k} \le a_{m_k+1}$ and $a_k \le a_{m_k+1}$ hold for all (sufficiently large) $k \in \mathbb{N}$. In fact, $m_k = \max\{j \le k : a_j < a_{j+1}\}$.

Lemma 2.6 [22]

Let $H$ be a real Hilbert space, $B: H \to 2^H$ a set-valued maximal monotone mapping, $\beta > 0$, and let $J_\beta^B$ be the resolvent mapping of $B$. Then:

(i) For each $\beta > 0$, $J_\beta^B$ is a single-valued and firmly nonexpansive mapping;

(ii) $D(J_\beta^B) = H$ and $F(J_\beta^B) = B^{-1}(0) := \{x \in D(B) : 0 \in Bx\}$;

(iii) $(I - J_\beta^B)$ is a firmly nonexpansive mapping for each $\beta > 0$;

(iv) if $B^{-1}(0) \neq \emptyset$, then for each $x \in H$, each $x^* \in B^{-1}(0)$, and each $\beta > 0$,
$$\|x - J_\beta^B x\|^2 + \|J_\beta^B x - x^*\|^2 \le \|x - x^*\|^2;$$

(v) if $B^{-1}(0) \neq \emptyset$, then $\langle x - J_\beta^B x, J_\beta^B x - w \rangle \ge 0$ for each $x \in H$, each $w \in B^{-1}(0)$, and each $\beta > 0$.
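Property (iv) can be checked numerically in a simple case. The sketch below assumes $B = \partial f$ with $f(x) = |x|$ on $\mathbb{R}$, whose resolvent is soft-thresholding and whose zero set is $B^{-1}(0) = \{0\}$.

```python
import numpy as np

# Numerical illustration of Lemma 2.6(iv), assuming B is the
# subdifferential of |.| on R, so J_beta^B is soft-thresholding.

def resolvent(x, beta):          # (I + beta*B)^{-1} for B = subgradient of |.|
    return np.sign(x) * max(abs(x) - beta, 0.0)

beta, xstar = 0.7, 0.0           # xstar lies in B^{-1}(0)
for x in np.linspace(-3, 3, 25):
    Jx = resolvent(x, beta)
    # ||x - Jx||^2 + ||Jx - xstar||^2 <= ||x - xstar||^2
    assert (x - Jx)**2 + (Jx - xstar)**2 <= (x - xstar)**2 + 1e-12
```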

Lemma 2.7 Let $H_1$, $H_2$ be two real Hilbert spaces, $A: H_1 \to H_2$ a bounded linear operator, and $A^*$ the adjoint of $A$. Let $B: H_2 \to 2^{H_2}$ be a set-valued maximal monotone mapping, $\beta > 0$, and let $J_\beta^B$ be the resolvent mapping of $B$. Then:

(i) $\|(I - J_\beta^B)Ax - (I - J_\beta^B)Ay\|^2 \le \langle (I - J_\beta^B)Ax - (I - J_\beta^B)Ay, Ax - Ay \rangle$;

(ii) $\|A^*(I - J_\beta^B)Ax - A^*(I - J_\beta^B)Ay\|^2 \le \|A\|^2 \langle (I - J_\beta^B)Ax - (I - J_\beta^B)Ay, Ax - Ay \rangle$;

(iii) if $\rho \in (0, 2/\|A\|^2)$, then $(I - \rho A^*(I - J_\beta^B)A)$ is a nonexpansive mapping.

Proof By Lemma 2.6(iii), the mapping $(I - J_\beta^B)$ is firmly nonexpansive; hence conclusions (i) and (ii) are obvious.

Now we prove conclusion (iii).

In fact, for any $x, y \in H_1$, it follows from conclusions (i) and (ii) that

$$\begin{aligned}
&\big\|\big(I - \rho A^*(I - J_\beta^B)A\big)x - \big(I - \rho A^*(I - J_\beta^B)A\big)y\big\|^2 \\
&\quad= \|x - y\|^2 - 2\rho\big\langle x - y, A^*(I - J_\beta^B)Ax - A^*(I - J_\beta^B)Ay\big\rangle + \rho^2\big\|A^*(I - J_\beta^B)Ax - A^*(I - J_\beta^B)Ay\big\|^2 \\
&\quad\le \|x - y\|^2 - 2\rho\big\langle Ax - Ay, (I - J_\beta^B)Ax - (I - J_\beta^B)Ay\big\rangle + \rho^2\|A\|^2\big\|(I - J_\beta^B)Ax - (I - J_\beta^B)Ay\big\|^2 \\
&\quad\le \|x - y\|^2 - \rho\big(2 - \rho\|A\|^2\big)\big\|(I - J_\beta^B)Ax - (I - J_\beta^B)Ay\big\|^2 \\
&\quad\le \|x - y\|^2 \quad \big(\text{since } \rho(2 - \rho\|A\|^2) \ge 0\big).
\end{aligned}$$

This completes the proof of Lemma 2.7. □
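Conclusion (iii) can also be spot-checked numerically. The sketch below assumes $J_\beta^B$ is the componentwise soft-thresholding operator (the resolvent of the subdifferential of $\|\cdot\|_1$) and takes $\rho = 1/\|A\|^2 \in (0, 2/\|A\|^2)$.

```python
import numpy as np

# A numerical spot-check of Lemma 2.7(iii), assuming B is the
# subdifferential of ||.||_1 on H_2 = R^m, so J_beta^B is
# componentwise soft-thresholding.

def J(z, beta=0.5):
    return np.sign(z) * np.maximum(np.abs(z) - beta, 0.0)

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3))
rho = 1.0 / np.linalg.norm(A, 2) ** 2          # rho in (0, 2/||A||^2)

def T(x):                                      # x - rho * A^T (I - J) A x
    Ax = A @ x
    return x - rho * A.T @ (Ax - J(Ax))

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    assert np.linalg.norm(T(x) - T(y)) <= np.linalg.norm(x - y) + 1e-12
```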

3 Main results

The following lemma will be used in proving our main results.

Lemma 3.1 Let $H_1$ and $H_2$ be two real Hilbert spaces, $A: H_1 \to H_2$ a bounded linear operator, and $A^*$ the adjoint of $A$. Let $B_i: H_1 \to 2^{H_1}$ and $K_i: H_2 \to 2^{H_2}$, $i = 1, 2, \ldots$, be two families of set-valued maximal monotone mappings. If $\Omega \neq \emptyset$ ($\Omega$ being the solution set of (GSVIP) (1.4)), then $x^* \in H_1$ is a solution of (GSVIP) (1.4) if and only if, for each $i \ge 1$, each $\gamma > 0$, and each $\beta > 0$,

$$x^* = J_\beta^{B_i}\big(x^* - \gamma A^*(I - J_\beta^{K_i})Ax^*\big). \tag{3.1}$$

Proof Indeed, if $x^*$ is a solution of (GSVIP) (1.4), then for each $i \ge 1$, $\gamma > 0$, and $\beta > 0$,

$$x^* \in B_i^{-1}(0) \quad \text{and} \quad Ax^* \in K_i^{-1}(0), \quad \text{i.e.,} \quad x^* = J_\beta^{B_i}x^* \text{ and } Ax^* = J_\beta^{K_i}Ax^*.$$

This implies that $x^* = J_\beta^{B_i}\big(x^* - \gamma A^*(I - J_\beta^{K_i})Ax^*\big)$.

Conversely, if $x^*$ solves (3.1), then by Lemma 2.6(v) we have

$$\big\langle x^* - \big(x^* - \gamma A^*(I - J_\beta^{K_i})Ax^*\big), y - x^* \big\rangle \ge 0, \quad \forall y \in B_i^{-1}(0).$$

Hence we have

$$\big\langle (I - J_\beta^{K_i})Ax^*, Ay - Ax^* \big\rangle \ge 0, \quad \forall y \in B_i^{-1}(0). \tag{3.2}$$

On the other hand, by Lemma 2.6(v) again

$$\big\langle Ax^* - J_\beta^{K_i}Ax^*, J_\beta^{K_i}Ax^* - v \big\rangle \ge 0, \quad \forall v \in K_i^{-1}(0). \tag{3.3}$$

Adding up (3.2) and (3.3), we have

$$\big\langle Ax^* - J_\beta^{K_i}Ax^*, J_\beta^{K_i}Ax^* + Ay - Ax^* - v \big\rangle \ge 0, \quad \forall y \in B_i^{-1}(0), \forall v \in K_i^{-1}(0).$$

Simplifying it, we have

$$\big\|Ax^* - J_\beta^{K_i}Ax^*\big\|^2 \le \big\langle Ax^* - J_\beta^{K_i}Ax^*, Ay - v \big\rangle, \quad \forall y \in B_i^{-1}(0), \forall v \in K_i^{-1}(0). \tag{3.4}$$

By assumption, $\Omega \neq \emptyset$. Take $w \in \Omega$; then for each $i \ge 1$, $w \in B_i^{-1}(0)$ and $Aw \in K_i^{-1}(0)$. Taking $y = w$ and $v = Aw$ in (3.4), we have

$$\big\|Ax^* - J_\beta^{K_i}Ax^*\big\|^2 = 0.$$

This implies that $Ax^* = J_\beta^{K_i}Ax^*$, and so $Ax^* \in K_i^{-1}(0)$ for each $i \ge 1$. Hence from (3.1), $x^* = J_\beta^{B_i}x^*$, i.e., $x^* \in B_i^{-1}(0)$. Therefore $x^*$ is a solution of (GSVIP) (1.4).

This completes the proof of Lemma 3.1. □

We are now in a position to prove the following main result.

Theorem 3.2 Let $H_1$, $H_2$, $A$, $A^*$, $\{B_i\}$, $\{K_i\}$, $\Omega$ be the same as in Lemma 3.1. Let $f: H_1 \to H_1$ be a contractive mapping with contractive constant $k \in (0,1)$. Let $\{\alpha_n\}$, $\{\xi_n\}$, $\{\gamma_{n,i}\}$ be sequences in $(0,1)$ with $\alpha_n + \xi_n + \sum_{i=1}^{\infty}\gamma_{n,i} = 1$ for each $n \ge 0$. Let $\{\beta_i\}$ be a sequence in $(0,\infty)$, and let $\{\lambda_{n,i}\}$ be a sequence in $(0, 2/\|A\|^2)$. Let $\{x_n\}$ be the sequence defined by (1.11). If $\Omega \neq \emptyset$ and the following conditions are satisfied:

(i) $\lim_{n\to\infty} \xi_n = 0$ and $\sum_{n=0}^{\infty} \xi_n = \infty$;

(ii) $\liminf_{n\to\infty} \alpha_n \gamma_{n,i} > 0$ for each $i \ge 1$;

(iii) $0 < \liminf_{n\to\infty} \lambda_{n,i} \le \limsup_{n\to\infty} \lambda_{n,i} < 2/\|A\|^2$,

then $x_n \to x^* \in \Omega$, where $x^* = P_\Omega f(x^*)$ and $P_\Omega$ is the metric projection from $H_1$ onto $\Omega$.

Proof (I) First we prove that $\{x_n\}$ is bounded.

In fact, letting $z \in \Omega$, by Lemma 3.1, for each $i \ge 1$,

$$z = J_{\beta_i}^{B_i}\big[z - \lambda_{n,i}A^*(I - J_{\beta_i}^{K_i})Az\big].$$

Hence it follows from Lemma 2.7(iii) that for each $i \ge 1$ and each $n \ge 1$ we have

$$\begin{aligned}
\|x_{n+1} - z\| &= \Big\|\alpha_n x_n + \xi_n f(x_n) + \sum_{i=1}^{\infty}\gamma_{n,i}J_{\beta_i}^{B_i}\big[x_n - \lambda_{n,i}A^*(I - J_{\beta_i}^{K_i})Ax_n\big] - z\Big\| \\
&\le \alpha_n\|x_n - z\| + \xi_n\|f(x_n) - z\| + \sum_{i=1}^{\infty}\gamma_{n,i}\Big\|J_{\beta_i}^{B_i}\big[x_n - \lambda_{n,i}A^*(I - J_{\beta_i}^{K_i})Ax_n\big] - z\Big\| \\
&\le \alpha_n\|x_n - z\| + \xi_n\|f(x_n) - z\| + \sum_{i=1}^{\infty}\gamma_{n,i}\|x_n - z\| \\
&= (1 - \xi_n)\|x_n - z\| + \xi_n\|f(x_n) - z\| \\
&\le (1 - \xi_n)\|x_n - z\| + \xi_n\|f(x_n) - f(z)\| + \xi_n\|f(z) - z\| \\
&\le \big(1 - \xi_n(1 - k)\big)\|x_n - z\| + \xi_n(1 - k)\frac{1}{1 - k}\|f(z) - z\| \\
&\le \max\Big\{\|x_n - z\|, \frac{1}{1 - k}\|f(z) - z\|\Big\}.
\end{aligned}$$

By induction, we can prove that

$$\|x_n - z\| \le \max\Big\{\|x_0 - z\|, \frac{1}{1-k}\|f(z) - z\|\Big\}, \quad \forall n \ge 0. \tag{3.5}$$

This implies that $\{x_n\}$ is bounded, and so is $\{f(x_n)\}$.

(II) Next we prove that, for each $j \ge 1$,

$$\alpha_n\gamma_{n,j}\Big\|x_n - J_{\beta_j}^{B_j}\big[x_n - \lambda_{n,j}A^*(I - J_{\beta_j}^{K_j})Ax_n\big]\Big\|^2 \le \|x_n - z\|^2 - \|x_{n+1} - z\|^2 + \xi_n\|f(x_n) - z\|^2. \tag{3.6}$$

Indeed, it follows from Lemma 2.3 that for any $j \ge 1$,

$$\begin{aligned}
\|x_{n+1} - z\|^2 &= \Big\|\alpha_n x_n + \xi_n f(x_n) + \sum_{i=1}^{\infty}\gamma_{n,i}J_{\beta_i}^{B_i}\big[x_n - \lambda_{n,i}A^*(I - J_{\beta_i}^{K_i})Ax_n\big] - z\Big\|^2 \\
&\le \alpha_n\|x_n - z\|^2 + \xi_n\|f(x_n) - z\|^2 + \sum_{i=1}^{\infty}\gamma_{n,i}\Big\|J_{\beta_i}^{B_i}\big[x_n - \lambda_{n,i}A^*(I - J_{\beta_i}^{K_i})Ax_n\big] - z\Big\|^2 \\
&\quad - \alpha_n\gamma_{n,j}\Big\|x_n - J_{\beta_j}^{B_j}\big[x_n - \lambda_{n,j}A^*(I - J_{\beta_j}^{K_j})Ax_n\big]\Big\|^2 \\
&\le (1 - \xi_n)\|x_n - z\|^2 + \xi_n\|f(x_n) - z\|^2 - \alpha_n\gamma_{n,j}\Big\|x_n - J_{\beta_j}^{B_j}\big[x_n - \lambda_{n,j}A^*(I - J_{\beta_j}^{K_j})Ax_n\big]\Big\|^2.
\end{aligned}$$

Simplifying, we obtain (3.6).

Since $\Omega \neq \emptyset$, and it is easy to prove that $\Omega$ is closed and convex, $P_\Omega$ is well defined. Again, since $P_\Omega f: H_1 \to \Omega$ is a contraction mapping with contractive constant $k \in (0,1)$, there exists a unique $x^* \in \Omega$ such that $x^* = P_\Omega f x^*$. Since $x^* \in \Omega$, it solves (GSVIP) (1.4), and by Lemma 3.1,

$$x^* = J_{\beta_j}^{B_j}\big(x^* - \lambda_{n,j}A^*(I - J_{\beta_j}^{K_j})Ax^*\big), \quad \forall j \ge 1, n \ge 0. \tag{3.7}$$
(III) Now we prove that $x_n \to x^*$ as $n \to \infty$. We consider two cases.

Case 1. Assume that $\{\|x_n - x^*\|\}$ is a monotone sequence; that is, for $n_0$ large enough, $\{\|x_n - x^*\|\}_{n \ge n_0}$ is either nondecreasing or nonincreasing. Since $\{\|x_n - x^*\|\}$ is bounded, it is convergent. Again, since $\lim_{n\to\infty}\xi_n = 0$ and $\{f(x_n)\}$ is bounded, from (3.6) we get

$$\lim_{n\to\infty}\alpha_n\gamma_{n,j}\Big\|x_n - J_{\beta_j}^{B_j}\big[x_n - \lambda_{n,j}A^*(I - J_{\beta_j}^{K_j})Ax_n\big]\Big\|^2 = 0.$$

By condition (ii), we obtain, for each $i \ge 1$,

$$\lim_{n\to\infty}\Big\|x_n - J_{\beta_i}^{B_i}\big[x_n - \lambda_{n,i}A^*(I - J_{\beta_i}^{K_i})Ax_n\big]\Big\| = 0. \tag{3.8}$$

Now we prove that

$$\limsup_{n\to\infty}\big\langle f(x^*) - x^*, x_n - x^* \big\rangle \le 0. \tag{3.9}$$

To show this inequality, we choose a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup w$, $\lambda_{n_k,i} \to \lambda_i \in (0, 2/\|A\|^2)$ for each $i \ge 1$, and

$$\limsup_{n\to\infty}\big\langle f(x^*) - x^*, x_n - x^* \big\rangle = \lim_{k\to\infty}\big\langle f(x^*) - x^*, x_{n_k} - x^* \big\rangle. \tag{3.10}$$

It follows from (3.8) that

$$\begin{aligned}
&\Big\|J_{\beta_i}^{B_i}\big[x_{n_k} - \lambda_i A^*(I - J_{\beta_i}^{K_i})Ax_{n_k}\big] - x_{n_k}\Big\| \\
&\quad\le \Big\|J_{\beta_i}^{B_i}\big[x_{n_k} - \lambda_i A^*(I - J_{\beta_i}^{K_i})Ax_{n_k}\big] - J_{\beta_i}^{B_i}\big[x_{n_k} - \lambda_{n_k,i}A^*(I - J_{\beta_i}^{K_i})Ax_{n_k}\big]\Big\| + \Big\|J_{\beta_i}^{B_i}\big[x_{n_k} - \lambda_{n_k,i}A^*(I - J_{\beta_i}^{K_i})Ax_{n_k}\big] - x_{n_k}\Big\| \\
&\quad\le \big\|\big[x_{n_k} - \lambda_i A^*(I - J_{\beta_i}^{K_i})Ax_{n_k}\big] - \big[x_{n_k} - \lambda_{n_k,i}A^*(I - J_{\beta_i}^{K_i})Ax_{n_k}\big]\big\| + \Big\|J_{\beta_i}^{B_i}\big[x_{n_k} - \lambda_{n_k,i}A^*(I - J_{\beta_i}^{K_i})Ax_{n_k}\big] - x_{n_k}\Big\| \\
&\quad= |\lambda_i - \lambda_{n_k,i}|\,\big\|A^*(I - J_{\beta_i}^{K_i})Ax_{n_k}\big\| + \Big\|J_{\beta_i}^{B_i}\big[x_{n_k} - \lambda_{n_k,i}A^*(I - J_{\beta_i}^{K_i})Ax_{n_k}\big] - x_{n_k}\Big\| \to 0 \quad (\text{as } k \to \infty).
\end{aligned}$$

For each $i \ge 1$, $J_{\beta_i}^{B_i}\big[I - \lambda_i A^*(I - J_{\beta_i}^{K_i})A\big]$ is a nonexpansive mapping by Lemma 2.7(iii). Thus from Lemma 2.1, $w = J_{\beta_i}^{B_i}\big[I - \lambda_i A^*(I - J_{\beta_i}^{K_i})A\big]w$. By Lemma 3.1, $w \in \Omega$, i.e., $w$ is a solution of (GSVIP) (1.4). Consequently, we have

$$\limsup_{n\to\infty}\big\langle f(x^*) - x^*, x_n - x^* \big\rangle = \lim_{k\to\infty}\big\langle f(x^*) - x^*, x_{n_k} - x^* \big\rangle = \big\langle f(x^*) - x^*, w - x^* \big\rangle \le 0.$$

(IV) Finally, we prove that $x_n \to x^* = P_\Omega f(x^*)$.

In fact, from Lemma 2.2 we have

$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\le \Big\|\alpha_n(x_n - x^*) + \sum_{i=1}^{\infty}\gamma_{n,i}\Big(J_{\beta_i}^{B_i}\big[x_n - \lambda_{n,i}A^*(I - J_{\beta_i}^{K_i})Ax_n\big] - x^*\Big)\Big\|^2 + 2\xi_n\big\langle f(x_n) - x^*, x_{n+1} - x^*\big\rangle \\
&\le (1 - \xi_n)^2\|x_n - x^*\|^2 + 2\xi_n\big\langle f(x_n) - f(x^*), x_{n+1} - x^*\big\rangle + 2\xi_n\big\langle f(x^*) - x^*, x_{n+1} - x^*\big\rangle \\
&\le (1 - \xi_n)^2\|x_n - x^*\|^2 + 2\xi_n k\|x_n - x^*\|\|x_{n+1} - x^*\| + 2\xi_n\big\langle f(x^*) - x^*, x_{n+1} - x^*\big\rangle \\
&\le (1 - \xi_n)^2\|x_n - x^*\|^2 + \xi_n k\big(\|x_{n+1} - x^*\|^2 + \|x_n - x^*\|^2\big) + 2\xi_n\big\langle f(x^*) - x^*, x_{n+1} - x^*\big\rangle.
\end{aligned}$$

Simplifying it, we have

$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\le \frac{(1 - \xi_n)^2 + \xi_n k}{1 - \xi_n k}\|x_n - x^*\|^2 + \frac{2\xi_n}{1 - \xi_n k}\big\langle f(x^*) - x^*, x_{n+1} - x^*\big\rangle \\
&= \frac{1 - 2\xi_n + \xi_n k}{1 - \xi_n k}\|x_n - x^*\|^2 + \frac{\xi_n^2}{1 - \xi_n k}\|x_n - x^*\|^2 + \frac{2\xi_n}{1 - \xi_n k}\big\langle f(x^*) - x^*, x_{n+1} - x^*\big\rangle \\
&\le (1 - \eta_n)\|x_n - x^*\|^2 + \eta_n\delta_n, \quad \forall n \ge 0,
\end{aligned}$$

where $\delta_n = \frac{\xi_n M}{2(1-k)} + \frac{1}{1-k}\big\langle f(x^*) - x^*, x_{n+1} - x^*\big\rangle$, $M = \sup_{n \ge 0}\|x_n - x^*\|^2$, and $\eta_n = \frac{2(1-k)\xi_n}{1 - \xi_n k}$. It is easy to see that $\eta_n \to 0$, $\sum_{n=1}^{\infty}\eta_n = \infty$, and $\limsup_{n\to\infty}\delta_n \le 0$. Hence, by Lemma 2.4, the sequence $\{x_n\}$ converges strongly to $x^* = P_\Omega f(x^*)$.

Case 2. Assume that $\{\|x_n - x^*\|\}$ is not a monotone sequence. Then, by Lemma 2.5, we can define a sequence of positive integers $\{\tau(n)\}$, $n \ge n_0$ (for some $n_0$ large enough), by

$$\tau(n) = \max\big\{k \le n : \|x_k - x^*\| < \|x_{k+1} - x^*\|\big\}. \tag{3.11}$$

Clearly, $\{\tau(n)\}$ is a nondecreasing sequence such that $\tau(n) \to \infty$ as $n \to \infty$, and for all $n \ge n_0$,

$$\|x_{\tau(n)} - x^*\| \le \|x_{\tau(n)+1} - x^*\|. \tag{3.12}$$

Therefore $\{\|x_{\tau(n)} - x^*\|\}$ is a nondecreasing sequence. Arguing as in Case 1, we obtain $\lim_{n\to\infty}\|x_{\tau(n)} - x^*\| = 0$ and $\lim_{n\to\infty}\|x_{\tau(n)+1} - x^*\| = 0$. Hence, by Lemma 2.5, we have

$$0 \le \|x_n - x^*\| \le \max\big\{\|x_n - x^*\|, \|x_{\tau(n)} - x^*\|\big\} \le \|x_{\tau(n)+1} - x^*\| \to 0 \quad \text{as } n \to \infty.$$

This implies that $x_n \to x^*$, and $x^* = P_\Omega f(x^*)$ is a solution of (GSVIP) (1.4).

This completes the proof of Theorem 3.2. □

In Theorem 3.2, if $B_i = B$ and $K_i = K$ for each $i \ge 1$, where $B: H_1 \to 2^{H_1}$ and $K: H_2 \to 2^{H_2}$ are two set-valued maximal monotone mappings, then from Theorem 3.2 we obtain the following.

Theorem 3.3 Let $H_1$, $H_2$, $A$, $A^*$, $B$, $K$, $\Omega$, $f$ be the same as in Theorem 3.2. Let $\{\alpha_n\}$, $\{\xi_n\}$, $\{\gamma_n\}$ be sequences in $(0,1)$ with $\alpha_n + \xi_n + \gamma_n = 1$ for each $n \ge 0$. Let $\beta > 0$ be any given positive number, and let $\{\lambda_n\}$ be a sequence in $(0, 2/\|A\|^2)$. Let $\{x_n\}$ be the sequence defined by

$$x_{n+1} = \alpha_n x_n + \xi_n f(x_n) + \gamma_n J_\beta^B\big[x_n - \lambda_n A^*(I - J_\beta^K)Ax_n\big], \quad n \ge 0. \tag{3.13}$$

If Ω and the following conditions are satisfied:

(i) $\lim_{n\to\infty} \xi_n = 0$ and $\sum_{n=0}^{\infty} \xi_n = \infty$;

(ii) $\liminf_{n\to\infty} \alpha_n \gamma_n > 0$;

(iii) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < 2/\|A\|^2$,

then $x_n \to x^* \in \Omega$, where $x^* = P_\Omega f(x^*)$.
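For illustration, iteration (3.13) can be sketched as follows, assuming the resolvents $J_\beta^B$ and $J_\beta^K$ are available as callables and choosing $\xi_n = 1/(n+1)$ and $\alpha_n = \gamma_n = (1 - \xi_n)/2$, which satisfy conditions (i)-(iii); the function names are hypothetical.

```python
import numpy as np

# A minimal sketch of iteration (3.13) for a single pair (B, K), assuming
# the resolvents J_beta^B and J_beta^K are supplied as callables (e.g.
# proximal maps of convex functions) and f is a contraction.

def svip_solve(A, JB, JK, f, x0, iters=2000):
    lam = 1.0 / np.linalg.norm(A, 2) ** 2  # lambda_n in (0, 2/||A||^2)
    x = x0.copy()
    for n in range(1, iters + 1):
        xi = 1.0 / (n + 1)                 # xi_n -> 0, sum xi_n = infinity
        alpha = gamma = (1.0 - xi) / 2.0   # alpha_n + xi_n + gamma_n = 1
        Ax = A @ x
        x = alpha * x + xi * f(x) + gamma * JB(x - lam * A.T @ (Ax - JK(Ax)))
    return x
```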

4 Applications

In this section we shall utilize the results presented in Theorem 3.2 and Theorem 3.3 to study some problems.

4.1 Application to split optimization problem

Let $H_1$ and $H_2$ be two real Hilbert spaces, $h: H_1 \to \mathbb{R}$ and $g: H_2 \to \mathbb{R}$ two proper, convex and lower semicontinuous functions, and $A: H_1 \to H_2$ a bounded linear operator. The so-called split optimization problem (SOP) is

$$\text{to find } x^* \in H_1 \text{ such that } h(x^*) = \min_{y \in H_1} h(y) \text{ and } g(Ax^*) = \min_{z \in H_2} g(z). \tag{4.1}$$

Set $B = \partial h$ and $K = \partial g$. It is known that $B: H_1 \to 2^{H_1}$ (resp. $K: H_2 \to 2^{H_2}$) is a maximal monotone mapping, so we can define the resolvents $J_\beta^B = (I + \beta B)^{-1}$ and $J_\beta^K = (I + \beta K)^{-1}$, where $\beta > 0$. Since $x^*$ and $Ax^*$ are minimizers of $h$ on $H_1$ and of $g$ on $H_2$, respectively, for any given $\beta > 0$ we have

$$x^* \in B^{-1}(0) = F(J_\beta^B) \quad \text{and} \quad Ax^* \in K^{-1}(0) = F(J_\beta^K). \tag{4.2}$$

This implies that the (SOP) (4.1) is equivalent to the split variational inclusion problem (SVIP) (4.2). From Theorem 3.3 we have the following.

Theorem 4.1 Let $H_1$, $H_2$, $A$, $B$, $K$, $h$, $g$ be the same as above. Let $f$, $\{\alpha_n\}$, $\{\xi_n\}$, $\{\gamma_n\}$ be the same as in Theorem 3.3. Let $\beta > 0$ be any given positive number, and let $\{\lambda_n\}$ be a sequence in $(0, 2/\|A\|^2)$. Let $\{x_n\}$ be the sequence generated by $x_0 \in H_1$ and

$$\begin{cases}
y_n = \operatorname*{argmin}_{z \in H_2}\big\{g(z) + \frac{1}{2\beta}\|z - Ax_n\|^2\big\}, \\
z_n = x_n - \lambda_n A^*(Ax_n - y_n), \\
w_n = \operatorname*{argmin}_{y \in H_1}\big\{h(y) + \frac{1}{2\beta}\|y - z_n\|^2\big\}, \\
x_{n+1} = \alpha_n x_n + \xi_n f(x_n) + \gamma_n w_n, \quad n \ge 0.
\end{cases} \tag{4.3}$$

If $\Omega_1 \neq \emptyset$, where $\Omega_1$ is the solution set of the split optimization problem (4.1), and the following conditions are satisfied:

(i) $\lim_{n\to\infty} \xi_n = 0$ and $\sum_{n=0}^{\infty} \xi_n = \infty$;

(ii) $\liminf_{n\to\infty} \alpha_n \gamma_n > 0$;

(iii) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < 2/\|A\|^2$,

then $x_n \to x^* \in \Omega_1$, where $x^* = P_{\Omega_1} f(x^*)$.

Proof Since $B = \partial h$, $K = \partial g$, and $y_n = \operatorname*{argmin}_{z \in H_2}\big\{g(z) + \frac{1}{2\beta}\|z - Ax_n\|^2\big\}$, we have

$$0 \in \Big[K(z) + \frac{1}{\beta}(z - Ax_n)\Big]_{z = y_n}, \quad \text{i.e.,} \quad Ax_n \in (\beta K + I)(y_n).$$

This implies that

$$y_n = J_\beta^K(Ax_n). \tag{4.4}$$

Similarly, from (4.3), we have

$$w_n = J_\beta^B(z_n). \tag{4.5}$$

From (4.3)-(4.5), we have

$$w_n = J_\beta^B\big(x_n - \lambda_n A^*(I - J_\beta^K)Ax_n\big). \tag{4.6}$$

Therefore (4.3) can be rewritten as

$$x_{n+1} = \alpha_n x_n + \xi_n f(x_n) + \gamma_n J_\beta^B\big(x_n - \lambda_n A^*(I - J_\beta^K)Ax_n\big), \quad n \ge 0. \tag{4.7}$$

The conclusion of Theorem 4.1 can be obtained from Theorem 3.3 immediately. □
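To make the argmin steps in (4.3) concrete, the following sketch assumes $h(y) = \|y\|_1$ and $g(z) = \frac{1}{2}\|z - b\|^2$ for a given vector $b$, so that both proximal steps have closed forms; it only illustrates the mechanics of the scheme, under the standing assumption that the solution set is nonempty.

```python
import numpy as np

# A minimal sketch of scheme (4.3), assuming h(y) = ||y||_1 and
# g(z) = (1/2)||z - b||^2, both with closed-form proximal maps.

def prox_h(y, beta):                     # argmin_u { h(u) + ||u - y||^2 / (2*beta) }
    return np.sign(y) * np.maximum(np.abs(y) - beta, 0.0)

def prox_g(z, beta, b):                  # argmin_u { g(u) + ||u - z||^2 / (2*beta) }
    return (z + beta * b) / (1.0 + beta)

def sop_solve(A, b, x0, beta=1.0, iters=2000):
    lam = 1.0 / np.linalg.norm(A, 2) ** 2
    f = lambda x: 0.5 * x                # a contraction with constant k = 1/2
    x = x0.copy()
    for n in range(1, iters + 1):
        xi = 1.0 / (n + 1)
        alpha = gamma = (1.0 - xi) / 2.0
        y = prox_g(A @ x, beta, b)       # y_n
        z = x - lam * A.T @ (A @ x - y)  # z_n
        w = prox_h(z, beta)              # w_n
        x = alpha * x + xi * f(x) + gamma * w
    return x
```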

4.2 Application to split feasibility problem

Let $C \subset H_1$ and $Q \subset H_2$ be two nonempty closed convex subsets, and let $A: H_1 \to H_2$ be a bounded linear operator. Now we consider the following split feasibility problem:

$$\text{to find } x^* \in C \text{ such that } Ax^* \in Q. \tag{4.8}$$

Let $i_C$ and $i_Q$ be the indicator functions of $C$ and $Q$ defined by (1.9), and let $N_C(u)$ be the normal cone to $C$ at $u \in H_1$, defined by

$$N_C(u) = \big\{z \in H_1 : \langle z, v - u \rangle \le 0, \forall v \in C\big\}.$$

Since $i_C$ and $i_Q$ are both proper convex and lower semicontinuous functions on $H_1$ and $H_2$, respectively, and the subdifferential $\partial i_C$ of $i_C$ (resp. $\partial i_Q$ of $i_Q$) is a maximal monotone operator, we can define the resolvents $J_\beta^{\partial i_C}$ of $\partial i_C$ and $J_\beta^{\partial i_Q}$ of $\partial i_Q$ by

$$J_\beta^{\partial i_C}(x) = (I + \beta\,\partial i_C)^{-1}(x), \quad x \in H_1; \qquad J_\beta^{\partial i_Q}(x) = (I + \beta\,\partial i_Q)^{-1}(x), \quad x \in H_2,$$

where $\beta > 0$. By definition, we know that

$$\begin{aligned}
\partial i_C(x) &= \big\{z \in H_1 : i_C(x) + \langle z, y - x \rangle \le i_C(y), \forall y \in H_1\big\} \\
&= \big\{z \in H_1 : \langle z, y - x \rangle \le 0, \forall y \in C\big\} = N_C(x), \quad x \in C.
\end{aligned}$$

Hence, for each β>0, we have

$$u = J_\beta^{\partial i_C}(x) \iff x - u \in \beta N_C(u) \iff \langle x - u, y - u \rangle \le 0, \ \forall y \in C \iff u = P_C(x).$$

This implies that $J_\beta^{\partial i_C} = P_C$; similarly, $J_\beta^{\partial i_Q} = P_Q$. Taking $h(x) = i_C(x)$ and $g(x) = i_Q(x)$ in (4.1), the (SFP) (4.8) is equivalent to the following split optimization problem:

$$\text{to find } x^* \in H_1 \text{ such that } i_C(x^*) = \min_{y \in H_1} i_C(y) \text{ and } i_Q(Ax^*) = \min_{z \in H_2} i_Q(z). \tag{4.9}$$

Hence, the following result can be obtained from Theorem 4.1 immediately.

Theorem 4.2 Let $H_1$, $H_2$, $A$, $A^*$, $i_C$, $i_Q$ be the same as above. Let $f$, $\{\alpha_n\}$, $\{\xi_n\}$, $\{\gamma_n\}$ be the same as in Theorem 4.1. Let $\{\lambda_n\}$ be a sequence in $(0, 2/\|A\|^2)$. Let $\{x_n\}$ be the sequence defined by

$$x_{n+1} = \alpha_n x_n + \xi_n f(x_n) + \gamma_n P_C\big[x_n - \lambda_n A^*(I - P_Q)Ax_n\big], \quad n \ge 0. \tag{4.10}$$

If the solution set $\Omega_2$ of the split optimization problem (4.9) is nonempty, and the following conditions are satisfied:

(i) $\lim_{n\to\infty} \xi_n = 0$ and $\sum_{n=0}^{\infty} \xi_n = \infty$;

(ii) $\liminf_{n\to\infty} \alpha_n \gamma_n > 0$;

(iii) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < 2/\|A\|^2$,

then $x_n \to x^* \in \Omega_2$, where $x^* = P_{\Omega_2} f(x^*)$.

Remark 4.3 Theorem 4.2 extends and improves the main results in Censor and Elfving [1] and Byrne [2].
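For completeness, here is a minimal sketch of iteration (4.10), reusing the box-and-ball setting of the earlier CQ example; the parameter choices $\xi_n = 1/(n+1)$, $\alpha_n = \gamma_n = (1 - \xi_n)/2$, and constant $\lambda_n = 1/\|A\|^2$ satisfy conditions (i)-(iii), and $f(x) = x/2$ is a simple contraction. All of these concrete choices are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of iteration (4.10), assuming C is a box [lo, hi]^n
# and Q is a closed ball of radius r, as in the CQ example above.

def sfp_solve(A, lo, hi, r, x0, iters=2000):
    lam = 1.0 / np.linalg.norm(A, 2) ** 2   # lambda_n in (0, 2/||A||^2)
    x = x0.copy()
    for n in range(1, iters + 1):
        xi = 1.0 / (n + 1)                  # condition (i)
        alpha = gamma = (1.0 - xi) / 2.0    # condition (ii): alpha_n * gamma_n -> 1/4
        Ax = A @ x
        nrm = np.linalg.norm(Ax)
        PQAx = Ax if nrm <= r else (r / nrm) * Ax
        t = np.clip(x - lam * A.T @ (Ax - PQAx), lo, hi)   # P_C[...]
        x = alpha * x + xi * (0.5 * x) + gamma * t
    return x
```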