1 Introduction and preliminaries

Let E be a real Banach space with norm ‖·‖ and let E* denote the dual space of E. We use ‘→’ and ‘⇀’ to denote strong and weak convergence, respectively. We denote the value of f ∈ E* at x ∈ E by ⟨x, f⟩.

The normalized duality mapping J from E to 2^{E*} is defined by

Jx := {f ∈ E* : ⟨x, f⟩ = ‖x‖² = ‖f‖²}, ∀x ∈ E.

We call J weakly sequentially continuous if, for every sequence {x_n} in E converging weakly to x, the sequence {Jx_n} converges weakly* to Jx in E*.

A mapping T: D(T) = E → E is said to be demi-continuous [1] on E if Tx_n ⇀ Tx, as n → ∞, for any sequence {x_n} strongly convergent to x in E. A mapping T: D(T) = E → E is said to be hemi-continuous [1] on E if w-lim_{t→0} T(x + ty) = Tx, for any x, y ∈ E. A mapping T: E → E is said to be non-expansive if ‖Tx − Ty‖ ≤ ‖x − y‖, for all x, y ∈ E.

The Lyapunov functional φ: E × E → R⁺ is defined as follows [2]:

φ(x, y) = ‖x‖² − 2⟨x, Jy⟩ + ‖y‖²,

for x, y ∈ E.

It is obvious from the definition of φ that

(‖x‖ − ‖y‖)² ≤ φ(x, y) ≤ (‖x‖ + ‖y‖)²,
(1.1)

for all x,yE. We also know that

φ(x, y) = φ(x, z) + φ(z, y) + 2⟨x − z, Jz − Jy⟩,
(1.2)

for each x,y,zE; see [3, 4].
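In a Hilbert space, J is the identity mapping and φ(x, y) collapses to ‖x − y‖². The following minimal Python sketch (our illustration, not part of [2–4]) checks this identity together with (1.1) and (1.2) on E = R³:

```python
import math

def phi(x, y):
    # Lyapunov functional on R^n with the Euclidean norm, where Jy = y:
    # phi(x, y) = ||x||^2 - 2<x, y> + ||y||^2
    return sum(a * a for a in x) - 2 * sum(a * b for a, b in zip(x, y)) + sum(b * b for b in y)

x, y, z = [1.0, -2.0, 3.0], [0.5, 1.0, -1.0], [2.0, 0.0, 1.0]
nx = math.sqrt(sum(a * a for a in x))
ny = math.sqrt(sum(b * b for b in y))

# phi(x, y) = ||x - y||^2 in a Hilbert space
assert abs(phi(x, y) - sum((a - b) ** 2 for a, b in zip(x, y))) < 1e-12

# inequality (1.1): (||x|| - ||y||)^2 <= phi(x, y) <= (||x|| + ||y||)^2
assert (nx - ny) ** 2 - 1e-12 <= phi(x, y) <= (nx + ny) ** 2 + 1e-12

# identity (1.2): phi(x, y) = phi(x, z) + phi(z, y) + 2<x - z, z - y>
rhs = phi(x, z) + phi(z, y) + 2 * sum((a - c) * (c - b) for a, b, c in zip(x, y, z))
assert abs(phi(x, y) - rhs) < 1e-12
```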

We use Fix(S) to denote the set of fixed points of a mapping S: E → E; that is, Fix(S) := {x ∈ E : Sx = x}. A mapping S: E → E is said to be generalized non-expansive [4] if Fix(S) ≠ ∅ and φ(Sx, p) ≤ φ(x, p), for all x ∈ E and p ∈ Fix(S).

Let C be a nonempty closed subset of E and let Q be a mapping of E onto C. Then Q is said to be sunny [4] if Q(Q(x) + t(x − Q(x))) = Q(x), for all x ∈ E and t ≥ 0. A mapping Q: E → C is said to be a retraction [4] if Q(z) = z for every z ∈ C. If E is a smooth and strictly convex Banach space, then a sunny generalized non-expansive retraction of E onto C is unique, and it is denoted by R_C.

Let I denote the identity operator on E. A mapping A: D(A) ⊂ E → E is said to be accretive if ⟨Ax − Ay, J(x − y)⟩ ≥ 0, for all x, y ∈ D(A), and it is called m-accretive if R(I + λA) = E, for all λ > 0.

If A is accretive, then for each r > 0 we can define a single-valued mapping J_r^A: R(I + rA) → D(A) by J_r^A := (I + rA)^{−1}, which is called the resolvent of A. Moreover, J_r^A is a non-expansive mapping [1]. In the process of constructing iterative schemes to approximate zeros of an accretive mapping A, the non-expansive property of J_r^A plays an important role.
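The non-expansiveness of the resolvent is easy to observe numerically. Below is a minimal sketch on E = R (a Hilbert space), with the hypothetical choice A(t) = t³, which is continuous and monotone, hence m-accretive; since t ↦ t + r·A(t) is strictly increasing, the resolvent can be computed by bisection:

```python
def resolvent(y, r, a=lambda t: t ** 3, lo=-10.0, hi=10.0):
    """J_r^A(y): the unique solution x of x + r*a(x) = y, found by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid + r * a(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r = 0.7
for y1, y2 in [(-3.0, 1.5), (0.2, 4.0), (-5.0, 5.0)]:
    # non-expansiveness: |J_r(y1) - J_r(y2)| <= |y1 - y2|
    assert abs(resolvent(y1, r) - resolvent(y2, r)) <= abs(y1 - y2) + 1e-9
```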

A mapping A: D(A) ⊂ E → E is said to be d-accretive [5] if ⟨Ax − Ay, Jx − Jy⟩ ≥ 0, for all x, y ∈ D(A), and it is called m-d-accretive if R(I + λA) = E, for all λ > 0. However, the resolvent of an m-d-accretive mapping is not a non-expansive mapping in general.

An operator B ⊂ E × E* is said to be monotone if ⟨x₁ − x₂, y₁ − y₂⟩ ≥ 0, for y_i ∈ Bx_i, i = 1, 2. A monotone operator B is said to be maximal monotone if R(J + λB) = E*, for all λ > 0. An operator B ⊂ E × E* is said to be strictly monotone if ⟨x₁ − x₂, y₁ − y₂⟩ > 0, for x₁ ≠ x₂ and y_i ∈ Bx_i, i = 1, 2.

It is clear that in the frame of Hilbert spaces, (m-)accretive mappings, (m-)d-accretive mappings, and (maximal) monotone operators coincide. But in the frame of non-Hilbertian Banach spaces, they belong to different classes of important nonlinear operators with practical backgrounds. During the past 50 years or so, a great deal of research has been devoted to constructing iterative schemes to approximate the zeros of m-accretive mappings and maximal monotone operators. However, related work on d-accretive mappings is rarely found.

As we know, in 2000, Alber and Reich [5] presented the following iterative schemes for the d-accretive mapping T in a real uniformly smooth and uniformly convex Banach space:

x_{n+1} = x_n − α_n T x_n
(1.3)

and

x_{n+1} = x_n − α_n (T x_n / ‖T x_n‖), n ≥ 0.
(1.4)

They proved that the iterative sequences {x_n} generated by (1.3) and (1.4) converge weakly to a zero point of T under the assumptions that T is uniformly bounded and demi-continuous.

In 2007, Guan [6] presented the following projective method for the m-d-accretive mapping A in a real uniformly smooth and uniformly convex Banach space:

{ x₁ ∈ D(A), y_n = J_{r_n}^A x_n, C_n = {v ∈ D(A) : φ(v, y_n) ≤ φ(v, x_n)}, Q_n = {v ∈ D(A) : ⟨x_n − v, Jx₁ − Jx_n⟩ ≥ 0}, x_{n+1} = Π_{C_n∩Q_n} x₁, n ≥ 1,
(1.5)

where J_{r_n}^A = (I + r_n A)^{−1}, and Π_{C_n∩Q_n} is the generalized projection from D(A) onto C_n ∩ Q_n. It was shown that the iterative sequence {x_n} generated by (1.5) converges strongly to a zero point of A under the assumptions that A is demi-continuous, the normalized duality mapping J is weakly sequentially continuous, and J_{r_n}^A satisfies

φ(p, J_{r_n}^A x) ≤ φ(p, x),
(1.6)

for all x ∈ E and p ∈ A^{−1}0. These restrictions are extremely strong, and it is hard to find an m-d-accretive mapping that is both demi-continuous and satisfies (1.6).

The so-called block iterative scheme for solving the problem of image recovery proposed by Aharoni and Censor [7] inspired us. In a finite-dimensional space H, the block iterative sequence {x_n} is generated in the following way: x₁ = x ∈ H and

x_{n+1} = Σ_{i=1}^m ω_{n,i} (α_{n,i} x_n + (1 − α_{n,i}) P_i x_n),
(1.7)

where P_i is a non-expansive retraction from H onto C_i, and {C_i}_{i=1}^m is a family of nonempty closed and convex subsets of H. Here {ω_{n,i}} ⊂ [0,1] with Σ_{i=1}^m ω_{n,i} = 1, and {α_{n,i}} ⊂ (−1, 1), for i = 1, 2, …, m and n ≥ 1.

In [8], Kikkawa and Takahashi applied the block iterative method to approximate a common fixed point of finitely many non-expansive mappings {T_i}_{i=1}^m in Banach spaces in the following way and obtained weak convergence theorems: x₁ = x ∈ C, and

x_{n+1} = Σ_{i=1}^m ω_{n,i} (α_{n,i} x_n + (1 − α_{n,i}) T_i x_n),
(1.8)

where {ω_{n,i}} ⊂ [0,1] with Σ_{i=1}^m ω_{n,i} = 1, and {α_{n,i}} ⊂ [0,1], for i = 1, 2, …, m and n ≥ 1.
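As a toy instance of the block iterations (1.7)–(1.8), take H = R and let P_i be the metric projections onto the assumed sets C₁ = [0, 2] and C₂ = [1, 3]; these are non-expansive retractions with Fix(P_i) = C_i, and the iterates settle in C₁ ∩ C₂ = [1, 2]. A minimal sketch with constant parameters:

```python
def proj(lo, hi):
    """Metric projection of R onto [lo, hi]: a non-expansive retraction."""
    return lambda t: min(max(t, lo), hi)

P = [proj(0.0, 2.0), proj(1.0, 3.0)]   # assumed sets C1 = [0,2], C2 = [1,3]
w = [0.5, 0.5]                         # omega_{n,i}, summing to 1
a = [0.3, 0.3]                         # alpha_{n,i}

x = 10.0
for _ in range(200):
    x = sum(wi * (ai * x + (1 - ai) * Pi(x)) for wi, ai, Pi in zip(w, a, P))

assert 1.0 - 1e-9 <= x <= 2.0 + 1e-9   # x ends up in C1 ∩ C2 = [1, 2]
```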

In this paper, we shall borrow the idea of the block iterative method, which highlights convex combination techniques. Our main work can be divided into three parts. In Section 2, we shall construct iterative schemes by convex combination techniques for approximating common zeros of finitely many m-d-accretive mappings; some weak convergence theorems are obtained in a Banach space. In Section 3, we shall construct iterative schemes by convex combination and retraction techniques for approximating common zeros of m-d-accretive mappings; some strong convergence theorems are obtained in a Banach space. In Section 4, we shall present a nonlinear elliptic equation from which an m-d-accretive mapping can be defined. Our main contributions lie in the following aspects:

  1. (i)

The restrictions are weaker: the demi-continuity of the d-accretive mapping A and inequality (1.6) are no longer needed.

  2. (ii)

The Lyapunov functional is employed in estimating the convergence of the iterative sequence. This is mainly because the resolvent of a d-accretive mapping is not non-expansive.

  3. (iii)

The connection between a nonlinear elliptic equation and an m-d-accretive mapping is set up, from which we can not only find a good example of an m-d-accretive mapping but also see the iterative construction of the solution of the nonlinear elliptic equation.

In order to prove our convergence theorems, we also need the following lemmas.

Lemma 1.1 [1, 9, 10]

The duality mapping J: E → 2^{E*} has the following properties:

  1. (i)

If E is a real reflexive and smooth Banach space, then J: E → E* is single-valued.

  2. (ii)

    If E is reflexive, then J is a surjection.

  3. (iii)

If E is a real uniformly smooth and uniformly convex Banach space, then J^{−1}: E* → E is also a duality mapping. Moreover, J and J^{−1} are uniformly continuous on each bounded subset of E and E*, respectively.

  4. (iv)

    E is strictly convex if and only if J is strictly monotone.

Lemma 1.2 [10]

Let E be a real smooth and uniformly convex Banach space and let B ⊂ E × E* be a maximal monotone operator. Then B^{−1}0 is a closed and convex subset of E, and the graph of B, G(B), is demi-closed in the following sense: if {x_n} ⊂ D(B) with x_n → x in E and y_n ∈ Bx_n with y_n ⇀ y in E*, then x ∈ D(B) and y ∈ Bx.

Lemma 1.3 [2]

Let E be a real reflexive, strictly convex, and smooth Banach space, let C be a nonempty closed subset of E, and let R_C: E → C be a sunny generalized non-expansive retraction. Then, for u ∈ C and x ∈ E, φ(x, R_C x) + φ(R_C x, u) ≤ φ(x, u).

Lemma 1.4 [3]

Let E be a real smooth and uniformly convex Banach space, and let {x_n} and {y_n} be two sequences in E. If either {x_n} or {y_n} is bounded and φ(x_n, y_n) → 0, as n → ∞, then x_n − y_n → 0, as n → ∞.

Lemma 1.5 [11]

Let {a_n} and {b_n} be two sequences of nonnegative real numbers with a_{n+1} ≤ a_n + b_n, for n ≥ 1. If Σ_{n=1}^∞ b_n < +∞, then lim_{n→∞} a_n exists.

2 Weak convergence theorems

Theorem 2.1 Let E be a real uniformly smooth and uniformly convex Banach space. Let A_i: E → E (i = 1, 2, …, m) be a finite family of m-d-accretive mappings, {ω_{n,i}}, {η_{n,i}} ⊂ (0,1], {α_{n,i}}, {β_{n,i}} ⊂ [0,1), and {r_{n,i}}, {s_{n,i}} ⊂ (0,+∞), for i = 1, 2, …, m, with Σ_{i=1}^m ω_{n,i} = 1 and Σ_{i=1}^m η_{n,i} = 1. Let D := ∩_{i=1}^m A_i^{−1}0 ≠ ∅. Suppose that the normalized duality mapping J: E → E* is weakly sequentially continuous. Let {x_n} be generated by the following iterative algorithm:

{ x₁ ∈ E, y_n = Σ_{i=1}^m ω_{n,i}[α_{n,i} x_n + (1 − α_{n,i})(I + r_{n,i}A_i)^{−1} x_n], x_{n+1} = Σ_{i=1}^m η_{n,i}[β_{n,i} x_n + (1 − β_{n,i})(I + s_{n,i}A_i)^{−1} y_n], n ≥ 1.
(2.1)

Suppose the following conditions are satisfied:

  1. (i)

lim sup_{n→∞} α_{n,i} < 1 and lim sup_{n→∞} β_{n,i} < 1, for i = 1, 2, …, m;

  2. (ii)

lim inf_{n→∞} η_{n,i} > 0 and lim inf_{n→∞} ω_{n,i} > 0, for i = 1, 2, …, m;

  3. (iii)

inf_{n≥1} r_{n,i} > 0 and inf_{n≥1} s_{n,i} > 0, for i = 1, 2, …, m.

Then {x_n} converges weakly to a point v₀ ∈ D.

Proof For i = 1, 2, …, m, let J_{r_{n,i}}^{A_i} = (I + r_{n,i}A_i)^{−1} and J_{s_{n,i}}^{A_i} = (I + s_{n,i}A_i)^{−1}.

We split the proof into the following six steps.

Step 1. For p ∈ D, J_{r_{n,i}}^{A_i} and J_{s_{n,i}}^{A_i} satisfy the following two inequalities, respectively:

φ(x_n, J_{r_{n,i}}^{A_i} x_n) + φ(J_{r_{n,i}}^{A_i} x_n, p) ≤ φ(x_n, p),
(2.2)
φ(y_n, J_{s_{n,i}}^{A_i} y_n) + φ(J_{s_{n,i}}^{A_i} y_n, p) ≤ φ(y_n, p).
(2.3)

In fact, using (1.2), we know that, for p ∈ D,

φ(x_n, p) = φ(x_n, J_{r_{n,i}}^{A_i} x_n) + φ(J_{r_{n,i}}^{A_i} x_n, p) + 2⟨x_n − J_{r_{n,i}}^{A_i} x_n, J J_{r_{n,i}}^{A_i} x_n − Jp⟩.
(2.4)

Since A_i is d-accretive, A_i p = 0, and (x_n − J_{r_{n,i}}^{A_i} x_n)/r_{n,i} = A_i J_{r_{n,i}}^{A_i} x_n, we have

⟨(x_n − J_{r_{n,i}}^{A_i} x_n)/r_{n,i}, J J_{r_{n,i}}^{A_i} x_n − Jp⟩ ≥ 0.

From (2.4) we know that (2.2) is true. So is (2.3).

Step 2. { x n } is bounded.

For any p ∈ D, using (2.2) and (2.3), we have

φ(x_{n+1}, p) ≤ Σ_{i=1}^m η_{n,i}[β_{n,i} φ(x_n, p) + (1 − β_{n,i}) φ(J_{s_{n,i}}^{A_i} y_n, p)] ≤ Σ_{i=1}^m η_{n,i}[β_{n,i} φ(x_n, p) + (1 − β_{n,i}) φ(y_n, p)] ≤ Σ_{i=1}^m η_{n,i} β_{n,i} φ(x_n, p) + Σ_{i=1}^m η_{n,i}(1 − β_{n,i}) Σ_{i=1}^m ω_{n,i}[α_{n,i} φ(x_n, p) + (1 − α_{n,i}) φ(J_{r_{n,i}}^{A_i} x_n, p)] ≤ φ(x_n, p).

Lemma 1.5 implies that lim_{n→∞} φ(x_n, p) exists. Then (1.1) ensures that {x_n} is bounded.

Step 3. A_i J^{−1} ⊂ E* × E is maximal monotone, for each i, 1 ≤ i ≤ m.

Since A_i is d-accretive, for all x*, y* ∈ E*,

⟨x* − y*, A_i J^{−1} x* − A_i J^{−1} y*⟩ = ⟨A_i(J^{−1}x*) − A_i(J^{−1}y*), J(J^{−1}x*) − J(J^{−1}y*)⟩ ≥ 0.

Therefore, A i J 1 is monotone, for each i, 1im.

Since R(I + λA_i) = E for all λ > 0, for each y ∈ E there exists x ∈ E satisfying x + λA_i x = y. Using Lemma 1.1(ii), there exists x* ∈ E* such that J^{−1}x* = x. Thus J^{−1}x* + λA_i J^{−1}x* = y, which implies that R(J^{−1} + λA_i J^{−1}) = E, for all λ > 0. Thus A_i J^{−1} is maximal monotone, for each i, 1 ≤ i ≤ m.

Step 4. (A_i J^{−1})^{−1}0 ≠ ∅, for each i, 1 ≤ i ≤ m.

Since D ≠ ∅, there exists x ∈ E such that A_i x = 0, for i = 1, 2, …, m. Using Lemma 1.1(ii) again, there exists x* ∈ E* such that J^{−1}x* = x. Thus A_i J^{−1} x* = 0, for each i, 1 ≤ i ≤ m. That is, x* ∈ (A_i J^{−1})^{−1}0, for each i, 1 ≤ i ≤ m.

Step 5. ω(x_n) ⊂ D, where ω(x_n) denotes the set of all weak limit points of the weakly convergent subsequences of {x_n}.

Since {x_n} is bounded, there exists a subsequence of {x_n}, which for simplicity we still denote by {x_n}, such that x_n ⇀ x, as n → ∞.

For p ∈ D, using (2.2) and (2.3), we have

φ(x_{n+1}, p) ≤ Σ_{i=1}^m η_{n,i}[β_{n,i} φ(x_n, p) + (1 − β_{n,i}) φ(y_n, p)] ≤ Σ_{i=1}^m η_{n,i} β_{n,i} φ(x_n, p) + Σ_{i=1}^m η_{n,i}(1 − β_{n,i}) Σ_{i=1}^m ω_{n,i}[α_{n,i} φ(x_n, p) + (1 − α_{n,i}) φ(J_{r_{n,i}}^{A_i} x_n, p)] ≤ Σ_{i=1}^m η_{n,i} β_{n,i} φ(x_n, p) + Σ_{i=1}^m η_{n,i}(1 − β_{n,i}) Σ_{i=1}^m ω_{n,i} α_{n,i} φ(x_n, p) + Σ_{i=1}^m η_{n,i}(1 − β_{n,i}) Σ_{i=1}^m ω_{n,i}(1 − α_{n,i})[φ(x_n, p) − φ(x_n, J_{r_{n,i}}^{A_i} x_n)] = φ(x_n, p) − Σ_{i=1}^m η_{n,i}(1 − β_{n,i}) Σ_{i=1}^m ω_{n,i}(1 − α_{n,i}) φ(x_n, J_{r_{n,i}}^{A_i} x_n),

which implies that

Σ_{i=1}^m η_{n,i}(1 − β_{n,i}) Σ_{i=1}^m ω_{n,i}(1 − α_{n,i}) φ(x_n, J_{r_{n,i}}^{A_i} x_n) ≤ φ(x_n, p) − φ(x_{n+1}, p).

Using the assumptions and the result of Step 2, we know that φ(x_n, J_{r_{n,i}}^{A_i} x_n) → 0, as n → ∞, for i = 1, 2, …, m. Then Lemma 1.4 ensures that x_n − J_{r_{n,i}}^{A_i} x_n → 0, as n → ∞, for i = 1, 2, …, m.

For any v ∈ E, let u_i = A_i v. Since A_i is d-accretive and (x_n − J_{r_{n,i}}^{A_i} x_n)/r_{n,i} = A_i J_{r_{n,i}}^{A_i} x_n, we have

⟨u_i − (x_n − J_{r_{n,i}}^{A_i} x_n)/r_{n,i}, Jv − J J_{r_{n,i}}^{A_i} x_n⟩ ≥ 0.

Since both {x_n} and {J_{r_{n,i}}^{A_i} x_n} are bounded and x_n − J_{r_{n,i}}^{A_i} x_n → 0, letting n → ∞ and using Lemma 1.1(iii) together with the weak sequential continuity of J, we have

⟨u_i, Jv − Jx⟩ ≥ 0,

for i = 1, 2, …, m. That is, ⟨A_i J^{−1}(Jv) − 0, Jv − Jx⟩ ≥ 0, for all v ∈ E and i = 1, 2, …, m. From Step 3 and Lemma 1.2, we know that Jx ∈ (A_i J^{−1})^{−1}0, which implies that x ∈ A_i^{−1}0. And then x ∈ D.

Step 6. x_n ⇀ v₀, as n → ∞, where v₀ is the unique weak limit point of {x_n} in D.

From Steps 2 and 5, we know that there exists a subsequence {x_{n_i}} of {x_n} such that x_{n_i} ⇀ v₀ ∈ D, as i → ∞. If there exists another subsequence {x_{n_j}} of {x_n} such that x_{n_j} ⇀ v₁ ∈ D, as j → ∞, then from Step 2 we know that

lim_{n→∞}[φ(x_n, v₀) − φ(x_n, v₁)] = lim_{i→∞}[φ(x_{n_i}, v₀) − φ(x_{n_i}, v₁)] = lim_{i→∞}[‖v₀‖² − ‖v₁‖² + 2⟨x_{n_i}, Jv₁ − Jv₀⟩] = ‖v₀‖² − ‖v₁‖² + 2⟨v₀, Jv₁ − Jv₀⟩.
(2.5)

Similarly,

lim_{n→∞}[φ(x_n, v₀) − φ(x_n, v₁)] = lim_{j→∞}[φ(x_{n_j}, v₀) − φ(x_{n_j}, v₁)] = lim_{j→∞}[‖v₀‖² − ‖v₁‖² + 2⟨x_{n_j}, Jv₁ − Jv₀⟩] = ‖v₀‖² − ‖v₁‖² + 2⟨v₁, Jv₁ − Jv₀⟩.
(2.6)

From (2.5) and (2.6), we have ⟨v₁ − v₀, Jv₁ − Jv₀⟩ = 0, which implies that v₀ = v₁ since J is strictly monotone.

This completes the proof. □

Remark 2.1 If E reduces to a Hilbert space H, then (2.1) becomes an iterative scheme for approximating common zeros of finitely many m-accretive mappings.
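To illustrate Remark 2.1 numerically, here is a minimal Python sketch of scheme (2.1) on the Hilbert space E = R with m = 2, using the assumed mappings A₁(t) = t and A₂(t) = t³ (continuous and monotone, hence m-accretive, with common zero 0) and constant parameters satisfying conditions (i)–(iii); the resolvents are computed by bisection:

```python
def res(a, r, y, lo=-20.0, hi=20.0):
    """(I + r*a)^(-1) y, computed by bisection (t + r*a(t) is increasing)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid + r * a(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

A = [lambda t: t, lambda t: t ** 3]    # assumed mappings; common zero: 0
w = eta = [0.5, 0.5]                   # omega_{n,i}, eta_{n,i}
al = be = [0.2, 0.2]                   # alpha_{n,i}, beta_{n,i}
r = s = [1.0, 1.0]                     # r_{n,i}, s_{n,i}

x = 5.0
for _ in range(100):
    y = sum(wi * (ai * x + (1 - ai) * res(Ai, ri, x))
            for wi, ai, Ai, ri in zip(w, al, A, r))
    x = sum(ei * (bi * x + (1 - bi) * res(Ai, si, y))
            for ei, bi, Ai, si in zip(eta, be, A, s))

assert abs(x) < 1e-6   # x_n approaches the common zero of A_1 and A_2
```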

Remark 2.2 The iterative scheme (2.1) can be regarded as a two-step block iterative scheme.

Remark 2.3 If m = 1, then (2.1) reduces to the following scheme for approximating a zero point of an m-d-accretive mapping A:

{ x₁ ∈ E, y_n = α_n x_n + (1 − α_n)(I + r_n A)^{−1} x_n, x_{n+1} = β_n x_n + (1 − β_n)(I + s_n A)^{−1} y_n, n ≥ 1.
(2.7)

If, moreover, α_n ≡ 0 and β_n ≡ 0, then (2.7) becomes the so-called double-backward iterative scheme for the m-d-accretive mapping A:

{ x₁ ∈ E, y_n = (I + r_n A)^{−1} x_n, x_{n+1} = (I + s_n A)^{−1} y_n, n ≥ 1.
(2.8)
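A minimal numerical sketch of the double-backward scheme (2.8) on E = R, with the assumed mapping A(t) = eᵗ − 1 (continuous and monotone on R, hence m-accretive, with unique zero 0) and the constant choices r_n ≡ 1, s_n ≡ 0.5:

```python
import math

def res(r, y):
    """(I + r*A)^(-1) y for A(t) = e^t - 1, computed by bisection."""
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid + r * (math.exp(mid) - 1.0) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 3.0
for _ in range(60):
    y = res(1.0, x)   # y_n = (I + r_n A)^(-1) x_n
    x = res(0.5, y)   # x_{n+1} = (I + s_n A)^(-1) y_n

assert abs(x) < 1e-6  # x_n approaches the zero of A
```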

3 Strong convergence theorems

Theorem 3.1 Let E be a real uniformly smooth and uniformly convex Banach space. Let A_i: E → E (i = 1, 2, …, m) be a finite family of m-d-accretive mappings, {ω_{n,i}}, {η_{n,i}} ⊂ (0,1], {α_{n,i}}, {β_{n,i}} ⊂ [0,1), and {r_{n,i}}, {s_{n,i}} ⊂ (0,+∞), for i = 1, 2, …, m, with Σ_{i=1}^m ω_{n,i} = 1 and Σ_{i=1}^m η_{n,i} = 1. Let D := ∩_{i=1}^m A_i^{−1}0 ≠ ∅. Suppose the normalized duality mapping J: E → E* is weakly sequentially continuous. Let {x_n} be generated by the iterative scheme:

{ x₀ ∈ E, u_n = Σ_{i=1}^m ω_{n,i}[α_{n,i} x_n + (1 − α_{n,i})(I + r_{n,i}A_i)^{−1} x_n], v_n = Σ_{i=1}^m η_{n,i}[β_{n,i} x_n + (1 − β_{n,i})(I + s_{n,i}A_i)^{−1} u_n], H₀ = E, H_{n+1} = {z ∈ H_n : φ(u_n, z) ≤ φ(x_n, z)}, x_{n+1} = R_{H_{n+1}} x₀, n ≥ 0,
(3.1)

Suppose the following conditions are satisfied:

  1. (i)

lim sup_{n→∞} α_{n,i} < 1 and lim sup_{n→∞} β_{n,i} < 1, for i = 1, 2, …, m;

  2. (ii)

lim inf_{n→∞} η_{n,i} > 0 and lim inf_{n→∞} ω_{n,i} > 0, for i = 1, 2, …, m;

  3. (iii)

inf_{n≥0} r_{n,i} > 0 and inf_{n≥0} s_{n,i} > 0, for i = 1, 2, …, m.

Then {x_n} converges strongly to p₀ = R_D x₀, as n → ∞, where R_D is the sunny generalized non-expansive retraction from E onto D.

Proof We split the proof into six steps.

Step 1. { x n } is well defined.

Noticing that

φ(u_n, z) ≤ φ(x_n, z) ⟺ ‖u_n‖² − ‖x_n‖² − 2⟨u_n − x_n, Jz⟩ ≤ 0,

from Lemma 1.1(iii) we can easily see that H_n (n ≥ 0) is a closed subset of E.

For p ∈ D, using (2.2), we know that

φ(u_n, p) ≤ Σ_{i=1}^m ω_{n,i}[α_{n,i} φ(x_n, p) + (1 − α_{n,i}) φ(J_{r_{n,i}}^{A_i} x_n, p)] ≤ Σ_{i=1}^m ω_{n,i}[α_{n,i} φ(x_n, p) + (1 − α_{n,i}) φ(x_n, p)] = φ(x_n, p),

which implies that p ∈ H_{n+1}. Thus D ⊂ H_n, for all n ≥ 0.

Since H n is a nonempty closed subset of E, there exists a unique sunny generalized non-expansive retraction from E onto H n , which is denoted by R H n . Therefore, { x n } is well defined.

Step 2. { x n } is bounded.

Using Lemma 1.3, φ(x_{n+1}, p) ≤ φ(x₀, p), for all p ∈ D ⊂ H_{n+1}. Thus {φ(x_n, p)} is bounded, and then (1.1) ensures that {x_n} is bounded.

Step 3. ω(x_n) ⊂ D, where ω(x_n) denotes the set of all weak limit points of the weakly convergent subsequences of {x_n}.

Since {x_n} is bounded, there exists a subsequence of {x_n}, which for simplicity we still denote by {x_n}, such that x_n ⇀ x, as n → ∞.

Since x_{n+1} ∈ H_{n+1} ⊂ H_n, using Lemma 1.3, we have

φ(x₀, x_n) + φ(x_n, x_{n+1}) ≤ φ(x₀, x_{n+1}),

which implies that {φ(x₀, x_n)} is non-decreasing; since it is also bounded (Step 2), lim_{n→∞} φ(x₀, x_n) exists. Thus φ(x_n, x_{n+1}) → 0, and Lemma 1.4 implies that x_n − x_{n+1} → 0, as n → ∞.

Since x_{n+1} ∈ H_{n+1}, we have φ(u_n, x_{n+1}) ≤ φ(x_n, x_{n+1}) → 0, so Lemma 1.4 gives u_n − x_{n+1} → 0, which together with x_n − x_{n+1} → 0 implies that x_n − u_n → 0, as n → ∞.

For any p ∈ D, using (2.2) again, we have

φ(u_n, p) ≤ Σ_{i=1}^m ω_{n,i}[α_{n,i} φ(x_n, p) + (1 − α_{n,i}) φ(J_{r_{n,i}}^{A_i} x_n, p)] ≤ Σ_{i=1}^m ω_{n,i} α_{n,i} φ(x_n, p) + Σ_{i=1}^m ω_{n,i}(1 − α_{n,i})[φ(x_n, p) − φ(x_n, J_{r_{n,i}}^{A_i} x_n)] = φ(x_n, p) − Σ_{i=1}^m ω_{n,i}(1 − α_{n,i}) φ(x_n, J_{r_{n,i}}^{A_i} x_n).

Then

Σ_{i=1}^m ω_{n,i}(1 − α_{n,i}) φ(x_n, J_{r_{n,i}}^{A_i} x_n) ≤ φ(x_n, p) − φ(u_n, p) = ‖x_n‖² − ‖u_n‖² − 2⟨x_n − u_n, Jp⟩ → 0,

as n → ∞. Lemma 1.4 implies that x_n − J_{r_{n,i}}^{A_i} x_n → 0, as n → ∞, for i = 1, 2, …, m.

The remaining part is similar to Step 5 in the proof of Theorem 2.1, so ω(x_n) ⊂ D.

Step 4. { x n } is a Cauchy sequence.

Suppose, on the contrary, that there exist ε₀ > 0 and two sequences {n_k} and {m_k} of positive integers such that ‖x_{n_k+m_k} − x_{n_k}‖ ≥ ε₀, for all k ≥ 1.

Since lim n φ( x 0 , x n ) exists, using Lemma 1.3 again,

φ(x_{n_k}, x_{n_k+m_k}) ≤ φ(x₀, x_{n_k+m_k}) − φ(x₀, x_{n_k}) = φ(x₀, x_{n_k+m_k}) − lim_{k→∞} φ(x₀, x_{n_k+m_k}) + lim_{k→∞} φ(x₀, x_{n_k}) − φ(x₀, x_{n_k}) → 0,

as k → ∞. Lemma 1.4 implies that lim_{k→∞} ‖x_{n_k+m_k} − x_{n_k}‖ = 0, which is a contradiction. Thus {x_n} is a Cauchy sequence.

Step 5. D is a closed subset of E.

Let {z_n} ⊂ D with z_n → z, as n → ∞. Then A_i z_n = 0, for i = 1, 2, …, m. In view of Lemma 1.1(ii), there exists z_n* ∈ E* such that z_n = J^{−1} z_n*. Using Lemma 1.1(iii), z_n* → Jz, as n → ∞. Since A_i J^{−1} z_n* = 0, z_n* → Jz, and A_i J^{−1} is maximal monotone, Lemma 1.2 ensures that Jz ∈ (A_i J^{−1})^{−1}0. Thus z ∈ A_i^{−1}0, for i = 1, 2, …, m, and then z ∈ D. Therefore, D is a closed subset of E, which ensures that there exists a unique sunny generalized non-expansive retraction R_D from E onto D.

Step 6. x_n → q₀ = R_D x₀, as n → ∞.

Since {x_n} is a Cauchy sequence, there exists q₀ ∈ E such that x_n → q₀, as n → ∞. From Step 5, q₀ ∈ D.

Now, we prove that q 0 = R D x 0 .

Using Lemma 1.3, we have the following two inequalities:

φ(x₀, R_D x₀) + φ(R_D x₀, q₀) ≤ φ(x₀, q₀)
(3.2)

and

φ(x₀, x_n) + φ(x_n, R_D x₀) ≤ φ(x₀, R_D x₀).
(3.3)

Letting n → +∞, from (3.3) we know that

φ(x₀, q₀) + φ(q₀, R_D x₀) ≤ φ(x₀, R_D x₀).
(3.4)

Adding (3.2) and (3.4) gives 0 ≤ φ(R_D x₀, q₀) + φ(q₀, R_D x₀) ≤ 0. Thus φ(R_D x₀, q₀) = 0, and in view of Lemma 1.4, q₀ = R_D x₀.

This completes the proof. □
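In a Hilbert space, φ(u_n, z) ≤ φ(x_n, z) reduces to ‖u_n − z‖ ≤ ‖x_n − z‖, so each H_{n+1} is an intersection of half-spaces and R_{H_{n+1}} is the metric projection. On E = R these half-spaces are half-lines, which makes scheme (3.1) easy to sketch numerically; the following toy example takes m = 1 and the assumed mapping A(t) = t, so A^{−1}0 = {0}:

```python
def scheme31(x0, al=0.2, r=1.0, steps=100):
    """Scheme (3.1) on E = R with m = 1 and A(t) = t, so (I + rA)^(-1)x = x/(1+r).
    Each cut {z : |u_n - z| <= |x_n - z|} is a half-line, and x_{n+1} is the
    metric projection of x_0 onto the intersection [lo, hi] kept so far."""
    lo, hi = float("-inf"), float("inf")   # H_0 = E
    x = x0
    for _ in range(steps):
        u = al * x + (1 - al) * x / (1 + r)
        mid = 0.5 * (u + x)
        if u < x:
            hi = min(hi, mid)   # half-line {z : z <= mid}
        elif u > x:
            lo = max(lo, mid)   # half-line {z : z >= mid}
        x = min(max(x0, lo), hi)
    return x

assert abs(scheme31(4.0)) < 1e-6    # strong convergence to R_D x_0 = 0
assert abs(scheme31(-2.0)) < 1e-6
```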

Remark 3.1 By combining the convex combination and retraction techniques, the strong convergence of iterative scheme (3.1) is obtained.

Remark 3.2 It is obvious that the restrictions in Theorems 2.1 and 3.1 are weaker than those in [5, 6].

4 Connection between nonlinear mappings and nonlinear elliptic equations

We have mentioned that in a Hilbert space, m-d-accretive mappings and m-accretive mappings coincide, while in a non-Hilbertian Banach space they differ. So there are many examples of m-d-accretive mappings in Hilbert spaces. Can we find a mapping that is (m-)d-accretive but not (m-)accretive?

In Section 4.1, we shall review the work done in [12], where an m-accretive mapping is constructed to discuss the existence of solutions of a class of nonlinear elliptic equations.

In Section 4.2, we shall construct an m-d-accretive mapping based on the same nonlinear elliptic equation presented in Section 4.1, from which we can see that it is quite different from the m-accretive mapping defined in Section 4.1.

4.1 m-Accretive mappings and nonlinear elliptic equations

The following nonlinear elliptic boundary value problem is extensively studied in [12, 13]:

{ −div(α(grad u)) + |u|^{p−2} u + g(x, u(x)) = f(x), a.e. in Ω, −⟨ϑ, α(grad u)⟩ ∈ β_x(u(x)), a.e. on Γ.
(4.1)

In (4.1), Ω is a bounded conical domain of the Euclidean space R^N with its boundary Γ ∈ C¹ (see [14]), f ∈ L^s(Ω) is a given function, and ϑ denotes the exterior normal derivative of Γ. g: Ω × R → R is a given function satisfying Carathéodory's conditions such that the mapping u ∈ L^s(Ω) ↦ g(x, u(x)) ∈ L^{s'}(Ω) is well defined, and there exists a function T(x) ∈ L^s(Ω) such that g(x, t)t ≥ 0, for |t| ≥ T(x) and x ∈ Ω. β_x is the subdifferential of a proper, convex, and lower semi-continuous function φ_x. α: R^N → R^N is a monotone and continuous function, and there exist positive constants k₁, k₂, and k₃ such that, for ξ, ξ' ∈ R^N, the following conditions are satisfied:

  1. (i)

|α(ξ)| ≤ k₁ |ξ|^{p−1};

  2. (ii)

|α(ξ) − α(ξ')| ≤ k₂ | |ξ|^{p−2} ξ − |ξ'|^{p−2} ξ' |;

  3. (iii)

⟨α(ξ), ξ⟩ ≥ k₃ |ξ|^p,

where ⟨·, ·⟩ denotes the inner product in R^N.
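A standard candidate for α satisfying (i)–(iii) is α(ξ) = |ξ|^{p−2} ξ (the flux of the p-Laplacian), for which (i) and (iii) hold with k₁ = k₃ = 1 and (ii) holds trivially with k₂ = 1. The following sketch (our illustration, not taken from [12, 13]) checks (i), (iii), and the monotonicity of α numerically on R³:

```python
import math
import random

def alpha(xi, p):
    """alpha(xi) = |xi|^(p-2) * xi on R^N, with alpha(0) = 0."""
    n = math.sqrt(sum(t * t for t in xi))
    return [0.0] * len(xi) if n == 0.0 else [n ** (p - 2) * t for t in xi]

p = 1.8
random.seed(1)
for _ in range(100):
    xi = [random.uniform(-2, 2) for _ in range(3)]
    et = [random.uniform(-2, 2) for _ in range(3)]
    ax, ae = alpha(xi, p), alpha(et, p)
    nxi = math.sqrt(sum(t * t for t in xi))
    # (i) with k1 = 1:  |alpha(xi)| <= |xi|^(p-1)
    assert math.sqrt(sum(t * t for t in ax)) <= nxi ** (p - 1) + 1e-9
    # (iii) with k3 = 1: <alpha(xi), xi> >= |xi|^p
    assert sum(a * t for a, t in zip(ax, xi)) >= nxi ** p - 1e-9
    # monotonicity: <alpha(xi) - alpha(eta), xi - eta> >= 0
    assert sum((a - b) * (u - v) for a, b, u, v in zip(ax, ae, xi, et)) >= -1e-9
```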

In [12], the authors first present the following definitions.

Definition 4.1 [12]

Define the mapping B_p: W^{1,p}(Ω) → (W^{1,p}(Ω))* by

⟨v, B_p u⟩ = ∫_Ω ⟨α(grad u), grad v⟩ dx + ∫_Ω |u(x)|^{p−2} u(x) v(x) dx,

for any u, v ∈ W^{1,p}(Ω).

Definition 4.2 [12]

Define the mapping Φ_p: W^{1,p}(Ω) → R by Φ_p(u) = ∫_Γ φ_x(u|_Γ(x)) dΓ(x), for any u ∈ W^{1,p}(Ω).

Definition 4.3 [12]

Define a mapping A: L²(Ω) → 2^{L²(Ω)} as follows:

D(A) = {u ∈ L²(Ω) : there exists an f ∈ L²(Ω) such that f ∈ B_p u + ∂Φ_p(u)}.

For u ∈ D(A), Au = {f ∈ L²(Ω) : f ∈ B_p u + ∂Φ_p(u)}.

Definition 4.4 [12]

Define a mapping A_s: L^s(Ω) → 2^{L^s(Ω)} as follows:

  1. (i)

    If s ≥ 2, then

    D(A_s) = {u ∈ L^s(Ω) : there exists an f ∈ L^s(Ω) such that f ∈ B_p u + ∂Φ_p(u)}.

For u ∈ D(A_s), we set A_s u = {f ∈ L^s(Ω) : f ∈ B_p u + ∂Φ_p(u)}.

  2. (ii)

    If 1 < s < 2, then define A_s: L^s(Ω) → 2^{L^s(Ω)} as the L^s-closure of A: L²(Ω) → 2^{L²(Ω)} defined in Definition 4.3.

They then obtain the following important result in [12].

Proposition 4.1 [12]

Both A and A_s are m-accretive mappings.

Later, by using perturbation results on the ranges of m-accretive mappings, a sufficient condition for the existence of solutions of (4.1) is discussed.

Theorem 4.1 [12]

If f ∈ L^s(Ω) (2N/(N+1) < p ≤ s < +∞) satisfies the following condition:

∫_Γ β^−(x) dΓ(x) + ∫_Ω g^−(x) dx < ∫_Ω f(x) dx < ∫_Γ β^+(x) dΓ(x) + ∫_Ω g^+(x) dx,

then (4.1) has a solution in L^s(Ω).

The meanings of β^−(x), β^+(x), g^−(x), and g^+(x) are given in the following two definitions.

Definition 4.5 [12, 14]

For t ∈ R and x ∈ Γ, let β_x^0(t) ∈ β_x(t) denote the element with least absolute value if β_x(t) ≠ ∅, and set β_x^0(t) = ±∞, for t > 0 or t < 0, respectively, in the case β_x(t) = ∅. Finally, let β^±(x) = lim_{t→±∞} β_x^0(t) (in the extended sense) for x ∈ Γ. Then β^±(x) define measurable functions on Γ.

Definition 4.6 [12, 14]

Define g^+(x) = lim inf_{t→+∞} g(x, t) and g^−(x) = lim sup_{t→−∞} g(x, t).

4.2 Examples of m-d-accretive mappings

Now, based on the nonlinear elliptic problem (4.1), we are ready to give an example of an m-d-accretive mapping.

Lemma 4.1 [10]

Let E be a Banach space. If B: E → E* is an everywhere defined, monotone, and hemi-continuous mapping, then B is maximal monotone.

Definition 4.7 Let 1 < p ≤ 2 and 1/p + 1/p' = 1.

Define the mapping B: W^{1,p'}(Ω) → (W^{1,p'}(Ω))* by

⟨v, Bu⟩ = ∫_Ω ⟨α(grad(|u|^{p'−1} sgn u · ‖u‖_{p'}^{2−p'})), grad(|v|^{p'−1} sgn v · ‖v‖_{p'}^{2−p'})⟩ dx,

for any u, v ∈ W^{1,p'}(Ω), where ‖·‖_{p'} denotes the norm of L^{p'}(Ω).

Proposition 4.2 B: W^{1,p'}(Ω) → (W^{1,p'}(Ω))* (1 < p ≤ 2) is maximal monotone.

Proof We split the proof into four steps.

Step 1. B is everywhere defined.

In fact, for u, v ∈ W^{1,p'}(Ω), from property (i) of α, grad(|u|^{p'−1} sgn u) = (p'−1)|u|^{p'−2} grad u, and the identities p'(p−1) = p and (p'−2)(p−1) = 2−p, we have

|⟨v, Bu⟩| ≤ k₁ ∫_Ω |grad(|u|^{p'−1} sgn u · ‖u‖_{p'}^{2−p'})|^{p−1} |grad(|v|^{p'−1} sgn v · ‖v‖_{p'}^{2−p'})| dx = k₁(p'−1)^p ‖u‖_{p'}^{(p−1)(2−p')} ‖v‖_{p'}^{2−p'} ∫_Ω |grad u|^{p−1} |u|^{2−p} |grad v| |v|^{p'−2} dx ≤ k₁(p'−1)^p ‖u‖_{p'}^{(p−1)(2−p')} ‖v‖_{p'}^{2−p'} (∫_Ω |grad u|^p |u|^{p'−p} dx)^{1/p'} × (∫_Ω |grad v|^p |v|^{p'−p} dx)^{1/p} ≤ k₁(p'−1)^p ‖u‖_{p'}^{(p−1)(2−p')} ‖v‖_{p'}^{2−p'} (∫_Ω |grad u|^{p'} dx)^{p/(p')²} (∫_Ω |u|^{p'} dx)^{(p'−p)/(p')²} × (∫_Ω |grad v|^{p'} dx)^{1/p'} (∫_Ω |v|^{p'} dx)^{(p'−p)/(pp')} ≤ const · ‖u‖_{1,p'}^{p−1} ‖v‖_{1,p'},

where Hölder's inequality has been applied twice.

Thus B is everywhere defined.

Step 2. B is monotone.

Since α is monotone, for u, v ∈ D(B),

⟨u − v, Bu − Bv⟩ = ∫_Ω ⟨α(grad(|u|^{p'−1} sgn u · ‖u‖_{p'}^{2−p'})) − α(grad(|v|^{p'−1} sgn v · ‖v‖_{p'}^{2−p'})), grad(|u|^{p'−1} sgn u · ‖u‖_{p'}^{2−p'}) − grad(|v|^{p'−1} sgn v · ‖v‖_{p'}^{2−p'})⟩ dx ≥ 0,

which implies that B is monotone.

Step 3. B is hemi-continuous.

To show that B is hemi-continuous, it suffices to prove that, for u, v, w ∈ W^{1,p'}(Ω) and t ∈ [0,1], ⟨w, B(u + tv) − Bu⟩ → 0, as t → 0.

In fact, since α is continuous,

|⟨w, B(u + tv) − Bu⟩| ≤ ∫_Ω |α(grad(|u + tv|^{p'−1} sgn(u + tv) · ‖u + tv‖_{p'}^{2−p'})) − α(grad(|u|^{p'−1} sgn u · ‖u‖_{p'}^{2−p'}))| × |grad(|w|^{p'−1} sgn w · ‖w‖_{p'}^{2−p'})| dx → 0,

as t → 0.

Step 4. B is maximal monotone.

Lemma 4.1 implies that B is maximal monotone.

This completes the proof. □

Remark 4.1 [10]

There exists a maximal monotone extension of B from L^{p'}(Ω) to L^p(Ω), which is denoted by B̃.

Definition 4.8 For 1 < p ≤ 2 and 1/p + 1/p' = 1, the normalized duality mapping J: L^{p'}(Ω) → L^p(Ω) is defined by

Ju = |u|^{p'−1} sgn u · ‖u‖_{p'}^{2−p'},

for u ∈ L^{p'}(Ω).

Define the mapping A: L^p(Ω) → L^p(Ω) (1 < p ≤ 2) as follows:

Au = B̃ J^{−1} u, ∀u ∈ L^p(Ω).
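The duality-mapping formula above can be checked numerically. For any Lebesgue space L^q (1 < q < ∞) with conjugate exponent q', the normalized duality mapping is Ju = |u|^{q−1} sgn(u)·‖u‖_q^{2−q}, and it satisfies the defining identities ⟨u, Ju⟩ = ‖u‖_q² and ‖Ju‖_{q'} = ‖u‖_q. A sketch on a uniform grid of [0, 1], with the assumed exponent q = 2.5:

```python
import math

def J(u, dx, q):
    """Discretized normalized duality mapping of L^q on a uniform grid:
    (Ju)(x) = |u(x)|^(q-1) * sgn(u(x)) * ||u||_q^(2-q)."""
    nq = (sum(abs(t) ** q for t in u) * dx) ** (1.0 / q)
    return [abs(t) ** (q - 1) * math.copysign(1.0, t) * nq ** (2.0 - q) for t in u]

q = 2.5
qc = q / (q - 1.0)                      # conjugate exponent q'
n = 1000
dx = 1.0 / n
u = [math.sin(7.0 * (i + 0.5) * dx) + 0.3 for i in range(n)]  # sample function

ju = J(u, dx, q)
norm_u = (sum(abs(t) ** q for t in u) * dx) ** (1.0 / q)
norm_ju = (sum(abs(t) ** qc for t in ju) * dx) ** (1.0 / qc)
pairing = sum(a * b for a, b in zip(u, ju)) * dx

assert abs(pairing - norm_u ** 2) < 1e-8   # <u, Ju> = ||u||^2
assert abs(norm_ju - norm_u) < 1e-8        # ||Ju||_{q'} = ||u||_q
```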

Proposition 4.3 The mapping A: L^p(Ω) → L^p(Ω) (1 < p ≤ 2) is m-d-accretive.

Proof Note that J^{−1}: L^p(Ω) → L^{p'}(Ω) is the normalized duality mapping of L^p(Ω). Since B̃ is monotone, for u, v ∈ D(A),

⟨Au − Av, J^{−1}u − J^{−1}v⟩ = ⟨B̃J^{−1}u − B̃J^{−1}v, J^{−1}u − J^{−1}v⟩ ≥ 0.

Thus A is d-accretive.

In view of Remark 4.1, B̃ is maximal monotone, so R(J + λB̃) = L^p(Ω), for all λ > 0.

For f ∈ L^p(Ω), there exists u ∈ L^{p'}(Ω) such that Ju + λB̃u = f. Using Lemma 1.1 again, there exists u* ∈ L^p(Ω) such that u = J^{−1}u*. Then u* + λB̃J^{−1}u* = f. Thus f ∈ R(I + λA), and then R(I + λA) = L^p(Ω), for all λ > 0.

Thus A is m-d-accretive.

This completes the proof. □

Proposition 4.4 A^{−1}0 = {u ∈ L^p(Ω) : u(x) ≡ const}.

Proof It is obvious that {u ∈ L^p(Ω) : u(x) ≡ const} ⊂ A^{−1}0.

On the other hand, if u ∈ A^{−1}0, then Au = 0. Let u* ∈ L^{p'}(Ω) be such that u = Ju*. From property (iii) of α, we have

0 = ⟨u*, A J u*⟩ ≥ k₃ ∫_Ω |grad(|u*|^{p'−1} sgn u* · ‖u*‖_{p'}^{2−p'})|^p dx = k₃ ∫_Ω |grad(Ju*)|^p dx,

which implies that u = Ju* ≡ const.

Thus A^{−1}0 ⊂ {u ∈ L^p(Ω) : u(x) ≡ const}.

This completes the proof. □

Remark 4.2 From Propositions 4.3 and 4.4, we know that the restriction in Theorems 2.1 and 3.1 on the m-d-accretive mappings that A^{−1}0 ≠ ∅ is valid for this example.

Remark 4.3 If (4.1) is reduced to the following:

−div(α(grad u)) = 0, a.e. in Ω,
(4.2)

then it is not difficult to see that u ∈ A^{−1}0 is exactly a solution of (4.2), from which we can not only see the connection between the zeros of an m-d-accretive mapping and the nonlinear equation but also see that the work on designing iterative schemes to approximate zeros of nonlinear mappings is meaningful.