1 Introduction

Let H be a real inner product space. A map \(A:H\rightarrow2^{H}\) is called monotone if for each \(x,y\in H\),

$$ \langle\eta-\nu,x- y \rangle\geq0\quad \forall\eta\in Ax, \nu \in Ay. $$
(1.1)

Monotone mappings were first studied in Hilbert spaces by Zarantonello [1], Minty [2], Kačurovskii [3] and a host of other authors. Interest in such mappings stems mainly from their usefulness in applications. In particular, monotone mappings appear in convex optimization theory. Consider, for example, the following. Let \(g:H\rightarrow\mathbb{R}\cup\{\infty\}\) be a proper convex function. The subdifferential of g, \(\partial g:H\rightarrow2^{H}\), is defined for each \(x\in H\) by

$$\partial g(x)= \bigl\{ x^{*}\in H:g(y)-g(x)\geq \langle y-x,x^{*} \rangle \ \forall y\in H \bigr\} . $$

It is easy to check that ∂g is a monotone operator on H, and that \(0\in\partial g(u)\) if and only if u is a minimizer of g. Setting \(\partial g\equiv A\), it follows that solving the inclusion \(0\in Au\), in this case, is solving for a minimizer of g.
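
Indeed, if \(x^{*}\in\partial g(x)\) and \(y^{*}\in\partial g(y)\), then \(g(y)-g(x)\geq \langle y-x,x^{*} \rangle\) and \(g(x)-g(y)\geq \langle x-y,y^{*} \rangle\); adding these two inequalities gives

$$\bigl\langle x^{*}-y^{*},x-y \bigr\rangle \geq0, $$

which is precisely inequality (1.1) for \(A=\partial g\).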

Furthermore, the inclusion \(0\in Au\), where A is a monotone map from a real Hilbert space to itself, also arises in evolution systems. Consider the evolution equation \(\frac{du}{dt} + Au=0\). At an equilibrium state, \(\frac{du}{dt}=0\), so that \(Au=0\); solutions of this equation correspond to the equilibrium states of the dynamical system.

In particular, consider the following diffusion equation:

$$ \left \{ \textstyle\begin{array}{l} \frac{\partial u}{\partial t}(t,x) = \bigtriangleup u(t,x)+g(u(t,x)),\quad t\geq0, x\in\Omega, \\ u(t,x) = 0,\quad t\geq0, x\in\partial\Omega, \\ u(0,x) = u_{0}(x), \quad u_{0}\in L_{2}(\Omega), \end{array}\displaystyle \right . $$
(1.2)

where Ω is an open subset of \({\mathbb{R}}^{n}\).

By a simple transformation, i.e., by setting \(v(t)=u(t,\cdot)\), where

$$v:[0, \infty)\rightarrow {L_{2}(\Omega)} $$

is defined by \(v(t)(x)=u(t,x)\) and \(f(\varphi)(x)=g(\varphi(x))\), where

$$f:L_{2}(\Omega)\rightarrow {L_{2}(\Omega)}, $$

we see that equation (1.2) is equivalent to

$$ \left \{ \textstyle\begin{array}{l} v'(t) =Av(t)+f(v(t)), \quad t\geq0, \\ v(0) =u_{0}, \end{array}\displaystyle \right . $$
(1.3)

where A is a nonlinear monotone-type mapping defined on \(L_{2}(\Omega )\). Setting f identically zero, we see that at an equilibrium state (i.e., when the system becomes independent of time) equation (1.3) reduces to

$$ Au=0. $$
(1.4)

Thus, approximating solutions of equation (1.4) is equivalent to approximating solutions of the diffusion equation (1.2) at an equilibrium state.

The notion of monotone mapping has been extended to real normed spaces. We now briefly examine two well-studied extensions of Hilbert space monotonicity to arbitrary normed spaces.

1.1 Accretive-type mappings

Let E be a real normed space with dual space \(E^{*}\). A map \(J:E\rightarrow2^{E^{*}}\) defined by

$$Jx:= \bigl\{ x^{*}\in E^{*}: \langle x,x^{*} \rangle =\Vert x\Vert \cdot\bigl\Vert x^{*}\bigr\Vert , \Vert x\Vert =\bigl\Vert x^{*}\bigr\Vert \bigr\} $$

is called the normalized duality map on E. If E is uniformly convex and uniformly smooth, then \(J^{-1}=J^{*}\), the normalized duality map on \(E^{*}\), and \(JJ^{*}=I_{E^{*}}\), \(J^{*}J =I_{E}\), where \(I_{E}\) and \(I_{E^{*}}\) are the identity mappings on E and \(E^{*}\), respectively.

A map \(A:E\rightarrow2^{E}\) is called accretive if for each \(x,y\in E\), there exists \(j(x-y)\in J(x-y)\) such that

$$ \bigl\langle \eta-\nu,j(x- y) \bigr\rangle \geq0\quad \forall\eta\in Ax, \nu\in Ay. $$
(1.5)

A is called m-accretive if it is accretive and \(R(I+tA)=E\) for all \(t>0\). Every m-accretive operator is maximal accretive, that is, its graph is not properly contained in the graph of any other accretive operator.

In a Hilbert space, the normalized duality map is the identity map, and so, in this case, inequality (1.5) and inequality (1.1) coincide. Hence, accretivity is one extension of Hilbert space monotonicity to general normed spaces.

Accretive operators have been studied extensively by numerous mathematicians (see, e.g., the following monographs: Berinde [4], Browder [5], Chidume [6], Reich [7], and the references therein).

1.2 Monotone-type mappings in arbitrary normed spaces

Let E be a real normed space with dual \(E^{*}\). A map \(A:E\rightarrow 2^{E^{*}}\) is called monotone if for each \(x,y\in E\), the following inequality holds:

$$ \langle\eta-\nu,x- y \rangle\geq0 \quad \forall\eta\in Ax, \nu \in Ay. $$
(1.6)

It is called maximal monotone if, in addition, the graph of A is not properly contained in the graph of any other monotone operator. Also, A is maximal monotone if and only if it is monotone and \(R(J+tA)=E^{*}\) for all \(t>0\).

It is obvious that monotonicity of a map defined from a normed space to its dual is another extension of Hilbert space monotonicity to general normed spaces.

The extension of the monotonicity condition from a Banach space into its dual has been the starting point for the development of nonlinear functional analysis…. The monotone mappings appear in a rather wide variety of contexts, since they can be found in many functional equations. Many of them appear also in calculus of variations, as subdifferential of convex functions (Pascali and Sburian [8], p.101).

Accretive mappings were introduced independently in 1967 by Browder [5] and Kato [9]. Interest in such mappings stems mainly from their firm connection with the existence theory for nonlinear equations of evolution in real Banach spaces. It is known (see, e.g., Zeidler [10]) that many physically significant problems can be modeled in terms of an initial-value problem of the form

$$ 0\in\frac{du}{dt}+Au,\qquad u(0)=u_{0}, $$
(1.7)

where A is a multi-valued accretive map on an appropriate real Banach space. Typical examples of such evolution equations are found in models involving the heat, wave or Schrödinger equations (see, e.g., Browder [11], Zeidler [10]). Observe that in the model (1.7), if the solution u is independent of time (i.e., at the equilibrium state of the system), then \(\frac{du}{dt} = {0}\) and (1.7) reduces to

$$ 0\in Au $$
(1.8)

whose solutions then correspond to the equilibrium state of the system described by (1.7). Solutions of equation (1.8) can also represent solutions of partial differential equations (see, e.g., Benilan et al. [12], Khatibzadeh and Moroşanu [13], Khatibzadeh and Shokri [14], Showalter [15], Volpert [16], and so on).

In studying the equation \(0\in Au\), where A is a multi-valued accretive operator on a Hilbert space H, Browder introduced an operator T defined by \(T:= I-A\) where I is the identity map on H. He called such an operator pseudocontractive. It is clear that solutions of \(0\in Au\), if they exist, correspond to fixed points of T.
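
Indeed, for \(u\in H\), \(0\in Au\) if and only if \(u\in u-Au=Tu\); thus the zeros of A, when they exist, are precisely the fixed points of T.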

Within the past 35 years or so, methods for approximating solutions of equation (1.8) when A is an accretive-type operator have become a flourishing area of research for numerous mathematicians. Numerous convergence theorems have been published in various Banach spaces and under various continuity assumptions. Many important results have been proved, thanks to geometric properties of Banach spaces developed from the mid-1980s to the early 1990s. The theory of approximation of solutions of the equation when A is of accretive type has reached a level of maturity appropriate for an examination of its central themes. This resulted in the publication of several monographs which presented in-depth coverage of the main ideas, concepts, and most important results on iterative algorithms for approximation of fixed points of nonexpansive and pseudocontractive mappings and their generalizations; approximation of zeros of accretive-type operators; iterative algorithms for solutions of Hammerstein integral equations involving accretive-type mappings; iterative approximation of common fixed points (and common zeros) of families of these mappings; solutions of equilibrium problems; and so on (see, e.g., Agarwal et al. [17]; Berinde [4]; Chidume [6]; Reich [18]; Censor and Reich [19]; William and Shahzad [20], and the references therein). Typical of the results proved for solutions of equation (1.8) is the following theorem.

Theorem 1.1

(Chidume [21])

Let E be a uniformly smooth real Banach space with modulus of smoothness \(\rho_{E}\), and let \(A:E\rightarrow2^{E}\) be a multi-valued bounded m-accretive operator with \(D(A)=E\) such that the inclusion \(0\in Au\) has a solution. For arbitrary \(x_{1}\in E\), define a sequence \(\{x_{n}\}\) by

$$x_{n+1} = x_{n}-\lambda_{n}u_{n}- \lambda_{n}\theta_{n}(x_{n}-x_{1}), \quad u_{n}\in Ax_{n}, n\geq1, $$

where \(\{\lambda_{n}\}\) and \(\{\theta_{n}\}\) are sequences in \((0,1)\) satisfying the following conditions:

  1. (i)

    \(\lim_{n\rightarrow\infty}\theta_{n} =0\), \(\{\theta_{n}\}\) is decreasing;

  2. (ii)

    \(\sum\lambda_{n}\theta_{n} = \infty\); \(\sum\rho_{E}(\lambda_{n}M_{1})<\infty\), for some constant \(M_{1} > 0\);

  3. (iii)

    \(\lim_{n\rightarrow\infty} \frac{ [\frac{\theta _{n-1}}{\theta _{n}}-1 ]}{\lambda_{n}\theta_{n}}=0\).

Assume, furthermore, that there exists a constant \(\gamma_{0} > 0\) such that \(\frac{\rho _{E}(\lambda_{n})}{\lambda_{n}}\leq\gamma_{0}\theta_{n}\). Then the sequence \(\{x_{n}\}\) converges strongly to a zero of A.
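
To illustrate how the recursion of Theorem 1.1 behaves, the following is a minimal numerical sketch (not the authors' code). It assumes \(E=\mathbb{R}^{d}\) with the Euclidean norm, so that E is a Hilbert space and accretivity coincides with monotonicity, and it uses a hypothetical single-valued test operator \(Ax=Mx\) with M symmetric positive semidefinite; the step sizes \(\lambda_{n}=(n+1)^{-0.55}\), \(\theta_{n}=(n+1)^{-0.4}\) are one choice consistent with conditions (i)-(iii) in this Hilbert-space setting.

```python
import numpy as np

# Minimal numerical sketch of the iteration in Theorem 1.1 (illustrative only,
# not the authors' code).  Assumptions: E = R^d with the Euclidean norm (a
# Hilbert space, where accretivity coincides with monotonicity), and a
# hypothetical single-valued test operator A(x) = M x with M symmetric
# positive semidefinite, so that the inclusion 0 in Au has a solution.
rng = np.random.default_rng(1)
d = 4
G = rng.standard_normal((d, d))
M = G @ G.T
M /= np.linalg.norm(M, 2)            # normalize so the spectrum lies in (0, 1]
A = lambda x: M @ x                  # monotone: <Ax - Ay, x - y> = (x - y)^T M (x - y) >= 0

x1 = rng.standard_normal(d)          # x_1, also the anchor in the term (x_n - x_1)
x = x1.copy()
for n in range(1, 20001):
    lam = (n + 1.0) ** (-0.55)       # lambda_n
    theta = (n + 1.0) ** (-0.4)      # theta_n, decreasing to 0
    x = x - lam * A(x) - lam * theta * (x - x1)

print(np.linalg.norm(A(x)))          # the residual ||A x_n|| decreases (slowly) toward 0
```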

Unfortunately, developing algorithms for approximating solutions of equations of type (1.8) when \(A:E\rightarrow2^{E^{*}}\) is of monotone type has not been very fruitful. Part of the difficulty seems to be that all efforts made to apply directly the geometric properties of Banach spaces developed from the mid-1980s to the early 1990s proved abortive. Furthermore, the technique of converting the inclusion (1.8) into a fixed point problem for \(T:= I-A : E\rightarrow E\) is not applicable since, when A is monotone, A maps E into \(E^{*}\), so that the difference \(I-A\) is not defined.

Fortunately, Alber [22] (see also Alber and Ryazantseva [23]) recently introduced a Lyapunov functional \(\phi:E\times E\rightarrow\mathbb{R}\), which signaled the beginning of the development of new geometric properties of Banach spaces that are appropriate for studying iterative methods for approximating solutions of (1.8) when \(A:E\rightarrow2^{E^{*}}\) is of monotone type. The geometric properties obtained so far have rekindled enormous research interest in iterative methods for approximating solutions of equation (1.8) where A is of monotone type, and other related problems (see, e.g., Alber [22]; Alber and Guerre-Delabriere [24]; Chidume [21, 25]; Chidume et al. [26]; Diop et al. [27]; Moudafi [28]; Moudafi and Tera [29]; Reich [30]; Sow et al. [31]; Takahashi [32]; Zegeye [33] and the references therein).

It is our purpose in this paper to apply the notion of J-fixed points (which have also been called semi-fixed points (see, e.g., Zegeye [33]) and duality fixed points (see, e.g., Liu [34])) and, as far as we know, a new class of mappings introduced here, called J-pseudocontractive maps, to prove that \(T:=(J-A)\) is J-pseudocontractive if and only if A is monotone. Moreover, in the case that E is a uniformly convex and uniformly smooth real Banach space with dual \(E^{*}\), \(T: E\rightarrow2^{E^{*}}\) is a bounded J-pseudocontractive map with a nonempty J-fixed point set, and \(J-T :E\rightarrow2^{E^{*}}\) is maximal monotone, a sequence is constructed which converges strongly to a J-fixed point of T. As an immediate application of this result, an analog of Theorem 1.1 for bounded maximal monotone maps is obtained, which also complements the proximal point algorithm of Martinet [35] and Rockafellar [36], studied by numerous authors (see, e.g., Bruck [37]; Chidume [38]; Chidume [21]; Chidume and Djitte [39]; Kamimura and Takahashi [40]; Lehdili and Moudafi [41]; Reich [42]; Reich and Sabach [43, 44]; Solodov and Svaiter [45]; Xu [46] and the references therein). Furthermore, this analog is applied to approximate solutions of Hammerstein integral equations and to convex optimization problems. Finally, our techniques of proof are of independent interest.

2 Preliminaries

Let E be a real normed linear space of dimension ≥2. The modulus of smoothness of E, \(\rho_{E}:[0,\infty )\rightarrow[0,\infty)\), is defined by

$$\rho_{E}(\tau):= \sup \biggl\{ \frac{\| x+y\| +\| x-y\|}{2}-1: \| x\| =1, \| y\| = \tau \biggr\} . $$

A normed linear space E is called uniformly smooth if

$$\lim_{\tau\rightarrow0}\frac{\rho_{E}(\tau)}{\tau} = 0. $$

It is well known (see, e.g., Chidume [6], p.16, also Lindenstrauss and Tzafriri [47]) that \(\rho_{E}\) is nondecreasing. If there exist a constant \(c>0\) and a real number \(q>1\) such that \(\rho_{E}(\tau)\leq c\tau^{q}\), then E is said to be q-uniformly smooth. Typical examples of such spaces are the \(L_{p}\), \(\ell_{p}\), and \(W^{m}_{p}\) spaces for \(1< p<\infty\) where

$$L_{p}\ (\mbox{or }l_{p}) \mbox{ or } W^{m}_{p} \mbox{ is } \left \{ \textstyle\begin{array}{l@{\quad}l} 2\text{-uniformly smooth} & \text{if } 2\leq p< \infty; \\ p\text{-uniformly smooth} & \text{if } 1< p< 2. \end{array}\displaystyle \right . $$

A Banach space E is said to be strictly convex if

$$\|x\|=\|y\|=1, \qquad x\ne y \quad \Longrightarrow\quad \biggl\Vert \frac{x+y}{2} \biggr\Vert < 1. $$

The modulus of convexity of E is the function \(\delta _{E}:(0,2]\rightarrow[0,1]\) defined by

$$\delta_{E}(\epsilon):=\inf \biggl\{ 1- \biggl\| \frac{x+y}{2} \biggr\| :\|x\| = \|y\| =1; \epsilon=\|x-y\| \biggr\} . $$

The space E is uniformly convex if and only if \(\delta _{E}(\epsilon)>0\) for every \(\epsilon\in(0,2]\). It is also well known (see e.g., Chidume [6], p.34, Lindenstrauss and Tzafriri [47]) that \(\delta_{E}\) is nondecreasing. If there exist a constant \(c>0\) and a real number \(p>1\) such that \(\delta_{E}(\epsilon)\ge c\epsilon^{p}\), then E is said to be p-uniformly convex. Typical examples of such spaces are the \(L_{p}\), \(\ell_{p}\), and \(W^{m}_{p}\) spaces for \(1< p<\infty\) where

$$L_{p}\ (\mbox{or }l_{p})\mbox{ or }W^{m}_{p} \mbox{ is } \left \{ \textstyle\begin{array}{l@{\quad}l} p\text{-uniformly convex} & \text{if } 2\leq p< \infty; \\ 2\text{-uniformly convex} & \text{if } 1< p< 2. \end{array}\displaystyle \right . $$

The norm of E is said to be Fréchet differentiable if, for each \(x\in S:= \{u\in E: \|u\|=1\}\),

$$\lim_{t\rightarrow0}\frac{\|x+ty\|-\|x\|}{t} $$

exists and is attained uniformly for \(y\in S\).

For \(q>1\), let \(J_{q}\) denote the generalized duality mapping from E to \(2^{E^{\ast}}\) defined by

$$J_{q}(x):= \bigl\{ f\in E^{\ast}: \langle x, f\rangle =\Vert x \Vert^{q} \text{ and } \Vert f \Vert=\Vert x\Vert^{q-1} \bigr\} , $$

where \(\langle\cdot,\cdot\rangle\) denotes the generalized duality pairing. \(J_{2}\) is called the normalized duality mapping and is denoted by J. It is well known that if E is smooth, then \(J_{q}\) is single-valued.

In the sequel, we shall need the following definitions and results. Let E be a smooth real Banach space with dual \(E^{*}\). The Lyapunov functional \(\phi:E\times E\to\mathbb{R}\), defined by

$$ \phi(x,y)=\|x\|^{2}-2\langle x,Jy\rangle+\|y \|^{2} \quad \text{for } x,y\in E, $$
(2.1)

where J is the normalized duality mapping from E into \(E^{*}\), will play a central role in the sequel. It was introduced by Alber and has been studied by Alber [22], Alber and Guerre-Delabriere [24], Kamimura and Takahashi [48], Reich [18], and a host of other authors. If \(E=H\), a real Hilbert space, then equation (2.1) reduces to \(\phi(x,y)=\|x-y\|^{2}\) for \(x,y\in H\). It is obvious from the definition of the function ϕ that

$$ \bigl(\Vert x\Vert -\Vert y\Vert \bigr)^{2}\leq \phi(x,y)\leq\bigl(\Vert x\Vert +\Vert y\Vert \bigr)^{2} \quad \text{for } x,y\in E. $$
(2.2)

Define a map \(V:X\times X^{*}\to\mathbb{R}\) by

$$ V\bigl(x,x^{*}\bigr)=\Vert x\Vert ^{2}-2\bigl\langle x,x^{*}\bigr\rangle +\bigl\Vert x^{*}\bigr\Vert ^{2}\quad \text{for }x\in X, x^{*}\in X^{*}. $$
(2.3)

Then it is easy to see that

$$ V\bigl(x,x^{*}\bigr)=\phi\bigl(x,J^{-1}\bigl(x^{*}\bigr)\bigr)\quad \forall x\in X, x^{*}\in X^{*}. $$
(2.4)

Lemma 2.1

(Alber and Ryazantseva [23])

Let X be a reflexive strictly convex and smooth Banach space with \(X^{*}\) as its dual. Then

$$ V\bigl(x,x^{*}\bigr)+2\bigl\langle J^{-1}x^{*}-x,y^{*}\bigr\rangle \leq V\bigl(x,x^{*}+y^{*}\bigr)$$
(2.5)

for all \(x\in X\) and \(x^{*},y^{*}\in X^{*}\).

Lemma 2.2

(Alber and Ryazantseva [23], p.50)

Let X be a reflexive strictly convex and smooth Banach space with \(X^{*}\) as its dual. Let \(W:X\times X\rightarrow\mathbb{R}^{1}\) be defined by \(W(x,y)=\frac {1}{2}\phi(y,x)\). Then

$$ W(x,y)-W(z,y)\ge\langle Jx-Jz, z-y\rangle, $$

i.e.,

$$\phi(y,x) - \phi(y,z)\ge2\langle Jx-Jz, z-y\rangle, $$

and also

$$ W(x,y)\le\langle Jx-Jy, x-y\rangle $$

for all \(x, y, z\in X\).

Lemma 2.3

(Alber and Ryazantseva [23], p.45)

Let X be a uniformly convex Banach space. Then, for any \(R>0\) and any \(x, y\in X\) such that \(\|x\|\le R\), \(\|y\|\le R\), the following inequality holds:

$$\langle Jx-Jy,x-y\rangle\ge(2L)^{-1}\delta_{X} \bigl(c_{2}^{-1}\Vert x-y\Vert \bigr), $$

where \(c_{2}=2\max\{1,R\}\), \(1< L<1.7\).

Define

$$ K:=4RL\sup\bigl\{ \Vert Jx-Jy\Vert : \Vert x\Vert \le R, \Vert y\Vert \le R\bigr\} +1. $$
(2.6)

Lemma 2.4

(Alber and Ryazantseva [23], p.46)

Let X be a uniformly smooth and strictly convex Banach space. Then for any \(R>0\) and any \(x, y\in X\) such that \(\|x\|\le R\), \(\|y\|\le R\) the following inequality holds:

$$\langle Jx-Jy,x-y\rangle\ge(2L)^{-1}\delta_{X^{*}} \bigl(c_{2}^{-1}\Vert Jx-Jy\Vert \bigr), $$

where \(c_{2}=2\max\{1,R\}\), \(1< L<1.7\).

Let \(E^{*}\) be a real strictly convex dual Banach space with a Fréchet differentiable norm. Let \(A:E\rightarrow2^{E^{*}}\) be a maximal monotone operator. Let \(z\in E^{*}\) be fixed. Then for every \(\lambda>0\), there exists a unique \(x_{\lambda}\in E\) such that \(Jx_{\lambda}+\lambda Ax_{\lambda}\ni z\) (see Reich [7], p. 342). Setting \(J_{\lambda}z=x_{\lambda}\), we have the resolvent \(J_{\lambda}:=(J+\lambda A)^{-1} :E^{*}\rightarrow E\) of A for every \(\lambda>0\). The following is a celebrated result of Reich.

Lemma 2.5

(Reich, [7]; see also, Kido, [49])

Let \(E^{*}\) be a strictly convex dual Banach space with a Fréchet differentiable norm, and let A be a maximal monotone operator from E to \(E^{*}\) such that \(A^{-1}0\ne\emptyset\). Let \(z\in E^{*}\) be arbitrary but fixed. For each \(\lambda>0\) there exists a unique \(x_{\lambda}\in E\) such that \(Jx_{\lambda}+ \lambda Ax_{\lambda}\ni z\). Furthermore, \(x_{\lambda}\) converges strongly, as \(\lambda\rightarrow\infty\), to a unique \(p\in A^{-1}0\).

Lemma 2.6

From Lemma  2.5, setting \(\lambda_{n}:=\frac{1}{\theta_{n}}\) where \(\theta_{n} \rightarrow0\) as \(n\rightarrow\infty\), \(z=Jv\) for some \(v\in E\), and \(y_{n}:= (J+\frac{1}{\theta_{n}}A )^{-1}z\), we obtain

$$ \begin{aligned} &Ay_{n}=\theta_{n}(Jv-Jy_{n}), \\ &y_{n}\rightarrow y^{*}\in A^{-1}0, \end{aligned} $$
(2.7)

where \(A:E\rightarrow E^{*}\) is maximal monotone.

Remark 1

Let \(R>0\) such that \(\|v\|\le R\), \(\|y_{n}\|\le R\) for all \(n\ge1\). We observe that equation (2.7) yields

$$ Jy_{n-1}-Jy_{n} + \frac{1}{\theta_{n}} (Ay_{n-1}-Ay_{n} )=\frac {\theta _{n-1}-\theta_{n}}{\theta_{n}} (Jv-Jy_{n-1} ). $$
(2.8)

Taking the duality pairing of both sides of equation (2.8) with \(y_{n-1}-y_{n}\), using the monotonicity of A, and applying the Cauchy-Schwarz inequality on the right-hand side, we obtain

$$\langle Jy_{n-1}-Jy_{n},y_{n-1}-y_{n} \rangle \le\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}} \|Jv-Jy_{n-1} \| \|y_{n-1}-y_{n} \|. $$

It follows that if E is uniformly convex and uniformly smooth, using Lemma 2.3 we obtain

$$\begin{aligned} (2L)^{-1}\delta_{E}\bigl(c_{2}^{-1} \|y_{n-1}-y_{n}\|\bigr) \le& \frac{\theta _{n-1}-\theta_{n}}{\theta_{n}}\Vert Jv-Jy_{n-1}\Vert \|y_{n-1}-y_{n}\| \\ \le& 2R\sup \bigl\{ \Vert Jv-Jy_{n-1}\Vert \bigr\} \frac{\theta _{n-1}-\theta _{n}}{\theta_{n}}, \end{aligned}$$
(2.9)

which gives, using equation (2.6),

$$ \|y_{n-1}-y_{n}\|\le c_{2}\delta^{-1}_{E} \biggl(\frac{\theta_{n-1}-\theta _{n}}{\theta_{n}}K \biggr). $$
(2.10)

Similarly, using Lemma 2.4, we obtain

$$ \|Jy_{n-1}-Jy_{n}\|\le c_{2}\delta^{-1}_{E^{*}} \biggl(\frac{\theta _{n-1}-\theta_{n}}{\theta_{n}}K \biggr). $$
(2.11)

Remark 2

In p-uniformly convex spaces, we have (see, e.g., Chidume [6], p.34), for some constant \(c>0\),

$$ \delta_{E}(\epsilon)\ge c\epsilon^{p}\quad \text{for } 0< \epsilon\le2. $$
(2.12)

From inequality (2.9), using inequality (2.12), we obtain

$$\frac{c}{2Lc_{2}^{p}}\|y_{n-1}-y_{n}\|^{p} \le \biggl( \frac{\theta _{n-1}-\theta _{n}}{\theta_{n}} \biggr) \|Jv-Jy_{n-1} \|\|y_{n-1}-y_{n} \|, $$

which gives

$$ \|y_{n-1}-y_{n}\|\le \biggl(\frac{\theta_{n-1}-\theta_{n}}{\theta _{n}} \biggr)^{1/p}K_{1}\quad \text{for some } K_{1}>0. $$
(2.13)

Also, we have from Lemma 2.4 that

$$(2L)^{-1}\delta_{X^{*}}\bigl(c_{2}^{-1} \|Jx-Jy\|\bigr) \le\langle Jx-Jy,x-y\rangle. $$

Again, using the analog of inequality (2.12) for \(\delta_{E^{*}}\), we obtain

$$\frac{c}{2Lc_{2}^{p}}\|Jy_{n-1}-Jy_{n}\|^{p} \le \langle Jy_{n-1}-Jy_{n},y_{n-1}-y_{n}\rangle \le\|Jy_{n-1}-Jy_{n}\|\|y_{n-1}-y_{n}\|, $$

which gives

$$ \|Jy_{n-1}-Jy_{n}\|\le \biggl(\frac{\theta_{n-1}-\theta_{n}}{\theta _{n}} \biggr)^{1/p}K_{2}\quad \text{for some } K_{2}>0. $$
(2.14)

Lemma 2.7

(Kamimura and Takahashi [48])

Let X be a real smooth and uniformly convex Banach space, and let \(\{ x_{n}\}\) and \(\{y_{n}\}\) be two sequences of X. If either \(\{x_{n}\}\) or \(\{ y_{n}\}\) is bounded and \(\phi(x_{n},y_{n})\to0\) as \(n\to\infty\), then \(\| x_{n}-y_{n}\| \to0\) as \(n\to\infty\).

Lemma 2.8

(Xu [50])

Let \(\{a_{n}\}_{n=1}^{\infty}\) be a sequence of non-negative real numbers satisfying the following relation:

$$ a_{n+1}\leq (1-\sigma_{n} )a_{n} + \sigma_{n}b_{n} + c_{n},\quad n\geq0, $$
(2.15)

where \(\{\sigma_{n}\}\), \(\{b_{n}\}\), and \(\{c_{n}\}\) satisfy the conditions:

  1. (i)

    \(\{\sigma_{n}\}_{n=1}^{\infty}\subset[0,1]\), \(\sum_{n=1}^{\infty}\sigma_{n}=\infty\), or equivalently, \(\prod_{n=1}^{\infty}(1-\sigma_{n})=0\);

  2. (ii)

    \(\limsup_{n\rightarrow\infty}b_{n}\le0\);

  3. (iii)

    \(c_{n}\ge0\) (\(n\ge0\)), \(\sum_{n=1}^{\infty}c_{n}<\infty\).

Then \(\lim_{n\rightarrow\infty}a_{n}=0\).

Definition 2.9

(J-fixed point)

Let E be an arbitrary normed space and \(E^{*}\) be its dual. Let \(T:E\rightarrow2^{E^{*}}\) be any mapping. A point \(x\in E\) will be called a J-fixed point of T if and only if there exists \(\eta\in Tx\) such that \(\eta\in Jx\).

Remark 3

The notion of J-fixed points, as far as we know, was first introduced by Zegeye [33] who called a point \(x^{*}\in E\) such that \(Tx^{*}=Jx^{*}\), a semi-fixed point of T. Later, Liu [34] called such a point a duality fixed point of T.

3 Main results

We introduce the following definition.

Definition 3.1

(J-pseudocontractive mappings)

Let E be a normed space. A mapping \(T:E\rightarrow2^{E^{*}}\) is called J-pseudocontractive if for every \(x, y\in E\),

$$\langle\tau-\zeta,x-y\rangle\le\langle\eta-\nu, x-y\rangle\quad \text{for all } \tau\in Tx, \zeta\in Ty, \eta\in Jx, \nu\in Jy. $$

Example 1

If \(E=H\), a real Hilbert space, then J is the identity map on H. Consequently, every pseudocontractive map on H is J-pseudocontractive.

For our next example, we need the following characterization of the normalized duality map on \(l_{p}\), \(1< p<\infty\).

In \(l_{p}\) spaces, \(1< p<\infty\), for arbitrary \(x\in l_{p}\), \(x=(x_{1},x_{2},x_{3},\ldots)\),

$$Jx=\|x\|^{2-p}\bigl(|x_{1}|^{p-2}x_{1},|x_{2}|^{p-2}x_{2},|x_{3}|^{p-2}x_{3}, \ldots\bigr) $$

(see, e.g., Alber and Ryazantseva [23], p.36).
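
The following is a quick numerical sanity check (illustrative only, not part of the development) of this formula for a vector with finitely many nonzero entries; here \(q=p/(p-1)\) is the conjugate exponent, so that \(Jx\in l_{q}=(l_{p})^{*}\), and one verifies that \(\langle x,Jx\rangle=\Vert x\Vert _{p}^{2}\) and \(\Vert Jx\Vert _{q}=\Vert x\Vert _{p}\).

```python
import numpy as np

# Numerical sanity check (illustrative) of the normalized duality map formula on
# l_p, applied to a vector with finitely many nonzero entries.  Here q = p/(p-1)
# is the conjugate exponent, so Jx lies in l_q = (l_p)*.

p = 3.0
q = p / (p - 1.0)
x = np.array([1.0, -2.0, 0.5, 0.0, 3.0])

norm_p = np.linalg.norm(x, ord=p)
Jx = norm_p ** (2 - p) * np.sign(x) * np.abs(x) ** (p - 1)   # |x_i|^{p-2} x_i = sign(x_i)|x_i|^{p-1}

print(np.dot(x, Jx), norm_p ** 2)                # <x, Jx> = ||x||_p^2
print(np.linalg.norm(Jx, ord=q), norm_p)         # ||Jx||_q = ||x||_p
```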

Example 2

Let \(1< q< p<\infty\) and let \(\lambda\in\mathbb{R}\) be arbitrary. Define \(T:l_{p}\rightarrow l_{q}\) by

$$Tx=(\lambda,x_{2},x_{3},\ldots). $$

Then (i) T is J-pseudocontractive, (ii) \(x_{\lambda}:=(\lambda,0,0,\ldots)\) is a J-fixed point of T.
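
For instance, claim (ii) can be verified directly from the formula for J given above: since \(\Vert x_{\lambda}\Vert _{p}=\vert \lambda \vert \), for \(\lambda\ne0\) we have

$$Jx_{\lambda}=\vert \lambda \vert ^{2-p}\bigl(\vert \lambda \vert ^{p-2}\lambda,0,0, \ldots\bigr)=(\lambda,0,0,\ldots)=Tx_{\lambda}, $$

and for \(\lambda=0\) we have \(Jx_{\lambda}=0=Tx_{\lambda}\); in either case \(Jx_{\lambda}=Tx_{\lambda}\), so \(x_{\lambda}\) is a J-fixed point of T.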

Remark 4

We observe that, assuming existence, a zero of a monotone mapping \(A:E\rightarrow2^{E^{*}}\) corresponds to a J-fixed point of the J-pseudocontractive mapping \(T:=J-A\).

The following lemma asserts that \(A:E\rightarrow2^{E^{*}}\) is monotone if and only if \(T:=(J-A):E\rightarrow2^{E^{*}}\) is J-pseudocontractive.

Lemma 3.2

Let E be an arbitrary real normed space and \(E^{*}\) be its dual space. Let \(A:E\rightarrow2^{E^{*}}\) be any mapping. Then A is monotone if and only if \(T:=(J-A): E\rightarrow2^{E^{*}}\) is J-pseudocontractive.

Proof

Let \(x, y\in E\) be arbitrary. Suppose A is monotone. Then, for every \(\mu_{x} \in Ax\), \(\mu_{y}\in Ay\), \(jx\in Jx\), \(jy\in Jy\), \(\tau _{x}\in Tx\), \(\tau_{y}\in Ty\), such that \(\tau_{x}=jx-\mu_{x}\), \(\tau _{y}=jy-\mu_{y}\), we have

$$\begin{aligned} \langle\tau_{x}-\tau_{y},x-y\rangle =&\langle jx-jy,x-y \rangle- \langle\mu_{x}-\mu_{y},x-y\rangle \\ \le& \langle jx-jy,x-y\rangle. \end{aligned}$$

Hence, T is J-pseudocontractive.

Conversely, suppose \(T:= (J-A)\) is J-pseudocontractive, we prove \(A:= J-T\) is monotone. For all \(x, y\in E\), let \(\mu_{x}\in Ax\), \(\mu_{y}\in Ay\). Then \(\mu_{x}=jx-\zeta_{x}\) and \(\mu_{y}=jy-\zeta_{y}\) for some \(\zeta _{x}\in Tx\), \(\zeta_{y}\in Ty\), \(jx\in Jx\), and \(jy\in Jy\). We have

$$\begin{aligned} \langle\mu_{x}-\mu_{y},x-y\rangle =&\langle jx- \zeta_{x}-jy+\zeta _{y},x-y\rangle \\ =& \langle jx-jy,x-y\rangle- \langle\zeta_{x}-\zeta_{y},x-y \rangle \\ \ge& 0. \end{aligned}$$

Hence, A is monotone. □

We now prove the following lemma, which will be crucial in the sequel.

Lemma 3.3

Let E be a smooth real Banach space with dual \(E^{*}\). Let \(\phi :E\times E\to\mathbb{R}\) be the Lyapunov functional. Then

$$ \phi(y,x)=\phi(x,y)-2\langle x+y,Jx-Jy\rangle+2\bigl(\|x \|^{2}-\|y\|^{2}\bigr)\quad \textit {for all } x,y\in E. $$

Proof

Let \(x, y\in E\), we have

$$\begin{aligned} \phi(y,x) =&\|x\|^{2}-2\langle y,Jx\rangle+\|y\|^{2} \\ =&\phi(x,y)-2 \bigl(\langle y,Jx\rangle- \langle x,Jy\rangle \bigr). \end{aligned}$$
(3.1)

But,

$$ \langle x+y, Jx-Jy\rangle = \|x\|^{2}-\langle x,Jy\rangle+ \langle y,Jx\rangle- \|y\|^{2}, $$

so that

$$ \langle y,Jx\rangle- \langle x,Jy\rangle = \langle x+y, Jx-Jy\rangle+ \|y \|^{2} -\|x\|^{2}; $$

and substituting in (3.1), the result follows. □

In Theorem 3.4 below, the sequence \(\{\lambda_{n}\}_{n=1}^{\infty}\subset(0,1)\) satisfies the following conditions:

  1. (i)

    \(\sum_{n=1}^{\infty}\lambda_{n}=\infty\);

  2. (ii)

    \(\lambda_{n}M_{0}^{*}\le\gamma_{0}\theta_{n}\); \(\delta ^{-1}_{E}(\lambda _{n}M_{0}^{*}) \leq\gamma_{0}\theta_{n}\),

for all \(n\ge1\) and for some constants \(M_{0}^{*}>0\), \(\gamma_{0}>0\).

Theorem 3.4

Let E be a uniformly convex and uniformly smooth real Banach space and let \(E^{*}\) be its dual. Let \(T:E\to2^{E^{*}}\) be a multi-valued J-pseudocontractive and bounded map. Suppose \(F_{E}^{J}(T):=\{v\in E: Jv\in Tv\}\ne\emptyset\). For arbitrary \(u\in E\), define a sequence \(\{x_{n}\}\) iteratively by: \(x_{1}\in E\),

$$ x_{n+1}=J^{-1} \bigl((1-\lambda_{n})Jx_{n}+ \lambda_{n}\eta_{n}-\lambda _{n}\theta _{n}(Jx_{n}-Ju) \bigr), \quad n\geq1, \textit{where } \eta_{n}\in Tx_{n}. $$
(3.2)

Then the sequence \(\{x_{n}\}\) is bounded.

Proof

Since \(F_{E}^{J}(T)\ne\emptyset\), let \(x^{*}\in F_{E}^{J}(T)\). Then there exists \(r>0\) such that \(\max \{\phi(x^{*},u), \phi(x^{*},x_{1})\}\le\frac{r}{8}\). Let \(B:=\{x\in E: \phi(x^{*},x)\le r\}\), and since T is bounded, we define:

$$\begin{aligned}& M_{0}:=\sup\bigl\{ \bigl\| Jx-\eta+\theta(Jx-Ju)\bigr\| : \theta\in(0,1), x\in B, \eta\in Tx\bigr\} +1 , \\& M_{1}:=\sup\bigl\{ \|Jx-Ju\|: x\in B\bigr\} +1 , \\& M_{2}:=\sup\bigl\{ \bigl\| J^{-1} \bigl[Jx-\lambda \bigl(Jx-\eta+ \theta (Jx-Ju) \bigr) \bigr]-x\bigr\| : \lambda, \theta\in(0,1), x\in B, \eta\in Tx\bigr\} +1 . \end{aligned}$$

Let \(M:=\max\{M_{2}M_{0}, c_{2}M_{0}, c_{2}M_{1}\}\), and

$$\gamma_{0}: = \min \biggl\{ 1, \frac{r}{16M} \biggr\} , $$

where \(c_{2}\) is the constant in Lemma 2.3. We show that \(\phi(x^{*},x_{n})\le r\) for all \(n\ge1\). We proceed by induction. Clearly, \(\phi(x^{*},x_{1})\le r\). Suppose \(\phi(x^{*},x_{n})\le r\) for some \(n\ge1\). We show \(\phi(x^{*},x_{n+1})\le r\). Suppose this is not the case, then \(\phi(x^{*},x_{n+1})>r\). Observe that

$$\Vert x_{n+1}-x_{n}\Vert =\bigl\Vert J^{-1} \bigl[Jx_{n}-\lambda_{n} \bigl(Jx_{n}-\eta _{n}+\theta _{n}(Jx_{n}-Ju) \bigr) \bigr]-J^{-1}Jx_{n}\bigr\Vert . $$

From Lemma 2.3 and the recurrence relation (3.2), we have

$$\begin{aligned} (2L)^{-1}\delta_{E}\bigl(c_{2}^{-1} \|x_{n+1}-x_{n}\|\bigr) \le&\langle Jx_{n+1}-Jx_{n},x_{n+1}-x_{n} \rangle \\ \le& \|Jx_{n+1}-Jx_{n}\|\|x_{n+1}-x_{n}\| \\ \le&\lambda_{n}M_{0}\|x_{n+1}-x_{n} \|. \end{aligned}$$
(3.3)

We hence obtain

$$ \|x_{n+1}-x_{n}\|\le c_{2} \delta^{-1}_{E}\bigl(\lambda_{n}M_{0}^{*} \bigr)\quad \text{for some } M_{0}^{*}>0. $$
(3.4)

Using inequality (2.5) with \(y^{*}=\lambda_{n} [Jx_{n}-\eta _{n}+\theta_{n}(Jx_{n}-Ju) ]\), we obtain using also inequality (3.4)

$$\begin{aligned} \phi\bigl(x^{*},x_{n+1}\bigr) =& V\bigl(x^{*}, Jx_{n}- \lambda_{n} \bigl[Jx_{n}-\eta_{n}+\theta _{n}(Jx_{n}-Ju) \bigr]\bigr) \\ \le& V\bigl(x^{*},Jx_{n}\bigr)-2\lambda_{n}\bigl\langle x_{n}-x^{*}, Jx_{n}-\eta_{n} + \theta _{n}(Jx_{n}-Ju)\bigr\rangle \\ &{}-2\lambda_{n}\bigl\langle x_{n+1}-x_{n},Jx_{n}- \eta_{n}+\theta _{n}(Jx_{n}-Ju)\bigr\rangle \\ \le& V\bigl(x^{*},Jx_{n}\bigr)-2\lambda_{n}\bigl\langle x_{n}-x^{*}, Jx_{n}-\eta_{n} + \theta _{n}(Jx_{n}-Ju)\bigr\rangle \\ &{}+2\lambda_{n}\Vert x_{n+1}-x_{n}\Vert \bigl\Vert Jx_{n}-\eta_{n}+\theta_{n}(Jx_{n}-Ju) \bigr\Vert \\ \le& V\bigl(x^{*},Jx_{n}\bigr)-2\lambda_{n}\bigl\langle x_{n}-x^{*}, Jx_{n}-\eta_{n}\bigr\rangle \\ &{}-2 \lambda_{n}\theta_{n}\bigl\langle x_{n}-x^{*}, Jx_{n}-Ju\bigr\rangle + 2\lambda _{n}M_{0}c_{2} \delta^{-1}_{E}\bigl(\lambda_{n}M_{0}^{*} \bigr). \end{aligned}$$

Since T is J-pseudocontractive (so that \(J-T\) is monotone) and \(Jx^{*}\in Tx^{*}\), the term \(-2\lambda_{n}\langle x_{n}-x^{*}, Jx_{n}-\eta_{n}\rangle\) is nonpositive; using this and the recursion formula, we have

$$\begin{aligned} \phi\bigl(x^{*},x_{n+1}\bigr) \le& V\bigl(x^{*},Jx_{n}\bigr)-2 \lambda_{n}\theta_{n}\bigl\langle x_{n}-x^{*}, Jx_{n}-Ju\bigr\rangle +2\lambda_{n}M_{0}c_{2} \delta^{-1}_{E}\bigl(\lambda _{n}M_{0}^{*} \bigr) \\ =& \phi\bigl(x^{*},x_{n}\bigr)-2\lambda_{n} \theta_{n}\langle x_{n}-x_{n+1}, Jx_{n}-Ju \rangle -2\lambda_{n}\theta_{n}\bigl\langle x_{n+1}-x^{*}, Jx_{n}-Jx_{n+1}\bigr\rangle \\ &{}-2\lambda_{n}\theta_{n}\bigl\langle x_{n+1}-x^{*}, Jx_{n+1}-Ju\bigr\rangle + 2\lambda _{n}M_{0}c_{2} \delta^{-1}_{E}\bigl(\lambda_{n}M_{0}^{*} \bigr). \end{aligned}$$
(3.5)

We have from Lemma 2.2

$$-2\lambda_{n}\theta_{n}\bigl\langle x_{n+1}-x^{*}, Jx_{n+1}-Ju\bigr\rangle \le \lambda _{n}\theta_{n} \phi\bigl(x^{*},u\bigr)-\lambda_{n}\theta_{n}\phi \bigl(x^{*},x_{n+1}\bigr). $$

Substituting this in inequality (3.5), we obtain

$$\begin{aligned} r < &\phi\bigl(x^{*},x_{n+1}\bigr) \\ \le& \phi\bigl(x^{*},x_{n}\bigr)-\lambda_{n} \theta_{n}\phi\bigl(x^{*},x_{n+1}\bigr)+\lambda _{n} \theta _{n}\phi\bigl(x^{*},u\bigr)+2\lambda_{n} \theta_{n}M_{1}c_{2}\delta^{-1}_{E} \bigl(\lambda _{n}M_{0}^{*}\bigr) \\ &{}+2\lambda_{n}\theta_{n}M_{2}( \lambda_{n}M_{0}) + 2\lambda_{n}M_{0}c_{2} \delta ^{-1}_{E}\bigl(\lambda_{n}M_{0}^{*} \bigr) \\ \le& \phi\bigl(x^{*},x_{n}\bigr)-\lambda_{n} \theta_{n}\phi\bigl(x^{*},x_{n+1}\bigr)+\lambda _{n} \theta _{n}\phi\bigl(x^{*},u\bigr) \\ &{}+2\lambda_{n} \theta_{n}\gamma_{0}M_{1}c_{2}+2 \lambda_{n}\theta _{n}\gamma _{0} M_{2}M_{0} + 2\lambda_{n}\theta_{n} \gamma_{0}M_{0}c_{2} \\ \le& \phi\bigl(x^{*},x_{n}\bigr)-\lambda_{n} \theta_{n}\phi\bigl(x^{*},x_{n+1}\bigr) + 4\lambda _{n} \theta_{n}\frac{r}{8} \\ \le& r - \lambda_{n}\theta_{n}r + \frac{\lambda_{n}\theta_{n}r}{2}= r- \frac {\lambda_{n}\theta_{n}r}{2} < r. \end{aligned}$$

This is a contradiction. Hence, \(\{x_{n}\}_{n=1}^{\infty}\) is bounded. □

In Theorem 3.5 below, \(\lambda_{n}\) and \(\theta_{n}\) are real sequences in \((0,1)\) satisfying the following conditions:

  1. (i)

    \(\sum_{n=1}^{\infty}\lambda_{n}\theta_{n}=\infty\),

  2. (ii)

    \(\lambda_{n}M_{0}^{*}\le\gamma_{0}\theta_{n}\); \(\delta ^{-1}_{E}(\lambda_{n}M_{0}^{*}) \leq\gamma_{0}\theta_{n}\),

  3. (iii)

    \(\frac{\delta^{-1}_{E} (\frac{\theta_{n-1}-\theta _{n}}{\theta_{n}}K )}{\lambda_{n}\theta_{n}} \rightarrow0\), \(\frac{\delta ^{-1}_{E^{*}} (\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}}K )}{\lambda_{n}\theta_{n}} \rightarrow0\), as \(n\rightarrow\infty\),

  4. (iv)

    \(\frac{1}{2} (\frac{\theta_{n-1}-\theta_{n}}{\theta _{n}}K )\in(0,1)\),

for some constants \(M_{0}^{*}>0\), and \(\gamma_{0}>0\); where \(\delta_{E}: (0,\infty)\rightarrow(0,\infty)\) is the modulus of convexity of E and \(K>0\) is as defined in equation (2.6).

Theorem 3.5

Let E be a uniformly convex and uniformly smooth real Banach space and let \(E^{*}\) be its dual. Let \(T:E\to2^{E^{*}}\) be a J-pseudocontractive and bounded map such that \((J-T)\) is maximal monotone. Suppose \(F_{E}^{J}(T)=\{v\in E: Jv\in Tv\}\ne\emptyset\). For arbitrary \(x_{1}, u\in E\), define a sequence \(\{x_{n}\}\) iteratively by:

$$ x_{n+1}=J^{-1} \bigl[(1-\lambda_{n})Jx_{n}+ \lambda_{n}\eta_{n}-\lambda _{n}\theta _{n}(Jx_{n}-Ju) \bigr], \quad \eta_{n}\in Tx_{n}, n\geq1, $$
(3.6)

where \(\{\lambda_{n}\}\) and \(\{\theta_{n}\}\) are sequences in \((0,1)\) satisfying conditions (i)-(iv) above. Then the sequence \(\{x_{n}\}\) converges strongly to a J-fixed point of T.

Proof

Setting \(y^{*}=\lambda_{n} [Jx_{n}-\eta_{n}+\theta _{n}(Jx_{n}-Ju) ]\in E^{*}\), applying inequality (2.5) and using Lemma 3.3, we compute as follows:

$$\begin{aligned} \phi(y_{n},x_{n+1}) =& V\bigl(y_{n}, Jx_{n}-\lambda_{n} \bigl(Jx_{n}-\eta_{n}+\theta_{n}(Jx_{n}-Ju) \bigr)\bigr) \\ \le& V(y_{n},Jx_{n})-2\bigl\langle x_{n+1}-y_{n}, \lambda_{n} \bigl(Jx_{n}-\eta _{n}+ \theta_{n}(Jx_{n}-Ju) \bigr)\bigr\rangle \\ =& \phi(y_{n},x_{n})-2\lambda_{n}\langle x_{n+1}-y_{n},Jx_{n}-\eta_{n}\rangle- 2 \lambda_{n}\theta_{n}\langle x_{n+1}-y_{n},Jx_{n}-Ju \rangle \\ =& \phi(x_{n},y_{n})-2\langle x_{n}+y_{n},Jx_{n}-Jy_{n} \rangle+2 \bigl(\Vert x_{n}\Vert ^{2}-\Vert y_{n}\Vert ^{2} \bigr) \\ & {}-2\lambda_{n}\langle x_{n+1}-y_{n},Jx_{n}-\eta _{n}\rangle - 2\lambda_{n}\theta_{n}\langle x_{n+1}-y_{n},Jx_{n}-Ju\rangle. \end{aligned}$$
(3.7)

But we have from Lemma 2.6, \(y_{n}=J^{-1} [\tau _{n}-\theta _{n}(Jy_{n}-Ju) ]\) for some \(\tau_{n}\in Ty_{n}\) and thus obtain

$$\begin{aligned} \phi(x_{n},y_{n}) =&V(x_{n},Jy_{n})=V(x_{n},Jy_{n-1}+Jy_{n}-Jy_{n-1}) \\ \le& V(x_{n},Jy_{n-1})-2\langle y_{n}-x_{n},Jy_{n-1}-Jy_{n} \rangle. \end{aligned}$$

Hence, substituting this in inequality (3.7) and using Lemma 3.3, we obtain

$$\begin{aligned} \phi(y_{n},x_{n+1}) \le& V(x_{n},Jy_{n-1})-2\langle y_{n}-x_{n},Jy_{n-1}-Jy_{n} \rangle+ 2 \bigl(\| x_{n}\|^{2}-\|y_{n} \|^{2} \bigr) \\ & {}-2\langle x_{n}+y_{n}, Jx_{n}-Jy_{n}\rangle -2\lambda_{n}\langle x_{n+1}-y_{n},Jx_{n}- \eta_{n}\rangle \\ &{}- 2\lambda _{n}\theta _{n}\langle x_{n+1}-y_{n}, Jx_{n}-Ju\rangle \\ =& \phi(y_{n-1},x_{n}) + 2 \bigl(\|y_{n-1} \|^{2}-\|y_{n}\|^{2} \bigr) + 2\langle y_{n-1}+x_{n}, Jx_{n}-Jy_{n-1}\rangle \\ &{}-2 \langle y_{n}-x_{n},Jy_{n-1}-Jy_{n}\rangle -2\langle x_{n}+y_{n}, Jx_{n}-Jy_{n} \rangle \\ &{}-2\lambda_{n}\langle x_{n+1}-y_{n}, Jx_{n}-\eta_{n}\rangle- 2\lambda_{n} \theta_{n}\langle x_{n+1}-y_{n}, Jx_{n}-Ju \rangle. \end{aligned}$$
(3.8)

Furthermore, using Lemma 2.2, we obtain

$$\begin{aligned}& -2\lambda_{n}\theta_{n}\langle x_{n+1}-y_{n}, Jx_{n} - Ju\rangle \\& \quad = -2\lambda_{n}\theta_{n}\langle x_{n+1}-x_{n}, Jx_{n} - Ju\rangle- 2\lambda _{n}\theta_{n}\langle x_{n}-y_{n-1}, Jx_{n} - Jy_{n-1}\rangle \\& \qquad {} - 2\lambda_{n}\theta_{n}\langle x_{n}-y_{n-1}, Jy_{n-1} - Ju\rangle- 2 \lambda_{n}\theta_{n}\langle y_{n-1}-y_{n}, Jx_{n} - Ju\rangle \\& \quad \le -2\lambda_{n}\theta_{n}\langle x_{n+1}-x_{n}, Jx_{n} - Ju\rangle- \lambda _{n}\theta_{n}\phi(y_{n-1},x_{n}) \\& \qquad {} - 2\lambda_{n}\theta_{n}\langle x_{n}-y_{n-1}, Jy_{n-1} - Ju\rangle- 2 \lambda_{n}\theta_{n}\langle y_{n-1}-y_{n}, Jx_{n} - Ju\rangle. \end{aligned}$$

Substituting this inequality in inequality (3.8), we thus have

$$\begin{aligned} \phi(y_{n},x_{n+1}) \le& \phi(y_{n-1},x_{n}) + 2 \bigl(\|y_{n-1} \|^{2}-\|y_{n}\|^{2} \bigr) + 2\langle y_{n-1}+x_{n}, Jx_{n}-Jy_{n-1}\rangle \\ & {}-2 \langle y_{n}-x_{n},Jy_{n-1}-Jy_{n}\rangle-2\langle x_{n}+y_{n}, Jx_{n}-Jy_{n} \rangle \\ &{}-2\lambda_{n}\langle x_{n+1}-y_{n}, Jx_{n}-\eta_{n}\rangle- 2\lambda_{n} \theta_{n}\langle x_{n+1}-x_{n}, Jx_{n} - Ju\rangle- \lambda_{n}\theta_{n}\phi(y_{n-1},x_{n}) \\ & {} - 2\lambda_{n}\theta_{n}\langle x_{n}-y_{n-1}, Jy_{n-1} - Ju\rangle- 2\lambda_{n}\theta_{n}\langle y_{n-1}-y_{n}, Jx_{n} - Ju\rangle \\ \le& \phi(y_{n-1},x_{n}) - \lambda_{n} \theta_{n}\phi(y_{n-1},x_{n}) + 2 \bigl( \|y_{n-1}\|^{2}-\|y_{n}\|^{2} \bigr) \\ & {}+ 2 \langle y_{n-1}-y_{n}, Jx_{n}-Jy_{n-1} \rangle-2\langle y_{n}-x_{n},Jy_{n-1}-Jy_{n} \rangle \\ &{}-2\langle x_{n}+y_{n}, Jy_{n}-Jy_{n-1} \rangle-\underline{ 2\lambda_{n}\langle x_{n+1}-y_{n}, Jx_{n}-\eta_{n}\rangle} \\ & {}- 2\lambda_{n}\theta_{n}\langle x_{n+1}-x_{n}, Jx_{n} - Ju\rangle- \underline{2 \lambda_{n}\theta_{n}\langle x_{n}-y_{n-1}, Jy_{n-1} - Ju\rangle } \\ &{}- 2\lambda_{n}\theta_{n} \langle y_{n-1}-y_{n}, Jx_{n} - Ju\rangle. \end{aligned}$$

Estimating the underlined terms, we obtain

$$\begin{aligned}& - 2\lambda_{n}\langle x_{n+1}-y_{n}, Jx_{n}-\eta_{n}\rangle- 2\lambda _{n}\theta _{n}\langle x_{n}-y_{n-1}, Jy_{n-1} - Ju \rangle \\& \quad = - 2\lambda_{n}\langle x_{n+1}-x_{n}, Jx_{n}-\eta_{n}\rangle\underline{- 2\lambda_{n} \langle x_{n}-y_{n}, Jx_{n}-\eta_{n} \rangle} - 2\lambda_{n}\theta _{n}\langle x_{n}-y_{n}, Jy_{n-1} - Jy_{n}\rangle \\& \qquad {}-\underline{2\lambda_{n}\bigl\langle x_{n}-y_{n}, -(Jy_{n}-\tau_{n})\bigr\rangle } - 2\lambda_{n} \theta_{n}\langle y_{n}-y_{n-1}, Jy_{n-1} - Ju\rangle \\& \quad \le - 2\lambda_{n}\langle x_{n+1}-x_{n}, Jx_{n}-\eta_{n}\rangle- 2\lambda _{n} \theta_{n}\langle x_{n}-y_{n}, Jy_{n-1} - Jy_{n}\rangle \\& \qquad {}- 2\lambda _{n}\theta _{n}\langle y_{n}-y_{n-1}, Jy_{n-1} - Ju\rangle. \end{aligned}$$

We thus have

$$\begin{aligned} \phi(y_{n},x_{n+1}) \le& \phi(y_{n-1},x_{n}) - \lambda_{n} \theta_{n}\phi(y_{n-1},x_{n}) + 2\Vert y_{n-1} - y_{n}\Vert \bigl(\Vert y_{n-1}\Vert + \Vert y_{n}\Vert \bigr) \\ & {}+ 2\langle y_{n-1}-y_{n}, Jx_{n} - Jy_{n-1}\rangle- 2\langle x_{n}+y_{n}, Jy_{n} - Jy_{n-1}\rangle \\ &{}- 2\langle y_{n}-x_{n}, Jy_{n-1}-Jy_{n}\rangle- 2\lambda_{n} \theta_{n}\langle x_{n+1}-x_{n}, Jx_{n}-Ju \rangle \\ & {}- 2\lambda_{n}\theta_{n}\langle y_{n-1}-y_{n}, Jx_{n}-Ju\rangle- 2\lambda _{n}\langle x_{n+1}-x_{n}, Jx_{n}- \eta_{n}\rangle \\ & {}- 2\lambda_{n}\theta _{n}\langle x_{n}-y_{n}, Jy_{n-1}-Jy_{n}\rangle- 2\lambda_{n}\theta_{n}\langle y_{n}-y_{n-1}, Jy_{n-1}-Ju\rangle \\ \le& \phi(y_{n-1},x_{n}) - \lambda_{n} \theta_{n}\phi(y_{n-1},x_{n}) + 2\Vert y_{n-1} - y_{n}\Vert \bigl(\Vert y_{n-1}\Vert + \Vert y_{n}\Vert \bigr) \\ & {}+ 2\langle y_{n-1}-y_{n}, Jx_{n} - Jy_{n}\rangle- 2\langle y_{n-1}+x_{n}, Jy_{n} - Jy_{n-1}\rangle \\ &{}- 2\langle y_{n}-x_{n}, Jy_{n-1}-Jy_{n}\rangle- 2\lambda_{n} \theta_{n}\langle x_{n+1}-x_{n}, Jx_{n}-Ju \rangle \\ & {}- 2\lambda_{n}\theta_{n}\langle y_{n-1}-y_{n}, Jx_{n}-Ju\rangle- 2\lambda _{n}\langle x_{n+1}-x_{n}, Jx_{n}- \eta_{n}\rangle \\ & {}- 2\lambda_{n}\theta _{n}\langle x_{n}-y_{n}, Jy_{n-1}-Jy_{n}\rangle- 2\lambda_{n}\theta_{n}\langle y_{n}-y_{n-1}, Jy_{n-1}-Ju\rangle \\ \le& (1 - \lambda_{n}\theta_{n})\phi(y_{n-1},x_{n}) \\ &{}+ 2\lambda_{n}\theta _{n}M_{a} \bigl(\Vert x_{n+1}-x_{n}\Vert + \Vert y_{n-1}-y_{n} \Vert + \Vert Jy_{n-1}-Jy_{n}\Vert \bigr) \\ & {}+ M_{b} \bigl(\lambda_{n}\Vert x_{n+1}-x_{n}\Vert + \Vert y_{n-1}-y_{n} \Vert + \Vert Jy_{n-1}-Jy_{n}\Vert \bigr) \\ &\text{for some } M_{a}>0, M_{b}>0 \end{aligned}$$
(3.9)
$$\begin{aligned} \le& (1 - \lambda_{n}\theta_{n})\phi(y_{n-1},x_{n}) \\ &{}+ 2\lambda_{n}\theta _{n}M_{a} \biggl(c_{2}\delta^{-1}_{E}\bigl( \lambda_{n}M_{0}^{*}\bigr) + \delta^{-1}_{E} \biggl(\frac {\theta_{n-1}-\theta_{n}}{\theta_{n}}K \biggr) + \delta^{-1}_{E^{*}} \biggl( \frac {\theta_{n-1}-\theta_{n}}{\theta_{n}}K \biggr) \biggr) \\ & {}+ M_{b} \biggl(c_{2}\lambda_{n} \delta^{-1}_{E}\bigl(\lambda_{n}M_{0}^{*} \bigr) + \delta ^{-1}_{E} \biggl(\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}}K \biggr) + \delta ^{-1}_{E^{*}} \biggl(\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}}K \biggr) \biggr) \end{aligned}$$
(3.10)
$$\begin{aligned} \le& (1 - \lambda_{n}\theta_{n})\phi(y_{n-1},x_{n}) + \lambda_{n}\theta _{n}M_{a}^{*} \biggl(c_{2}\delta^{-1}_{E}\bigl( \lambda_{n}M_{0}^{*}\bigr) + \delta^{-1}_{E} \biggl(\frac {\theta_{n-1}-\theta_{n}}{\theta_{n}}K \biggr) \\ & {}+ \delta^{-1}_{E^{*}} \biggl( \frac {\theta_{n-1}-\theta_{n}}{\theta_{n}}K \biggr)+ \frac{\delta^{-1}_{E} (\frac{\theta_{n-1}-\theta_{n}}{\theta _{n}}K )}{\lambda_{n}\theta_{n}}+ \frac{\delta^{-1}_{E^{*}} (\frac {\theta _{n-1}-\theta_{n}}{\theta_{n}}K )}{\lambda_{n}\theta_{n}} \\ &{}+\frac {c_{2}\delta ^{-1}_{E}(\lambda_{n}M_{0}^{*})}{\theta_{n}} \biggr), \quad \text{where } M_{a}^{*}=2\max\{ M_{a}, M_{b} \}. \end{aligned}$$
(3.11)

Now, setting

$$a_{n}:=\phi(y_{n-1},x_{n}); \qquad \sigma_{n}:=\lambda_{n}\theta_{n};\qquad c_{n}\equiv0, $$

and

$$\begin{aligned} b_{n} :=& \biggl[M_{a}^{*} \biggl(c_{2} \delta^{-1}_{E}\bigl(\lambda_{n}M_{0}^{*} \bigr) + \delta^{-1}_{E} \biggl(\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}}K \biggr)+ \delta ^{-1}_{E^{*}} \biggl(\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}}K \biggr) \\ &{}+ \frac {\delta^{-1}_{E} (\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}}K )}{\lambda _{n}\theta_{n}}+ \frac{\delta^{-1}_{E^{*}} (\frac{\theta _{n-1}-\theta _{n}}{\theta_{n}}K )}{\lambda_{n}\theta_{n}}+\frac{c_{2}\delta ^{-1}_{E}(\lambda _{n}M_{0}^{*})}{\theta_{n}} \biggr) \biggr], \end{aligned}$$

inequality (3.11) becomes

$$a_{n+1}\leq (1-\sigma_{n} )a_{n} + \sigma_{n}b_{n} + c_{n},\quad n\geq0. $$

It now follows from Lemma 2.8 that \(\phi (y_{n-1},x_{n})\rightarrow0\) as \(n\rightarrow\infty\). From Lemma 2.7, we have \(\|x_{n}-y_{n-1}\|\rightarrow0\) and since \(y_{n}\rightarrow y^{*}\in(J-T)^{-1}0\), we obtain \(x_{n}\rightarrow y^{*}\in(J-T)^{-1}0\). This completes the proof. □

Example 3

We have (see, e.g., [23], p.47) for \(p>1\), \(q>1\), \(X=L^{p}\), \(X^{*}=L^{q}\),

$$\delta_{X^{*}}(\epsilon)=1- \biggl(1- \biggl(\frac{\epsilon}{2} \biggr)^{q} \biggr)^{1/q}, $$

and so obtain

$$\delta_{X^{*}}^{-1}(\epsilon)=2 \bigl[1-(1- \epsilon)^{q} \bigr]^{1/q}\le 2q^{1/q} \epsilon^{1/q},\quad \text{since } (1-\epsilon)^{q} > 1-q \epsilon \text{ for } q>1. $$

The prototypes for our theorems are the following:

$$\begin{aligned}& \lambda_{n}=\frac{1}{(n+1)^{a}},\qquad \theta_{n}= \frac{1}{(n+1)^{b}}, \\& 0< b< \frac{1}{r}\cdot a,\qquad a+b< 1/r, \\& b< 1/K,\quad \text{where } K>0 \text{ is as defined in equation (2.6)}, r=\max\{p,q\}. \end{aligned}$$

In particular, without loss of generality, let \(r=p\). Then one can choose \(a:=\frac{1}{(p+1)}\) and \(b:= \min \{\frac{1}{2K},\frac {1}{2p(p+1)} \}\).

We now verify that, with these prototypes, conditions (i)-(iv) of Theorem 3.5 are satisfied. Clearly (i) and the first part of (ii) are easily verified.

For the second part of condition (ii), we have

$$\begin{aligned} \frac{\delta^{-1}_{E}(\lambda_{n}M_{0}^{*})}{\theta_{n}} =&\frac{2 [1-(1-\lambda_{n}M_{0}^{*})^{p} ]^{1/p}}{\theta_{n}} \\ \le&\frac{2(pM_{0}^{*})^{1/p}\lambda_{n}^{1/p}}{\theta _{n}}=2\bigl(pM_{0}^{*}\bigr)^{1/p} \cdot(n+1)^{b-(a/p)} \rightarrow 0. \end{aligned}$$

For condition (iii), we have

$$\begin{aligned} \frac{\delta_{E^{*}}^{-1} (\frac{\theta_{n-1}}{\theta _{n}}-1 )}{\lambda_{n}\theta_{n}} =&\frac{2 [1- (2-\frac{\theta _{n-1}}{\theta _{n}} )^{q} ]^{1/q}}{\lambda_{n}\theta_{n}} \\ =&\frac{2 [1- (2- (\frac{n+1}{n} )^{b} )^{q} ]^{1/q}}{1/(n+1)^{a+b}}=2 \biggl[1- \biggl(2- \biggl(1+\frac{1}{n} \biggr)^{b} \biggr)^{q} \biggr]^{1/q} \cdot(n+1)^{a+b} \\ \le&2 \biggl[1- \biggl(2-1-\frac{b}{n} \biggr)^{q} \biggr]^{1/q}\cdot (n+1)^{a+b} \le2 \biggl[\frac{bq}{n} \biggr]^{1/q}\cdot(n+1)^{a+b} \\ =&2(bq)^{1/q}\cdot\frac{1}{n^{1/q}}\cdot(n+1)^{a+b} \le 2^{a+b+1}(bq)^{1/q}\cdot n^{a+b-(1/q)} \rightarrow 0. \end{aligned}$$

Similarly, we obtain

$$\frac{\delta_{E}^{-1} (\frac{\theta_{n-1}}{\theta_{n}}-1 )}{\lambda_{n}\theta_{n}}=\frac{2 [1- (2-\frac{\theta _{n-1}}{\theta _{n}} )^{p} ]^{1/p}}{\lambda_{n}\theta_{n}} \rightarrow 0. $$

Finally, for condition (iv), we have

$$ \frac{1}{2} \biggl(\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}}K \biggr) = \frac {1}{2} \biggl[ \biggl(1+\frac{1}{n} \biggr)^{b}-1 \biggr]\cdot K\le \frac{bK}{2n}< 1. $$

This completes the verification.

Remark 5

We remark, following Lindenstrauss and Tzafriri [47], that in applications, we do not often use the precise value of the modulus of convexity but only a power type estimate from below.

A uniformly convex space X has modulus of convexity of power type p if, for some \(0< K<\infty\), \(\delta_{X}(\epsilon)\ge K\epsilon^{p}\). For instance, \(L_{p}\) spaces have modulus of convexity of power type 2, for \(1< p\le2\), and of power type p, for \(p>2\) (see, e.g., [47], p.63). We observe that the condition for modulus of convexity of power type p corresponds to that of p-uniformly convex spaces. Thus, \(L_{p}\) spaces are 2-uniformly convex, for \(1< p\le 2\), and are p-uniformly convex, for \(p> 2\). This leads us to prove the following corollary of Theorem 3.4, which will be crucial in several applications.

Corollary 3.6

For \(p>1\), \(q>1\), let E be a p-uniformly convex and q-uniformly smooth real Banach space and let \(E^{*}\) be its dual. Let \(T:E\to E^{*}\) be a J-pseudocontractive and bounded map. Suppose \(F_{E}^{J}(T):=\{u^{*}\in E: Tu^{*}=Ju^{*}\}\ne\emptyset\). For arbitrary \(x_{1}, u\in E\), define a sequence \(\{x_{n}\}\) iteratively by:

$$ x_{n+1}=J^{-1} \bigl[(1-\lambda_{n})Jx_{n}+ \lambda_{n}\eta_{n}-\lambda _{n}\theta _{n}(Jx_{n}-Ju) \bigr], \quad n\geq1,\textit{where } \eta_{n}\in Tx_{n}, $$
(3.12)

where \(\{\lambda_{n}\}\) and \(\{\theta_{n}\}\) are sequences in \((0,1)\) satisfying conditions (i)-(iii) of Theorem  3.4. Then the sequence \(\{x_{n}\}\) converges strongly to a J-fixed point of T.

Proof

We observe, for p-uniformly convex space, using Remark 2, that conditions (i)-(iv) of Theorem 3.5 reduce to:

(i) :

\(\lambda_{n}\le\gamma_{0}\theta_{n}\),

(ii) :

\(\sum_{n=1}^{\infty}\lambda_{n}\theta_{n}=\infty\),

(iii) :

\((\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}} )^{1/p}\rightarrow0\), \(\frac{M^{*} (\frac{\theta_{n-1}-\theta _{n}}{\theta_{n}} )^{1/p}}{\lambda_{n}\theta_{n}}\rightarrow0\), \(\frac{ (\lambda _{n}^{(1/p)}M_{0}^{**} )}{\theta_{n}} \rightarrow0\), as \(n\rightarrow\infty\), for some \(M_{0}^{**}, M^{*}>0\),

and for p-uniformly convex spaces, we have from (3.3), using equation (2.12),

$$ \begin{aligned} &c_{2}^{-1}\|x_{n+1}-x_{n} \|^{p} \le 2LM_{0}\lambda_{n}\|x_{n+1}-x_{n} \|, \\ &\|x_{n+1}-x_{n}\|\le \lambda_{n}^{1/p}M_{0}^{**} \quad \text{for some } M_{0}^{**}>0. \end{aligned} $$
(3.13)

Following the proof of Theorem 3.5, we have from inequality (3.9), using (3.13):

$$\begin{aligned} \phi(y_{n},x_{n+1}) \le& (1 - \lambda_{n}\theta_{n})\phi(y_{n-1},x_{n}) \\ &{}+ 2\lambda_{n}\theta _{n}M_{a} \biggl( \lambda_{n}^{1/p}M_{0}^{**} + K_{1} \biggl(\frac{\theta _{n-1}-\theta _{n}}{\theta_{n}} \biggr)^{1/p} + K_{2} \biggl(\frac{\theta_{n-1}-\theta _{n}}{\theta_{n}} \biggr)^{1/p} \biggr) \\ & {} + M_{b} \biggl(\lambda_{n}^{1+(1/p)}M_{0}^{**} + K_{1} \biggl(\frac{\theta _{n-1}-\theta_{n}}{\theta_{n}} \biggr)^{1/p} + K_{2} \biggl(\frac{\theta _{n-1}-\theta_{n}}{\theta_{n}} \biggr)^{1/p} \biggr) \\ \le& (1 - \lambda_{n}\theta_{n})\phi(y_{n-1},x_{n}) \\ &{}+ \lambda_{n}\theta _{n}M_{a}^{*} \biggl( \lambda_{n}^{1/p}M_{0}^{**} + M^{*} \biggl( \frac{\theta _{n-1}-\theta_{n}}{\theta_{n}} \biggr)^{1/p}+\frac{M^{*} (\frac{\theta _{n-1}-\theta_{n}}{\theta_{n}} )^{1/p}}{\lambda_{n}\theta_{n}} \\ & {}+\frac{ (\lambda_{n}^{(1/p)}M_{0}^{**} )}{\theta_{n}} \biggr),\quad \text{where }M^{*}=\max \{K_{1},K_{2}\}, M_{a}^{*}=2\max\{M_{a}, M_{b}\}. \end{aligned}$$
(3.14)

Now, setting

$$a_{n}:=\phi(y_{n-1},x_{n});\qquad \sigma_{n}:=\lambda_{n}\theta_{n};\qquad c_{n}\equiv0, $$

and

$$\begin{aligned}& b_{n}:= \biggl[M_{a}^{*} \biggl(\lambda_{n}^{1/p}M_{0}^{**} + M^{*} \biggl(\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}} \biggr)^{1/p}+\frac {M^{*} (\frac{\theta_{n-1}-\theta_{n}}{\theta_{n}} )^{1/p}}{\lambda _{n}\theta_{n}} + \frac{ (\lambda_{n}^{(1/p)}M_{0}^{**} )}{\theta _{n}} \biggr) \biggr], \\& a_{n+1}\leq (1-\sigma_{n} )a_{n} + \sigma_{n}b_{n} + c_{n},\quad n\geq0. \end{aligned}$$

It now follows from Lemma 2.8 that \(\phi (y_{n-1},x_{n})\rightarrow0\) as \(n\rightarrow\infty\). From Lemma 2.7, we have \(\|x_{n}-y_{n-1}\|\rightarrow0\), and since \(y_{n}\rightarrow y^{*}\in(J-T)^{-1}0\), this completes the proof. □

Example 4

Real sequences that satisfy the conditions (i)-(iv) in Corollary 3.6 are the following:

$$\begin{aligned}& \lambda_{n}=(n+1)^{-a}\quad \text{and} \quad \theta_{n}=(n+1)^{-b},\quad n\ge1, \\& 0< b< \frac{1}{p}\cdot a, \qquad a+b< 1/p. \end{aligned}$$

For example, one can choose \(a:=\frac{1}{(p+1)}\) and \(b:= \frac {1}{2p(p+1)}\). We now check these prototypes.

Clearly conditions (i)-(ii) are satisfied. We verify condition (iii). Using the fact that \((1+x)^{s}\le1+sx\), for \(x>-1\) and \(0< s<1\), we have

$$\begin{aligned} 0 \le&\frac{M^{*} (\frac{\theta_{n-1}}{\theta_{n}}-1 )^{1/p}}{\lambda_{n}\theta_{n}}=M^{*} \biggl[ \biggl(1+\frac{1}{n} \biggr)^{b}-1 \biggr]^{1/p}\cdot(n+1)^{a+b} \\ \le& M^{*}b^{1/p}\cdot\frac {(n+1)^{a+b}}{n^{1/p}}=2^{a+b}M^{*}b^{1/p} \cdot n^{a+b-(1/p)}\rightarrow 0. \end{aligned}$$

Also,

$$ 0 \le \biggl(\frac{\theta_{n-1}}{\theta_{n}}-1 \biggr)^{1/p}= \biggl[ \biggl(1+ \frac{1}{n} \biggr)^{b}-1 \biggr]^{1/p}\le \frac{ b^{1/p}}{n^{1/p}}\rightarrow 0 $$

and

$$ 0 \le \frac{\lambda_{n}^{(1/p)}M_{0}^{**}}{\theta_{n}}= M_{0}^{**}(n+1)^{b-(a/p)} \rightarrow 0. $$
(3.15)

4 Application to zeros of maximal monotone maps

Corollary 4.1

Let E be a uniformly convex and uniformly smooth real Banach space and let \(E^{*}\) be its dual. Let \(A:E\to2^{E^{*}}\) be a multi-valued maximal monotone and bounded map such that \(A^{-1}0\ne\emptyset\). For fixed \(u, x_{1}\in E\), let a sequence \(\{x_{n}\}\) be iteratively defined by

$$ x_{n+1}=J^{-1} \bigl[Jx_{n}- \lambda_{n}\mu_{n}-\lambda_{n}\theta _{n}(Jx_{n}-Ju) \bigr],\quad n\geq1, \mu_{n}\in Ax_{n}, $$
(4.1)

where \(\{\lambda_{n}\}\) and \(\{\theta_{n}\}\) are sequences in \((0,1)\) satisfying conditions (i)-(iv) of Theorem 3.5. Then the sequence \(\{x_{n}\}\) converges strongly to a zero of A.

Proof

Recall (Lemma 3.2) that A is monotone if and only if \(T:=(J-A)\) is J-pseudocontractive, and that zeros of A correspond to J-fixed points of T. Now, if we replace A by \(J-T\) in equation (4.1), the equation reduces to (3.6), and hence the result follows from Theorem 3.5. □
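
The following is a minimal numerical sketch of the recursion (4.1) (illustrative only, not part of the proof). It assumes the finite-dimensional space \(E=(\mathbb{R}^{d},\Vert \cdot\Vert _{p})\) with dual \((\mathbb{R}^{d},\Vert \cdot\Vert _{q})\), \(\frac{1}{p}+\frac{1}{q}=1\), so that J is given by the \(l_{p}\) formula quoted in Section 3 and \(J^{-1}\) by the same formula with the conjugate exponent q; the monotone test map \(Au=u-z\) (with unique zero z) and the power-type parameters are assumptions made only for this illustration.

```python
import numpy as np

# Minimal sketch of the recursion (4.1) (illustrative only, not part of the
# proof), in E = (R^d, ||.||_p) with dual (R^d, ||.||_q), 1/p + 1/q = 1.
# J is the l_p duality-map formula quoted in Section 3; J^{-1} is the same
# formula evaluated with the conjugate exponent q.  The monotone test map
# A(u) = u - z (unique zero z) and the power-type parameters below (chosen so
# that b < a/p and a + b < 1/p, cf. Example 4) are assumptions for this
# illustration only; the admissible theta_n decay slowly, so convergence is
# correspondingly slow.

def J(x, r):
    """Normalized duality map on (R^d, ||.||_r): ||x||^{2-r} sign(x_i) |x_i|^{r-1}."""
    nrm = np.linalg.norm(x, ord=r)
    return np.zeros_like(x) if nrm == 0.0 else nrm ** (2 - r) * np.sign(x) * np.abs(x) ** (r - 1)

p = 3.0
q = p / (p - 1.0)
a, b = 0.25, 0.08

rng = np.random.default_rng(0)
z = rng.standard_normal(4)           # the (unique) zero of A
A = lambda v: v - z                  # monotone: <Av - Aw, v - w> = ||v - w||_2^2 >= 0

u = rng.standard_normal(4)           # the fixed vector u in (4.1)
x = rng.standard_normal(4)           # x_1
for n in range(1, 10 ** 5 + 1):
    lam, theta = (n + 1.0) ** (-a), (n + 1.0) ** (-b)
    x = J(J(x, p) - lam * A(x) - lam * theta * (J(x, p) - J(u, p)), q)
    if n in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 5):
        print(n, np.linalg.norm(x - z))   # ||x_n - z|| decreases slowly toward 0
```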

5 Complement to proximal point algorithm

The proximal point algorithm of Martinet [35] and Rockafellar [36] was introduced to approximate a solution of \(0\in Au\), where A is the subdifferential of some convex functional defined on a real Hilbert space. A solution of this inclusion gives a minimizer of the convex functional. Let E be a real normed space with dual space \(E^{*}\), and let \(f:E\rightarrow\mathbb{R}\) be a convex functional. The subdifferential of f, \(\partial f:E\rightarrow2^{E^{*}}\), at \(u\in E\) is defined as follows:

$$(\partial f) (u)=\bigl\{ x^{*}\in E^{*}: f(y)-f(u)\ge\bigl\langle y-u,x^{*}\bigr\rangle \ \forall y\in E\bigr\} . $$

It is well known that ∂f is a maximal monotone map on E and that \(0\in(\partial f)(u)\) if and only if u is a minimizer of f. Following this, the proximal point algorithm has been studied for minimizers of f in real Banach spaces more general than Hilbert spaces.

Rockafellar [36] proved that the proximal point algorithm defined as follows:

$$ x_{k+1}= \biggl(I+ \frac{1}{\lambda_{k}}A \biggr)^{-1}(x_{k}) + e_{k},\quad x_{1}\in H, $$
(5.1)

where \(\lambda_{k}>0\) is a regularizing parameter, converges weakly to a solution of \(0\in Au\), where A is the subdifferential of a convex functional on a Hilbert space, provided a solution exists. He then asked whether the proximal point algorithm always converges strongly.

This was resolved in the negative by Güler [51], who produced a proper closed convex function g in the infinite dimensional Hilbert space \(l_{2}\) for which the proximal point algorithm converges weakly but not strongly (see also Bauschke et al. [52]). Several authors modified the proximal point algorithm to obtain strong convergence (see, e.g., Bruck [37]; Kamimura and Takahashi [40]; Lehdili and Moudafi [41]; Reich [42]; Solodov and Svaiter [45]; Xu [46]). We remark that in every one of these modifications, the recursion formula developed involved either the computation of \((I+\lambda_{k} A)^{-1}(x_{k})\) at each step of the iteration process or the construction, at each iteration, of two subsets of the space, intersecting them and projecting the initial vector onto the intersection. As far as we know, the first iteration process to approximate a solution of \(0\in Au\) in real Banach spaces more general than Hilbert spaces that does not involve either of these drawbacks was given by Chidume and Djitte [39], who proved a special case of Theorem 1.1 in which the space E is a 2-uniformly smooth real Banach space. These spaces include \(L_{p}\) spaces, \(2\le p<\infty\), but do not include \(L_{p}\) spaces, \(1< p<2\). This result of Chidume and Djitte has recently been extended to uniformly convex and uniformly smooth real Banach spaces (which include \(L_{p}\) spaces, \(1< p<\infty\)) by Chidume [21] (Theorem 1.1 above).

Corollary 4.1 of this paper is an analog of Theorem 1.1 for the case in which \(A:E\rightarrow2^{E^{*}}\) is a bounded maximal monotone map. It complements the proximal point algorithm in this setting in the sense that it yields strong convergence to a solution of \(0\in Au\) without requiring either the computation of \((J+\lambda A)^{-1}(z_{n})\) at each step of the iteration process or the construction of two subsets of E and the projection of the initial vector onto their intersection at each stage.
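
For comparison, the following is a minimal sketch (illustrative only) of the proximal point step (5.1) in the special case \(A=\partial f\) with \(f(x)=\Vert x\Vert _{1}\) on \(\mathbb{R}^{d}\) and \(e_{k}=0\) (a hypothetical example). In this case the resolvent \((I+cA)^{-1}\) is componentwise soft-thresholding, so each step is explicit; for a general maximal monotone A, however, the resolvent requires solving an inclusion at every step, which is precisely the per-iteration cost that the scheme of Corollary 4.1 avoids.

```python
import numpy as np

# Minimal sketch of the proximal point step (5.1) (illustrative only), in the
# special case A = the subdifferential of f(x) = ||x||_1 on R^d, with e_k = 0.
# Here the resolvent (I + c*A)^{-1} is componentwise soft-thresholding by c, so
# each step is explicit; for a general maximal monotone A it requires solving
# an inclusion at every step.

def resolvent_l1(x, c):
    """(I + c * d||.||_1)^{-1}(x): soft-threshold each coordinate by c."""
    return np.sign(x) * np.maximum(np.abs(x) - c, 0.0)

x = np.array([3.0, -2.0, 0.5, 7.0])               # x_1
lam = 1.0                                         # regularizing parameter lambda_k (constant here)
for k in range(20):
    x = resolvent_l1(x, 1.0 / lam)                # x_{k+1} = (I + (1/lambda_k) A)^{-1}(x_k)
print(x)                                          # reaches the minimizer 0 of f after finitely many steps
```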

6 Application to solutions of Hammerstein integral equations

Definition 6.1

Let \(\Omega\subset{\mathbb{R}}^{n}\) be bounded. Let \(k:\Omega\times \Omega\to\mathbb{R}\) and \(f:\Omega\times\mathbb{R} \to\mathbb{R}\) be measurable real-valued functions. An integral equation (generally nonlinear) of Hammerstein-type has the form

$$ u(x)+ \int_{\Omega}k(x,y)f\bigl(y,u(y)\bigr)\, dy=w(x), $$
(6.1)

where the unknown function u and inhomogeneous function w lie in a Banach space E of measurable real-valued functions.

By a simple transformation, namely setting \(Fu(y):=f(y,u(y))\) and \(Kv(x):=\int_{\Omega}k(x,y)v(y)\,dy\), equation (6.1) can be put in the form

$$ u+KFu=w, $$
(6.2)

which, without loss of generality, can be written as

$$ u+KFu=0. $$
(6.3)

Interest in Hammerstein integral equations stems mainly from the fact that several problems arising in differential equations, for instance, elliptic boundary value problems whose linear part possesses a Green's function, can, as a rule, be transformed into the form (6.1) (see, e.g., Pascali and Sburian [8], p.164).

Among the early results on the approximation of solutions of Hammerstein equations is the following result of Brézis and Browder.

Theorem 6.2

(Brézis and Browder [53])

Let H be a separable Hilbert space and C be a closed subspace of H. Let \(K:H \to C\) be a bounded continuous monotone operator and \(F:C\to H\) be an angle-bounded and weakly compact mapping. For a given \(f\in C\), consider the Hammerstein equation

$$ (I+KF)u=f $$
(6.4)

and its nth Galerkin approximation given by

$$ (I+K_{n}F_{n})u_{n}=P^{*}f, $$
(6.5)

where \(K_{n}=P_{n}^{*}KP_{n}:H\to C\) and \(F_{n}=P_{n}FP_{n}^{*}:C_{n} \to H\), and the symbols have their usual meanings (see [8, 53]). Then, for each \(n\in\mathbb{N}\), the Galerkin approximation (6.5) admits a unique solution \(u_{n}\) in \(C_{n}\), and \(\{u_{n}\}\) converges strongly in H to the unique solution \(u\in C\) of equation (6.4).

It is obvious that if an iterative algorithm can be developed for the approximation of solutions of equations of Hammerstein type (6.3), this would certainly be preferred.

Attempts have been made to approximate solutions of equations of Hammerstein type using the Mann-type iteration scheme. However, the results obtained were not satisfactory (see, e.g., [54]). The recurrence formulas used in early attempts involved \(K^{-1}\), which was also required to be strongly monotone; apart from limiting the class of mappings to which such iterative schemes are applicable, this requirement is also not convenient in applications. Part of the difficulty is the fact that the composition of two monotone operators need not be monotone.

The first satisfactory results on iterative methods for approximating solutions of Hammerstein equations in real Banach spaces more general than Hilbert spaces were, as far as we know, obtained by Chidume and Zegeye [55–57]. For the case of a real Hilbert space H, with \(F,K:H \to H\), they defined an auxiliary map \(T:E\to E\) on the Cartesian product \(E:=H\times H\) by

$$T[u,v]=[Fu-v,Kv+u]. $$

We note that

$$T[u,v]=0\quad \Longleftrightarrow \quad u \mbox{ solves (6.3) and } v=Fu. $$
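
Indeed, this follows immediately from the definition of T:

$$T[u,v]=0\quad \Longleftrightarrow\quad Fu-v=0 \mbox{ and } Kv+u=0 \quad \Longleftrightarrow\quad v=Fu \mbox{ and } u+KFu=0. $$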

With this, they were able to obtain strong convergence of an iterative scheme defined in the Cartesian product space E to a solution of the Hammerstein equation (6.3). The method of proof used by Chidume and Zegeye provided the clue to the following coupled explicit algorithm for computing a solution of the equation \(u+KFu=0\) in the original space X. With initial vectors \(u_{0}, v_{0}\in X\), sequences \(\{u_{n}\}\) and \(\{v_{n}\}\) in X are defined iteratively as follows:

$$\begin{aligned}& u_{n+1}=u_{n}-\alpha_{n}(Fu_{n}-v_{n}), \quad n\geq0, \end{aligned}$$
(6.6)
$$\begin{aligned}& v_{n+1}=v_{n}-\alpha_{n}(Kv_{n}+u_{n}), \quad n\geq0, \end{aligned}$$
(6.7)

where \(\{\alpha_{n}\}\) is a sequence in \((0,1)\) satisfying appropriate conditions.
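
For illustration only, the following minimal Python sketch runs the coupled recursion (6.6)-(6.7) in the toy Hilbert space \(X=\mathbb{R}^{2}\). The affine monotone maps F and K, the step sizes \(\alpha_{n}\), and the number of iterations are assumptions made purely for this example; the closed-form solution is computed only to monitor the error.

    import numpy as np

    # Toy monotone maps on R^2 (viewed as a Hilbert space), chosen only for illustration:
    # F is affine with a positive-definite linear part, K is linear and positive definite.
    B = np.array([[2.0, 0.5], [0.5, 1.5]])
    b = np.array([1.0, -1.0])
    C = np.array([[1.0, 0.2], [0.2, 2.0]])
    F = lambda u: B @ u + b
    K = lambda v: C @ v

    # Closed-form solution of u + KFu = 0, used only to monitor the error.
    u_star = -np.linalg.solve(np.eye(2) + C @ B, C @ b)

    u = np.array([3.0, 3.0])   # u_0
    v = np.array([0.0, 0.0])   # v_0
    for n in range(20000):
        alpha = 1.0 / (n + 2)  # sample choice: alpha_n -> 0, sum of alpha_n diverges
        # recursions (6.6) and (6.7); the tuple assignment uses the old u in the v-update
        u, v = u - alpha * (F(u) - v), v - alpha * (K(v) + u)

    print("u_n  :", u)
    print("u*   :", u_star)
    print("error:", np.linalg.norm(u - u_star))

With these diminishing step sizes the iterates approach \(u^{*}\) only slowly; no claim is made that the sample parameters are optimal.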

Some typical results obtained using the recursion formulas described above in approximating solutions of nonlinear Hammerstein equations involving monotone maps in Hilbert spaces can be found in [57, 58].

In real Banach spaces X more general than Hilbert spaces, where \(F,K: X\rightarrow X\) are of accretive type, Chidume and Zegeye considered an operator \(A:E\rightarrow E\), where \(E:= X\times X\), and were able to approximate solutions of Hammerstein equations successfully using the recursion formulas described above. These schemes have since been employed by Chidume and other authors to approximate solutions of Hammerstein equations in various Banach spaces under various continuity assumptions (see, e.g., [27, 31, 55–71]). This success has not carried over to the case of monotone-type mappings in Banach spaces, where K and F map a space into its dual. In this section, we introduce a new iterative scheme and prove that the sequence it generates converges strongly to a solution of a Hammerstein equation in this setting. For this purpose, we begin with the following preliminaries and lemmas.

We now prove the following lemmas.

Lemma 6.3

Let X, Y be real uniformly convex and uniformly smooth spaces. Let \(E=X\times Y\) with the norm \(\|z\|_{E}=(\|u\|^{q}_{X} + \|v\|^{q}_{Y})^{\frac {1}{q}}\), \(q>1\), for arbitrary \(z=[u,v]\in E\). Let \(E^{*}=X^{*}\times Y^{*}\) denote the dual space of E. For arbitrary \(x=[x_{1},x_{2}]\in E\), define the map \(j_{q}^{E}:E\rightarrow E^{*}\) by

$$j_{q}^{E}(x)=j_{q}^{E}[x_{1},x_{2}]:= \bigl[j_{q}^{X}(x_{1}),j_{q}^{Y}(x_{2}) \bigr], $$

so that for arbitrary \(z_{1}=[u_{1},v_{1}]\), \(z_{2}=[u_{2},v_{2}]\) in E, the duality pairing \(\langle\cdot,\cdot\rangle\) is given by

$$\bigl\langle z_{1},j_{q}^{E}(z_{2})\bigr\rangle := \bigl\langle u_{1},j_{q}^{X}(u_{2})\bigr\rangle + \bigl\langle v_{1},j_{q}^{Y}(v_{2}) \bigr\rangle . $$

Then

  1. (a)

    E is uniformly smooth and uniformly convex,

  2. (b)

    \(j_{q}^{E}\) is a single-valued duality mapping on E.

Proof

(a) Let \(p>1\), \(q>1\). Let \(x=[x_{1},x_{2}]\), \(y=[y_{1},y_{2}]\) be arbitrary elements of E. Using condition (iii)′ of Corollary 2′ in [72], we have

$$\begin{aligned}& \bigl\langle x-y, j_{q}^{E}(x)-j_{q}^{E}(y) \bigr\rangle \\& \quad = \bigl\langle [x_{1}-y_{1},x_{2}-y_{2}], \bigl[j_{q}^{X}(x_{1})-j_{q}^{X}(y_{1}),j_{q}^{Y}(x_{2})-j_{q}^{Y}(y_{2}) \bigr] \bigr\rangle \\& \quad = \bigl\langle x_{1}-y_{1},j_{q}^{X}(x_{1})-j_{q}^{X}(y_{1}) \bigr\rangle + \bigl\langle x_{2}-y_{2},j_{q}^{Y}(x_{2})-j_{q}^{Y}(y_{2}) \bigr\rangle \\& \quad \le g^{*}_{1}\bigl(\Vert x_{1}-y_{1}\Vert \bigr) + g^{*}_{2}\bigl(\Vert x_{2}-y_{2}\Vert \bigr), \end{aligned}$$

where \(g^{*}_{1}\), \(g^{*}_{2}\) are strictly increasing continuous and convex functions on \(\mathbb{R}^{+}\) and \(g^{*}_{1}(0)=g^{*}_{2}(0)=0\). It follows that

$$\bigl\langle x-y,j_{q}^{E}(x)-j_{q}^{E}(y) \bigr\rangle \le g^{*}\bigl(\Vert x-y\Vert \bigr), $$

where \(g^{*}(\|x-y\|)=g^{*}_{1}(\|x_{1}-y_{1}\|) + g^{*}_{2}(\|x_{2}-y_{2}\|)\). Hence it follows from Corollary 2′ that E is uniformly smooth.

Also, using condition (iii) of Corollary 3 in [72], we have

$$\begin{aligned}& \bigl\langle x-y, j_{p}^{E}(x)-j_{p}^{E}(y) \bigr\rangle \\& \quad = \bigl\langle [x_{1}-y_{1},x_{2}-y_{2}], \bigl[j_{p}^{X}(x_{1})-j_{p}^{X}(y_{1}),j_{p}^{Y}(x_{2})-j_{p}^{Y}(y_{2}) \bigr] \bigr\rangle \\& \quad = \bigl\langle x_{1}-y_{1},j_{p}^{X}(x_{1})-j_{p}^{X}(y_{1}) \bigr\rangle + \bigl\langle x_{2}-y_{2},j_{p}^{Y}(x_{2})-j_{p}^{Y}(y_{2}) \bigr\rangle \\& \quad \ge g_{1}\bigl(\Vert x_{1}-y_{1}\Vert \bigr) + g_{2}\bigl(\Vert x_{2}-y_{2}\Vert \bigr), \end{aligned}$$

where \(g_{1}\), \(g_{2}\) are strictly increasing continuous and convex functions on \(\mathbb{R}^{+}\) and \(g_{1}(0)=g_{2}(0)=0\). It follows that

$$\bigl\langle x-y,j_{p}^{E}(x)-j_{p}^{E}(y) \bigr\rangle \ge g\bigl(\Vert x-y\Vert \bigr), $$

where \(g(\|x-y\|)=g_{1}(\|x_{1}-y_{1}\|) + g_{2}(\|x_{2}-y_{2}\|)\). Hence it follows from Corollary 3 that E is uniformly convex. Since E is uniformly smooth, it is smooth, and hence any duality mapping on E is single-valued.

(b) For arbitrary \(x=[x_{1},x_{2}]\in E\), let \(j_{q}^{E}(x)=j_{q}^{E}[x_{1},x_{2}] = \psi_{q}\). Then \(\psi_{q}=[j_{q}^{X}(x_{1}),j_{q}^{Y}(x_{2})]\in E^{*}\). We have, for \(p>1\) such that \(1/p + 1/q=1\),

$$\begin{aligned} \Vert \psi_{q}\Vert _{E^{*}} =&\bigl\Vert \bigl[j_{q}^{X}(x_{1}),j_{q}^{Y}(x_{2}) \bigr] \bigr\Vert _{E^{*}}=\bigl(\bigl\Vert j_{q}^{X}(x_{1}) \bigr\Vert ^{p}_{X^{*}} + \bigl\Vert j_{q}^{Y}(x_{2}) \bigr\Vert ^{p}_{Y^{*}}\bigr)^{1/p} \\ =&\bigl(\Vert x_{1}\Vert _{X}^{(q-1)p} + \Vert x_{2}\Vert _{Y}^{(q-1)p}\bigr)^{1/p}=\bigl( \Vert x_{1}\Vert ^{q}_{X} + \Vert x_{2}\Vert ^{q}_{Y}\bigr)^{(q-1)/q} \\ =&\Vert x\Vert _{E}^{q-1}. \end{aligned}$$

Hence, \(\|\psi_{q}\|_{E^{*}}=\|x\|_{E}^{q-1}\). Furthermore,

$$\begin{aligned} \langle x,\psi_{q}\rangle =& \bigl\langle [x_{1},x_{2}], \bigl[j_{q}^{X}(x_{1}),j_{q}^{Y}(x_{2}) \bigr]\bigr\rangle = \bigl\langle x_{1},j_{q}^{X}(x_{1}) \bigr\rangle + \bigl\langle x_{2},j_{q}^{Y}(x_{2}) \bigr\rangle \\ =&\Vert x_{1}\Vert ^{q}_{X} + \Vert x_{2}\Vert ^{q}_{Y} = \bigl(\Vert x_{1}\Vert _{X}^{q} + \Vert x_{2} \Vert ^{q}_{Y} \bigr)^{1/q}\bigl(\Vert x_{1}\Vert _{X}^{q} + \Vert x_{2} \Vert ^{q}_{Y}\bigr)^{(q-1)/q} \\ =& \Vert x\Vert _{E} \cdot \Vert \psi_{q} \Vert _{E^{*}}. \end{aligned}$$

Hence, \(j_{q}^{E}\) is a single-valued duality mapping on E. □

The following lemma will be needed in the sequel.

Lemma 6.4

(Browder [73])

Let X be a strictly convex reflexive Banach space with a strictly convex conjugate space \(X^{*}\), \(T_{1}\) a maximal monotone mapping from X to \(X^{*}\), \(T_{2}\) a hemicontinuous monotone mapping of all of X into \(X^{*}\) which carries bounded subsets of X into bounded subsets of \(X^{*}\). Then the mapping \(T=T_{1}+T_{2}\) is a maximal monotone map of X into \(X^{*}\).

Using Lemma 6.4, we prove the following important lemma which will be used in the sequel.

Lemma 6.5

Let E be a Banach space. Let \(F:E\rightarrow E^{*}\) and \(K:E^{*}\rightarrow E\) be bounded and maximal monotone mappings with \(D(F)=E\) and \(D(K)=E^{*}\). Let \(T:E\times E^{*}\rightarrow E^{*}\times E\) be defined by

$$T[u,v]=[Ju-Fu+v,J_{*}v-Kv-u]\quad \textit{for all }(u,v)\in E\times E^{*}, $$

then the mapping \(A:=(J-T)\) is maximal monotone.

Proof

We show that the mapping \(A=(J-T):E\times E^{*}\rightarrow E^{*}\times E\) defined as

$$A[u,v]=[Fu-v,Kv+u] $$

is maximal monotone. Let \(S,T:E\times E^{*}\rightarrow E^{*}\times E\) be defined as

$$S[u,v]=[Fu,Kv], \qquad T[u,v]=[-v,u]. $$

Then \(A=S+T\). It suffices to show S, T are maximal monotone.

Observe that S is monotone. Let \(h=[h_{1},h_{2}]\in E^{*}\times E\) and let \(\lambda>0\). Since F and K are maximal monotone, take \(u=(J+\lambda F)^{-1}h_{1}\) and \(v=(J_{*}+\lambda K)^{-1}h_{2}\). Then \((J+\lambda S)w=h\), where \(w=[u,v]\); hence, S is maximal monotone.
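
Indeed, writing \(J[u,v]=[Ju,J_{*}v]\) for the duality map on \(E\times E^{*}\) (cf. Lemma 6.3), the verification is componentwise:

$$(J+\lambda S)[u,v]=\bigl[Ju+\lambda Fu, J_{*}v+\lambda Kv\bigr]=[h_{1},h_{2}]=h. $$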

Clearly, T is bounded and monotone. Furthermore, it is continuous and hence hemicontinuous. Therefore, by Lemma 6.4, \(A=S+T\) is maximal monotone. □

Lemma 6.6

Let E be a uniformly convex and uniformly smooth real Banach space. Let \(F:E\rightarrow E^{*}\) and \(K:E^{*}\rightarrow E\) be monotone mappings with \(D(F)=E\) and \(D(K)=E^{*}\). Let \(T:E\times E^{*}\rightarrow E^{*}\times E\) be defined by \(T[u,v]=[Ju-Fu+v,J_{*}v-Kv-u]\) for all \((u,v)\in E\times E^{*}\). Then T is J-pseudocontractive. Moreover, if the Hammerstein equation \(u+KFu=0\) has a solution in E, then \(u^{*}\) is a solution of \(u+KFu=0\) if and only if \((u^{*},v^{*})\in F_{E}^{J}(T)\), where \(v^{*}=Fu^{*}\).

Proof

Using the monotonicity of F and K, we easily obtain \(\langle Tw_{1}-Tw_{2},w_{1}-w_{2}\rangle\le\langle Jw_{1}-Jw_{2},w_{1}-w_{2}\rangle\) for all \(w_{1}=[u_{1},v_{1}], w_{2}=[u_{2},v_{2}]\in E\times E^{*}\).
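
For completeness, the computation is as follows (the duality pairing on \(E\times E^{*}\) is taken componentwise, so the cross terms cancel):

$$\begin{aligned} \bigl\langle (Jw_{1}-Tw_{1})-(Jw_{2}-Tw_{2}),w_{1}-w_{2}\bigr\rangle =& \bigl\langle Fu_{1}-Fu_{2}-(v_{1}-v_{2}),u_{1}-u_{2}\bigr\rangle \\ &{}+ \bigl\langle Kv_{1}-Kv_{2}+(u_{1}-u_{2}),v_{1}-v_{2}\bigr\rangle \\ =& \langle Fu_{1}-Fu_{2},u_{1}-u_{2}\rangle + \langle Kv_{1}-Kv_{2},v_{1}-v_{2}\rangle \ge0, \end{aligned}$$

which is equivalent to the asserted inequality.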

Moreover, we observe that

$$\begin{aligned}& T\bigl(u^{*},v^{*}\bigr)=J\bigl(u^{*},v^{*}\bigr) \\& \quad \iff\quad \bigl[Ju^{*}-Fu^{*}+v^{*},J_{*}v^{*}-Kv^{*}-u^{*}\bigr]=\bigl[Ju^{*},J_{*}v^{*}\bigr] \\& \quad \iff\quad Ju^{*}-Fu^{*}+v^{*}=Ju^{*} \quad \text{and}\quad J_{*}v^{*}-Kv^{*}-u^{*}=J_{*}v^{*} \\& \quad \iff\quad v^{*}=Fu^{*} \quad \text{and} \quad u^{*}+Kv^{*}=0\quad \iff\quad u^{*}+KFu^{*}=0. \end{aligned}$$

 □

We now prove the following theorem.

Theorem 6.7

Let E be a uniformly smooth and uniformly convex real Banach space and let \(F:E\rightarrow E^{*}\) and \(K:E^{*}\rightarrow E\) be bounded maximal monotone maps. For \((x_{1},y_{1}), (u_{1},v_{1})\in E\times E^{*}\), define the sequences \(\{u_{n}\}\) and \(\{v_{n}\}\) in E and \(E^{*}\), respectively, by

$$\begin{aligned}& u_{n+1}=J^{-1} \bigl[Ju_{n}- \lambda_{n}(Fu_{n}-v_{n})-\lambda_{n} \theta _{n}(Ju_{n}-Jx_{1}) \bigr],\quad n\geq1, \end{aligned}$$
(6.8)
$$\begin{aligned}& v_{n+1}=J^{-1}_{*} \bigl[J_{*}v_{n}- \lambda_{n}(Kv_{n}+u_{n})-\lambda_{n} \theta _{n}(J_{*}v_{n}-J_{*}y_{1}) \bigr],\quad n\geq1. \end{aligned}$$
(6.9)

Assume that the equation \(u+KFu=0\) has a solution. Then the sequences \(\{u_{n}\} _{n=1}^{\infty}\) and \(\{v_{n}\}_{n=1}^{\infty}\) converge strongly to \(u^{*}\) and \(v^{*}\), respectively, where \(u^{*}\) is the solution of \(u+KFu=0\) with \(v^{*}=Fu^{*}\).

Proof

From Lemma 6.6 we see that \(T:E\times E^{*}\rightarrow E^{*}\times E\) defined by \(T[u,v]=[Ju-Fu+v,J_{*}v-Kv-u]\) for all \((u,v)\in E\times E^{*}\) is J-pseudocontractive, and \(A:=(J-T)\) is maximal monotone.

We apply Theorem 3.4 with \(X=E\times E^{*}\); by Lemma 6.3, X is uniformly convex and uniformly smooth. The recursion formula of Theorem 3.4 then reduces to (6.8) and (6.9), and the proof follows. □
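
As a purely illustrative numerical sketch (not part of the original development), the recursion (6.8)-(6.9) can be run in the Hilbert-space setting \(E=E^{*}=\mathbb{R}^{2}\), where J, \(J_{*}\), \(J^{-1}\) and \(J^{-1}_{*}\) all reduce to the identity. The toy maps F and K (the same as in the sketch following (6.7)) and the parameter choices \(\lambda_{n}\), \(\theta_{n}\) below are assumptions made only for this example.

    import numpy as np

    # Toy instance of (6.8)-(6.9) with E = E* = R^2, so all duality maps are the identity.
    B = np.array([[2.0, 0.5], [0.5, 1.5]])
    b = np.array([1.0, -1.0])
    C = np.array([[1.0, 0.2], [0.2, 2.0]])
    F = lambda u: B @ u + b            # F : E -> E*
    K = lambda v: C @ v                # K : E* -> E

    u_star = -np.linalg.solve(np.eye(2) + C @ B, C @ b)   # exact solution of u + KFu = 0

    x1 = np.zeros(2)                   # anchor pair (x_1, y_1)
    y1 = np.zeros(2)
    u = np.array([3.0, 3.0])           # initial pair (u_1, v_1)
    v = np.array([0.0, 0.0])
    for n in range(1, 50001):
        lam = 1.0 / (n + 1)            # sample choice of lambda_n
        theta = 1.0 / (n + 1) ** 0.75  # sample choice of theta_n -> 0
        u, v = (u - lam * (F(u) - v) - lam * theta * (u - x1),   # (6.8) with J = I
                v - lam * (K(v) + u) - lam * theta * (v - y1))   # (6.9) with J_* = I

    print("u_n:", u, "  u*:", u_star)
    print("v_n:", v, "  Fu*:", F(u_star))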

7 Application to convex optimization problem

The following lemma is well known (see, e.g., [74], p.23, for a similar proof in the Hilbert space case).

Lemma 7.1

Let X be a normed space. Let \(f:X\rightarrow\mathbb{R}\) be a convex function that is bounded on bounded subsets of X. Then the subdifferential \(\partial f:X\rightarrow2^{X^{*}}\) is bounded on bounded subsets of X.

We now prove the following strong convergence theorem.

Theorem 7.2

Let E be a uniformly convex and uniformly smooth real Banach space with dual \(E^{*}\). Let \(f:E\rightarrow(-\infty,\infty]\) be a lower semi-continuous, Fréchet differentiable convex functional which is bounded on bounded subsets of E and such that \((\partial f)^{-1}0\ne\emptyset\). For given \(u,x_{1}\in E\), let \(\{x_{n}\}\) be generated by the algorithm

$$ x_{n+1}=J^{-1} \bigl[Jx_{n}-\lambda_{n}( \partial f)x_{n}-\lambda_{n}\theta _{n}(Jx_{n}-Ju) \bigr],\quad n\geq1. $$
(7.1)

Then \(\{x_{n}\}\) converges strongly to some \(x^{*}\in(\partial f)^{-1}0\).

Proof

Since f is convex and bounded on bounded subsets of E, from Lemma 7.1 we see that ∂f is bounded. By Rockafellar [75, 76] (see also, e.g., Minty [2], Moreau [77]), \(\partial f\) is a maximal monotone mapping from E into \(E^{*}\), and \(0\in\partial f(v)\) if and only if \(f(v)=\min_{x\in E}f(x)\). Hence, the conclusion follows from Corollary 4.1. □
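
For illustration only, the following Python sketch runs algorithm (7.1) in the Hilbert-space setting \(E=E^{*}=\mathbb{R}^{2}\), where J is the identity. The objective \(f(x)=\frac{1}{2}\|x-c\|^{2}\) (convex, bounded on bounded sets, with \(\partial f(x)=x-c\)), the anchor u, and the parameters \(\lambda_{n}\), \(\theta_{n}\) are all assumptions made purely for this example.

    import numpy as np

    # Toy instance of (7.1): E = E* = R^2, J = identity, f(x) = 0.5*||x - c||^2,
    # so that (partial f)(x) = x - c and the unique minimizer is c.
    c = np.array([1.0, -2.0])
    grad_f = lambda x: x - c

    u = np.zeros(2)            # anchor vector u
    x = np.array([5.0, 5.0])   # x_1
    for n in range(1, 20001):
        lam = 1.0 / (n + 1)              # sample choice of lambda_n
        theta = 1.0 / (n + 1) ** 0.75    # sample choice of theta_n -> 0
        x = x - lam * grad_f(x) - lam * theta * (x - u)   # recursion (7.1) with J = I

    print("x_n      :", x)
    print("minimizer:", c)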

Remark 6

The analytical representations of duality mappings are known in a number of Banach spaces. For instance, in the spaces \(L^{p}(G)\) and \(W^{p}_{m}(G)\), \(p \in(1,\infty)\), we have, respectively,

$$Jx=\|x\|_{L^{p}}^{2-p}\bigl\vert x(s)\bigr\vert ^{p-2}x(s) \in L^{q}(G),\quad s\in G, $$

and

$$Jx=\|x\|_{W_{m}^{p}}^{2-p} \sum_{ |\alpha| \leq m }(-1)^{|\alpha |}D^{\alpha } \bigl(\bigl\vert D^{\alpha}x(s)\bigr\vert ^{p-2}D^{\alpha}x(s) \bigr) \in W_{-m}^{q}(G),\quad m>0, s\in G, $$

where \(p^{-1}+q^{-1}=1\). (See, e.g., Alber and Ryazantseva [23], p.36.)
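
For readers who wish to implement the recursion formulas, the following short Python sketch evaluates the \(L^{p}\) formula above in the finite-dimensional space \((\mathbb{R}^{m},\|\cdot\|_{p})\), a discrete analogue used only for illustration; since \(J^{-1}=J_{*}\), the inverse map is obtained by applying the same formula with the conjugate exponent q.

    import numpy as np

    def duality_map(x, p):
        # Normalized duality map on (R^m, ||.||_p), 1 < p < infinity:
        # Jx = ||x||_p^(2-p) * |x|^(p-2) * x (componentwise), with J(0) = 0.
        # Zero components are handled by the convention 0^(p-2) * 0 = 0.
        norm = np.linalg.norm(x, ord=p)
        if norm == 0.0:
            return np.zeros_like(x)
        out = np.zeros_like(x)
        nz = x != 0
        out[nz] = np.abs(x[nz]) ** (p - 2) * x[nz]
        return norm ** (2 - p) * out

    p = 3.0
    q = p / (p - 1)                      # conjugate exponent: 1/p + 1/q = 1
    x = np.array([1.0, -2.0, 0.5])
    jx = duality_map(x, p)

    # Sanity checks: <x, Jx> = ||x||_p^2, ||Jx||_q = ||x||_p, and J^{-1}(Jx) = x.
    print(np.dot(x, jx), np.linalg.norm(x, ord=p) ** 2)
    print(np.linalg.norm(jx, ord=q), np.linalg.norm(x, ord=p))
    print(duality_map(jx, q), x)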

8 Conclusion

Let E be a uniformly convex and uniformly smooth real Banach space with dual \(E^{*}\). Approximation of zeros of accretive-type maps of E to itself, assuming existence, has been studied extensively within the past 40 years or so (see, e.g., Agarwal et al. [17]; Berinde [4]; Chidume [6]; Reich [18]; Censor and Reich [19]; William and Shahzad [20], and the references therein). The key tool in these investigations has been the study of fixed points of pseudocontractive-type maps.

Unfortunately, for approximating zeros of monotone-type maps from E to \(E^{*}\), the normal fixed point technique is not applicable. This motivated the study of the notion of J-pseudocontractive maps introduced in this paper. The main result of this paper is Theorem 3.5 which provides an easily applicable iterative sequence that converges strongly to a J-fixed point of T, where \(T:E\rightarrow2^{E^{*}}\) is a J-pseudocontractive and bounded map such that \(J-T\) is maximal monotone. The two parameters in the recursion formula of the theorem, \(\theta_{n}\) and \(\lambda_{n}\), are easily chosen in any possible application of the theorem (see Example 4 above).

The theorem is, in particular, applicable in \(L_{p}\) and \(l_{p}\) spaces, \(1< p<\infty\). In these spaces, the normalized duality maps J and \(J^{-1}\) which appear in the recursion formula of the theorem are precisely known (see Remark 6 above).

Consequently, although the proof of the theorem is technical and nontrivial, the simple choice of the iteration parameters and the exact explicit formulas for J and \(J^{-1}\) make the recursion formula of the theorem, which does not involve the resolvent operator \((J+\lambda A)^{-1}\), extremely attractive and user friendly.

Theorem 3.5 is applicable in numerous situations. In this paper, it has been applied to approximate a zero of a bounded maximal monotone map \(A:E\rightarrow2^{E^{*}}\) with \(A^{-1}(0)\ne\emptyset\).

Furthermore, the theorem complements the proximal point algorithm by providing strong convergence to a zero of a maximal monotone operator A without involving the resolvent \(J_{r}:=(J+rA)^{-1}\) in the recursion formula. In addition, it is applied to approximate solutions of Hammerstein integral equations and solutions of convex optimization problems. Theorem 3.5 continues to be applicable in approximating solutions of nonlinear equations. It has recently been applied to approximate a common zero of an infinite family of J-nonexpansive maps, \(T_{i}: E\rightarrow2^{E^{*}}\), \(i\ge1\) (see Chidume et al. [78]). In the case that \(E=H\) is a real Hilbert space, the result obtained in Chidume et al. [78] is a significant improvement of important known results. We strongly believe that the results of this paper will continue to be applied to approximate solutions of equilibrium problems in nonlinear operator theory.