1 Introduction

Let E be a real normed space with dual space \(E^{*}\), and C be a nonempty closed and convex subset of E. The variational inequality problem is to find an element \(v\in C\) such that

$$ \langle y-v,fv\rangle\ge0,\quad \forall y\in C, $$
(1.1)

where \(f:E\rightarrow E^{*}\). The solution set of this variational inequality problem will be denoted by \(\mathrm{VI}(f, C)\). This problem has numerous applications in many areas of mathematics, such as partial differential equations, optimal control, optimization, mathematical programming, and other nonlinear problems (see, for example, [1] and the references therein). The map f is called K-Lipschitz and monotone if

$$ \bigl\Vert f(x)-f(y) \bigr\Vert \le K \Vert x-y \Vert , \quad\forall x,y\in E, $$

and

$$ \bigl\langle x-y,f(x)-f(y) \bigr\rangle \ge0,\quad \forall x,y\in E, $$

respectively, where \(K>0\) is the Lipschitz constant. The map f is called η-strongly monotone if there exists \(\eta>0\) such that

$$ \bigl\langle x-y,f(x)-f(y) \bigr\rangle \ge\eta \Vert x-y \Vert ^{2},\quad \forall x,y\in E. $$
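In finite dimensions these definitions are easy to test numerically. The following is a minimal illustrative sketch (the map \(f(x)=Ax+b\) with \(A=M^{T}M\) positive semidefinite is a hypothetical example, not from the paper); such an f is monotone, with Lipschitz constant \(K=\Vert A\Vert_{2}\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: f(x) = Ax + b with A = M^T M positive semidefinite.
M = rng.standard_normal((4, 4))
A = M.T @ M
b = rng.standard_normal(4)
f = lambda x: A @ x + b
K = np.linalg.norm(A, 2)   # Lipschitz constant: the spectral norm of A

for _ in range(1000):
    x, y = rng.standard_normal((2, 4))
    # monotonicity: <x - y, f(x) - f(y)> >= 0
    assert (x - y) @ (f(x) - f(y)) >= -1e-9
    # K-Lipschitz continuity: ||f(x) - f(y)|| <= K ||x - y||
    assert np.linalg.norm(f(x) - f(y)) <= K * np.linalg.norm(x - y) + 1e-9
```

Note that f is η-strongly monotone (with η the smallest eigenvalue of A) only when A is positive definite; a skew-symmetric A gives a map that is monotone but not strongly monotone.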

In the case that E is a real Hilbert space H, some authors have proposed and analyzed several iterative methods for solving the variational inequality problem (1.1). The simplest of them is the following projection method given by

$$ \textstyle\begin{cases} x^{1}\in H,\\ x^{k+1}=P_{C} (x^{k}-\tau f(x^{k}) ),\quad\forall k\ge1, \end{cases} $$
(1.2)

where f is Lipschitz and η-strongly monotone with \(\tau\in (0,\frac{2\eta}{K^{2}} )\). Yao et al. [18] showed that the projection gradient method (1.2) may not converge if the strong monotonicity assumption is relaxed to plain monotonicity. To overcome this difficulty, Korpelevich [14] proposed the following extragradient method:

$$ \textstyle\begin{cases} x^{1}\in H,\\ y^{k}=P_{C} (x^{k}-\tau f(x^{k}) ),\\ x^{k+1}=P_{C} (x^{k}-\tau f(y^{k}) ), \end{cases} $$
(1.3)

for each \(k\ge1\), which converges if f is monotone and Lipschitz. However, the weakness of this extragradient method is that one needs to calculate two projections onto C in each iteration process. It is known that if C is a general closed and convex set, this iteration process might require a huge amount of computation time. To overcome this difficulty, Censor et al. [6] introduced the subgradient extragradient method given by

$$ \textstyle\begin{cases} x^{0}\in H,\\ y^{k}=P_{C} (x^{k}-\tau f(x^{k}) ),\\ T_{k}=\{w\in H:\langle x^{k}-\tau f(x^{k})-y^{k},w-y^{k}\rangle\le0 \},\\ x^{k+1}=P_{T_{k}} (x^{k}-\tau f(y^{k}) ),\quad\forall k\ge0, \end{cases} $$
(1.4)

replacing one of the projections onto C in the extragradient method by a projection onto a specific, easily constructible subgradient half-space \(T_{k}\). This method has a computational advantage over the extragradient method proposed by Korpelevich [14] (see, e.g., Censor et al. [5], Dong et al. [9] and the references therein). They proved the following theorem in a real Hilbert space.
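For intuition, the subgradient extragradient method (1.4) can be run directly in \(\mathbb{R}^{2}\). The sketch below uses hypothetical data (not from the paper): \(f(x)=Ax+b\) with A skew-symmetric, so that f is monotone and 1-Lipschitz but not strongly monotone, and \(C=[-4,4]^{2}\), chosen large enough that the unconstrained zero \((1,-1)\) of f is the unique VI solution. The projection onto the half-space \(T_{k}\) has the closed form used in `proj_halfspace`:

```python
import numpy as np

# Hypothetical data: f(x) = Ax + b with A skew-symmetric, so f is monotone
# and 1-Lipschitz but NOT strongly monotone (the plain projection method
# (1.2) would circle around the solution instead of converging).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([1.0, 1.0])
f = lambda x: A @ x + b                  # unique VI solution on C: (1, -1)
P_C = lambda v: np.clip(v, -4.0, 4.0)    # C = [-4, 4]^2
tau = 0.2                                # step size, tau < 1/K = 1

def proj_halfspace(v, a, y):
    """Project v onto T = {w : <a, w - y> <= 0}; a = 0 gives all of H."""
    s = a @ a
    if s < 1e-30:
        return v
    return v - (max(0.0, a @ (v - y)) / s) * a

x = np.array([3.0, 0.0])
for _ in range(2000):
    y = P_C(x - tau * f(x))              # y^k = P_C(x^k - tau f(x^k))
    a = (x - tau * f(x)) - y             # normal vector defining T_k
    x = proj_halfspace(x - tau * f(y), a, y)   # x^{k+1} = P_{T_k}(...)

assert np.allclose(x, [1.0, -1.0], atol=1e-8)
```

With this choice of C the box constraint never actually clips the iterates, so \(T_{k}\) is the whole space; shrinking the box activates both projections while the same code still applies.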

Theorem 1.1

(Censor et al., [6])

Assume that f is monotone, Lipschitz and \(\mathrm{VI}(f,C)\neq\emptyset\), with \(\tau<\frac{1}{K}\). Then any sequences \(\{x^{k}\}_{k=0}^{\infty}\) and \(\{y^{k}\}_{k=0}^{\infty}\) generated by (1.4) weakly converge to the same solution \(u^{*}\in \mathrm{VI}(f,C)\) and, furthermore, \(u^{*}=\lim_{k\rightarrow\infty} P_{\mathrm{VI}(f,C)}x^{k}\).

In addition, they introduced a modified subgradient extragradient method, involving a nonexpansive map \(S:H\rightarrow H\) and a sequence \(\{\alpha_{k}\}\subset[0,1]\), as follows:

$$ \textstyle\begin{cases} x^{0}\in H,\\ y^{k}=P_{C} (x^{k}-\tau f(x^{k}) ),\\ T_{k}=\{w\in H:\langle x^{k}-\tau f(x^{k})-y^{k},w-y^{k}\rangle\le0 \},\\ x^{k+1}=\alpha_{k} x^{k}+(1-\alpha_{k})SP_{T_{k}} (x^{k}-\tau f(y^{k}) ),\quad\forall k\ge0, \end{cases} $$
(1.5)

and proved the following theorem in a real Hilbert space.

Theorem 1.2

(Censor et al., [6])

Assume that f is monotone, Lipschitz and \(\mathrm{VI}(f,C)\cap \operatorname{Fix}(S)\neq\emptyset\), with \(\tau<\frac {1}{K}\). Then any sequences \(\{x^{k}\}\) and \(\{y^{k}\}\) generated by (1.5) weakly converge to the same solution \(u^{*}\in \mathrm{VI}(f,C)\cap \operatorname{Fix}(S)\) and, furthermore, \(u^{*}=\lim_{k\rightarrow\infty} P_{\mathrm{VI}(f,C)\cap \operatorname{Fix}(S)}x^{k}\).

Developing algorithms for solving variational inequality problems has continued to attract the interest of numerous researchers in nonlinear operator theory. The reader may see the following important related papers (Gang et al. [11], Anh and Hieu [3], Anh and Hieu [4], Dong et al. [10] and the references contained in them).

Motivated by the result of Censor et al. [6], we propose in this paper a Krasnoselskii-type subgradient extragradient algorithm and prove a weak convergence theorem for obtaining a common element of solutions of variational inequality problems and common fixed points for a countable family of relatively-nonexpansive maps in a uniformly smooth and 2-uniformly convex real Banach space. Our theorem is an improvement of the result of Censor et al. [6], and a host of other results (see Sect. 5 below).

2 Methods

The paper is organized as follows. Section 3 contains the preliminaries to include definitions and lemmas with corresponding references that will be used in the sequel. Section 4 contains the main result of the paper. In Sect. 5, we compare our theorems with important recent results in the literature and, thereafter, conclude our findings.

3 Preliminaries

Let E be a real normed space with dual space \(E^{*}\). We write \(x_{k}\rightharpoonup x^{*}\) and \(x_{k}\rightarrow x^{*}\) to indicate that the sequence \(\{x_{k}\}\) converges weakly to \(x^{*}\) and converges strongly to \(x^{*}\), respectively.

A map \(J: E\rightarrow2^{E^{*}}\) defined by \(J(x):= \{x^{*}\in E^{*}: \langle x,x^{*}\rangle= \Vert x \Vert ^{2}= \Vert x^{*} \Vert ^{2} \}\) is called the normalized duality map on E. The following properties of the duality map will be needed in the sequel (see, e.g., Chidume [7], Cioranescu [8] and the references contained in them):

  1. (1)

    If E is a reflexive, strictly convex, and smooth real Banach space, then J is surjective, injective, and single-valued.

  2. (2)

    If E is uniformly smooth, then J is uniformly continuous on bounded subsets of E.

  3. (3)

    If \(E=H\), a real Hilbert space, then J is the identity map on H.

Remark 1

J is said to be weakly sequentially continuous if, for any sequence \(\{x_{k}\} \subset E\) with \(x_{k}\rightharpoonup x^{*}\) as \(k\rightarrow\infty\), we have \(Jx_{k}\rightharpoonup Jx^{*}\) as \(k\rightarrow\infty\). It is known that the normalized duality map on the \(l_{p}\) spaces, \(1< p<\infty\), is weakly sequentially continuous.

Let E be a smooth real Banach space and \(\phi: E\times E\rightarrow\mathbb{R}\) be a map defined by \(\phi(x,y)= \Vert x \Vert ^{2}-2\langle x,Jy\rangle+ \Vert y \Vert ^{2}\) for all \(x,y \in E\). This map was introduced by Alber [1] and has been extensively studied by a host of other authors. It is easy to see from the definition of ϕ that, if \(E=H\), a real Hilbert space, then \(\phi(x,y)= \Vert x-y \Vert ^{2}\) for all \(x,y\in H\). Furthermore, for any \(x,y,z\in E\) and \(\beta\in(0,1)\), we have the following properties.

(\(P_{1}\)):

\(( \Vert x \Vert - \Vert y \Vert )^{2}\le\phi(x,y)\le ( \Vert x \Vert + \Vert y \Vert )^{2}, \forall x,y \in E\).

(\(P_{2}\)):

\(\phi(x,z)=\phi(x,y)+\phi(y,z)+2\langle y-x,Jz-Jy\rangle \).

(\(P_{3}\)):

\(\phi (x,J^{-1} (\beta Jy +(1-\beta)Jz ) )\le \beta\phi (x,y)+(1-\beta)\phi(x,z)\).
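When \(E=H\) is a real Hilbert space, J is the identity and \(\phi(x,y)=\Vert x-y\Vert^{2}\), so (\(P_{1}\)) and (\(P_{2}\)) reduce to elementary identities. A quick numerical check (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
# In a real Hilbert space: phi(x, y) = ||x - y||^2 and J = identity.
phi = lambda x, y: np.linalg.norm(x - y) ** 2

for _ in range(1000):
    x, y, z = rng.standard_normal((3, 5))
    # (P2): phi(x,z) = phi(x,y) + phi(y,z) + 2<y - x, z - y>   (Jz - Jy = z - y)
    assert abs(phi(x, z) - (phi(x, y) + phi(y, z) + 2 * (y - x) @ (z - y))) < 1e-9
    # (P1): (||x|| - ||y||)^2 <= phi(x,y) <= (||x|| + ||y||)^2
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    assert (nx - ny) ** 2 - 1e-9 <= phi(x, y) <= (nx + ny) ** 2 + 1e-9
```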

Definition 3.1

Let C be a nonempty closed and convex subset of a real Banach space E and T be a map from C to E.

  1. (a)

    \(x^{*}\) is called an asymptotic fixed point of T if there exists a sequence \(\{x_{k}\}\subset C\) such that \(x_{k}\rightharpoonup x^{*}\) and \(\Vert Tx_{k}-x_{k} \Vert \rightarrow0\), as \(k\rightarrow\infty\). We shall denote the set of asymptotic fixed points of T by \(\widehat {F}(T)\).

  2. (b)

    T is called relatively nonexpansive if \(F(T)=\widehat {F}(T)\ne\emptyset\) and \(\phi(p,Tx)\le\phi(p,x)\) for all \(x\in C, p\in F(T)\), where \(F(T)\) denotes the set of fixed points of T.

Definition 3.2

(Rockafellar, [16])

The normal cone of C at \(v\in C\) denoted by \(N_{C}(v)\) is given by \(N_{C}(v):=\{w\in E^{*}:\langle y-v,w\rangle\le 0, \forall y\in C\}\).

Definition 3.3

A map \(T:E\rightarrow2^{E^{*}}\) is called monotone if \(\langle \eta_{x}-\eta_{y},x-y\rangle\ge0, \forall x,y \in E\) and \(\eta_{x}\in Tx, \eta_{y}\in Ty\). Furthermore, T is maximal monotone if it is monotone and its graph \(G(T):=\{(x,y)\in E\times E^{*}: y\in T(x)\}\) is not properly contained in the graph of any other monotone operator.

Definition 3.4

A convex feasibility problem is a problem of finding a point in the intersection of convex sets.

Lemma 3.5

(Rockafellar, [16])

Let C be a nonempty closed and convex subset of a reflexive Banach space E. Let \(f:C\rightarrow E^{*}\) be a monotone and hemicontinuous map and \(T\subset E\times E^{*}\) be a map defined by

$$ Tv= \textstyle\begin{cases} f(v)+N_{C}(v)& \textit{if } v\in C,\\ \emptyset& \textit{if } v\notin C. \end{cases} $$

Then T is maximal monotone and \(0\in Tv\) if and only if \(v\in \mathrm{VI}(f,C)\).

Remark 2

It is known that a monotone map T is maximal if given \((x,y)\in E\times E^{*}\) and if \(\langle x-u, y-v\rangle\ge0, \forall (u,v)\in G(T)\), then \(y\in Tx\).

Lemma 3.6

(Matsushita and Takahashi, [15])

Let E be a smooth, strictly convex, and reflexive Banach space and C be a nonempty closed convex subset of E. Then the following hold:

  1. (1)

    \(\phi (x,\Pi_{C}y) +\phi(\Pi_{C}y,y)\le\phi(x,y), \forall x\in C, y\in E\).

  2. (2)

    \(z=\Pi_{C}x\iff\langle z-y,Jx-Jz\rangle\ge 0, \forall y\in C\).

Lemma 3.7

(Kamimura and Takahashi, [12])

Let E be a uniformly convex and uniformly smooth real Banach space and \(\{x_{n}\}_{n=1}^{\infty}, \{y_{n}\} _{n=1}^{\infty}\) be sequences in E such that either \(\{x_{n}\} _{n=1}^{\infty}\) or \(\{y_{n}\}_{n=1}^{\infty}\) is bounded. If \(\lim_{n\rightarrow\infty}\phi(x_{n},y_{n})=0\), then \(\lim_{n\rightarrow \infty} \Vert x_{n}-y_{n} \Vert =0\).

Lemma 3.8

(Xu, [17])

Let E be a uniformly convex real Banach space. Let \(r>0\). Then there exists a strictly increasing continuous and convex function \(g:[0,\infty)\rightarrow[0,\infty)\) such that \(g(0)=0\) and the following inequality holds:

$$ \bigl\Vert \lambda x + (1-\lambda)y \bigr\Vert ^{2}\le\lambda \Vert x \Vert ^{2} + (1-\lambda) \Vert y \Vert ^{2} - \lambda(1-\lambda)g \bigl( \Vert x -y \Vert \bigr),\quad \textit{for all }x,y \in B_{r}(0), $$

where \(B_{r}(0):=\{v\in E: \Vert v \Vert \le r\}\) and \(\lambda\in[0,1]\).
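In a real Hilbert space, the inequality of Lemma 3.8 holds as an identity with \(g(t)=t^{2}\), namely \(\Vert \lambda x+(1-\lambda)y\Vert^{2}=\lambda\Vert x\Vert^{2}+(1-\lambda)\Vert y\Vert^{2}-\lambda(1-\lambda)\Vert x-y\Vert^{2}\). A quick numerical check of this special case (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hilbert-space case of Lemma 3.8, where g(t) = t^2 gives an exact identity:
# ||l x + (1-l) y||^2 = l ||x||^2 + (1-l) ||y||^2 - l (1-l) ||x - y||^2.
for _ in range(1000):
    x, y = rng.standard_normal((2, 5))
    lam = rng.uniform()
    lhs = np.linalg.norm(lam * x + (1 - lam) * y) ** 2
    rhs = (lam * np.linalg.norm(x) ** 2 + (1 - lam) * np.linalg.norm(y) ** 2
           - lam * (1 - lam) * np.linalg.norm(x - y) ** 2)
    assert abs(lhs - rhs) < 1e-9
```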

Lemma 3.9

(Xu, [17])

Let E be a 2-uniformly convex real Banach space. Then there exists a constant \(c_{2}>0\) such that, for every \(x,y\in E\),

$$ c_{2} \Vert x-y \Vert ^{2}\le\langle x-y,jx-jy\rangle, \quad\forall jx\in Jx, jy\in Jy. $$

Lemma 3.10

(Xu, [17])

Let E be a 2-uniformly convex and smooth real Banach space. Then, for any \(x,y\in E\) and for some \(\alpha>0\),

$$ {\alpha} \Vert x-y \Vert ^{2}\le\phi(x,y). $$

Without loss of generality, we may assume \(\alpha\in(0,1)\).

Lemma 3.11

(Kohsaka and Takahashi, [13])

Let C be a closed convex subset of a uniformly convex and uniformly smooth Banach space E. Let \(T_{i}: C\rightarrow E, i=1,2,\ldots \) , be a countable sequence of relatively nonexpansive maps such that \(\bigcap_{i=1}^{\infty}F(T_{i})\neq\emptyset\). Suppose that \(\{\alpha_{i}\}\subset(0,1)\) and \(\{\beta_{i}\} _{i=1}^{\infty}\subset(0,1)\) are sequences such that \(\sum_{i=1}^{\infty }\alpha _{i}=1\) and \(U: C\rightarrow E\) is defined by

$$ Ux:=J^{-1} \Biggl(\sum_{i=1}^{\infty} \alpha_{i} \bigl(\beta _{i}Jx+(1-\beta_{i})JT_{i}x \bigr) \Biggr)\quad\textit{for each }x \in C, $$

then U is relatively nonexpansive and \(F(U)=\bigcap_{i=1}^{\infty}F(T_{i})\).

4 Main result

In the sequel, \(\alpha\in(0,1)\) is the constant appearing in Lemma 3.10.

4.1 The Krasnoselskii-type subgradient extragradient algorithm

Let E be a uniformly smooth and 2-uniformly convex real Banach space with dual space \(E^{*}\). Let C be a nonempty closed and convex subset of E. Let J be the normalized duality map on E.

Algorithm 1

Let \(\{v_{k}\}\) be a sequence generated iteratively by

$$ \textstyle\begin{cases} v_{1}\in E \quad\text{and} \quad\tau>0,\\ y_{k}= \Pi_{C}J^{-1}(Jv_{k}-\tau f(v_{k})), \\ T_{k} = \{w\in E: \langle w-y_{k},(Jv_{k}-\tau f(v_{k}))-Jy_{k} \rangle \le0 \},\\ v_{k+1}=\Pi_{T_{k}}J^{-1} (Jv_{k}-\tau f(y_{k}) ),\quad \forall k\ge1. \end{cases} $$
(4.1)

If \(v_{k}=y_{k}\), we stop. Otherwise, we replace k by \(k+1\) and return to the algorithm.

We shall make the following assumptions.

\({C_{1}}\) :

The map f is monotone on E.

\({C_{2}}\) :

The map f is Lipschitz on E, with constant \(K>0\).

\({C_{3}}\) :

\(\mathrm{VI}(f,C)\neq\emptyset\).

Lemma 4.1

If \(v_{k}=y_{k}\) in Algorithm 1, then \(v_{k}\in \mathrm{VI}(f,C)\).

Proof

If \(v_{k}=y_{k}\), then \(v_{k}= \Pi_{C}J^{-1} (Jv_{k}-\tau f(v_{k}) )\in C\). Furthermore, by the characterization of the generalized projection onto C, we obtain that

$$\begin{aligned} \begin{aligned} & \bigl\langle w-v_{k},Jv_{k}-\tau f(v_{k})-Jv_{k} \bigr\rangle \le0, \quad\forall w\in C\\ &\quad\iff\quad\tau \bigl\langle w-v_{k}, f(v_{k}) \bigr\rangle \ge0,\quad \forall w\in C, \tau>0. \end{aligned} \end{aligned}$$
(4.2)

Hence, \(v_{k}\in \mathrm{VI}(f,C)\). □

The following lemma is crucial for the proof of our main theorem.

Lemma 4.2

Let \(\{v_{k}\}_{k=1}^{\infty}\) be the sequence defined in Algorithm 1. Assume conditions \(C_{1}, C_{2}\), and \(C_{3}\) hold with \(\tau\in(0,\frac{\alpha}{K})\). Then, for any \(v\in \mathrm{VI}(f,C)\), the following inequality holds:

$$ \phi(v,v_{k+1})\le\phi(v,v_{k})- \biggl(1-\frac{\tau K}{\alpha } \biggr)\phi(y_{k},v_{k})- \biggl(1-\frac{\tau K}{\alpha} \biggr) \phi (v_{k+1},y_{k}),\quad \forall k\ge1. $$

Proof

Let \(v\in \mathrm{VI}(f,C)\). By monotonicity of f and the fact that \(y_{k}\in C\), we have \(\langle y_{k}-v,f(y_{k})\rangle\ge\langle y_{k}-v,f(v)\rangle\ge0\). Hence

$$\begin{aligned} \begin{aligned} & \bigl\langle y_{k}-v,f(y_{k}) \bigr\rangle \ge 0,\quad \forall k\ge 1 \\ &\quad\implies\quad \bigl\langle v-v_{k+1},f(y_{k}) \bigr\rangle \le \bigl\langle y_{k}-v_{k+1},f(y_{k}) \bigr\rangle . \end{aligned} \end{aligned}$$
(4.3)

Since \(v_{k+1}\in T_{k}\), we have that \(\langle v_{k+1}-y_{k},Jv_{k}-\tau f(v_{k})-Jy_{k} \rangle\le0, \forall k\ge1\). From the above inequality, we obtain that

$$\begin{aligned} & \bigl\langle v_{k+1}-y_{k},Jv_{k}- \tau f(y_{k})-Jy_{k} \bigr\rangle \\ &\quad= \bigl\langle v_{k+1}-y_{k},Jv_{k}-\tau f(v_{k})-Jy_{k} \bigr\rangle +\tau \bigl\langle v_{k+1}-y_{k}, f(v_{k})-f(y_{k}) \bigr\rangle \\ &\quad\le \tau \bigl\langle v_{k+1}-y_{k},f(v_{k})-f(y_{k}) \bigr\rangle . \end{aligned}$$
(4.4)

Set \(Jz_{k}= Jv_{k}-\tau f(y_{k})\). Then we compute as follows:

$$\begin{aligned} \phi(v,v_{k+1})&\le\phi(v,z_{k})- \phi(v_{k+1},z_{k}) \\ &= \Vert v \Vert ^{2}-2 \bigl\langle v,Jv_{k}-\tau f(y_{k}) \bigr\rangle - \Vert v_{k+1} \Vert ^{2}+2 \bigl\langle v_{k+1},Jv_{k}-\tau f(y_{k}) \bigr\rangle \\ &=\phi(v,v_{k})- \Vert v_{k} \Vert ^{2}+2 \bigl\langle v,\tau f(y_{k}) \bigr\rangle - \Vert v_{k+1} \Vert ^{2}+2 \bigl\langle v_{k+1},Jv_{k}-\tau f(y_{k}) \bigr\rangle \\ &=\phi(v,v_{k})-\phi(v_{k+1},v_{k})+2 \tau \bigl\langle v- v_{k+1}, f(y_{k}) \bigr\rangle . \end{aligned}$$

From inequality (4.3) and property \(P_{2}\), it follows that

$$\begin{aligned} \phi(v,v_{k+1})&\le\phi(v,v_{k})-\phi(v_{k+1},v_{k})+2 \tau \bigl\langle y_{k}- v_{k+1}, f(y_{k}) \bigr\rangle \\ &=\phi(v,v_{k})- \phi(y_{k},v_{k})-\phi(v_{k+1},y_{k})+2 \bigl\langle v_{k+1}-y_{k},Jv_{k}-\tau f(y_{k})-Jy_{k} \bigr\rangle . \end{aligned}$$

From inequality (4.4), it follows that

$$\begin{aligned} \phi(v,v_{k+1})\le\phi(v,v_{k})-\phi(y_{k},v_{k})- \phi (v_{k+1},y_{k})+2\tau \bigl\langle v_{k+1}-y_{k},f(v_{k})-f(y_{k}) \bigr\rangle . \end{aligned}$$

By condition \(C_{2}\) and Lemma 3.10 in the above inequality, it follows that

$$\begin{aligned} \phi(v,v_{k+1})&\le\phi(v,v_{k})-\phi(y_{k},v_{k})- \phi (v_{k+1},y_{k})+2\tau K \Vert v_{k+1}-y_{k} \Vert \Vert v_{k}-y_{k} \Vert \\ &\le\phi(v,v_{k})- \biggl(1-\frac{\tau K}{\alpha} \biggr)\phi (y_{k},v_{k})- \biggl(1-\frac{\tau K}{\alpha} \biggr)\phi(v_{k+1},y_{k}). \end{aligned}$$

This completes the proof. □

Theorem 4.3

Let E be a uniformly smooth and 2-uniformly convex real Banach space with dual space \(E^{*}\). Let C be a nonempty closed and convex subset of E and \(f:E\rightarrow E^{*}\) be a map satisfying conditions \(C_{1}\) and \(C_{2}\) with \(\tau\in(0,\frac {\alpha }{K})\). Assume that condition \(C_{3}\) holds and J is weakly sequentially continuous on E. Then the sequence \(\{v_{k}\}_{k=1}^{\infty}\) generated iteratively by Algorithm 1 converges weakly to some \(v^{*}\in \mathrm{VI}(f,C)\).

Proof

Since \(\mathrm{VI}(f,C)\neq\emptyset\), let \(v\in \mathrm{VI}(f,C)\). Define \(\gamma :=1-\frac{\tau K}{\alpha}\); then \(\gamma\in(0,1)\). By Lemma 4.2, the sequence \(\{\phi(v,v_{k})\}\) is nonincreasing, so \(\lim_{k\rightarrow\infty}\phi(v,v_{k})\) exists, \(\{\phi(y_{k},v_{k})\}\) is bounded, and

$$ \phi(y_{k},v_{k})\le\frac{1}{\gamma} \bigl( \phi(v,v_{k})-\phi (v,v_{k+1}) \bigr),\quad \forall k\ge1. $$

Taking limit of both sides of the above inequality, we have that

$$ \lim_{k\rightarrow\infty} \phi(y_{k},v_{k})=0. $$
(4.5)

By Lemma 3.7, \(\lim_{k\rightarrow\infty} \Vert y_{k}-v_{k} \Vert =0\).

Next, we show that \(\Omega_{\omega}(v_{k})\subset \mathrm{VI}(f,C)\), where \(\Omega _{\omega}(v_{k})\) is the set of weak subsequential limits of \(\{v_{k}\}\). Let \(x^{*}\in\Omega_{\omega}(v_{k})\) and \(\{v_{k_{j}}\}_{j=1}^{\infty}\) be a subsequence of \(\{v_{k}\}_{k=1}^{\infty}\) such that

$$ v_{k_{j}}\rightharpoonup x^{*}\quad\text{and, consequently,}\quad y_{k_{j}}\rightharpoonup x^{*} \quad\text{as } j \rightarrow\infty. $$
(4.6)

Let \(T:E\rightarrow 2^{E^{*}}\) be a map defined by

$$ Tv= \textstyle\begin{cases} fv + N_{C}(v)&\text{if } v\in C,\\ \emptyset&\text{if } v\notin C, \end{cases} $$
(4.7)

where \(N_{C}(v)\) is the normal cone to C at \(v\in C\). Then T is maximal monotone and \({T^{-1}(0)=\mathrm{VI}(f,C)}\) (Rockafellar [16]). Let \((v,w)\in G(T)\), where \(G(T)\) is the graph of T. Then \(w\in Tv=fv +N_{C}(v)\). Hence, we get that \(w-fv\in N_{C}(v)\). This implies that \(\langle v-t,w-fv\rangle\ge0, \forall t\in C\). In particular,

$$ \bigl\langle v-y_{k},w-f(v) \bigr\rangle \ge0. $$
(4.8)

Furthermore, \(y_{k}=\Pi_{C}J^{-1} (Jv_{k}-\tau f(v_{k}) ), \forall k\ge1\). By characterization of the generalized projection map, we obtain that

$$ \bigl\langle y_{k}-v,Jv_{k}-\tau f(v_{k})-Jy_{k} \bigr\rangle \ge 0,\quad \forall v\in C. $$
(4.9)

This implies that

$$ \biggl\langle v-y_{k},\frac{Jy_{k}-Jv_{k}}{\tau} + f(v_{k}) \biggr\rangle \ge 0,\quad\forall v\in C. $$
(4.10)

Using inequalities (4.8) and (4.10) for some \(M_{0}> 0\), Cauchy–Schwarz inequality, and condition \(C_{2}\), we have that

$$\begin{aligned} &\langle v-y_{k_{j}},w \rangle \\ &\quad\ge \bigl\langle v-y_{k_{j}},f(v) \bigr\rangle \\ &\quad\ge \bigl\langle v-y_{k_{j}},f(v) \bigr\rangle - \biggl\langle v-y_{k_{j}},\frac{Jy_{k_{j}}-Jv_{k_{j}}}{\tau} + f(v_{k_{j}}) \biggr\rangle \\ &\quad= \bigl\langle v-y_{k_{j}},f(v)-f(y_{k_{j}}) \bigr\rangle + \bigl\langle v-y_{k_{j}},f(y_{k_{j}})-f(v_{k_{j}}) \bigr\rangle - \biggl\langle v-y_{k_{j}},\frac{Jy_{k_{j}}-Jv_{k_{j}}}{\tau} \biggr\rangle \\ &\quad\ge -KM_{0} \Vert y_{k_{j}}-v_{k_{j}} \Vert -\frac{M_{0}}{\tau} \Vert {Jy_{k_{j}}-Jv_{k_{j}}} \Vert . \end{aligned}$$
(4.11)

Taking limits on both sides of inequality (4.11) and using the fact that J is uniformly continuous on bounded subsets of E, we obtain that

$$ \bigl\langle v-x^{*},w \bigr\rangle \ge0. $$
(4.12)

Since T is a maximal monotone operator, it follows that \(x^{*}\in T^{-1}(0)=\mathrm{VI}(f,C)\), which implies that \(\Omega_{\omega}(v_{k})\subset \mathrm{VI}(f,C)\).

Now, we show that \(v_{k}\rightharpoonup x^{*}\) as \(k\rightarrow\infty\). Define \(x_{k}:=\Pi_{\mathrm{VI}(f,C)}v_{k}\). Then \(\{x_{k}\}\subset \mathrm{VI}(f,C)\). Furthermore, by Lemmas 4.2 and 3.6, we have that

$$ \phi(x_{k},v_{k+1})\le\phi(x_{k},v_{k}) \quad\text{and}\quad \phi(x_{k+1},v_{k+1})\le\phi(x_{k},v_{k+1})- \phi(x_{k},x_{k+1}), $$
(4.13)

which implies that \(\{\phi(x_{k},v_{k})\}\) converges. From inequality (4.13) and for any \(m>k\), we have that

$$ \phi(x_{k},v_{m})\le\phi(x_{k},v_{k})\quad \text{and}\quad \phi(x_{k},x_{m})\le\phi(x_{k},v_{m})- \phi(x_{m},v_{m}). $$
(4.14)

Furthermore, since \(\{\phi(x_{k},v_{k})\}\) converges, inequality (4.14) gives \(\lim_{k,m\rightarrow\infty}\phi(x_{k},x_{m})=0\). Hence, by Lemma 3.7, we obtain that \(\lim_{k,m\rightarrow \infty } \Vert x_{k}-x_{m} \Vert =0\), which implies that \(\{x_{k}\}\) is a Cauchy sequence in \(\mathrm{VI}(f,C)\). Therefore, there exists \(u^{*}\in \mathrm{VI}(f,C)\) such that \(\lim_{k\rightarrow\infty}x_{k}=u^{*}\).

Now, using the definition of \(x_{k}=\Pi_{\mathrm{VI}(f,C)}v_{k}, \forall k\ge 0\), it follows from Lemma 3.6 that for any \(p\in \mathrm{VI}(f,C)\), we have that

$$ \langle x_{k}-p,Jx_{k}-Jv_{k} \rangle \ge0. $$
(4.15)

Let \(\{v_{k_{i}}\}\) be any subsequence of \(\{v_{k}\}\). We may assume without loss of generality that \(\{v_{k_{i}}\}\) converges weakly to some \(p^{*}\in \mathrm{VI}(f,C)\). By inequality (4.15), weak sequential continuity of J, and the fact that \(\lim_{k\rightarrow\infty} x_{k}=u^{*}\), we obtain that

$$ \bigl\langle u^{*}-p^{*},Jp^{*}-Ju^{*} \bigr\rangle \ge0. $$
(4.16)

However, from the monotonicity of J, we obtain that

$$ \bigl\langle u^{*}-p^{*},Ju^{*}-Jp^{*} \bigr\rangle \ge0. $$
(4.17)

Combining inequalities (4.16) and (4.17), we have that

$$ \bigl\langle u^{*}-p^{*},Ju^{*}-Jp^{*} \bigr\rangle = 0. $$
(4.18)

By Lemma 3.9, we obtain that

$$ \bigl\Vert u^{*}-p^{*} \bigr\Vert ^{2}\le\frac{1}{c_{2}} \bigl\langle u^{*}-p^{*},Ju^{*}-Jp^{*} \bigr\rangle = 0, $$

which implies that \(u^{*}=p^{*}\). Hence, \(v_{k}\rightharpoonup u^{*}=\lim_{k\rightarrow\infty} x_{k}\). This completes the proof. □

4.2 The modified Krasnoselskii-type subgradient extragradient algorithm

Algorithm 2

Let \(S:E\rightarrow E\) be a relatively nonexpansive map, \(\beta\in(0,1)\), and let \(\{v_{k}\}_{k=1}^{\infty}\) be a sequence generated iteratively by

$$ \textstyle\begin{cases} v_{1}\in E \quad\text{and}\quad \tau>0,\\ y_{k}= \Pi_{C}J^{-1}(Jv_{k}-\tau f(v_{k})), \\ T_{k} = \{w\in E: \langle w-y_{k},(Jv_{k}-\tau f(v_{k}))-Jy_{k} \rangle \le0 \},\\ v_{k+1}=J^{-1} (\beta Jv_{k}+(1-\beta)JS\Pi _{T_{k}}J^{-1}(Jv_{k}-\tau f(y_{k})) ), \quad\forall k\ge1. \end{cases} $$
(4.19)

We shall make the following assumption.

\({C_{4}}\) :

\(\mathcal{G}:=\mathrm{VI}(f,C)\cap F(S)\neq \emptyset\), \(F(S)\) is the set of fixed points of S.

The following lemma is crucial for the proof of the next theorem.

Lemma 4.4

Let E be a uniformly smooth and 2-uniformly convex real Banach space with dual space \(E^{*}\). Let C be a nonempty closed and convex subset of E. Let \(S:E\rightarrow E\) be a relatively nonexpansive map and \(f:E\rightarrow E^{*}\) be a map satisfying conditions \(C_{1}\) and \(C_{2}\) with \(\tau\in(0,\frac{\alpha }{K})\), and let \(\beta\in(0,1)\). Assume that condition \(C_{4}\) holds and J is weakly sequentially continuous on E. Then the sequence \(\{v_{k}\} _{k=1}^{\infty}\) generated iteratively by Algorithm 2 converges weakly to some \(v^{*}\in\mathcal{G}\).

Proof

Denote \(t_{k}=\Pi_{T_{k}}J^{-1}(Jv_{k}-\tau f(y_{k})), \forall k\ge1\), \(Jz_{k}:=Jv_{k}-\tau f(y_{k})\), and \(\gamma =1-\frac {\tau K}{\alpha}\).

Since \(\mathcal{G}\neq \emptyset \), let \(u\in\mathcal{G}\). Then we have that

$$\begin{aligned} \phi(u,t_{k})\le{}& \phi(u,z_{k})- \phi(t_{k},z_{k}) \\ ={}& \Vert u \Vert ^{2}-2 \bigl\langle u,Jv_{k}-\tau f(y_{k}) \bigr\rangle - \Vert t_{k} \Vert ^{2}+2 \bigl\langle t_{k},Jv_{k}-\tau f(y_{k}) \bigr\rangle \\ ={}&\phi(u,v_{k})- \phi(t_{k},v_{k})+2\tau \bigl\langle u-t_{k}, f(y_{k}) \bigr\rangle \\ ={}&\phi(u,v_{k})-\phi(t_{k},v_{k})+2\tau \bigl\langle u-y_{k}, f(y_{k})-f(u) \bigr\rangle +2\tau \bigl\langle y_{k}-t_{k}, f(y_{k}) \bigr\rangle \\ &{}+2\tau \bigl\langle u-y_{k}, f(u) \bigr\rangle . \end{aligned}$$

By \(C_{1}\), \(\langle u-y_{k}, f(y_{k})-f(u)\rangle\le0, \forall k\ge1\), and, since \(u\in \mathrm{VI}(f,C)\) and \(y_{k}\in C\), \(\langle u-y_{k},f(u)\rangle\le0, \forall k\ge1\). Thus, from the last line of the above inequality and by inequality (4.4), we obtain that

$$\begin{aligned} \phi(u,t_{k})&\le\phi(u,v_{k})- \phi(t_{k},v_{k})+2\tau \bigl\langle y_{k}-t_{k}, f(y_{k}) \bigr\rangle \\ &=\phi(u,v_{k})-\phi(y_{k},v_{k})- \phi(t_{k},y_{k})+2 \bigl\langle t_{k}-y_{k},Jv_{k}- \tau f(y_{k})-Jy_{k} \bigr\rangle \\ &\le\phi(u,v_{k})-\phi(y_{k},v_{k})- \phi(t_{k},y_{k})+2\tau \bigl\langle t_{k}-y_{k}, f(v_{k})-f(y_{k}) \bigr\rangle . \end{aligned}$$
(4.20)

By condition \(C_{2}\) and Lemma 3.10, we have that

$$\begin{aligned} \phi(u,t_{k})&\le \phi(u,v_{k})- \phi(y_{k},v_{k})-\phi(t_{k},y_{k})+ \frac{\tau K}{\alpha} \bigl(\phi (t_{k},y_{k})+ \phi(y_{k},v_{k}) \bigr) \\ &=\phi(u,v_{k})-\gamma\phi(t_{k},y_{k})-\gamma \phi(y_{k},v_{k})\le\phi(u,v_{k}). \end{aligned}$$
(4.21)

Applying Lemma 3.8, inequality (4.21), and relative nonexpansivity of S, we obtain that

$$\begin{aligned} \phi(u,v_{k+1})&=\phi \bigl(u,J^{-1} \bigl(\beta Jv_{k}+(1-\beta )J(S{t_{k}}) \bigr) \bigr) \\ &\le \beta\phi (u,v_{k})+(1-\beta)\phi(u,S{t_{k}})- \beta (1-\beta )g \bigl( \bigl\Vert Jv_{k}-J(St_{k}) \bigr\Vert \bigr) \\ &\le \beta\phi (u,v_{k})+(1-\beta)\phi(u,{t_{k}})- \beta (1-\beta )g \bigl( \bigl\Vert Jv_{k}-J(St_{k}) \bigr\Vert \bigr) \end{aligned}$$
(4.22)
$$\begin{aligned} &\le\beta\phi (u,v_{k})+(1-\beta) \bigl(\phi(u,v_{k})- \gamma\phi (t_{k},y_{k})-\gamma\phi(y_{k},v_{k}) \bigr)\le\phi(u,v_{k}) . \end{aligned}$$
(4.23)

This implies that \(\lim_{k\rightarrow\infty}\phi (u,v_{k})\) exists. Consequently, \(\{v_{k}\}_{k=1}^{\infty}\) is bounded. From inequality (4.21), \(\{t_{k}\}_{k=1}^{\infty}\) is bounded. Also, from inequality (4.23), we obtain that

$$\begin{aligned} &\phi(y_{k},v_{k})\le\frac{1}{\gamma(1-\beta)} \bigl( \phi (u,v_{k})-\phi (u,v_{k+1}) \bigr) \quad\text{and}\\ &\phi(t_{k},y_{k})\le\frac{1}{\gamma (1-\beta)} \bigl( \phi(u,v_{k})-\phi(u,v_{k+1}) \bigr). \end{aligned}$$

From these inequalities, we obtain that

$$ \lim_{k\rightarrow\infty} \phi(y_{k},v_{k})=0\quad \text{and}\quad\lim_{k\rightarrow\infty} \phi(t_{k},y_{k})=0. $$
(4.24)

By Lemma 3.7, it follows that \(\lim_{k\rightarrow\infty} \Vert y_{k}-v_{k} \Vert =0\) and \(\lim_{k\rightarrow\infty} \Vert t_{k}-y_{k} \Vert =0\). Consequently, we obtain \(\lim_{k\rightarrow\infty} \Vert v_{k}-t_{k} \Vert =0\).

Next, we show that \(\Omega_{\omega}(v_{k})\subset\mathcal{G}=F(S)\cap \mathrm{VI}(f,C)\), where \(\Omega_{\omega}(v_{k})\) is the set of weak subsequential limits of \(\{v_{k}\}\). Let \(x^{*}\in\Omega_{\omega}(v_{k})\) and \(\{v_{k_{j}}\} _{j=1}^{\infty}\) be a subsequence of \(\{v_{k}\}_{k=1}^{\infty}\) such that

$$ v_{k_{j}}\rightharpoonup x^{*}\quad\text{and, consequently,}\quad t_{k_{j}}\rightharpoonup x^{*} \quad\text{as } j \rightarrow\infty. $$

Since S is relatively nonexpansive and \(\{t_{k}\}_{k=1}^{\infty}\) is bounded, \(\{St_{k}\}_{k=1}^{\infty}\) is bounded. From inequalities (4.22) and (4.23), we have that

$$ g \bigl( \bigl\Vert Jv_{k}-J(St_{k}) \bigr\Vert \bigr) \le\frac{1}{\beta(1-\beta)} \bigl(\phi (u,v_{k})- \phi(u,{v_{k+1}}) \bigr). $$
(4.25)

Applying the property of g, we obtain that

$$ \lim_{k\rightarrow\infty} \bigl\Vert Jv_{k}-J(St_{k}) \bigr\Vert =0.$$

By the uniform continuity of \(J^{-1}\) on a bounded subset of \(E^{*}\), we get that

$$ \lim_{k\rightarrow\infty} \Vert v_{k}-St_{k} \Vert =0, $$
(4.26)

so that

$$ \Vert St_{k}-t_{k} \Vert \le \Vert St_{k}-v_{k} \Vert + \Vert v_{k}-t_{k} \Vert \rightarrow0 \quad\text{as $k\rightarrow\infty$}, $$
(4.27)

which implies that \(x^{*}\) is an asymptotic fixed point of S. Since S is relatively nonexpansive, \(x^{*}\in\widehat{F}(S)=F(S)\).

Next, we show that \(x^{*}\in \mathrm{VI}(f,C)\). Following the same line of argument as in the proof of Theorem 4.3, we have that \(x^{*}\in \mathrm{VI}(f,C)\), and this implies that \(\Omega_{\omega}(v_{k})\subset \mathcal{G} \).

Define \(x_{k}:=\Pi_{\mathcal{G}}v_{k}\). Then \(\{x_{k}\}\subset\mathcal{G}\). Now, following the same line of argument as in the proof of Theorem 4.3, we obtain that \(u^{*}=p^{*}\). Hence, \(v_{k}\rightharpoonup u^{*}=\lim_{k\rightarrow\infty}x_{k}\). This completes the proof. □

4.3 A convergence theorem for a convex feasibility problem

In what follows, we shall make the following assumption.

\({C_{5}}\) :

\(\mathcal{V}:= \bigcap_{i=1}^{\infty}F(T_{i})\cap \mathrm{VI}(f,C)\neq\emptyset\), where \(F(T_{i}):=\{x\in E:T_{i} x=x, \forall i\ge 1\}\).

We now prove the following theorem.

Theorem 4.5

Let E be a uniformly smooth and 2-uniformly convex real Banach space with dual space \(E^{*}\). Let C be a nonempty closed and convex subset of E. Let \({T_{i}:E\rightarrow E,}\) \(i=1,2,\ldots \) , be a countable family of relatively nonexpansive maps and \({f:E\rightarrow E^{*}}\) be a map satisfying conditions \(C_{1}\) and \(C_{2}\) with \(\tau\in(0,\frac {\alpha }{K})\), and let \({\beta\in(0,1)}\). Assume that condition \(C_{5}\) holds and J is weakly sequentially continuous on E. Then the sequence \(\{ v_{k}\}_{k=1}^{\infty}\) generated iteratively by Algorithm 2 converges weakly to some \(v^{*}\in\mathcal{V}\), where

$$\begin{aligned} Sx=J^{-1} \Biggl(\sum_{i=1}^{\infty}\delta _{i} \bigl(\gamma_{i}Jx+(1-\gamma_{i})JT_{i}x \bigr) \Biggr),\qquad \sum_{i=1}^{\infty }\delta_{i}=1\quad \textit{and}\quad \{\gamma_{i}\}_{i=1}^{\infty}\subset(0,1). \end{aligned}$$

Proof

By Lemma 3.11, S is relatively nonexpansive and \(F(S)=\bigcap_{i=1}^{\infty}F(T_{i})\). Also, by Lemma 4.4, the result of Theorem 4.5 follows. □

Corollary 4.6

Let H be a real Hilbert space, and let C be a nonempty closed and convex subset of H. Let \(T_{i}:H\rightarrow H, i=1,2,\ldots \) , be a countable family of nonexpansive maps and \(f:H\rightarrow H\) be a monotone and K-Lipschitz map. Let the sequence \(\{v_{k}\}_{k=1}^{\infty}\) be generated iteratively by

$$ \textstyle\begin{cases} v_{1}\in H \quad\textit{and}\quad \tau>0,\\ y_{k}= P_{C}(v_{k}-\tau f(v_{k})), \\ T_{k} = \{w\in H: \langle w-y_{k},(v_{k}-\tau f(v_{k}))-y_{k} \rangle \le0 \},\\ v_{k+1}= \beta v_{k}+(1-\beta)SP_{T_{k}}(v_{k}-\tau f(y_{k})), \quad\forall k\ge1. \end{cases} $$
(4.28)

Assume that \(C_{1}, C_{2}\), and \(C_{5}\) hold with \(\tau\in (0,\frac{1}{K})\), and let \(\beta\in(0,1)\). Then \(\{v_{k}\}_{k=1}^{\infty}\) converges weakly to \(v^{*}\in\mathcal{V}:= \bigcap_{i=1}^{\infty}F(T_{i})\cap \mathrm{VI}(f,C)\), where \(Sx= (\sum_{i=1}^{\infty}\delta_{i} (\gamma_{i}x+(1-\gamma_{i})T_{i}x ) )\), \(\sum_{i=1}^{\infty}\delta_{i}=1\) and \(\{\gamma_{i}\} _{i=1}^{\infty}\subset(0,1)\).

Proof

In a Hilbert space, J is the identity map and \(\phi (y,z)= \Vert y-z \Vert ^{2}, \forall y,z\in H\). Thus, the conclusion follows from Theorem 4.5. □
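In a Hilbert space, iteration (4.28) can be run directly. The following sketch uses hypothetical data, not from the paper: \(f(x)=Ax+b\) with A skew-symmetric (monotone and 1-Lipschitz), \(C=[-4,4]^{2}\), a single map S taken as the projection onto the ball \(B(0,4)\) (nonexpansive, with \(\mathrm{VI}(f,C)\subset F(S)\)), and \(\beta=\frac{1}{2}\):

```python
import numpy as np

# Hypothetical data for iteration (4.28) in R^2 (Hilbert case, J = identity).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew: monotone, 1-Lipschitz
b = np.array([1.0, 1.0])
f = lambda x: A @ x + b                   # unique VI solution on C: (1, -1)
P_C = lambda v: np.clip(v, -4.0, 4.0)     # C = [-4, 4]^2

def S(v):
    """Projection onto the ball B(0,4): nonexpansive, F(S) contains (1,-1)."""
    n = np.linalg.norm(v)
    return v if n <= 4.0 else (4.0 / n) * v

def proj_halfspace(v, a, y):
    """Project v onto T = {w : <a, w - y> <= 0}; a = 0 gives all of H."""
    s = a @ a
    if s < 1e-30:
        return v
    return v - (max(0.0, a @ (v - y)) / s) * a

tau, beta = 0.2, 0.5                      # tau < 1/K = 1, beta in (0, 1)
v = np.array([3.0, 0.0])
for _ in range(3000):
    y = P_C(v - tau * f(v))
    a = (v - tau * f(v)) - y              # normal vector defining T_k
    t = proj_halfspace(v - tau * f(y), a, y)
    v = beta * v + (1 - beta) * S(t)      # Krasnoselskii averaging step

assert np.allclose(v, [1.0, -1.0], atol=1e-8)
```

The averaging weight β is fixed once, in contrast to the Mann-type weights \(\alpha_{k}\) of (1.5), which must be chosen at every step.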


5 Discussion

All the theorems of this paper are applicable in \(l_{p}\) spaces, \(1< p\le2\), since these spaces are uniformly smooth and 2-uniformly convex, and on these spaces, the normalized duality map is weakly sequentially continuous. The analytical representations of the duality map in these spaces, where \({p^{-1} + q^{-1} =1}\) (see, e.g., Theorem 4.3, Alber and Ryazantseva [2]; p. 36) are:

$$\begin{aligned} &Jx= \Vert x \Vert ^{2-p}_{l_{p}}y\in l_{q},\quad y= \bigl\{ \vert x_{1} \vert ^{p-2}x_{1}, \vert x_{2} \vert ^{p-2}x_{2},\ldots \bigr\} , x= \{x_{1},x_{2},\ldots\}\in l_{p}, \\ &J^{-1}x= \Vert x \Vert ^{2-q}_{l_{q}}y\in l_{p},\quad y= \bigl\{ \vert x_{1} \vert ^{q-2}x_{1}, \vert x_{2} \vert ^{q-2}x_{2},\ldots \bigr\} , x= \{x_{1},x_{2}, \ldots\}\in l_{q}. \end{aligned}$$
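These formulas can be checked numerically on finitely supported sequences (equivalently, on \(\mathbb{R}^{n}\) with the p-norm). A small sketch, writing \(\vert x_{i}\vert^{p-2}x_{i}\) as \(\operatorname{sign}(x_{i})\vert x_{i}\vert^{p-1}\) to avoid division by zero:

```python
import numpy as np

def duality_map(x, p):
    """J_p x = ||x||_p^{2-p} * (sign(x_i) |x_i|^{p-1})_i, an element of l_q."""
    nrm = np.linalg.norm(x, ord=p)
    return nrm ** (2 - p) * np.sign(x) * np.abs(x) ** (p - 1)

p, q = 1.5, 3.0                    # conjugate exponents: 1/p + 1/q = 1
rng = np.random.default_rng(3)
x = rng.standard_normal(6)
u = duality_map(x, p)              # u = Jx, an element of l_q

# J preserves the norm, and its inverse is the duality map of l_q:
assert np.isclose(np.linalg.norm(u, ord=q), np.linalg.norm(x, ord=p))
assert np.allclose(duality_map(u, q), x)                  # J^{-1}(Jx) = x
assert np.isclose(x @ u, np.linalg.norm(x, ord=p) ** 2)   # <x, Jx> = ||x||^2
```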
  • Theorem 4.3, which approximates a solution of a variational inequality problem, extends Theorem 5.1 of Censor et al. [6] from a Hilbert space to the more general uniformly smooth and 2-uniformly convex real Banach space with weakly sequentially continuous duality map.

  • Theorem 4.5, which approximates a common solution of a variational inequality problem and a common fixed point of a countable family of relatively nonexpansive maps, extends Theorem 7.1 of Censor et al. [6] from a Hilbert space to a uniformly smooth and 2-uniformly convex real Banach space with weakly sequentially continuous duality map, and from a single nonexpansive map to a countable family of relatively nonexpansive maps.

  • The control parameters in Algorithm 2 of Theorem 4.5 are two arbitrarily fixed constants \(\beta\in(0,1)\) and \(\tau \in(0,\frac{\alpha}{K})\) which are computed once and then used at every step of the iteration process, while the parameters in equation (1.5) studied by Censor et al. [6] are \({\alpha_{k}\in(0,1)}\) and \(\tau\in(0,\frac{1}{K})\), and \({\alpha_{k}}\) must be computed at each step of the iteration process. Consequently, the sequence of Algorithm 2 is of Krasnoselskii type, while the sequence defined by equation (1.5) is of Mann type. It is well known that a Krasnoselskii-type sequence converges as fast as a geometric progression, which is slightly better than the convergence rate obtained from a Mann-type sequence.

6 Conclusion

In this paper, we considered Krasnoselskii-type subgradient extragradient algorithms for approximating a common element of solutions of variational inequality problems and fixed points of a countable family of relatively nonexpansive maps in a uniformly smooth and 2-uniformly convex real Banach space. A weak convergence of the sequence generated by our algorithm is proved. Furthermore, results obtained are applied in \(l_{p}\)-spaces, \(1< p\le2\).