1 Introduction

Let \(\mathcal{Q}^{*}\) be the dual space of a real normed linear space \(\mathcal{Q}\) and \(\mathcal{D}\) be a nonempty, closed, and convex subset of \(\mathcal{Q}\). In this paper, we study the classical variational inequality of Fichera [1] and Stampacchia [2], the equilibrium problem of Blum and Oettli [3], and some fixed point problems.

Let \(A:\mathcal{D}\to \mathcal{Q}^{*}\) be a given map. A variational inequality problem is the following:

$$ \text{find}\quad x\in \mathcal{D}\quad \text{such that}\quad \langle y-x,Ax \rangle \ge 0 \quad \text{for all }y\in \mathcal{D}. $$
(1.1)
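
For instance (an illustrative special case, not taken from the problems studied below), if \(\mathcal{Q}=\mathbb{R}^{n}\) and \(\mathcal{D}\) is the nonnegative orthant, then problem (1.1) reduces to the classical nonlinear complementarity problem:

$$ x\ge 0, \qquad Ax\ge 0, \qquad \langle x,Ax \rangle =0. $$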

The theory of variational inequalities was first developed to solve equilibrium problems, precisely the Signorini problem posed by Signorini [4] in the year 1959. This problem was later solved by Fichera [1] in the year 1963. In 1964, Stampacchia [2] studied the regularity problem for partial differential equations and thereby coined the name “variational inequality”, stating nothing but the principle of complementary virtual work in its inequality form. Variational inequalities have found numerous applications in many areas of science; see, for example, [5–7].

For earlier and more recent results on the existence of solutions and iterative methods for solving variational inequalities, see, for example, [8–19].

Let \(\Theta :\mathcal{D}\times \mathcal{D}\to \mathbb{R}\) be a bifunction. An equilibrium problem is to find

$$ x\in \mathcal{D}\quad \text{such that}\quad \Theta (x,y)\ge 0 \quad \text{for all }y\in \mathcal{D}. $$
(1.2)
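
We note, for illustration, that the variational inequality (1.1) is a special case of the equilibrium problem (1.2): taking

$$ \Theta (x,y):=\langle y-x,Ax \rangle \quad \text{for all } x,y\in \mathcal{D}, $$

problem (1.2) becomes precisely problem (1.1).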

Numerous problems in physics, optimization, and economics reduce to a problem of finding solutions of inequality (1.2). Some methods have been proposed to solve equilibrium problems in Hilbert spaces and more general Banach spaces; see, for example, [20–25].

We remark that the following conditions will be needed in solving the equilibrium problem (1.2):

\((A_{1})\):

\(\Theta (x,x)=0\) for all \(x\in \mathcal{D}\);

\((A_{2})\):

Θ is monotone, i.e., \(\Theta (x,y)+\Theta (y,x)\le 0\) for all \(x,y\in \mathcal{D}\);

\((A_{3})\):

\(\limsup_{t\downarrow 0}\Theta (x+ t(z-x),y)\le \Theta (x,y)\) for all \(x,y,z\in \mathcal{D}\);

\((A_{4})\):

for all \(x\in \mathcal{D}\), \(\Theta (x,\cdot )\) is convex and lower semi-continuous.
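
As an illustration (not needed in the sequel), the bifunction \(\Theta (x,y)=\langle y-x,Ax \rangle \) associated with a monotone and continuous map \(A:\mathcal{D}\to \mathcal{Q}^{*}\) satisfies \((A_{1})\)–\((A_{4})\): \((A_{1})\) is immediate, \((A_{2})\) follows from

$$ \Theta (x,y)+\Theta (y,x)=-\langle x-y,Ax-Ay \rangle \le 0, $$

\((A_{3})\) follows from the continuity of A, and \((A_{4})\) holds since \(\Theta (x,\cdot )\) is affine and continuous.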

A map \(A :\mathcal{Q}\to \mathcal{Q}\) is called accretive if, for each \(x,y\in \mathcal{Q}\), there exists \(j(x-y)\in J(x-y)\) such that

$$ \bigl\langle Ax-Ay,j(x-y) \bigr\rangle \ge 0, $$
(1.3)

where \(J:\mathcal{Q}\to 2^{\mathcal{Q}^{*}}\) is the normalized duality map. In Hilbert spaces, accretive maps coincide with monotone maps.

Accretive maps were introduced independently in the year 1967 by Browder [9] and Kato [26]. Interest in this class of maps stems mainly from their firm connection with the existence theory for nonlinear equations of evolution in Banach spaces, i.e., equations of the form

$$\begin{aligned} \textstyle\begin{cases} x'(s)+ Ax(s)=v(x(s)),\quad s\geq 0, \\ x(0)=x_{0}. \end{cases}\displaystyle \end{aligned}$$
(1.4)

At an equilibrium state we have \(x'(s)=0\); setting \(v\equiv 0\) in equation (1.4), we obtain the following equation:

$$ Ax=0. $$
(1.5)

In many cases, where the map A is accretive, solutions of equation (1.5) represent the equilibrium state of the system described by equation (1.4).

For solving equation (1.5), Browder [9] in the year 1967 introduced a self-map \(S:=I-A\) on a real Banach space, which he called a pseudo-contractive map. Approximating zeros of accretive maps is equivalent to approximating fixed points of pseudo-contractive maps, assuming existence of such zeros. For earlier and more recent results on the approximation of fixed points of pseudo-contractive maps, the reader may consult any of the following: [27–36].

A map \(A:\mathcal{Q}\to \mathcal{Q}^{*}\) is called monotone if, for each \(x, y \in \mathcal{Q}\), the following inequality holds:

$$ \langle x-y,Ax-Ay\rangle \ge 0. $$

The sub-differential of a convex and proper function h, defined on a real Banach space and denoted by ∂h, is a monotone map, and for each x in the domain of h, \(0\in \partial h(x)\) if and only if x minimizes the function h; see, for example, [37]. In particular, writing \(A:=\partial h\), we have \(0\in Ax\), which reduces to \(Ax=0\) when A is single-valued. Therefore, approximating zeros of such monotone maps is equivalent to finding a minimizer of some convex function.
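
To illustrate (a standard one-dimensional example, recalled here for convenience), for \(h(x)= \vert x \vert \) on \(\mathbb{R}\) we have

$$ \partial h(x)= \textstyle\begin{cases} \{-1\}, & \textrm{if } x< 0, \\ [-1,1], & \textrm{if } x=0, \\ \{1\}, & \textrm{if } x>0, \end{cases}\displaystyle $$

so \(0\in \partial h(0)\), and indeed 0 minimizes h.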

It is obvious that the fixed point technique introduced by Browder in the year 1967 for approximating zeros of accretive maps is not applicable in this case, where A from a real Banach space to its dual space is monotone.

Hence, there is the need to develop techniques for approximating zeros of monotone maps.

To approximate zeros of monotone maps, Zegeye [38] introduced in the year 2008 the map \(S:=J-A\) from a real Banach space to its dual space, where \(A:\mathcal{Q}\to 2^{\mathcal{Q}^{*}}\) is a monotone map; he called this map a semi-pseudo map. Later, in the year 2016, Chidume and Idu [39] studied this class of maps and called them J-pseudo-contractive.

An element x in \(\mathcal{D}\) is called a semi-fixed point (J-fixed point) of S from \(\mathcal{D}\) to \(\mathcal{Q}^{*}\) if

$$ Sx=Jx, $$
(1.6)

where J is the normalized duality map and single-valued in this case; see, for example, [38, 39].

Approximating zeros of these monotone maps is equivalent to approximating J-fixed points of J-pseudo-contractive maps, assuming the existence of such zeros, which, in turn, is equivalent to finding a minimizer of some convex function.

We remark that in real Hilbert spaces, and also in smooth and strictly convex real Banach spaces, the notion of J-fixed points coincides with the classical definition of fixed points. However, if the space is not strictly convex, J may fail to be one-to-one. Thus, the inverse of J may not exist. For more recent works on J-fixed points, see, for example, [40–48].

In 2014, Zegeye and Shahzad [49] studied the problem of finding a common solution in the set of fixed points of a Lipschitz pseudo-contractive map S and the solution set of a variational inequality for a γ-inverse strongly monotone map A in a real Hilbert space H by considering the following iterative algorithm:

$$ \textstyle\begin{cases} x_{0}\in C\subset H, \\ y_{n}=(1-\beta _{n})x_{n}+\beta _{n} Sx_{n}, \\ x_{n+1} =P_{C} [(1-\alpha _{n})(\delta _{n}x_{n}+\theta _{n}Sy_{n}+ \gamma _{n}P_{C}[I-\gamma _{n}A] x_{n}) ], \end{cases} $$
(1.7)

where \(P_{C}\) is the metric projection from H onto C and \(\{\delta _{n}\}\), \(\{\gamma _{n}\}\), \(\{\theta _{n}\}\), \(\{\alpha _{n} \}\), \(\{\beta _{n}\}\) are sequences in \(\mathopen]0,1\mathclose[\) satisfying appropriate conditions. They proved that the sequence generated by the algorithm converges strongly to an element in the solution set of the problem.

Also, in the year 2015, Alghamdi et al. [17] studied a Halpern-type extragradient method to approximate a common solution of a variational inequality and a fixed point problem of a continuous pseudo-contractive map in a real Hilbert space. They proved the following theorem.

Theorem 1.1

Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(S:C\rightarrow C\) be a continuous pseudo-contractive map. Let \(A:C\rightarrow H\) be an L-Lipschitz monotone map. Assume that \({\mathcal{F}:= F(S)\cap VI(A,C)\neq \emptyset }\), where \(F(S)\) is a fixed point set of S and \(VI(A,C)\) is the solution set of a variational inequality. Let \(\{x_{n}\}\) be a sequence generated by

$$ \textstyle\begin{cases} x_{0},\quad u\in C, \\ z_{n}=P_{C} [x_{n}-\gamma _{n}Ax_{n}], \\ x_{n+1} :=\alpha _{n} u+(1-\alpha _{n})(a_{n}x_{n}+ b_{n}K^{S}_{r_{n}}x_{n} +c_{n}P_{C}[x_{n}-\gamma _{n}Az_{n}] ), \end{cases} $$

where \(\{\gamma _{n}\}\subset \mathopen[a,b\mathclose[ \subset \mathopen]0,\frac{1}{L}\mathclose[\), and \(\{a_{n}\},\{b_{n}\},\{c_{n}\}\subset \mathopen]a,b\mathclose[\subset \mathopen]0,1\mathclose[\), \(\{\alpha _{n}\}\subset \mathopen]0,c\mathclose[\subset \mathopen]0,1\mathclose[\) satisfying the following conditions: \((i)\) \(a_{n}+b_{n}+c_{n}=1\), \((ii)\) \(\lim \alpha _{n}=0\), \(\sum \alpha _{n}=\infty \), and \(K^{S}_{r_{n}}\) is the resolvent map of S from H onto C. Then \(\{x_{n}\}\) converges strongly to the point \(x^{*}\) of \(\mathcal{F}\) nearest to u.

Motivated by the results in [17, 39, 49], we present in this paper a Halpern-type subgradient-extragradient algorithm for which the sequence generated by the algorithm converges strongly to a common solution of a variational inequality, an equilibrium problem, and J-fixed points of a continuous J-pseudo-contractive map in a uniformly smooth and two-uniformly convex real Banach space. Also, the theorem is applied to approximate a common solution of a variational inequality, an equilibrium problem, and a convex minimization problem. Moreover, a numerical example is given to illustrate the implementability of our algorithm. Finally, the theorem proved complements, improves, and unifies some related recent results in the literature.

2 Preliminaries

Let \(\mathcal{Q}^{*}\) be the dual space of a real normed linear space \(\mathcal{Q}\) and \(\mathcal{D}\) be a nonempty, closed, and convex subset of \(\mathcal{Q}\). We denote \(x_{n}\rightharpoonup x^{*}\) and \({x_{n}\rightarrow x^{*}}\) to indicate that the sequence \(\{x_{n}\}\) converges weakly to \(x^{*}\) and converges strongly to \(x^{*}\), respectively. Also, \(VI(A,\mathcal{D})\), \(EP(\Theta )\), and \(F_{J}(S)\) denote the set of solutions of variational inequalities, the set of solutions of equilibrium problems, and the set of J-fixed points of S, respectively.

Let \(\mathcal{Q}\) be a smooth real normed linear space and \(\phi : \mathcal{Q}\times \mathcal{Q}\rightarrow \mathbb{R}\) be a map defined by

$$ \phi (x,y):= \Vert x \Vert ^{2}-2\langle x,Jy \rangle + \Vert y \Vert ^{2} \quad \textrm{for all } x,y\in \mathcal{Q}, $$
(2.1)

where \(J:\mathcal{Q}\rightarrow 2^{\mathcal{Q}^{*}}\) is the normalized duality map defined by

$$ J(x):= \bigl\{ x^{*}\in \mathcal{Q}^{*}: \bigl\langle x,x^{*} \bigr\rangle = \Vert x \Vert \bigl\Vert x^{*} \bigr\Vert , \Vert x \Vert = \bigl\Vert x^{*} \bigr\Vert \bigr\} . $$

The map ϕ was introduced by Alber [7] and has been studied extensively by a host of other authors.
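
For instance, if \(\mathcal{Q}\) is a real Hilbert space, then J is the identity map and

$$ \phi (x,y)= \Vert x-y \Vert ^{2} \quad \text{for all } x,y\in \mathcal{Q}, $$

so ϕ may be viewed as a generalization of the squared distance.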

For any \(x,y,z\in \mathcal{Q}\) and \(\alpha \in \mathopen]0,1\mathclose[\), the following properties are true:

(\(P_{1}\)):

\(( \Vert x \Vert - \Vert y \Vert )^{2}\le \phi (x,y)\le ( \Vert x \Vert + \Vert y \Vert )^{2}\),

(\(P_{2}\)):

\(\phi (x,J^{-1}(\alpha Jy + (1-\alpha )Jz) )\le \alpha \phi (x,y)+(1- \alpha )\phi (x,z) \),

(\(P_{3}\)):

\(\phi (x,z)=\phi (x,y)+\phi (y,z)+2\langle y-x,Jz-Jy\rangle \),

(\(P_{4}\)):

\(\phi (x,y)\le \Vert x \Vert \Vert Jx-Jy \Vert + \Vert y \Vert \Vert x-y \Vert \).

Definition 2.1

([7])

Let \(\mathcal{Q}\) be a smooth, strictly convex, and reflexive real Banach space, and let \(\mathcal{D}\) be a nonempty, closed, and convex subset of \(\mathcal{Q}\). The map \(\Pi _{\mathcal{D}}:\mathcal{Q}\rightarrow \mathcal{D}\) defined by \(\tilde{x}=\Pi _{\mathcal{D}} (x) \in \mathcal{D}\) such that \(\phi (\tilde{x},x)=\inf_{y\in \mathcal{D}}\phi (y,x)\) is called the generalized projection of \(\mathcal{Q}\) onto \(\mathcal{D}\).
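
In particular, when \(\mathcal{Q}\) is a real Hilbert space, \(\phi (y,x)= \Vert y-x \Vert ^{2}\), so the generalized projection \(\Pi _{\mathcal{D}}\) coincides with the metric projection \(P_{\mathcal{D}}\) of \(\mathcal{Q}\) onto \(\mathcal{D}\).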

Definition 2.2

([9])

A map \(S:\mathcal{Q}\to \mathcal{Q}\) is called a pseudo-contractive map if, for all \(x, y\in \mathcal{Q}\), there exists \(j(x-y)\) in \(J(x-y) \) such that

$$ \bigl\langle Sx-Sy,j(x-y) \bigr\rangle \le \Vert x-y \Vert ^{2}, $$

where J is the normalized duality map on \(\mathcal{Q}\).

Definition 2.3

([39])

A map \(S:\mathcal{Q}\rightarrow \mathcal{Q}^{*}\) is called a J-pseudo-contractive map if, for all \(x, y\in \mathcal{Q}\), the following inequality holds:

$$ \langle x-y,Sx-Sy \rangle \le \langle x-y,Jx-Jy \rangle . $$

In real Hilbert spaces, the notion of pseudo-contractive maps coincides with that of J-pseudo-contractive maps.

Definition 2.4

([37])

The sub-differential of a convex function h is a map \(\partial h:\mathcal{Q}\rightarrow 2^{\mathcal{Q}^{*}}\), defined by

$$ \partial h(x)= \bigl\{ x^{*}\in \mathcal{Q}^{*}: h(y)- h(x) \ge \bigl\langle y-x,x^{*} \bigr\rangle , \forall y\in \mathcal{Q} \bigr\} . $$

Lemma 2.5

([7])

Let \(\mathcal{D}\) be a nonempty, closed, and convex subset of a smooth, strictly convex, and reflexive real Banach space \(\mathcal{Q}\). Then:

  1. 1.

    If \(x\in \mathcal{Q}\) and \(\tilde{x} \in \mathcal{D}\), then \(\tilde{x} =\Pi _{\mathcal{D}}x\) if and only if \(\langle \tilde{x}-y, Jx-J\tilde{x} \rangle \geq 0\) for all \(y\in \mathcal{D}\), where \(\Pi _{\mathcal{D}}\) is the generalized projection of \(\mathcal{Q}\) onto \(\mathcal{D}\) in Definition 2.1;

  2. 2.

    \(\phi (y,\tilde{x})+\phi (\tilde{x},x)\leq \phi (y,x) \) for all \(x\in \mathcal{Q}\), \(y \in \mathcal{D}\).

Lemma 2.6

([50])

Let \(\mathcal{Q}\) be a two-uniformly convex and smooth real Banach space. Then there exists a positive constant c such that

$$ \phi (x,y)\ge c \Vert x-y \Vert ^{2}, \quad \forall x,y\in \mathcal{Q}. $$

Remark 1

Without loss of generality, we may assume \(c \in \mathopen]0,1\mathclose[\).

Lemma 2.7

([51])

Let \(\{x_{n}\}\) and \(\{y_{n}\}\) be two sequences in a uniformly convex and smooth real Banach space. If either \(\{x_{n}\}\) or \(\{y_{n}\}\) is bounded and \(\lim_{n\to \infty }\phi (x_{n},y_{n} )=0\), then \(\lim_{n\to \infty } \Vert x_{n}-y_{n} \Vert = 0 \).

Remark 2

Using Condition \((P_{4})\), the converse of Lemma 2.7 is also true whenever \(\{x_{n}\}\) and \(\{y_{n}\}\) are bounded.

Lemma 2.8

([52])

Let \(\mathcal{D}\) be a nonempty, closed, and convex subset of a reflexive real Banach space \(\mathcal{Q}\) and A be a monotone and hemi-continuous map from \(\mathcal{D}\) into \(\mathcal{Q}^{*}\). Let \(\mathcal{R}\subset \mathcal{Q}\times \mathcal{Q}^{*}\) be a map defined by

$$ \mathcal{R}x = \textstyle\begin{cases} Ax+N_{\mathcal{D}}(x), &\textit{if } x\in \mathcal{D}, \\ \emptyset ,&\textit{if }x\notin \mathcal{D}, \end{cases} $$

where \(N_{\mathcal{D}}(x)\) is defined as follows: \(N_{\mathcal{D}}(x)=\{w^{*}\in \mathcal{Q}^{*}:\langle x-z,w^{*} \rangle \geq 0, \forall z\in \mathcal{D}\}\). Then \(\mathcal{R}\) is maximal monotone and \(\mathcal{R}^{-1}(0)=VI(A,\mathcal{D})\).
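
For illustration, if x lies in the interior of \(\mathcal{D}\), then \(N_{\mathcal{D}}(x)=\{0\}\) and \(\mathcal{R}x=Ax\); the cone \(N_{\mathcal{D}}(x)\) is nontrivial only at boundary points of \(\mathcal{D}\).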

Lemma 2.9

([53])

Let \(\{ a_{n} \}\) be a sequence of nonnegative numbers satisfying the condition

$$ a_{n+1} \leq (1-\alpha _{n})a_{n} +\alpha _{n} \beta _{n}, \quad n\geq 0, $$

where \(\{ \alpha _{n} \}\) and \(\{ \beta _{n} \}\) are sequences of real numbers such that \((i)\) \(\{\alpha _{n}\}\subset [0,1]\) and \(\sum \alpha _{n}= \infty \); \((ii)\) \({\limsup } \beta _{n} \leq 0\). Then \({\lim } a_{n}=0 \).

Lemma 2.10

([54])

Let \(\{a_{n}\}\) be a sequence of real numbers such that there exists a subsequence \(\{a_{m_{j}}\}\) of \(\{a_{n}\}\) such that \(a_{m_{j}} < a_{m_{j}+1}\) for all \(j\in \mathbb{N}\). Then there exists a nondecreasing sequence \(\{n_{k}\}\subset \mathbb{N}\) such that \(\lim_{k\to \infty } n_{k}=\infty \), and the following properties are satisfied for all (sufficiently large) numbers \(k\in \mathbb{N}\):

$$ a_{n_{k}}\leq a_{n_{k}+1} \quad \textit{and} \quad a_{k} \leq a_{n_{k}+1}. $$

In fact, \(n_{k}\) is the largest number n in the set \(\{1,\ldots , k\}\) such that \(a_{n} < a_{n+1}\) holds.

Lemma 2.11

([55])

Let \(\mathcal{Q}^{*}\) be the dual space of a reflexive, strictly convex, and smooth Banach space \(\mathcal{Q}\). Then

$$ V \bigl(x,x^{*} \bigr)+2 \bigl\langle J^{-1}x^{*}-x,y^{*} \bigr\rangle \leq V \bigl(x,x^{*}+y^{*} \bigr) \quad \textrm{for all } x\in \mathcal{Q} \textrm{ and } x^{*},y^{*} \in \mathcal{Q}^{*}, $$

where \(V:\mathcal{Q}\times \mathcal{Q}^{*}\to \mathbb{R}\) is defined by \(V(x,x^{*}):= \Vert x \Vert ^{2}-2\langle x,x^{*}\rangle + \Vert x^{*} \Vert ^{2}\); in particular, \(V(x,x^{*})=\phi (x,J^{-1}x^{*})\) for all \(x\in \mathcal{Q}\), \(x^{*}\in \mathcal{Q}^{*}\).

Lemma 2.12

([47])

Let \(\mathcal{Q}^{*}\) be the dual space of a uniformly smooth and strictly convex real Banach space \(\mathcal{Q}\). Let \(\mathcal{D}\) be a nonempty, closed, and convex subset of \(\mathcal{Q}\) and \(S:\mathcal{D}\rightarrow \mathcal{Q}^{*}\) be a continuous J-pseudo-contractive map. Let \(r>0\) and \(x\in \mathcal{Q}\). Then the following conditions hold:

  1. 1.

    There exists \(z\in \mathcal{D}\) such that \(\langle w-z,Sz \rangle - \frac{1}{r} \langle w-z,(1+r)Jz-Jx \rangle \le 0\), \(\forall w\in \mathcal{D}\).

  2. 2.

    Define a map \(T^{S}_{r}:\mathcal{Q}\rightarrow \mathcal{D}\) by

    $$ T^{S}_{r}(x):= \biggl\{ z\in \mathcal{D}: \langle w-z,Sz \rangle - \frac{1}{r} \bigl\langle w-z,(1+r)Jz-Jx \bigr\rangle \le 0, \forall w\in \mathcal{D} \biggr\} , \quad x \in \mathcal{Q}. $$

Then:

\((a)\):

\(T^{S}_{r}\) is single-valued;

\((b)\):

\(T^{S}_{r}\) is a firmly nonexpansive-type map, i.e.,

$$\begin{aligned}& \bigl\langle T^{S}_{r}x-T^{S}_{r}y,JT^{S}_{r}x-JT^{S}_{r}y \bigr\rangle \le \bigl\langle T^{S}_{r}x-T^{S}_{r}y,Jx-Jy \bigr\rangle ,\quad \forall x,y\in \mathcal{Q} ; \end{aligned}$$
\((c)\):

\(F(T^{S}_{r})=F_{J}(S)\), where \(F(T^{S}_{r})\) denotes the fixed point set of the map \(T^{S}_{r}\);

\((d)\):

\(F_{J}(S)\) is closed and convex;

\((e)\):

\(\phi (q,T^{S}_{r}x)+\phi (T^{S}_{r}x,x)\le \phi (q,x)\), \(\forall q \in F(T^{S}_{r})\), \(x \in \mathcal{Q}\).

3 Main results

In the sequel, \(c\in \mathopen]0,1\mathclose[\) is the constant appearing in Lemma 2.6.

Theorem 3.1

Let \(\mathcal{Q}^{*}\) be the dual space of a uniformly smooth and two-uniformly convex real Banach space \(\mathcal{Q}\). Let \(\mathcal{D}\) be a nonempty, closed, and convex subset of \(\mathcal{Q}\). Let \({A:\mathcal{D} \to \mathcal{Q}^{*}}\) be a monotone and L-Lipschitz map, \(\Theta : \mathcal{D}\times \mathcal{D}\to \mathbb{R}\) be a bifunction satisfying conditions \((A_{1})\)–\((A_{4})\), and \(S:\mathcal{D}\to \mathcal{Q}^{*}\) be a continuous J-pseudo-contractive map with \(\Omega :=F_{J}(S)\cap VI(A,\mathcal{D})\cap EP(\Theta )\neq \emptyset \). Let the sequence \(\{x_{n}\}\) be generated by

$$ \textstyle\begin{cases} x_{0} \in \mathcal{D}, \\ z_{n}= \Pi _{\mathcal{D}}J^{-1}(Jx_{n}-\tau Ax_{n}), \\ T_{n}=\{w\in \mathcal{Q}: \langle w-z_{n},Jx_{n}-\tau Ax_{n}-Jz_{n} \rangle \leq 0 \}, \\ x_{n+1}=J^{-1} (\alpha _{n} Jx_{0}+(1-\alpha _{n})[\beta Jv_{n}+(1- \beta )Jw_{n}] ), \end{cases} $$
(3.1)

where \(v_{n}=K^{\Theta }_{r_{n}}T_{r_{n}}^{S}x_{n}\) with \(K_{r_{n}}^{\Theta }\) and \(T_{r_{n}}^{S}\) as the resolvent maps for Θ and S, respectively, \(\{r_{n}\}\subset \mathopen[a,\infty \mathclose[\) for some \(a>0\), \(w_{n}=\Pi _{T_{n}} J^{-1}( Jx_{n}-\tau Az_{n})\), \(\beta \in \mathopen]0,1\mathclose[\) is a fixed constant, \(\tau \in \mathopen]0,1\mathclose[\) with \(\tau <\frac{c}{L}\), and \(\{\alpha _{n}\} \subset \mathopen]0,1\mathclose[\) with \(\lim \alpha _{n} =0\) and \(\sum \alpha _{n}=\infty \). Then the sequence \(\{x_{n}\}\) converges strongly to the point \(\Pi _{\Omega }x_{0}\).
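
Before giving the proof, we record, for illustration only, a minimal computational sketch of one pass of algorithm (3.1), written in Python under the simplifying assumption that the underlying space is a real Hilbert space (so that J and \(J^{-1}\) are the identity map and both \(\Pi _{\mathcal{D}}\) and \(\Pi _{T_{n}}\) reduce to metric projections). The callables A, proj_D, K_theta, and T_S are hypothetical user-supplied routines; they are not part of the statement above.

```python
import numpy as np

def step_3_1(x_n, x_0, A, proj_D, K_theta, T_S, tau, beta, alpha_n, r_n):
    """One pass of algorithm (3.1), assuming a real Hilbert space (J = identity).

    A        : monotone Lipschitz map,        A(x) -> array
    proj_D   : metric projection onto D,      proj_D(x) -> array
    K_theta  : resolvent of Theta,            K_theta(y, r) -> array
    T_S      : resolvent of S (Lemma 2.12),   T_S(x, r) -> array
    """
    # z_n = Pi_D(x_n - tau * A x_n)
    z_n = proj_D(x_n - tau * A(x_n))

    # v_n = K^Theta_{r_n} T^S_{r_n} x_n
    v_n = K_theta(T_S(x_n, r_n), r_n)

    # w_n = projection of u_n = x_n - tau * A z_n onto the half-space
    # T_n = {w : <w - z_n, a_n> <= 0},  a_n = x_n - tau * A x_n - z_n
    a_n = x_n - tau * A(x_n) - z_n
    u_n = x_n - tau * A(z_n)
    s = float(np.dot(a_n, u_n - z_n))
    norm_a2 = float(np.dot(a_n, a_n))
    w_n = u_n - (s / norm_a2) * a_n if (s > 0.0 and norm_a2 > 0.0) else u_n

    # Halpern step: x_{n+1} = alpha_n x_0 + (1 - alpha_n)[beta v_n + (1 - beta) w_n]
    return alpha_n * x_0 + (1.0 - alpha_n) * (beta * v_n + (1.0 - beta) * w_n)
```

In the Banach-space setting of the theorem, J, \(J^{-1}\), and the generalized projections would replace the identity map and the metric projections used in this sketch.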

Proof

We divide the proof into two steps.

Step 1. We show that \(\{x_{n}\}\) is bounded.

Let \(p\in \Omega \). Set \(t_{n}:=J^{-1}(Jx_{n}-\tau Az_{n})\) and \(y_{n}:=T_{r_{n}}^{S}x_{n}\); then \(v_{n}=K_{r_{n}}^{\Theta }y_{n}\).

Note first that, since \(p\in \mathcal{D}\) and \(z_{n}=\Pi _{\mathcal{D}}J^{-1}(Jx_{n}-\tau Ax_{n})\), Lemma 2.5(1) gives \(\langle p-z_{n},Jx_{n}-\tau Ax_{n}-Jz_{n}\rangle \le 0\), so \(p\in T_{n}\). Applying Lemma 2.5(2), the monotonicity of A, the fact that \(p\in VI(A,\mathcal{D})\), and property \((P_{3})\), we compute as follows:

$$\begin{aligned} \phi (p,w_{n}) \leq &\phi (p,t_{n})-\phi (w_{n}, t_{n}) \\ =& \Vert p \Vert ^{2}-2\langle p,Jx_{n}-\tau Az_{n}\rangle - \Vert w_{n} \Vert ^{2}+2 \langle w_{n},Jx_{n}-\tau Az_{n}\rangle \\ =&\phi (p,x_{n})-\phi (w_{n},x_{n}) +2\tau \langle p-w_{n},Az_{n} \rangle \\ =&\phi (p,x_{n})-\phi (w_{n},x_{n}) +2\tau \bigl[ \langle p-z_{n},Az_{n}-Ap \rangle +\langle p-z_{n},Ap\rangle +\langle z_{n}-w_{n},Az_{n} \rangle \bigr] \\ \le &\phi (p,x_{n})-\phi (w_{n},x_{n}) +2\tau \langle z_{n}-w_{n},Az_{n} \rangle \\ =&\phi (p,x_{n})- \bigl[\phi (w_{n},z_{n})+ \phi (z_{n},x_{n})+2\langle z_{n}-w_{n},Jx_{n}-Jz_{n} \rangle \bigr] +2\tau \langle z_{n}-w_{n},Az_{n} \rangle \\ =&\phi (p,x_{n})-\phi (w_{n},z_{n})-\phi (z_{n},x_{n})+2\langle w_{n}-z_{n},Jx_{n}- \tau Az_{n}-Jz_{n}\rangle. \end{aligned}$$
(3.2)

Since \(w_{n}\in T_{n}\), we have that \(\langle w_{n}-z_{n},Jx_{n}-\tau Ax_{n}-Jz_{n} \rangle \leq 0\). Thus,

$$\begin{aligned} \langle w_{n}-z_{n},Jx_{n}-\tau Az_{n}-Jz_{n} \rangle =&\langle w_{n}-z_{n},Jx_{n}- \tau Ax_{n}-Jz_{n} \rangle +\tau \langle w_{n}-z_{n}, Ax_{n}-Az_{n} \rangle \\ \le & \tau \langle w_{n}-z_{n}, Ax_{n}-Az_{n} \rangle . \end{aligned}$$
(3.3)

Substituting inequality (3.3) in inequality (3.2) and using the Lipschitz condition of A and Lemma 2.6, we have

$$\begin{aligned} \phi (p,w_{n}) \leq &\phi (p,x_{n})-\phi (w_{n},z_{n})-\phi (z_{n},x_{n})+2 \tau \langle w_{n}-z_{n},Ax_{n}-Az_{n} \rangle \\ \leq &\phi (p,x_{n})-\phi (w_{n},z_{n})-\phi (z_{n},x_{n})+2\tau L \Vert w_{n}-z_{n} \Vert \Vert x_{n}-z_{n} \Vert \\ \leq &\phi (p,x_{n})-\phi (w_{n},z_{n})-\phi (z_{n},x_{n})+ \frac{\tau L}{c} \bigl(\phi (w_{n},z_{n})+\phi (z_{n},x_{n}) \bigr) \\ =&\phi (p,x_{n})- \biggl(1-\frac{\tau L}{c} \biggr) \bigl(\phi (w_{n},z_{n})+ \phi (z_{n},x_{n}) \bigr). \end{aligned}$$
(3.4)

Also, by a result of Blum and Oettli [3] and by Lemma 2.12(2)(e), we have

$$\begin{aligned} \phi (p,v_{n}) \leq &\phi (p,y_{n})-\phi (v_{n},y_{n}) \\ \leq &\phi (p,x_{n})-\phi (y_{n},x_{n})-\phi (v_{n},y_{n}). \end{aligned}$$
(3.5)

Now, using the recursion formula (3.1), property \(P_{2}\), inequalities (3.4) and (3.5), we have

$$\begin{aligned} \phi (p,x_{n+1}) =&\phi \bigl(p,J^{-1} \bigl(\alpha _{n} Jx_{0}+(1- \alpha _{n}) \bigl[\beta Jv_{n}+(1-\beta )Jw_{n} \bigr] \bigr) \bigr) \\ \le &\alpha _{n}\phi (p,x_{0})+(1-\alpha _{n}) \bigl[\beta \phi (p,v_{n})+(1- \beta )\phi (p,w_{n}) \bigr] \\ \le &\alpha _{n}\phi (p,x_{0})+(1-\alpha _{n}) \beta \bigl[\phi (p,x_{n})- \phi (y_{n},x_{n})- \phi (v_{n},y_{n}) \bigr] \\ & {} + (1-\alpha _{n}) (1-\beta ) \biggl[\phi (p,x_{n})- \biggl(1- \frac{\tau L}{c} \biggr) \bigl(\phi (w_{n},z_{n})+ \phi (z_{n},x_{n}) \bigr) \biggr] \\ \le &\alpha _{n}\phi (p,x_{0})+(1-\alpha _{n}) \phi (p,x_{n}) -(1- \alpha _{n})\beta \bigl[\phi (y_{n},x_{n})+\phi (v_{n},y_{n}) \bigr] \\ & {} - (1-\alpha _{n}) (1-\beta ) \biggl(1-\frac{\tau L}{c} \biggr) \bigl[\phi (w_{n},z_{n})+\phi (z_{n},x_{n}) \bigr] \end{aligned}$$
(3.6)
$$\begin{aligned} \le &\alpha _{n}\phi (p,x_{0})+(1-\alpha _{n}) \phi (p,x_{n}) \\ \le & \max \bigl\{ \phi (p,x_{0}), \phi (p,x_{n}) \bigr\} . \end{aligned}$$
(3.7)

Hence, by induction, we have that \(\phi (p,x_{n})\le \phi (p,x_{0})\), \(\forall n\ge 0\), which implies that \(\{\phi (p,x_{n})\}\) is bounded. By property \(P_{1}\), \(\{x_{n}\}\) is bounded. Consequently, \(\{w_{n}\}\), \(\{v_{n}\}\), \(\{y_{n}\}\), and \(\{z_{n}\}\) are bounded.

Step 2. We show that \(\{x_{n}\}\) converges strongly to a point \(u:=\Pi _{\Omega }x_{0}\).

Two cases arise.

Case 1. There exists \(N_{0}\in \mathbb{N}\) such that \(\phi ( u,x_{n})\ge \phi ( u,x_{n+1})\), \(\forall n\ge N_{0}\).

This implies that \(\lim_{n\rightarrow \infty } \phi ( u,x_{n})\) exists.

Claim 1. \(\lim_{n\rightarrow \infty } \Vert y_{n}-x_{n} \Vert = \lim_{n\rightarrow \infty } \Vert v_{n}-y_{n} \Vert = \lim_{n\rightarrow \infty } \Vert w_{n}-z_{n} \Vert = \lim_{n\rightarrow \infty } \Vert z_{n}-x_{n} \Vert = \lim_{n\rightarrow \infty } \Vert x_{n+1}-x_{n} \Vert =0\).

Setting \(\sigma :=(1-\beta )(1-\alpha _{n}) (1-\frac{\tau L}{c} )>0\) and \(\xi :=(1-\alpha _{n})\beta >0\), from inequality (3.6), we have

$$\begin{aligned}& \phi (y_{n},x_{n})+\phi (v_{n},y_{n}) \le {\xi }^{-1} \bigl(\phi ( u,x_{n}) - \phi ( u,x_{n+1})+ \alpha _{n}\phi (u,x_{0}) \bigr), \end{aligned}$$
(3.8)
$$\begin{aligned}& \phi (w_{n},z_{n})+\phi (z_{n},x_{n}) \le {\sigma }^{-1} \bigl(\phi ( u,x_{n}) - \phi ( u,x_{n+1})+ \alpha _{n}\phi (u,x_{0}) \bigr) . \end{aligned}$$
(3.9)

Using the condition on \(\{\alpha _{n}\}\) and taking limit on both sides of inequalities (3.8) and (3.9), we have

$$ \lim_{n\rightarrow \infty }\phi ( y_{n},x_{n})= \lim_{n\rightarrow \infty }\phi ( v_{n},y_{n})= \lim _{n\rightarrow \infty }\phi ( w_{n},z_{n})= \lim _{n\rightarrow \infty }\phi ( z_{n},x_{n})=0. $$
(3.10)

By Lemma 2.7 and equation (3.10), we obtain that

$$ \lim_{n\rightarrow \infty } \Vert y_{n}-x_{n} \Vert = \lim_{n\rightarrow \infty } \Vert v_{n}-y_{n} \Vert = \lim_{n\rightarrow \infty } \Vert w_{n}-z_{n} \Vert = \lim_{n\rightarrow \infty } \Vert z_{n}-x_{n} \Vert =0. $$
(3.11)

Combining equation (3.11), recursion formula (3.1), the uniform continuity of J and \(J^{-1}\) on bounded sets, and the triangle inequality, we have

$$ \lim_{n\rightarrow \infty } \Vert v_{n}-x_{n} \Vert = \lim_{n\rightarrow \infty } \Vert w_{n}-x_{n} \Vert = \lim_{n\rightarrow \infty } \Vert x_{n+1}-x_{n} \Vert =0. $$
(3.12)

Claim 2. We show that \(\Omega _{w}(x_{n}) \subset \Omega \), where \(\Omega _{w}(x_{n})\) denotes the set of weak subsequential limits of \(\{x_{n}\}\).

First, we show that \(\Omega _{w}(x_{n}) \subset VI(A,\mathcal{D})\).

Let \(x^{*}\in \Omega _{w}(x_{n})\) and \(\{x_{n_{j}}\}\) be a subsequence of \(\{x_{n}\}\) such that \(x_{n_{j}}\rightharpoonup x^{*}\) as \(j\rightarrow \infty\).

Applying the weak convergence of \(\{x_{n_{j}}\}\) and equation (3.11), we have that \(y_{n_{j}}\rightharpoonup x^{*}\) as \(j\rightarrow \infty \).

Let

$$ \mathcal{R}x= \textstyle\begin{cases} Ax+N_{\mathcal{D}}(x) & \textrm{if } x \in \mathcal{D}, \\ \emptyset & \textrm{if } x \notin \mathcal{D}, \end{cases} $$

be as defined in Lemma 2.8. Then \(\mathcal{R}\) is maximal monotone, and \(0\in \mathcal{R}x \) ⇔ \(x\in VI(A,\mathcal{D})\). It is known that if \(\mathcal{R}\) is maximal monotone and \((x,v^{*})\in \mathcal{Q}\times \mathcal{Q}^{*} \) satisfies \(\langle x-y, v^{*}-y^{*} \rangle \geq 0 \) for all \((y,y^{*}) \in G(\mathcal{R}) \), where \(G(\mathcal{R})\) denotes the graph of \(\mathcal{R}\), then \(v^{*} \in \mathcal{R}x \).

Claim. \((x^{*},0)\in G(\mathcal{R})\).

Let \((x,z^{*}) \in G(\mathcal{R})\). It suffices to show that \(\langle x-x^{*}, z^{*} \rangle \geq 0 \).

Now, \((x,z^{*}) \in G(\mathcal{R})\) implies \(z^{*} \in \mathcal{R}x = Ax+N_{\mathcal{D}}(x)\), so that \(z^{*}-Ax \in N_{\mathcal{D}}(x)\).

This implies that \(\langle x-t, z^{*} - Ax \rangle \geq 0\), \(\forall t \in \mathcal{D}\). In particular, \(\langle x-z_{n} , z^{*} - Ax \rangle \geq 0 \).

But \(z_{n}=\Pi _{\mathcal{D}}J^{-1} (Jx_{n}-\tau Ax_{n} )\), \(\forall n \ge 0\), and \(x\in \mathcal{D}\). By the characterization of the generalized projection, we have

$$ \langle z_{n}-x,Jx_{n}-\tau Ax_{n} -Jz_{n} \rangle \ge 0. $$

This implies that

$$ \biggl\langle x-z_{n}, \frac{Jz_{n}-Jx_{n}}{\tau } + Ax_{n} \biggr\rangle \geq 0, \quad \forall n \geq 0. $$
(3.13)

Using inequality (3.13) and the fact that \(\langle x-z_{n} , z^{*} - Ax \rangle \geq 0\), we get that

$$\begin{aligned} \bigl\langle x-z_{n_{j}}, z^{*} \bigr\rangle &\geq \langle x-z_{n_{j}}, Ax \rangle \geq \langle x-z_{n_{j}}, Ax \rangle - \biggl\langle x-z_{n_{j}}, \frac{Jz_{n_{j}}-Jx_{n_{j}}}{\tau } + Ax_{n_{j}} \biggr\rangle \\ &\geq \langle x-z_{{n_{j}}}, Az_{{n_{j}}}- Ax_{{n_{j}}} \rangle - \biggl\langle x-z_{{n_{j}}}, \frac{Jz_{{n_{j}}}-Jx_{{n_{j}}}}{\tau } \biggr\rangle . \end{aligned}$$

Here the last inequality follows from the monotonicity of A. Applying the Lipschitz continuity of A, equation (3.11), and the uniform continuity of J on bounded subsets of \(\mathcal{Q}\), and letting \(j\to \infty \), we have

$$ \bigl\langle x-x^{*}, z^{*} \bigr\rangle \geq 0, $$

Since \(\mathcal{R}\) is maximal monotone, this implies that \(0\in \mathcal{R}x^{*}\), that is, \(x^{*}\in VI(A,\mathcal{D})\). Hence \(\Omega _{w}(x_{n}) \subset VI(A,\mathcal{D})\).

Next, we show that \(\Omega _{w}(x_{n}) \subset F_{J}(S)\).

Since \(\lim_{n\rightarrow \infty } \Vert y_{n}-x_{n} \Vert =0\) and J is uniformly continuous on bounded sets, and also \(\{r_{n}\}\subset \mathopen[a,\infty \mathclose[\) by assumption, we get that \(\lim_{n\rightarrow \infty } \frac{ \Vert Jy_{n} -J x_{n} \Vert }{r_{n}}=0\). But \(y_{n}=T_{r_{n}}^{S}x_{n}\). By Lemma 2.12(2), we have

$$ \langle y-y_{n},Sy_{n} \rangle - \frac{1}{r_{n}} \bigl\langle y-y_{n},(1+r_{n})Jy_{n}-Jx_{n} \bigr\rangle \le 0, \quad \forall y\in \mathcal{D}. $$
(3.14)

Let \(\alpha \in \mathopen]0,1\mathclose]\) and \(y\in \mathcal{D}\). Then \(y_{\alpha }=\alpha y+(1-\alpha )x^{*}\in \mathcal{D}\). By inequality (3.14), Definition 2.3, and for some constant \(M_{0}>0\), we get that

$$\begin{aligned} \langle y_{n_{j}}-y_{\alpha },Sy_{\alpha } \rangle \ge & \langle y_{n_{j}}-y_{\alpha },Sy_{\alpha } \rangle + \langle y_{\alpha }-y_{n_{j}},Sy_{n_{j}} \rangle - \frac{1}{r_{n_{j}}} \bigl\langle y_{\alpha }-y_{n_{j}},(1+r_{n_{j}})Jy_{n_{j}}-Jx_{n_{j}} \bigr\rangle \\ =& \langle y_{n_{j}}-y_{\alpha },Sy_{\alpha }-Sy_{n_{j}} \rangle - \frac{1}{r_{n_{j}}} \bigl\langle y_{\alpha }-y_{n_{j}},(1+r_{n_{j}})Jy_{n_{j}}-Jx_{n_{j}} \bigr\rangle \\ \ge & \langle y_{n_{j}}-y_{\alpha },Jy_{\alpha }-Jy_{n_{j}} \rangle - \frac{1}{r_{n_{j}}} \bigl\langle y_{\alpha }-y_{n_{j}},(1+r_{n_{j}})Jy_{n_{j}}-Jx_{n_{j}} \bigr\rangle \\ \ge & \langle y_{n_{j}}-y_{\alpha },Jy_{\alpha } \rangle -M_{0} \frac{ \Vert Jy_{n_{j}}-Jx_{n_{j}} \Vert }{r_{n_{j}}}. \end{aligned}$$
(3.15)

Taking limit on both sides of inequality (3.15), we have

$$ \bigl\langle x^{*}-y_{\alpha },Sy_{\alpha } \bigr\rangle \ge \bigl\langle x^{*}-y_{\alpha },Jy_{\alpha } \bigr\rangle . $$
(3.16)

From inequality (3.16), we have

$$ \bigl\langle x^{*}-y,S \bigl(x^{*}+\alpha \bigl(y-x^{*} \bigr) \bigr) \bigr\rangle \ge \bigl\langle x^{*}-y,J \bigl(x^{*}+ \alpha \bigl(y-x^{*} \bigr) \bigr) \bigr\rangle . $$
(3.17)

Using the fact that S is continuous and J is uniformly continuous on bounded subsets of \(\mathcal{Q}\), letting \(\alpha \downarrow 0\), we get from inequality (3.17) that

$$ \bigl\langle x^{*}-y,Sx^{*} \bigr\rangle \ge \bigl\langle x^{*}-y,Jx^{*} \bigr\rangle , \quad \forall y \in \mathcal{D} \quad \iff \quad 0\ge \bigl\langle x^{*}-y,Jx^{*}-Sx^{*} \bigr\rangle , \quad \forall y\in \mathcal{D}. $$

Set \(y:=J^{-1}(Sx^{*})\). Then the inequality above gives \(\langle x^{*}-J^{-1}(Sx^{*}),Jx^{*}-Sx^{*} \rangle \le 0\), while the monotonicity of \(J^{-1}\) gives the reverse inequality. Hence

$$ \bigl\langle x^{*}-J^{-1} \bigl(Sx^{*} \bigr),Jx^{*}-Sx^{*} \bigr\rangle =0, $$
(3.18)

which, by the strict convexity of \(\mathcal{Q}^{*}\) (so that \(J^{-1}\) is strictly monotone), implies that \(Sx^{*}=Jx^{*}\). Thus, \(x^{*}\in F_{J}(S)\), which implies that \(\Omega _{w}(x_{n})\subset F_{J}(S)\).

Finally, we show that \(\Omega _{w}(x_{n}) \subset EP(\Theta )\).

Since \(\lim_{n\rightarrow \infty } \Vert v_{n}-y_{n} \Vert =0\) and J is uniformly continuous on bounded sets, and also \(\{r_{n}\}\subset \mathopen[a,\infty\mathclose[\) by assumption, we get that \(\lim_{n\rightarrow \infty } \frac{ \Vert Jv_{n} -J y_{n} \Vert }{r_{n}}=0\). But \(v_{n}=K_{r_{n}}^{\Theta }y_{n}\). By a result of Blum and Oettli [3], we have

$$ \Theta (v_{n},y)+\frac{1}{r_{n}}\langle y-v_{n},Jv_{n}- Jy_{n}\rangle \ge 0, \quad \forall y\in \mathcal{D}. $$
(3.19)

By \(A_{2}\), we have that \(\frac{1}{r_{n_{j}}}\langle y-v_{n_{j}}, Jv_{n_{j}} -Jy_{n_{j}}\rangle \ge \Theta (y,v_{n_{j}})\). Since, by \(A_{4}\), the map \(w\mapsto \Theta (y,w)\) is convex and lower semi-continuous, hence weakly lower semi-continuous, and since \(v_{n_{j}}\rightharpoonup x^{*}\), we obtain from the above inequality that \(0\ge \Theta (y,x^{*})\), \(\forall y\in \mathcal{D}\). For \(\alpha \in \mathopen]0,1\mathclose]\) and \(y\in \mathcal{D}\), letting \(y_{\alpha }=\alpha y+(1-\alpha )x^{*}\), then \(y_{\alpha }\in \mathcal{D}\), since \(\mathcal{D}\) is closed and convex. Hence,

$$ 0\ge \Theta \bigl(y_{\alpha },x^{*} \bigr), \quad \forall y\in \mathcal{D}. $$

By \(A_{1}\) and \(A_{4}\), we have

$$ \begin{aligned}[b] 0={}&\Theta (y_{\alpha },y_{\alpha })\le \alpha \Theta (y_{\alpha },y) +(1- \alpha )\Theta \bigl(y_{\alpha },x^{*} \bigr) \le \alpha \Theta (y_{\alpha },y)\\\le {}&\Theta \bigl(x^{*}+ \alpha \bigl(y-x^{*} \bigr),y \bigr).\end{aligned} $$
(3.20)

Letting \(\alpha \downarrow 0\), by \(A_{3}\), we obtain that \(\Theta (x^{*},y)\ge 0\). Hence, \(\Omega _{w}(x_{n}) \subset EP(\Theta )\). Using this and the fact that \(\Omega _{w}(x_{n}) \subset VI(A,\mathcal{D})\) and \(\Omega _{w}(x_{n}) \subset F_{J}(S)\), we conclude that

$$ x^{*}\in \Omega :=F_{J}(S)\cap VI(A,\mathcal{D})\cap EP( \Theta ). $$

Claim 3. We show that \(\{x_{n}\}\) converges strongly to the point \(u:=\Pi _{\Omega }x_{0}\).

Since \(\{x_{n}\}\) is bounded, there exists a subsequence \(\{x_{n_{j}}\}\) of \(\{x_{n}\}\) such that \(x_{n_{j}}\rightharpoonup w\) (so that \(w\in \Omega \) by Claim 2) and

$$ \limsup_{n\to \infty } \langle x_{n}-u,Jx_{0}-Ju \rangle =\lim_{j \to \infty }\langle x_{n_{j}}-u,Jx_{0}-Ju \rangle =\langle w-u,Jx_{0}-Ju \rangle . $$
(3.21)

Now, applying Lemma 2.11, inequalities (3.4) and (3.5), equation (3.12), and a suitable constant \(M_{0}>0\), we have

$$\begin{aligned} \phi (u,x_{n+1}) =&V \bigl(u, \alpha _{n} Jx_{0}+(1-\alpha _{n}) \bigl[ \beta Jv_{n}+(1- \beta )Jw_{n} \bigr] \bigr) \\ \le &V \bigl(u,\alpha _{n} Ju+(1-\alpha _{n}) \bigl[\beta Jv_{n}+(1-\beta )Jw_{n} \bigr] \bigr)+2\alpha _{n}\langle x_{n+1}-u,Jx_{0}-Ju\rangle \\ \le &(1-\alpha _{n})V \bigl(u,\beta Jv_{n}+(1-\beta )Jw_{n} \bigr)+2\alpha _{n} \langle x_{n+1}-u,Jx_{0}-Ju \rangle \\ \le &(1-\alpha _{n}) \bigl[\beta V(u,Jv_{n})+(1-\beta )V(u,Jw_{n}) \bigr]+2 \alpha _{n}\langle x_{n+1}-u,Jx_{0}-Ju\rangle \\ =&(1-\alpha _{n}) \bigl[\beta \phi (u,v_{n})+(1-\beta ) \phi (u,w_{n}) \bigr]+2 \alpha _{n}\langle x_{n+1}-u,Jx_{0}-Ju\rangle \\ \le &(1-\alpha _{n})\phi (u,x_{n})+2\alpha _{n} \bigl(\langle x_{n}-u,Jx_{0}-Ju \rangle + \Vert x_{n+1}-x_{n} \Vert M_{0} \bigr) . \end{aligned}$$
(3.22)

By inequality (3.21), Lemmas 2.5 and 2.9, it follows from inequality (3.22) that \(\lim_{n\rightarrow \infty }\phi (u, x_{n})= 0\). Hence, by Lemma 2.7, we get that \(\lim_{n\rightarrow \infty } \Vert x_{n}-u \Vert = 0\).

Case 2. There exists a subsequence \(\{x_{m_{j}}\} \subset \{x_{n}\}\) such that \(\phi ( u,x_{m_{j}+1})> \phi ( u,x_{m_{j}})\) for all \(j\in \mathbb{N}\), where \(u:=\Pi _{\Omega }x_{0}\in \Omega \). By Lemma 2.10, there exists a nondecreasing sequence \(\{n_{i}\}\subset \mathbb{N}\) such that \(\lim_{i \to \infty } n_{i}=\infty \) and the following inequalities hold:

$$ \phi (u,x_{n_{i}}) \leq \phi (u,x_{n_{i}+1}) \quad \textrm{and} \quad \phi (u,x_{i})\leq \phi (u,x_{n_{i}+1}) \quad \textrm{for all } i\in \mathbb{N}. $$

Now, from inequality (3.6), we have

$$\begin{aligned} \phi (u,x_{n_{i}}) \le &\phi (u,x_{n_{i}+1}) \\ \le &\alpha _{n_{i}}\phi (u,x_{0})+(1-\alpha _{n_{i}}) \phi (u,x_{n_{i}}) -(1- \alpha _{n_{i}})\beta \bigl[\phi (y_{n_{i}},x_{n_{i}})+\phi (v_{n_{i}},y_{n_{i}}) \bigr] \\ & {} - (1-\alpha _{n_{i}}) (1-\beta ) \biggl(1-\frac{\tau L}{c} \biggr) \bigl[\phi (w_{n_{i}},z_{n_{i}})+\phi (z_{n_{i}},x_{n_{i}}) \bigr]. \end{aligned}$$
(3.23)

From inequality (3.23), with \(\sigma =(1-\beta )(1-\alpha _{n_{i}}) (1-\frac{\tau L}{c} )>0\) and \(\xi =(1-\alpha _{n_{i}})\beta >0\), we have

$$\begin{aligned}& \phi (y_{n_{i}},x_{n_{i}})+\phi (v_{n_{i}},y_{n_{i}}) \leq \xi ^{-1} \alpha _{n_{i}}\phi (u,x_{0}), \\& \phi (w_{n_{i}},z_{n_{i}})+\phi (z_{n_{i}},x_{n_{i}}) \le \sigma ^{-1} \alpha _{n_{i}}\phi (u,x_{0}). \end{aligned}$$
(3.24)

Since \(\alpha _{n_{i}} \to 0\) as \(i\to \infty \), we get that

$$ \lim_{i\rightarrow \infty }\phi (y_{n_{i}},x_{n_{i}})= \lim _{i\rightarrow \infty }\phi (v_{n_{i}},y_{n_{i}})= \lim _{i\rightarrow \infty }\phi (w_{n_{i}},z_{n_{i}})= \lim _{i\rightarrow \infty }\phi (z_{n_{i}},x_{n_{i}})=0. $$

Using arguments similar to those in Case 1 above, we obtain the following:

  1. (1)

    \(\lim_{i\rightarrow \infty } \Vert y_{n_{i}}-x_{n_{i}} \Vert = \lim_{i\rightarrow \infty } \Vert v_{n_{i}}-y_{n_{i}} \Vert = \lim_{i\rightarrow \infty } \Vert w_{n_{i}}-z_{n_{i}} \Vert = \lim_{i\rightarrow \infty } \Vert z_{n_{i}}-x_{n_{i}} \Vert = \lim_{i\rightarrow \infty } \Vert x_{n_{i}+1}-x_{n_{i}} \Vert =0\);

  2. (2)

    \(\Omega _{w}(x_{n_{i}})\subset \Omega :=F_{J}(S)\cap VI(A,\mathcal{D}) \cap EP(\Theta )\).

Next, we show that \(\{x_{i}\}\) converges strongly to the point \(u:=\Pi _{\Omega }x_{0}\). Since \(\{x_{n_{i}}\}\) is bounded, there exists a subsequence \(\{x_{n_{i_{j}}}\}\) of \(\{x_{n_{i}}\}\) such that \(x_{n_{i_{j}}}\rightharpoonup z \) as \(j\to \infty \) and

$$ \limsup_{i \to \infty } \langle x_{n_{i}}-u, Jx_{0}-Ju \rangle = \lim_{j\to \infty } \langle x_{n_{i_{j}}}-u,Jx_{0}-Ju \rangle = \langle z-u, Jx_{0}-Ju\rangle . $$
(3.25)

From inequality (3.22) and Lemma 2.10, we get that

$$\begin{aligned} \phi (u,x_{n_{i}+1}) \le &(1-\alpha _{n_{i}})\phi (u,x_{n_{i}})+2 \alpha _{n_{i}} \bigl(\langle x_{{n_{i}}}-u,Jx_{0}-Ju \rangle + \Vert x_{n_{i}+1}-x_{n_{i}} \Vert M_{0} \bigr) \\ \le &(1-\alpha _{n_{i}})\phi (u,x_{n_{i}+1})+2\alpha _{n_{i}} \bigl( \langle x_{{n_{i}}}-u,Jx_{0}-Ju\rangle + \Vert x_{n_{i}+1}-x_{n_{i}} \Vert M_{0} \bigr) . \end{aligned}$$

Since \(\alpha _{n_{i}} >0\), for all \(i\geq 1\), we get that

$$ \phi (u,x_{i})\le \phi (u,x_{n_{i}+1})\le 2 \bigl(\langle x_{{n_{i}}}-u,Jx_{0}-Ju \rangle + \Vert x_{n_{i}+1}-x_{n_{i}} \Vert M_{0} \bigr). $$

By Lemma 2.5 and the fact that \(\lim_{i\to \infty } \Vert x_{n_{i}+1}-x_{n_{i}} \Vert =0\), we have

$$ \limsup_{i \to \infty } \phi (u,x_{i})\leq \limsup _{i\to \infty } 2 \langle x_{n_{i}}-u,Jx_{0}-Ju \rangle + 2M_{0}\limsup_{i\to \infty } \Vert x_{n_{i}+1}-x_{n_{i}} \Vert , $$

which implies that \(\limsup_{i\to \infty } \phi (u,x_{i}) \leq 0\). By Lemma 2.7, we conclude that \(x_{i} \to u\), as \(i\to \infty \). □

Corollary 3.2

Let \(\mathcal{Q}^{*}\) be the dual space of a uniformly smooth and two-uniformly convex real Banach space \(\mathcal{Q}\). Let \(\mathcal{D}\) be a nonempty, closed, and convex subset of \(\mathcal{Q}\). Let \({A:\mathcal{D} \to \mathcal{Q}^{*}}\) be a monotone and L-Lipschitz map, \(\Theta : \mathcal{D}\times \mathcal{D}\to \mathbb{R}\) be a bifunction satisfying conditions \((A_{1})\)–\((A_{4})\), with \(K_{r_{n}}^{\Theta }\) as the resolvent map of Θ. Let \(B:\mathcal{D}\to \mathcal{Q}^{*}\) be a continuous monotone map with \(\Omega :=B^{-1}(0)\cap VI(A,\mathcal{D})\cap EP(\Theta )\neq \emptyset \) and \(\{x_{n}\}\) be a sequence generated by algorithm (3.1) with \(S:=J-B\). Assume \(\tau \in \mathopen]0,1\mathclose[\) with \(\tau <\frac{c}{L}\), \(\{r_{n}\}\subset \mathopen[a,\infty \mathclose[\) for some \(a>0\), and \(\{\alpha _{n}\} \subset \mathopen]0,1\mathclose[\) with \(\lim \alpha _{n} =0\) and \(\sum \alpha _{n}=\infty \). Then the sequence \(\{x_{n}\}\) converges strongly to the point \(\Pi _{\Omega } x_{0}\).

Proof

Set \(S:=J-B\). Then we have that S is a continuous J-pseudo-contractive map with \({\Omega :=F_{J}(S)\cap VI(A,\mathcal{D})\cap EP(\Theta )=B^{-1}(0) \cap VI(A,\mathcal{D})\cap EP(\Theta )}\). Hence, the result follows from Theorem 3.1. □

4 Application to convex optimization problem

In this section, we apply our theorem to finding a minimizer of a convex function defined on a real Banach space.

Theorem 4.1

Let \(\mathcal{Q}^{*}\) be the dual space of a uniformly smooth and two-uniformly convex real Banach space \(\mathcal{Q}\). Let \(\mathcal{D}\) be a nonempty, closed, and convex subset of \(\mathcal{Q}\). Let \({A:\mathcal{D} \to \mathcal{Q}^{*}}\) be a monotone and L-Lipschitz map, \(\Theta : \mathcal{D}\times \mathcal{D}\to \mathbb{R}\) be a bifunction satisfying conditions \((A_{1})\)–\((A_{4})\), with \(K_{r_{n}}^{\Theta }\) as the resolvent map of Θ. Let \(h:\mathcal{D}\to \mathbb{R}\) be a Fréchet differentiable and convex function whose derivative \(d h: \mathcal{D}\to \mathcal{Q}^{*}\) is a monotone and continuous map with \({\Omega :=d h^{-1}(0)\cap VI(A,\mathcal{D})\cap EP(\Theta )\neq \emptyset }\). Let \(\{x_{n}\}\) be a sequence generated by algorithm (3.1) with \(S:=J-dh\). Assume \(\tau \in \mathopen]0,1\mathclose[\) with \(\tau <\frac{c}{L}\), \(\{r_{n}\}\subset \mathopen[a,\infty \mathclose[\) for some \(a>0\), and \(\{\alpha _{n}\} \subset \mathopen]0,1\mathclose[\) with \(\lim \alpha _{n} =0\) and \(\sum \alpha _{n}=\infty \). Then the sequence \(\{x_{n}\}\) converges strongly to the point \(\Pi _{\Omega } x_{0}\).

Proof

Setting \(d h=B\) in Corollary 3.2, we see that \(J-d h\) is a continuous J-pseudo-contractive map. Furthermore, we get that \({d h^{-1}}(0)\cap VI(A,\mathcal{D})\cap EP(\Theta )= {\arg \min } _{{ y\in \mathcal{D}}} h(y)\cap VI(A,\mathcal{D}) \cap EP(\Theta )\). Therefore, the result follows from Corollary 3.2. □

5 Numerical experiment

Here, we present an example to confirm the implementability of our algorithm (3.1).

Example 1

Let \(\mathcal{Q}=L^{\mathbb{R}}_{p}([0,1])\), \(1< p\le 2\). Then \(\mathcal{Q}^{*}=L^{\mathbb{R}}_{q}([0,1])\), where \(\frac{1}{p}+\frac{1}{q}=1\). Let \(\mathcal{D}:=\overline{B_{p}(0,1)}\subset \mathcal{Q}\) (the closed unit ball of \(\mathcal{Q}\)),

$$\begin{aligned}& \Vert x \Vert _{\mathcal{L}_{p}}:={ \biggl( \int _{0}^{1} \bigl\vert x(t) \bigr\vert ^{p}\,dt \biggr)}^{\frac{1}{p}} \quad \text{and} \\& T= \biggl\{ w \in \mathcal{Q}: \int _{0}^{1} \bigl([w-z](t)[Jx-\tau Ax-Jz](t) \bigr)\,dt\le 0 \biggr\} , \end{aligned}$$

where \(A:\mathcal{D}\to \mathcal{Q}^{*}\) is a map defined by

$$ (Ax) (t)=Jx(t) \quad \text{for all }t\in [0,1]. $$

Clearly, A is monotone and L-Lipschitz, and \(VI(A,\mathcal{D})=\{0\}\).

Let \(B:\mathcal{D}\to \mathcal{Q}^{*}\) be a map defined by

$$ (Bx) (t)=(1+t)Jx(t)\quad \text{for all }t\in [0,1]. $$

Clearly, B is monotone and continuous. Define \(S:=J-B\). Therefore, S is a continuous J-pseudo-contractive map with \(F_{J}(S)=\{0\}\). Furthermore, from Lemma 2.12, we have

$$ T^{S}_{r}(x):= \biggl\{ z\in \mathcal{D}: \langle w-z,Sz\rangle - \frac{1}{r} \bigl\langle w-z,(1+r)Jz-Jx \bigr\rangle \le 0, \forall w\in \mathcal{D} \biggr\} , \quad x \in \mathcal{Q}. $$

Therefore, for any \(x\in \mathcal{Q}\), the point \(z=T^{S}_{r}(x)\) is given by

$$ \bigl(T^{S}_{r}x \bigr) (t)=J^{-1} \biggl(\frac{Jx(t)}{1+tr} \biggr), \quad x \in \mathcal{Q}. $$

Let \(\Theta : \mathcal{D}\times \mathcal{D}\rightarrow \mathbb{R}\) be a map defined by \(\Theta (x,y)=\langle y-x,Jx\rangle \), \(\forall x,y\in \mathcal{D}\).

Clearly, Θ satisfies conditions \((A_{1})\)–\((A_{4})\) and \(0\in EP(\Theta )\). Moreover, from a result of Blum and Oettli [3], we have

$$ K^{\Theta }_{r}(x):= \biggl\{ z\in \mathcal{D}: \Theta (z,y)+ \frac{1}{r}\langle y-z,Jz-Jx \rangle \ge 0, \forall y\in \mathcal{D} \biggr\} . $$

Thus, for any \(x\in \mathcal{Q}\), the point \(z=K^{\Theta }_{r}(x)\in \mathcal{D}\) satisfies

$$ \bigl\langle y,Jz(r+1)-Jx \bigr\rangle \ge \bigl\langle z,Jz(r+1)-Jx \bigr\rangle , \quad \forall y\in \mathcal{D}. $$

Hence,

$$ \bigl(K^{{\Theta }}_{r}(x) \bigr) (t)={\frac{x(t)}{(r+1)}}\quad \text{for all }t \in [0,1]. $$

Therefore, \(\Omega :=F_{J}(S)\cap VI(A,\mathcal{D})\cap EP(\Theta )=\{0\}\).

Let \(P_{T}:\mathcal{Q}\rightarrow T\) and \(P_{\mathcal{D}}:\mathcal{Q}\rightarrow \mathcal{D}\) be maps defined by

$$\begin{aligned}& P_{T}{(u)}:= \textstyle\begin{cases} u-\max \{0,\frac{\langle a,u-z\rangle }{ \Vert a \Vert ^{2}} \}a, & \textrm{if } a\neq 0, \\ u, & \textrm{if } a=0, \end{cases}\displaystyle \\& P_{\mathcal{D}}{(x)}:= \textstyle\begin{cases} x, & \textrm{if } x\in \mathcal{D}, \\ x_{0}+\frac{r}{ \Vert x-x_{0} \Vert }(x-x_{0}), & \textrm{if } x\notin \mathcal{D}, \end{cases}\displaystyle \end{aligned}$$

where \(u:=Jx-\tau A(z)\) and \(a:= Jx-\tau A(x)-Jz\).

For implementation, we choose \(p=2\), \(\beta =\frac{1}{2}\), \(\tau =0.000868\), \(\alpha _{n}=\frac{1}{(n+5)}\), and \(r_{n}=10\), \(\forall n\ge 0\).

Then we compute the \((n+1)\)th iteration as follows:

$$\begin{aligned}& \textstyle\begin{cases} x_{0}(t)=e^{-t} \quad \text{or} \quad x_{0}(t)=\sin (t), \\ z_{n}(t)= \textstyle\begin{cases} 0.999132 x_{n}(t), & \text{if } \Vert x_{n} \Vert \le 1, \\ 0.999132\frac{x_{n}(t)}{ \Vert x_{n} \Vert }, & \text{if } \Vert x_{n} \Vert >1, \end{cases}\displaystyle \\ \text{setting:} \\ a_{n}(t)=0.999132 x_{n}(t)-z_{n}(t), \\ w_{n}(t)=x_{n}(t)-0.000868z_{n}(t), \\ w_{n}(t)-z_{n}(t)=x_{n}(t)-1.000868 z_{n}(t), \\ v_{n}(t)= \frac{x_{n}(t)}{22(10t+1)}+\frac{1}{2}\cdot \textstyle\begin{cases} w_{n}(t)-\max \{ 0, \frac{\int _{0}^{1} a_{n}(t)(w_{n}(t)-z_{n}(t))\,dt}{ \Vert a_{n} \Vert ^{2}} \} \cdot a_{n}(t), & \text{if }a_{n}(t)\neq 0, \\ w_{n}(t), & \text{if }a_{n}(t)= 0, \end{cases}\displaystyle \\ x_{n+1}(t)= \frac{1}{n+5} x_{0}(t)+ (1- \frac{1}{n+5} )v_{n}(t). \end{cases}\displaystyle \end{aligned}$$
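
The following Python/NumPy sketch discretizes the scheme displayed above on a uniform grid of \([0,1]\) (with \(p=2\), \(\beta =\frac{1}{2}\), \(\tau =0.000868\), \(\alpha _{n}=\frac{1}{n+5}\), \(r_{n}=10\), and \(x_{0}(t)=e^{-t}\)); the grid size, the number of iterations, and the use of a simple Riemann sum for the \(L_{2}\) inner product are illustrative choices and are not part of the example.

```python
import numpy as np

N = 200                                   # grid size (illustrative choice)
t = np.linspace(0.0, 1.0, N)
dt = t[1] - t[0]
tau, beta, r = 0.000868, 0.5, 10.0

def inner(f, g):                          # approximate L_2[0,1] inner product
    return dt * float(np.dot(f, g))

def norm(f):                              # approximate L_2[0,1] norm
    return np.sqrt(inner(f, f))

x0 = np.exp(-t)                           # initial guess x_0(t) = e^{-t}
x = x0.copy()

for n in range(200):
    alpha = 1.0 / (n + 5)
    # z_n = P_D((1 - tau) x_n), D = closed unit ball of L_2[0,1]
    y = (1.0 - tau) * x
    z = y if norm(y) <= 1.0 else y / norm(y)
    # half-space data: a_n = (1 - tau) x_n - z_n, u_n = x_n - tau z_n
    a = (1.0 - tau) * x - z
    u = x - tau * z
    # w_n = P_T(u_n), T_n = {w : <w - z_n, a_n> <= 0}
    na2 = inner(a, a)
    s = inner(a, u - z)
    w = u - (max(0.0, s) / na2) * a if na2 > 0.0 else u
    # v_n = K^Theta_r T^S_r x_n, using the closed forms displayed above
    v = x / ((r + 1.0) * (1.0 + r * t))
    # Halpern step with beta = 1/2
    x = alpha * x0 + (1.0 - alpha) * (beta * v + (1.0 - beta) * w)

print("L_2 norm of x_n after 200 iterations:", norm(x))
```

Since \(\Omega =\{0\}\), the printed norm should decrease toward zero as the number of iterations grows.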

Remark 3

Theorem 3.1 extends and improves the results in [17, 49] in the following ways:

\((a)\):

Theorem 3.1 extends the results in [17, 49] from a real Hilbert space to a uniformly smooth and two-uniformly convex real Banach space.

\((b)\):

In Theorem 3.1, a continuous J-pseudo-contractive map was studied, which contains the class of Lipschitz pseudo-contractive maps studied in [49].

\((c)\):

The theorems in [17, 49] did not consider equilibrium problems, whereas Theorem 3.1 does.

\((d)\):

Finally, the subgradient-extragradient algorithm has a computational advantage over the extragradient method proposed in [12] (see also [13]), since its second projection is onto a half-space and is given by an explicit formula.

6 Conclusion

In this paper, we constructed a new Halpern-type subgradient-extragradient iterative algorithm whose sequence approximates a common solution of some nonlinear problems in Banach spaces. Also, the theorem is applied to approximate a common solution of a variational inequality, an equilibrium problem, and a convex minimization problem. Moreover, the theorem proved is applicable in \(L_{p}\), \(l_{p}\), and \(W_{p}^{m}(\Omega )\) spaces, \(1< p\le 2\), where \(W_{p}^{m}(\Omega )\) denotes the usual Sobolev space. The analytical representations of the duality maps in these spaces, where \({p^{-1} + q^{-1} =1}\), are given in Theorem 3.1 of [55], page 36. Finally, a numerical example is given to illustrate the implementability of our algorithm.