Abstract
In this paper, we propose and analyze a hybrid iterative method for finding a common element of the set of solutions of a generalized equilibrium problem, the set of solutions of a variational inequality problem, and the set of fixed points of a relatively nonexpansive mapping in a real Banach space. Further, we prove the strong convergence of the sequences generated by the iterative scheme. Finally, we derive some consequences of our main result. Our work improves and extends several recently obtained results.
1 Introduction
Let X be a real Banach space with dual space \(X^{*}\), let \(\langle \cdot,\cdot\rangle\) be the duality pairing between X and \(X^{*}\), and let \(\|\cdot\|\) denote the norms of both X and \(X^{*}\). Let K be a nonempty closed convex subset of X, and let \(2^{X}\) denote the family of all nonempty subsets of X.
Let \(G, \xi:K\times K\to\mathbf{R}\) be bifunctions. The generalized equilibrium problem (GEP) is to find \(x \in K\) such that
We denote the solution set of GEP (1.1) by Sol(GEP(1.1)). Problem (1.1) includes fixed point problems, optimization problems, variational inequality problems, Nash equilibrium problems, etc. as particular cases. In the recent past, many iterative methods have been proposed to solve GEP (1.1); see, for example, [1–4].
For \(\xi=0\), GEP (1.1) reduces to the following equilibrium problem (EP): Find \(x\in K\) such that

$$G(x,y)\geq0 \quad\text{for all } y\in K. \qquad (1.2) $$
Problem (1.2) was introduced and studied by Blum and Oettli [5].
The variational inequality problem (VIP) is to find \(x\in K\) such that

$$\langle y-x,Sx\rangle\geq0 \quad\text{for all } y\in K, \qquad (1.3) $$

where \(S:K\to X^{*}\) is a nonlinear mapping. We denote the solution set of VIP (1.3) by Sol(VIP(1.3)).
A mapping \(S:K\to X^{*}\) is said to be
(i) monotone if \(\langle x-y, Sx-Sy \rangle\geq0\) for all \(x,y\in K\);
(ii) γ-inverse strongly monotone if there exists a positive real number γ such that \(\langle x-y, Sx-Sy \rangle\geq\gamma\|Sx-Sy\|^{2}\) for all \(x,y\in K\);
(iii) Lipschitz continuous if there exists a constant \(L>0\) such that \(\|Sx-Sy\|\leq L\|x-y\|\) for all \(x,y\in K\).
If S is γ-inverse strongly monotone, then it is Lipschitz continuous with constant \(\frac{1}{\gamma}\), that is, \(\|Sx-Sy\|\leq\frac{1}{\gamma}\|x-y\|\) for all \(x,y\in K\).
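This implication is a one-line consequence of the Cauchy–Schwarz inequality for the duality pairing; we record the computation (a standard argument, added here for completeness):

$$\gamma\|Sx-Sy\|^{2}\leq\langle x-y,Sx-Sy\rangle\leq\|x-y\|\,\|Sx-Sy\|, $$

so, dividing by \(\gamma\|Sx-Sy\|\) when \(Sx\neq Sy\) (the case \(Sx=Sy\) being trivial), we obtain \(\|Sx-Sy\|\leq\frac{1}{\gamma}\|x-y\|\).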
The fixed point problem (FPP) is to find \(x\in K\) such that

$$Tx=x, \qquad (1.4) $$

where \(T:K\to K\) is a nonlinear mapping. We denote by \(\operatorname{Fix}(T)\) the set of fixed points of T.
In 2009, Takahashi and Zembayashi [1] established weak and strong convergence theorems for finding a common solution of EP (1.2) and FPP (1.4) for a relatively nonexpansive mapping in a real Banach space. Later on, Petrot et al. [2] extended the work in [1] by using the hybrid projection method, which plays an important role in establishing strong convergence results.
Nadezhkina and Takahashi [6] proposed a convex combination of a nonexpansive mapping and the extragradient method and studied the resulting iterative scheme by the hybrid method. They proved a strong convergence theorem in a Hilbert space.
Very recently, in 2015, Nakajo [7] proposed a composition and convex combination of a relatively nonexpansive mapping and the gradient method. Further, he proved strong convergence to a common element of the sets of solutions of a variational inequality problem and a fixed point problem by using the hybrid method.
Motivated and inspired by the recent work of Takahashi and Zembayashi [1], Petrot et al. [2], Nadezhkina and Takahashi [6], and Nakajo [7], we propose an iterative scheme to find a common solution of GEP (1.1), VIP (1.3), and FPP (1.4) for a relatively nonexpansive mapping in a real Banach space. Further, by using the hybrid projection method, we prove the strong convergence of the sequences generated by the iterative algorithm, which improves and extends the corresponding results of [3, 4, 8–10].
2 Preliminaries
Now, we use the following results and definitions to prove our main result.
The normalized duality mapping \(J:X\to2^{X^{*}}\) is defined as

$$J(u)=\bigl\{u^{*}\in X^{*}:\bigl\langle u,u^{*}\bigr\rangle=\|u\|^{2}, \bigl\|u^{*}\bigr\|=\|u\| \bigr\}$$

for every \(u\in X\).
The mapping \(\rho_{X}:[0, \infty)\to[0, \infty)\) defined by

$$\rho_{X}(s)=\sup \biggl\{\frac{\|u+v\|+\|u-v\|}{2}-1: \|u\|=1, \|v\|\leq s \biggr\}$$

is called the modulus of smoothness of X. The space X is said to be smooth if the limit \(\lim_{s\to0}\frac{\|u+sv\|-\|u\|}{s}\) exists for all \(u,v\in X\) with \(\|u\|=\|v\|=1\), and X is called uniformly smooth if \(\frac{\rho_{X}(s)}{s}\to0\) as \(s\to0\). A Banach space X is said to be q-uniformly smooth if there exists a fixed constant \(c>0\) such that \(\rho_{X}(s)\leq c s^{q}\). It is well known that if X is q-uniformly smooth, then \(q\leq2\) and X is uniformly smooth. Note that if X is uniformly smooth, then J is uniformly continuous on bounded subsets of X.
The modulus of convexity of X is the function \(\delta_{X}:(0, 2]\to [0, 1]\) defined by

$$\delta_{X}(t)=\inf \biggl\{1- \biggl\|\frac{u+v}{2} \biggr\|: \|u\|\leq1, \|v\|\leq1, \|u-v\|\geq t \biggr\}$$

for \(t\in(0, 2]\). A Banach space X is said to be uniformly convex if \(\delta_{X}(t)>0\) for all \(t\in(0, 2]\). Let \(p>1\). The space X is said to be p-uniformly convex if there exists a constant \(c>0\) such that \(\delta_{X}(t)\geq ct^{p}\) for all \(t\in(0, 2]\). Note that every p-uniformly convex space is uniformly convex (for more details, see [11]).
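For a concrete illustration (a standard computation, not needed in the sequel), in a real Hilbert space H the parallelogram law yields both moduli in closed form:

$$\delta_{H}(t)=1-\sqrt{1-\frac{t^{2}}{4}}\geq\frac{t^{2}}{8},\qquad \rho_{H}(s)=\sqrt{1+s^{2}}-1\leq\frac{s^{2}}{2}, $$

so every Hilbert space is both 2-uniformly convex and 2-uniformly smooth.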
Let X be a smooth, strictly convex, and reflexive Banach space.
Following Takahashi and Zembayashi [1], a point \(x_{0}\in K\) is said to be an asymptotic fixed point of T if K contains a sequence \(\{x_{n}\}\) that converges weakly to \(x_{0}\) and such that \(\lim_{n\to\infty}\|x_{n}-Tx_{n}\|=0\). The set of asymptotic fixed points of T is denoted by \(\widehat{\operatorname{Fix}}(T)\). A mapping T from K into itself is said to be relatively nonexpansive if \(\operatorname{Fix}(T)\neq\emptyset\), \(\widehat{\operatorname{Fix}}(T)=\operatorname{Fix}(T)\), and \(\phi(x_{0},Tx)\leq\phi(x_{0},x)\) for all \(x\in K\) and \(x_{0}\in\operatorname{Fix}(T)\), where \(\phi:X\times X\to\mathbf{R}_{+}\) is the Lyapunov functional defined by

$$\phi(u,v)=\|u\|^{2}-2\langle u,Jv\rangle+\|v\|^{2} \quad\text{for all } u,v\in X. \qquad (2.1) $$
The generalized projection \(\Pi_{K}:X\to K\) is defined as

$$\Pi_{K}(x)=\arg\min_{u\in K}\phi(u,x), \quad x\in X, $$
where \(\phi(u,x)\) is defined by (2.1) (for more details, see [12]).
Lemma 2.1
Let X be a smooth, strictly convex, and reflexive Banach space, and let \(K\neq\emptyset\) be a closed convex subset of X. Then, the following hold:
(i) \(\phi(x,\Pi_{K}u)+\phi(\Pi_{K}u,u)\leq\phi(x,u)\) for all \(x\in K\), \(u\in X\).
(ii) For \(u\in X\) and \(x\in K\), we have
$$x=\Pi_{K}(u)\quad\Leftrightarrow\quad\langle x-y,Ju-Jx\rangle\geq0 \quad\textit{for all } y\in K. $$
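In the Hilbert-space case, where J is the identity and \(\Pi_{K}\) reduces to the metric projection \(P_{K}\) (see Remark 2.1(ii) below), the characterization in (ii) reads \(\langle x-y, u-x\rangle\geq0\) for all \(y\in K\). The following small Python sketch (our illustration; the function names are ours, and K is taken to be the closed unit ball) checks this inequality numerically:

```python
import math
import random

def project_unit_ball(u):
    """Metric projection P_K(u) onto the closed unit ball K = {x : ||x|| <= 1}."""
    norm = math.sqrt(sum(c * c for c in u))
    return [c / max(1.0, norm) for c in u]

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

random.seed(0)
u = [2.0, -1.0, 3.0]            # a point outside the ball
x = project_unit_ball(u)        # x = P_K(u)

# Lemma 2.1(ii) with J = I: x = P_K(u) iff <x - y, u - x> >= 0 for all y in K.
for _ in range(1000):
    y = [random.uniform(-1.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in y))
    if n > 1.0:                 # rescale so that y lies in K
        y = [c / n for c in y]
    diff_xy = [a - b for a, b in zip(x, y)]
    diff_ux = [a - b for a, b in zip(u, x)]
    assert dot(diff_xy, diff_ux) >= -1e-12
```

Since \(u\) lies outside the ball, \(P_{K}(u)=u/\|u\|\), and the inequality holds with equality only in the degenerate directions.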
Remark 2.1
([1])
(i) Using (2.1), we get
$$\bigl(\|u\|-\|v\|\bigr)^{2}\leq\phi(u,v)\leq\bigl(\|u\|+\|v\|\bigr)^{2}\quad \text{for all } u,v\in X. $$
(ii) If \(X=H\) is a real Hilbert space, then \(\phi(u,v)=\|u-v\|^{2}\), and \(\Pi_{K}=P_{K}\), the metric projection of H onto K.
(iii) If X is a smooth, strictly convex, and reflexive Banach space, then \(\phi(u,v)=0\) for \(u,v\in X\) if and only if \(u=v\).
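For the reader's convenience, (i) follows at once from the standard form of the Lyapunov functional: since \(\phi(u,v)=\|u\|^{2}-2\langle u,Jv\rangle+\|v\|^{2}\) and \(|\langle u,Jv\rangle|\leq\|u\|\|Jv\|=\|u\|\|v\|\), we get

$$\phi(u,v)\geq\|u\|^{2}-2\|u\|\|v\|+\|v\|^{2}=\bigl(\|u\|-\|v\|\bigr)^{2} \quad\text{and}\quad \phi(u,v)\leq\bigl(\|u\|+\|v\|\bigr)^{2}. $$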
Lemma 2.2
([11])
Let X be a smooth Banach space. Then, the following are equivalent:
(i) X is 2-uniformly convex.
(ii) There exists a constant \(c_{1}>0\) such that \(\|u+v\|^{2}\geq\|u\|^{2}+2\langle v,Ju \rangle+c_{1}\|v\|^{2}\) for all \(u, v\in X\).
Lemma 2.3
([11])
Let X be a 2-uniformly convex and smooth Banach space. Then \(\phi(u,v)\geq c_{1}\|u-v\|^{2}\) for all \(u, v\in X\), where \(c_{1}\) is the constant in Lemma 2.2.
Lemma 2.4
([11])
Let X be a 2-uniformly convex Banach space. Then, for all \(u, v\in X\), we have

$$\|u-v\|\leq\frac{2}{c_{1}^{2}}\|Ju-Jv\|, $$
where J is the normalized duality mapping of X, and \(0< c_{1}\leq1\).
Lemma 2.5
([10])
Let \(K\neq\emptyset\) be a closed convex subset of a smooth, strictly convex, and reflexive Banach space X, and let \(T:K\to K\) be a relatively nonexpansive mapping. Then, \(\operatorname{Fix}(T)\) is closed and convex.
Lemma 2.6
([13])
Let X be a smooth and uniformly convex Banach space, and let \(\{u_{n}\}\) and \(\{v_{n}\}\) be sequences in X such that either \(\{u_{n}\}\) or \(\{v_{n}\}\) is bounded. If \(\lim_{n\to\infty}\phi(u_{n},v_{n})=0\), then \(\lim_{n\to\infty}\|u_{n}-v_{n}\|=0\).
Lemma 2.7
([14])
Let K be a nonempty closed convex subset of a Banach space X, and let S be a monotone and hemicontinuous operator from K into \(X^{*}\). Define the mapping \(M\subset X\times X^{*}\) as

$$Mz= \begin{cases} Sz+N_{K}(z), & z\in K,\\ \emptyset, & z\notin K, \end{cases} $$

where \(N_{K}(z):=\{u\in X^{*}:\langle z-x,u\rangle\geq0, \forall x\in K\}\) is the normal cone to K at \(z \in K\). Then, M is maximal monotone, and \(M^{-1}(0)=\textit{Sol}(\textit{VIP}(\text{1.3}))\).
Lemma 2.8
Let X be a uniformly convex Banach space, and let \(r>0\). Then there exists a strictly increasing, continuous, and convex function \(g:[0,2r]\to\mathbf{R}\) such that \(g(0)=0\) and

$$\bigl\|\alpha x+(1-\alpha)y\bigr\|^{2}\leq\alpha\|x\|^{2}+(1-\alpha)\|y\|^{2}-\alpha(1-\alpha)g\bigl(\|x-y\|\bigr)$$

for all \(x,y\in B_{r}\) and \(\alpha\in[0,1]\), where \(B_{r}=\{u\in X:\|u\|\leq r\}\).
Lemma 2.9
([13])
Let X be a smooth and uniformly convex Banach space, and let \(r>0\). Then there exists a strictly increasing, continuous, and convex function \(g:[0,2r]\to\mathbf{R}\) such that \(g(0)=0\) and
The function \(F:X\times X^{*}\to\mathbf{R}\) defined by

$$F\bigl(u,u^{*}\bigr)=\|u\|^{2}-2\bigl\langle u,u^{*}\bigr\rangle+\bigl\|u^{*}\bigr\|^{2}, \quad u\in X, u^{*}\in X^{*}, $$

was studied by Alber [12]; note that \(F(u,u^{*})=\phi(u,J^{-1}u^{*})\) for \(u\in X\) and \(u^{*}\in X^{*}\).
Lemma 2.10
([12])
Let X be a reflexive, strictly convex, and smooth Banach space with dual \(X^{*}\). Then

$$F\bigl(u,u^{*}\bigr)+2\bigl\langle J^{-1}u^{*}-u,v^{*}\bigr\rangle\leq F\bigl(u,u^{*}+v^{*}\bigr) \quad\text{for all } u\in X \text{ and } u^{*},v^{*}\in X^{*}. $$
Assumption 2.1
Let G and ξ satisfy the following conditions:
(i) \(G(x,x)=0\) for all \(x \in K\);
(ii) G is monotone, that is, \(G(x,y)+G(y,x)\leq0\) for all \(x,y \in K\);
(iii) for each \(y\in K\), \(x\mapsto G(x,y)\) is weakly upper semicontinuous;
(iv) for each \(x\in K\), \(y\mapsto G(x,y)\) is convex and lower semicontinuous;
(v) \(\xi(\cdot,\cdot)\) is weakly continuous, and \(\xi(\cdot,y)\) is convex;
(vi) ξ is skew-symmetric, that is,
$$\xi(x,x)-\xi(x,y)+\xi(y,y)-\xi(y,x)\geq0 \quad\text{for all } x,y \in K. $$
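As a concrete example (our illustration, not used in the proofs), a monotone mapping \(S:K\to X^{*}\) induces a bifunction satisfying (i) and (ii): take \(G(x,y)=\langle y-x,Sx\rangle\). Then \(G(x,x)=0\), and

$$G(x,y)+G(y,x)=\langle y-x,Sx\rangle+\langle x-y,Sy\rangle=-\langle x-y,Sx-Sy\rangle\leq0. $$

With this choice of G, EP (1.2) is precisely VIP (1.3).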
Theorem 2.1
Let K be a nonempty closed and convex subset of a smooth, strictly convex, and reflexive Banach space X. Let \(G,\xi:K\times K\to\mathbf{R}\) be nonlinear mappings satisfying Assumption 2.1. For \(t>0\) and \(u\in X\), define the mapping \(\Upsilon _{t}:X\to K\) as follows:
Then, the following conclusions hold:
(i) \(\Upsilon_{t}\) is single-valued;
(ii) \(\Upsilon_{t}\) is a firmly nonexpansive mapping, that is, for all \(u_{1}, u_{2}\in X\),
$$\langle\Upsilon_{t}u_{1}-\Upsilon_{t}u_{2},J \Upsilon_{t}u_{1}-J\Upsilon _{t}u_{2} \rangle\leq\langle\Upsilon_{t}u_{1}-\Upsilon _{t}u_{2},Ju_{1}-Ju_{2} \rangle; $$
(iii) \(\operatorname{Fix}(\Upsilon_{t})=\textit{Sol}(\textit{GEP}(\text{1.1}))\);
(iv) Sol(GEP(1.1)) is closed and convex.
Proof
(i) We claim that \(\Upsilon_{t}\) is single-valued. Indeed, for \(x\in K\) and \(t>0\), let \(z_{1}, z_{2}\in\Upsilon_{t}(x)\). Then
and
Letting \(y=z_{2}\) in (2.2) and \(y=z_{1}\) in (2.3) and then adding, we have
Since G is monotone, ξ is skew-symmetric, and \(t>0\), we have
Using the strict convexity of X, we get \(z_{1}=z_{2} \). Thus, \(\Upsilon_{t}\) is single-valued.
(ii) For any \(u_{1}, u_{2}\in X\), let \(x_{1}=\Upsilon_{t}u_{1}\) and \(x_{2}=\Upsilon_{t}u_{2}\). Then
and
By putting \(y=x_{2}\) in (2.4) and \(y=x_{1}\) in (2.5) and taking their sum, we have
Using the monotonicity of G and properties of ξ, we have
Hence, we have
or
that is,
Thus, \(\Upsilon_{t}\) is a firmly nonexpansive mapping.
(iii) Let \(x\in\operatorname{Fix}(\Upsilon_{t})\). Then
and so
Thus, \(x\in\text{Sol(GEP(1.1))}\).
Let \(x\in\text{Sol(GEP(1.1))}\). Then
and so
Hence, \(x\in\operatorname{Fix}(\Upsilon_{t})\). Thus, \(\operatorname {Fix}(\Upsilon_{t})=\text{Sol(GEP(1.1))}\).
(iv) First, we show that \(\Upsilon_{t}\) is a relatively nonexpansive mapping.
Using the definition of ξ, for any \(u_{1}, u_{2}\in X\), we have
and
Since \(\Upsilon_{t}\) is firmly nonexpansive, from the above two equalities we have
Thus,
Taking \(u_{2}=u\in\operatorname{Fix}(\Upsilon_{t}) \), we have
Further, we prove that \(\widehat{\operatorname{Fix}}(\Upsilon _{t})=\text{Sol(GEP(1.1))}\).
Let \(x\in\widehat{\operatorname{Fix}}(\Upsilon_{t})\). Then there exists a sequence \(\{u_{n}\}\subset X\) such that \(u_{n}\rightharpoonup x\) and \(\lim_{n\to\infty}\|u_{n}-\Upsilon_{t}u_{n}\|=0\). Thus, \(\Upsilon_{t}u_{n}\rightharpoonup x\). Hence, we get \(x\in K\).
Since J is uniformly continuous on bounded sets, we have
From the definition of \(\Upsilon_{t}\), for any \(y\in K\), we have
Let \(y_{p}= (1-p)x+py\) for \(p\in(0,1]\). Since \(y\in K\) and \(x\in K\), we have \(y_{p}\in K\), and thus
Since ξ is weakly continuous and G is weakly lower semicontinuous in the second argument, letting \({n\to\infty}\), we get
For \(p > 0\), we have
Dividing by \(p>0\) and letting \(p\to0_{+}\), we have
This implies that \(x\in\text{Sol(GEP(1.1))}\), and hence \(\operatorname{Fix}(\Upsilon_{t})= \text{Sol(GEP(1.1))}= \widehat {\operatorname{Fix}}(\Upsilon_{t})\). Thus, \(\Upsilon_{t}\) is a relatively nonexpansive mapping. By Lemma 2.5, \(\text{Sol(GEP(1.1))}=\operatorname{Fix}(\Upsilon_{t})\) is closed and convex. □
Next, we state the following lemma, whose proof follows along the same lines as the proof of Lemma 2.9 of [1] and is therefore omitted.
Lemma 2.11
Let X, K, G, ξ, \(\Upsilon_{t}\) be the same as in Theorem 2.1, and let \(t>0\). Then, for \(x\in X\) and \(u\in \operatorname{Fix}(\Upsilon_{t})\), we have
3 Main result
Now, we prove the following convergence theorem.
Theorem 3.1
Let X be a 2-uniformly convex and uniformly smooth Banach space, and let K be a nonempty closed and convex subset of X. Let \(S:K\to X^{*}\) be a γ-inverse strongly monotone mapping with constant \(\gamma\in(0,1)\), let \(G, \xi:K\times K\to\mathbf{R}\) be nonlinear mappings satisfying Assumption 2.1, and let \(T:K\to K\) be a relatively nonexpansive mapping such that \(\Gamma :=\textit{Sol}(\textit{GEP}(\text{1.1}))\cap\textit{Sol}(\textit{VIP}(\text{1.3}))\cap\operatorname {Fix}(T) \neq\emptyset\). Let the iterative sequence \(\{x_{n}\}\) be generated as follows:
such that
where J is the normalized duality mapping on X, \(t_{n}\in(0,\infty )\), and \(\{\lambda_{n}\}\) and \(\{\theta_{n}\}\) are the sequences in \((0, \infty)\) and (0,1) satisfying the following:
(i) \(0<\liminf_{n\to\infty}\lambda_{n}\leq\limsup_{n\to\infty}\lambda_{n}<\frac{c_{1}^{2}\gamma}{2}\), where \(c_{1}\) is the constant in Lemma 2.2;
(ii) \(0<\liminf_{n\to\infty}\theta_{n}\leq\limsup_{n\to\infty}\theta_{n}<1\).
Then, \(\{x_{n}\}\) converges strongly to \(\Pi_{\Gamma} x\), where \(\Pi_{\Gamma}x\) is the generalized projection of X onto Γ.
Proof
Since T is a relatively nonexpansive mapping from K into itself, it follows from Lemma 2.5 and Theorem 2.1(iv) that Γ is closed and convex. First, we show that \(P_{n}\cap Q_{n}\) is closed and convex for all \(n\in N\cup\{0\}\). By its definition, \(Q_{n}\) is closed and convex. Further, by the definition of ϕ we observe that \(P_{n}\) is closed and
and hence \(P_{n}\) is closed and convex for all \(n \in N\cup\{0\}\). Thus, \(P_{n}\cap Q_{n}\) is closed and convex for all \(n \in N\cup\{0\}\).
Next, we show that \(\Gamma\subset P_{n}\cap Q_{n}\) and \(\{x_{n}\}\) is well defined.
Let \(x^{*}\in\Gamma\). Then \(x^{*}\in\text{Sol(VIP(1.3))}\), that is, \(\langle z_{n}-x^{*},Sz_{n} \rangle\geq\langle z_{n}-x^{*},Sx^{*} \rangle\geq0\).
Since \(x^{*}\in\Gamma\), using Lemma 2.1, we have
Thus,
for each \(n\in N\cup\{0\}\), which implies
Since \(u_{n}=\Upsilon_{t_{n}}y_{n}\) for all \(n\in N\cup\{0\}\) and \(\Upsilon_{t_{n}}\) is relatively nonexpansive, we have
Now, we estimate
By (3.1), (3.2), and (3.3) we observe that
This implies that \(x^{*}\in P_{n}\). Therefore, \(\Gamma\subset P_{n}\) for all \(n\in N\cup\{0\}\).
Next, we show by induction that \(\Gamma\subset P_{n}\cap Q_{n}\) for all \(n\in N\cup\{0\}\). Since \(Q_{0}=K\), we have \(\Gamma\subset P_{0}\cap Q_{0}\). Suppose that \(\Gamma\subset P_{k}\cap Q_{k}\) for some \(k\in N\cup\{0\}\). Then there exists \(x_{k+1}\in P_{k}\cap Q_{k}\) such that \(x_{k+1}=\Pi_{P_{k}\cap Q_{k}}x\). From the definition of \(x_{k+1}\) we have, for all \(z\in P_{k}\cap Q_{k}\),
Since \(\Gamma\subset P_{k}\cap Q_{k}\), we have
and hence \(z\in Q_{k+1}\). So, we have \(\Gamma\subset Q_{k+1}\). Therefore, we have \(\Gamma\subset P_{k+1}\cap Q_{k+1}\).
Thus, we have that \(\Gamma\subset P_{n}\cap Q_{n}\) for all \(n\in N\cup \{0\}\). This means that \(\{x_{n}\}\) is well-defined.
Further, we show that the sequence \(\{x_{n}\}\) converges strongly to \(x^{*} =\Pi_{\Gamma}x\in\Gamma\).
By the definition of \(Q_{n}\) we get \(x_{n}=\Pi_{Q_{n}}x\). Using \(x_{n}=\Pi_{Q_{n}}x\) and Lemma 2.1, we have, for all \(x^{*}\in \Gamma\subset Q_{n}\),
Thus \(\{\phi(x_{n},x)\}\) is bounded. Therefore \(\{x_{n}\}\) is bounded.
Letting \(x^{*}\in\Gamma\), we have
So, \(\{Sx_{n}\}\) is bounded.
From \(\phi(z_{n}, J^{-1}(Jx_{n}-\lambda_{n}Sx_{n}))\leq\phi(x^{*}, J^{-1}(Jx_{n}-\lambda_{n}Sx_{n}))\) we have
Denote \(M=\sup_{n\in N}\{\|x_{n}\|,\|Sx_{n}\|\}\). Now, we have
Thus \(\{z_{n}\}\) is bounded.
Since \(x_{n+1}=\Pi_{P_{n}\cap Q_{n}}x\in P_{n}\cap Q_{n}\subset Q_{n}\) and \(x_{n}=\Pi_{Q_{n}}x\), from the definition of \(\Pi_{Q_{n}}\) we have
Thus \(\{\phi(x_{n},x)\}\) is nondecreasing, so \(\lim_{n\to\infty}\phi(x_{n},x)\) exists. By the construction of \(Q_{n}\) we have \(Q_{m}\subset Q_{n}\) and \(x_{m}=\Pi_{Q_{m}}x\in Q_{n}\) for \(m\geq n\). It follows that
Letting \(m,n\to\infty\), we have \(\phi(x_{m},x_{n})\to0\), and hence, by Lemma 2.6, \(\|x_{m}-x_{n}\|\to0\) as \(m,n\to\infty\). Thus \(\{x_{n}\}\) is a Cauchy sequence. Since X is complete and K is closed and convex, \(\{x_{n}\}\) converges strongly to some \({x^{*}}\in K\) as \(n\to\infty\). From (3.4) we get
which implies
Using Lemma 2.6, we get
By Lemma 2.3 and \(x_{n+1}\in P_{n}\) we estimate
Using (3.5), (3.6), and the inequality \(\sup_{n\in N}\lambda_{n}< \frac {c_{1}^{2}\gamma}{2}\), we have
which implies
Using (3.6) and (3.7), we have
The uniform continuity of J implies that
Using the property of ϕ and Lemma 2.8, we have, for all \(x^{*}\in \Gamma\),
Next, we estimate
From this, using Lemma 2.4 and the inequality \(\|Sx\|\leq\|Sx-Sx^{*}\|\) for \(x\in K\) and \(x^{*}\in\Gamma\), we have
From (3.11) and (3.10) we have
Using (3.2) in (3.12), we have
Since \(\lambda_{n}\leq\frac{c_{1}^{2}\gamma}{2}\), we get
Now,
It follows from (3.8) and (3.9) that
Thus, from (3.14) and (3.15) we have
Using Lemma 2.9, we obtain
Since \(J^{-1}\) is uniformly norm-to-norm continuous, we have
From (3.13) we have
which implies that
Using Lemmas 2.1 and 2.10 and (3.4), we estimate
By Lemma 2.4, using the inequality \(\|Sx\|\leq\|Sx-Sx^{*}\|\) for \(x\in K\), \(x^{*}\in\Gamma\), we have
It follows from (3.17) and Lemma 2.6 that
Thus \(z_{n}\to x^{*}\) as \(n\to\infty\).
Since \(u_{n}= \Upsilon_{t_{n}}y_{n}\), using Lemma 2.11 and (3.11), we get
From this, using (3.15) and the restrictions on the sequences \(\{\theta _{n}\}\) and \(\{\lambda_{n}\}\), we get
By Lemma 2.6,
Using the uniform continuity of J, we have
which implies that \(x^{*}\in\operatorname{Fix}(T)\).
Further, we show that \(x^{*}\in\text{Sol(VIP(1.3))}\). Since \(\{x_{n}\}\) is bounded, there exists a subsequence \(\{x_{n_{k}}\}\) of \(\{x_{n}\}\) that converges weakly to \(x^{*}\). Define the mapping \(M\subset X\times X^{*}\) as

$$Mz= \begin{cases} Sz+N_{K}(z), & z\in K,\\ \emptyset, & z\notin K, \end{cases} $$

where \(N_{K}(z):=\{w\in X^{*}:\langle z-x,w\rangle\geq0, \forall x\in K\}\) is the normal cone to K at \(z \in K\). By Lemma 2.7, M is a maximal monotone operator, and \(M^{-1}(0)=\text{Sol(VIP(1.3))}\). Let \((z,w)\in \operatorname{graph}(M)\). Since \(w\in M(z)=S(z)+N_{K}(z)\), we get \(w-Sz\in N_{K}(z)\). Since \(z_{n}\in K\), we obtain
On the other hand, \(z_{n_{k}}=\Pi_{K}J^{-1}(Jx_{n_{k}}-\lambda_{n_{k}}Sx_{n_{k}})\), and using Lemma 2.1, we obtain
and thus
Therefore, it follows from the monotonicity of S, (3.21), and (3.22) that
where \(\rho=\sup_{k\in N}\{\|z-z_{n_{k}}\|\}\) and \(a<\limsup\lambda _{n}\). Taking the limit as \(k\to\infty\) and using the fact that \(\{\| z-z_{n_{k}}\|\}_{k\in N}\) is bounded, we see that \(\langle z-x^{*},w \rangle\geq0 \). Thus \(x^{*}\in\text{Sol(VIP(1.3))}\).
Next, we prove that \(x^{*}\in\text{Sol(GEP(1.1))}\).
The relation \(u_{n}=\Upsilon_{t_{n}}y_{n}\) implies that
Let \(y_{p}=(1-p)x^{*}+py\) for \(p\in(0,1]\). Since \(y\in K\) and \(x^{*}\in K\), we get \(y_{p}\in K\), and hence
Using (3.20) and \(\liminf_{n\to\infty}t_{n}>0 \), we have
Further, since ξ is weakly continuous and G is weakly lower semicontinuous in the second argument, letting \({n\to\infty}\), we get
Now, for \(p > 0\),
Dividing by \(p>0\) and letting \(p\to0_{+}\), we have
Thus, \(x^{*}\in\text{Sol(GEP(1.1))}\), and hence \(x^{*}\in\Gamma\). □
Finally, we have the following consequences of Theorem 3.1.
Corollary 3.1
Let X be a 2-uniformly convex and uniformly smooth Banach space, and let K be a nonempty closed and convex subset of X. Let \(S:K\to X^{*}\) be a γ-inverse strongly monotone mapping with constant \(\gamma\in(0,1)\), let \(G:K\times K\to \mathbf{R}\) be a nonlinear mapping satisfying Assumption 2.1(i)-(iv), and let \(T:K\to K\) be a relatively nonexpansive mapping such that \(\Gamma:=\textit{Sol}(\textit{EP}(\text{1.2}))\cap\textit{Sol}(\textit{VIP}(\text{1.3}))\cap \operatorname{Fix}(T) \neq\emptyset\). Let the iterative sequence \(\{ x_{n}\}\) be generated as follows:
such that
where J is the normalized duality mapping on X, \(t_{n}\in(0,\infty )\), and \(\{\lambda_{n}\}\) and \(\{\theta_{n}\}\) are the sequences in \((0, \infty)\) and (0,1) satisfying the following:
(i) \(0<\liminf_{n\to\infty}\lambda_{n}\leq\limsup_{n\to\infty}\lambda_{n}<\frac{c_{1}^{2}\gamma}{2}\), where \(c_{1}\) is the constant in Lemma 2.2;
(ii) \(0<\liminf_{n\to\infty}\theta_{n}\leq\limsup_{n\to\infty}\theta_{n}<1\).
Then, \(\{x_{n}\}\) converges strongly to \(\Pi_{\Gamma} x\), where \(\Pi_{\Gamma}x\) is the generalized projection of X onto Γ.
Proof
The proof follows by taking \(\xi=0\) in Theorem 3.1. □
Corollary 3.2
Let X be a 2-uniformly convex and uniformly smooth Banach space, and let K be a nonempty closed and convex subset of X. Let \(S:K\to X^{*}\) be a γ-inverse strongly monotone mapping with constant \(\gamma\in(0,1)\), and let \(T:K\to K\) be a relatively nonexpansive mapping such that \(\Gamma:= \textit {Sol}(\textit{VIP}(\text{1.3}))\cap\operatorname{Fix}(T) \neq\emptyset\). Let the iterative sequence \(\{x_{n}\}\) be generated as follows:
where J is the normalized duality mapping on X, and \(\{\lambda_{n}\} \) and \(\{\theta_{n}\}\) are sequences in \((0, \infty)\) and \((0,1)\) satisfying the following:
(i) \(0<\liminf_{n\to\infty}\lambda_{n}\leq\limsup_{n\to\infty}\lambda_{n}<\frac{c_{1}^{2}\gamma}{2}\), where \(c_{1}\) is the constant in Lemma 2.2;
(ii) \(0<\liminf_{n\to\infty}\theta_{n}\leq\limsup_{n\to\infty}\theta_{n}<1\).
Then, \(\{x_{n}\}\) converges strongly to \(\Pi_{\Gamma} x\), where \(\Pi_{\Gamma}x\) is the generalized projection of X onto Γ.
Proof
The proof follows by taking \(\xi=0\) and \(G=0\) in Theorem 3.1. □
4 Conclusion
In this paper, we proposed an iterative algorithm to find a common solution of the generalized equilibrium problem, the variational inequality problem, and the fixed point problem for a relatively nonexpansive mapping in a real Banach space. Further, using the hybrid projection method, we proved the strong convergence of the sequences generated by the iterative algorithm. Finally, we derived some consequences of our main result. The result presented in this paper improves and extends the corresponding results of [3, 4, 8–10].
References
Takahashi, W, Zembayashi, K: Strong and weak convergence theorems for equilibrium problems and relatively nonexpansive mappings in Banach spaces. Nonlinear Anal. 70, 45-57 (2009)
Petrot, N, Wattanawitoon, K, Kumam, P: A hybrid projection method for generalized mixed equilibrium problems and fixed point problems in Banach spaces. Nonlinear Anal. Hybrid Syst. 4, 631-643 (2010)
Takahashi, S, Takahashi, W: Viscosity approximation method for equilibrium problems and fixed point problems in Hilbert space. J. Math. Anal. Appl. 331, 506-515 (2007)
Djafari Rouhani, B, Farid, M, Kazmi, KR: Common solution to generalized mixed equilibrium problem and fixed point problem for a nonexpansive semigroup in Hilbert space. J. Korean Math. Soc. 53(1), 89-114 (2016)
Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123-145 (1994)
Nadezhkina, N, Takahashi, W: Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings. SIAM J. Optim. 16(4), 1230-1241 (2006)
Nakajo, K: Strong convergence for gradient projection method and relatively nonexpansive mappings in Banach spaces. Appl. Math. Comput. 271, 251-258 (2015)
Kazmi, KR, Rizvi, SH: A hybrid extragradient method for approximating the common solutions of a variational inequality, a system of variational inequalities, a mixed equilibrium problem and a fixed point problem. Appl. Math. Comput. 218, 5439-5452 (2012)
Tada, A, Takahashi, W: Strong convergence theorem for an equilibrium problem and a nonexpansive mapping. In: Takahashi, W, Tanaka, T (eds.) Nonlinear Analysis and Convex Analysis, pp. 609-617. Yokohama Publishers, Yokohama (2006)
Matsushita, S, Takahashi, W: Weak and strong convergence theorems for relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2004, 37-47 (2004)
Xu, HK: Inequalities in Banach spaces with applications. Nonlinear Anal. 16, 1127-1138 (1991)
Alber, YI: Metric and generalized projection operators in Banach spaces. In: Kartosatos, AG (ed.) Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, pp. 15-50. Marcel Dekker, New York (1996)
Kamimura, S, Takahashi, W: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13, 938-945 (2002)
Rockafellar, RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877-898 (1976)
Zălinescu, C: On uniformly convex functions. J. Math. Anal. Appl. 95, 344-374 (1983)
Acknowledgements
The authors are grateful to the referees for useful suggestions, which improved the contents of this paper. The first, second and third authors gratefully acknowledge Qassim University, represented by the Deanship of Scientific Research, for the material support for this research under the number 1971-qec-2016-1-12-s during the academic year 1437 AH/2016 AD.
Contributions
All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Farid, M., Irfan, S.S., Khan, M.F. et al. Strong convergence of gradient projection method for generalized equilibrium problem in a Banach space. J Inequal Appl 2017, 297 (2017). https://doi.org/10.1186/s13660-017-1574-x