Abstract
In this paper, we first propose a weak convergence algorithm, called the linesearch algorithm, for solving a split equilibrium problem and nonexpansive mapping (SEPNM) in real Hilbert spaces, in which the first bifunction is pseudomonotone with respect to its solution set, the second bifunction is monotone, and the fixed point mappings are nonexpansive. In this algorithm, we combine the extragradient method with an Armijo linesearch rule for solving equilibrium problems and the Mann method for finding a fixed point of a nonexpansive mapping. We then combine the proposed algorithm with a hybrid cutting technique to obtain a strong convergence algorithm for SEPNM. Special cases of these algorithms are also given.
1 Introduction
Throughout the paper, unless otherwise stated, we assume that \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\) are real Hilbert spaces endowed with inner products and induced norms denoted by \(\langle\cdot, \cdot \rangle\) and \(\| \cdot\|\), respectively, whereas \(\mathbb{H}\) refers to any of these spaces. We write \(x^{k} \to x\) or \(x^{k} \rightharpoonup x\) iff \(x^{k}\) converges strongly or weakly to x, respectively, as \(k \to \infty\). Let C, Q be nonempty closed convex subsets in \(\mathbb{H}_{1}\), \(\mathbb {H}_{2}\), respectively, and \(A: \mathbb{H}_{1} \to \mathbb{H}_{2}\) be a bounded linear operator. The split feasibility problem (SFP) in the sense of Censor and Elfving [1] is to find \(x^{*} \in C\) such that \(Ax^{*} \in Q\). It turns out that SFP provides a unified framework for the study of many significant real-world problems such as in signal processing, medical image reconstruction, intensity-modulated radiation therapy, et cetera; see, for example, [2–5]. To find a solution of SFP in finite-dimensional Hilbert spaces, a basic scheme proposed by Byrne [6], called the CQ-algorithm, is defined as follows:

$$x^{k+1} = P_{C}\bigl(x^{k} - \gamma A^{T}(I - P_{Q})Ax^{k}\bigr), \quad \gamma\in \biggl(0, \frac{2}{\Vert A\Vert ^{2}} \biggr), $$
where I is the identity mapping, and \(P_{C}\) is the metric projection onto C. Xu [7] investigated the SFP in infinite-dimensional Hilbert spaces. In this case, the CQ-algorithm becomes

$$x^{k+1} = P_{C}\bigl(x^{k} - \gamma A^{*}(I - P_{Q})Ax^{k}\bigr), $$
where \(A^{*}\) is the adjoint operator of A.
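For a computational view of the CQ-iteration above, the following Python fragment is a minimal sketch. The assumptions are illustrative and not from the paper: C and Q are taken to be boxes (so their metric projections are componentwise clipping), and the matrix A, the bounds, the starting point, and the iteration count are arbitrary demo choices.

```python
import numpy as np

# A minimal sketch of the CQ-algorithm for the split feasibility problem,
# assuming C and Q are boxes so their projections have closed form.

def project_box(x, lo, hi):
    """Metric projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def cq_algorithm(A, c_lo, c_hi, q_lo, q_hi, x0, iters=500):
    """Iterate x_{k+1} = P_C(x_k - gamma * A^T (I - P_Q) A x_k)."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # any gamma in (0, 2 / ||A||^2)
    x = x0.astype(float)
    for _ in range(iters):
        Ax = A @ x
        grad = A.T @ (Ax - project_box(Ax, q_lo, q_hi))
        x = project_box(x - gamma * grad, c_lo, c_hi)
    return x

A = np.array([[1.0, 2.0], [0.0, 1.0]])
x = cq_algorithm(A, c_lo=-1.0, c_hi=1.0, q_lo=0.0, q_hi=1.0,
                 x0=np.array([5.0, -5.0]))
# after enough iterations, x lies in C and Ax lies (approximately) in Q
```

The step size \(\gamma = 1/\|A\|^{2}\) is one admissible choice from the interval \((0, 2/\|A\|^{2})\).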
The split feasibility problem in which C or Q is the set of fixed points of mappings, or the set of common fixed points of mappings and solutions of variational inequality problems, was considered in several recent papers; see, for instance, [8–15]. Recently, Moudafi [16] (see also [17–20]) considered the split equilibrium problem (SEP), stated as follows:
Let \(f : C \times C \to\mathbb{R}\), \(g : Q \times Q \to\mathbb{R}\) be equilibrium bifunctions, that is, \(f(x, x) = g(u, u) = 0\) for all \(x \in C\) and \(u \in Q\). The split equilibrium problem takes the form

$$\text{find } x^{*} \in\operatorname{Sol}(C, f) \text{ such that } Ax^{*} \in\operatorname{Sol}(Q, g), $$
where \(\operatorname{Sol}(C, f)\) is the solution set of the following equilibrium problem (\(\operatorname{EP}(C, f)\)):

$$\text{find } \bar{x} \in C \text{ such that } f(\bar{x}, y) \geq0 \quad\text{for all } y \in C, $$
and \(\operatorname{Sol}(Q, g)\) is the solution set of the equilibrium problem \(\operatorname{EP}(Q, g)\). See [21, 22] for more detail on equilibrium problems.
For obtaining a solution of SEP, He [23] introduced an iterative method, which generates a sequence \(\{x^{k}\}\) by
Under certain conditions on the bifunctions and parameters, the author showed that \(\{x^{k}\}\) and \(\{y^{k}\}\) converge weakly to a solution of SEP, provided that f and g are monotone on C and Q, respectively.
On the other hand, many researchers have proposed numerical algorithms for finding a common element of the set of solutions of monotone equilibrium problems and the set of fixed points of nonexpansive mappings; see, for example, [24–26] and the references therein.
This paper focuses mainly on a split equilibrium problem and nonexpansive mapping involving pseudomonotone and monotone equilibrium bifunctions in real Hilbert spaces. In detail, let \(f : C \times C \to \mathbb{R}\) be a pseudomonotone bifunction with respect to its solution set, \(g : Q \times Q \to\mathbb{R}\) be a monotone bifunction, and \(S: C \to C\) and \(T: Q \to Q\) be nonexpansive mappings. The problem considered in this paper can be stated as follows (\(\operatorname{SEPNM}(C, Q, A, f, g, S, T)\) or SEPNM for short):

$$\text{find } x^{*} \in\operatorname{Sol}(C, f) \cap\operatorname{Fix}(S) \text{ such that } Ax^{*} \in\operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T), $$
where \(\operatorname{Fix}(S)\) and \(\operatorname{Fix}(T)\) are the sets of fixed points of the mappings S and T, respectively.
It should be noticed that, under the monotonicity assumptions on f and g, the solution sets \(\operatorname{Sol}(C, f)\) and \(\operatorname{Sol}(Q, g)\) of the equilibrium problems \(\operatorname{EP}(C, f)\) and \(\operatorname{EP}(Q, g)\) are closed convex sets whenever f and g are lower semicontinuous and convex with respect to their second variables. In addition, the nonexpansiveness of S and T implies that \(\operatorname{Fix}(S)\) and \(\operatorname{Fix}(T)\) are closed convex sets. Hence, \(\operatorname{Sol}(C, f) \cap\operatorname{Fix}(S)\) and \(\operatorname {Sol}(Q, g) \cap\operatorname{Fix}(T)\) are closed convex sets. However, the main difficulty is that, even though these sets are convex, they are not given explicitly as in a standard mathematical programming problem, so the projections onto them cannot be computed, and consequently the available methods (see, e.g., [2, 27, 28] and the references therein) cannot be applied to SEPNM directly.
In this paper, we first propose a weak convergence algorithm for solving SEPNM by using a combination of the extragradient method with Armijo linesearch type rule for an equilibrium problem [29] (see also [30–32] for more detail on extragradient algorithms) and the Mann method [33] (see also [34, 35]) for a fixed point problem. We then combine this algorithm with hybrid cutting technique [36] (see also [37]) to get a strong convergence algorithm for SEPNM.
The paper is organized as follows. The next section presents some preliminary results. A weak convergence algorithm and its special case are presented in Section 3. In the last section, we combine the method presented in Section 3 with the hybrid projection method for obtaining a strong convergence algorithm for SEPNM.
2 Preliminaries
Let \(\mathbb{H}\) be a real Hilbert space, and C a nonempty closed convex subset of \(\mathbb{H}\). By \(P_{C}\) we denote the metric projection operator onto C, that is,

$$P_{C}(x) = \operatorname{arg\,min}\bigl\{ \Vert y - x\Vert : y \in C\bigr\} . $$
The following well-known results will be used in the sequel.
Lemma 1
Suppose that C is a nonempty closed convex subset in \(\mathbb{H}\). Then \(P_{C}\) has the following properties:
-
(a)
\(P_{C}(x)\) is singleton and well defined for every x;
-
(b)
\(z=P_{C}(x)\) if and only if \(\langle x-z, y-z\rangle\leq0\), \(\forall y\in C\);
-
(c)
\(\Vert P_{C}(x)-P_{C}(y) \Vert^{2} \leq\langle P_{C}(x) - P_{C}(y), x-y \rangle\), \(\forall x, y \in\mathbb{H}\);
-
(d)
\(\Vert P_{C}(x)-P_{C}(y)\Vert^{2} \leq\Vert x-y \Vert^{2} - \Vert x-P_{C}(x) - y + P_{C}(y) \Vert^{2}\), \(\forall x, y \in\mathbb{H}\).
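These properties are easy to check numerically for sets whose projection is known in closed form. The sketch below is an illustrative check, not part of the paper: it takes C to be the closed unit ball in \(\mathbb{R}^{3}\), for which \(P_{C}(x) = x/\max(1, \|x\|)\), and tests properties (b), (c), and (d) at random points.

```python
import numpy as np

# Numerical check of Lemma 1 for C = closed unit ball in R^3,
# where the projection has the closed form P_C(x) = x / max(1, ||x||).

def proj_ball(x):
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(0)
x, y = 3 * rng.normal(size=3), 3 * rng.normal(size=3)
Px, Py = proj_ball(x), proj_ball(y)
z = proj_ball(3 * rng.normal(size=3))  # an arbitrary point of C

# (b): <x - P_C(x), z - P_C(x)> <= 0 for every z in C
char_b = np.dot(x - Px, z - Px)

# (c): ||P_C(x) - P_C(y)||^2 <= <P_C(x) - P_C(y), x - y>
gap_c = np.dot(Px - Py, x - y) - np.linalg.norm(Px - Py) ** 2

# (d): ||P_C(x) - P_C(y)||^2 <= ||x - y||^2 - ||x - P_C(x) - y + P_C(y)||^2
gap_d = (np.linalg.norm(x - y) ** 2
         - np.linalg.norm(x - Px - y + Py) ** 2
         - np.linalg.norm(Px - Py) ** 2)
```

All three quantities have the sign predicted by the lemma up to floating-point rounding.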
Lemma 2
Let \(\mathbb{H}\) be a real Hilbert space. Then, for all \(x, y\in\mathbb {H}\) and \(\alpha\in[0, 1]\), we have

$$\bigl\Vert \alpha x + (1-\alpha)y\bigr\Vert ^{2} = \alpha \Vert x\Vert ^{2} + (1-\alpha)\Vert y\Vert ^{2} - \alpha(1-\alpha)\Vert x - y\Vert ^{2}. $$
Lemma 3
(Opial’s condition)
For any sequence \(\{x^{k}\}\subset\mathbb{H}\) with \(x^{k}\rightharpoonup x\), we have the inequality

$$\liminf_{k \to\infty}\bigl\Vert x^{k} - x\bigr\Vert < \liminf_{k \to\infty}\bigl\Vert x^{k} - y\bigr\Vert $$
for all \(y\in\mathbb{H}\) such that \(y \neq x\).
Definition 1
We say that an operator \(T: \mathbb{H} \to\mathbb{H}\) is demiclosed at 0 if, for any sequence \(\{x^{k}\}\) such that \(x^{k} \rightharpoonup x\) and \(Tx^{k} \to0\) as \(k \to\infty\), we have \(Tx = 0\).
It is well known that, for a nonexpansive operator \(T: \mathbb{H} \to \mathbb{H}\), the operator \(I - T\) is demiclosed at 0; see [38], Lemma 2.
Now, we assume that the equilibrium bifunctions \(g : Q \times Q \to\mathbb {R}\) and \(f: C \times C \to\mathbb{R}\) satisfy the following assumptions, respectively.
Assumption A
- (A1):
-
g is monotone on Q, that is, \(g(u, v) + g(v, u) \leq 0\) for all \(u, v \in Q\);
- (A2):
-
\(g(u, \cdot)\) is convex and lower semicontinuous on Q for each \(u \in Q\);
- (A3):
-
for all \(u, v, w \in Q\),
$$\limsup_{\lambda\downarrow0}g\bigl(\lambda w+(1-\lambda)u, v\bigr)\leq g(u, v). $$
Assumption B
- (B1):
-
f is pseudomonotone on C, that is, \(f(x, y)\geq 0\) implies \(f(y, x)\leq0\) for all \(x, y\in C\);
- (B2):
-
\(f(x, \cdot)\) is convex and subdifferentiable on C for all \(x\in C\);
- (B3):
-
f is jointly weakly continuous on \(C\times C\) in the sense that, if \(x, y\in C\) and \(\{x^{k}\}, \{y^{k}\}\subset C\) converge weakly to x and y, respectively, then \(f(x^{k}, y^{k}) \to f(x, y)\) as \(k \to+\infty\).
Let φ be an equilibrium bifunction defined on \(C \times C\). For \(x, y \in C\), we denote by \(\partial_{2}\varphi(x, y)\) the subgradient of the convex function \(\varphi(x, \cdot)\) at y, that is,

$$\partial_{2}\varphi(x, y) = \bigl\{ \xi\in\mathbb{H}: \varphi(x, z) \geq\varphi(x, y) + \langle\xi, z - y\rangle, \forall z \in C \bigr\} . $$
In particular,

$$\partial_{2}\varphi(x, x) = \bigl\{ \xi\in\mathbb{H}: \varphi(x, z) \geq\langle\xi, z - x\rangle, \forall z \in C \bigr\} . $$
Let Δ be an open convex set containing C. The next lemma can be considered as an infinite-dimensional version of Theorem 24.5 in [39].
Lemma 4
([40], Proposition 4.3)
Let \(\varphi: \varDelta \times \varDelta \to\mathbb{R}\) be an equilibrium bifunction satisfying conditions (A1) on Δ and (A2) on C. Let \(\bar{x}, \bar{y} \in \varDelta \), and let \(\{x^{k}\}\), \(\{y^{k}\}\) be two sequences in Δ converging weakly to x̄, ȳ, respectively. Then, for any \(\varepsilon > 0\), there exist \(\eta>0\) and \(k_{\varepsilon } \in\mathbb{N}\) such that
for every \(k \geq k_{\varepsilon }\), where B denotes the closed unit ball in \(\mathbb{H}\).
Lemma 5
Let the equilibrium bifunction φ satisfy assumptions (A1) on Δ and (A2) on C, and \(\{x^{k} \} \subset C \), \(0 < \underline{\rho} \leq\bar{\rho} \), \(\{\rho_{k}\} \subset[\underline{\rho} , \bar{\rho}] \). Consider the sequence \(\{ y^{k}\}\) defined as

$$y^{k} = \operatorname{arg\,min} \biggl\{ \varphi\bigl(x^{k}, y\bigr) + \frac{1}{2\rho_{k}}\bigl\Vert y - x^{k}\bigr\Vert ^{2}: y \in C \biggr\} . $$
If \(\{x^{k} \}\) is bounded, then \(\{y^{k}\}\) is also bounded.
Proof
First, we show that if \(\{x^{k} \}\) converges weakly to \(x^{*}\), then \(\{ y^{k}\}\) is bounded. Indeed,
and
Therefore,
In addition, for all \(\hat{\xi}^{k} \in\partial_{2}\varphi(x^{k}, x^{k})\), we have
This implies
Hence,
Because \(\{\rho_{k}\}\) is bounded, \(\{x^{k}\}\) converges weakly to \(x^{*}\), and \(\hat{\xi}^{k} \in\partial_{2}\varphi(x^{k}, x^{k})\), Lemma 4 implies that the sequence \(\{\hat{\xi}^{k}\}\) is bounded. Combining this with the boundedness of \(\{x^{k}\}\), we get that \(\{y^{k}\}\) is also bounded.
Now let us prove Lemma 5. Suppose that \(\{y^{k}\}\) is unbounded, that is, there exists a subsequence \(\{y^{k_{i}}\} \subseteq \{y^{k}\}\) such that \(\lim_{i \to\infty}\|y^{k_{i}}\| = + \infty\). By the boundedness of \(\{x^{k}\}\) this implies that \(\{x^{k_{i}}\}\) is also bounded, and without loss of generality, we may assume that \(\{x^{k_{i}}\} \) converges weakly to some \(x^{*}\). By the same argument as before, we get that \(\{y^{k_{i}}\}\) is bounded, a contradiction. Therefore, \(\{y^{k}\}\) is bounded. □
The following lemmas are well known in the theory of monotone equilibrium problems.
Lemma 6
([21])
Let g satisfy Assumption A. Then, for all \(\alpha>0\) and \(u \in \mathbb{H}\), there exists \(w \in Q\) such that

$$g(w, v) + \frac{1}{\alpha}\langle w - u, v - w\rangle\geq0 \quad\text{for all } v \in Q. $$
Lemma 7
([41])
Under the assumptions of Lemma 6, the mapping \(T_{\alpha}^{g}\) defined on \(\mathbb{H}\) as

$$T_{\alpha}^{g}(u) = \biggl\{ w \in Q: g(w, v) + \frac{1}{\alpha}\langle w - u, v - w\rangle\geq0, \forall v \in Q \biggr\} $$

has the following properties:
-
(i)
\(T_{\alpha}^{g}\) is single-valued;
-
(ii)
\(T_{\alpha}^{g}\) is firmly nonexpansive, that is, for any \(u, v\in\mathbb{H}\),
$$\bigl\Vert T_{\alpha}^{g}(u)-T_{\alpha}^{g}(v) \bigr\Vert ^{2}\leq\bigl\langle T_{\alpha}^{g}(u)-T_{\alpha}^{g}(v), u-v\bigr\rangle ; $$ -
(iii)
\(\operatorname{Fix}(T_{\alpha}^{g}) = \operatorname{Sol}(Q, g)\);
-
(iv)
\(\operatorname{Sol}(Q, g)\) is closed and convex.
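To make \(T_{\alpha}^{g}\) concrete, consider the illustrative choice (not taken from the paper) \(Q = \mathbb{R}^{2}\) and \(g(u, v) = \langle Mu + q, v - u\rangle\) with M symmetric positive definite; g is then monotone, and the defining inequality of \(T_{\alpha}^{g}\) reduces to the linear system \((I + \alpha M)w = u - \alpha q\). The sketch below computes the resolvent and checks properties (ii) and (iii) of Lemma 7 numerically.

```python
import numpy as np

# Resolvent T_alpha^g of Lemma 7 for the monotone affine bifunction
# g(u, v) = <M u + q, v - u> on Q = R^2: (I + alpha M) w = u - alpha q.

def resolvent(M, q, alpha, u):
    return np.linalg.solve(np.eye(len(u)) + alpha * M, u - alpha * q)

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
q = np.array([1.0, -1.0])
alpha = 0.7

u, v = np.array([3.0, 0.5]), np.array([-1.0, 2.0])
Tu, Tv = resolvent(M, q, alpha, u), resolvent(M, q, alpha, v)

# Lemma 7(ii): firm nonexpansiveness ||Tu - Tv||^2 <= <Tu - Tv, u - v>
lhs = np.linalg.norm(Tu - Tv) ** 2
rhs = float(np.dot(Tu - Tv, u - v))

# Lemma 7(iii): Fix(T_alpha^g) = Sol(Q, g) = {w : M w + q = 0}
w_star = np.linalg.solve(M, -q)
```

For this g, the solution set of \(\operatorname{EP}(Q, g)\) is the single point \(w^{*} = -M^{-1}q\), and it is indeed a fixed point of the resolvent.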
Lemma 8
([23])
Under the assumptions of Lemma 7, for \(\alpha, \beta>0\) and \(u, v\in\mathbb{H}\), we have
3 A weak convergence algorithm
Algorithm 1
-
Initialization. Pick \(x^{0} \in C\) and choose the parameters \(\beta, \eta, \theta\in(0, 1)\), \(0 < \underline{\rho} \leq\bar{\rho}\), \(\{ \rho_{k} \} \subset[\underline{\rho}, \bar{\rho }]\), \(0 < \underline{\gamma} \leq\bar{\gamma} < 2\), \(\{ \gamma_{k} \} \subset[\underline{\gamma}, \bar{\gamma} ] \), \(0 < \alpha\), \(\{\alpha _{k}\} \subset [\alpha, +\infty)\), \(\mu\in(0, \frac{1}{\|A\|^{2}})\).
-
Iteration k (\(k = 0, 1, 2, \ldots \)). Having \(x^{k}\), do the following steps:
-
Step 1.
Solve the strongly convex program
$$CP\bigl(x^{k}\bigr)\quad \min \biggl\{ f\bigl(x^{k}, y\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert y-x^{k}\bigr\Vert ^{2}: y \in C \biggr\} $$to obtain its unique solution \(y^{k}\).
If \(y^{k} = x^{k}\), then set \(u^{k} = x^{k}\) and go to Step 4. Otherwise, go to Step 2.
-
Step 2.
(Armijo linesearch rule) Find \(m_{k}\) as the smallest positive integer number m such that
$$ \textstyle\begin{cases} z^{k,m} = (1 - \eta^{m})x^{k} + \eta^{m} y^{k}, \\ f(z^{k,m}, x^{k}) - f(z^{k,m}, y^{k}) \geq\frac{\theta}{2 \rho_{k}}\|x^{k} - y^{k}\|^{2}. \end{cases} $$
(3.1)

Set \(\eta_{k} = \eta^{m_{k}}\), \(z^{k} = z^{k, m_{k}}\).
-
Step 3.
Select \(\xi^{k} \in\partial_{2}f(z^{k}, x^{k})\) and compute \(\sigma_{k} = \frac{f(z^{k}, x^{k})}{\|\xi^{k}\|^{2}}\), \(u^{k} = P_{C}(x^{k} - \gamma_{k}\sigma_{k}\xi^{k})\).
-
Step 4.
$$ \textstyle\begin{cases} v^{k}=(1-\beta)u^{k}+\beta Su^{k}, \\ w^{k}=T_{\alpha_{k}}^{g}Av^{k}. \end{cases} $$
-
Step 5.
Take \(x^{k+1}=P_{C}(v^{k}+\mu A^{*}(Tw^{k}-Av^{k}))\) and go to iteration k with k replaced by \(k+1\).
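To illustrate Steps 1–3 of Algorithm 1, consider the classical special case \(f(x, y) = \langle F(x), y - x\rangle\) with a monotone mapping F, so that \(\operatorname{EP}(C, f)\) is a variational inequality: \(CP(x^{k})\) then reduces to a projection, and \(F(z^{k})\) is a subgradient of \(f(z^{k}, \cdot)\) at every point. The Python fragment below is a sketch under these assumptions; the mapping F, the box C, and the parameter values are demo choices, not prescribed by the paper.

```python
import numpy as np

# Steps 1-3 of Algorithm 1 (extragradient with Armijo linesearch) for the
# illustrative choice f(x, y) = <F(x), y - x> on a box C.

def proj_C(x, lo=-2.0, hi=2.0):
    return np.clip(x, lo, hi)

M = np.array([[2.0, 1.0], [1.0, 2.0]])  # positive definite => F is monotone
c = np.array([1.0, -1.0])

def F(x):
    return M @ x + c

def linesearch_step(x, rho=1.0, eta=0.5, theta=0.5, gamma=1.0):
    y = proj_C(x - rho * F(x))               # Step 1: y^k solves CP(x^k)
    if np.allclose(y, x):
        return x                              # x already solves EP(C, f)
    for m in range(100):                      # Step 2: Armijo rule (3.1)
        z = (1 - eta ** m) * x + eta ** m * y
        if np.dot(F(z), x - y) >= theta / (2 * rho) * np.dot(x - y, x - y):
            break
    xi = F(z)                                 # Step 3: xi^k in d_2 f(z^k, x^k)
    sigma = np.dot(xi, x - z) / np.dot(xi, xi)
    return proj_C(x - gamma * sigma * xi)

x = np.array([2.0, 2.0])
for _ in range(200):
    x = linesearch_step(x)
# x approximates the point x* with <F(x*), y - x*> >= 0 for all y in C
```

For this F, the condition \(f(z^{k,m}, x^{k}) - f(z^{k,m}, y^{k}) = \langle F(z^{k,m}), x^{k} - y^{k}\rangle\), which is what the inner loop tests; Lemma 9(a) guarantees the loop terminates.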
Lemma 9
Suppose that \(p \in\operatorname{Sol}(C, f)\), \(f(x, \cdot)\) is convex and subdifferentiable on C for all \(x \in C\) and that f is pseudomonotone on C. Then, we have:
-
(a)
The Armijo linesearch rule (3.1) is well defined;
-
(b)
\(f(z^{k}, x^{k}) > 0\);
-
(c)
\(0 \notin\partial_{2}f(z^{k}, x^{k})\);
-
(d)
$$ \bigl\Vert u^{k} - p\bigr\Vert ^{2} \leq\bigl\Vert x^{k} - p\bigr\Vert ^{2} - \gamma_{k}( 2 - \gamma_{k}) \bigl(\sigma_{k}\bigl\Vert \xi ^{k} \bigr\Vert \bigr)^{2}. $$
Proof
The proof of Lemma 9 in the case where \(\mathbb{H}_{1}\) is finite-dimensional can be found, for example, in [29]; in the infinite-dimensional case, the argument is the same, so we omit it. □
Theorem 1
Let C and Q be two nonempty closed convex subsets in \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\), respectively. Let \(S: C \to C\) and \(T: Q \to Q\) be nonexpansive mappings, and let bifunctions g and f satisfy Assumptions A and B, respectively. Let \(A: \mathbb{H}_{1} \to\mathbb {H}_{2}\) be a bounded linear operator with its adjoint \(A^{*}\). If \(\varOmega = \{x^{*} \in\operatorname{Sol}(C, f) \cap\operatorname{Fix}(S): Ax^{*} \in\operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T) \} \neq \emptyset\), then the sequences \(\{x^{k}\}\), \(\{u^{k}\}\), \(\{v^{k}\}\) converge weakly to an element \(p \in \varOmega \), and \(\{w^{k}\}\) converges weakly to \(Ap \in\operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T)\).
Proof
Let \(x^{*} \in \varOmega \). Then \(x^{*} \in\operatorname{Sol}(C, f) \cap \operatorname{Fix}(S)\) and \(Ax^{*} \in\operatorname{Sol}(Q, g) \cap \operatorname{Fix}(T)\).
From Lemma 9(d) we have
By Step 4 we get
Thus,
Assertions (iii) and (ii) in Lemma 7 imply that
Hence,
By the nonexpansiveness of the mapping T, the last inequality yields
Using (3.3), we have
By the definition of \(x^{k+1}\) we have
In combination with (3.4) and (3.2), the last inequality becomes
In view of (3.2), (3.5), and \(\mu\in(0, \frac{1}{\Vert A\Vert ^{2}})\), we get
and
Therefore, \(\lim_{k \to+\infty} \Vert x^{k}-x^{*} \Vert\) exists, and from (3.6) and (3.7) we get that
From (3.8) and the inequality
we get
Besides, Lemma 9(d) implies
Hence,
In view of (3.8), we get
In addition, by the definition of \(u^{k}\), \(u^{k} = P_{C}( x^{k} - \gamma _{k}\sigma_{k} \xi^{k} ) \). We have
So we get from (3.10) that
Using \(v^{k}=(1-\beta)u^{k} + \beta Su^{k}\), Lemma 2, and the nonexpansiveness of S, we have
Therefore,
Combining the last inequality with (3.8), we obtain that
In addition,
Therefore, we get from (3.11) and (3.13) that
Because \(\lim_{k \to+\infty}\Vert x^{k}-x^{*}\Vert\) exists, \(\{x^{k}\}\) is bounded. By Lemma 5, \(\{y^{k}\}\) is bounded, and consequently \(\{z^{k}\}\) is bounded. By Lemma 4, \(\{\xi^{k}\}\) is bounded. Step 3 and (3.10) yield
We have
so, we get from (3.1) that
Combining this with (3.15), we have
Suppose that p is a weak accumulation point of \(\{x^{k}\}\), that is, there exists a subsequence \(\{x^{k_{j}}\}\) of \(\{x^{k}\}\) such that \(x^{k_{j}}\) converges weakly to \(p \in C\) as \(j \to+\infty\). Then, it follows from (3.11) and (3.14) that \(u^{k_{j}} \rightharpoonup p\), \(v^{k_{j}} \rightharpoonup p\), and \(Av^{k_{j}} \rightharpoonup Ap\).
Since \(\lim_{k \to+\infty}\Vert w^{k}-Av^{k}\Vert=0\), we deduce that \(w^{k_{j}} \rightharpoonup Ap\). Because \(\{w^{k}\} \subset Q\) and Q is closed and convex, we have that \(Ap \in Q\).
From (3.16) we get
We now consider two distinct cases.
Case 1. \(\limsup_{i \to\infty}\eta_{k_{i}} > 0\).
In this case, there exist \(\bar{\eta} > 0\) and a subsequence of \(\{\eta _{k_{i}}\}\), denoted again by \(\{\eta_{k_{i}}\}\), such that, for some \(i_{0} > 0\), \(\eta_{{k}_{i}} > \bar{\eta}\) for all \(i \geq i_{0} \). Using this fact and (3.17), we have
Since \(x^{k} \rightharpoonup p\), (3.18) implies that \(y^{k_{i}} \rightharpoonup p\) as \(i \to\infty\).
By the definition of \(y^{k_{i}}\),
we have
so there exists \(\hat{\xi}^{k_{i}} \in\partial_{2}f(x^{k_{i}}, y^{k_{i}})\) such that
Combining this with
yields
Since
from (3.19) we get that
Letting \(i \to\infty\), by the weak continuity of f and (3.18), from (3.20) we obtain in the limit that
Hence,
which means that p is a solution of \(\operatorname{EP}(C, f)\).
Case 2. \(\lim_{i \to\infty}{\eta_{k_{i}}} = 0\).
From the boundedness of \(\{y^{k_{i}}\}\), without loss of generality, we may assume that \(y^{k_{i}} \rightharpoonup \bar{y}\) as \(i \to\infty\).
Replacing y by \(x^{k_{i}}\) in (3.19), we get
On the other hand, by the Armijo linesearch rule (3.1), for \(m_{k_{i}} - 1\), we have
Combining this with (3.21), we get
According to the algorithm, we have \(z^{k_{i}, m_{k_{i}} - 1} = (1-\eta ^{m_{k_{i}} - 1})x^{k_{i}} + \eta^{m_{k_{i}} - 1}y^{k_{i}}\). Since \(\eta^{m_{k_{i}} - 1} \to0\), \(x^{k_{i}} \) converges weakly to p, and \(y^{k_{i}}\) converges weakly to ȳ, it follows that \(z^{k_{i}, m_{k_{i}} - 1} \rightharpoonup p\) as \(i \to\infty\). Besides, \(\{\frac{1}{\rho _{k_{i}}}\|y^{k_{i}} - x^{k_{i}}\|^{2}\}\) is bounded, so without loss of generality we may assume that \(\lim_{i \to+\infty}\frac{1}{\rho _{k_{i}}}\|y^{k_{i}} - x^{k_{i}}\|^{2}\) exists. Hence, in the limit, from (3.22) we get that
Therefore, \(f(p, \bar{y}) = 0\) and \(\lim_{i \to+\infty}\|y^{k_{i}} - x^{k_{i}}\|^{2} = 0\). By Case 1 we get \(p \in\operatorname{Sol}(C, f)\).
Besides that, (3.13) implies that \(\|Su^{k_{j}} - u^{k_{j}}\| \to0\) as \(j \to\infty\); together with \(u^{k_{j}} \rightharpoonup p \) and the demiclosedness of \(I - S\), we get \(p \in\operatorname{Fix}(S)\).
Therefore,
Next, we need to show that \(Ap \in\operatorname{Sol}(Q, g) \cap \operatorname{Fix}(T)\).
Indeed, we have \(\operatorname{Sol}(Q, g)= \operatorname{Fix}(T_{\alpha}^{g})\). So, if \(T_{\alpha}^{g}Ap \neq Ap\), then, using Opial’s condition, we have
So it follows from (3.8) and Lemma 8 that
a contradiction. Thus, \(Ap \in\operatorname{Fix}(T_{\alpha}^{g}) = \operatorname{Sol}(Q, g)\).
Moreover, (3.9) shows that \(\lim_{j \to\infty} \|Tw^{k_{j}} - w^{k_{j}}\| = 0\). Combining this with \(w^{k_{j}} \rightharpoonup Ap\) and the fact that \(I - T\) is demiclosed at 0, it is immediate that \(Ap \in \operatorname{Fix}(T)\). Therefore,
From (3.23) and (3.24) we obtain that \(p \in \varOmega \).
To complete the proof, we must show that the whole sequence \(\{x^{k}\}\) converges weakly to p. Indeed, if there exists a subsequence \(\{ x^{l_{i}}\}\) of \(\{x^{k}\}\) such that \(x^{l_{i}} \rightharpoonup q\) with \(q \neq p \), then we have \(q \in \varOmega \). By Opial’s condition this yields
a contradiction. Hence, \(\{x^{k}\}\) converges weakly to p.
Combining this with (3.8), it is immediate that \(\{u^{k}\} \), \(\{ v^{k}\}\) also converge weakly to p and \(w^{k} \rightharpoonup Ap \in \operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T)\). □
A particular case of the problem SEPNM is the split equilibrium problem SEP, that is, \(S = I_{\mathbb{H}_{1}}\) and \(T = I_{\mathbb{H}_{2}}\). In this case, we have the following linesearch algorithm for SEP.
Algorithm 2
-
Initialization. Pick \(x^{0} \in C\) and choose the parameters \(\eta, \theta\in(0, 1)\), \(0 < \underline{\rho} \leq\bar {\rho}\), \(\{ \rho_{k} \} \subset[\underline{\rho}, \bar{\rho}]\), \(0 < \underline{\gamma} \leq\bar{\gamma} < 2\), \(\{ \gamma_{k} \} \subset [\underline{\gamma}, \bar{\gamma} ] \), \(0 < \alpha\), \(\{\alpha_{k}\} \subset [\alpha, +\infty)\), \(\mu\in(0, \frac{1}{\|A\|^{2}})\).
-
Iteration k (\(k = 0, 1, 2, \ldots \)). Having \(x^{k}\), do the following steps:
-
Step 1.
Solve the strongly convex program
$$CP\bigl(x^{k}\bigr)\quad \min \biggl\{ f\bigl(x^{k}, y\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert y-x^{k}\bigr\Vert ^{2}: y \in C \biggr\} $$to obtain its unique solution \(y^{k}\).
If \(y^{k} = x^{k}\), then set \(u^{k} = x^{k}\) and go to Step 4. Otherwise, go to Step 2.
-
Step 2.
(Armijo linesearch rule) Find \(m_{k}\) as the smallest positive integer number m such that
$$ \textstyle\begin{cases} z^{k,m} = (1 - \eta^{m})x^{k} + \eta^{m} y^{k}, \\ f(z^{k,m}, x^{k}) - f(z^{k,m}, y^{k}) \geq\frac{\theta}{2 \rho_{k}}\|x^{k} - y^{k}\|^{2}. \end{cases} $$

Set \(\eta_{k} = \eta^{m_{k}}\), \(z^{k} = z^{k, m_{k}}\).
-
Step 3.
Select \(\xi^{k} \in\partial_{2}f(z^{k}, x^{k})\) and compute \(\sigma_{k} = \frac{f(z^{k}, x^{k})}{\|\xi^{k}\|^{2}}\), \(u^{k} = P_{C}(x^{k} - \gamma_{k}\sigma_{k}\xi^{k})\).
-
Step 4.
\(w^{k}=T_{\alpha_{k}}^{g}Au^{k}\).
-
Step 5.
Take \(x^{k+1}=P_{C}(u^{k}+\mu A^{*}(w^{k}-Au^{k}))\) and go to iteration k with k replaced by \(k+1\).
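As a concrete sketch of Steps 4–5, take the special case \(f \equiv 0\) (so that \(y^{k} = u^{k} = x^{k}\) and \(\operatorname{Sol}(C, f) = C\)), together with the illustrative choices \(Q = \mathbb{H}_{2} = \mathbb{R}^{2}\) and \(g(u, v) = \langle Mu + q, v - u\rangle\) with M positive definite, for which \(T_{\alpha}^{g}(u) = (I + \alpha M)^{-1}(u - \alpha q)\). None of these concrete choices come from the paper; they only make the iteration computable.

```python
import numpy as np

# Steps 4-5 of Algorithm 2 with f = 0: seek x in C with Ax in Sol(Q, g),
# where Sol(Q, g) = {u : M u + q = 0} for the affine monotone g above.

def proj_C(x, lo=-5.0, hi=5.0):
    return np.clip(x, lo, hi)

def T_g(u, M, q, alpha):
    return np.linalg.solve(np.eye(len(u)) + alpha * M, u - alpha * q)

A = np.array([[1.0, 0.5], [0.0, 1.0]])
M = np.array([[3.0, 0.0], [0.0, 1.0]])
q = np.array([-3.0, 2.0])
mu = 0.9 / np.linalg.norm(A, 2) ** 2   # mu in (0, 1 / ||A||^2)
alpha = 1.0

x = np.zeros(2)
for _ in range(2000):
    w = T_g(A @ x, M, q, alpha)              # Step 4: w^k = T_alpha^g(A u^k)
    x = proj_C(x + mu * A.T @ (w - A @ x))   # Step 5
# Ax approaches the unique point of Sol(Q, g), namely -M^{-1} q = (1, -2)
```

The fixed points of this iteration are exactly the points \(x \in C\) with \(Ax \in \operatorname{Sol}(Q, g)\), in line with Corollary 1.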
The following corollary is an immediate consequence of Theorem 1.
Corollary 1
Suppose that g, f are bifunctions satisfying Assumptions A and B, respectively. Let \(A: \mathbb{H}_{1} \to\mathbb{H}_{2}\) be a bounded linear operator with its adjoint \(A^{*}\). If \(\varOmega = \{x^{*} \in\operatorname{Sol}(C, f): Ax^{*} \in\operatorname {Sol}(Q, g) \} \neq\emptyset\), then the sequences \(\{x^{k}\}\) and \(\{ u^{k}\}\) converge weakly to an element \(p \in \varOmega \), and \(\{w^{k}\}\) converges weakly to \(Ap \in\operatorname{Sol}(Q, g)\).
4 A strong convergence algorithm
Algorithm 3
-
Initialization. Pick \(x^{g} \in C_{0} = C\) and choose the parameters \(\beta, \eta, \theta\in(0, 1)\), \(0 < \underline{\rho} \leq \bar{\rho} \), \(\{ \rho_{k} \} \subset[\underline{\rho} , \bar{\rho } ]\), \(0 < \underline{\gamma} \leq\bar{\gamma} < 2 \), \(\{ \gamma_{k} \} \subset[ \underline{\gamma}, \bar{\gamma}] \), \(0 < \alpha\), \(\{\alpha _{k}\} \subset [\alpha, +\infty)\), \(\mu\in(0, \frac{1}{\|A\|^{2}})\).
-
Iteration k (\(k = 0, 1, 2, \ldots \)). Having \(x^{k}\), do the following steps:
-
Step 1.
Solve the strongly convex program
$$CP\bigl(x^{k}\bigr)\quad \min \biggl\{ f\bigl(x^{k}, y\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert y-x^{k}\bigr\Vert ^{2}: y \in C \biggr\} $$to obtain its unique solution \(y^{k}\).
If \(y^{k} = x^{k}\), then set \(u^{k} = x^{k}\) and go to Step 4. Otherwise, go to Step 2.
-
Step 2.
(Armijo linesearch rule) Find \(m_{k}\) as the smallest positive integer number m such that
$$ \textstyle\begin{cases} z^{k,m} = (1 - \eta^{m})x^{k} + \eta^{m} y^{k}, \\ f(z^{k,m}, x^{k}) - f(z^{k,m}, y^{k}) \geq\frac{\theta}{2 \rho_{k}}\|x^{k} - y^{k}\|^{2}. \end{cases} $$
(4.1)

Set \(\eta_{k} = \eta^{m_{k}}\), \(z^{k} = z^{k, m_{k}}\).
-
Step 3.
Select \(\xi^{k} \in\partial_{2}f(z^{k}, x^{k})\) and compute \(\sigma_{k} = \frac{f(z^{k}, x^{k})}{\|\xi^{k}\|^{2}}\), \(u^{k} = P_{C}(x^{k} - \gamma_{k}\sigma_{k}\xi^{k})\).
-
Step 4.
$$ \textstyle\begin{cases} v^{k}=(1-\beta)u^{k} + \beta Su^{k}, \\ w^{k}=T_{\alpha_{k}}^{g}Av^{k}. \end{cases} $$
-
Step 5.
\(t^{k}=P_{C}(v^{k}+\mu A^{*}(Tw^{k}-Av^{k}))\).
-
Step 6.
Define \(C_{k+1} = \{x \in C_{k}: \|x - t^{k}\| \leq \| x - v^{k}\| \leq\|x - x^{k}\| \}\). Compute \(x^{k+1} = P_{C_{k+1}}(x^{g})\) and go to iteration k with k replaced by \(k+1\).
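Step 6 is implementable because each constraint of the form \(\|x - a\| \leq \|x - b\|\) is a halfspace: squaring both sides gives \(\langle 2(b - a), x\rangle \leq \|b\|^{2} - \|a\|^{2}\), so \(C_{k+1}\) is an intersection of halfspaces (with C). The sketch below is an illustration not taken from the paper: it assumes \(C = \mathbb{R}^{n}\), accumulates the cuts as halfspaces, and computes \(P_{C_{k+1}}(x^{g})\) by Dykstra's alternating projection method.

```python
import numpy as np

# The hybrid cutting step: each cut ||x - a|| <= ||x - b|| is the halfspace
# <2(b - a), x> <= ||b||^2 - ||a||^2; project onto their intersection
# with Dykstra's method (one of several possible choices).

def halfspace(a, b):
    """(n, c) with {x : <n, x> <= c} = {x : ||x - a|| <= ||x - b||}."""
    return 2.0 * (b - a), float(np.dot(b, b) - np.dot(a, a))

def proj_halfspace(x, n, c):
    viol = np.dot(n, x) - c
    return x - (max(viol, 0.0) / np.dot(n, n)) * n

def dykstra(x0, halfspaces, sweeps=200):
    """Dykstra's method: projection of x0 onto an intersection of halfspaces."""
    x = x0.astype(float)
    incs = [np.zeros_like(x) for _ in halfspaces]
    for _ in range(sweeps):
        for i, (n, c) in enumerate(halfspaces):
            y = proj_halfspace(x + incs[i], n, c)
            incs[i] = x + incs[i] - y
            x = y
    return x

# two cuts of the form used in Step 6: ||x - t|| <= ||x - v|| <= ||x - xk||
t, v, xk = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([2.0, 0.0])
H = [halfspace(t, v), halfspace(v, xk)]
p = dykstra(np.array([3.0, 1.0]), H)
# here both cuts reduce to x1 <= 0.5 and x1 <= 1.5, so p = (0.5, 1.0)
```

In general, one halfspace pair is appended per iteration, so the description of \(C_{k+1}\) grows linearly with k.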
Theorem 2
Let C and Q be two nonempty closed convex subsets in \(\mathbb{H}_{1}\) and \(\mathbb{H}_{2}\), respectively. Let \(S: C \to C\); \(T: Q \to Q\) be nonexpansive mappings, and let bifunctions g and f satisfy Assumptions A and B, respectively. Let \(A: \mathbb{H}_{1} \to\mathbb {H}_{2}\) be a bounded linear operator with its adjoint \(A^{*}\). If \(\varOmega = \{x^{*} \in\operatorname{Sol}(C, f) \cap\operatorname {Fix}(S): Ax^{*} \in\operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T) \} \neq\emptyset\), then the sequences \(\{x^{k}\}\), \(\{u^{k}\}\), \(\{v^{k}\}\) converge strongly to an element \(p \in \varOmega \), and \(\{w^{k}\}\) converges strongly to \(Ap \in\operatorname{Sol}(Q, g) \cap\operatorname{Fix}(T)\).
Proof
First, we observe that the linesearch rule (4.1) is well defined by Lemma 9. Let \(x^{*} \in \varOmega \). From (3.5), (3.12), and (3.2) we have
Since \(\mu\in (0, \frac{1}{\Vert A\Vert^{2}} )\), (4.2) implies that
Since \(x^{*} \in C_{0}\), from (4.3) we get by induction that \(x^{*} \in C_{k}\) for all \(k \in\mathbb{N}^{*}\) and, consequently, \(\varOmega \subset C_{k}\) for all k.
By setting

$$D_{k} = \bigl\{ x \in\mathbb{H}_{1}: \bigl\Vert x - t^{k}\bigr\Vert \leq\bigl\Vert x - v^{k}\bigr\Vert \leq\bigl\Vert x - x^{k}\bigr\Vert \bigr\} , $$
it is clear that \(D_{k}\) is closed and convex for all k. In addition, \(C_{0} = C\) is also closed and convex, and \(C_{k+1}=C_{k} \cap D_{k}\). Hence, \(C_{k}\) is closed and convex for all k.
From the definition of \(x^{k + 1}\) we have \(x^{k+1}\in C_{k+1}\subset C_{k}\) and \(x^{k}=P_{C_{k}}(x^{g})\), so
Since \(x^{*} \in C_{k+1}\), this implies that
Thus,
Consequently, \(\{\Vert x^{k}-x^{g}\Vert\}\) is nondecreasing and bounded, so \(\lim_{k \to+\infty}\Vert x^{k}-x^{g}\Vert\) exists.
Combining this with (4.3), we obtain that \(\{t^{k}\}\) and \(\{v^{k}\}\) are also bounded.
For all \(m > n\), we have that \(x^{m} \in C_{m} \subset C_{n}\) and \(x^{n}=P_{C_{n}}(x^{g})\). Combining this fact with Lemma 1, we get
Since \(\lim_{k \to+\infty}\Vert x^{k}-x^{g}\Vert\) exists, this implies that \(\lim_{m,n \to\infty} \Vert x^{m}-x^{n} \Vert = 0\), i.e., \(\{x^{k}\} \) is a Cauchy sequence, so
By Step 6 we get
Therefore,
and
So, from (4.5), (4.6), and (4.4) we get that
In view of (4.2) and (4.7), we have
Since \(\beta\in(0, 1)\) and \(\mu\in(0, \frac{1}{\|A\|^{2}})\), we deduce from (4.8) that
In addition, from the inequality
combined with (4.9), we get
Besides, (3.11), (4.6), and \(\lim_{k \to+\infty}x^{k}=p\) imply that
Since
from (4.9) and (4.11) we get that \(\|Sp - p\| = 0\), that is, \(p \in\operatorname{Fix}(S)\).
From (3.16) we have
We now consider two distinct cases.
Case 1. \(\limsup_{k \to\infty}\eta_{k} > 0\).
Then there exist \(\bar{\eta} > 0\) and a subsequence \(\{ \eta_{k_{i}} \} \subset\{ \eta_{k} \}\) such that \(\eta_{{k}_{i}} > \bar{\eta}\) for all i. So we get from (4.12) that
Since \(x^{k} \to p\), (4.13) implies that \(y^{k_{i}} \to p\) as \(i \to\infty\).
For each \(y \in C\), we get from (3.20) that
Letting \(i \to\infty\), by the continuity of f, since \(x^{k_{i}} \to p\) and \(y^{k_{i}} \to p\), in the limit, from (4.14) we obtain that
Hence,
so p is a solution of \(\operatorname{EP}(C, f)\).
Case 2. \(\lim_{k \to\infty}{\eta_{k}} = 0\).
From the boundedness of \(\{y^{k}\}\) we deduce that there exists \(\{ y^{k_{i}}\} \subset\{y^{k}\} \) such that \(y^{k_{i}} \rightharpoonup \bar {y}\) as \(i \to\infty\).
Replacing y by \(y^{k_{i}}\) in (3.19), we get
On the other hand, by the Armijo linesearch rule (4.1), for \(m_{k_{i}} - 1\), there exists \(z^{k_{i}, m_{k_{i}}-1} \) such that
Combining this with (4.15), we get
According to the algorithm, we have \(z^{k_{i}, m_{k_{i}} - 1} = (1-\eta ^{m_{k_{i}} - 1})x^{k_{i}} + \eta^{m_{k_{i}} - 1}y^{k_{i}}\). Since \(\eta^{m_{k_{i}} - 1} \to0\), \(x^{k_{i}} \) converges strongly to p, and \(y^{k_{i}}\) converges weakly to ȳ, it follows that \(z^{k_{i}, m_{k_{i}} - 1} \to p\) as \(i \to\infty\). Besides, \(\{\frac{1}{\rho _{k_{i}}}\|y^{k_{i}} - x^{k_{i}}\|^{2}\}\) is bounded, so without loss of generality, we may assume that \(\lim_{i \to+\infty}\frac{1}{\rho _{k_{i}}}\|y^{k_{i}} - x^{k_{i}}\|^{2}\) exists. Hence, in the limit, from (4.16) we get that
Therefore, \(f(p, \bar{y}) = 0\) and \(\lim_{i \to+\infty}\|y^{k_{i}} - x^{k_{i}}\|^{2} = 0\). By Case 1 it is immediate that \(p \in\operatorname{Sol}(C, f)\). So
We obtain from (4.11) that \(\lim_{k \to+\infty}Av^{k}=Ap\). Combining this with (4.9) yields
Moreover,
In view of (4.10) and (4.18), we obtain \(\|TAp - Ap\| = 0\). Hence, \(Ap \in\operatorname{Fix}(T)\).
In addition,
where the last inequality comes from Lemma 8. Letting \(k \to \infty\) and recalling that \(\lim_{k \to+\infty}Av^{k}=Ap\), from (4.9) we get
Therefore, \(Ap \in\operatorname{Fix}(T_{\alpha}^{g}) = \operatorname {Sol}(Q, g)\).
Hence,
Combining this with (4.17), we conclude that \(p \in \varOmega \). The proof is completed. □
When \(S=I_{\mathbb{H}_{1}}\) and \(T=I_{\mathbb{H}_{2}}\), Algorithm 3 reduces to the following.
Algorithm 4
-
Initialization. Pick \(x^{g} \in C_{0} = C\) and choose the parameters \(\eta, \theta\in(0, 1)\), \(0 < \underline{\rho} \leq \bar {\rho} \), \(\{ \rho_{k} \} \subset[\underline{\rho} , \bar{\rho} ]\), \(0 < \underline{\gamma} \leq\bar{\gamma} < 2 \), \(\{ \gamma_{k} \} \subset[ \underline{\gamma}, \bar{\gamma}] \), \(0 < \alpha\), \(\{\alpha _{k}\} \subset [\alpha, +\infty)\), \(\mu\in(0, \frac{1}{\|A\|^{2}})\).
-
Iteration k (\(k = 0, 1, 2, \ldots \)). Having \(x^{k}\), do the following steps:
-
Step 1.
Solve the strongly convex program
$$CP\bigl(x^{k}\bigr)\quad \min \biggl\{ f\bigl(x^{k}, y\bigr) + \frac{1}{2\rho_{k}} \bigl\Vert y-x^{k}\bigr\Vert ^{2}: y \in C \biggr\} $$to obtain its unique solution \(y^{k}\).
If \(y^{k} = x^{k}\), then set \(u^{k} = x^{k}\) and go to Step 4. Otherwise, go to Step 2.
-
Step 2.
(Armijo linesearch rule) Find \(m_{k}\) as the smallest positive integer number m such that
$$ \textstyle\begin{cases} z^{k,m} = (1 - \eta^{m})x^{k} + \eta^{m} y^{k}, \\ f(z^{k,m}, x^{k}) - f(z^{k,m}, y^{k}) \geq\frac{\theta}{2 \rho_{k}}\|x^{k} - y^{k}\|^{2}. \end{cases} $$

Set \(\eta_{k} = \eta^{m_{k}}\), \(z^{k} = z^{k, m_{k}}\).
-
Step 3.
Select \(\xi^{k} \in\partial_{2}f(z^{k}, x^{k})\) and compute \(\sigma_{k} = \frac{f(z^{k}, x^{k})}{\|\xi^{k}\|^{2}}\), \(u^{k} = P_{C}(x^{k} - \gamma_{k}\sigma_{k}\xi^{k})\).
-
Step 4.
\(w^{k}=T_{\alpha_{k}}^{g}Au^{k}\).
-
Step 5.
\(t^{k}=P_{C}(u^{k}+\mu A^{*}(w^{k}-Au^{k}))\).
-
Step 6.
Define \(C_{k+1} = \{x \in C_{k}: \|x - t^{k}\| \leq \| x - u^{k}\| \leq\|x - x^{k}\| \}\). Compute \(x^{k+1} = P_{C_{k+1}}(x^{g})\) and go to iteration k with k replaced by \(k+1\).
The following result is an immediate consequence of Theorem 2.
Corollary 2
Let \(g: Q \times Q \to\mathbb{R}\) be a bifunction satisfying Assumption A, and \(f: C \times C \to\mathbb{R}\) be a bifunction satisfying Assumption B. Let \(A: \mathbb{H}_{1} \to\mathbb{H}_{2}\) be a bounded linear operator with its adjoint \(A^{*}\). If \(\varOmega =\{x^{*}\in\operatorname{Sol}(C, f): Ax^{*} \in\operatorname {Sol}(Q, g)\}\neq\emptyset\), then the sequences \(\{x^{k}\}\) and \(\{u^{k}\} \) converge strongly to an element \(p \in \varOmega \), and \(\{w^{k}\}\) converges strongly to \(Ap \in\operatorname{Sol}(Q, g)\).
5 Conclusion
Two linesearch algorithms for solving the split equilibrium problem and nonexpansive mapping \(\operatorname{SEPNM}(C, Q, A, f, g, S, T)\) in Hilbert spaces have been proposed, in which the bifunction f is pseudomonotone on C with respect to its solution set, the bifunction g is monotone on Q, and S and T are nonexpansive mappings. The weak and strong convergence of the iteration sequences generated by these algorithms to a solution of the problem has been established.
References
Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 18, 103-120 (2004)
Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problem in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353-2365 (2006)
Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071-2084 (2005)
Censor, Y, Motova, A, Segal, A: Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 327, 1244-1256 (2007)
Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)
Xu, HK: Iterative methods for the split feasibility problem in infinite dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
Ansari, QH, Rehan, A: An iterative method for split hierarchical monotone variational inclusions. Fixed Point Theory Appl. 2015, 121 (2015)
Ansari, AH, Rehan, A, Wen, CF: Split hierarchical variational inequality problems and fixed point problems for nonexpansive mappings. Fixed Point Theory Appl. 2015, 274 (2015)
Censor, Y, Segal, A: The split common fixed point problem for directed operators. J. Convex Anal. 16, 587-600 (2009)
Cui, H, Wang, F: Iterative methods for the split common fixed point problem in Hilbert spaces. Fixed Point Theory Appl. 2014, 78 (2014)
Eslamian, M: General algorithms for split common fixed point problem of demicontractive mappings. Optimization (2015). doi:10.1080/02331934.2015.1053883
Kraikaew, R, Saejung, S: On split common fixed point problems. J. Math. Anal. Appl. 415, 513-524 (2014)
Moudafi, A: The split common fixed point problem for demicontractive mappings. Inverse Probl. 26, 055007 (2010)
Sitthithakerngkiet, K, Deepho, J, Kumam, P: A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion and fixed point problems. Appl. Math. Comput. 250, 986-1001 (2015)
Moudafi, A: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275-283 (2011)
Censor, Y, Gibali, A, Reich, S: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301-323 (2012)
Deepho, J, Kumam, W, Kumam, P: A new hybrid projection algorithm for solving the split generalized equilibrium problems and the system of variational inequality problems. J. Math. Model. Algorithms 13, 405-423 (2014)
Deepho, J, Martínez-Moreno, J, Kumam, P: A viscosity of Cesàro mean approximation method for split generalized equilibrium, variational inequality and fixed point problems. J. Nonlinear Sci. Appl. 9, 1475-1496 (2016)
Deepho, J, Martínez-Moreno, J, Sitthithakerngkiet, K, Kumam, P: Convergence analysis of hybrid projection with Cesàro mean method for the split equilibrium and general system of finite variational inequalities. J. Comput. Appl. Math. (2015). doi:10.1016/j.cam.2015.10.006
Blum, E, Oettli, W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 127-149 (1994)
Muu, LD, Oettli, W: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. TMA 18, 1159-1166 (1992)
He, Z: The split equilibrium problem and its convergence algorithms. J. Inequal. Appl. (2012). doi:10.1186/1029-242X-2012-162
Bnouhachem, A, Al-Homidan, S, Ansari, QH: An iterative method for common solutions of equilibrium problems and hierarchical fixed point problems. Fixed Point Theory Appl. 2014, 194 (2014)
Kumam, W, Piri, H, Kumam, P: Solutions of system of equilibrium and variational inequality problems on fixed points of infinite family of nonexpansive mappings. Appl. Math. Comput. 248, 441-455 (2014)
Kumam, W, Witthayarat, U, Kumam, P, Suantai, S, Wattanawitoon, K: Convergence theorem for equilibrium problem and Bregman strongly nonexpansive mappings in Banach spaces. Optimization 65, 265-280 (2016)
Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367-426 (1996)
López, G, Martín-Márquez, V, Wang, F, Xu, HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, 085004 (2012)
Tran, DQ, Muu, LD, Nguyen, VH: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749-776 (2008)
Dinh, BV, Muu, LD: A projection algorithm for solving pseudomonotone equilibrium problems and its application to a class of bilevel equilibria. Optimization 64, 559-575 (2015)
Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York (2003)
Korpelevich, GM: An extragradient method for finding saddle points and other problems. Èkon. Mat. Metody 12, 747-756 (1976)
Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)
Anh, TV, Muu, LD: A projection-fixed point method for a class of bilevel variational inequalities with split fixed point constraints. Optimization (2015). doi:10.1080/02331934.2015.1101599
Moudafi, A: Krasnoselski-Mann iteration for hierarchical fixed-point problems. Inverse Probl. 23, 1635-1640 (2007)
Takahashi, W, Takeuchi, Y, Kubota, R: Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 341, 276-286 (2008)
Censor, Y, Gibali, A, Reich, S: Strong convergence of subgradient extragradient methods for variational inequality problem in Hilbert space. Optim. Methods Softw. 26, 827-845 (2011)
Opial, Z: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591-597 (1967)
Rockafellar, RT: Convex Analysis. Princeton University Press, Princeton (1970)
Vuong, PT, Strodiot, JJ, Nguyen, VH: Extragradient methods and linesearch algorithms for solving Ky Fan inequalities and fixed point problems. J. Optim. Theory Appl. 155, 605-627 (2013)
Combettes, PL, Hirstoaga, SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117-136 (2005)
Acknowledgements
This work was supported by a Research Grant of Pukyong National University (2016).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Dinh, B.V., Son, D.X., Jiao, L. et al. Linesearch algorithms for split equilibrium problems and nonexpansive mappings. Fixed Point Theory Appl 2016, 27 (2016). https://doi.org/10.1186/s13663-016-0518-3