Abstract
In this paper, we present a new extragradient algorithm for approximating a common solution of split equilibrium problems and split fixed point problems. Strong convergence theorems are proved in the framework of Hilbert spaces under some mild conditions. We apply the main result to the problem of finding a solution of split variational inequality problems and split fixed point problems; a numerical example and computational results are also provided.
1 Introduction
Let C and D be nonempty closed and convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and let \(H_{1}\) and \(H_{2}\) be endowed with an inner product \(\langle \cdot , \cdot \rangle \) and the corresponding norm \(\|\cdot \|\). By → and ⇀, we denote strong convergence and weak convergence, respectively. Let \(f\colon C\times C\rightarrow \mathbb{R}\) be a bifunction. The equilibrium problem (EP) is to find \(z\in C\) such that

$$ f(z,y)\geq 0, \quad \forall y\in C. $$
The solution set of the equilibrium problem is denoted by \(\operatorname{EP} (f)\). The equilibrium problem generalizes many mathematical models, such as variational inequalities, fixed point problems, and optimization problems; see [6, 14, 17, 18, 20, 35]. In 2013, Anh [2] introduced an extragradient algorithm for finding a common element of the fixed point set of a nonexpansive mapping and the solution set of an equilibrium problem for a pseudomonotone and Lipschitz-type continuous bifunction in a real Hilbert space, and proved strong convergence of the generated sequence under suitable conditions. Since then, many authors have considered the EP and related problems and proved weak and strong convergence; see, for example, [1,2,3,4, 11, 21, 26, 41].
Moudafi [32] (see also He [25]) introduced the split equilibrium problem (SEP), which is to find \(z\in C\) such that

$$ f(z,y)\geq 0, \quad \forall y\in C, $$

and such that \(u=Lz\in D\) solves

$$ g(u,v)\geq 0, \quad \forall v\in D, $$
where \(L\colon H_{1}\rightarrow H_{2}\) is a bounded linear operator and \(g\colon D\times D\rightarrow \mathbb{R}\) is another bifunction. It is well known that SEP generalizes the equilibrium problem: it reduces to EP when \(g=0\) and \(D=H_{2}\).
He [25] used the proximal method to construct an iterative algorithm and showed that the generated sequence converges weakly to a solution of SEP under suitable conditions on the parameters, provided that f and g are monotone bifunctions on C and D, respectively.
Problem SEP extends many mathematical models which have been studied intensively by several authors in recent years: split variational inequality problems [12], split common fixed point problems [7, 13, 16, 19, 28, 31, 36, 38,39,40], and split feasibility problems, which have been applied to medical image reconstruction, sensor networks, intensity modulated radiation therapy, and data compression; see [5, 8,9,10] and the references quoted therein.
In this paper, motivated and inspired by the above literature, we consider a new extragradient algorithm for finding a common solution of a split equilibrium problem for pseudomonotone and Lipschitz-type continuous bifunctions and a split fixed point problem for nonexpansive mappings in real Hilbert spaces. That is, we are interested in the following problem: let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces and C and D be nonempty closed and convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Let \(f\colon C\times C\rightarrow \mathbb{R}\) and \(g\colon D\times D\rightarrow \mathbb{R}\) be pseudomonotone and Lipschitz-type continuous bifunctions, \(T\colon C \rightarrow C\) and \(S\colon D\rightarrow D\) be nonexpansive mappings, and \(L\colon H_{1}\rightarrow H_{2}\) be a bounded linear operator. We consider the problem of finding a solution \(p\in C\) such that

$$ p\in \operatorname{EP}(f)\cap F(T) \quad \text{and}\quad Lp\in \operatorname{EP}(g)\cap F(S), $$

and we denote the set of such solutions by Ω,
where \(F(T)\) is the fixed point set of T and \(\varOmega \neq \emptyset \). Under some mild conditions, a strong convergence theorem will be provided.
The paper is organized as follows. Section 2 gathers some definitions and lemmas on the geometry of real Hilbert spaces and monotone bifunctions, which will be needed in the remaining sections. In Sect. 3, we present a new extragradient algorithm and prove the strong convergence theorem. In Sect. 4, the results of Sect. 3 are applied to solve split variational inequality problems and split fixed point problems of nonexpansive mappings. Finally, in Sect. 5, numerical experiments are presented and discussed.
2 Preliminaries
We now provide some basic concepts, definitions, and lemmas which will be used in the sequel. Let C be a closed and convex subset of a real Hilbert space H. The operator \(P_{C}\) is called the metric projection operator if it assigns to each \(x\in H\) its nearest point \(y\in C\), that is,

$$ \Vert x-y \Vert = \min_{z\in C} \Vert x-z \Vert . $$
An element y is called the metric projection of x onto C and is denoted by \(P_{C}x\); it exists and is unique for every point of the real Hilbert space. It is well known that the metric projection operator \(P_{C}\) is continuous.
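When C is a box, as in the numerical experiments of Sect. 5, the metric projection can be computed coordinatewise by clamping. A minimal numpy sketch (the helper name `project_box` and the sample points are our own), which also checks the firm nonexpansiveness inequality numerically:

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection P_C onto the box C = prod_i [lo, hi]:
    the nearest point is obtained by clamping each coordinate."""
    return np.minimum(np.maximum(x, lo), hi)

x = np.array([7.0, -9.0, 2.0])
p = project_box(x, -5.0, 5.0)          # P_C x on C = [-5, 5]^3
y = np.array([0.0, 1.0, 6.0])
q = project_box(y, -5.0, 5.0)
# firm nonexpansiveness: ||P_C x - P_C y||^2 <= <P_C x - P_C y, x - y>
lhs = np.linalg.norm(p - q) ** 2
rhs = float(np.dot(p - q, x - y))
```

For a general closed convex set the projection requires solving a quadratic program; the box case is exactly what Sect. 5 uses with \(T=P_{\overline{C}}\) and \(S=P_{\overline{D}}\).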
Lemma 2.1
Let H be a real Hilbert space and C be a nonempty, closed and convex subset of H. Then, for all \(x\in H\), the element \(z=P_{C}x\) if and only if

$$ \langle x-z , z-y \rangle \geq 0, \quad \forall y\in C. $$
The metric projection satisfies the following inequality:

$$ \Vert P_{C}x-P_{C}y \Vert ^{2}\leq \langle P_{C}x-P_{C}y , x-y \rangle , \quad \forall x, y\in H; $$

therefore, the metric projection is a firmly nonexpansive operator on H. For more information concerning the metric projection, please see Sect. 3 of [24].
Lemma 2.2
([23])
Let H be a real Hilbert space and \(T:H\rightarrow H\) be a nonexpansive mapping with \(F(T)\neq \emptyset \). Then the mapping \(I -T\) is demiclosed at zero, that is, if \(\{x_{n}\}\) is a sequence in H such that \(x_{n}\rightharpoonup x\) and \(\|x_{n} -Tx_{n}\|\rightarrow 0\), then \(x \in F(T)\).
Lemma 2.3
([42])
Assume that \(\{a_{n}\}\) is a sequence of nonnegative numbers such that

$$ a_{n+1}\leq (1-\gamma _{n})a_{n}+\gamma _{n}\delta _{n}, \quad n\geq 1, $$
where \(\{\gamma _{n}\}\) is a sequence in \((0,1)\) and \(\{\delta _{n}\}\) is a sequence in \(\mathbb{R}\) such that
-
(i)
\(\lim_{n\rightarrow \infty }\gamma _{n}=0\), \(\sum^{\infty }_{n=1}\gamma _{n}=\infty \),
-
(ii)
\(\limsup_{n\rightarrow \infty }{\delta _{n} } \leq 0\).
Then \(\lim_{n\rightarrow \infty }a_{n}=0\).
Lemma 2.4
([30])
Let \(\{a_{n}\}\) be a sequence of real numbers such that there exists a subsequence \(\{n_{i}\}\) of \(\{n\}\) such that \(a_{n_{i}}< a_{n_{i}+1}\) for all \(i\in \mathbb{N}\). Then there exists a nondecreasing sequence \(\{m_{k}\}\subset \mathbb{N}\) such that \(m_{k}\rightarrow \infty \) as \(k\rightarrow \infty \) and the following properties are satisfied by all (sufficiently large) numbers \(k\in \mathbb{N}\):

$$ a_{m_{k}}\leq a_{m_{k}+1} \quad \text{and} \quad a_{k}\leq a_{m_{k}+1}. $$
In fact, \(m_{k} = \max \{ j\leq k : a_{j} < a_{j+1}\}\).
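The defining formula for \(m_{k}\) can be evaluated directly. A small sketch (function name ours, with 1-based indices as in the lemma; we assume at least one \(j\leq k\) with \(a_{j}<a_{j+1}\) exists):

```python
def mainge_index(a, k):
    """m_k = max{ j <= k : a_j < a_{j+1} } from Lemma 2.4,
    with 1-based indexing (a[0] plays the role of a_1).
    Assumes some j <= k satisfies a_j < a_{j+1}."""
    return max(j for j in range(1, k + 1) if a[j - 1] < a[j])

# a_1..a_6; the increases a_j < a_{j+1} occur at j = 2, 3, 5
a = [3.0, 1.0, 2.0, 5.0, 4.0, 6.0]
m4 = mainge_index(a, 4)  # m_4 = 3, and indeed a_3 <= a_4 and a_4 <= a_4
```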
Definition 2.5
A bifunction \(f\colon C\times C\rightarrow \mathbb{R}\) is said to be
-
monotone on C if
$$ f(x,y)+f(y,x)\leq 0, \quad \forall x, y\in C; $$
-
pseudomonotone on C if
$$ f(x,y) \geq 0\quad \Longrightarrow\quad f(y,x)\leq 0,\quad \forall x, y\in C; $$
-
Lipschitz-type continuous on C if there exist two positive constants \(c_{1}\) and \(c_{2}\) such that
$$ f(x,y)+ f(y,z)\geq f(x,z)-c_{1} \Vert x-y \Vert ^{2} -c_{2} \Vert y-z \Vert ^{2},\quad \forall x, y,z\in C. $$
Let C be a nonempty closed and convex subset of a real Hilbert space H and \(f : C\times C \rightarrow \mathbb{R}\) be a bifunction. We will assume the following conditions:
-
(A1)
f is pseudomonotone on C and \(f(x,x)=0\) for all \(x\in C\);
-
(A2)
f is weakly continuous on \(C\times C\) in the sense that if \(x,y\in C\) and \(\{x_{n}\}, \{y_{n}\}\subset C\) converge weakly to x and y, respectively, then \(f(x_{n},y_{n})\rightarrow f(x,y)\) as \(n\rightarrow \infty \);
-
(A3)
\(f(x, \cdot )\) is convex and subdifferentiable on C for every fixed \(x\in C\);
-
(A4)
f is Lipschitz-type continuous on C with two positive constants \(c_{1}\) and \(c_{2}\).
It is easy to show that under assumptions (A1)–(A3), the solution set \(\operatorname{EP}(f)\) is closed and convex (see, for instance [34]).
We need the following lemma to prove our main results.
Lemma 2.6
([2])
Assume that f satisfies (A1), (A3), (A4), that \(\operatorname{EP}(f)\) is nonempty, and that \(0 < \rho _{0} < \min \{\frac{1}{2c_{1}},\frac{1}{2c_{2}}\} \). If \(x_{0} \in C\), and \(y_{0}\), \(z_{0}\) are defined by

$$ y_{0} = \mathop{\operatorname{argmin}}_{y\in C} \biggl\{ \rho _{0} f(x_{0},y)+\frac{1}{2} \Vert y-x_{0} \Vert ^{2} \biggr\} , \qquad z_{0} = \mathop{\operatorname{argmin}}_{y\in C} \biggl\{ \rho _{0} f(y_{0},y)+\frac{1}{2} \Vert y-x_{0} \Vert ^{2} \biggr\} , $$
then
-
(i)
\(\rho _{0}\) \([f(x_{0},y) - f(x_{0},y_{0})] \geq \langle y _{0} - x_{0},y_{0} - y \rangle \), \(\forall y \in C\);
-
(ii)
\(\|z_{0} - p\|^{2}\) \(\leq \|x_{0} - p\|^{2} - (1 - 2\rho _{0}c_{1})\|x_{0} - y_{0}\|^{2} - (1 - 2\rho _{0}c_{2})\|y_{0} - z_{0} \|^{2}\), \(\forall p \in \operatorname{EP}(f)\).
3 Main results
In this section, we present our main algorithm and show the strong convergence theorem for finding a common solution of a split equilibrium problem for pseudomonotone and Lipschitz-type continuous bifunctions and a split fixed point problem for nonexpansive mappings in real Hilbert spaces.
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and C and D be nonempty closed and convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Suppose that \(f\colon C\times C\rightarrow \mathbb{R}\) and \(g\colon D \times D\rightarrow \mathbb{R}\) are bifunctions. Let \(L\colon H_{1} \rightarrow H_{2}\) be a bounded linear operator with adjoint \(L^{*}\), let \(T\colon C\rightarrow C\) and \(S\colon D\rightarrow D\) be nonexpansive mappings, and let \(h \colon C\rightarrow C\) be a ρ-contraction mapping. We introduce the following extragradient algorithm for solving the split equilibrium problem and the split fixed point problem.
Algorithm 3.1
Choose \(x_{1}\in H_{1}\). The control parameters \(\lambda _{n}\), \(\mu _{n}\), \(\alpha _{n}\), \(\beta _{n}\), \(\delta _{n}\) satisfy the following conditions:
Let \(\{x_{n}\}\) be a sequence generated by
Theorem 3.2
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and C and D be nonempty closed and convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Suppose that \(f\colon C\times C\rightarrow \mathbb{R}\) and \(g\colon D \times D\rightarrow \mathbb{R}\) are bifunctions which satisfy (A1)–(A4) with positive constants \(\{c_{1}, c_{2} \}\) and \(\{d_{1}, d_{2} \}\), respectively. Let \(L\colon H_{1}\rightarrow H_{2}\) be a bounded linear operator with adjoint \(L^{*}\), \(T\colon C\rightarrow C\) and \(S\colon D\rightarrow D\) be nonexpansive mappings, \(h \colon C\rightarrow C\) be a ρ-contraction mapping, and \(\varOmega \neq \emptyset \). Then the sequence \(\{x_{n}\}\) generated by Algorithm 3.1 converges strongly to \(q=P_{\varOmega }h(q)\).
Proof
Let \(p\in \varOmega \). So, \(p\in \operatorname{EP}(f)\cap F(T)\subset C\) and \(Lp\in \operatorname{EP}(g) \cap F(S)\subset D\). Since \(P_{D}\) is firmly nonexpansive, we get
and hence
Since S is nonexpansive, \(Lp\in F(S)\) and using Lemma 2.6 and the definition of \(u_{n}\) and \(v_{n}\), we have
for each \(n\in \mathbb{N}\). From (3.1), (3.2) and the assumptions, we obtain
By (3.3), we get
This implies that
Since \(P_{C}\) is nonexpansive and by (3.4), we obtain
then we obtain
By Lemma 2.6, the definition of \(t_{n}\) and \(z_{n}\) and the assumptions we have
for each \(n\in \mathbb{N}\). From (3.6) and (3.7), we get
Set \(q_{n}=\beta _{n}x_{n}+(1-\beta _{n})Tz_{n}\). It follows from (3.8) that
By the definition of \(x_{n+1}\) and (3.9), we obtain
This implies that the sequence \(\{x_{n}\}\) is bounded. By (3.6) and (3.8), the sequences \(\{y_{n}\}\) and \(\{z_{n}\}\) are bounded too.
By Lemma 2.6, (3.6), the definition of \(q_{n}\) and assumptions on \(\beta _{n}\) and \(\delta _{n}\), we get
Therefore,
and hence
where
By (3.9), we have
So, we get
where \(M_{0}=\sup \lbrace \|x_{n}-p\|^{2} : n\in \mathbb{N} \rbrace \). Put \(\gamma _{n}=\frac{2(1-\rho )\alpha _{n}}{1-\alpha _{n}\rho }\) for each \(n\in \mathbb{N}\). By the assumptions on \(\alpha _{n}\), we have
Since \(P_{\varOmega }h\) is a contraction on C, there exists a unique \(q\in \varOmega \) such that \(q=P_{\varOmega }h(q)\). We prove that the sequence \(\{x_{n}\}\) converges strongly to \(q=P_{\varOmega }h(q)\). To this end, we consider two cases.
Case 1. Suppose that there exists \(n_{0}\in \mathbb{N}\) such that \(\{\|x_{n}-q\|\}_{n=n_{0}}^{\infty }\) is nonincreasing. In this case, the limit of \(\{\|x_{n}-q\|\}\) exists. This together with the assumptions on \(\{\alpha _{n}\}\), \(\{\beta _{n}\}\), \(\{\lambda _{n}\}\) and (3.10) implies that
On the other hand, from the definition of \(x_{n+1}\) and (3.8), we get
and hence
Since the limit of \(\{\|x_{n}-q\|\}\) exists and by the assumptions on \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\), we obtain
From (3.9) and (3.11), we have
Again, since the limit of \(\{\|x_{n}-q\|\}\) exists and \(\alpha _{n} \rightarrow 0\), it follows that
and hence
and by (3.9), we get
We also get from (3.6), (3.7) and (3.18)
which implies that
It follows from (3.2) that
So,
and hence
From (3.20) and (3.23), we get
It follows from \(x_{n}\in C\), the definition of \(y_{n}\) and (3.20) that
Because \(\{x_{n}\}\) is bounded, there exists a subsequence \(\{x_{n _{k}}\}\) of \(\{x_{n}\}\) such that \(\{x_{n_{k}}\}\) converges weakly to some x̄, as \(k\rightarrow \infty \) and
Consequently, \(\{Lx_{n_{k}}\}\) converges weakly to Lx̄. By (3.24), \(\{v_{n_{k}}\}\) converges weakly to Lx̄. We show that \(\bar{x}\in \varOmega \). We know that \(x_{n}\in C\) and \(v_{n}\in D\) for each \(n\in \mathbb{N}\). Since C and D are closed and convex, they are weakly closed; therefore, \(\bar{x}\in C\) and \(L\bar{x}\in D\). From (3.25) and (3.14), we see that \(\{y_{n_{k}}\}\), \(\{t_{n_{k}}\}\) and \(\{z_{n_{k}}\}\) converge weakly to x̄. By (3.22) and (3.23), we also see that \(\{u_{n_{k}}\}\) and \(\{P_{D}(Lx_{n_{k}})\}\) converge weakly to Lx̄. Algorithm 3.1 and assertion (i) in Lemma 2.6 imply that
and
Hence, it follows that
and
Letting \(k\rightarrow \infty \), by the hypothesis on \(\{\lambda _{n}\}\), \(\{\mu _{n}\}\), (3.14), (3.22) and the weak continuity of f and g (condition (A2)), we obtain
This means that \(\bar{x}\in \operatorname{EP}(f)\) and \(L\bar{x}\in \operatorname{EP}(g)\). It follows from (3.14), (3.16) and (3.25) that
This together with Lemma 2.2 implies that \(\bar{x}\in F(T)\). On the other hand, from (3.21) and (3.23), we get
and, using Lemma 2.2 again, we obtain \(L\bar{x}\in F(S)\). We have thus proved that \(\bar{x}\in \operatorname{EP}(f)\cap F(T)\) and \(L\bar{x}\in \operatorname{EP}(g) \cap F(S)\), that is, \(\bar{x}\in \varOmega \). By Lemma 2.1, \(\bar{x}\in \varOmega \) and (3.26), we get
Finally, from (3.12), (3.13), (3.27) and Lemma 2.3, we find that the sequence \(\{x_{n}\}\) converges strongly to q.
Case 2. Suppose that there exists a subsequence \(\{n_{i}\}\) of \(\{n\}\) such that
According to Lemma 2.4, there exists a nondecreasing sequence \(\{m_{k}\}\subset \mathbb{N}\) such that \(m_{k}\rightarrow \infty \),
From this and (3.10), we get
This together with the assumptions on \(\{\alpha _{n}\}\), \(\{\beta _{n} \}\) and \(\{\lambda _{n}\}\) implies that
From (3.15), we have
By the hypothesis on \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\), we have
By (3.17), we get
Since the sequence \(\{x_{n}\}\) is bounded and \(\alpha _{n}\rightarrow 0\), we obtain
By the same argument as Case 1, we have
It follows from (3.12) and (3.28) that
and hence
Since \(\gamma _{m_{k}}>0\) and using (3.28) we get
Taking the limit in the above inequality as \(k\rightarrow \infty \), we conclude that \(\{x_{n}\}\) converges strongly to \(q=P_{\varOmega }h(q)\). □
4 Application to variational inequality problems
In this section, we apply Theorem 3.2 to find a solution of variational inequality problems for a monotone and Lipschitz-type continuous mapping. Let H be a real Hilbert space, C be a nonempty, closed and convex subset of H, and \(A\colon C\rightarrow C\) be a nonlinear operator. The mapping A is said to be
-
monotone on C if
$$ \langle Ax-Ay,x-y\rangle \geq 0,\quad \forall x, y\in C; $$
-
pseudomonotone on C if
$$ \langle Ax,y-x\rangle \geq 0\quad \Longrightarrow\quad \langle Ay,x-y\rangle \leq 0,\quad \forall x, y\in C; $$
-
L-Lipschitz continuous on C if there exists a positive constant L such that
$$ \Vert Ax-Ay \Vert \leq L \Vert x-y \Vert ,\quad \forall x, y\in C. $$
The variational inequality problem is to find \(x^{*}\in C\) such that

$$ \bigl\langle Ax^{*}, y-x^{*}\bigr\rangle \geq 0, \quad \forall y\in C. $$
For each \(x,y\in C\), define \(f(x,y)=\langle Ax,y-x\rangle \); then the equilibrium problem (1.1) becomes the variational inequality problem (4.1). We denote the set of solutions of problem (4.1) by \(\operatorname{VI}(C,A)\). We assume that A satisfies the following conditions:
-
(B1)
A is pseudomonotone on C;
-
(B2)
A is weak-to-strong continuous on C, that is, \(Ax_{n}\rightarrow Ax\) for each sequence \(\{x_{n}\}\subset C\) converging weakly to x;
-
(B3)
A is \(\mathrm{L}_{1}\)-Lipschitz continuous on C for some positive constant \(\mathrm{L}_{1}>0\).
Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and C and D be nonempty closed and convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Suppose that \(A\colon C\rightarrow C \) and \(B\colon D\rightarrow D\) are \(\mathrm{L}_{1}\) and \(\mathrm{L}_{2}\)-Lipschitz continuous on C and D, respectively. Let \(L\colon H_{1}\rightarrow H_{2}\) be a bounded linear operator with its adjoint \(L^{*}\), \(T\colon C\rightarrow C\) and \(S\colon D\rightarrow D\) be nonexpansive mappings and \(h \colon C \rightarrow C\) be a ρ-contraction mapping. We consider the following extragradient algorithm for solving the split variational inequality problems and fixed point problems.
Algorithm 4.1
Choose \(x_{1}\in H_{1}\). The control parameters \(\lambda _{n}\), \(\mu _{n}\), \(\alpha _{n}\), \(\beta _{n}\), \(\delta _{n}\) satisfy the following conditions:
Let \(\{x_{n}\}\) be a sequence generated by
Theorem 4.2
Let \(A\colon C\rightarrow C \) and \(B\colon D\rightarrow D\) be mappings such that assumptions (B1)–(B3) hold with some positive constants \(\mathrm{L}_{1}>0\) and \(\mathrm{L}_{2}>0\), respectively, and \(\varOmega := \{p\in \operatorname{VI}(C,A)\cap F(T), Lp\in \operatorname{VI}(D,B)\cap F(S)\} \neq \emptyset \). Then the sequence \(\{x_{n}\}\) generated by Algorithm 4.1 converges strongly to \(q=P_{\varOmega }h(q)\).
Proof
Since the mapping A satisfies assumptions (B1)–(B3), it is easy to check that the bifunction \(f(x,y)=\langle Ax,y-x\rangle \) satisfies conditions (A1)–(A3). Moreover, since A is \(\mathrm{L} _{1}\)-Lipschitz continuous on C, it follows that
Then f is Lipschitz-type continuous on C with \(c_{1}=c_{2}=\frac{L _{1}}{2}\), and hence f satisfies condition (A4).
It follows from the definitions of f and \(y_{n}\) that
and similarly, we can get \(u_{n}=P_{D} ( P_{D}(Lx_{n})-\mu _{n} B (P_{D}(Lx_{n}) ) )\), \(v_{n}=P_{D} ( P_{D}(Lx _{n})-\mu _{n} B (u_{n}) )\), and \(z_{n}=P_{C} ( y_{n}-\lambda _{n} At_{n} )\). Then the extragradient Algorithm 3.1 reduces to Algorithm 4.1, and the conclusion follows from Theorem 3.2. □
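The projection form of the iterates above is the familiar extragradient scheme for variational inequalities. As an illustration only (not Algorithm 4.1 itself, which also involves L, S, T and the contraction step), here is a minimal sketch of the classical two-projection update \(y=P_{C}(x-\lambda Ax)\), \(z=P_{C}(x-\lambda Ay)\) on a box, with an affine monotone operator of our own choosing:

```python
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def extragradient_step(x, A, lam, lo, hi):
    """Classical extragradient update for VI(C, A) on a box C:
    y = P_C(x - lam*A(x)), then z = P_C(x - lam*A(y))."""
    y = project_box(x - lam * A(x), lo, hi)
    z = project_box(x - lam * A(y), lo, hi)
    return z

# illustrative monotone affine operator A(x) = Mx + q (our choice)
M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
q = np.array([-1.0, -1.0])
A = lambda x: M @ x + q
x = np.array([3.0, 3.0])
for _ in range(200):
    x = extragradient_step(x, A, 0.1, -5.0, 5.0)
# the iterates approach x* = (1/3, 1/3), the zero of Mx + q inside the box
```

The step size \(\lambda =0.1\) respects the usual bound \(\lambda < 1/\mathrm{L}\) for the Lipschitz constant \(\mathrm{L}=\|M\|=3\) of this operator.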
5 Numerical experiments
In this section, we give examples and numerical results to support Theorem 3.2. In addition, we compare the introduced algorithm with the parallel extragradient algorithm, which was presented in [27].
We consider the bifunctions f and g which are given in the form of Nash–Cournot oligopolistic equilibrium models of electricity markets [15, 34],
where \(P, Q \in \mathbb{R}^{k\times k}\) and \(U, V \in \mathbb{R} ^{m\times m}\) are symmetric positive semidefinite matrices such that \(P - Q\) and \(U - V\) are positive semidefinite matrices. The bifunctions f and g satisfy conditions (A1)–(A4) (see [37]). Indeed, f and g are Lipschitz-type continuous with constants \(c_{1} = c _{2} = \frac{1}{2}\|P-Q\|\) and \(d_{1} = d_{2} = \frac{1}{2}\|U-V\|\), respectively. Notice that, if \(b_{1} = \max \{c_{1}, d_{1}\}\) and \(b_{2} = \max \{c_{2}, d_{2}\}\), then both bifunctions f and g are Lipschitz-type continuous with constants \(b_{1}\) and \(b_{2}\).
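One possible way to generate test matrices with the stated properties, and to compute the resulting Lipschitz-type constants, is sketched below. The construction \(P=Q+R\) with Q, R positive semidefinite is our own choice, and we read \(\|P-Q\|\) as the spectral norm (the paper does not specify which norm is used):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(k):
    """A random symmetric positive semidefinite k x k matrix."""
    A = rng.uniform(-1.0, 1.0, size=(k, k))
    return A @ A.T

k = 5
Q = random_psd(k)
P = Q + random_psd(k)        # P - Q is PSD by construction, and so is P
c1 = c2 = 0.5 * np.linalg.norm(P - Q, 2)   # spectral norm

# sanity check: eigenvalues of P - Q are nonnegative up to round-off
assert np.min(np.linalg.eigvalsh(P - Q)) >= -1e-9
```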
The following numerical experiments are written in Matlab R2015b and performed on a Desktop with Intel(R) Core(TM) i3 CPU M 390 @ 2.67 GHz 2.67 GHz and RAM 4.00 GB.
Example 5.1
Let the bifunctions f and g be given as (5.1) and (5.2), respectively. We will be concerned with the following boxes: \(C = \prod_{i=1}^{k} [-5,5]\), \(D = \prod_{j=1}^{m} [-20,20]\), \(\overline{C} = \prod_{i=1}^{k} [-3,3]\) and \(\overline{D} = \prod_{j=1}^{m} [-10,10]\). The nonexpansive mappings \(T : C\rightarrow C\) and \(S : D\rightarrow D\) are given by \(T =P_{\overline{C}}\) and \(S =P_{\overline{D}}\), respectively. The contraction mapping \(h : C \rightarrow C\) is a \(k \times k\) matrix such that \(\| h \| < 1\), while the linear operator \(L : \mathbb{R}^{k} \rightarrow \mathbb{R} ^{m}\) is an \(m \times k\) matrix.
In this numerical experiment, the matrices P, Q, U, and V are randomly generated in the interval \([-5,5]\) such that they satisfy the required properties above. Moreover, the matrices h and L are randomly generated in the intervals \((0,\frac{1}{k})\) and \([-2,2]\), respectively. We randomly generated a starting point \(x_{1} \in \mathbb{R}^{k}\) in the interval \([-20,20]\) with the following control parameters: \(\delta _{n} = \frac{1}{2 \|L\|^{2}}\), \(\alpha _{n} = \frac{1}{n+2}\) and \(\mu _{n} = \lambda _{n} = \frac{1}{4\max \{b_{1},b_{2}\}}\). The following three cases of the control parameter \(\beta _{n}\) are considered:
-
Case 1.
\(\beta _{n} = 10^{-10} + \frac{1}{n+1}\).
-
Case 2.
\(\beta _{n} = 0.5\).
-
Case 3.
\(\beta _{n} = 0.99 - \frac{1}{n+1}\).
Note that, to obtain the vector \(u_{n}\) in Algorithm 3.1, we need to solve the optimization problem
which is equivalent to the following convex quadratic problem:
where \(J = 2\mu _{n} V + I_{m}\) and \(K = \mu _{n} UP_{D}(Lx_{n}) - \mu _{n} VP_{D}(Lx_{n}) - P_{D}(Lx_{n})\) (see [27]).
On the other hand, in order to obtain the vector \(v_{n}\), we need to solve the following convex quadratic problem:
where \(\overline{J} = J \) and \(\overline{K} = \mu _{n} Uu_{n} - \mu _{n} Vu_{n} - P_{D}(Lx_{n})\). Similarly, to obtain the vectors \(t_{n}\) and \(z_{n}\), we consider convex quadratic problems of the same form as (5.3) and (5.4), respectively. We use the Matlab Optimization Toolbox to compute the vectors \(u_{n}\), \(v_{n}\), \(t_{n}\) and \(z_{n}\). Algorithm 3.1 is tested using the stopping criterion \(\|x_{n+1}-x_{n}\| < 10^{-3}\). In Table 1, we randomly take 10 starting points, and the presented results are averages.
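The subproblems above are strongly convex box-constrained quadratic programs, since \(J=2\mu _{n}V+I_{m}\) is positive definite. In place of the Matlab Optimization Toolbox, a projected-gradient sketch in numpy (the solver and the instance data mu, V, K below are our own; for a diagonal J the true solution is the coordinatewise clipping of the unconstrained minimizer, which the sketch reproduces):

```python
import numpy as np

def solve_box_qp(J, K, lo, hi, steps=2000):
    """Minimize 0.5*u^T J u + K^T u over the box [lo, hi]^m by
    projected gradient descent; J is assumed positive definite."""
    u = np.zeros(len(K))
    lr = 1.0 / np.linalg.norm(J, 2)   # step size 1/L for the gradient Ju + K
    for _ in range(steps):
        u = np.clip(u - lr * (J @ u + K), lo, hi)
    return u

# tiny illustrative instance (our own data, not from the paper)
mu = 0.25
V = np.array([[1.0, 0.0], [0.0, 2.0]])
J = 2 * mu * V + np.eye(2)            # = diag(1.5, 2.0), positive definite
K = np.array([-3.0, 4.0])
u = solve_box_qp(J, K, -1.0, 1.0)
# unconstrained minimizer -J^{-1}K = (2, -2); clipped to the box: (1, -1)
```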
From Table 1, we may suggest that the smallest choice of the parameter \(\beta _{n}\), namely \(\beta _{n} = 10^{-10} + \frac{1}{n+1}\), requires less computational time and fewer iterations than the other cases.
Example 5.2
We consider the problem (1.3) when \(T = I_{\mathbb{R}^{k}}\) and \(S = I_{\mathbb{R}^{m}}\) are identity mappings on \(\mathbb{R}^{k}\) and \(\mathbb{R}^{m}\), respectively. It follows that the problem (1.3) becomes the split equilibrium problem which was considered in [27]. In this case, we compare Algorithm 3.1 with the parallel extragradient algorithm (PEA) presented in [27, Corollary 3.1]. For this numerical experiment, we use the problem setting and the control parameters of Example 5.1, but only for the case \(\beta _{n} = 10^{-10} + \frac{1}{n+1}\). The starting point \(x_{1} \in \mathbb{R} ^{k}\) is randomly generated in the interval \([-5,5]\). We compare Algorithm 3.1 with PEA using the stopping criterion \(\|x_{n+1}-x_{n}\| < 10^{-3}\). In Table 2, we randomly take 10 starting points, and the presented results are averages.
From Table 2, we see that both the computational times and the iteration counts of Algorithm 3.1 are smaller than those of PEA.
6 Conclusions
We introduced a new extragradient algorithm and proved a strong convergence theorem for split equilibrium problems and split fixed point problems. We also applied the main result to split variational inequality problems and split fixed point problems. A numerical example and computational results were provided to discuss the possible usefulness of the presented results. We note that this work suggests future research directions, for example, convergence analysis in more general settings of the problem (such as the non-convex case); one may see [22, 29, 33] for more inspiration.
References
Anh, P.N.: Strong convergence theorems for nonexpansive mappings and Ky Fan inequalities. J. Optim. Theory Appl. 154, 303–320 (2012)
Anh, P.N.: A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization 62, 271–283 (2013)
Anh, P.N., An, L.T.H.: The subgradient extragradient method extended to equilibrium problems. Optimization 64, 225–248 (2015)
Anh, P.N., Le Thi, H.A.: An Armijo-type method for pseudomonotone equilibrium problems and its applications. J. Glob. Optim. 57, 803–820 (2013)
Bauschke, H.H., Borwein, J.M.: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367–426 (1996)
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
Byrne, C., Censor, Y., Gibali, A., Reich, S.: The split common null point problem. J. Nonlinear Convex Anal. 13, 759–775 (2012)
Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071–2084 (2005)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59(2), 301–323 (2012)
Censor, Y., Segal, A.: The split common fixed point problem for directed operators. J. Convex Anal. 16, 587–600 (2009)
Combettes, P.L.: The convex feasibility problem in image recovery. In: Hawkes, P. (ed.) Advances in Imaging and Electron Physics, pp. 155–270. Academic Press, New York (1996)
Contreras, J., Klusch, M., Krawczyk, J.B.: Numerical solution to Nash–Cournot equilibria in coupled constraint electricity markets. IEEE Trans. Power Syst. 19, 195–206 (2004)
Dadashi, V.: Shrinking projection algorithms for the split common null point problem. Bull. Aust. Math. Soc. 96(2), 299–306 (2017)
Dadashi, V., Khatibzadeh, H.: On the weak and strong convergence of the proximal point algorithm in reflexive Banach spaces. Optimization 66(9), 1487–1494 (2017)
Dadashi, V., Postolache, M.: Hybrid proximal point algorithm and applications to equilibrium problems and convex programming. J. Optim. Theory Appl. 174, 518–529 (2017)
Dadashi, V., Postolache, M.: Forward–backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. (2019). https://doi.org/10.1007/s40065-018-0236-2
Daniele, P., Giannessi, F., Maugeri, A.: Equilibrium Problems and Variational Models. Kluwer Academic, Dordrecht (2003)
Dinh, B.V., Kim, D.S.: Projection algorithms for solving nonmonotone equilibrium problems in Hilbert space. J. Comput. Appl. Math. 302, 106–117 (2016)
Gibali, A., Küfer, K.-H., Süss, P.: Successive linear programming approach for solving the nonlinear split feasibility problem. J. Nonlinear Convex Anal. 15, 345–353 (2014)
Goebel, K., Kirk, W.A.: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)
He, Z.: The split equilibrium problem and its convergence algorithms. J. Inequal. Appl. (2012). https://doi.org/10.1186/1029-242X-2012-162
Hieu, D.V., Muu, L.D., Anh, P.K.: Parallel hybrid extragradient methods for pseudomotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 73, 197–217 (2016)
Kim, D.S., Dinh, B.V.: Parallel extragradient algorithms for multiple set split equilibrium problems in Hilbert spaces. Numer. Algorithms 77, 741–761 (2018)
Kraikaew, R., Saejung, S.: On split common fixed point problems. J. Math. Anal. Appl. 415, 513–524 (2014)
Li, Z., Han, D., Zhang, W.: A self-adaptive projection-type method for nonlinear multiple-sets split feasibility problem. Inverse Probl. Sci. Eng. 21, 155–170 (2012)
Mainge, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16, 899–912 (2008)
Moudafi, A.: The split common fixed-point problem for demicontractive mappings. Inverse Probl. 26, 055007 (2010)
Moudafi, A.: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275–283 (2011)
Penfold, S., Zalas, R., Casiraghi, M., Brooke, M., Censor, Y., Schulte, R.: Sparsity constrained split feasibility for dose–volume constraints in inverse planning of intensity-modulated photon or proton therapy. Phys. Med. Biol. 62, 3599–3618 (2017)
Quoc, T.D., Anh, P.N., Muu, L.D.: Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 52, 139–159 (2012)
Reich, S., Sabach, S.: Three strong convergence theorems regarding iterative methods for solving equilibrium problems in reflexive Banach spaces. Contemp. Math. 568, 225–240 (2012)
Suwannaprapa, M., Petrot, N., Suantai, S.: Weak convergence theorems for split feasibility problems on zeros of the sum of monotone operators and fixed point sets in Hilbert spaces. Fixed Point Theory Appl. 2017, 6 (2017)
Tran, D.Q., Muu, L.D., Nguyen, V.H.: Extragradient algorithms extended to equilibrium problems. Optimization 57, 749–776 (2008)
Tuyen, T.M.: A strong convergence theorem for the split common null point problem in Banach spaces. Appl. Math. Optim. (2017). https://doi.org/10.1007/s00245-017-9427-z
Tuyen, T.M., Ha, N.S.: A strong convergence theorem for solving the split feasibility and fixed point problems in Banach spaces. J. Fixed Point Theory Appl. 20, 140 (2018)
Tuyen, T.M., Ha, N.S., Thuy, N.T.T.: A shrinking projection method for solving the split common null point problem in Banach spaces. Numer. Algorithms (2018). https://doi.org/10.1007/s11075-018-0572-5
Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: Extragradient methods and linear algorithms for solving Ky Fan inequalities and fixed point problems. J. Optim. Theory Appl. 155, 605–627 (2012)
Xu, H.K.: Iterative algorithm for nonlinear operators. J. Lond. Math. Soc. 2, 1–17 (2002)
Acknowledgements
The authors are grateful to anonymous referees for their comments and remarks which helped to improve the paper. Vahid Dadashi is supported by Sari Branch, Islamic Azad University.
Funding
This work is partially supported by Naresuan University.
Contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Petrot, N., Rabbani, M., Khonchaliew, M. et al. A new extragradient algorithm for split equilibrium problems and fixed point problems. J Inequal Appl 2019, 137 (2019). https://doi.org/10.1186/s13660-019-2086-7