Abstract
In this article, we introduce a modified form of a generalized system of variational inclusions, called the generalized system of modified variational inclusion problems (GSMVIP). This problem reduces to the classical variational inclusion and variational inequality problems. Motivated by several recent results on the subgradient extragradient method, we propose a new subgradient extragradient method for finding a common element of the set of solutions of the GSMVIP and the set of solutions of a finite family of variational inequality problems. Under suitable assumptions, strong convergence theorems are proved in the framework of a Hilbert space. In addition, some numerical results indicate that the proposed method is effective.
1 Introduction
Throughout this paper, let H be a real Hilbert space and C be a nonempty closed convex subset of H with the inner product \(\langle \cdot ,\cdot \rangle \) and norm \(\Vert \cdot \Vert \). Let \(T:C \rightarrow C\) be a mapping. Then T is called nonexpansive if \(\Vert Tx-Ty \Vert \leq \Vert x-y \Vert \), for all \(x,y \in C\). We denote by \(F(T)\) the set of fixed points of T, that is, \(F(T) = \{ x \in C : Tx = x \} \). It is well known that \(F(T)\) is closed and convex.
Let \(B: H \rightarrow H\) be a mapping and \(M: H \rightarrow 2^{H}\) be a multi-valued mapping. The variational inclusion problem is to find \(x \in H\) such that
where θ is the zero vector in H. The set of solutions of (1) is denoted by \(VI(H,B,M)\). This problem has received much attention due to its applications in a large variety of problems arising in convex programming, variational inequalities, split feasibility problems, and minimization problems. To be more precise, some concrete problems in machine learning, image processing, and linear inverse problems can be modeled mathematically by this formulation.
The variational inequality problem (VIP) is to find a point \(u \in C\) such that
The set of solutions of the variational inequality problem is denoted by \(VI(C,A)\). This problem is an important tool in economics, engineering and mathematics. It includes, as special cases, many problems of nonlinear analysis such as optimization, optimal control problems, saddle point problems and mathematical programming; see, for example, [1–4].
It is well known that one of the most popular methods for solving the problem (VIP) is the extragradient method proposed by Korpelevich [5]. The extragradient method requires computing two projections onto the feasible set C in each iteration. As noted in the remarks of the authors in [6], when C has a simple closed-form description, as in the case of a ball or a half-space, the projection onto C can be computed easily; otherwise, the cost of these projections can affect the efficiency of the method. In recent years, the extragradient method has received great attention from many authors, who improved it in various ways; see, e.g. [7–13] and the references therein.
In 2011, Censor et al. [12] proposed the subgradient extragradient method for solving variational inequality problems as follows:
for each \(n \geq 1\), where \(\lambda \in (0,1/L)\). In this method, they replaced the second projection in Korpelevich’s extragradient method by a projection onto a half-space, which can be computed explicitly.
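To make the method concrete, the following sketch implements the iteration \(y_{n} = P_{C}(x_{n}-\lambda Ax_{n})\), \(T_{n} = \{ w \in H: \langle x_{n}-\lambda Ax_{n}-y_{n}, w-y_{n} \rangle \leq 0 \} \), \(x_{n+1} = P_{T_{n}}(x_{n}-\lambda Ay_{n})\) on an illustrative problem of our own choosing; the unit ball C and the linear monotone operator A below are assumptions made for illustration, not data from [12].

```python
import numpy as np

def project_ball(x, r=1.0):
    # Metric projection onto the closed ball C = {x : ||x|| <= r}.
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def project_halfspace(p, c, y):
    # Metric projection onto the half-space T = {w : <c, w - y> <= 0}.
    s = np.dot(c, p - y)
    if s <= 0:
        return p
    return p - (s / np.dot(c, c)) * c

def subgradient_extragradient(A, x0, lam, iters=60):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Ax = A(x)
        y = project_ball(x - lam * Ax)                 # y_n = P_C(x_n - lam*A(x_n))
        c = x - lam * Ax - y                           # outward normal of T_n
        x = project_halfspace(x - lam * A(y), c, y)    # x_{n+1} = P_{T_n}(x_n - lam*A(y_n))
    return x

# Illustrative data: A(x) = Mx is monotone (its symmetric part is the identity)
# and sqrt(2)-Lipschitz, so lam = 0.5 < 1/L; the unique solution of VI(C, A) is 0.
M = np.array([[1.0, 1.0], [-1.0, 1.0]])
x_final = subgradient_extragradient(lambda v: M @ v, [2.0, 1.0], lam=0.5)
```

Only the second projection, onto the half-space \(T_{n}\), is explicit; this is precisely what makes the scheme cheaper than the original extragradient method when \(P_{C}\) is expensive.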
Motivated by the problem (1), in this paper, we introduce a new problem of the system of variational inclusions in a real Hilbert space as follows:
Let H be a real Hilbert space, let \(A : H \rightarrow H\) be a mapping, and let \(M_{A}, M_{B} : H \rightarrow 2^{H}\) be set-valued mappings. We consider the problem of finding \(x^{*} \in H\) such that
where θ is the zero vector in H; this problem is called a generalized system of modified variational inclusion problems (in short, GSMVIP). The set of solutions of (4) is denoted by Ω, i.e., \(\Omega = \{ x^{*}\in H:\theta \in Ax^{*}+M_{A}x^{*}\text{ and }\theta \in Ax^{*} + M_{B} x^{*} \} \). In particular, if \(M_{A} = M_{B}\), then the problem (4) reduces to the problem (1), and if \(J_{M_{A},\lambda _{A}} = J_{M_{B},\lambda _{B}} = P_{C}\), then the problem (4) reduces to VIP.
In 2012, Kangtunyakarn [14] modified the set of variational inequality problems as follows:
where A and B are the mappings of C into H.
In order to develop efficient algorithms for finding a solution of a finite family of variational inequality problems, inspired by problem (5), we define the new half-space \(Q_{n} = \{z \in H : \langle (I-\lambda \sum_{i=1}^{N} a_{i}A_{i})x_{n}-y_{n},y_{n}-z \rangle \geq 0 \} \), which serves as a tool to prove the strong convergence theorem. In particular, if we put \(N=1\), then \(Q_{n}\) reduces to \(T_{n}\) in the subgradient extragradient method (3). However, the sequence \(\{ x_{n} \} \) generated by (3) converges only weakly to a solution of the variational inequality problem.
In this paper, motivated by recent research [7, 12] and [14], we introduce a new problem (4) and a new iterative scheme for finding a common element of the set of solutions of a finite family of variational inequality problems and the set of solutions of the proposed problem (4) in a real Hilbert space. Then we establish and prove the strong convergence theorem under some proper conditions. Furthermore, we also give various examples to support our main result.
2 Preliminaries
In this section, we give some useful lemmas that will be needed to prove our main result.
Let C be a nonempty closed convex subset of a real Hilbert space H. We denote strong convergence and weak convergence by the notations → and ⇀, respectively. For every \(x \in H\), there exists a unique nearest point \(P_{C} x \in C\) such that
\(P_{C}\) is called a metric projection of H onto C. It follows that
Lemma 2.1
([15])
Given \(x \in H\) and \(y \in C\). Then \(y = P_{C} x\) if and only if we have the inequality
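This characterization, \(\langle x-P_{C}x, z-P_{C}x \rangle \leq 0\) for all \(z \in C\), can be checked numerically. The sketch below uses a box in \(\mathcal{R}^{2}\) as an illustrative feasible set of our own choosing.

```python
import numpy as np

def P_C(x, lo=0.0, hi=1.0):
    # Metric projection onto the box C = [lo, hi]^2 (componentwise clipping).
    return np.clip(x, lo, hi)

# Check <x - P_C x, z - P_C x> <= 0 on sampled points z of C.
rng = np.random.default_rng(1)
x = np.array([1.7, -0.4])           # a point outside C
y = P_C(x)                          # its projection, (1.0, 0.0)
zs = rng.uniform(0.0, 1.0, size=(200, 2))
ok = all(np.dot(x - y, z - y) <= 1e-12 for z in zs)
```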
Definition 2.2
Let \(M : H \rightarrow 2^{H}\) be a multi-valued mapping.
(i) The graph \(G(M)\) of M is defined by
(ii) The operator M is called a maximal monotone operator if M is monotone, i.e.
and the graph \(G(M)\) of M is not properly contained in the graph of any other monotone operator. It is clear that a monotone mapping M is maximal if and only if for any \((x,u) \in H \times H\), \(\langle u-v,x-y \rangle \geq 0\) for every \((y,v) \in G(M)\) implies that \(u \in M(x)\).
Let \(M: H \rightarrow 2^{H}\) be a multi-valued maximal monotone mapping, then the single-valued mapping \(J_{M,\lambda }: H \rightarrow H\) defined by
is called the resolvent operator associated with M, where λ is a positive number and I is an identity mapping; see [16]. Note that \(J_{M,\lambda }\) is a nonexpansive mapping.
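As a classical illustration (not taken from this paper): for \(M = \partial \vert \cdot \vert \) on the real line, the resolvent \(J_{M,\lambda }\) is the soft-thresholding operator \(J_{M,\lambda }(x)=\operatorname{sign}(x)\max \{ \vert x \vert -\lambda ,0 \} \). The sketch below encodes it and checks its nonexpansiveness numerically on sampled pairs.

```python
import numpy as np

def resolvent_abs(x, lam):
    # Resolvent J_{M,lam} = (I + lam*M)^{-1} for M = subdifferential of |.|
    # on the real line, i.e. the soft-thresholding operator.
    return float(np.sign(x)) * max(abs(x) - lam, 0.0)

# Numerical check of nonexpansiveness: |J(x) - J(y)| <= |x - y|.
rng = np.random.default_rng(0)
lam = 0.7
pairs = rng.uniform(-5.0, 5.0, size=(100, 2))
nonexpansive = all(
    abs(resolvent_abs(a, lam) - resolvent_abs(b, lam)) <= abs(a - b) + 1e-12
    for a, b in pairs
)
```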
Definition 2.3
Let \(A: C \rightarrow H\) be a mapping.
(i) A is called μ-Lipschitz continuous if there exists a nonnegative real number μ such that
(ii) A is called α-inverse strongly monotone if there exists a positive real number \(\alpha > 0\) such that
Lemma 2.4
([14])
Let C be a nonempty closed convex subset of a real Hilbert space H and let A,B: C →H be α- and β-inverse strongly monotone mappings, respectively, with \(\alpha ,\beta > 0\) and \(VI(C,A) \cap VI(C,B) \neq \emptyset \). Then
Furthermore, if \(0 < \gamma < \min \{ 2\alpha ,2\beta \} \), we find that \(I - \gamma (aA + (1-a)B)\) is a nonexpansive mapping.
Remark 2.5
For every \(i = 1,2,\ldots,N\), let the mappings \(A_{i} : C\rightarrow H \) be \(\alpha _{i} \)-inverse strongly monotone with \(\eta = \min_{i=1,2,\ldots,N} \{ \alpha _{i} \} \) and \(\bigcap_{i=1}^{N} VI(C,A_{i}) \neq \emptyset \). Then
where \(\sum_{i=1}^{N} a_{i} = 1\) and \(0< a_{i}<1\) for every \(i = 1,2,\ldots,N\). Moreover, we find that \(\sum_{i=1}^{N} a_{i}A_{i}\) is monotone and is a μ-Lipschitz continuous mapping.
Proof
It is easy to see that \(\sum_{i=k+1}^{N}\frac{a_{i}}{\prod_{j=1}^{k}(1-a_{j})}A_{i}\) is an η-inverse strongly monotone mapping with \(\eta = \min \{ \alpha _{i} \} \) for each \(i = 2,\ldots,N\) and \(k = 1,2,\ldots,N-1\).
Take \(N = 3\) and let \(VI(C,A_{1}) \cap VI(C,A_{2}) \cap VI(C,A_{3}) \neq \emptyset \). By using Lemma 2.4, we have
where \(a_{1} , a_{2} ,a_{3} \in (0,1)\) and \(\sum_{i=1}^{3} a_{i} = 1\).
Take \(N = 4\) and let \(\bigcap_{i=1}^{4} VI(C,A_{i}) \neq \emptyset \). By using Lemma 2.4 and (8), we have
where \(a_{1} , a_{2} ,a_{3}, a_{4} \in (0,1)\) and \(\sum_{i=1}^{4} a_{i} = 1\).
In the same way, if \(\bigcap_{i=1}^{N} VI(C,A_{i}) \neq \emptyset \), we obtain
where \(a_{i} \in (0,1)\), for each \(i = 1,2,\ldots,N\), and \(\sum_{i=1}^{N} a_{i} = 1\). □
Lemma 2.6
In a real Hilbert space H, the following well-known results hold:
(i) For all \(x,y \in H\) and \(\alpha \in [0,1]\),
(ii) \(\Vert x + y \Vert ^{2} \leq \Vert x \Vert ^{2} + 2 \langle y, x+y \rangle \) for all \(x,y \in H\).
Lemma 2.7
([17])
Let C be a nonempty closed and convex subset of a real Hilbert space H. If \(T:C \rightarrow C\) is a nonexpansive mapping with \(F(T) \neq \emptyset \), then the mapping \(I-T\) is demiclosed at 0, i.e., if \(\{ x_{n} \} \) is a sequence in C weakly converging to \(x \in C\) and if \(\{ x_{n} - T x_{n} \} \) converges strongly to 0, then \(x \in F(T)\).
Lemma 2.8
([17])
Let \(\{ s_{n} \} \) be a sequence of nonnegative real numbers satisfying
where \(\{\alpha _{n}\}\) is a sequence in (0,1) and \(\{ \delta _{n} \} \) is a sequence such that
(1) \(\sum_{n=1}^{\infty } \alpha _{n}= \infty \);
(2) \(\limsup_{n \rightarrow \infty }\frac{\delta _{n}}{\alpha _{n}} \leq 0\) or \(\sum_{n=1}^{\infty } |\delta _{n}| < \infty \).
Then \(\lim_{n \rightarrow \infty } s_{n} = 0\).
Lemma 2.9
([17])
Each Hilbert space H satisfies Opial’s condition, i.e., for any sequence \(\{ x_{n} \} \) with \(x_{n} \rightharpoonup x\), the inequality
holds for every \(y \in H\) with \(x \neq y\).
Lemma 2.10
([16])
\(u \in H\) is a solution of variational inclusion (1) if and only if \(u=J_{M,\lambda }(u-\lambda Bu)\), \(\forall \lambda > 0\), i.e.,
If \(\lambda \in (0,2\alpha ]\), then \(VI(H,B,M)\) is a closed convex subset in H.
The next lemma presents the association of the fixed point of a nonlinear mapping and the solution of GSMVIP under suitable conditions on the parameters.
Lemma 2.11
Let H be a real Hilbert space and let \(A_{G} : H \rightarrow H\) be an α-inverse strongly monotone mapping. Let \(M_{A}, M_{B} : H \rightarrow 2^{H}\) be multi-valued maximal monotone mappings with \(\Omega \neq \emptyset \). Then \(x^{*} \in \Omega \) if and only if \(x^{*} = Gx^{*}\), where \(G : H \rightarrow H\) is a mapping defined by
for all \(x \in H\), \(b \in (0,1)\) and \(\lambda _{A},\lambda _{B} \in (0,2\alpha )\). Moreover, we see that G is a nonexpansive mapping.
Proof
Let the conditions hold.
\((\Rightarrow )\) Let \(x^{*} \in \Omega \). Then \(\theta \in A_{G}x^{*} + M_{A}x^{*}\) and \(\theta \in A_{G}x^{*} + M_{B}x^{*}\), that is, \(x^{*} \in VI(H,A_{G},M_{A})\) and \(x^{*} \in VI(H,A_{G},M_{B})\).
From Lemma 2.10, we have \(x^{*} \in F(J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}))\) and \(x^{*} \in F(J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G}))\).
It implies that
and
By the definition of G, (11) and (12), we have
\((\Leftarrow )\) Let \(x^{*} = G(x^{*})\). Applying the same method as in Lemma 2.1(2) of [16], we find that \(J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G})\) and \(J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G})\) are nonexpansive mappings.
Since \(x^{*} = G(x^{*})\), we have
Let \(y \in \Omega \). Then \(\theta \in A_{G}y + M_{A}y\) and \(\theta \in A_{G}y + M_{B}y\).
From Lemma 2.10, it implies that
\(y \in F(J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G})) \cap F(J_{M_{B}, \lambda _{B}}(I-\lambda _{B}A_{G}))\). Then
It implies that \(\Vert x^{*} - J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G})x^{*} \Vert = 0\).
That is, \(x^{*} \in F(J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G}))\).
Since \(x^{*} = G(x^{*}) \) and \(x^{*} \in F(J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G}))\), we have
Therefore \(x^{*} \in F(J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}))\).
From Lemma 2.10, \(x^{*} \in F(J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G}))\) and \(x^{*} \in F(J_{M_{B},\lambda _{B}}(I-\lambda _{B}A_{G}))\), we have \(\theta \in A_{G}x^{*} + M_{A}x^{*}\) and \(\theta \in A_{G}x^{*} + M_{B}x^{*}\). Then \(x^{*} \in \Omega \).
Applying (13), we can conclude that G is a nonexpansive mapping. □
We give some examples to support Lemma 2.11 and show that Lemma 2.11 is not true if some condition fails.
Example 2.12
Let \(H = \mathcal{R}^{2}\) be the two-dimensional Euclidean space with the inner product \(\langle \cdot ,\cdot \rangle : \mathcal{R}^{2} \times \mathcal{R}^{2} \rightarrow \mathcal{R}\) defined by \(\langle \mathbf{x},\mathbf{y} \rangle = x \cdot y = x_{1}y_{1} + x_{2}y_{2}\) for all \(\mathbf{x} = (x_{1},x_{2}), \mathbf{y} = (y_{1},y_{2}) \in \mathcal{R}^{2}\), and the usual norm \(\Vert \cdot \Vert : \mathcal{R}^{2} \rightarrow \mathcal{R}\) given by \(\Vert \mathbf{x} \Vert = \sqrt{x_{1}^{2} + x_{2}^{2}}\) for all \(\mathbf{x} = (x_{1},x_{2}) \in \mathcal{R}^{2}\). Let \(A_{G}: \mathcal{R}^{2} \rightarrow \mathcal{R}^{2}\) be defined by \(A_{G}((x_{1},x_{2})) = (x_{1} - 5,x_{2} - 5)\), let \(M_{A} : \mathcal{R}^{2} \rightarrow 2^{\mathcal{R}^{2}}\) be defined by \(M_{A}(x_{1},x_{2}) = \{ (2x_{1}-1,2x_{2} - 1) \} \), and let \(M_{B} : \mathcal{R}^{2} \rightarrow 2^{\mathcal{R}^{2}}\) be defined by \(M_{B}(x_{1},x_{2}) = \{ (\frac{x_{1}}{2}+2,\frac{x_{2}}{2}+2) \} \). Show that \((2,2) \in F(G)\).
Solution. It is obvious that \(\Omega = \{(2,2)\}\). Choose \(\lambda _{A} = \frac{1}{2}\). From \(M_{A}(x_{1},x_{2}) = \{ (2x_{1}-1,2x_{2} - 1) \} \) and the resolvent of \(M_{A}\), \(J_{M_{A},\lambda _{A}}x = (I+\lambda _{A} M_{A})^{-1}x\) for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\), we have
for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). Choose \(\lambda _{B} = 1\). From \(M_{B}(x_{1},x_{2}) = \{ (\frac{x_{1}}{2}+2,\frac{x_{2}}{2}+2) \} \) and the resolvent of \(M_{B}\), \(J_{M_{B},\lambda _{B}}x= (I+ \lambda _{B} M_{B})^{-1}x\) for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\), we have
for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). It is easy to see that \(A_{G}\) is 1-inverse strongly monotone. Choose \(b = \frac{1}{4}\). From (14) and (15), we have
for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). By Lemma 2.11, we have \((2,2) \in F(G)\).
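The computation above can be double-checked numerically. The following sketch encodes the closed-form resolvents, obtained by solving \((I+\lambda M)x = y\) for the affine operators \(M_{A}\) and \(M_{B}\), and evaluates G at \((2,2)\); the helper names are ours.

```python
import numpy as np

lamA, lamB, b = 0.5, 1.0, 0.25

def A_G(x):
    # A_G(x1, x2) = (x1 - 5, x2 - 5), acting componentwise.
    return x - 5.0

def J_A(y):
    # Resolvent (I + lamA*M_A)^{-1} with M_A(x) = 2x - 1:
    # solve x + 0.5*(2x - 1) = y, i.e. x = (y + 0.5)/2.
    return (y + 0.5) / 2.0

def J_B(y):
    # Resolvent (I + lamB*M_B)^{-1} with M_B(x) = x/2 + 2:
    # solve x + (x/2 + 2) = y, i.e. x = 2(y - 2)/3.
    return 2.0 * (y - 2.0) / 3.0

def G(x):
    # G(x) = J_A(I - lamA*A_G)(b*x + (1 - b)*J_B(I - lamB*A_G)x).
    inner = b * x + (1.0 - b) * J_B(x - lamB * A_G(x))
    return J_A(inner - lamA * A_G(inner))

fixed_point = G(np.array([2.0, 2.0]))
```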
Example 2.13
Let \(H = \mathcal{R}^{2}\) be the two-dimensional Euclidean space with the inner product \(\langle \cdot ,\cdot \rangle : \mathcal{R}^{2} \times \mathcal{R}^{2} \rightarrow \mathcal{R}\) defined by \(\langle \mathbf{x},\mathbf{y} \rangle = x \cdot y = x_{1}y_{1} + x_{2}y_{2}\) for all \(\mathbf{x} = (x_{1},x_{2}), \mathbf{y} = (y_{1},y_{2}) \in \mathcal{R}^{2}\), and the usual norm \(\Vert \cdot \Vert : \mathcal{R}^{2} \rightarrow \mathcal{R}\) given by \(\Vert \mathbf{x} \Vert = \sqrt{x_{1}^{2} + x_{2}^{2}}\) for all \(\mathbf{x} = (x_{1},x_{2}) \in \mathcal{R}^{2}\). Let \(A_{G}: \mathcal{R}^{2} \rightarrow \mathcal{R}^{2}\) be defined by \(A_{G}((x_{1},x_{2})) = (x_{1} - 5,x_{2} - 5)\), let \(M_{A} : \mathcal{R}^{2} \rightarrow 2^{\mathcal{R}^{2}}\) be defined by \(M_{A}(x_{1},x_{2}) = \{ (2x_{1}-1,2x_{2} - 1) \} \), and let \(M_{B} : \mathcal{R}^{2} \rightarrow 2^{\mathcal{R}^{2}}\) be defined by \(M_{B}(x_{1},x_{2}) = \{ (\frac{x_{1}}{2}+2,\frac{x_{2}}{2}+2) \} \). Show that \((2,2) \notin F(G)\).
Solution. It is obvious that \(\Omega = \{(2,2)\}\). Choose \(\lambda _{A} = 2\). From \(M_{A}(x_{1},x_{2}) = \{ (2x_{1}-1,2x_{2} - 1) \} \) and the resolvent of \(M_{A}\), \(J_{M_{A},\lambda _{A}}x = (I+\lambda _{A} M_{A})^{-1}x\) for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\), we have
for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). Choose \(\lambda _{B} = 4\). From \(M_{B}(x_{1},x_{2}) = \{ (\frac{x_{1}}{2}+2,\frac{x_{2}}{2}+2) \} \) and the resolvent of \(M_{B}\), \(J_{M_{B},\lambda _{B}}x= (I+ \lambda _{B} M_{B})^{-1}x\) for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\), we have
for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). Choose \(b = \frac{1}{4}\). From (16), (17) and \(A_{G}\) being 1-inverse strongly monotone, we have
for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). By Lemma 2.11, we have \((2,2) \notin F(G)\).
Lemma 2.14
([18])
Let \(\{ \Gamma _{n} \} \) be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{ \Gamma _{n_{j}} \} \) of \(\{\Gamma _{n}\}\) such that \(\Gamma _{n_{j}} < \Gamma _{{n_{j}}+1}\) for all \(j \geq 0\). Consider the sequence of integers \(\{ \tau (n) \} _{n \geq n_{0}}\) defined by
Then \(\{ \tau (n) \} _{n \geq n_{0}}\) is a nondecreasing sequence verifying \(\lim_{n \rightarrow \infty } \tau (n) = \infty \) and, for all \(n \geq n_{0}\),
Lemma 2.15
Let H be a real Hilbert space and, for every \(i = 1,2,\ldots,N\), let \(A_{i} : H \rightarrow H \) be \(\alpha _{i}\)-inverse strongly monotone mappings with \(\eta = \min_{i=1,2,\ldots,N} \{ \alpha _{i} \} \). Let \(\{ x_{n} \} _{n=1}^{\infty }\) and \(\{ y_{n} \} _{n=1}^{\infty }\) be sequences generated by \(y_{n} = P_{C}(I-\lambda \sum_{i=1}^{N} a_{i}A_{i})x_{n}\), \(Q_{n} = \{ z \in H : \langle (I-\lambda \sum_{i=1}^{N} a_{i}A_{i})x_{n}-y_{n},y_{n}-z \rangle \geq 0 \} \), and let \(x^{*} \in \bigcap_{i=1}^{N} VI(C,A_{i})\). Then the following inequality is fulfilled:
where \(\sum_{i=1}^{N} a_{i} = 1\), \(0 < a_{i} < 1\) for every \(i = 1,2,\ldots,N\), and \(\lambda \in (0,\eta )\) with \(\eta = \min_{i=1,2,\ldots,N} \{ \alpha _{i} \} \).
Proof
Since \(x^{*} \in \bigcap_{i=1}^{N} VI(C,A_{i})\), we have \(x^{*} \in VI(C,A_{i})\) for every \(i = 1,2,\ldots,N\). By (6), we obtain
From the monotonicity of \(\sum_{i=1}^{N} a_{i}A_{i}\), we have
It implies that
□
3 Main result
In this section, we prove the strong convergence of the sequence generated by the proposed iterative method for finding a common element of the set of solutions of a finite family of variational inequality problems and the set of solutions of the proposed problem.
Theorem 3.1
Let H be a real Hilbert space. For \(i = 1,2,\ldots,N\), let \(A_{i} : H \rightarrow H\) be \(\alpha _{i}\)-inverse strongly monotone mappings and let \(A_{G} : H \rightarrow H \) be an \(\alpha _{G}\)-inverse strongly monotone mapping. Define the mapping \(G:H \rightarrow H\) by \(G(x) = J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G})(bx+(1-b)J_{M_{B}, \lambda _{B}}(I-\lambda _{B}A_{G})x)\) for all \(x \in H\), \(b \in (0,1)\) and \(\lambda _{A},\lambda _{B} \in (0,2\alpha _{G})\). Assume that \(\Gamma = \bigcap_{i=1}^{N} VI(C,A_{i}) \cap F(G) \neq \emptyset \). Let the sequences \(\{ y_{n} \} \) and \(\{ x_{n} \} \) be generated by \(x_{1} , u \in H\) and
where \(\sum_{i=1}^{N} a_{i} = 1, 0 < a_{i} < 1 , \{ \alpha _{n} \} , \{ \beta _{n} \} , \{ \gamma _{n} \} \subset [0, 1]\) with \(\alpha _{n}+\beta _{n}+\gamma _{n} = 1\), \(\lambda \in (0,\eta )\) with \(\eta = \min_{i=1,2,\ldots,N} \{ \alpha _{i} \} \).
Suppose the following conditions hold:
(i) \(\sum_{n=0}^{\infty }\alpha _{n} = \infty \), \(\lim_{n \rightarrow \infty } \alpha _{n} = 0\),
(ii) \(0< c<\beta _{n}\), \(\gamma _{n} \leq d <1\).
Then \(\{ x_{n} \} \) converges strongly to \(x^{*} \in \Gamma \) where \(x^{*} = P_{\Gamma }u\).
Proof
First, we show that \(\{ x_{n} \} \) is bounded. Let \(z_{n} = P_{Q_{n}}(x_{n} - \lambda \sum_{i=1}^{N} a_{i}A_{i}y_{n})\).
We consider
where \(t_{n} = \frac{\beta _{n} z_{n} + \gamma _{n} Gx_{n}}{1-\alpha _{n}}\). Letting \(x^{*} \in \Gamma = \bigcap_{i=1}^{N} VI(C,A_{i}) \cap F(G)\), we have
From definition of \(x_{n+1}\) and (22), we consider
By Lemma 2.15 and \(\lambda \in (0,\eta )\), we have
By induction,
then \(\{ x_{n} \} \) is a bounded sequence.
We use
It implies that
Let \(S_{n} := \beta _{n}(1-\frac{\lambda }{\eta }) \Vert z_{n} - y_{n} \Vert ^{2} + \beta _{n}(1-\frac{\lambda }{\eta }) \Vert x_{n} - y_{n} \Vert ^{2} + \frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2}\).
Then we have
Now, we consider two possible cases:
Case 1. Put \(\Gamma _{n} := \Vert x_{n} - x^{*} \Vert ^{2}\) for all \(n \in \mathcal{N}\).
Assume that there is \(n_{0} \geq 0\) such that, for each \(n \geq n_{0}\), \(\Gamma _{n+1} \leq \Gamma _{n}\).
In this case, \(\lim_{n \rightarrow \infty } \Gamma _{n}\) exists and \(\lim_{n \rightarrow \infty } (\Gamma _{n} - \Gamma _{n+1}) = 0\).
Since \(\lim_{n \rightarrow \infty } \alpha _{n} = 0\), it follows from (27) that \(\lim_{n \rightarrow \infty } S_{n} = 0\).
Therefore, we have \(\lim_{n \rightarrow \infty } \beta _{n}(1-\frac{\lambda }{\eta }) \Vert z_{n} - y_{n} \Vert ^{2} = 0\), \(\lim_{n \rightarrow \infty } \beta _{n}(1-\frac{\lambda }{\eta }) \Vert x_{n} - y_{n} \Vert ^{2} = 0\) and \(\lim_{n \rightarrow \infty } \frac{\beta _{n} \gamma _{n}}{1-\alpha _{n}} \Vert z_{n} - Gx_{n} \Vert ^{2} = 0\).
From assumptions (i) and (ii), we obtain
Hence, we obtain
From (28), we have
We now show that \(\limsup_{n \rightarrow \infty } \langle u - x^{*}, x_{n} - x^{*} \rangle \leq 0\).
We can choose a subsequence \(\{ x_{n_{i}} \} \) of \(\{ x_{n} \} \) such that
Because \(\{ x_{n} \} \) is a bounded sequence in H, there exists a subsequence of \(\{ x_{n} \} \) that converges weakly to an element in H. Without loss of generality, we can assume that \(x_{n_{i}} \rightharpoonup w\) where \(w \in H\). Since \(\lim_{n \rightarrow \infty } \Vert x_{n} - z_{n} \Vert = 0\), we have \(z_{n_{i}} \rightharpoonup w\).
Since \(\lim_{n \rightarrow \infty } \Vert x_{n} - y_{n} \Vert = 0\), \(y_{n_{i}} \rightharpoonup w\).
Assume that \(w \notin \bigcap_{i=1}^{N} VI(C,A_{i}) \). So, we have \(w \notin F(P_{C}(I-\lambda \sum_{i=1}^{N} a_{i}A_{i}))\).
Then we have \(w \neq P_{C}(I-\lambda \sum_{i=1}^{N} a_{i}A_{i})w\). By the nonexpansiveness of \(P_{C}(I-\lambda \sum_{i=1}^{N} a_{i}A_{i})\), (28) and Opial’s property, we have
This is a contradiction; we have \(w \in VI(C,\sum_{i=1}^{N}a_{i}A_{i})\). From Remark 2.5, we have
Assume that \(w \notin F(G)\). Then we have \(w \neq Gw\). From (29) and Opial’s property, we have
This is a contradiction; we have
From (31) and (32), we have \(w \in \bigcap_{i=1}^{N} VI(C,A_{i})\cap F(G)\).
Therefore, we get
where \(x^{*} = P_{\Gamma }u\).
Next, we show that \(\{ x_{n} \} \) converges strongly to \(x^{*}\), where \(x^{*} = P_{\Gamma }u\).
From the nonexpansiveness of G, (22) and (24), we have
From the definition of \(x_{n}\), (34) and \(x^{*} = P_{\Gamma }u\), we have
By applying Lemma 2.8 to (35), we find that the sequence \(\{ x_{n} \} \) converges strongly to \(x^{*}\).
Case 2. Assume that there exists a subsequence \(\{ \Gamma _{n_{i}} \} \subset \{ \Gamma _{n} \} \) such that \(\Gamma _{n_{i}} \leq \Gamma _{n_{i} + 1}\) for all \(i \in \mathcal{N}\). In this case, we can define \(\tau : \mathcal{N} \rightarrow \mathcal{N}\) by \(\tau (n) = \max \{ k \leq n : \Gamma _{k} < \Gamma _{k+1} \} \).
Then we have \(\tau (n) \rightarrow \infty \) as \(n \rightarrow \infty \) and \(\Gamma _{\tau (n)} < \Gamma _{\tau (n)+1}\). So, we have from (26)
Arguing as in Case 1, we have
Because \(\{ x_{\tau (n)} \} \) is a bounded sequence, there exists a subsequence \(\{ x_{\tau (n_{j})} \} \) such that
Following the same argument as the proof of Case 1 for \(\{ x_{\tau (n_{j})} \} \), we have
and
where \(\alpha _{\tau (n)} \rightarrow 0\), \(\sum_{n=1}^{\infty }\alpha _{\tau (n)} = \infty \) and \(\limsup_{n \rightarrow \infty } \langle u - x^{*}, x_{\tau (n)+1} - x^{*} \rangle \leq 0\).
Hence, by Lemma 2.8, we have \(\lim_{n \rightarrow \infty } \Vert x_{\tau (n)} - x^{*} \Vert = 0\) and \(\lim_{n \rightarrow \infty } \Vert x_{\tau (n)+1} - x^{*} \Vert = 0\).
Therefore, by Lemma 2.14, we have
Hence, \(\{ x_{n} \} \) converges strongly to \(x^{*} = P_{\Gamma }u\). This completes the proof of the main theorem. □
4 Application
In 2013, Kangtunyakarn [14] introduced a modification of the system of variational inequalities as follows: finding \((x^{*},z^{*})\in C \times C \) such that
where \(D_{1}, D_{2} : C \rightarrow H\) are two mappings, for every \(\lambda _{1}, \lambda _{2} \geq 0\) and \(a \in [0,1]\).
Let h be a proper lower semicontinuous convex function of H into \((-\infty ,+\infty ]\). The subdifferential ∂h of h is defined by
for all \(x \in H\). From Rockafellar [19], we find that ∂h is a maximal monotone operator. Let C be a nonempty closed convex subset of H and \(i_{C}\) be the indicator function of C, i.e.,
Then \(i_{C}\) is a proper, lower semicontinuous and convex function on H, and so the subdifferential \(\partial i_{C}\) of \(i_{C}\) is a maximal monotone operator. The resolvent operator \(J_{\partial i_{C},\lambda }\) of \(i_{C}\) for \(\lambda > 0\) can be defined by \(J_{\partial i_{C},\lambda }(x) = (I + \lambda \partial i_{C})^{-1}(x)\), \(x \in H\). We have \(J_{\partial i_{C},\lambda }(x) = P_{C} x\) for all \(x \in H\) and \(\lambda > 0\). As a special case, if \(M_{A} = M_{B} = \partial i_{C}\) in Lemma 2.11, we find that \(J_{M_{A},\lambda _{A}}= J_{M_{B},\lambda _{B}} = P_{C}\). So we obtain the following result.
Lemma 4.1
([14])
Let C be a nonempty closed convex subset of a real Hilbert space H and let \(D_{1}, D_{2} : C \rightarrow H\) be mappings. For every \(\lambda _{1}, \lambda _{2} > 0\) and \(b \in [0,1]\), the following statements are equivalent:
(a) \((x^{*},z^{*})\in C \times C \) is a solution of problem (37),
(b) \(x^{*}\) is a fixed point of the mapping \(\widehat{G}: C \rightarrow C\), i.e., \(x^{*} \in F(\widehat{G})\), defined by
where \(z^{*} = P_{C}(I-\lambda _{2}D_{2})x^{*}\).
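The identity \(J_{\partial i_{C},\lambda } = P_{C}\) used above can be illustrated with a one-dimensional sketch of our own choosing: for \(C = [a,b] \subset \mathcal{R}\), the resolvent of \(\partial i_{C}\) is clipping to the interval, independently of \(\lambda > 0\).

```python
import numpy as np

def resolvent_indicator(x, a, b, lam):
    # Resolvent (I + lam * d i_C)^{-1} of the subdifferential of the
    # indicator function of C = [a, b]: for every lam > 0 this equals P_C,
    # i.e. clipping to the interval.
    return float(np.clip(x, a, b))

# The result does not depend on lam, unlike a general resolvent.
vals = [resolvent_indicator(12.0, 1.0, 10.0, lam) for lam in (0.5, 1.0, 4.0)]
```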
Theorem 4.2
Let H be a real Hilbert space. For \(i = 1,2,\ldots,N\), let \(A_{i} : H \rightarrow H\) be \(\alpha _{i}\)-inverse strongly monotone mappings and let \(A_{G} : H \rightarrow H \) be an \(\alpha _{G}\)-inverse strongly monotone mapping. Define the mapping \(\widehat{G}: H \rightarrow H\) by (38). Assume that \(\Gamma = \bigcap_{i=1}^{N} VI(C,A_{i}) \cap F(\widehat{G}) \neq \emptyset \). Let the sequences \(\{ y_{n} \} \) and \(\{ x_{n} \} \) be generated by \(x_{1} , u \in H\) and
where \(\sum_{i=1}^{N} a_{i} = 1, 0 < a_{i} < 1 , \{ \alpha _{n} \} , \{ \beta _{n} \} , \{ \gamma _{n} \} \subset [0, 1]\) with \(\alpha _{n}+\beta _{n}+\gamma _{n} = 1\), \(\lambda \in (0,\eta )\) with \(\eta = \min_{i=1,2,\ldots,N} \{ \alpha _{i} \} \).
Suppose the following conditions hold:
(i) \(\sum_{n=0}^{\infty }\alpha _{n} = \infty \), \(\lim_{n \rightarrow \infty } \alpha _{n} = 0\).
(ii) \(0< c<\beta _{n}\), \(\gamma _{n} \leq d <1\).
Then \(\{ x_{n} \} \) converges strongly to \(x^{*} \in \Gamma \) where \(x^{*} = P_{\Gamma }u\).
Proof
Taking \(J_{M_{A},\lambda _{A}} = J_{M_{B},\lambda _{B}}= P_{C}\) in Theorem 3.1, we obtain the desired conclusion. □
In order to apply our main result, we give the following lemma.
Lemma 4.3
([14])
Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(T,S : C \rightarrow C\) be nonexpansive mappings. Define a mapping \(B^{A} : C \rightarrow C\) by \(B^{A}x = T(aI + (1-a)S)x\) for every \(x \in C\) and \(a \in (0,1)\). Then \(F(B^{A}) = F(T) \cap F(S)\) and \(B^{A}\) is a nonexpansive mapping.
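Lemma 4.3 can be illustrated with two interval projections on the real line (an example of our own choosing): for \(T = P_{[0,2]}\), \(S = P_{[1,3]}\) and \(a = \frac{1}{2}\), the lemma predicts \(F(B^{A}) = [0,2] \cap [1,3] = [1,2]\).

```python
def T(x):
    # Projection onto [0, 2]; nonexpansive with F(T) = [0, 2].
    return min(max(x, 0.0), 2.0)

def S(x):
    # Projection onto [1, 3]; nonexpansive with F(S) = [1, 3].
    return min(max(x, 1.0), 3.0)

a = 0.5

def B(x):
    # B^A x = T(a*x + (1 - a)*S(x)); Lemma 4.3 predicts F(B^A) = [1, 2].
    return T(a * x + (1.0 - a) * S(x))

# Sample points inside and outside [1, 2]; only those in [1, 2] are fixed.
fixed = [x for x in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0] if abs(B(x) - x) < 1e-12]
```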
We apply Theorem 3.1, together with Lemma 4.3 ([14]), to find a solution of the variational inclusion problem.
Lemma 4.4
Let H be a real Hilbert space and let \(A_{G} : H \rightarrow H\) be an \(\alpha _{G}\)-inverse strongly monotone mapping. Let \(M_{A}, M_{B} : H \rightarrow 2^{H}\) be multi-valued maximal monotone mappings with \(VI(H,A_{G},M_{A}) \cap VI(H,A_{G},M_{B}) \neq \emptyset \). Define a mapping \(G: H \rightarrow H\) as in Lemma 2.11 for all \(x \in H\), \(b \in (0,1)\) and \(\lambda _{A},\lambda _{B} \in (0,2\alpha _{G})\). Then \(F(G) = VI(H,A_{G},M_{A}) \cap VI(H,A_{G},M_{B})\).
Proof
From Lemma 2.11, we find that G is nonexpansive and that \(J_{M_{A},\lambda _{A}}(I - \lambda _{A} A_{G})\) and \(J_{M_{B},\lambda _{B}}(I - \lambda _{B} A_{G})\) are nonexpansive. Since
and Lemma 4.3, we have
By Lemma 2.10, we have
□
Theorem 4.5
Let H be a real Hilbert space. For \(i = 1,2,\ldots,N\), let \(A_{i} : H \rightarrow H\) be \(\alpha _{i}\)-inverse strongly monotone mappings and let \(A_{G} : H \rightarrow H \) be an \(\alpha _{G}\)-inverse strongly monotone mapping. Define the mapping \(G:H \rightarrow H\) by \(G(x) = J_{M_{A},\lambda _{A}}(I-\lambda _{A}A_{G})(bx+(1-b)J_{M_{B}, \lambda _{B}}(I-\lambda _{B}A_{G})x)\) for all \(x \in H\), \(b \in (0,1)\) and \(\lambda _{A},\lambda _{B} \in (0,2\alpha _{G})\). Assume that \(\Gamma = \bigcap_{i=1}^{N} VI(C,A_{i}) \cap VI(H,A_{G},M_{A}) \cap VI(H,A_{G},M_{B}) \neq \emptyset \). Let the sequences \(\{ y_{n} \} \) and \(\{ x_{n} \} \) be generated by \(x_{1} , u \in H\) and
where \(\sum_{i=1}^{N} a_{i} = 1, 0 < a_{i} < 1 , \{ \alpha _{n} \} , \{ \beta _{n} \} , \{ \gamma _{n} \} \subset [0, 1]\) with \(\alpha _{n}+\beta _{n}+\gamma _{n} = 1\), \(\lambda \in (0,\eta )\) with \(\eta = \min_{i=1,2,\ldots,N} \{ \alpha _{i} \} \).
Suppose the following conditions hold:
(i) \(\sum_{n=0}^{\infty }\alpha _{n} = \infty \), \(\lim_{n \rightarrow \infty } \alpha _{n} = 0\).
(ii) \(0< c<\beta _{n}\), \(\gamma _{n} \leq d <1\).
Then \(\{ x_{n} \} \) converges strongly to \(x^{*} \in \Gamma \) where \(x^{*} = P_{\Gamma }u\).
Proof
From Lemma 4.4, and Theorem 3.1, we obtain the desired conclusion. □
Remark 4.6
If \(VI(H,A_{G},M_{A}) \cap VI(H,A_{G},M_{B}) \neq \emptyset \), then we observe that \(VI(H,A_{G},M_{A}) \cap VI(H,A_{G},M_{B}) =\Omega \).
5 Example and numerical results
In this section, we give an example supporting Theorem 3.1.
Example 5.1
Let \(H = \mathcal{R}^{2}\) be the two-dimensional Euclidean space with the inner product \(\langle \cdot ,\cdot \rangle : \mathcal{R}^{2} \times \mathcal{R}^{2} \rightarrow \mathcal{R}\) defined by \(\langle x,y \rangle = x \cdot y = x_{1}y_{1} + x_{2}y_{2}\) and the usual norm \(\Vert \cdot \Vert : \mathcal{R}^{2} \rightarrow \mathcal{R}\) given by \(\Vert x \Vert = \sqrt{x_{1}^{2} + x_{2}^{2}}\) for all \(x=(x_{1},x_{2}) \in \mathcal{R}^{2}\). Let \(C_{1} = \{ (x_{1},x_{2}) \in H | -2x_{1} + x_{2} \leq 1 \} \) and \(C_{2} = \{ (x_{1},x_{2}) \in H | 4x_{1} - 2x_{2} \leq 3 \} \). Define the mapping \(A_{1} : C_{1} \rightarrow \mathcal{R}^{2}\) by \(A_{1}(x_{1},x_{2}) = (\frac{3x_{1}}{2},\frac{3x_{2}}{2})\). Define the mapping \(A_{2} : C_{2} \rightarrow \mathcal{R}^{2}\) by \(A_{2}(x_{1},x_{2}) =(2x_{1},2x_{2})\). Let the mapping \(A_{G} : \mathcal{R}^{2} \rightarrow \mathcal{R}^{2}\) be defined by \(A_{G}(x_{1},x_{2}) = (x_{1}+1,x_{2}+1)\). Let \(C = C_{1} \cap C_{2}\). We have
Let \(x_{1},u \in \mathcal{R}^{2}\), \(\{x_{n}\}_{n=0}^{\infty }\) and \(\{y_{n}\}_{n=0}^{\infty }\) be generated by
where \(\{ \alpha _{n} \} = \frac{1}{12n}, \{ \beta _{n} \} = \frac{5n-2}{12n}, \{ \gamma _{n} \} = \frac{7n+1}{12n} \subset [0, 1]\) and \(a = 0.5 \in (0,1)\). Show that \(\{ x_{n} \} \) and \(\{ y_{n} \} \) converge strongly to \((0,0)\).
Solution. Since \(A_{1},A_{2}\) and \(A_{G}\) are \(\frac{2}{3}\)-, \(\frac{1}{2}\)- and 1-inverse strongly monotone mappings, respectively, \(\eta = \frac{1}{2}\). Choosing \(\lambda _{A} = \frac{1}{2}\), \(\lambda _{B} =1 \in (0,2\alpha _{G}) \) and \(b = \frac{1}{4}\), we obtain \(G(x_{1},x_{2}) = (\frac{x_{1}}{16} , \frac{x_{2}}{16}) \). Choose \(\lambda = \frac{1}{4} \in (0,\eta )\). It is easy to see that the sequences \(\{ \alpha _{n} \} , \{ \beta _{n} \} \) and \(\{ \gamma _{n} \} \) satisfy all conditions in Theorem 3.1 and \((0,0) \in VI(C,A_{1}) \cap VI(C,A_{2}) \cap F(G)\). From Theorem 3.1, we conclude that the sequences \(\{ x_{n} \} \) and \(\{ y_{n} \} \) converge strongly to \((0,0)\).
Example 5.2
Let \(H = L_{2}([-1,1])\) with the inner product \(\langle f,g \rangle = \int _{-1}^{1} f(t)g(t)\,dt\) and the associated norm given by \(\Vert f \Vert := \sqrt{\int _{-1}^{1} f(t)^{2}\,dt}\) for all \(f,g \in L_{2}([-1,1])\). Take \(C = \{ x \in H : \Vert x \Vert \leq 2 \} \). Define the mapping \(A_{1} : L_{2}([-1,1]) \rightarrow L_{2}([-1,1]) \) by \(A_{1}(h(t)) = h(t)-2t\) for all \(t \in [-1,1]\). Define the mapping \(A_{2} : L_{2}([-1,1]) \rightarrow L_{2}([-1,1])\) by \(A_{2}(h(t)) =\frac{3}{2}h(t) - 3t\) for all \(t \in [-1,1]\). Let the mapping \(A_{G} : L_{2}([-1,1]) \rightarrow L_{2}([-1,1])\) be defined by \(A_{G}(h(t)) = h(t) - 5t\) for all \(t \in [-1,1]\). We have
For \(i=1,2\), let \(x_{1},u \in L_{2}([-1,1])\), and let \(\{ x_{n} \} _{n=0}^{\infty }\) and \(\{ y_{n} \} _{n=0}^{\infty }\) be generated by (21), where \(\alpha _{n} = \frac{1}{12n}\), \(\beta _{n} = \frac{5n-2}{12n}\), and \(\gamma _{n} = \frac{7n+1}{12n}\), so that \(\{\alpha _{n}\}, \{\beta _{n}\}, \{\gamma _{n}\} \subset [0,1]\), and \(a = 0.4 \in (0,1)\). Show that \(\{ x_{n} \} \) and \(\{ y_{n} \} \) converge strongly to \(2t\).
Solution. Since \(A_{1}\), \(A_{2}\), and \(A_{G}\) are \(\frac{1}{2}\)-, \(\frac{1}{3}\)-, and 1-inverse strongly monotone, respectively, we may take \(\eta = \frac{1}{2}\). Choosing \(\lambda _{A} = \frac{1}{2}\), \(\lambda _{B} = 1 \in (0,2\alpha _{G})\), and \(b = \frac{1}{4}\), we obtain \(G(h(t)) = \frac{h(t)}{16}\). Choose \(\lambda = \frac{1}{4} \in (0,\eta )\). It is easy to see that the sequences \(\{ \alpha _{n} \}\), \(\{ \beta _{n} \}\), and \(\{ \gamma _{n} \}\) satisfy all conditions of Theorem 3.1 and that \(2t \in VI(C,A_{1}) \cap VI(C,A_{2}) \cap F(G)\). From Theorem 3.1 we conclude that the sequences \(\{ x_{n} \} \) and \(\{ y_{n} \} \) converge strongly to \(2t\).
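The claims about the candidate solution \(h^{*}(t) = 2t\) can be checked numerically by discretizing \(L_{2}([-1,1])\) on a uniform grid and approximating the norm by a Riemann sum (the grid size and quadrature rule below are illustrative choices, and the sketch does not implement Algorithm (21)): \(\Vert 2t \Vert = \sqrt{8/3} \approx 1.633 \leq 2\), so \(2t \in C\), and \(A_{1}(2t) = A_{2}(2t) = 0\), so \(2t\) solves both variational inequalities.

```python
import numpy as np

# Discretize L2([-1,1]) on a uniform grid; the L2 norm is approximated by a
# plain Riemann sum.  This only checks the claims of Example 5.2; it does
# not implement Algorithm (21).
t = np.linspace(-1.0, 1.0, 20001)
dt = t[1] - t[0]

def l2_norm(f):
    return np.sqrt(np.sum(f * f) * dt)

A1 = lambda h: h - 2 * t          # A1(h)(t) = h(t) - 2t
A2 = lambda h: 1.5 * h - 3 * t    # A2(h)(t) = (3/2)h(t) - 3t

h_star = 2 * t                    # candidate solution h*(t) = 2t
print(l2_norm(h_star))            # ≈ sqrt(8/3) ≈ 1.633, so h* ∈ C
assert l2_norm(h_star) <= 2.0
# A1(2t) = A2(2t) = 0 pointwise, hence <A_i(h*), g - h*> = 0 for all g,
# and h* solves both variational inequalities.
assert l2_norm(A1(h_star)) < 1e-12 and l2_norm(A2(h_star)) < 1e-12
```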
Example 5.3
Let \(f,g:H \rightarrow \mathcal{R}\) be convex functions. Consider the following convex optimization problems:
and
It is well known that \(x^{*} \in C\) solves (42) and (43) if and only if \(x^{*} \in C\) satisfies the following variational inequalities:
and
that is, \(x^{*} \in VI(C,\nabla f) \cap VI(C,\nabla g)\). Let \(H = \mathcal{R}\). Take \(C = [1,10]\). Define the mapping \(f : [1,10] \rightarrow \mathcal{R}\) by \(f(x)=\frac{(x-1)^{2}}{3}+1\). Define the mapping \(g : [1,10] \rightarrow \mathcal{R}\) by \(g(x)=\frac{x^{2}}{2}-\ln {x}-\frac{1}{2}\).
Let \(x_{1},u \in \mathcal{R}\). From (21), we find that \(\{ x_{n} \} _{n=0}^{\infty }\) and \(\{ y_{n} \} _{n=0}^{\infty }\) are generated by
where \(\alpha _{n} = \frac{1}{12n}\), \(\beta _{n} = \frac{5n-2}{12n}\), and \(\gamma _{n} = \frac{7n+1}{12n}\), so that \(\{\alpha _{n}\}, \{\beta _{n}\}, \{\gamma _{n}\} \subset [0,1]\), and \(a = 0.5 \in (0,1)\). Show that \(\{ x_{n} \} \) and \(\{ y_{n} \} \) converge strongly to 1.
Solution. Since f and g are convex and differentiable with \(f'(x)= \frac{2(x-1)}{3}\) and \(g'(x) =x-\frac{1}{x}\), the gradients ∇f and ∇g are \(\frac{2}{3}\)- and 1-inverse strongly monotone, respectively. Choosing \(\eta = \frac{1}{2}\), \(\lambda _{A} = \frac{1}{2}\), \(\lambda _{B} = 1 \in (0,2\alpha _{G})\), and \(b = \frac{1}{4}\), we obtain \(G(x) = \frac{x}{12}+\frac{11}{12}\). Choose \(\lambda = \frac{1}{4} \in (0,\eta )\). It is easy to see that the sequences \(\{ \alpha _{n} \}\), \(\{ \beta _{n} \}\), and \(\{ \gamma _{n} \}\) satisfy all conditions of Theorem 3.1 and that \(1 \in VI(C,\nabla f) \cap VI(C,\nabla g) \cap F(G)\). From Theorem 3.1 we conclude that the sequences \(\{ x_{n} \} \) and \(\{ y_{n} \} \) converge strongly to 1.
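The one-dimensional setting makes these claims easy to verify numerically: \(f'(1) = g'(1) = 0\), so 1 solves both variational inequalities on \(C = [1,10]\), and \(G(1) = \frac{1}{12} + \frac{11}{12} = 1\), so \(1 \in F(G)\). The loop below is an ordinary alternating projected-gradient pass with step \(\frac{1}{4}\); it is an illustrative stand-in, not Algorithm (21).

```python
# Example 5.3 on H = R with C = [1, 10].  The iteration below is a plain
# alternating projected-gradient sweep used only to illustrate convergence
# to 1 -- it is NOT Algorithm (21) from the paper.
df = lambda x: 2 * (x - 1) / 3      # f'(x) for f(x) = (x-1)^2/3 + 1
dg = lambda x: x - 1 / x            # g'(x) for g(x) = x^2/2 - ln(x) - 1/2
G  = lambda x: x / 12 + 11 / 12     # G as computed in the solution
proj = lambda x: min(max(x, 1.0), 10.0)   # metric projection onto [1, 10]

assert df(1.0) == 0.0 and dg(1.0) == 0.0  # both gradients vanish at x* = 1,
                                          # so 1 solves both VIs on [1, 10]
assert abs(G(1.0) - 1.0) < 1e-12          # 1 is a fixed point of G

x = 7.0                                   # arbitrary starting point in C
for _ in range(200):
    x = proj(x - 0.25 * df(x))            # projected-gradient step on f
    x = proj(x - 0.25 * dg(x))            # projected-gradient step on g
print(x)                                  # ≈ 1
```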
Remark 5.4
According to Tables 1–3 and Figs. 1–3, our Algorithm (21) converges to an element of the set \(\bigcap_{i=1}^{N} VI(C,A_{i}) \cap F(G)\) faster than Algorithm (3). Therefore, our algorithm is more efficient.
6 Conclusion
In this paper, we have proposed a new problem, called a generalized system of modified variational inclusion problems (GSMVIP). This problem can be reduced to a classical variational inclusion problem and a classical variational inequalities problem. Moreover, we study the half-space
which can be reduced to \(T_{n}\) in Algorithm (3). In order to find a common element of the solution set of the GSMVIP and the solution sets of a finite family of variational inequality problems, we have presented a new subgradient extragradient algorithm that uses \(Q_{n}\) and shown that it converges to such a common element under suitable conditions. Therefore, our algorithm improves the algorithm proposed by Censor et al. [12]. The efficiency of the proposed algorithm has also been illustrated by several numerical experiments.
Availability of data and materials
Not applicable.
References
Aubin, J.P., Ekeland, I.: Applied Nonlinear Analysis. Wiley, New York (1984)
Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Program. 63, 123–145 (1994)
Dafermos, S.C.: Traffic equilibrium and variational inequalities. Transp. Sci. 14, 42–54 (1980)
Dafermos, S.C., Mckelvey, S.C.: Partitionable variational inequalities with applications to network and economic equilibrium. J. Optim. Theory Appl. 73, 243–268 (1992)
Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekon. Mat. Met. 12, 747–756 (1976)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
Hieu, D.V., Thong, D.V.: New extragradient like algorithms for strongly pseudomonotone variational inequalities. J. Glob. Optim. 70, 385–399 (2018)
Hieu, D.V., Thong, D.V.: A new projection method for a class of variational inequalities. Appl. Anal. 98(13), 2423–2439 (2019)
Malitsky, Y.V., Semenov, V.V.: A hybrid method without extrapolation step for solving variational inequalities problems. J. Glob. Optim. 61, 193–202 (2015)
Malitsky, Y.V.: Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. 25, 502–520 (2015)
Yao, Y., Marino, G., Muglia, L.: A modified Korpelevich’s method convergent to the minimum-norm solution of a variational inequality. Optimization 63, 559–569 (2014)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Solodov, M.V., Svaiter, B.F.: A new projection method for variational inequalities problems. SIAM J. Control Optim. 37, 765–776 (1999)
Kangtunyakarn, A.: An iterative algorithm to approximate a common element of the set of common fixed points for a finite family of strict pseudo-contractions and the set of solutions for a modified system of variational inequalities. Fixed Point Theory Appl. 2013, 143 (2013)
Brezis, H.: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland Math. Stud., vol. 5. North-Holland, Amsterdam (1973)
Zhang, S.S., Lee, J.H.M., Chan, C.K.: Algorithms of common solutions for quasi variational inclusion and fixed point problems. Appl. Math. Mech. 29, 571–578 (2008)
Xu, H.K.: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 116, 659–678 (2003)
Mainge, P.E.: A hybrid extragradient viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 49, 1499–1515 (2008)
Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
Xu, H.K.: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279–291 (2004)
Mainge, P.E.: Approximation method for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 325, 469–479 (2007)
Acknowledgements
This work is supported by King Mongkut’s Institute of Technology Ladkrabang.
Funding
Not applicable.
Author information
Contributions
The two authors contributed equally and significantly in writing this article. Both authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Kheawborisut, A., Kangtunyakarn, A. Modified subgradient extragradient method for system of variational inclusion problem and finite family of variational inequalities problem in real Hilbert space. J Inequal Appl 2021, 53 (2021). https://doi.org/10.1186/s13660-021-02583-1