Abstract
In this paper, we study the concept of split variational inequality problem with multiple output sets when the cost operators are pseudomonotone and non-Lipschitz. We introduce a new Mann-type inertial projection and contraction method with self-adaptive step sizes for approximating the solution of the problem in the framework of Hilbert spaces. Under some mild conditions on the control parameters and without prior knowledge of the operator norms, we prove a strong convergence theorem for the proposed algorithm. We point out that while the cost operators are non-Lipschitz, our proposed method does not require any linesearch method but uses a more efficient self-adaptive step size technique that generates a non-monotonic sequence of step sizes. Finally, we apply our result to study certain classes of optimization problems and we present several numerical experiments to illustrate the applicability of the proposed method. Several of the existing results in the literature could be viewed as special cases of our result in this study.
1 Introduction
Let H be a real Hilbert space with an inner product \(\langle \cdot ,\cdot \rangle \) and induced norm \(||\cdot ||.\) Let C be a nonempty, closed and convex subset of H, and let \(A:H\rightarrow H\) be a mapping. The variational inequality problem (VIP) is formulated as finding a point \(p\in C\) such that
$$\begin{aligned} \langle Ap, x-p\rangle \ge 0,\quad \forall ~x\in C. \end{aligned}$$
(1.1)
We denote the solution set of the VIP (1.1) by VI(C, A). Variational inequality theory was first introduced independently by Fichera [13] and Stampacchia [34]. The VIP is a fundamental problem in optimization theory, which unifies several important concepts in applied mathematics, such as network equilibrium problems, necessary optimality conditions, systems of nonlinear equations and complementarity problems (e.g. see [4, 5, 20]). In recent years, the VIP has attracted the attention of researchers due to its numerous applications in diverse fields, such as optimization theory, economics, structural analysis, operations research, the sciences and engineering (see [10, 17, 36] and the references therein). Several authors have proposed and studied different iterative methods for approximating the solution of the VIP (see [2, 7, 16, 25, 26] and references therein).
The split inverse problem (SIP) is another area of research which has recently received great research attention (see [42] and the references therein) due to its several applications in different fields, for instance, in signal processing, phase retrieval, medical image reconstruction, data compression and intensity-modulated radiation therapy (e.g. see [8, 9, 18, 22, 29]). The SIP model is formulated as follows:
$$\begin{aligned} \text {find a point}~ x^*\in H_1~ \text {that solves IP}_1 \end{aligned}$$
(1.2)
such that
$$\begin{aligned} \text {the point}~ y^*=Tx^*\in H_2~ \text {solves IP}_2, \end{aligned}$$
(1.3)
where \(H_1\) and \(H_2\) are real Hilbert spaces, IP\(_1\) denotes an inverse problem formulated in \(H_1\) and IP\(_2\) denotes an inverse problem formulated in \(H_2,\) and \(T: H_1 \rightarrow H_2\) is a bounded linear operator.
In 1994, Censor and Elfving [9] introduced the first instance of the SIP, called the split feasibility problem (SFP), for modelling inverse problems that arise from medical image reconstruction. The SFP finds application in control theory, approximation theory, signal processing, geophysics, communications, biomedical engineering, etc. [8, 23, 31, 32]. Let C and Q be nonempty, closed and convex subsets of Hilbert spaces \(H_1\) and \(H_2,\) respectively, and let \(T:H_1\rightarrow H_2\) be a bounded linear operator. The SFP is defined as follows:
$$\begin{aligned} \text {find}~ x^*\in C~ \text {such that}~ Tx^*\in Q. \end{aligned}$$
(1.4)
Several iterative algorithms for solving the SFP (1.4) have been constructed and investigated by researchers (see, e.g. [8, 23, 24] and the references therein).
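One of the simplest such schemes is the classical CQ method, which iterates \(x_{n+1}=P_C\big (x_n-\gamma T^*(I-P_Q)Tx_n\big )\) with \(0<\gamma <2/\Vert T\Vert ^2.\) The following one-dimensional toy sketch (our own illustrative instance, not an experiment from this paper) runs it for \(C=[0,1],\) \(Q=[2,3]\) and \(Tx=3x,\) whose solution set is \([2/3,1]\):

```python
# Toy CQ iteration for the SFP: find x in C with Tx in Q.
# Here C = [0,1], Q = [2,3], Tx = 3x, so the solutions are [2/3, 1].
# gamma must satisfy 0 < gamma < 2/||T||^2 = 2/9; we take 0.1.

def proj(x, lo, hi):
    return min(max(x, lo), hi)

def cq(x, iters=50, gamma=0.1):
    for _ in range(iters):
        residual = 3 * x - proj(3 * x, 2.0, 3.0)        # (I - P_Q)Tx
        x = proj(x - gamma * 3 * residual, 0.0, 1.0)    # P_C(x - gamma T* (...))
    return x

x_star = cq(0.0)   # converges to the boundary solution 2/3
```

Starting from \(x_0=0,\) each step reduces the distance to \(2/3\) by a factor of \(1-9\gamma =0.1,\) so the iterates converge linearly in this toy case.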
An important generalization of the SFP is the split variational inequality problem (SVIP) introduced by Censor et al. [10]. The SVIP is formulated as follows:
$$\begin{aligned} \text {find}~ x^*\in C~ \text {such that}~ \langle A_1x^*, x-x^*\rangle \ge 0,\quad \forall ~x\in C, \end{aligned}$$
(1.5)
such that
$$\begin{aligned} y^*=Tx^*\in Q~ \text {solves}~ \langle A_2y^*, y-y^*\rangle \ge 0,\quad \forall ~y\in Q, \end{aligned}$$
(1.6)
where \(A_1:H_1\rightarrow H_1, A_2:H_2\rightarrow H_2\) are single-valued operators. Several authors have studied and proposed different iterative methods for approximating the solution of SVIP (see [19, 21, 37] and the references therein).
In 2020, Reich and Tuyen [28] introduced and studied the split feasibility problem with multiple output sets (SFPMOS) in Hilbert spaces, which is formulated as follows: Find a point \(u^\dagger \) such that
$$\begin{aligned} u^\dagger \in C~~\text {and}~~T_iu^\dagger \in Q_i,\quad i=1,2,\ldots ,N, \end{aligned}$$
(1.7)
where \(T_i:H\rightarrow H_i, ~ i=1,2,...,N\), are bounded linear operators, C and \(Q_i\) are nonempty, closed and convex subsets of Hilbert spaces H and \( H_i,i=1,2,\ldots ,N,\) respectively.
Moreover, Reich and Tuyen [30] proposed the following two algorithms for approximating the solution of SFPMOS (1.7) in Hilbert spaces:
$$\begin{aligned} x_{n+1} = P_C\Big (x_n-\gamma _n\sum _{i=1}^{N}T_i^*(I-P_{Q_i})T_ix_n\Big ),\quad n\ge 1, \end{aligned}$$
(1.8)
and
$$\begin{aligned} x_{n+1} = P_C\Big (\alpha _nf(x_n)+(1-\alpha _n)\Big (x_n-\gamma _n\sum _{i=1}^{N}T_i^*(I-P_{Q_i})T_ix_n\Big )\Big ),\quad n\ge 1, \end{aligned}$$
(1.9)
where \(f:C\rightarrow C\) is a strict contraction, \(\{\gamma _n\}\subset (0,\infty )\) and \(\{\alpha _n\}\subset (0,1).\) The authors obtained weak and strong convergence results for Algorithm (1.8) and Algorithm (1.9), respectively.
In this paper, we study the split variational inequality problem with multiple output sets. Let \(H, H_i,i=1,2,...,N,\) be real Hilbert spaces and let \(C, C_i\) be nonempty, closed and convex subsets of real Hilbert spaces H and \( H_i,i=1,2,...,N,\) respectively. Let \(T_i:H\rightarrow H_i,i=1,2,...,N,\) be bounded linear operators and let \(A:H\rightarrow H, A_i:H_i\rightarrow H_i, i=1,2,...,N,\) be single-valued operators. The split variational inequality problem with multiple output sets (SVIPMOS) is formulated as finding a point \(x^*\in C\) such that
$$\begin{aligned} x^*\in VI(C,A)~~\text {and}~~T_ix^*\in VI(C_i,A_i),\quad i=1,2,\ldots ,N. \end{aligned}$$
(1.10)
It is clear that the SVIPMOS (1.10) generalizes the SFPMOS (1.7).
In the last couple of years, developing iterative methods with a high rate of convergence for solving optimization problems has become of great interest to researchers. One of the approaches employed by researchers to achieve this objective is the inertial technique. This technique originates from an implicit time discretization method (the heavy ball method) of second-order dynamical systems. In recent years, several authors have constructed highly efficient iterative methods by employing the inertial technique, see, e.g., [1, 3, 11, 14, 38, 40].
In this paper, we propose and analyze a new Mann-type inertial projection and contraction algorithm with self-adaptive step sizes for approximating the solution of the SVIPMOS (1.10) when the cost operators are pseudomonotone and non-Lipschitz. While the cost operators are non-Lipschitz, our proposed method does not involve any linesearch method but uses a more efficient self-adaptive step size technique which generates a non-monotonic sequence of step sizes. Furthermore, we prove that the sequence generated by our proposed method converges strongly to the minimum-norm solution of the problem in Hilbert spaces. Finally, we apply our result to study certain classes of optimization problems and we present several numerical experiments to demonstrate the applicability of our proposed algorithm.
The outline of the paper is as follows: In Sect. 2, we give some definitions and results required for the convergence analysis. In Sect. 3, we present the proposed algorithm and in Sect. 4 we analyze the convergence of our proposed method. In Sect. 5 we apply our result to study certain classes of optimization problems, and in Sect. 6 we carry out several numerical experiments with graphical illustrations. Finally, we give some concluding remarks in Sect. 7.
2 Preliminaries
Definition 2.1
[2, 16] An operator \(A:H\rightarrow H\) is said to be
(i) \(\alpha \)-strongly monotone, if there exists \(\alpha >0\) such that
$$\begin{aligned} \langle x-y, Ax-Ay\rangle \ge \alpha \Vert x-y\Vert ^2,~~ \forall ~x,y \in H; \end{aligned}$$
(ii) monotone, if
$$\begin{aligned} \langle x-y, Ax-Ay \rangle \ge 0,\quad \forall ~ x,y\in H; \end{aligned}$$
(iii) pseudomonotone, if
$$\begin{aligned} \langle Ay, x-y \rangle \ge 0 \implies ~\langle Ax,x-y \rangle \ge 0,~\forall x,y \in H; \end{aligned}$$
(iv) L-Lipschitz continuous, if there exists a constant \(L>0\) such that
$$\begin{aligned} ||Ax-Ay||\le L||x-y||,\quad \forall ~ x,y\in H; \end{aligned}$$
(v) uniformly continuous, if for every \(\epsilon >0,\) there exists \(\delta =\delta (\epsilon )>0,\) such that
$$\begin{aligned} \Vert Ax-Ay\Vert<\epsilon \quad \text {whenever}\quad \Vert x-y\Vert <\delta ,\quad \forall x,y\in H. \end{aligned}$$
Remark 2.2
We note that the following implications hold: \((i)\implies (ii)\implies (iii)\) but the converses are not generally true. We also point out that uniform continuity is a weaker notion than Lipschitz continuity.
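For instance (a standard one-dimensional example, not taken from this paper, separating (ii) and (iii)): any strictly positive, decreasing scalar function is pseudomonotone but not monotone.

```latex
% A(x) = 1/(1+x^2) on \mathbb{R} is pseudomonotone but not monotone.
\[
  A(x)=\frac{1}{1+x^{2}}>0:\qquad
  \langle A(y),\,x-y\rangle \ge 0 \iff x\ge y
  \;\Longrightarrow\; \langle A(x),\,x-y\rangle = A(x)(x-y)\ge 0,
\]
\[
  \text{while for } x=1,\ y=0:\quad
  \langle x-y,\,A(x)-A(y)\rangle = (1-0)\Bigl(\tfrac12-1\Bigr) = -\tfrac12 < 0.
\]
```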
It is well known that if D is a convex subset of H, then \(A:D\rightarrow H\) is uniformly continuous if and only if, for every \(\epsilon >0,\) there exists a constant \(K<+\infty \) such that
$$\begin{aligned} \Vert Ax-Ay\Vert \le K\Vert x-y\Vert +\epsilon ,\quad \forall ~x,y\in D. \end{aligned}$$
(2.1)
Lemma 2.3
[27, 39] Let H be a real Hilbert space. Then the following results hold for all \(x,y\in H\) and \(\delta \in (0, 1):\)
(i) \(||x + y||^2 \le ||x||^2 + 2\langle y, x + y \rangle ;\)
(ii) \(||x + y||^2 = ||x||^2 + 2\langle x, y \rangle + ||y||^2;\)
(iii) \(||\delta x + (1-\delta ) y||^2 = \delta ||x||^2 + (1-\delta )||y||^2 -\delta (1-\delta )||x-y||^2.\)
Lemma 2.4
([33]) Let \(\{a_n\}\) be a sequence of nonnegative real numbers, \(\{\alpha _n\}\) be a sequence in (0, 1) with \(\sum _{n=1}^\infty \alpha _n = \infty \) and \(\{b_n\}\) be a sequence of real numbers. Assume that
$$\begin{aligned} a_{n+1}\le (1-\alpha _n)a_n+\alpha _nb_n,\quad \forall ~n\ge 1. \end{aligned}$$
If \(\limsup \nolimits _{k\rightarrow \infty }b_{n_k}\le 0\) for every subsequence \(\{a_{n_k}\}\) of \(\{a_n\}\) satisfying \(\liminf \nolimits _{k\rightarrow \infty }(a_{n_{k+1}} - a_{n_k})\ge 0,\) then \(\lim \nolimits _{n\rightarrow \infty }a_n =0.\)
Lemma 2.5
[35] Suppose \(\{\lambda _n\}\) and \(\{\phi _n\}\) are two nonnegative real sequences such that
$$\begin{aligned} \lambda _{n+1}\le \lambda _n+\phi _n,\quad \forall ~n\ge 1. \end{aligned}$$
If \(\sum _{n=1}^{\infty }\phi _n<\infty ,\) then \(\lim \nolimits _{n\rightarrow \infty }\lambda _n\) exists.
Lemma 2.6
[12] Consider the VIP (1.1) with C being a nonempty, closed, convex subset of a real Hilbert space H and \(A:C\rightarrow H\) being pseudomonotone and continuous. Then p is a solution of VIP (1.1) if and only if
$$\begin{aligned} \langle Ax, x-p\rangle \ge 0,\quad \forall ~x\in C. \end{aligned}$$
3 Main Results
In this section, we present our proposed algorithm for solving the SVIPMOS (1.10). We analyze the convergence of the proposed method under the following conditions:
Let \(C, C_i\) be nonempty, closed and convex subsets of real Hilbert spaces \(H, H_i,i=1,2,...,N,\) respectively, and let \(T_i:H\rightarrow H_i,i=1,2,...,N,\) be bounded linear operators with adjoints \(T^*_i.\) Let \(A:H\rightarrow H, A_i:H_i\rightarrow H_i, i=1,2,...,N,\) be uniformly continuous pseudomonotone operators satisfying the following property:
$$\begin{aligned} \text {whenever}~ \{x_n\}\subset C_i~ \text {and}~ x_n\rightharpoonup z,~ \text {one has}~ \Vert A_iz\Vert \le \liminf \limits _{n\rightarrow \infty }\Vert A_ix_n\Vert . \end{aligned}$$
(3.1)
Moreover, we assume that the solution set \(\Omega :=\bigcap _{i=0}^{N}T_i^{-1}\big (VI(C_i,A_i)\big )\ne \emptyset \) (where \(C_0=C,~T_0=I^H,~A_0=A\)) and that the control parameters satisfy the following conditions:
Assumption A
(A1) \(\{\alpha _n\} \subset (0,1), \lim \nolimits _{n\rightarrow \infty }\alpha _n=0, \sum _{n=1}^\infty \alpha _n = +\infty , \lim \nolimits _{n\rightarrow \infty }\frac{\epsilon _n}{\alpha _n}=0,\{\xi _n\}\subset [a,b]\subset (0,1-\alpha _n),\theta >0;\)
(A2) \(0<c_i< c_i'<1,0<\phi _i< \phi _i'<1,0<k_i<k_i'<2, \{c_{n,i}\},\{\phi _{n,i}\},\{k_{n,i}\}\subset \mathbb {R}_+, \lim \nolimits _{n\rightarrow \infty }c_{n,i}=\lim \nolimits _{n\rightarrow \infty }\phi _{n,i}=\lim \nolimits _{n\rightarrow \infty }k_{n,i}=0, \lambda _{1,i}>0,~\forall ~i=0,1,2,\ldots ,N;\)
(A3) \(\{\rho _{n,i}\}\subset \mathbb {R}_+,\sum _{n=1}^{\infty }\rho _{n,i}<+\infty ,0<a_i\le \delta _{n,i}\le b_i<1, \sum _{i=0}^{N}\delta _{n,i}=1\) for each \(n\ge 1.\)
Now, the algorithm is presented as follows:
Algorithm 3.1.
Step 0. Select initial points \(x_0, x_1 \in H.\) Let \(C_0=C,~ T_0=I^H,~ A_0=A\) and set \(n=1.\)
Step 1. Given the \((n-1)\)th and nth iterates, choose \(\theta _n\) such that \(0\le \theta _n\le \hat{\theta }_n\) with \(\hat{\theta }_n\) defined by
\(\hat{\theta }_n = {\left\{ \begin{array}{ll} \min \Big \{\theta ,~ \frac{\epsilon _n}{\Vert x_n - x_{n-1}\Vert }\Big \}, \quad \text {if}~ x_n \ne x_{n-1},\\ \theta , \quad \text {otherwise.} \end{array}\right. }\) (3.2)
Step 2. Compute
\(w_n = x_n + \theta _n(x_n - x_{n-1}).\)
Step 3. Compute
\(y_{n,i} = P_{C_i}(T_iw_n-\lambda _{n,i}A_iT_iw_n),\)
\(\lambda _{n+1,i} = {\left\{ \begin{array}{ll} \min \{\frac{(c_{n,i}+c_i)\Vert T_iw_n - y_{n,i}\Vert }{\Vert A_iT_iw_n - A_iy_{n,i}\Vert },~~ \lambda _{n,i}+\rho _{n,i}\},&{}\text {if}\quad A_iT_iw_n - A_iy_{n,i}\ne 0,\\ \lambda _{n,i}+\rho _{n,i},&{} \text {otherwise,} \end{array}\right. }\)
\(z_{n,i}=T_iw_n-\beta _{n,i}r_{n,i},\)
where
\(r_{n,i}=T_iw_n-y_{n,i}-\lambda _{n,i}(A_iT_iw_n - A_iy_{n,i})\)
and
\(\beta _{n,i}={\left\{ \begin{array}{ll} (k_i+k_{n,i})\frac{\langle T_iw_n - y_{n,i},r_{n,i}\rangle }{\Vert r_{n,i}\Vert ^2},&{}\text {if}\quad r_{n,i}\ne 0,\\ 0,&{}\text {otherwise.} \end{array}\right. }\)
Step 4. Compute
\(b_n = \sum _{i=0}^{N}\delta _{n,i}\big (w_n+\eta _{n,i}T_i^*(z_{n,i}-T_iw_n)\big ),\)
where
\(\eta _{n,i}={\left\{ \begin{array}{ll} \frac{(\phi _{n,i}+\phi _i)\Vert T_iw_n-z_{n,i}\Vert ^2}{\Vert T_i^*(T_iw_n-z_{n,i})\Vert ^2},&{}\text {if}\quad \Vert T_i^*(T_iw_n-z_{n,i})\Vert \ne 0,\\ 0,&{}\text {otherwise.} \end{array}\right. }\) (3.3)
Step 5. Compute
\(x_{n+1}= (1-\alpha _n-\xi _n)w_n+\xi _nb_n.\)
Set \(n:= n +1 \) and return to Step 1.
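To make the steps concrete, the following self-contained sketch runs the scheme in the simplest setting \(N=0\) (so \(T_0=I^H\) and the problem reduces to a single VIP) on a box \(C\subset \mathbb {R}^2\) with \(A=I.\) All parameter values below are illustrative assumptions within the admissible ranges, not the tuned choices of Sect. 6.

```python
# Sketch of Algorithm 3.1 for N = 0 (T_0 = I, a single VIP).
# theta, c, k, phi below are illustrative assumptions, not the paper's values.

def proj_box(x, lo, hi):
    return [min(max(t, lo), hi) for t in x]

def sub(u, v):     return [a - b for a, b in zip(u, v)]
def axpy(a, u, v): return [vi + a * ui for ui, vi in zip(u, v)]
def dot(u, v):     return sum(a * b for a, b in zip(u, v))
def norm(u):       return dot(u, u) ** 0.5

def algorithm31(A, lo, hi, x0, x1, iters=200):
    lam, theta, c, k, phi = 1.0, 0.5, 0.9, 1.5, 0.98
    x_prev, x = x0[:], x1[:]
    for n in range(1, iters + 1):
        alpha = 1.0 / (2 * n + 3)
        eps, xi = alpha ** 3, (1 - alpha) / 2
        # Steps 1-2: inertial extrapolation w_n.
        d = norm(sub(x, x_prev))
        th = min(theta, eps / d) if d > 0 else theta
        w = axpy(th, sub(x, x_prev), x)
        # Step 3: projection step, contraction direction r_n, point z_n.
        y = proj_box(axpy(-lam, A(w), w), lo, hi)
        r = sub(sub(w, y), [lam * t for t in sub(A(w), A(y))])
        nr2 = dot(r, r)
        beta = k * dot(sub(w, y), r) / nr2 if nr2 > 0 else 0.0
        z = axpy(-beta, r, w)
        # Self-adaptive step size: non-monotone, may grow by rho_n = 10/n^2.
        dA = norm(sub(A(w), A(y)))
        rho = 10.0 / n ** 2
        lam = min(c * norm(sub(w, y)) / dA, lam + rho) if dA > 0 else lam + rho
        # Steps 4-5: with T_0 = I, b_n = w_n + eta*(z_n - w_n) and eta ~ phi.
        eta = phi if dot(sub(w, z), sub(w, z)) > 0 else 0.0
        b = axpy(eta, sub(z, w), w)
        x_prev, x = x, [(1 - alpha - xi) * wi + xi * bi
                        for wi, bi in zip(w, b)]
    return x

# A = I on C = [1,2]^2: VI(C, A) = {P_C(0)} = {(1, 1)}, which is also
# the minimum-norm solution that Theorem 4.7 guarantees.
p = algorithm31(lambda v: v[:], 1.0, 2.0, [2.0, 2.0], [1.8, 1.5])
```

In this toy instance the iterates approach \((1,1)\); the anchoring term \((1-\alpha _n-\xi _n)\) is what steers the limit to the minimum-norm solution.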
Remark 3.2
By condition (A1), it follows from (3.2) that \(\lim \nolimits _{n\rightarrow \infty }\frac{\theta _n}{\alpha _n}\Vert x_n - x_{n-1}\Vert =0,\) since \(\theta _n\Vert x_n - x_{n-1}\Vert \le \epsilon _n\) for all \(n\ge 1\) and \(\lim \nolimits _{n\rightarrow \infty }\frac{\epsilon _n}{\alpha _n}=0.\)
Remark 3.3
Observe that although the cost operators \(A_i,i=0,1,2,\ldots ,N,\) are non-Lipschitz, our method does not require any linesearch technique, which could be computationally expensive to implement. Rather, we employ self-adaptive step sizes that only require simple computations of known information per iteration.
4 Convergence Analysis
First, we prove some lemmas needed for our strong convergence theorem.
Lemma 4.1
Suppose \(\{\lambda _{n,i}\}\) is the sequence generated by Algorithm 3.1 such that Assumption A holds. Then \(\{\lambda _{n,i}\}\) is well defined for each \(i=0,1,2,\ldots ,N\) and \(\lim \nolimits _{n\rightarrow \infty }\lambda _{n,i}=\lambda _{i}\in \big [\min \{\frac{c_i}{M_i},\lambda _{1,i}\}, \lambda _{1,i}+\Phi _i\big ],\) where \(\Phi _i=\sum _{n=1}^{\infty }\rho _{n,i}.\)
Proof
Since \(A_i\) is uniformly continuous for each \(i=0,1,2,\ldots ,N,\) by (2.1) we have that for any given \(\epsilon _i>0,\) there exists \(K_i<+\infty \) such that \(\Vert A_iT_iw_n-A_iy_{n,i}\Vert \le K_i\Vert T_iw_n-y_{n,i}\Vert +\epsilon _i.\) Hence, for the case \(A_iT_iw_n-A_iy_{n,i}\ne 0\) for all \(n\ge 1\) we have
$$\begin{aligned} \frac{(c_{n,i}+c_i)\Vert T_iw_n - y_{n,i}\Vert }{\Vert A_iT_iw_n - A_iy_{n,i}\Vert }\ge \frac{(c_{n,i}+c_i)\Vert T_iw_n - y_{n,i}\Vert }{K_i\Vert T_iw_n-y_{n,i}\Vert +\epsilon _i}=\frac{(c_{n,i}+c_i)\Vert T_iw_n - y_{n,i}\Vert }{M_i\Vert T_iw_n-y_{n,i}\Vert }\ge \frac{c_i}{M_i}, \end{aligned}$$
where \(\epsilon _i =\mu _i\Vert T_iw_n-y_{n,i}\Vert \) for some \(\mu _i\in (0,1)\) and \(M_i=K_i+\mu _i.\) Thus, by the definition of \(\lambda _{n+1,i},\) the sequence \(\{\lambda _{n,i}\}\) is bounded below by \(\min \{\frac{c_i}{M_i},\lambda _{1,i}\}\) and bounded above by \(\lambda _{1,i} + \Phi _i.\) By Lemma 2.5, \(\lim \nolimits _{n\rightarrow \infty }\lambda _{n,i}\) exists and we denote it by \(\lambda _i.\) It is clear that \(\lambda _i\in \big [\min \{\frac{c_i}{M_i},\lambda _{1,i}\},\lambda _{1,i}+\Phi _i\big ]\) for each \(i=0,1,2,\ldots ,N.\) \(\square \)
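The two-sided bound of Lemma 4.1 is easy to observe numerically. In the hypothetical scalar setting \(A(x)=2x\) (so \(\Vert Au-Av\Vert =2|u-v|\) and \(M=2\)), the step-size update of Algorithm 3.1 reduces, taking \(c_{n,i}=0,\) to \(\lambda _{n+1}=\min \{c/2,~\lambda _n+\rho _n\}.\) The sketch below (an illustration under these assumptions, not the paper's experiment) checks that the sequence stays in the interval of the lemma:

```python
# Step-size recursion of Algorithm 3.1 for the scalar operator A(x) = 2x,
# i.e. M = 2, with c = 0.97 and summable perturbations rho_n = 10/n^2.
# Lemma 4.1 predicts lambda_n in [min(c/M, lambda_1), lambda_1 + Phi].
c, M = 0.97, 2.0
lam = 5.0                                   # lambda_1 (illustrative)
Phi = sum(10.0 / n ** 2 for n in range(1, 10001))
lams = [lam]
for n in range(1, 200):
    lam = min(c / M, lam + 10.0 / n ** 2)   # may also increase by rho_n
    lams.append(lam)

lower = min(c / M, lams[0])
upper = lams[0] + Phi
```

Here the sequence drops from \(\lambda _1=5\) to the value \(c/M=0.485\) and then stabilizes, consistent with the limit \(\lambda _i\) asserted in the lemma.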
Lemma 4.2
If \(\Vert T_i^*(T_iw_n-z_{n,i})\Vert \ne 0,\) then the sequence \(\{\eta _{n,i}\}\) defined by (3.3) has a positive lower bound for each \(i=0,1,2,\ldots ,N.\)
Proof
If \(\Vert T_i^*(T_iw_n-z_{n,i})\Vert \ne 0,\) we have for each \(i=0,1,2,\ldots ,N\)
$$\begin{aligned} \eta _{n,i}=\frac{(\phi _{n,i}+\phi _i)\Vert T_iw_n-z_{n,i}\Vert ^2}{\Vert T_i^*(T_iw_n-z_{n,i})\Vert ^2}\ge \frac{(\phi _{n,i}+\phi _i)\Vert T_iw_n-z_{n,i}\Vert ^2}{\Vert T_i\Vert ^2\Vert T_iw_n-z_{n,i}\Vert ^2}=\frac{\phi _{n,i}+\phi _i}{\Vert T_i\Vert ^2}. \end{aligned}$$
Since \(T_i\) is a bounded linear operator and \(\phi _{n,i}\ge 0\) for each \(i=0,1,2,\ldots ,N,\) we have
$$\begin{aligned} \eta _{n,i}\ge \frac{\phi _{n,i}+\phi _i}{\Vert T_i\Vert ^2}\ge \frac{\phi _i}{\Vert T_i\Vert ^2}>0, \end{aligned}$$
which implies that \(\frac{\phi _i}{\Vert T_i\Vert ^2}\) is a lower bound of \(\{\eta _{n,i}\}\) for each \(i=0,1,2,\ldots ,N\).
\(\square \)
Lemma 4.3
Suppose Assumption A of Algorithm 3.1 holds. Then, there exists a positive integer N such that
$$\begin{aligned} 0<k_i+k_{n,i}<2,\quad 0<\phi _{n,i}+\phi _i<1,\quad 1-\frac{\lambda _{n,i}}{\lambda _{n+1,i}}(c_{n,i}+c_i)>0,\quad \forall ~n\ge N,~ i=0,1,2,\ldots ,N. \end{aligned}$$
Proof
Since \(0<k_i<k_i'<2\) and \(\lim \nolimits _{n\rightarrow \infty }k_{n,i}=0\) for each \(i=0,1,2,\ldots ,N,\) there exists a positive integer \(N_{1,i}\) such that
$$\begin{aligned} 0<k_i+k_{n,i}<2,\quad \forall ~n\ge N_{1,i}. \end{aligned}$$
By a similar argument, there exists a positive integer \(N_{2,i}\) for each \(i=0,1,2,\ldots ,N,\) such that
$$\begin{aligned} 0<\phi _{n,i}+\phi _i<1,\quad \forall ~n\ge N_{2,i}. \end{aligned}$$
In addition, since \(0<c_i<c_i'< 1,\lim \nolimits _{n\rightarrow \infty }c_{n,i}=0\) and \(\lim \nolimits _{n\rightarrow \infty }\lambda _{n,i}=\lambda _i\) for each \(i=0,1,2,\ldots ,N,\) we have
$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Big (1-\frac{\lambda _{n,i}}{\lambda _{n+1,i}}(c_{n,i}+c_i)\Big )=1-c_i>0. \end{aligned}$$
Therefore, for each \(i=0,1,2,\ldots ,N,\) there exists a positive integer \(N_{3,i}\) such that
$$\begin{aligned} 1-\frac{\lambda _{n,i}}{\lambda _{n+1,i}}(c_{n,i}+c_i)>0,\quad \forall ~n\ge N_{3,i}. \end{aligned}$$
Now, by setting \(N=\max \{N_{1,i},~N_{2,i},~N_{3,i}:i=0,1,2,\ldots ,N\},\) the required result follows. \(\square \)
Lemma 4.4
Let \(\{x_n\}\) be a sequence generated by Algorithm 3.1 such that Assumption A holds. Then \(\{x_n\}\) is bounded.
Proof
Let \(p\in \Omega .\) This implies that \(T_ip\in VI(C_i,A_i),~ i=0,1,2,\ldots ,N.\) Then, by applying the triangle inequality, it follows from the definition of \(w_n\) that
By Remark 3.2, there exists \(M_1 > 0\) such that
Thus, it follows from (4.1) that
Since \(y_{n,i} = P_{C_i}(T_iw_n-\lambda _{n,i}A_iT_iw_n)\) and \(T_ip\in VI(C_i,A_i),~i=0,1,2\ldots , N,\) by the property of the projection map it follows that
Moreover, since \(y_{n,i}\in C_i,~i=0,1,2,\ldots ,N,\) we have
which implies, by the pseudomonotonicity of \(A_i,\) that \(\langle A_iy_{n,i}, y_{n,i}-T_ip\rangle \ge 0.\) Since \(\lambda _{n,i}>0,~i=0,1,2,\ldots ,N,\) we have
From (4.3) and (4.4) we obtain
Now, applying the definition of \(r_{n,i}\) and (4.5) we get
Since \(z_{n,i}=T_iw_n-\beta _{n,i}r_{n,i},\) it follows that
By Lemma 4.3, there exists a positive integer N such that \(0<k_i+k_{n,i}<2~\forall n\ge N.\) From the definition of \(\beta _{n,i},\) if \(r_{n,i}\ne 0,~ i=0,1,2,\ldots ,N,\) we have
Now, by applying Lemma 2.3, (4.6), (4.7) and (4.8) we get
Observe that if \(r_{n,i}=0,~i=0,1,2,\ldots ,N,\) (4.9) still holds.
Next, since the function \(\Vert \cdot \Vert ^2\) is convex, we have
By Lemma 4.3, there exists a positive integer N such that \(0<\phi _{n,i}+\phi _i<1,~~i=0,1,2,\ldots ,N\) for all \(n\ge N.\) Now, from (4.10) and by applying Lemma 2.3 and (4.9) we have
If \(\Vert T_i^*(z_{n,i}-T_iw_n)\Vert \ne 0,\) then using the definition of \(\eta _{n,i}\) we have
Thus, by applying (4.12) in (4.11) and substituting in (4.10) we have
Observe that if \(\Vert T_i^*(z_{n,i}-T_iw_n)\Vert =0,\) (4.13) still holds from (4.11).
By the definition of \(x_{n+1},\) we have
Applying Lemma 2.3(ii) together with (4.13) we have
which implies that
Now, applying (4.2) and (4.15) in (4.14), we have for all \(n\ge N\)
which implies that \(\{x_n\}\) is bounded. Hence, \(\{w_n\},\{y_{n,i}\},\{z_{n,i}\},\{r_{n,i}\}\) and \(\{b_n\}\) are all bounded. \(\square \)
Lemma 4.5
Suppose \(\{w_n\}\) and \(\{b_n\}\) are two sequences generated by Algorithm 3.1 with subsequences \(\{w_{n_k}\}\) and \(\{b_{n_k}\},\) respectively, such that \(\lim \nolimits _{k\rightarrow \infty }\Vert w_{n_k}-b_{n_k}\Vert =0.\) If \(w_{n_k}\rightharpoonup z\in H,\) then \(z\in \Omega .\)
Proof
From (4.13), we have
From this, we obtain
Since by the hypothesis of the lemma \(\lim \nolimits _{k\rightarrow \infty }\Vert w_{n_k}-b_{n_k}\Vert =0,\) it follows from (4.17) that
which implies that
By the definition of \(\eta _{n,i},\) we have
which implies that
Since \(\{\Vert T_i^*(T_iw_{n_k}-z_{{n_k},i})\Vert \}\) is bounded, it follows that
Thus, we have
By the definition of \(\lambda _{n+1,i},\) it follows that
From Lemma 4.1 we know that \(\lim \nolimits _{k\rightarrow \infty }\lambda _{n_k,i}=\lambda _i,~~i=0,1,2,\ldots ,N\) and by Lemma 4.3, there exists a positive integer N such that \(1-\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)>0,~~ \forall n\ge N,~ i=0,1,2,\ldots ,N.\) If \(r_{n,i}\ne 0,\) then by applying the continuity of \(A_i,\) the definitions of \(\beta _{n,i}, r_{n,i}\) and \(z_{n,i}~~ i=0,1,2,\ldots ,N,\) from (4.20) we have
Thus, we have
Since \(\lim \nolimits _{k\rightarrow \infty }c_{n_k,i}=\lim \nolimits _{k\rightarrow \infty }k_{n_k,i}=0\) and, by Lemma 4.1, \(\lim \nolimits _{k\rightarrow \infty }\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}=1,~~i=0,1,2,\ldots ,N,\) then from (4.22) and by applying (4.18) we have
If \(r_{n,i}=0,\) from (4.20) we know that (4.23) still holds.
Since \(y_{n,i} = P_{C_i}(T_iw_n-\lambda _{n,i}A_iT_iw_n),\) by the property of the projection map we have
which implies that
From the last inequality, we get
By applying (4.23) and the fact that \(\lim \nolimits _{k\rightarrow \infty }\lambda _{n_k,i}=\lambda _i>0,\) from (4.24) we obtain
Observe that
By the continuity of \(A_i,\) from (4.23) we have
By applying (4.23) and (4.27), we obtain from (4.25) and (4.26) that
Next, let \(\{\Theta _{k,i}\}\) be a decreasing sequence of positive numbers such that \(\Theta _{k,i}\rightarrow 0\) as \(k\rightarrow \infty ,~i=0,1,2,\ldots ,N.\) For each k, let \(N_{k}\) denote the smallest positive integer such that
where the existence of \(N_{k}\) follows from (4.28). Since \(\{\Theta _{k,i}\}\) is decreasing, \(\{N_{k}\}\) is increasing. Furthermore, since \(\{y_{N_{k},i}\}\subset C_i\) for each k, we can suppose \(A_iy_{N_{k},i}\ne 0\) (otherwise, \(y_{N_{k},i}\in VI(C_i,A_i),~i=0,1,2,\ldots ,N\)) and let
$$\begin{aligned} u_{N_{k},i}=\frac{A_iy_{N_{k},i}}{\Vert A_iy_{N_{k},i}\Vert ^2}. \end{aligned}$$
Then, \(\langle A_iy_{N_{k},i}, u_{N_{k},i} \rangle =1\) for each \(k,~i=0,1,2,\ldots ,N.\) From (4.29), we obtain
By the pseudomonotonicity of \(A_i\), we obtain
which is equivalent to
To complete the proof, we need to show that \(\lim \nolimits _{k\rightarrow \infty }\Theta _{k,i}u_{N_{k},i}=0.\) Since \(w_{n_k}\rightharpoonup z\) and \(T_i\) is a bounded linear operator for each \(i=0,1,2,\ldots ,N,\) we have \(T_iw_{n_k}\rightharpoonup T_iz,~~\forall i=0,1,2,\ldots ,N.\) Thus, from (4.23) we get \(y_{n_k,i}\rightharpoonup T_iz,~~\forall ~ i=0,1,2,\ldots ,N.\) Since \(\{y_{n_k,i}\}\subset C_i,~i=0,1,2,\ldots ,N,\) we have \(T_iz\in C_i.\) If \(A_iT_iz=0,~\forall ~ i=0,1,2,\ldots ,N,\) then \(T_iz\in VI(C_i,A_i)~\forall ~ i=0,1,2,\ldots ,N,\) which implies that \(z\in \Omega .\) Otherwise, we suppose \(A_iT_iz\ne 0,~\forall ~ i=0,1,2,\ldots ,N.\) Since \(A_i\) satisfies condition (3.1), we have for all \(i=0,1,2,\ldots ,N\)
$$\begin{aligned} 0<\Vert A_iT_iz\Vert \le \liminf \limits _{k\rightarrow \infty }\Vert A_iy_{n_k,i}\Vert . \end{aligned}$$
Using the facts that \(\{y_{N_{k},i}\} \subset \{y_{n_{k},i}\}\) and \(\Theta _{k,i}\rightarrow 0\) as \(k\rightarrow \infty ,~i=0,1,2\ldots ,N,\) we have
which implies that \(\limsup \nolimits _{k\rightarrow \infty }~\Theta _{k,i}u_{N_{k},i}=0.\) Applying the facts that \(A_i\) is continuous, \(\{y_{N_{k},i}\}\) and \(\{u_{N_{k},i}\}\) are bounded and \(\lim \nolimits _{k\rightarrow \infty }~\Theta _{k,i}u_{N_{k},i}=0,\) from (4.30) we obtain
From the last inequality, we obtain
By Lemma 2.6, we have
which implies that
Thus, we have \(z \in \bigcap _{i=0}^N T_i^{-1}\big (VI(C_i,A_i)\big ),\) which implies that \(z\in \Omega \) as required. \(\square \)
Lemma 4.6
Let \(\{x_n\}\) be a sequence generated by Algorithm 3.1 under Assumption A. Then, the following inequality holds for all \(p\in \Omega :\)
Proof
Let \(p\in \Omega .\) Then, by applying Lemma 2.3 together with the Cauchy–Schwarz inequality we have
where \(M_2:= \sup _{n\in \mathbb {N}}\{\Vert x_n - p\Vert , \theta _n\Vert x_n - x_{n-1}\Vert \}>0.\)
Next, by the definition of \(x_{n+1},\) (4.13), (4.31) and applying Lemma 2.3 we have
which is the required inequality. \(\square \)
Theorem 4.7
Let \(\{x_n\}\) be a sequence generated by Algorithm 3.1 such that Assumption A holds. Then, \(\{x_n\}\) converges strongly to \(\hat{x}\in \Omega ,\) where \(\hat{x}= \min \{\Vert p\Vert :p\in \Omega \}.\)
Proof
Let \(\hat{x}= \min \{\Vert p\Vert :p\in \Omega \},\) that is, \(\hat{x}=P_\Omega (0).\) Then, from Lemma 4.6 we obtain
where \(d_n=3M_2(1-\alpha _n)^2\frac{\theta _n}{\alpha _n}\Vert x_n - x_{n-1}\Vert +2\langle \hat{x},\hat{x}-x_{n+1} \rangle .\)
Now, we claim that the sequence \(\{\Vert x_n-\hat{x}\Vert \}\) converges to zero. In view of Lemma 2.4, it suffices to show that \(\limsup \nolimits _{k\rightarrow \infty }d_{n_k}\le 0\) for every subsequence \(\{\Vert x_{n_k}-\hat{x}\Vert \}\) of \(\{\Vert x_n-\hat{x}\Vert \}\) satisfying
Suppose that \(\{\Vert x_{n_k}-\hat{x}\Vert \}\) is a subsequence of \(\{\Vert x_n-\hat{x}\Vert \}\) such that (4.33) holds. Again, from Lemma 4.6, we obtain
By (4.33), Remark 3.2 and the fact that \(\lim \nolimits _{k\rightarrow \infty }\alpha _{n_k}=0,\) we have
Thus, we get
It follows that
By the definition of \(b_n\) and by applying (4.35), we obtain
From the definition of \(w_n\) and by Remark 3.2, we get
Next, from (4.36) and (4.37) we obtain
Applying (4.37), (4.38) and the fact that \(\lim \nolimits _{k\rightarrow \infty }\alpha _{n_k}=0\) we obtain
Since \(\{x_n\}\) is bounded, \(w_{\omega }(x_n)\ne \emptyset .\) Let \(x^*\in w_{\omega }(x_n)\) be an arbitrary element. Then, there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) such that \(x_{n_k}\rightharpoonup x^*.\) It follows from (4.37) that \(w_{n_k}\rightharpoonup x^*.\) Now, invoking Lemma 4.5 and applying (4.36) we have \(x^*\in \Omega .\) Since \(x^*\in w_{\omega }(x_n)\) was chosen arbitrarily, it follows that \(w_{\omega }(x_n)\subset \Omega .\)
Next, by the boundedness of \(\{x_{n_k}\},\) there exists a subsequence \(\{x_{n_{k_j}}\}\) of \(\{x_{n_k}\}\) such that \(x_{n_{k_j}}\rightharpoonup q\) and
Since \(\hat{x}=P_\Omega (0),\) it follows from the property of the metric projection that
Hence, from (4.39) and (4.40) we obtain
Now, by Remark 3.2 and (4.41) we have \(\limsup \limits _{k\rightarrow \infty }d_{n_k}\le 0.\) Thus, by applying Lemma 2.4 it follows from (4.32) that \(\{\Vert x_n-\hat{x}\Vert \}\) converges to zero, which completes the proof. \(\square \)
5 Applications
5.1 Split Convex Minimization Problem with Multiple Output Sets
Let C be a nonempty, closed and convex subset of a real Hilbert space H. The convex minimization problem is formulated as finding a point \(x^*\in C\) such that
$$\begin{aligned} g(x^*)=\min \limits _{x\in C} g(x), \end{aligned}$$
(5.1)
where g is a real-valued convex function. We denote the solution set of Problem (5.1) by \(\arg \min g.\)
Let \(C, C_i\) be nonempty, closed and convex subsets of real Hilbert spaces \(H, H_i,i=1,2,...,N,\) respectively, and let \(T_i:H\rightarrow H_i,i=1,2,...,N,\) be bounded linear operators with adjoints \(T^*_i.\) Let \(g:H\rightarrow \mathbb {R}, g_i:H_i\rightarrow \mathbb {R}\) be convex and differentiable functions. Here, we apply our result to approximate the solution of the following split convex minimization problem with multiple output sets (SCMPMOS): Find \(x^*\in C\) such that
$$\begin{aligned} x^*\in \mathop {\mathrm {arg\,min}}\limits _{x\in C}g(x)~~\text {and}~~T_ix^*\in \mathop {\mathrm {arg\,min}}\limits _{y\in C_i}g_i(y),\quad i=1,2,\ldots ,N. \end{aligned}$$
(5.2)
We need the following lemma to establish our next result.
Lemma 5.1
[36] Let C be a nonempty, closed and convex subset of a real Banach space E. Let g be a convex function of E into \(\mathbb {R}.\) If g is Fréchet differentiable, then z is a solution of Problem (5.1) if and only if \(z\in VI(C,\triangledown g),\) where \(\triangledown g\) is the gradient of g.
Now, by applying Theorem 4.7 and Lemma 5.1, we obtain the following strong convergence theorem for approximating the solution of the SCMPMOS (5.2) in Hilbert spaces.
Theorem 5.2
Let \(C, C_i\) be nonempty, closed and convex subsets of real Hilbert spaces \(H, H_i,i=1,2,...,N,\) respectively, and let \(T_i:H\rightarrow H_i,i=1,2,...,N,\) be bounded linear operators with adjoints \(T^*_i.\) Let \(g:H\rightarrow \mathbb {R}, g_i:H_i\rightarrow \mathbb {R}\) be Fréchet differentiable convex functions such that \(\triangledown g, \triangledown g_i\) are uniformly continuous. Suppose that Assumption A of Theorem 4.7 holds and the solution set \(\Gamma \ne \emptyset .\) Then, the sequence \(\{x_n\}\) generated by the following algorithm converges strongly to \(\hat{x}\in \Gamma ,\) where \(\hat{x}= \min \{\Vert p\Vert :p\in \Gamma \}.\)
Algorithm 5.3.
Step 0. Select initial points \(x_0, x_1 \in H.\) Let \(C_0=C,~ T_0=I^H,~\triangledown g_0=\triangledown g\) and set \(n=1.\)
Step 1. Given the \((n-1)\)th and nth iterates, choose \(\theta _n\) such that \(0\le \theta _n\le \hat{\theta }_n\) with \(\hat{\theta }_n\) defined by
\(\hat{\theta }_n = {\left\{ \begin{array}{ll} \min \Big \{\theta ,~ \frac{\epsilon _n}{\Vert x_n - x_{n-1}\Vert }\Big \}, \quad \text {if}~ x_n \ne x_{n-1},\\ \theta , \quad \text {otherwise.} \end{array}\right. }\)
Step 2. Compute
\(w_n = x_n + \theta _n(x_n - x_{n-1}).\)
Step 3. Compute
\(y_{n,i} = P_{C_i}(T_iw_n-\lambda _{n,i}\triangledown g_iT_iw_n),\)
\(\lambda _{n+1,i} = {\left\{ \begin{array}{ll} \min \{\frac{(c_{n,i}+c_i)\Vert T_iw_n - y_{n,i}\Vert }{\Vert \triangledown g_iT_iw_n - \triangledown g_iy_{n,i}\Vert },~~ \lambda _{n,i}+\rho _{n,i}\},&{}\text {if}\quad \triangledown g_iT_iw_n - \triangledown g_iy_{n,i}\ne 0,\\ \lambda _{n,i}+\rho _{n,i},&{} \text {otherwise,} \end{array}\right. }\)
\(z_{n,i}=T_iw_n-\beta _{n,i}r_{n,i},\)
where
\(r_{n,i}=T_iw_n-y_{n,i}-\lambda _{n,i}(\triangledown g_iT_iw_n - \triangledown g_iy_{n,i})\)
and
\(\beta _{n,i}={\left\{ \begin{array}{ll} (k_i+k_{n,i})\frac{\langle T_iw_n - y_{n,i},r_{n,i}\rangle }{\Vert r_{n,i}\Vert ^2},&{}\text {if}\quad r_{n,i}\ne 0,\\ 0,&{}\text {otherwise.} \end{array}\right. }\)
Step 4. Compute
\(b_n = \sum _{i=0}^{N}\delta _{n,i}\big (w_n+\eta _{n,i}T_i^*(z_{n,i}-T_iw_n)\big ),\)
where
\(\eta _{n,i}={\left\{ \begin{array}{ll} \frac{(\phi _{n,i}+\phi _i)\Vert T_iw_n-z_{n,i}\Vert ^2}{\Vert T_i^*(T_iw_n-z_{n,i})\Vert ^2},&{}\text {if}\quad \Vert T_i^*(T_iw_n-z_{n,i})\Vert \ne 0,\\ 0,&{}\text {otherwise.} \end{array}\right. }\)
Step 5. Compute
\(x_{n+1}= (1-\alpha _n-\xi _n)w_n+\xi _nb_n.\)
Set \(n:= n +1 \) and return to Step 1.
Proof
Since \(g_i,~i=0,1,2,\ldots ,N,\) are convex, \(\triangledown g_i\) are monotone [36] and thus pseudomonotone. Consequently, the result follows by applying Lemma 5.1 and setting \(A_i=\triangledown g_i\) in Theorem 4.7. \(\square \)
5.2 Generalized Split Variational Inequality Problem
Finally, we apply our result to study the generalized split variational inequality problem (see [28]). Let \(C_i\) be nonempty, closed and convex subsets of real Hilbert spaces \(H_i,i=1,2,...,N,\) and let \(S_i:H_i\rightarrow H_{i+1},i=1,2,...,N-1,\) be bounded linear operators, such that \(S_i\ne 0.\) Let \(B_i:H_i\rightarrow H_i, i=1,2,...,N,\) be single-valued operators. The generalized split variational inequality problem (GSVIP) is formulated as finding a point \(x^*\in C_1\) such that
$$\begin{aligned} x^*\in VI(C_1,B_1)~~\text {and}~~S_{i}S_{i-1}\cdots S_1x^*\in VI(C_{i+1},B_{i+1}),\quad i=1,2,\ldots ,N-1, \end{aligned}$$
(5.3)
that is, \(x^*\in C_1\) such that
$$\begin{aligned} \langle B_1x^*, y_1-x^*\rangle \ge 0,~\forall ~y_1\in C_1,\quad \text {and}\quad \langle B_{i+1}(S_i\cdots S_1x^*), y_{i+1}-S_i\cdots S_1x^*\rangle \ge 0,~\forall ~y_{i+1}\in C_{i+1},~ i=1,2,\ldots ,N-1. \end{aligned}$$
Observe that if we let \(C=C_1,A=B_1, A_i=B_{i+1}, ~1\le i\le N-1, T_1=S_1, T_2=S_2S_1,~\ldots ,~\) and \(T_{N-1}=S_{N-1}S_{N-2}\ldots S_1,\) then the SVIPMOS (1.10) becomes the GSVIP (5.3). Hence, we obtain the following result for approximating the solution of GSVIP (5.3) when the cost operators are pseudomonotone and uniformly continuous.
Theorem 5.4
Let \(C_i\) be nonempty, closed and convex subsets of real Hilbert spaces \(H_i,i=1,2,...,N,\) and let \(S_i:H_i\rightarrow H_{i+1},i=1,2,...,N-1,\) be bounded linear operators with adjoints \(S^*_i\) such that \(S_i\ne 0.\) Let \(B_i:H_i\rightarrow H_i,~i=1,2,...,N,\) be uniformly continuous pseudomonotone operators satisfying condition (3.1), and suppose Assumption A of Theorem 4.7 holds and the solution set \(\Gamma \ne \emptyset .\) Then, the sequence \(\{x_n\}\) generated by the following algorithm converges strongly to \(\hat{x}\in \Gamma ,\) where \(\hat{x}= \min \{\Vert p\Vert :p\in \Gamma \}.\)
Algorithm 5.5.

Step 0. Select initial points \(x_0, x_1 \in H_1.\) Let \(S_0=I^{H_1},~\hat{S}_{i-1}=S_{i-1}S_{i-2}\ldots S_0,~\hat{S}_{i-1}^*=S_0^*S_1^*\ldots S_{i-1}^*,~i=1,2,\ldots ,N,\) and set \(n=1.\)

Step 1. Given the \((n-1)\)th and nth iterates, choose \(\theta _n\) such that \(0\le \theta _n\le \hat{\theta }_n,\) with \(\hat{\theta }_n\) defined by

\(\hat{\theta }_n = {\left\{ \begin{array}{ll} \min \Big \{\theta ,~ \frac{\epsilon _n}{\Vert x_n - x_{n-1}\Vert }\Big \}, &{}\text {if}~ x_n \ne x_{n-1},\\ \theta , &{}\text {otherwise.} \end{array}\right. }\)

Step 2. Compute

\(w_n = x_n + \theta _n(x_n - x_{n-1}).\)

Step 3. Compute

\(y_{n,i} = P_{C_i}(\hat{S}_{i-1}w_n-\lambda _{n,i}B_i\hat{S}_{i-1}w_n),\)

\(\lambda _{n+1,i} = {\left\{ \begin{array}{ll} \min \Big \{\frac{(c_{n,i}+c_i)\Vert \hat{S}_{i-1}w_n - y_{n,i}\Vert }{\Vert B_i\hat{S}_{i-1}w_n - B_iy_{n,i}\Vert },~ \lambda _{n,i}+\rho _{n,i}\Big \},&{}\text {if}\quad B_i\hat{S}_{i-1}w_n - B_iy_{n,i}\ne 0,\\ \lambda _{n,i}+\rho _{n,i},&{} \text {otherwise,} \end{array}\right. }\)

\(z_{n,i}=\hat{S}_{i-1}w_n-\beta _{n,i}r_{n,i},\)

where

\(r_{n,i}=\hat{S}_{i-1}w_n-y_{n,i}-\lambda _{n,i}(B_i\hat{S}_{i-1}w_n - B_iy_{n,i})\)

and

\(\beta _{n,i}={\left\{ \begin{array}{ll} (k_i+k_{n,i})\frac{\langle \hat{S}_{i-1}w_n - y_{n,i},r_{n,i}\rangle }{\Vert r_{n,i}\Vert ^2},&{}\text {if}\quad r_{n,i}\ne 0,\\ 0,&{}\text {otherwise.} \end{array}\right. }\)

Step 4. Compute

\(b_n = \sum _{i=1}^{N}\delta _{n,i}\big (w_n+\eta _{n,i}\hat{S}_{i-1}^*(z_{n,i}-\hat{S}_{i-1}w_n)\big ),\)

where

\(\eta _{n,i}={\left\{ \begin{array}{ll} \frac{(\phi _{n,i}+\phi _i)\Vert \hat{S}_{i-1}w_n-z_{n,i}\Vert ^2}{\Vert \hat{S}_{i-1}^*(\hat{S}_{i-1}w_n-z_{n,i})\Vert ^2},&{}\text {if}\quad \Vert \hat{S}_{i-1}^*(\hat{S}_{i-1}w_n-z_{n,i})\Vert \ne 0,\\ 0,&{}\text {otherwise.} \end{array}\right. }\)

Step 5. Compute

\(x_{n+1}= (1-\alpha _n-\xi _n)w_n+\xi _nb_n.\)

Set \(n:= n +1\) and return to Step 1.
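To make the structure of Steps 0–5 concrete, the following is a minimal NumPy sketch of Algorithm 5.5 for the special case \(N=2\) (one transfer operator \(S_1\)). The operators \(B_i\), the box constraints, and all numerical parameter values below are illustrative assumptions, not the authors' test data; for brevity the vanishing sequences \(c_{n,i},k_{n,i},\phi _{n,i}\) are taken to be zero, so only the constants \(c_i,k_i,\phi _i\) appear.

```python
# Sketch of Algorithm 5.5 with N = 2, B_1 = I, B_2 = 2I (monotone, hence
# pseudomonotone), box feasible sets, and delta_{n,i} = 1/N.  All of these
# choices are assumptions made for illustration only.
import numpy as np

m = 5
rng = np.random.default_rng(0)
S1 = rng.standard_normal((m, m))               # bounded linear operator S_1
S_hat = [np.eye(m), S1]                        # \hat{S}_0 = I, \hat{S}_1 = S_1
B = [lambda x: x, lambda x: 2.0 * x]           # cost operators B_1, B_2
proj = [lambda x: np.clip(x, -1.0, 1.0),       # P_{C_1}: box [-1, 1]^m
        lambda x: np.clip(x, -2.0, 2.0)]       # P_{C_2}: box [-2, 2]^m

N = 2
theta, c, phi, k = 0.5, 0.4, 0.4, 0.9          # c_i, phi_i in (0,1), k_i in (0,2)
lam = [1.0, 1.0]                               # initial step sizes lambda_{1,i}
x_prev, x = rng.standard_normal(m), rng.standard_normal(m)

for n in range(1, 201):
    alpha, eps = 1.0 / (2 * n + 3), 1.0 / (2 * n + 3) ** 3
    xi, rho = (1 - alpha) / 2, 10.0 / n ** 2
    # Steps 1-2: inertial extrapolation
    dx = np.linalg.norm(x - x_prev)
    th = theta if dx == 0 else min(theta, eps / dx)
    w = x + th * (x - x_prev)
    # Steps 3-4: projection-contraction step in each space H_i
    b = np.zeros(m)
    for i in range(N):
        u = S_hat[i] @ w
        y = proj[i](u - lam[i] * B[i](u))
        dB = B[i](u) - B[i](y)
        r = u - y - lam[i] * dB                # r_{n,i} uses the current lambda_{n,i}
        nB = np.linalg.norm(dB)
        lam[i] = min(c * np.linalg.norm(u - y) / nB, lam[i] + rho) if nB > 0 \
            else lam[i] + rho                  # self-adaptive, non-monotonic update
        nr = np.linalg.norm(r)
        beta = k * ((u - y) @ r) / nr ** 2 if nr > 0 else 0.0
        z = u - beta * r
        v = S_hat[i].T @ (z - u)               # \hat{S}_{i-1}^*(z_{n,i} - \hat{S}_{i-1} w_n)
        nv = np.linalg.norm(v)
        eta = phi * np.linalg.norm(u - z) ** 2 / nv ** 2 if nv > 0 else 0.0
        b += (w + eta * v) / N
    # Step 5: Mann-type step; the missing alpha_n-fraction anchors the
    # iterates at 0, which drives x_n to the minimum-norm solution.
    x_prev, x = x, (1 - alpha - xi) * w + xi * b

print(np.linalg.norm(x))                       # here 0 solves the GSVIP, so this is small
```

For these particular operators the zero vector lies in \(\Gamma \) and is the minimum-norm solution, so the iterates should shrink toward the origin.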
6 Numerical experiments
In this section, we present some numerical experiments to illustrate the implementability of our proposed method (Proposed Alg. 3.1). For simplicity, in all the experiments we consider the case when \(N=4.\) All numerical computations were carried out using MATLAB R2021b.
In our computations, we choose \(\alpha _n = \frac{1}{2n+3},\epsilon _n = \frac{1}{(2n+3)^3},\xi _n=\frac{(1-\alpha _n)}{2},\theta =0.99,\lambda _{1,i}=i+1.2,c_i=0.97,\phi _i=0.98,k_i=1.96,\rho _{n,i}=\frac{10}{n^2},\delta _{n,i}=\frac{1}{5}.\)
We consider the following test examples in both finite and infinite dimensional Hilbert spaces for our numerical experiments.
Example 6.1
Let \(H_i=\mathbb {R}^{m},~i=0,1,\ldots ,4,\) and let \(A_i:\mathbb {R}^{m} \rightarrow \mathbb {R}^{m}\) be the linear operator defined by \(A_i(x)=Sx+q,\) where \(q \in \mathbb {R}^{m}\) and \(S=N N^{T}+Q+D,\) with N an \(m\times m\) matrix, Q an \(m\times m\) skew-symmetric matrix, and D an \(m\times m\) diagonal matrix with nonnegative diagonal entries (thus S is positive definite). We let \(C_i=\{x \in \mathbb {R}^{m}:-(i+2) \le x_j\le i+2,~ j=1,\ldots ,m \}.\) In this example, we randomly generate all the entries of N and Q in \([-3,3]\) and those of D in \([0,3],\) take \(q=0,\) and set \(T_ix = \frac{3x}{i+3}.\)
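A short NumPy sketch of how such a test operator can be generated; the random seed and the \((Q_0-Q_0^T)/2\) construction of the skew-symmetric part are assumptions made for illustration.

```python
# Random test operator of Example 6.1: S = N N^T + Q + D with Q skew-symmetric
# and D a nonnegative diagonal matrix.  Seed and construction details are
# illustrative assumptions.
import numpy as np

m = 25
rng = np.random.default_rng(1)
N_mat = rng.uniform(-3, 3, (m, m))                  # entries of N in [-3, 3]
Q0 = rng.uniform(-3, 3, (m, m))
Q = (Q0 - Q0.T) / 2                                 # skew-symmetric, entries in [-3, 3]
D = np.diag(rng.uniform(0, 3, m))                   # nonnegative diagonal entries
S = N_mat @ N_mat.T + Q + D
q = np.zeros(m)

A = lambda x: S @ x + q                             # A_i(x) = Sx + q (same S for each i)
T = lambda i, x: 3 * x / (i + 3)                    # T_i x = 3x/(i+3)
proj_C = lambda i, x: np.clip(x, -(i + 2), i + 2)   # P_{C_i} onto the box

# Positive definiteness: the skew part Q drops out of x^T S x, so it suffices
# that the symmetric part N N^T + D has positive eigenvalues (a.s. for random N).
eigs = np.linalg.eigvalsh(N_mat @ N_mat.T + D)
print(bool(eigs.min() > 0))
```

Note that S itself is not symmetric (its skew part Q is nonzero), which is why the positive-definiteness check is carried out on the symmetric part.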
Example 6.2
For each \(i=0,1,\ldots ,4,\) we define the feasible set \(C_i = \mathbb {R}^m,\) \(T_ix = \frac{2x}{i+2},\) and \(A_i(x) = Mx,\) where M is an \(m\times m\) matrix given by
We note that M is a Hankel-type matrix with nonzero reverse diagonal.
Example 6.3
Let \(H_i=\mathbb {R}^2\) and \(C_i = [-1-i, 1+i]^2,~i=0,1,\ldots ,4.\) We set \(T_ix=\frac{4x}{i+4}\) and define the cost operator \(A_i: \mathbb {R}^2\rightarrow \mathbb {R}^2\) by
We consider the next example in an infinite-dimensional Hilbert space.
Example 6.4
Let \(H_i= (\ell _2(\mathbb {R}), \Vert \cdot \Vert _2),~i=0,1,\ldots ,4,\) where \(\ell _2(\mathbb {R}):=\{x=(x_1,x_2,\ldots ,x_j,\ldots ),~ x_j\in \mathbb {R}:\sum _{j=1}^{\infty }|x_j|^2<\infty \}\) and \(\Vert x\Vert _2=(\sum _{j=1}^{\infty }|x_j|^2)^{\frac{1}{2}}\) for all \(x\in \ell _2(\mathbb {R}).\) Let \(C_i:= \{x= (x_1,x_2,\ldots ,x_j,\ldots )\in \ell _2(\mathbb {R}): \Vert x\Vert _2\le i+1\},\) and define \(T_ix=\frac{5x}{i+5}\) and the cost operator \(A_i:H_i\rightarrow H_i\) by \(A_ix = (\frac{1}{\Vert x\Vert _2 + s} + \Vert x\Vert _2)x,\) where \(s > 0\) and \(i = 0,1,\ldots ,4.\) Then, each \(A_i\) is uniformly continuous and pseudomonotone.
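A finite-dimensional sketch of this example: \(\ell _2\) sequences are represented by their first few coordinates, and the projection onto the ball \(C_i\) has the usual closed form. The value of s, the truncation length, and the random sampling in the spot check are assumptions for illustration.

```python
# Truncated l2 sketch of Example 6.4: the cost operator A and the projection
# onto a norm ball, plus a random spot check of pseudomonotonicity.
import numpy as np

s = 0.5

def A(x):
    """A_i x = (1/(||x|| + s) + ||x||) x, a positive scalar multiple of x."""
    nx = np.linalg.norm(x)
    return (1.0 / (nx + s) + nx) * x

def proj_ball(x, radius):
    """Metric projection onto C_i = {x : ||x||_2 <= radius}."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

# Pseudomonotonicity requires: <A(u), v-u> >= 0  =>  <A(v), v-u> >= 0.
# For A(x) = g(||x||) x with g > 0, the premise gives <u, v-u> >= 0, hence
# <u, v> >= ||u||^2, so ||v|| >= ||u|| by Cauchy-Schwarz and the conclusion follows.
rng = np.random.default_rng(2)
for _ in range(1000):
    u, v = rng.standard_normal(6), rng.standard_normal(6)
    if A(u) @ (v - u) >= 0:
        assert A(v) @ (v - u) >= -1e-9   # small tolerance for rounding
print("pseudomonotonicity spot check passed")
```

The same scalar-field structure also shows that A is not Lipschitz near the origin for small s, which is the point of this test problem.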
We test Examples 6.1, 6.2, 6.3 and 6.4 under the following experiments:
Experiment 6.5
In this experiment, we check the behavior of our method by fixing the other parameters and varying \(\phi _{n,i}\) in Example 6.1. We do this to check the effects of this parameter and the sensitivity of our method to it.
We consider \(\phi _{n,i} \in \{\frac{3}{(n + 1)}, \frac{5}{(n + 1)^2}, \frac{7}{(n + 1)^3}, \frac{9}{(n + 1)^4}, \frac{11}{(n + 1)^5}\}\) with \(m = 25,\) \(m = 50,\) \( m = 100\) and \(m = 200.\)
Using \(\Vert x_{n+1}-x_n\Vert < 10^{-3}\) as the stopping criterion, we plot the graphs of \(\Vert x_{n+1}-x_n\Vert \) against the number of iterations for each m. The numerical results are reported in Figs. 1, 2, 3, 4 and Table 1.
Experiment 6.6
In this experiment, we check the behavior of our method by fixing the other parameters and varying \(c_{n,i}\) in Example 6.2. We do this to check the effects of this parameter and the sensitivity of our method to it.
We consider \(c_{n,i} \in \{\frac{15}{n^{0.1}},\frac{30}{n^{0.01}},\frac{45}{n^{0.001}},\frac{60}{n^{0.0001}},\frac{75}{n^{0.00001}}\}\) with \(m = 10,\) \( m = 20,\) \( m = 30\) and \( m = 40.\)
Using \(\Vert x_{n+1}-x_n\Vert < 10^{-3}\) as the stopping criterion, we plot the graphs of \(\Vert x_{n+1}-x_n\Vert \) against the number of iterations in each case. The numerical results are reported in Figs. 5, 6, 7, 8 and Table 2.
Finally, we test Examples 6.3 and 6.4 under the following experiment:
Experiment 6.7
In this experiment, we check the behavior of our method by fixing the other parameters and varying \(k_{n,i}\) and \(c_{n,i}\) in Examples 6.3 and 6.4. We do this to check the effects of these parameters and the sensitivity of our method to them.
(1) We consider \(k_{n,i} \in \{\frac{2}{(n + 1)}, \frac{4}{(2n + 1)^2}, \frac{6}{(3n + 1)^3}, \frac{8}{(4n + 1)^4}, \frac{10}{(5n + 1)^5}\}\) with the following two cases of initial values \(x_0\) and \(x_1:\)

Case I: \(x_0 = (2, 3);\) \(x_1 = (3, 4);\)

Case II: \(x_0 = (1, 3);\) \(x_1 = (2, 0).\)
Using \(\Vert x_{n+1}-x_n\Vert < 10^{-4}\) as the stopping criterion, we plot the graphs of \(\Vert x_{n+1}-x_n\Vert \) against the number of iterations in each case. The numerical results are reported in Figs. 9, 10 and Table 3.
(2)
We consider \(c_{n,i} \in \{\frac{15}{n^{0.1}},\frac{30}{n^{0.01}},\frac{45}{n^{0.001}},\frac{60}{n^{0.0001}},\frac{75}{n^{0.00001}}\}\) with the following two cases of initial values \(x_0\) and \(x_1:\)
-
Case I:
\(x_0 = (3, 1, \frac{1}{3}, \cdots );\) \(x_1 = (\frac{1}{3}, \frac{1}{6},\frac{1}{12},\cdots );\)
-
Case II:
\(x_0 = (2, 1, \frac{1}{2},\cdots );\) \(x_1 = (\frac{1}{2}, \frac{1}{8}, \frac{1}{32}, \cdots ).\)
Using \(\Vert x_{n+1}-x_n\Vert < 10^{-4}\) as the stopping criterion, we plot the graphs of \(\Vert x_{n+1}-x_n\Vert \) against the number of iterations in each case. The numerical results are reported in Figs. 11, 12 and Table 4.
Remark 6.8
Using different initial values and cases of m, and varying the key parameters in Examples 6.1–6.4, we obtained the numerical results displayed in Tables 1, 2, 3 and 4 and Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12. We note the following from our numerical experiments:
(1) In all the examples, the choice of the key parameters \(c_{n,i},\) \(k_{n,i}\) and \(\phi _{n,i}\) does not affect the number of iterations, and there is no significant difference in the CPU time. Thus, our method is not sensitive to these key parameters for each initial value and case of m.
(2) The number of iterations for our method remains consistent across all the examples, so the method is well-behaved.
7 Conclusion
In this paper, we studied the concept of split variational inequality problem with multiple output sets when the cost operators are pseudomonotone and uniformly continuous. We proposed a new Mann-type inertial projection and contraction method with self-adaptive step sizes for approximating the solution of the
problem in the framework of Hilbert spaces. Under some mild conditions on the control sequences and without prior knowledge of the operator norms, we obtained a strong convergence result for the proposed algorithm. Finally, we applied our result to study certain classes of optimization problems, and we presented several numerical experiments to illustrate the applicability of the proposed method.
Availability of Data and Material
Not applicable.
References
Alakoya, T.O., Mewomo, O.T.: S-Iteration inertial subgradient extragradient method for variational inequality and fixed point problems. Optimization (2023). https://doi.org/10.1080/02331934.2023.2168482
Alakoya, T.O., Mewomo, O.T., Shehu, Y.: Strong convergence results for quasimonotone variational inequalities. Math. Methods Oper. Res. 2022, 30, Art. 47 (2022)
Alakoya, T.O., Uzor, V.A., Mewomo, O.T.: A new projection and contraction method for solving split monotone variational inclusion, pseudomonotone variational inequality, and common fixed point problems. Comput. Appl. Math. 42(1), 33, Paper No. 3 (2023)
Alakoya, T.O., Uzor, V.A., Mewomo, O.T., Yao, J.-C.: On system of monotone variational inclusion problems with fixed-point constraint. J. Inequal. Appl. 2022, 30, Art No. 47 (2022)
Aremu, K.O., Izuchukwu, C., Ogwo, G.N., Mewomo, O.T.: Multi-step iterative algorithm for minimization and fixed point problems in p-uniformly convex metric spaces. J. Ind. Manag. Optim. 17(4), 2161–2180 (2021)
Bot, R.I., Csetnek, E.R., Vuong, P.T.: The forward-backward-forward method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces. Eur. J. Oper. Res. 287, 49–60 (2020)
Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Optimization 70, 1337–1358 (2021)
Censor, Y., Borteld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
Chang, S.-S., Yao, J.-C., Wang, L., Liu, M., Zhao, L.: On the inertial forward-backward splitting technique for solving a system of inclusion problems in Hilbert spaces. Optimization 70(12), 2511–2525 (2021)
Cottle, R.W., Yao, J.C.: Pseudomonotone complementary problems in Hilbert space. J. Optim. Theory Appl. 75, 281–295 (1992)
Fichera, G.: Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei VIII. Ser. Rend. Cl. Sci. Fis. Mat. Nat. 34, 138–142 (1963)
Gibali, A., Jolaoso, L.O., Mewomo, O.T., Taiwo, A.: Fast and simple Bregman projection methods for solving variational inequalities and related problems in Banach spaces. Results Math. 75(4), 36, Paper No. 179 (2020)
Gibali, A., Reich, S., Zalas, R.: Outer approximation methods for solving variational inequalities in Hilbert space. Optimization 66, 417–437 (2017)
Godwin, E.C., Alakoya, T.O., Mewomo, O.T., Yao, J.-C.: Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems. Appl. Anal. 102(15), 4253–4278 (2023)
Godwin, E.C., Izuchukwu, C., Mewomo, O.T.: Image restoration using a modified relaxed inertial method for generalized split feasibility problems. Math. Methods Appl. Sci. 46(5), 5521–5544 (2023)
Godwin, E.C., Mewomo, O.T., Alakoya, T.O.: A strongly convergent algorithm for solving multiple set split equality equilibrium and fixed point problems in Banach spaces. Proc. Edinb. Math. Soc. (2) (2023). https://doi.org/10.1017/S0013091523000251
He, H., Ling, C., Xu, H.K.: A relaxed projection method for split variational inequalities. J. Optim. Theory Appl. 166, 213–233 (2015)
Kassay, G., Reich, S., Sabach, S.: Iterative methods for solving systems of variational inequalities in reflexive Banach spaces. SIAM J. Optim. 21, 1319–1344 (2011)
Kim, J.K., Salahuddin, S., Lim, W.H.: General nonconvex split variational inequality problems. Korean J. Math. 25, 469–481 (2017)
Kim, J.K., Tuyen, T.M., Ha, M.T.: Two projection methods for solving the split common fixed point problem with multiple output sets in Hilbert spaces. Numer. Funct. Anal. Optim. 42(8), 973–988 (2021)
López, G., Martín-Márquez, V., Xu, H.K.: Iterative algorithms for the multiple-sets split feasibility problem. In: Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, pp. 243–279. Medical Physics Publishing, Madison (2010)
Moudafi, A., Thakur, B.S.: Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 8, 2099–2110 (2014)
Ogwo, G.N., Izuchukwu, C., Shehu, Y., Mewomo, O.T.: Convergence of relaxed inertial subgradient extragradient methods for quasimonotone variational inequality problems. J. Sci. Comput. 90, Art. 10 (2022)
Okeke, C.C., Mewomo, O.T.: On split equilibrium problem, variational inequality problem and fixed point problem for multi-valued mappings. Ann. Acad. Rom. Sci. Ser. Math. Appl. 9(2), 223–248 (2017)
Owolabi, A.O.-E., Alakoya, T.O., Taiwo, A., Mewomo, O.T.: A new inertial-projection algorithm for approximating common solution of variational inequality and fixed point problems of multivalued mappings. Numer. Algebra Control Optim. 12(2), 255–278 (2022)
Reich, S., Tuyen, T.M.: Iterative methods for solving the generalized split common null point problem in Hilbert spaces. Optimization 69, 1013–1038 (2020)
Reich, S., Tuyen, T.M.: The Generalized Fermat-Torricelli Problem in Hilbert Spaces. J. Optim. Theory Appl. 196, 78–97 (2023)
Reich, S., Tuyen, T.M.: The split feasibility problem with multiple output sets in Hilbert spaces. Optim. Lett. 14, 2335–2353 (2020)
Reich, S., Tuyen, T.M., Ha, M.T.N.: An optimization approach to solving the split feasibility problem in Hilbert spaces. J. Glob. Optim. 79, 837–852 (2021)
Reich, S., Tuyen, T.M., Thuy, N.T.T., et al.: A new self-adaptive algorithm for solving the split common fixed point problem with multiple output sets in Hilbert spaces. Numer. Algorithms 89, 1031–1047 (2022)
Saejung, S., Yotkaew, P.: Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 75, 742–750 (2012)
Stampacchia, G.: Formes bilineaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413–4416 (1964)
Tan, K.K., Xu, H.K.: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 178, 301–308 (1993)
Tian, M., Jiang, B.: Inertial Haugazeau’s hybrid subgradient extragradient algorithm for variational inequality problems in Banach spaces. Optimization 70(5–6), 987–1007 (2021)
Tian, M., Jiang, B.N.: Weak convergence theorem for a class of split variational inequality problems and applications in Hilbert space. J. Inequal. Appl. 2017, 17, Art. no. 123 (2017)
Uzor, V.A., Alakoya, T.O., Mewomo, O.T.: Strong convergence of a self-adaptive inertial Tseng’s extragradient method for pseudomonotone variational inequalities and fixed point problems. Open Math. 20, 234–257 (2022)
Uzor, V.A., Alakoya, T.O., Mewomo, O.T.: On split monotone variational inclusion problem with multiple output sets with fixed point constraints. Comput. Methods Appl. Math. (2022). https://doi.org/10.1515/cmam-2022-0199
Wang, Z.-B., Long, X., Lei, Z.-Y., Chen, Z.-Y.: New self-adaptive methods with double inertial steps for solving splitting monotone variational inclusion problems with applications. Commun. Nonlinear Sci. Numer. Simul. 114, 106656 (2022)
Xia, Y., Wang, J.: A general methodology for designing globally convergent optimization neural networks. IEEE Trans. Neural Netw. 9, 1331–1343 (1998)
Wickramasinghe, M.U., Mewomo, O.T., Alakoya, T.O., Iyiola, O.S.: Mann-type approximation scheme for solving a new class of split inverse problems in Hilbert spaces. Applicable Anal. (2023). https://doi.org/10.1080/00036811.2023.2233977
Acknowledgements
The authors sincerely thank the anonymous referee for their careful reading, comments and useful suggestions. The research of the first author is wholly supported by the University of KwaZulu-Natal, Durban, South Africa, Postdoctoral Fellowship. He is grateful for the funding and financial support. The second author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903). Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF.
Funding
Open access funding provided by University of KwaZulu-Natal. The first author is funded by the University of KwaZulu-Natal, Durban, South Africa Postdoctoral Fellowship. The second author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903).
Author information
Contributions
Conceptualization of the article was given by OTM, methodology by TOA, formal analysis, investigation and writing—original draft preparation by TOA, software and validation by OTM, writing—review and editing by TOA and OTM, project administration by OTM. All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
Ethics declarations
Conflict of Interest
The authors declare that they have no competing interests.
Ethical Approval
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Alakoya, T.O., Mewomo, O.T. Mann-Type Inertial Projection and Contraction Method for Solving Split Pseudomonotone Variational Inequality Problem with Multiple Output Sets. Mediterr. J. Math. 20, 336 (2023). https://doi.org/10.1007/s00009-023-02535-7
Keywords
- Split inverse problems
- variational inequalities
- non-Lipschitz operators
- projection & contraction method
- inertial technique