Abstract
In the literature, several methods have been proposed for solving split variational inequality problems. Most of these methods require that the underlying operators be co-coercive, while some require that the problem be transformed into a product space. These restrictive conditions affect the applicability of the existing methods. To overcome these setbacks, we propose two new inertial projection and contraction methods for solving the split variational inequality problem in real Hilbert spaces without the co-coercivity condition and without the product space formulation, which does not fully exploit the attractive splitting structure of the problem. The sequences generated by these methods converge strongly to the solution of the split variational inequality problem in real Hilbert spaces under the assumptions that the operators are pseudomonotone and Lipschitz continuous, without the sequential weak continuity condition. Furthermore, we present several numerical experiments for the proposed methods and compare their performance with that of other related methods in the literature.
1 Introduction
1.1 Background and literature review
Let \({\mathcal {C}}\) and \({\mathcal {Q}}\) be two nonempty closed and convex subsets of real Hilbert spaces \({\mathcal {H}}_1\) and \({\mathcal {H}}_2\), respectively. Let \(T:{\mathcal {H}}_1\rightarrow {\mathcal {H}}_2\) be a bounded linear operator, and let \(A:{\mathcal {H}}_1\rightarrow {\mathcal {H}}_1\) and \(F:{\mathcal {H}}_2\rightarrow {\mathcal {H}}_2\) be two operators. The Split Variational Inequality Problem (SVIP) is defined as follows:
Find \(x\in {\mathcal {C}} \) such that
and \(z=Tx \in {\mathcal {Q}}\) solves
The solution set of the SVIP (1.1)–(1.2) is denoted by \(\Gamma :=\Big \{z\in {VI}(A,{\mathcal {C}}): Tz\in {VI}(F,{\mathcal {Q}})\Big \},\) where \({VI}(A,{\mathcal {C}})\) is the solution set of (1.1) and \({VI}(F,{\mathcal {Q}})\) is the solution set of (1.2). Viewed separately, the SVIP comprises the two classical Variational Inequality Problems (VIPs) (1.1)–(1.2). Thus, the SVIP is made up of a pair of VIPs that need to be solved so that the image \(z=Tx\), under the given bounded linear operator T, of a solution x of the VIP (1.1) in \({\mathcal {H}}_1\) is a solution of the VIP (1.2) in \({\mathcal {H}}_2.\) The SVIP (1.1)–(1.2) is a special model of the following Split Inverse Problem (SIP):
such that
where \(X_1\) and \(X_2\) are two vector spaces, \(T:X_1\rightarrow X_2\) is a bounded linear operator, \(IP_1\) and \(IP_2\) are two inverse problems in \(X_1\) and \(X_2\), respectively (see [8, 17]). Note that the first known case of the SIP is the following Split Convex Feasibility Problem (SCFP) introduced and studied by Censor and Elfving [15]:
Hence, the SVIP (1.1)–(1.2) can also be viewed as an interesting combination of the classical VIP (1.1) and the SCFP (1.5). Thus, it has wide applications in different fields such as data compression, signal processing, intensity-modulated radiation therapy (IMRT) treatment planning, medical image reconstruction and phase retrieval, among others (see, for example, [6, 7, 21]). Moreover, as special cases, the SVIP includes the split common fixed point problem, the split minimization problem and the split common null point problem (see [26, 36, 38] and references therein).
The classical VIPs have been studied by many researchers due to their applications in diverse fields (see, for example, [10,11,12,13,14, 22, 23, 28, 51]). They have been studied when the cost operator is not necessarily co-coercive (see [46]), but very few authors have studied the SVIP when the cost operator is not co-coercive. Censor et al. [16] (see also [17]) proposed an iterative algorithm to solve the SVIP when the cost operators A and F are monotone and Lipschitz continuous. They transformed the SVIP into an equivalent constrained VIP (CVIP) in the product space \({\mathcal {H}}_1\times {\mathcal {H}}_2\) (see [16, Section 4]) and then solved the problem using the well-known subgradient extragradient method [18, 40]. This product space formulation has some limitations, which include:
-
the difficulty encountered when computing the projection onto some new product subspace formulations,
-
the difficulty encountered when translating the method back to the original spaces \({\mathcal {H}}_1\) and \({\mathcal {H}}_2,\) and
-
the fact that it does not fully exploit the splitting structure of the SVIP (1.1)–(1.2) (see, for example [16, p. 12]).
To circumvent these limitations, Censor et al. [16] proposed a projection-based method that does not require any product space formulation, which makes it easier to implement. The proposed projection-based method is presented as follows: for \(x_1\in {\mathcal {H}}_1\), the sequence \(\{x_n\}\) is generated by
where \(\eta \in \left( 0,\frac{1}{L}\right) \) with L being the spectral radius of \(T^{*}T\) and \(T^*\) is the adjoint of T. The identity operator is denoted by I and \(P_{{\mathcal {C}}},P_{{\mathcal {Q}}}\) are metric projections onto \({\mathcal {C}},{\mathcal {Q}},\) respectively. They obtained a weak convergence of the sequence \(\{x_n\}\) generated by (1.6) to a solution of (1.1)–(1.2) under the condition that the solution set of problem (1.1)–(1.2) is nonempty, A, F are \(L_1,L_2\)-co-coercive operators respectively, \(\lambda \in [0,2\alpha ],\) where \(\alpha :=\min \{L_1,L_2\},\) and for all x which are solutions of (1.1),
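The displayed formula (1.6) did not survive extraction; the following Python sketch therefore assumes the standard form of the Censor et al. projection method, \(x_{n+1}=P_{{\mathcal {C}}}(I-\lambda A)\big (x_n+\eta \,T^{*}(P_{{\mathcal {Q}}}(I-\lambda F)-I)Tx_n\big )\), on a toy instance in \({\mathbb {R}}^2\) in which all data (the balls, the affine co-coercive operators, and the matrix T) are invented for illustration:

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection onto the closed ball of radius r centred at 0."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

# Toy SVIP instance (illustrative assumptions, not from the paper):
# A(x) = x - a and F(z) = z - b are 1-co-coercive; C = Q = unit ball.
a = np.array([0.3, 0.2])
T = 0.5 * np.eye(2)
b = T @ a                      # chosen so that x* = a solves both VIPs
A = lambda x: x - a
F = lambda z: z - b

lam = 0.5                      # lam in (0, 2*alpha], alpha = min{L1, L2} = 1
eta = 1.0                      # eta in (0, 1/L), L = spectral radius of T^T T = 0.25
x = np.array([0.9, -0.4])
for _ in range(200):
    u = x + eta * T.T @ (proj_ball(T @ x - lam * F(T @ x)) - T @ x)
    x = proj_ball(u - lam * A(u))
print(x)                       # approaches the SVIP solution a = (0.3, 0.2)
```

On this instance the iteration contracts linearly toward a, and Ta lands in \({\mathcal {Q}}\) as required by (1.2).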
Observe that Algorithm (1.6) does not require the product space formulation; thus, it fully exploits the attractive splitting structure of the SVIP (1.1)–(1.2). However, the authors obtained only weak convergence of this method, under the strong assumptions that both operators are co-coercive and that (1.7) holds. Many authors have studied methods which do not rely on assumption (1.7) for solving the SVIP and other related problems (see, for example, [37]), but their methods also relied on the co-coercivity of the cost operators.
In a quest to overcome these limitations, Tian and Jiang [48] proposed the following iterative method:
where \(\{\tau _n\}\subset [a,b],~~~ \{\lambda _n\}\subset [c,d]\) for some \(c,d \in \left( 0,\frac{1}{L}\right) ,~~~ T:{\mathcal {H}}_1\rightarrow {\mathcal {H}}_2\) is a bounded linear operator, \(S:{\mathcal {H}}_2\rightarrow {\mathcal {H}}_2\) is a nonexpansive mapping and \(A:{\mathcal {C}}\rightarrow {\mathcal {H}}_1\) is a monotone and Lipschitz continuous mapping. They obtained weak convergence of the sequence generated by Algorithm (1.8) to a solution of the following problem: Find
where F(S) is the set of fixed points of S. Since strong convergence results are more desirable and more applicable than weak convergence results in infinite-dimensional spaces, there is a need to develop algorithms that generate strongly convergent sequences.
Tian and Jiang [47] modified Algorithm (1.8) into the following viscosity method:
where \(\{\tau _n\}\subset [a,b],~~~ \{\lambda _n\}\subset [c,d]\) for some \(c,d \in \left( 0,\frac{1}{L}\right) ,~~~ \{\alpha _n\}\subset (0,1),~~~~~ T:{\mathcal {H}}_1\rightarrow {\mathcal {H}}_2\) is a bounded linear operator, \(S:{\mathcal {H}}_2\rightarrow {\mathcal {H}}_2\) is a nonexpansive mapping, h is a contraction mapping and \(A:{\mathcal {C}}\rightarrow {\mathcal {H}}_1\) is a monotone and Lipschitz continuous mapping. We observe that Algorithms (1.8)–(1.10) do not require the strong co-coercivity assumption on the underlying operators, but they involve the computation of many projections, which makes them computationally expensive and may affect their efficiency. Algorithms (1.8)–(1.10) can be used to solve the SVIP (1.1)–(1.2) if we set \(S=P_{{\mathcal {Q}}}(I-\lambda F)\) and let A be co-coercive. This implies that, when solving the SVIP (1.1)–(1.2), these methods still rely on the co-coercivity assumption on the underlying operator A. To weaken the conditions on the underlying operators, Pham et al. [42] combined the Halpern method with the subgradient extragradient method for solving the SVIP (1.1)–(1.2) in real Hilbert spaces when the underlying operators A and F are pseudomonotone and Lipschitz continuous. The authors obtained a strong convergence result for their proposed method (see Appendix (6.1)) to a solution of the SVIP (1.1)–(1.2) under the following conditions:
for all sequences \(\{x_n\}\) and \(\{y_n\}\) in \({\mathcal {H}}_1\) converging weakly to \({\bar{x}}\) and \({\bar{y}}\), respectively, and
for all sequences \(\{c_n\}\) and \(\{d_n\}\) in \({\mathcal {H}}_2\) converging weakly to \({\bar{c}}\) and \({\bar{d}}\), respectively. We observe from Appendix (6.1) that the method of Pham et al. [42] involves the computation of two projections onto the feasible sets and two projections onto half-spaces per iteration. Also, knowledge of the norm of the bounded linear operator is required for the implementation of their method; this norm is usually very difficult to compute and in some cases not known. In this connection, see also [45].
To accelerate the convergence of iterative methods for solving optimization problems, many authors have studied algorithms with inertial extrapolation step due to the improved convergence speed contributed by the presence of the inertial step. These algorithms have been tested in finding solutions of a number of problems and the results showed that the inertial step increases the rate of convergence of these methods (see [9, 31, 32, 50, 52], and other references therein). Recently, Ogwo et al. [39] proposed and studied two new methods with inertial steps for solving the split variational inequality problems in real Hilbert spaces without any product space formulation.
1.2 Motivation
In this paper, we are motivated by a traffic flow network for two cities (see, for example [44]). For City 1, we consider a traffic flow network with \(N_1\) number of nodes that are connected by oriented edges. We denote the set of edges of the network and the set of oriented pairs of the nodes by \(D_1\) and \(W_1\), respectively. Let \(w_1=(a_1,b_1)\in W_1\), where \(a_1\) and \(b_1\) represent the origin node and the destination node, respectively, then \(P_{w_1}\) is the set of all paths from node \(a_1\) to node \(b_1\) and \(Q_1=\cup _{w_1\in W_1} P_{w_1}\) is the set of all paths in the network. For each path \(p_1\in Q_1\), let \(z_{p_1}\) be the path flow. We associate each \(w_1\in W_1\) with a positive number \(d_{w_1}\), which denotes the flow demand from \(a_1\) to \(b_1\). The feasible set of flows \({\mathcal {C}}\) is defined by
where
We can define the value of edge flow \(A_{d_1}\) for each edge \(d_1\in D_1\) (if the flow vector \(z_1\) is known) by
where
Also, we can define the value of costs for each edge \(d_1\in D_1\) by \(t_{d_1}=C_{d_1}(A_{d_1})\). Then, the value of costs for each path \(p_1\) can be found (see [30]) by
Similarly for City 2, the feasible set of flows \(\mathcal Q\) is defined by
Then, the value of costs for each path \(p_2\) can be found by
Now, consider a bounded linear operator \(T:{\mathbb {R}}^{|Q_1|}\rightarrow {\mathbb {R}}^{|Q_2|}\) which connects City 1 to City 2. A feasible flow vector \(z_1^*\in {\mathcal {C}}\) is called an equilibrium vector if it satisfies
such that \(z_2^*=Tz_1^*\in {\mathcal {Q}}\) satisfies
This means that when the traffic network is at equilibrium, among all the paths in \(P_{w_1}\) and \(P_{w_2}\), only the paths that carry flow have the lowest cost in each city. It was established in [33] that the feasible flow vector \(z_1^*\in {\mathcal {C}}\) satisfies the traffic flow model (1.11)–(1.12) if and only if it solves the SVIP (1.1)–(1.2).
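The flow-to-cost computations described above can be illustrated on a tiny invented network (all data below are assumptions for the example): edge flows are obtained from path flows through a path-edge incidence matrix, and a path cost is the sum of its edge costs, here with an assumed affine cost function \(C_{d}(A_{d})=c_0+c_1A_{d}\) per edge.

```python
import numpy as np

# Toy network for "City 1": one origin-destination pair served by two paths.
#   path 0 uses edge 0; path 1 uses edges 1 and 2.
incidence = np.array([[1, 0],      # rows: edges, columns: paths
                      [0, 1],
                      [0, 1]])
z = np.array([3.0, 1.0])           # path flows (total meets the demand d_w = 4)
edge_flow = incidence @ z          # A_d = sum of flows of paths through edge d

# assumed affine edge-cost functions C_d(A_d) = c0 + c1 * A_d
c0 = np.array([1.0, 0.5, 0.5])
c1 = np.array([2.0, 1.0, 1.0])
edge_cost = c0 + c1 * edge_flow
path_cost = incidence.T @ edge_cost
print(edge_flow, path_cost)        # path 0 costs 7.0, path 1 costs 3.0
```

Here the loaded path 0 is more expensive than path 1, so this flow pattern is not an equilibrium: at equilibrium, flow would shift until every used path attains the minimal cost.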
1.3 Contribution
Motivated by the above, our interest in this paper is to introduce and study two new inertial projection and contraction methods for solving the SVIP (1.1)–(1.2) in infinite dimensional real Hilbert spaces when the underlying operators are pseudomonotone and Lipschitz continuous. Our proposed methods for solving the SVIP (1.1)–(1.2) have the following features:
-
The choice of the inertial factor in our proposed methods for solving the SVIP (1.1)–(1.2) is new and different from those in the literature (see, for example, [9] and other references therein).
-
Our methods do not require the SVIP (1.1)–(1.2) to be transformed into a product space, rather the methods efficiently and fully exploit the attractive splitting structure of the SVIP (1.1)–(1.2) thereby overcoming potential difficulties posed by the product space formulation.
-
Different from the existing methods for solving the SVIP (1.1)–(1.2), our proposed methods only require the underlying operators to be pseudomonotone and Lipschitz continuous, without the sequential weak continuity condition often used in the literature.
-
Our methods dispense with the two extra projections onto half-spaces used in [42]. Also, unlike the methods in [16, 42], our methods do not depend on knowledge of the norm \(\Vert T\Vert \) of the bounded linear operator. Thus, our methods can be easily implemented, since computing the norm of a bounded linear operator is difficult and in some cases impossible (see Theorem 2.4).
-
The proposed methods include an inertial extrapolation step, which is often employed to increase the convergence speed of algorithms (see [3,4,5, 20, 25] and other references therein).
1.4 Organization of the paper
In Sect. 2, we present certain basic definitions and lemmas that will be required to prove the strong convergence results of our methods. We present and discuss our proposed methods in Sect. 3. In Sect. 4, we present the convergence analysis of these methods. In Sect. 5, we present some numerical experiments for our methods and compare them with some related methods in the literature, and we conclude in Sect. 6.
2 Preliminaries
In this section, we give some lemmas and definitions that will be useful in obtaining our convergence results. We denote strong and weak convergence by \(\rightarrow \) and \(\rightharpoonup ,\) respectively. It is known that for a nonempty, closed and convex subset \({\mathcal {C}}\) of \({\mathcal {H}}\), the metric projection, denoted by \(P_{\mathcal {C}}\) (see [49]), is the map from \({\mathcal {H}}\) onto \({\mathcal {C}}\) which assigns to each \(x\in {\mathcal {H}}\) the unique point in \({\mathcal {C}}\), denoted by \(P_{\mathcal {C}} x,\) such that
The metric projection \(P_{\mathcal {C}}\) is characterized by the following inequality:
Furthermore, \(P_{{\mathcal {C}}}\) is known to possess the following property:
More information on the metric projection can be found, for example, in Section 3 of the book by Goebel and Reich [27] and in the paper by Kopecká and Reich [34].
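The two projection properties above can be checked numerically on a set for which the projection has a closed form. A minimal sketch, assuming the box \(C=[0,1]^n\) (whose metric projection is coordinatewise clipping): we verify the minimal-distance property, the characterizing inequality \(\langle x-P_{{\mathcal {C}}}x,\,y-P_{{\mathcal {C}}}x\rangle \le 0\) for \(y\in C\), and nonexpansiveness.

```python
import numpy as np

# For the box C = [0,1]^n, the metric projection is coordinatewise clipping.
rng = np.random.default_rng(0)
proj = lambda x: np.clip(x, 0.0, 1.0)

x = rng.normal(size=5) * 3
px = proj(x)
for _ in range(1000):
    y = rng.uniform(0.0, 1.0, size=5)          # arbitrary point of C
    # P_C x is the closest point of C to x ...
    assert np.linalg.norm(x - px) <= np.linalg.norm(x - y) + 1e-12
    # ... and satisfies the characterizing variational inequality
    assert np.dot(x - px, y - px) <= 1e-12

# nonexpansiveness: ||P_C u - P_C v|| <= ||u - v||
u, v = rng.normal(size=5) * 3, rng.normal(size=5) * 3
assert np.linalg.norm(proj(u) - proj(v)) <= np.linalg.norm(u - v)
print("projection properties verified")
```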
Lemma 2.1
[1, 41] Let \({\mathcal {H}}\) be a real Hilbert space, then the following assertions hold:
-
(1)
\(2\langle x, y \rangle =\Vert x\Vert ^2+\Vert y\Vert ^2-\Vert x-y\Vert ^2=\Vert x+y\Vert ^2-\Vert x\Vert ^2-\Vert y\Vert ^2,~~\forall x,y \in {\mathcal {H}};\)
-
(2)
\(\Vert \alpha x+(1-\alpha )y\Vert ^2 = \alpha \Vert x\Vert ^2+(1-\alpha )\Vert y\Vert ^2-\alpha (1-\alpha )\Vert x-y\Vert ^2,~~\forall x,y \in {\mathcal {H}},~ \alpha \in {\mathbb {R}};\)
-
(3)
\(\Vert x-y\Vert ^2 \le \Vert x\Vert ^2+2\langle y, y-x \rangle , ~~\forall x,y \in {\mathcal {H}}.\)
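The three identities of Lemma 2.1 can be verified numerically in \({\mathbb {R}}^n\) with the Euclidean inner product. Note that assertion (3) is checked below in the form \(\Vert x-y\Vert ^2\le \Vert x\Vert ^2+2\langle y,\,y-x\rangle \), the sign convention under which the inequality is valid (the difference of the two sides equals \(\Vert y\Vert ^2\)).

```python
import numpy as np

# Numerical check of the Hilbert-space identities in Lemma 2.1 (in R^4).
rng = np.random.default_rng(1)
x, y = rng.normal(size=4), rng.normal(size=4)
n2 = lambda v: np.dot(v, v)       # squared norm

# (1) polarization identities
assert np.isclose(2 * np.dot(x, y), n2(x) + n2(y) - n2(x - y))
assert np.isclose(2 * np.dot(x, y), n2(x + y) - n2(x) - n2(y))
# (2) convex-combination identity, valid for every real alpha
for alpha in (-0.7, 0.3, 1.4):
    lhs = n2(alpha * x + (1 - alpha) * y)
    rhs = alpha * n2(x) + (1 - alpha) * n2(y) - alpha * (1 - alpha) * n2(x - y)
    assert np.isclose(lhs, rhs)
# (3) subgradient-type inequality; the slack is exactly ||y||^2
assert n2(x - y) <= n2(x) + 2 * np.dot(y, y - x) + 1e-12
print("Lemma 2.1 identities verified")
```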
Definition 2.2
Let \({\mathcal {H}}\) be a real Hilbert space and \(A:{\mathcal {H}}\rightarrow {\mathcal {H}}\) be a mapping. Then, A is said to be
-
(i)
L-Lipschitz continuous, if there exists \(L>0\) such that
$$\begin{aligned} \Vert Ax-Ay\Vert \le L\Vert x-y\Vert ,~ \forall ~~x,y\in {\mathcal {H}}, \end{aligned}$$ -
(ii)
L-co-coercive (or L-inverse strongly monotone), if there exists \(L>0\) such that
$$\begin{aligned} \big <Ax-Ay,x-y\big >\ge L\Vert Ax-Ay\Vert ^2,~~ \forall ~x,y \in {\mathcal {H}}, \end{aligned}$$ -
(iii)
monotone, if
$$\begin{aligned}\big <Ax-Ay,x-y\big >\ge 0,~~ \forall ~x,y \in {\mathcal {H}}, \end{aligned}$$ -
(iv)
pseudomonotone, if
$$\begin{aligned} \big<Ax,y-x \big> \ge 0 \implies ~\big <Ay,y-x \big > \ge 0,~~\forall ~x,y \in {\mathcal {H}}, \end{aligned}$$ -
(v)
sequentially weakly continuous, if for every sequence \(\{x_n\}\) that converges weakly to a point x, the sequence \(\{Ax_n\}\) converges weakly to Ax.
We clearly observe that L-co-coercive operators are \(\frac{1}{L}\)-Lipschitz continuous and monotone, but the converse is not always true. We also observe from the definitions above that \((ii)\implies (iii)\implies (iv)\), but the converse implications do not hold in general.
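The strictness of the implication chain can be illustrated with a concrete one-dimensional example (invented here for illustration): any strictly positive operator is pseudomonotone, since \(F(x)(y-x)\ge 0\) forces \(y\ge x\), which in turn gives \(F(y)(y-x)\ge 0\). The map \(F(x)=1/(1+x^2)\) is of this kind and is Lipschitz continuous, yet it is not monotone (and hence not co-coercive).

```python
import numpy as np

# F(x) = 1/(1+x^2): strictly positive, hence pseudomonotone; Lipschitz with
# constant L = max|F'| = 9/(8*sqrt(3)); but decreasing near 0, so not monotone.
F = lambda x: 1.0 / (1.0 + x * x)
xs = np.linspace(-5, 5, 2001)

# not monotone: the defining inequality fails for the pair (0, 1)
assert (F(1.0) - F(0.0)) * (1.0 - 0.0) < 0

# pseudomonotone: whenever F(x)(y-x) >= 0, also F(y)(y-x) >= 0 (grid check)
for x in xs[::40]:
    for y in xs[::40]:
        if F(x) * (y - x) >= 0:
            assert F(y) * (y - x) >= 0

# Lipschitz: all difference quotients stay below L = 9/(8*sqrt(3)) ~ 0.65
L = 9 / (8 * np.sqrt(3))
d = np.abs(F(xs[:-1]) - F(xs[1:])) / np.abs(xs[:-1] - xs[1:])
assert np.all(d <= L + 1e-9)
print("pseudomonotone + Lipschitz, but not monotone")
```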
Lemma 2.3
[19] Assume that \(A:{\mathcal {H}} \rightarrow {\mathcal {H}} \) is a continuous and pseudomonotone operator. Then, x is a solution of (1.1) if and only if \(\langle Ay,y -x \rangle \ge 0,~~ \forall y\in {\mathcal {C}}.\)
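This Minty-type characterization is easy to check on a toy instance (data invented for illustration): on \(C=[0,1]\) with \(A(x)=x-0.5\), which is continuous and monotone (hence pseudomonotone), the point \(x^*=0.5\) solves the VIP, and the equivalent Minty inequality \(\langle Ay,\,y-x^*\rangle \ge 0\) holds for every \(y\in C\) since \(A(y)(y-0.5)=(y-0.5)^2\).

```python
import numpy as np

# Toy VIP on C = [0,1] with A(x) = x - 0.5; the solution is x* = 0.5.
A = lambda x: x - 0.5
x_star = 0.5
ys = np.linspace(0.0, 1.0, 101)

assert np.all(A(x_star) * (ys - x_star) >= -1e-12)   # x* solves the VIP (1.1)
assert np.all(A(ys) * (ys - x_star) >= -1e-12)       # Minty characterization
print("both characterizations agree at x* = 0.5")
```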
Theorem 2.4
[29, Theorem 2.3] Let \(p\in [1, \infty )\) be a rational number except for \(p=1, 2\). Unless \(P=NP\), there is no algorithm which computes the p-norm of a matrix with entries in \(\{-1, 0, 1\}\) to relative error with running time polynomial in the dimensions.
Lemma 2.5
[43] Let \({\mathcal {C}}\subseteq {\mathcal {H}}\) be a nonempty, closed and convex subset of a real Hilbert space \({\mathcal {H}}.\) Let \(u\in {\mathcal {H}}\) be arbitrarily given, \(z:=P_{{\mathcal {C}}}u,\) and \(\Omega :=\{x\in {\mathcal {H}}:\langle x-u, x-z\rangle \le 0\}.\) Then \(\Omega \cap {\mathcal {C}}=\{z\}.\)
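A small sampling illustration of Lemma 2.5 (the instance is an assumption for the example): with \({\mathcal {C}}\) the unit ball in \({\mathbb {R}}^2\) and \(u=(2,0)\), we have \(z=P_{{\mathcal {C}}}u=(1,0)\), and \(\Omega \) is the ball whose diameter is the segment [z, u]; it meets \({\mathcal {C}}\) only at z.

```python
import numpy as np

rng = np.random.default_rng(2)
u, z = np.array([2.0, 0.0]), np.array([1.0, 0.0])

# z belongs to both sets: ||z|| = 1 and <z - u, z - z> = 0 <= 0
assert np.linalg.norm(z) <= 1.0
assert np.dot(z - u, z - z) <= 0.0

# sample random points of C and test membership in Omega
pts = rng.uniform(-1, 1, size=(20000, 2))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]
in_omega = np.einsum('ij,ij->i', pts - u, pts - z) <= 0
# any sampled point of C lying in Omega must (numerically) coincide with z
assert np.all(np.linalg.norm(pts[in_omega] - z, axis=1) < 1e-6)
print("Omega intersects C only at z")
```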
3 Proposed methods
In this section, we present our proposed methods for solving the SVIP (1.1)–(1.2).
Assumption 3.1
Suppose that the following conditions hold:
-
(a)
The feasible sets \({\mathcal {C}}\) and \({\mathcal {Q}}\) are nonempty closed and convex subsets of the real Hilbert spaces \({\mathcal {H}}_1\) and \({\mathcal {H}}_2\), respectively.
-
(b)
\(A:{\mathcal {H}}_1 \rightarrow {\mathcal {H}}_1\) and \(F:{\mathcal {H}}_2 \rightarrow {\mathcal {H}}_2\) are pseudomonotone and Lipschitz continuous with Lipschitz constants \(L_1\) and \(L_2\), respectively.
-
(c)
\(A:{\mathcal {H}}_1\rightarrow {\mathcal {H}}_1\) and \(F:{\mathcal {H}}_2\rightarrow {\mathcal {H}}_2\) satisfy the following property: whenever \(\{x_n\}\subset {\mathcal {C}}\), \(\{y_n\}\subset {\mathcal {Q}}\), \(x_n\rightharpoonup x\) and \(y_n\rightharpoonup y\), one has \(\Vert Ax\Vert \le \liminf \limits _{n\rightarrow \infty }\Vert Ax_n\Vert \) and \(\Vert Fy\Vert \le \liminf \limits _{n\rightarrow \infty }\Vert Fy_n\Vert .\)
-
(d)
\(T:{\mathcal {H}}_1 \rightarrow {\mathcal {H}}_2\) is a bounded linear operator and the solution set \(\Gamma :=\{z\in {{VI}(A,{\mathcal {C}})}: {Tz}\in {{VI}(F,{\mathcal {Q}})}\}\) is nonempty, where \({{VI}(A,{\mathcal {C}})}\) is the solution set of the classical VIP (1.1).
-
(e)
\(\{\alpha _n\}\subset (0,1]\) is non-increasing with \(\lim \nolimits _{n\rightarrow \infty }\alpha _n=0\) and \(\sum \limits _{n=1}^{\infty }\alpha _n=\infty .\)
-
(f)
\(0\le \theta _n\le \theta _{n+1}\le \theta <\frac{1}{3}, \sigma \in (0,\frac{1}{2}].\)
-
(g)
\(\{\phi _n\}\) and \(\{\psi _n\}\) are non-negative sequences such that \(\sum _{n=1}^\infty \phi _n<+\infty \) and \(\sum _{n=1}^\infty \psi _n<+\infty .\)
When the Lipschitz constants \(L_1\) and \(L_2\) are known, we present the following method for solving the SVIP (1.1)–(1.2).
Algorithm 3.2
Inertial projection and contraction method with fixed step size.
Step 0: Choose sequences \( \{\alpha _n\}^{\infty }_{n=1}\) and \(\{\theta _n\}^{\infty }_{n=1}\) such that the conditions from Assumption 3.1(e)–(f) hold and let \(\eta \ge 0, \gamma _i\in (0,2),i=1,2,~\mu \in (0,\frac{1}{L_1}),~\lambda \in (0,\frac{1}{L_2}), ~\) and \(x_0,x_1 \in {\mathcal {H}}_1\) be given arbitrarily. Set \(n:=1.\)
Step 1: Given the iterates \(x_{n-1}\) and \(x_n~~ (n \ge 1),\) \(\alpha _n\in (0,1)\) and \( \theta _n\in [0,\frac{1}{3}),\) compute
Step 2: Compute
where \(r_n:=Tw_n-y_n-\lambda (FTw_n-Fy_n)\) and \(\beta _n:= \frac{\langle Tw_n-y_n,r_n \rangle }{\Vert r_n\Vert ^2},\) if \(r_n\ne 0,\) otherwise \(\beta _n=0.\)
Step 3: Compute
where the step size \(\eta _n\) is chosen such that for some \(\epsilon >0,~~~\eta _n\in \Big (\epsilon ,~~ \frac{\Vert Tw_n-z_n\Vert ^2}{\Vert T^{*}(Tw_n-z_n)\Vert ^2}-\epsilon \Big ),\) if \(z_n\ne Tw_n;\) otherwise \(\eta _n=\eta .\)
Step 4: Compute
where \(v_n:=b_n-u_n-\mu (Ab_n- A u_n)\) and \(\gamma _n:=\frac{\langle b_n-u_n,v_n \rangle }{\Vert v_n\Vert ^2},\) if \(v_n \ne 0,\) otherwise \(\gamma _n=0.\)
Step 5: Compute
Set \(n:=n+1\) and go back to Step 1.
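Since the displayed formulas in Steps 1–5 appear only as images in the source, the following Python sketch reconstructs Algorithm 3.2 from the quantities that do survive in the text (\(r_n,\beta _n,\eta _n,v_n,\gamma _n\)) and from relations used later in the proofs (e.g., \(x_{n+1}=(1-\sigma )w_n+\sigma t_n\), implied by \(t_n-w_n=\frac{1}{\sigma }(x_{n+1}-w_n)\)). The inertial step \(w_n\), the projection steps, the toy problem data and all parameter values are illustrative assumptions, not the authors' exact formulas.

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Projection onto the closed ball of radius r centred at the origin."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

# Toy SVIP instance (all data invented): C = Q = unit ball,
# A(x) = x - a, F(z) = z - b with b = T a, so x* = a solves the SVIP.
a = np.array([0.3, 0.2])
T = 0.5 * np.eye(2)
b = T @ a
A = lambda x: x - a            # Lipschitz constant L1 = 1
F = lambda z: z - b            # Lipschitz constant L2 = 1

mu, lam = 0.5, 0.5             # mu in (0, 1/L1), lam in (0, 1/L2)
gamma1, gamma2 = 1.5, 1.5      # gamma_i in (0, 2)
sigma, theta, eta_default = 0.5, 0.3, 1.0

x_prev = np.array([0.9, -0.4])
x = x_prev.copy()
for n in range(1, 2001):
    alpha_n = 1.0 / (n + 1)
    # Step 1 (reconstructed): inertial extrapolation with a vanishing anchor
    w = (1.0 - alpha_n) * (x + theta * (x - x_prev))
    # Step 2: forward-backward step in H2, then the contraction direction r_n
    Tw = T @ w
    y = proj_ball(Tw - lam * F(Tw))
    r = Tw - y - lam * (F(Tw) - F(y))
    beta = np.dot(Tw - y, r) / np.dot(r, r) if np.dot(r, r) > 0 else 0.0
    z = Tw - gamma1 * beta * r
    # Step 3: transfer back to H1 with the operator-norm-free step size eta_n
    d = Tw - z
    Td = T.T @ d
    eta = 0.5 * np.dot(d, d) / np.dot(Td, Td) if np.dot(Td, Td) > 0 else eta_default
    bvec = w + eta * T.T @ (z - Tw)
    # Step 4: projection and contraction step in H1
    u = proj_ball(bvec - mu * A(bvec))
    v = bvec - u - mu * (A(bvec) - A(u))
    gam = np.dot(bvec - u, v) / np.dot(v, v) if np.dot(v, v) > 0 else 0.0
    t = bvec - gamma2 * gam * v
    # Step 5: relaxation, consistent with t_n - w_n = (x_{n+1} - w_n)/sigma
    x_prev, x = x, (1.0 - sigma) * w + sigma * t
print(x)   # approaches the solution a = (0.3, 0.2)
```

On this toy instance the iterates approach a at a rate governed by \(\alpha _n\), reflecting the anchor-driven strong convergence mechanism; no norm \(\Vert T\Vert \) is ever computed.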
When the Lipschitz constants \(L_1\) and \(L_2\) are not known, we present the following method with adaptive step size for solving the SVIP (1.1)–(1.2).
Algorithm 3.3
Inertial projection and contraction method with adaptive step size strategy.
Step 0: Choose the control parameters such that conditions (e)–(g) of Assumption 3.1 hold and let \(\eta \ge 0,~ \gamma _i\in (0,2),~ a_i \in (0,1),~ i=1,2,\) \(\lambda _1>0\), \(\mu _1>0\), \(\alpha \ge 3\) and \(x_0,x_1 \in {\mathcal {H}}_1\) be given arbitrarily. Set \(n:=1.\)
Step 1: Given the iterates \(x_{n-1}\) and \(x_n~~ (n \ge 1),\) \(\alpha _n\in (0,1)\) and \( \theta _n\in [0,\frac{1}{3}),\) compute
Step 2: Compute
where \(r_n:=Tw_n-y_n-\lambda _n(FTw_n-Fy_n)\), \(\beta _n:= \frac{\langle Tw_n-y_n,r_n \rangle }{\Vert r_n\Vert ^2},\) if \(r_n\ne 0,\) otherwise \(\beta _n=0;\) and
Step 3: Compute
where the step size \(\eta _n\) is chosen such that for some \(\epsilon >0,~~~\eta _n\in \Big (\epsilon , ~~\frac{\Vert Tw_n-z_n\Vert ^2}{\Vert T^{*}(Tw_n-z_n)\Vert ^2}-\epsilon \Big ),\) if \(z_n\ne Tw_n\); otherwise \(\eta _n=\eta .\)
Step 4: Compute
where \(v_n:=b_n-u_n-\mu _n(Ab_n- A u_n)\), \(\gamma _n=\frac{\langle b_n-u_n,v_n \rangle }{\Vert v_n\Vert ^2},\) if \(v_n \ne 0,\) otherwise \(\gamma _n=0;\) and
Step 5: Compute
Set \(n:=n+1\) and go back to Step 1.
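The adaptive rules (3.1)–(3.2) did not survive extraction; the sketch below assumes the typical non-monotonic rule used with summable perturbations \(\{\phi _n\}\): the step may grow by \(\phi _n\), but it is capped by \(a_1\Vert Tw_n-y_n\Vert /\Vert FTw_n-Fy_n\Vert \), so no Lipschitz constant is needed. The function name and the concrete numbers are assumptions for illustration.

```python
import numpy as np

def next_lambda(lam, a1, phi, Tw, y, FTw, Fy):
    """One update of the assumed non-monotonic step-size rule: the step may
    grow by the summable perturbation phi, but is capped by
    a1 * ||Tw - y|| / ||F(Tw) - F(y)||."""
    denom = np.linalg.norm(FTw - Fy)
    if denom > 0.0:
        return min(a1 * np.linalg.norm(Tw - y) / denom, lam + phi)
    return lam + phi

# With F(z) = 2z (Lipschitz constant L2 = 2) the cap equals a1/2 = 0.45,
# so the step settles below a1/L2 regardless of the initial guess lambda_1.
F = lambda z: 2.0 * z
lam, a1 = 1.0, 0.9
Tw, y = np.array([1.0, 0.0]), np.array([0.0, 0.0])
for k in range(1, 6):
    lam = next_lambda(lam, a1, phi=1.0 / k**2, Tw=Tw, y=y, FTw=F(Tw), Fy=F(y))
print(lam)   # 0.45 = a1 / L2
```

Because of the additive term \(\phi _n\), the generated step sizes need not decrease monotonically, which in practice reduces the dependence on the initial choices \(\lambda _1\) and \(\mu _1\).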
We outline and discuss some of the properties of our proposed methods.
Remark 3.4
-
The choice of the inertial factor \(\theta _n\in [0,\frac{1}{3})\) in Algorithms 3.2 and 3.3 is new and different from the choices in the literature (see, for example, [2, 9, 24] and other references therein). As far as we know, this is the first time the inertial factor \(\theta _n\) is chosen in \([0,\frac{1}{3})\) for solving the SVIP when the underlying operators are pseudomonotone and Lipschitz continuous.
-
Algorithm 3.3 uses simple step size rules in (3.1) and (3.2), which generate non-monotonic sequences of step sizes. The step sizes are constructed such that the dependence of the algorithm on the initial step sizes \(\lambda _1\) and \(\mu _1\) is reduced.
-
We point out that if the pseudomonotone operators A and F are sequentially weakly continuous, then A and F satisfy condition (c), but the converse is not true. Hence, condition (c) is strictly weaker than the sequential weak continuity condition commonly employed in the literature (e.g., see [9, 39]).
-
Algorithms 3.2 and 3.3 can be viewed as modified inertial projection and contraction methods involving only one projection onto \({\mathcal {C}}\) per iteration for the VIP in \({\mathcal {H}}_1\) and one projection onto \({\mathcal {Q}}\) per iteration (under the bounded linear operator T) for the VIP in \({\mathcal {H}}_2\). Our methods improve on other methods in the literature, which require extra projections onto half-spaces or feasible sets (see [42] (see Appendix 6.1) and other references therein). In Step 2 of Algorithms 3.2 and 3.3, \(r_n\) can be written as the difference \(r_n=(Tw_n-\lambda FTw_n)-(y_n-\lambda Fy_n)\) of the images of \(Tw_n\) and \(y_n\) under the map \(I-\lambda F\) in \({\mathcal {H}}_2\) (with \(\lambda \) replaced by \(\lambda _n\) in Algorithm 3.3). The vector \(v_n\) in Step 4 admits a similar description in \({\mathcal {H}}_1\) with the map \(I-\mu A\). From Step 2 and Step 4 of Algorithms 3.2 and 3.3, we have
$$\begin{aligned} \beta _n\Vert r_n\Vert ^2=\langle Tw_n-y_n, r_n\rangle ,~\forall n\ge 1 \end{aligned}$$(3.3)
holds for both \(r_n=0\) and \(r_n\ne 0\). Similarly, we have that
$$\begin{aligned} \gamma _n\Vert v_n\Vert ^2=\langle b_n-u_n, v_n\rangle ,~\forall n\ge 1 \end{aligned}$$(3.4)
holds for both \(v_n=0\) and \(v_n\ne 0\).
-
The step sizes \(\{\lambda _n\}\) and \(\{\mu _n\}\) given by (3.1) and (3.2), respectively, are generated at each iteration by some simple computations, which makes Algorithm 3.3 easy to implement since it does not require prior knowledge of the Lipschitz constants \(L_1\) and \(L_2\).
-
Algorithms 3.2 and 3.3 do not require any product space formulation, unlike other algorithms in the literature which require that the problem be transformed into a product space (see [16] and other references therein). This makes our algorithms easier to implement, since they avoid the difficulties that might be caused by the product space formulation.
Remark 3.5
[39] The choice of the step size \(\eta _n\) in Step 3 of Algorithms 3.2 and 3.3 does not require prior knowledge of the operator norm \(\Vert T\Vert .\) Furthermore, the value of \(\eta \) does not influence the algorithms; it was introduced only for the sake of clarity.
Lemma 3.6
[39] The step size \(\eta _n\) given in Step 3 of Algorithms 3.2 and 3.3 is well-defined.
4 Convergence analysis
Lemma 4.1
Let \(\{x_n\}\) be a sequence generated by Algorithm 3.2 under Assumption 3.1. Then, the following inequality holds:
Proof
Let \(p\in \Gamma .\) From the definition of \(y_n\) and the characteristic property of \(P_Q,\) we obtain
Since \(Tp\in {VI}(F,Q)\) and \(y_n\in Q,\) we have
and from the pseudomonotonicity of F we have,
Since \(\lambda >0,\) we obtain
Adding (4.2) and (4.3), we obtain
From (4.4) and the definition of \(r_n\) in Step 2, we obtain
which implies that
From the definition of \(z_n\) in Step 2, we have
Hence, from Lemma 2.1 (1), (3.3), (4.5) and (4.6) we obtain
Also, from Step 3, Lemma 2.1 and (4.7) we obtain
Thus, by the condition on \(\eta _n,\) we obtain that
Following similar argument used in obtaining (4.7), we obtain
From Step 5 we have,
Substituting (4.9) into (4.10), we have
From Step 5, we have \(t_n-w_n=\frac{1}{\sigma }(x_{n+1}-w_n).\) Substituting this into the previous equality, we have
where \(\zeta :=\Big (\frac{1}{\sigma }-1\Big ).\) \(\square \)
Lemma 4.2
Let \(\{x_n\}\) be a sequence generated by Algorithm 3.2 under Assumption 3.1. Then \(\forall \hspace{0.1cm} p\in \Gamma ,\) we have
Proof
From the definition of \(w_n\) and Lemma (2.1) (1), we have
Now, replacing p with \(x_{n+1}\) in (4.13), we obtain
Substituting (4.13) and (4.14) into (4.1) and from the condition on \(\sigma \), we have
Hence,
Applying Lemma 2.1 (1) to (4.15), we obtain
which implies that
Using the fact that \(\{\theta _n\}\) is non-decreasing and \(\{\alpha _n\}\) is non-increasing in (4.16), we obtain (4.12), which is the desired conclusion. \(\square \)
Lemma 4.3
Let \(\{x_n\}\) be a sequence generated by Algorithm 3.2 under Assumption 3.1. Then \(\{x_n\}\) is bounded.
Proof
Let \(p\in \Gamma ,\) then from (4.12) and Lemma 2.1, we have
From this we obtain
Let \(\rho _j:=e^{\sum \limits _{i=1}^{j}\alpha _i}, j\ge 1.\) Since \(e^x\ge x+1\) for all \(x\in {\mathbb {R}},\) we have
Since \(\{\alpha _n\}\subset (0,1]\) is non-increasing, we have
From (4.18) and (4.19), we obtain
Since \(\rho _n\le \rho _{n+1}, \rho _{n+1}=\rho _ne^{\alpha _{n+1}}\) and \(\{\alpha _n\}\subset (0,1]\) is non-increasing, we have
which implies that
Since \(\{\theta _n\}\subset [0,\theta ],\) we have
Since \(\theta \in [0,\frac{1}{3})\) and \(\lim \nolimits _{n\rightarrow \infty }\alpha _n=0,\) it follows that the right-hand side of (4.20) is bounded below by a positive number, i.e., there exists a constant \(\xi >0\) such that \(1-\theta _{n+1}(3+2(e^{\alpha _{n+1}}-1))-\alpha _n \ge \xi ,\) for all \(n\in {\mathbb {N}}\) sufficiently large, say for all \(n\ge n_0.\) Hence, we have
which implies that for all \(n\ge n_0,\)
Dividing the last inequality by \(\rho _{n+1}\) and omitting non-positive terms, we have
where \(t_n:=\sum _{i=1}^{n}\alpha _i.\) Since \(\alpha _k\in (0,1]\) for all \(k\in {\mathbb {N}},\) we observe that \( \alpha _ke^{t_k+1}\le e^2(e^{t_k}-e^{t_k-1}), \) for all \(k\ge 2,\) so that
Using (4.22), the fact that \(\{\theta _n\}\subset [0,\theta ]\subset [0,\frac{1}{3})\) and \(e^{-t_n+1}\le 1,\) we have
Applying (4.23), \(\theta \in [0,1)\) and the convergence of the geometric series, we obtain
Since \(\theta <1,\) it follows that \(\{x_n\}\) is bounded.
\(\square \)
Lemma 4.4
Let \(\{x_n\}\) be a sequence generated by Algorithm 3.2 under Assumption 3.1. Suppose
and
Then \(\{x_n\}\) converges strongly to p.
Proof
By the hypothesis of the lemma we have that
We claim that this implies
and from this it follows that \(\{x_n\}\) converges strongly to p. Suppose, to the contrary, that this limit does not hold. Then there exist an infinite subset \(K\subseteq {\mathbb {N}}\) and a constant \(\beta >0\) such that
Using (4.25) and the fact that \(\theta _n\le \theta <1,\) we have
Thus, we have \(\limsup \limits _{n\in K}\Vert x_n-p\Vert \le 0.\) Since \(\liminf \limits _{n\in K}\Vert x_n-p\Vert \ge 0 \) holds, it follows that \(\lim \nolimits _{n\in K}\Vert x_n-p\Vert =0.\)
Applying (4.26), we obtain
for all \(n\in K\) sufficiently large, which contradicts the assumption that \(\lim \nolimits _{n\rightarrow \infty }\Vert x_{n+1}-x_n\Vert =0.\) Hence, the result follows.
\(\square \)
Lemma 4.5
Let \(\{x_n\}\) be a sequence generated by Algorithm 3.2 under Assumption 3.1 such that \(\lim \nolimits _{n\rightarrow \infty }\Vert x_{n+1}-x_n\Vert =0.\) Suppose there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_n\},\) which converges weakly to a point \(z\in {\mathcal {H}}_1\) and \(\lim \nolimits _{k \rightarrow \infty }\Vert b_{n_k}-w_{n_k}\Vert =0=\lim \limits _{k \rightarrow \infty }\Vert b_{n_k}-t_{n_k}\Vert ,\) then \(z\in \Gamma .\)
Proof
From the definition of \(w_n\) in Step 1 and by the statement of the hypothesis together with the fact that \(\lim \nolimits _{n \rightarrow \infty }\alpha _n=0,\) we obtain
Since the subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) is weakly convergent to a point \(z\in {\mathcal {H}}_1,\) it follows that the subsequence \(\{w_{n_k}\}\) of \(\{w_n\}\) is also weakly convergent to \(z\in {\mathcal {H}}_1.\) Again, since T is a bounded linear operator, we obtain that \(\{Tw_{n_k}\}\) converges weakly to Tz.
Without loss of generality, we may assume that \(z_n\ne Tw_n,\) then \(\eta _n \in \Big (\epsilon , \frac{\Vert z_n-Tw_n\Vert ^2}{\Vert T^{*}(z_n-Tw_n)\Vert ^2}-\epsilon \Big ).\)
Hence, we obtain from (4.8) that
which implies that
From our hypothesis, we have
From (4.8) and (4.29), we have
Since \(\eta _{n_k}>\epsilon >0,\) we obtain
Using the definition of \(r_n\) in Step 2, we observe
Since \(\lambda \in (0, \frac{1}{L_2})\), we have that \(1-\lambda L_2>0\). Hence, from (4.31), (4.6) and (3.3) we obtain
which implies from (4.30) that
Since \(\{Tw_{n_k}\}\) converges weakly to Tz, then it follows from (4.32) that \(\{y_{n_k}\}\) also converges weakly to Tz. Also, since \(\{y_{n_k}\}\subset {\mathcal {Q}},\) we have that \(Tz\in {\mathcal {Q}}\).
By the characteristic property of \(P_{\mathcal {Q}},\) we obtain \(\forall ~ x\in {\mathcal {Q}}\) that
which implies
Hence, applying (4.32) in (4.33), we obtain that
Observe that
Since F is Lipschitz continuous on \({\mathcal {H}}_2\), we obtain from (4.32) that
Hence, from (4.32), (4.34) and (4.35), we obtain that
Next, we show that \(Tz \in \text{ VI }(F,{\mathcal {Q}}).\) Now, we choose a sequence \(\{\delta _k\}\) of positive numbers such that \(\delta _{k+1}\le \delta _k,~~~\forall ~ k\ge 1\) and \(\delta _k \rightarrow 0~~ \text {as}~~~ k \rightarrow \infty .\) From (4.36), we denote by \(N_k\) (for each \(k\ge 1\)), the smallest positive integer such that
Since \(\{\delta _k\}\) is decreasing, we have that \(\{N_k\}\) is increasing. Also, since \(\{y_{N_k}\}\subset {\mathcal {Q}}\) for all \(k\ge 1\), we can suppose \(Fy_{N_k}\ne 0\) (otherwise, \(y_{N_k}\) is a solution). Hence, we can set \(q_{N_k}=\frac{Fy_{N_k}}{\Vert Fy_{N_k}\Vert ^2}\) for each \(k \ge 1\). Then, \(\langle Fy_{N_k}, q_{N_k}\rangle =1\) for each \(k\ge 1.\)
Therefore, from (4.37) we have
which implies from the pseudomonotonicity of F on \({\mathcal {H}}_2\) that
This implies that
Now, if \(FTz=0,\) then \(Tz\in {VI}(F,{\mathcal {Q}}).\) So, we may suppose that \(FTz\ne 0.\) Since \(\{y_{n_k}\}\) converges weakly to Tz, then by Condition (c) we obtain
Since \(\{y_{N_k}\}\subset \{y_{n_k}\},\) we obtain that
Therefore, \(\lim \nolimits _{k \rightarrow \infty } \delta _kq_{N_k}=0.\) Thus, letting \(k\rightarrow \infty \) in (4.39), we have
which implies by Lemma 2.3 that \(Tz\in {VI}(F,{\mathcal {Q}}).\)
Next, we show that \(z\in {VI}(A, {\mathcal {C}})\). Following a similar method of proof to that used in obtaining (4.32), and noting our hypothesis \(\lim \nolimits _{k\rightarrow \infty }\Vert b_{n_k}-t_{n_k}\Vert =0\), we obtain
Following a similar method of proof to that used in obtaining (4.36), we obtain from (4.41), the characteristic property of \(P_{\mathcal {C}}\) and the Lipschitz continuity of A on \({\mathcal {H}}_1\) that
From our hypothesis, (4.41) and the fact that \(\{w_{n_k}\}\) converges weakly to z, we obtain that the subsequences \(\{b_{n_k}\}\) and \(\{u_{n_k}\}\) of \(\{b_n\}\) and \(\{u_n\},\) respectively, converge weakly to z. Also, since \(\{u_{n_k}\}\subset {\mathcal {C}}\), we have that \(z\in {\mathcal {C}}\). Following a similar method of proof to that used in obtaining (4.40), we obtain
which implies by Lemma 2.3 that \(z\in {VI}(A,{\mathcal {C}}).\) Hence, we conclude that \(z\in \Gamma .\) \(\square \)
Lemma 4.6
Let \(\{x_n\}\) be a sequence generated by Algorithm 3.2 under Assumption 3.1. Then, for each \(n\ge 1\)
Proof
Since \(\theta _n\in [0,\frac{1}{3})\) for all \(n\ge 1\) and \(\{\theta _n\}\) is non-decreasing, we have from Lemma 2.1 that
which is the desired conclusion. \(\square \)
We are now in a position to prove the main theorem for Algorithm 3.2.
Theorem 4.7
Let \(\{x_n\}\) be a sequence generated by Algorithm 3.2 under Assumption 3.1. Then, \(\{x_n\}\) converges strongly to \(p\in \Gamma ,\) where \(p=P_\Gamma x_0.\)
Proof
From Lemma 4.6 and (4.12), we obtain
We consider two cases for our proof.
CASE 1: Let \(z\in \Gamma .\) Suppose that for some \(n_0\in {\mathbb {N}}\) large enough, we have \(v_{n+1}\le v_n\) for all \(n\ge n_0.\) Then, by Lemma 4.6, we have \(v_n\ge 0\) for all \(n\ge 1,\) and \(\lim \nolimits _{n\rightarrow \infty }v_n\) exists. Since \(\{x_n\}\) is bounded, there exists a constant \(M>0\) such that \(2|\langle x_n-p,x_n-x_0\rangle |\le M.\) Also, there exist \(N\in {\mathbb {N}}\) and \(\xi _1>0\) such that \(1-3\theta _{n+1}-\alpha _n\ge \xi _1\) for all \(n\ge N.\) Hence, from (4.44) we have that for all \(n\ge N\)
Thus,
From the definition of \(w_n\) in Step 1 and by applying (4.45) together with the fact that \(\lim \nolimits _{n \rightarrow \infty }\alpha _n=0,\) we have
Consequently, we have
From (4.11), we have
where \(M_1:=\sup \nolimits _{n\ge 1}\{\Vert w_n-p\Vert +\Vert x_{n+1}-p\Vert \}.\) Hence
Similarly, we obtain from (4.11) that
Consequently, we have
Using the fact that \(\{x_n\}\) is bounded, \(\{v_n\}\) is convergent and \(\lim \nolimits _{n\rightarrow \infty }\alpha _n=0,\) we obtain from Lemma 4.6 that
which equals \(\lim \nolimits _{n\rightarrow \infty }v_{n+1}.\) Consequently, from Lemma 4.6, we have \(\lambda \ge 0.\) We show that \(\lambda =0\) holds, so that it follows from Lemma 4.4 that the sequence \(\{x_n\}\) converges strongly to the solution p.
Suppose on the contrary \(\lambda >0.\) Since \(\{x_n\}\) is bounded by Lemma 4.3, there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_{n}\}\) which converges weakly to z, such that
By applying (4.48) and (4.49), it follows from Lemma 4.5 that \(z\in \Gamma .\) Since \(p=P_\Gamma x_0,\) we obtain from (4.51)
from which it follows that
From (4.50), we have
and since \(\lambda >0,\) we have
for some sufficiently large \(n_1\in {\mathbb {N}}.\) Observe that
Then, by applying (4.52) we have
Again, using the assumption that \(\lambda >0,\) we have
for some sufficiently large \(n_2\in {\mathbb {N}}\) such that \(n_2\ge n_1.\) From (4.44), we have
Applying Lemma 4.6, it follows from the last inequality that
Since \(\lambda >0,\) this gives the summability of the sequence \(\{\alpha _n\},\) which contradicts \(\sum \nolimits _{n=1}^{\infty }\alpha _n=\infty .\) Therefore, we must have \(\lambda =0,\) and it follows that the sequence \(\{x_n\}\) converges strongly to \(p=P_\Gamma x_0,\) as required.
CASE 2: Suppose that \(\{v_n\}\) is not monotonically decreasing. Let \(\tau :{\mathbb {N}}\rightarrow {\mathbb {N}}\) be defined for all \(n\ge n_0\) for some \(n_0\in {\mathbb {N}}\) large enough by
Observe that \(\tau _{(n)}\) is a non-decreasing sequence such that \( \tau _{(n)}\rightarrow \infty \) as \(n\rightarrow \infty \) and \(v_{\tau _{(n)}}\le v_{\tau _{(n)}+1}\) for all \(n\ge n_0.\) Similar to CASE 1, for some constant \(M>0,\) we obtain from (4.44) that
Consequently, we get
Also, following similar procedure as in CASE 1, we obtain
Observe from (4.44) that for \(j\ge 0,\) we have \(v_{j+1}<v_j\) when \(x_j\notin \Omega :=\{x\in {\mathcal {H}}_1:\langle x-x_0, x-p\rangle \le 0\}.\) Since \(v_{\tau _{(n)}}\le v_{\tau _{(n)}+1},\) we have that \(x_{\tau _{(n)}}\in \Omega \) for all \(n\ge n_0.\) We have from Lemma 4.3 that \(\{x_{\tau _{(n)}}\}\) is bounded, hence there exists a subsequence, again denoted by \(\{x_{\tau _{(n)}}\},\) which converges weakly to some \(z\in {\mathcal {H}}_1.\) Since \(\Omega \) is closed and convex, it is weakly closed, and it follows that \(z\in \Omega .\) By (4.55), it follows from Lemma 4.5 that \(z\in \Gamma .\) Hence, we have \(z\in \Omega \cap \Gamma .\) In view of Lemma 2.5, we know that \(\Omega \cap \Gamma \) contains only p as its element. Consequently, we have \(z=p.\) Moreover, since \(x_{\tau _{(n)}}\in \Omega \) we have
Taking the \(\lim \sup \) of the above inequality, we get
Thus,
We claim that this implies \(\lim \nolimits _{n \rightarrow \infty }v_{\tau _{(n)}+1}=0.\) From the definition of \(v_{\tau _{(n)}+1},\) we have
Using (4.54), (4.56), the boundedness of \(\{\theta _n\}\) and \(\{x_n\}\) and the fact that \(\lim \nolimits _{n\rightarrow \infty }\alpha _n=0,\) we obtain that \(\lim \nolimits _{n\rightarrow \infty }v_{\tau _{(n)}+1}=0.\)
Next, we show that \(\lim \nolimits _{n\rightarrow \infty }v_n=0.\) Observe that for all \(n\ge n_0,\) we have \(v_{n}\le v_{\tau _{(n)}+1}\) if \(n\ne \tau (n),\) since \(v_j>v_{j+1}\) for \(\tau (n)+1\le j\le n-1.\) It follows that for all \(n\ge n_0,\) we have
Hence,
From Lemma 4.6, we have that
Thus,
Using the fact that \(\{x_n\}\) is bounded, \(\lim \nolimits _{n\rightarrow \infty }\alpha _n=0\) and by (4.44), we have
which implies from the definition of \(v_n\) that
Thus, by Lemma 4.4 we obtain that \(\{x_n\}\) converges strongly to \(p=P_\Gamma x_0,\) which completes the proof.
\(\square \)
Remark 4.8
[39]
-
Setting \({\mathcal {H}}_1={\mathcal {H}}_2={\mathcal {H}}\), \(F=0\) and \(T=I_{\mathcal {H}}\) (the identity operator on \({\mathcal {H}}\)) in Theorem 4.7, we obtain, as a corollary, an inertial projection and contraction method requiring only one projection onto the feasible set \({\mathcal {C}}\) per iteration, with a fixed step size, for solving the classical VIP (1.1) when A is pseudomonotone and Lipschitz continuous.
-
The conclusions of Lemma 4.1, Lemma 4.5 and Theorem 4.7 still hold even if \(\mu \in (0, \frac{1}{L_1})\) and \(\lambda \in (0, \frac{1}{L_2})\) in Algorithm 3.2 are replaced with variable step sizes \(\mu _n\) and \(\lambda _n\), respectively such that
$$\begin{aligned} 0<\inf _{n\ge 1}\mu _n\le \sup _{n\ge 1} \mu _n<\frac{1}{L_1} ~~\text{ and }~~ 0<\inf _{n\ge 1}\lambda _n\le \sup _{n\ge 1} \lambda _n <\frac{1}{L_2}. \end{aligned}$$
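To give a concrete picture of the one-projection corollary in Remark 4.8, the following is a minimal numerical sketch (in Python rather than the MATLAB code used in Sect. 5). It assumes the reduction \({\mathcal {H}}_1={\mathcal {H}}_2\), \(F=0\), \(T=I\), and combines the classical projection and contraction step with a fixed inertial factor and a Halpern-type anchor at \(x_0\); the exact update rule of Algorithm 3.2 is not reproduced here, so all names and parameter choices are illustrative.

```python
import numpy as np

def inertial_projection_contraction(A, proj_C, x0, x1, mu, theta=0.2,
                                    gamma=1.8, alpha=lambda n: 1.0 / (n + 1),
                                    max_iter=5000, tol=1e-8):
    """Sketch of an inertial projection and contraction iteration for
    VI(A, C): inertial extrapolation, ONE projection onto C, a
    contraction step along the direction d, and an anchor alpha_n * x0."""
    x_prev, x = x0, x1
    for n in range(1, max_iter + 1):
        w = x + theta * (x - x_prev)            # inertial extrapolation
        y = proj_C(w - mu * A(w))               # the single projection
        d = w - y - mu * (A(w) - A(y))          # contraction direction
        dd = float(np.dot(d, d))
        beta = float(np.dot(w - y, d)) / dd if dd > 0 else 0.0
        b = w - gamma * beta * d                # contraction step
        a_n = alpha(n)
        x_prev, x = x, a_n * x0 + (1.0 - a_n) * b   # anchored update
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x
```

For a strongly monotone operator such as \(A(x)=x-b\) with \({\mathcal {C}}\) the whole space (so the projection is the identity), the iterates approach the unique solution b; the anchor term makes the convergence strong but slow, of the order of \(\alpha _n\).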
For the convergence analysis of Algorithm 3.3, which does not require the Lipschitz constants of the underlying cost operators to be known, we first state the following lemma on the step size rules, which is derived from [35]. Since the proof of the lemma is similar to the method of proof in [35], we omit it here.
Lemma 4.9
Let \(\{\lambda _n\}\) and \(\{\mu _n\}\) be the sequences generated by (3.1) and (3.2), respectively. Then the sequences \(\{\lambda _n\}\) and \(\{\mu _n\}\) are well defined, and \(\lim \nolimits _{n\rightarrow \infty }\lambda _n=\lambda , \lim \limits _{n\rightarrow \infty }\mu _n=\mu ,\) where \(\lambda \in \Big [\min \big \{\frac{a_2}{L_2},\lambda _1\big \}, \lambda _1+\Phi \Big ],\) \(\mu \in \Big [\min \big \{\frac{a_1}{L_1},\mu _1\big \}, \mu _1+\Psi \Big ],\) and \(\Phi =\sum _{n=1}^{\infty }\phi _n,\Psi =\sum _{n=1}^{\infty }\psi _n.\)
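Since the rules (3.1) and (3.2) are stated earlier in the paper and not reproduced in this section, the sketch below shows one update of the standard self-adaptive form that Lemma 4.9 describes: the summable perturbation \(\phi _n\) allows the step size to increase occasionally, while the ratio \(a_2\Vert Tw_n-y_n\Vert /\Vert FTw_n-Fy_n\Vert \) keeps it compatible with the unknown Lipschitz constant. The function and argument names are ours, not the verbatim statement of (3.1).

```python
def next_step_size(lam_n, phi_n, a, diff_points, diff_values):
    """One update of a self-adaptive step size; diff_points and
    diff_values stand for ||u - v|| and ||Fu - Fv||. If F is
    L-Lipschitz, the candidate a * diff_points / diff_values is at
    least a / L, so the limit described in Lemma 4.9 stays positive."""
    if diff_values > 0:
        return min(a * diff_points / diff_values, lam_n + phi_n)
    return lam_n + phi_n
```

With the experimental choices \(a_2=0.4\), \(\lambda _1=0.85\) and \(\phi _n=\frac{1}{(n+1)^2}\) from Sect. 5, the sequence produced by iterating this rule is bounded above by \(\lambda _1+\sum _{n=1}^{\infty }\phi _n\), matching the interval in Lemma 4.9.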
Lemma 4.10
Let \(\{x_n\}\) be a sequence generated by Algorithm 3.3 under Assumption 3.1. Then,
and
Proof
From (3.1), we have that
holds for both \(FTw_n=Fy_n\) and \(FTw_n\ne Fy_n\). Similar to (4.31), we obtain
which implies that
which reduces to (4.57) when simplified further. In a similar manner, we get (4.58). \(\square \)
Remark 4.11
Replacing \(\mu \) and \(\lambda \) with \(\mu _n\) and \(\lambda _n\), respectively in Lemma 4.1, we obtain that \(\{x_n\}\) is bounded.
By Lemma 4.9, it follows that
and
Following a similar procedure to that used in the proof of Theorem 4.7, we obtain the following strong convergence theorem for Algorithm 3.3.
Theorem 4.12
Let \(\{x_n\}\) be a sequence generated by Algorithm 3.3 under Assumption 3.1. Then, \(\{x_n\}\) converges strongly to \(p\in \Gamma ,\) where \(p=P_\Gamma x_0.\)
Remark 4.13
-
The method of proof in this paper is different from the methods of proof used in the literature to obtain strong convergence results for SVIPs.
-
Similar to Remark 4.8, we obtain an inertial projection and contraction method with adaptive step size, requiring only one projection onto the feasible set \({\mathcal {C}}\) per iteration, for solving the classical VIP (1.1) when A is pseudomonotone and Lipschitz continuous as a corollary.
-
When the operators A and F are monotone and Lipschitz continuous, we do not need them to satisfy Condition (c). This is because Condition (c) was only used after (4.33) to get the conclusion of Lemma 4.5. But, from (4.33) and the monotonicity of F, we obtain
$$\begin{aligned} 0&\le \big<FTw_{n_k},x-Tw_{n_k}\big>+ \frac{1}{\lambda } \big<y_{n_k}-Tw_{n_k},x-y_{n_k}\big>+\big<FTw_{n_k},Tw_{n_k}-y_{n_k}\big>\nonumber \\&\le \left( \big<FTw_{n_k}-Fx,x-Tw_{n_k}\big>+\big<Fx,x-Tw_{n_k}\big>\right) + \frac{1}{\lambda }\Vert y_{n_k}-Tw_{n_k}\Vert \Vert x-y_{n_k}\Vert \\&\quad +\Vert FTw_{n_k}\Vert \Vert Tw_{n_k}-y_{n_k}\Vert \nonumber \\&\le \big <Fx,x-Tw_{n_k}\big >+ \frac{1}{\lambda } \Vert y_{n_k}-Tw_{n_k}\Vert \Vert x-y_{n_k}\Vert {+}\Vert FTw_{n_k}\Vert \Vert Tw_{n_k}{-}y_{n_k}\Vert ,~\forall x\in {\mathcal {Q}}, \end{aligned}$$Passing limit as \(k\rightarrow \infty ,\) noting that \(\{Tw_{n_k}\}\) converges weakly to Tz and applying (4.32), it follows from the last inequality that
$$\begin{aligned} \langle Fx,x-Tz\rangle \ge 0, ~~ \forall x\in {\mathcal {Q}}. \end{aligned}$$Consequently, by Lemma 2.3 we have that \(Tz\in {VI}(F,{\mathcal {Q}}).\) Similarly, we obtain that \(z\in {VI}(A,{\mathcal {C}}).\) Hence, we conclude that Lemma 4.5 holds.
-
Theorems 4.7 and 4.12 remain true in finite dimensional spaces when the operators A and F are only required to be pseudomonotone and Lipschitz continuous. This is an improvement over the results in the literature, since no product space formulation is required even under the relaxed pseudomonotonicity assumption.
5 Numerical experiments
In this section, using some test examples, we discuss the numerical behavior of our methods, Algorithm 3.2 (Proposed Alg. (1)) and Algorithm 3.3 (Proposed Alg. (2)), and compare them with Algorithm (1.8) (Tian & Jiang Alg. (1)), Algorithm (1.10) (Tian & Jiang Alg. (2)), Appendix 6.1 (Pham et al. Alg.) and Appendix 6.2 (Ogwo et al. Alg.).
In our computations, we randomly choose \(x_0, x_1\in {\mathcal {H}}_1\), \(\gamma _1 = 1.8,\) \(\gamma _2 = 1.1,\) \(a_1 = 0.6, a_2 = 0.4,\) \(\lambda _1=0.85\) and \(\mu _1=0.9\). We choose \(\alpha _n=\frac{1}{n+1}, \theta _n=0.29,\sigma =0.45, \phi _n=\frac{1}{(n+1)^2}, \psi _n=\frac{1}{(n+2)^2}\) in Algorithms 3.2 and 3.3. Also, we choose \(\delta _n=\frac{1}{n+1}\), \(\theta _n=\frac{1}{2}-\delta _n,\) \(\alpha _n={\bar{\alpha }}_n,\) \(\tau _n=\frac{\delta _n}{n^{0.01}}\) and \(\alpha = 3\) in the method of Ogwo et al. [39, Algorithm 3.3]. Using MATLAB R2021b and the stopping criterion \(\Vert x_{n+1}-x_{n}\Vert < 10^{-2},\) we plot the graphs of errors against the number of iterations in each case. The numerical results are reported in Figs. 1, 2, 3, 4, 5, 6, 7 and 8 and Tables 1 and 2.
Example 5.1
Let \({\mathcal {H}}_1={\mathcal {H}}_2 = L_2([0, 2\pi ])\) be equipped with inner product
and norm
Then we define \(A:L_2([0, 2\pi ])\rightarrow L_2([0, 2\pi ])\) by
From [46], we have that A is pseudomonotone and Lipschitz continuous but not monotone on \(L_2([0,2\pi ]).\)
Let \({\mathcal {C}} = \{x \in L_2([0, 2\pi ]): \langle y, x\rangle \le v\},\) where \(y = t+e^t\) and \(v = 1,\) then \({\mathcal {C}}\) is a nonempty closed and convex subset of \(L_2([0, 2\pi ])\). We define the metric projection \(P_{\mathcal {C}}\) as:
Also, let \({\mathcal {Q}}=\{x\in L_2([0, 2\pi ]): \Vert x-a\Vert \le d\},\) where \( a=t+3\) and \(d=2.\) Then \({\mathcal {Q}}\) is a nonempty closed and convex subset of \(L_2([0, 2\pi ])\). We define \(P_{\mathcal {Q}}\) as:
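The two metric projections used in this example admit well-known closed forms: onto the half-space \({\mathcal {C}}=\{x:\langle y,x\rangle \le v\}\) and onto the ball \({\mathcal {Q}}=\{x:\Vert x-a\Vert \le d\}\). A short numerical sketch, treating discretized functions as vectors so that the inner product is a finite-dimensional surrogate of the \(L_2\) inner product:

```python
import numpy as np

def project_halfspace(x, y, v):
    # P_C for C = {x : <y, x> <= v}: a feasible x is fixed; otherwise
    # x is shifted along y back onto the bounding hyperplane.
    s = float(np.dot(y, x))
    if s <= v:
        return x
    return x - (s - v) / float(np.dot(y, y)) * y

def project_ball(x, a, d):
    # P_Q for Q = {x : ||x - a|| <= d}: radial retraction onto the ball.
    r = np.linalg.norm(x - a)
    if r <= d:
        return x
    return a + (d / r) * (x - a)
```

Both formulas are standard; in the experiments, y, v, a and d are the data specified above.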
We define the operator \(F: {\mathcal {Q}}\rightarrow L_2([0,2\pi ])\) by
where \({\mathcal {G}}:{\mathcal {Q}}\rightarrow {\mathbb {R}}\) is defined by \({\mathcal {G}}(x):=\frac{1}{1+\Vert x\Vert ^2}\) and \({\mathcal {M}}:L_2([0,2\pi ])\rightarrow L_2([0,2\pi ])\) is defined by \({\mathcal {M}}(x)(t):=\int _{0}^{t}x(s)ds\) for all \(x\in L_2([0,2\pi ])\) and \(t\in [0,2\pi ]\). We have that \({\mathcal {G}}\) is \(\frac{16}{25}\)-Lipschitz continuous and \(\frac{1}{5}\le {\mathcal {G}}(x)\le 1\) for all \(x\in {\mathcal {Q}}\) (see [?]). Hence, from [?], we have that F is pseudomonotone and Lipschitz continuous but not monotone, since \({\mathcal {M}}\) is a Volterra integral mapping which is bounded, linear and monotone.
Let \(T: L_2([0, 2\pi ]) \rightarrow L_2([0, 2\pi ])\) be defined by
where \({\mathcal {K}}\) is a continuous real-valued function on \([0, 2\pi ]\times [0,2\pi ]\). Then, T is a bounded linear operator with adjoint
In particular, we define \({\mathcal {K}}(s, t)=e^{-st}\) for all \(s, t \in [0, 2\pi ]\). For Algorithms (1.8) and (1.10), we define the mapping \(S: L_2([0,2\pi ])\rightarrow L_2([0,2\pi ])\) by
Then, S is nonexpansive. For Algorithm (1.10), we define \(h: L_2([0, 2\pi ]) \rightarrow L_2([0, 2\pi ])\) by
Then, h is a contraction mapping.
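To run such experiments, the integral operator T defined by the kernel \({\mathcal {K}}(s,t)=e^{-st}\) can be approximated on a grid. This is a hedged sketch assuming a uniform-grid quadrature; the grid size and quadrature rule are implementation choices, not taken from the paper.

```python
import numpy as np

def make_T_matrix(m=200, a=0.0, b=2 * np.pi):
    """Discretize Tx(s) = int_0^{2*pi} K(s,t) x(t) dt with
    K(s,t) = exp(-s*t) on a uniform grid, so that T acts as an
    m x m matrix on sampled functions."""
    t = np.linspace(a, b, m)
    h = (b - a) / (m - 1)            # uniform grid spacing
    K = np.exp(-np.outer(t, t))     # K(s, t) = e^{-st}, symmetric here
    return K * h
```

Because this particular kernel is symmetric in (s, t), the discretized operator coincides with its transpose, so the adjoint \(T^{*}\) is represented by the same matrix.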
We consider the following cases for the numerical experiments of this example.
Case 1 Take \(x_0(t)= t + 2\) and \(x_1(t)= 0.7e^{-t}\).
Case 2 Take \(x_0(t)= 2t + 1\) and \(x_1(t)= e^{-3t}\).
Case 3 Take \(x_0(t)= 2t + 1\) and \(x_1(t)= e^{-t}\).
Case 4 Take \(x_0(t)= t^2 + 2t + 1\) and \(x_1(t)= e^{-3t}\).
Example 5.2
Let \({\mathcal {H}}_1=\left( l _2({\mathbb {R}}), ~||. ||_{ l _2}\right) ={\mathcal {H}}_2\), where \( l _2({\mathbb {R}}):=\{x=(x_1, x_2, x_3, \dots ),~x_i\in {\mathbb {R}}:\sum \limits _{i=1}^\infty |x_i|^2<\infty \}\) and \(||x||_{ l _2}:= \left( \sum \limits _{i=1}^\infty |x_i|^2 \right) ^{\frac{1}{2}},~~\forall x \in l _2({\mathbb {R}}).\)
Define the operators \(A,F: l _2({\mathbb {R}})\rightarrow l _2({\mathbb {R}})\) by \(A(x_1, x_2, x_3, \dots )=(3x_1e^{-x_1^2}, 0, 0, \dots )\) and \(F(x_1, x_2, x_3, \dots )=(7x_1e^{-x_1^2}, 0, 0, \dots ),\) respectively. Then, A and F are pseudomonotone, Lipschitz continuous and sequentially weakly continuous but not monotone. Let \(T: l _2({\mathbb {R}})\rightarrow l _2({\mathbb {R}})\) be defined by \(Tx=\left( 0, x_1, \frac{x_2}{2}, \frac{x_3}{3},\dots \right) \) for all \(x\in l _2({\mathbb {R}}).\) Then, T is a bounded linear operator on \( l _2({\mathbb {R}})\) with adjoint \(T^*y=\left( y_2, \frac{y_3}{2}, \frac{y_4}{3},\dots \right) \) for all \(y\in l _2({\mathbb {R}}).\)
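The adjoint identity \(\langle Tx, y\rangle =\langle x, T^*y\rangle \) for this pair can be sanity-checked on truncated sequences; the sketch below uses plain NumPy vectors as finite truncations of \( l _2\) elements (function names are ours).

```python
import numpy as np

def T_op(x):
    # Tx = (0, x_1, x_2/2, x_3/3, ...), truncated to len(x) entries
    out = np.zeros_like(x)
    out[1:] = x[:-1] / np.arange(1, len(x))
    return out

def T_adj(y):
    # T*y = (y_2, y_3/2, y_4/3, ...), truncated to len(y) entries
    out = np.zeros_like(y)
    out[:-1] = y[1:] / np.arange(1, len(y))
    return out
```

On consistent truncations the two inner products agree exactly (up to floating-point roundoff), because the shifted-and-scaled sums coincide term by term.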
Now, define \({\mathcal {C}}={\mathcal {Q}}=\{x\in l _2({\mathbb {R}}):||x-a||_{ l _2}\le b\},\) where \(a=(1, \frac{1}{2}, \frac{1}{3}, \cdots )\) and \(b=3\) for \({\mathcal {C}}\) and \(a=(\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \cdots )\), \(b=1\) for \({\mathcal {Q}}\). Then \(\mathcal {C,Q}\) are nonempty closed and convex subsets of \({ l _2}({\mathbb {R}})\). Thus,
Furthermore, we define the mappings \(S, h: l _2({\mathbb {R}})\rightarrow l _2({\mathbb {R}})\) by \(Sx=-4(x_1, x_2, x_3, \dots )\) and \(hx=\left( \frac{x_1}{2},~\frac{x_2}{2}, \frac{x_3}{2}, \cdots \right) \) for all \(x\in l _2({\mathbb {R}})\), and consider the following cases for the starting point:
Case 1 Take \(x_0= \left( \frac{1}{5}, \frac{1}{15}, \frac{1}{45}, \cdots \right) \) and \(x_1= \left( 1, \frac{1}{2}, \frac{1}{4}, \cdots \right) \).
Case 2 Take \(x_0= \left( \frac{2}{5}, \frac{2}{15}, \frac{2}{45}, \cdots \right) \) and \(x_1=\left( 2, 1, \frac{1}{2}, \cdots \right) \).
Case 3 Take \(x_0= \left( \frac{1}{5}, \frac{1}{15}, \frac{1}{45}, \cdots \right) \) and \(x_1= \left( \frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \cdots \right) .\)
Case 4 Take \(x_0= \left( -3, \frac{3}{2}, -\frac{3}{4}, \cdots \right) \) and \(x_1= \left( \frac{1}{3}, \frac{2}{9}, \frac{4}{27}, \cdots \right) .\)
6 Conclusion
We proposed and studied two new modified inertial projection and contraction methods for solving the SVIP (1.1)–(1.2) in infinite dimensional real Hilbert spaces, where the underlying cost operators are pseudomonotone and Lipschitz continuous, without the sequential weak continuity condition. The first method is proposed for the case when the Lipschitz constants of the underlying cost operators are known, while the second method, which involves adaptive step size strategies, is proposed for the case when these Lipschitz constants are unknown. The two modified inertial projection and contraction methods proposed in our work require only one projection onto the feasible set per iteration, and they do not require the SVIP (1.1)–(1.2) to be transformed into a product space. As far as we know, our choice of the inertial factor \(\theta _n\in [0,\frac{1}{3})\) has never been used in obtaining strong convergence results for SVIPs. Thus, our methods seem potentially more applicable than most of the existing methods for solving SVIPs. A direct consequence of our results is that our methods reduce to modified inertial projection and contraction methods, requiring only one projection onto a feasible set per iteration, for solving the classical VIP (1.1) when the underlying operator is pseudomonotone and Lipschitz continuous, without the sequential weak continuity condition often imposed in the literature. Finally, we performed several numerical experiments for our proposed methods, and the results show that the inertial technique employed plays a significant role in speeding up the rate of convergence, which makes our methods outperform the existing methods they were compared with for solving SVIPs.
References
Alakoya, T.O., Uzor, V.A., Mewomo, O.T., Yao, J.-C.: On a system of monotone variational inclusion problems with fixed-point constraint. J. Inequal. Appl. 2022 (2022), Art. No. 47, 30 pp
Alakoya, T.O., Uzor, V.A., Mewomo, O.T.: A new projection and contraction method for solving split monotone variational inclusion, pseudomonotone variational inequality, and common fixed point problems. Comput. Appl. Math. 42(1) (2023) Paper No. 3, 33 pp
Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773–782 (2004)
Attouch, H., Peypouquet, J., Redont, P.: A dynamical approach to an inertial forward–backward algorithm for convex minimization. SIAM J. Optim. 24(1), 232–256 (2014)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Brooke, M., Censor, Y., Gibali, A.: Dynamic string-averaging CQ-methods for the split feasibility problem with percentage violation constraints arising in radiation therapy treatment planning. Int. Trans. Oper. Res. (2020). https://doi.org/10.1111/itor.12929
Byrne, C.: A unified treatment for some iterative algorithms in signal processing and image reconstruction. Inverse Prob. 20, 103–120 (2004)
Byrne, C., Censor, Y., Gibali, A., Reich, S.: The split common null point problem. J. Nonlinear Convex Anal. 13, 759–775 (2012)
Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems. Fixed Point Theory 21(1), 93–108 (2020)
Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Optimization 70, 1337–1358 (2021)
Ceng, L.C., Petrusel, A., Qin, X., Yao, J.C.: Pseudomonotone variational inequalities and fixed points. Fixed Point Theory 22, 543–558 (2021)
Ceng, L.C., Shang, M.: Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 70, 715–740 (2021)
Ceng, L. C., Yao, J.C., Shehu, Y.: On Mann implicit composite subgradient extragradient methods for general systems of variational inequalities with hierarchical variational inequality constraints. J. Inequal. Appl., 2022 (2022), Paper No. 78, 28 pp
Ceng, L.C., Yuan, Q.: Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems. J. Inequal. Appl., 2019 (2019), Paper No. 274, 20 pp
Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in product space. Numer. Algorithms 8, 221–239 (1994)
Censor, Y., Gibali, A., Reich, S.: The split variational inequality problem. Technical Report, The Technion-Israel Institute of Technology, Haifa (2010)
Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
Cottle, R.W., Yao, J.C.: Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 75, 281–295 (1992)
Gibali, A., Jolaoso, L.O., Mewomo, A., Taiwo, A.: Fast and simple Bregman projection methods for solving variational inequalities and related problems in Banach spaces. Results Math. 75 (4) (2020), Paper No. 179, 36 pp
Gibali, A., Sabach, S., Voldman, S.: Non-convex split feasibility problems: models, algorithms and theory. Open J. Math. Optim., 1 (2020) 15 p
Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 26, 827–845 (2011)
Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 61, 1119–1132 (2012)
Godwin, E.C., Alakoya, T.O., Mewomo, O.T., Yao, J.-C.: Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems. Appl. Anal. (2022). https://doi.org/10.1080/00036811.2022.2107913
Godwin, E.C., Izuchukwu, C., Mewomo, O.T.: Image restoration using a modified relaxed inertial method for generalized split feasibility problems. Math. Methods Appl. Sci. 46(5), 5521–5544 (2023)
Godwin, E.C., Mewomo, O.T., Alakoya, T.O.: A strongly convergent algorithm for solving multiple set split equality equilibrium and fixed point problems in Banach spaces. Proc. Edinb. Math. Soc. (2) 66, 475–515 (2023)
Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel Dekker, New York (1984)
He, L., Cui, Y.L., Ceng, L.C., et al.: Strong convergence for monotone bilevel equilibria with constraints of variational inequalities and fixed points using subgradient extragradient implicit rule. J. Inequal. Appl. 2021 (2021), Paper No. 146, 37 pp
Hendrickx, J.M., Olshevsky, A.: Matrix \(P\)-norms are NP-hard to approximate if \(P\ne 1, 2,\infty \). SIAM J. Matrix Anal. Appl. 31, 2802–2812 (2010)
Hieu, V.D., Quy, P., et al.: Strong convergence of multi-parameter projection methods for variational inequality problems. Math. Model. Anal. 27, 242–262 (2022)
Izuchukwu, C., Ogwo, G.N., Mewomo, O.T.: An inertial method for solving generalized split feasibility problems over the solution set of monotone variational inclusions. Optimization 71(3), 583–611 (2022)
Izuchukwu, C., Shehu, Y., Reich, S.: An inertial-type method for solving image restoration problems. Soft. Comput. (2023). https://doi.org/10.1007/s00500-023-08921-3
Konnov, I.: Equilibrium Models and Variational Inequalities. Elsevier, Amsterdam (2007)
Kopecká, E., Reich, S.: A note on alternating projections in Hilbert space. J. Fixed Point Theory Appl. 12, 41–47 (2012)
Liu, H., Yang, J.: Weak convergence of iterative methods for solving quasimonotone variational inequalities. Comput. Optim. Appl. 77, 491–508 (2020)
Maingé, P.E.: A viscosity method with no spectral radius requirements for the split common fixed point problem. Eur. J. Oper. Res. 235, 17–27 (2014)
Moudafi, A.: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275–283 (2011)
Moudafi, A., Thakur, B.S.: Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 8(7), 2099–2110 (2014)
Ogwo, G.N., Izuchukwu, C., Mewomo, O.T.: Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algorithms 88(3), 1419–1456 (2021)
Ogwo, G. N., Izuchukwu, C., Shehu, Y., Mewomo, O. T.: Convergence of relaxed inertial subgradient extragradient methods for quasimonotone variational inequality problems. J. Sci. Comput. 90 (1) (2021) Paper No. 10, 1–35
Okeke, C.C., Mewomo, O.T.: On split equilibrium problem, variational inequality problem and fixed point problem for multi-valued mappings. Ann. Acad. Rom. Sci. Ser. Math. Appl. 9(2), 223–248 (2017)
Pham, V.H., Nguyen, D.H., Anh, T.V.: A strongly convergent modified Halpern subgradient extragradient method for solving the split variational inequality problem. Vietnam J. Math. 48, 187–204 (2020)
Shehu, Y., Li, X.H., Dong, Q.L.: An efficient projection-type method for monotone variational inequalities in Hilbert spaces. Numer. Algorithms 84, 365–388 (2020)
Singh, S., Gibali, A., Qin, X.: Cooperation in traffic network problems via evolutionary split variational inequalities. J. Ind. Manag. Optim. 18, 593–611 (2022)
Taiwo, A., Reich, S., Izuchukwu, C.: Strong convergence of two regularized relaxed extragradient schemes for solving the split feasibility and fixed point problem with multiple output sets. Appl. Anal. (2023). https://doi.org/10.1080/00036811.2023.2166495
Thong, D.V., Shehu, Y., Iyiola, O.S.: Weak and strong convergence theorems for solving pseudo-monotone variational inequalities with non-Lipschitz mappings. Numer. Algorithms 84, 795–823 (2019)
Tian, M., Jiang, B.-N.: Viscosity approximation methods for a class of generalized split feasibility problems with variational inequalities in Hilbert space. Numer. Funct. Anal. Optim. 40, 902–923 (2019)
Tian, M., Jiang, B.-N.: Weak convergence theorem for a class of split variational inequality problems and applications in Hilbert space. J. Inequal. Appl. (2017). https://doi.org/10.1186/s13660-017-1397-9
Uzor, V.A., Alakoya, T.O., Mewomo, O.T.: Strong convergence of a self-adaptive inertial Tseng’s extragradient method for pseudomonotone variational inequalities and fixed point problems. Open Math. 20, 234–257 (2022)
Uzor, V.A., Alakoya, T.O., Mewomo, O.T.: On split monotone variational inclusion problem with multiple output sets with fixed point constraints. Comput. Methods Appl. Math. 23(3), 729–949 (2023)
Uzor, V.A., Alakoya, T.O., Mewomo, O.T., Gibali, A.: Outer approximation projection and contraction method for solving variational inequalities. J. Inequal. Appl. 2023(1) (2023), Art. 141, 28 pp
Wickramasinghe, M.U., Mewomo, O.T., Alakoya, T.O., Iyiola, O.S.: Mann-type approximation scheme for solving a new class of split inverse problems in Hilbert spaces. Appl. Anal. (2023). https://doi.org/10.1080/00036811.2023.2233977
Acknowledgements
The authors sincerely thank the anonymous referees for their careful reading, constructive comments and useful suggestions that improved the manuscript. The first author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903). The second author acknowledges with thanks the scholarship and financial support from the University of KwaZulu-Natal (UKZN) Doctoral Scholarship. The research of the third author is wholly supported by the University of KwaZulu-Natal, Durban, South Africa Postdoctoral Fellowship. He is grateful for the funding and financial support. The research of the fourth author is wholly supported by the National Research Foundation (NRF) South Africa (S &F-DSI/NRF Free Standing Postdoctoral Fellowship; Grant Number: 120784). The third author also acknowledges the financial support from DSI/NRF, South Africa Center of Excellence in Mathematical and Statistical Sciences (CoE-MaSS) Postdoctoral Fellowship. Opinions expressed and conclusions arrived are those of the authors and are not necessarily to be attributed to the CoE-MaSS and NRF.
Funding
Open access funding provided by University of KwaZulu-Natal.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
Appendix
Appendix 6.1 Algorithm 1 of Pham et al. [42].
Step 0. Choose \(\mu _0, \lambda _0>0,\) \(\mu , \lambda \in (0, 1),\) \(\{\tau _n\}\subset [{\underline{\tau }},~ {\bar{\tau }}]\subset \left( 0, \frac{1}{||T||^2+1}\right),\) \(\{\alpha _n\} \subset (0, 1)\) such that \(\lim \nolimits _{n\rightarrow \infty }\alpha _n=0\) and \(\sum _{n=1}^\infty \alpha _n=\infty .\)
Step 1. Let \(x_1\in {\mathcal {H}}_1.\) Set \(n=1\).
Step 2. Compute
\( u_n=Tx_n,\)
\( v_n=P_{\mathcal {Q}}(u_n-\mu _n fu_n),\)
\(w_n=P_{{\mathcal {Q}}_n}(u_n-\mu _n fv_n),\)
where
\({\mathcal {Q}}_n=\{w_2\in {\mathcal {H}}_2 : \langle u_n-\mu _n fu_n-v_n, w_2-v_n\rangle \le 0\}\)
and
\(\mu _{n+1}=\left\{ \begin{array}{ll} \min \left\{ \frac{\mu ||u_n-v_n||}{||fu_n-fv_n||},~\mu _n\right\} ,&{} \text{ if }~fu_n\ne fv_n,\\ \mu _n,&{} \text{ otherwise }. \end{array}\right. \)
Step 3. Compute
\(y_n=x_n+\tau _n T^*(w_n-u_n),\)
\(z_n=P_{\mathcal {C}}(y_n-\lambda _n Ay_n),\)
\(t_n=P_{{\mathcal {C}}_n}(y_n-\lambda _n Az_n),\)
where
\({\mathcal {C}}_n=\{w_1\in {\mathcal {H}}_1 : \langle y_n-\lambda _n Ay_n-z_n, w_1-z_n\rangle \le 0\}\)
and
\( \lambda _{n+1}=\left\{ \begin{array}{ll} \min \left\{ \frac{\lambda ||y_n-z_n||}{||Ay_n-Az_n||},~\lambda _n\right\} ,&{} \text{ if }~Ay_n\ne Az_n,\\ \lambda _n,&{} \text{ otherwise }. \end{array} \right. \)
Step 4. Compute
\( x_{n+1}=\alpha _n x_1+(1-\alpha _n) t_n. \)
Set \(n:= n+1\) and go back to Step 2.
Appendix 6.2 Algorithm 3.3 of Ogwo et al. [39]
Step 0: Choose sequences \(\{\delta _n\}^{\infty }_{n=1}, \{\theta _n\}^{\infty }_{n=1}\) and \(\{\tau _n\}^{\infty }_{n=1}\) such that conditions (d)–(e) of Assumption 3.1 hold, and let \(\eta \ge 0, \gamma _i\in (0,2), a_i \in (0,1), i=1,2, \lambda _1>0, \mu _1>0, \alpha \ge 3\) and \(x_0,x_1 \in {\mathcal {H}}_1\) be given arbitrarily. Set \(n:=1\).
Step 1: Given the iterates \(x_{n-1}\) and \(x_n\) for each \(n \ge 1,\) choose \(\alpha _n\) such that \(0\le \alpha _n \le {\bar{\alpha }}_n,\) where
\({\bar{\alpha }}_n:= \left\{ \begin{array}{ll} \min \left\{ \frac{n-1}{n+\alpha -1}, \frac{\tau _n}{\Vert x_n-x_{n-1}\Vert }\right\} , &{} \text{ if }~x_n \ne x_{n-1},\\ \frac{n-1}{n+\alpha -1}, &{} \text{ otherwise}. \end{array}\right. \quad \quad (6.1)\)
Step 2: Compute
\(w_n=x_n+\alpha _n(x_n-x_{n-1}),\)
\(y_n=P_{\mathcal {Q}}(Tw_n-\lambda _n FTw_n),\)
\(z_n=Tw_n-\gamma _2\beta _n r_n,\)
where \(r_n:=Tw_n-y_n-\lambda _n(FTw_n-Fy_n)\), \(\beta _n:= \frac{\langle Tw_n-y_n,r_n \rangle }{\Vert r_n\Vert ^2}\) if \(r_n\ne 0\) (otherwise \(\beta _n=0\)), and
\(\lambda _{n+1}=\left\{ \begin{array}{ll} \min \left\{ \frac{a_2\Vert Tw_n-y_n\Vert }{\Vert FTw_n-Fy_n\Vert },~\lambda _n\right\} , &{} \text{ if }~FTw_n\ne Fy_n,\\ \lambda _n, &{} \text{ otherwise}. \end{array}\right. \quad \quad (6.2)\)
Step 3: Compute
\(b_n=w_n+\eta _n T^{*}(z_n-Tw_n),\)
where the stepsize \(\eta _n\) is chosen such that, for small enough \(\epsilon >0,\) \(\eta _n\in \left[ \epsilon , \frac{\Vert Tw_n-z_n\Vert ^2}{\Vert T^{*}(Tw_n-z_n)\Vert ^2}-\epsilon \right]\) if \(z_n\ne Tw_n\); otherwise, \(\eta _n=\eta .\)
Step 4: Compute
\(u_n= P_{\mathcal {C}} (b_n-\mu _n Ab_n),\)
\(t_n=b_n-\gamma _1\gamma _n v_n,\)
where \(v_n:=b_n-u_n-\mu _n(Ab_n- Au_n)\), \(\gamma _n=\frac{\langle b_n-u_n,v_n \rangle }{\Vert v_n\Vert ^2}\) if \(v_n \ne 0\) (otherwise \(\gamma _n=0\)), and
\(\mu _{n+1}=\left\{ \begin{array}{ll} \min \left\{ \frac{a_1\Vert b_n-u_n\Vert }{\Vert Ab_n-Au_n\Vert },~\mu _n\right\} , &{} \text{ if }~Ab_n\ne Au_n,\\ \mu _n, &{} \text{ otherwise}. \end{array}\right. \quad \quad (6.3)\)
Step 5: Compute
\(x_{n+1}=(1-\theta _n-\delta _n)b_n+\theta _nt_n.\)
Set \(n:=n+1\) and go back to Step 1.
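A finite-dimensional sketch of this inertial projection-and-contraction scheme follows. All problem data and parameter sequences are illustrative assumptions, not the paper's: \(T=F=A=\) identity on \({\mathbb {R}}^2\), \({\mathcal {C}}={\mathcal {Q}}=[-1,1]^2\), and \(\theta _n=0.5\), \(\delta _n=\tfrac{1}{n+1}\), \(\tau _n=\tfrac{1}{n^2}\) as stand-ins for Assumption 3.1, which is not restated in this appendix. The name `ogwo_algorithm33` is ad hoc.

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    # Projection onto the box C = Q = [lo, hi]^n (illustrative feasible set).
    return np.clip(x, lo, hi)

def ogwo_algorithm33(x1, F, A, T, Tt, n_iters=300, alpha=3.0,
                     lam1=0.5, mu1=0.5, a1=0.5, a2=0.5,
                     g1=1.5, g2=1.5, eps=0.01, eta_default=0.5):
    # Sketch of Appendix 6.2: inertial extrapolation + two
    # projection-and-contraction corrections with self-adaptive stepsizes.
    x_prev, x = x1.copy(), x1.copy()
    lam_n, mu_n = lam1, mu1
    for n in range(1, n_iters + 1):
        theta_n, delta_n, tau_n = 0.5, 1.0 / (n + 1), 1.0 / n**2
        # Step 1: inertial parameter, alpha_n <= alpha_bar_n of (6.1).
        diff = np.linalg.norm(x - x_prev)
        a_bar = (n - 1) / (n + alpha - 1)
        alpha_n = min(a_bar, tau_n / diff) if diff > 0 else a_bar
        # Step 2: inertial extrapolation, then a contraction step in H2.
        w = x + alpha_n * (x - x_prev)
        Tw = T(w)
        y = proj_box(Tw - lam_n * F(Tw))
        r = Tw - y - lam_n * (F(Tw) - F(y))
        rr = r @ r
        beta = ((Tw - y) @ r) / rr if rr > 0 else 0.0
        z = Tw - g2 * beta * r
        dF = np.linalg.norm(F(Tw) - F(y))
        if dF > 0:
            lam_n = min(a2 * np.linalg.norm(Tw - y) / dF, lam_n)  # (6.2)
        # Step 3: pull back to H1 with the adaptive stepsize eta_n.
        d = z - Tw
        dd = d @ d
        eta_n = dd / (Tt(d) @ Tt(d)) - eps if dd > 0 else eta_default
        b = w + eta_n * Tt(d)
        # Step 4: projection-and-contraction step in H1.
        u = proj_box(b - mu_n * A(b))
        v = b - u - mu_n * (A(b) - A(u))
        vv = v @ v
        gam = ((b - u) @ v) / vv if vv > 0 else 0.0
        t = b - g1 * gam * v
        dA = np.linalg.norm(A(b) - A(u))
        if dA > 0:
            mu_n = min(a1 * np.linalg.norm(b - u) / dA, mu_n)     # (6.3)
        # Step 5: the "missing" delta_n mass anchors the iterate at 0,
        # which is what steers the scheme toward the minimum-norm solution.
        x_prev, x = x, (1 - theta_n - delta_n) * b + theta_n * t
    return x

# Toy instance: T = F = A = identity, so the minimum-norm solution is 0.
ident = lambda v: v
x_star = ogwo_algorithm33(np.array([1.0, -0.5]), ident, ident, ident, ident)
print(np.linalg.norm(x_star))
```

The `if rr > 0` / `if vv > 0` guards implement the stated conventions \(\beta _n=0\) when \(r_n=0\) and \(\gamma _n=0\) when \(v_n=0\), and the `dd > 0` branch mirrors the two cases in the choice of \(\eta _n\).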
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Mewomo, O.T., Ogwo, G.N., Alakoya, T.O. et al. Strongly convergent inertial projection and contraction methods for split variational inequality problem. Rend. Circ. Mat. Palermo, II. Ser 73, 2069–2106 (2024). https://doi.org/10.1007/s12215-024-01022-6
DOI: https://doi.org/10.1007/s12215-024-01022-6
Keywords
- Split variational inequality problems
- Pseudomonotone operators
- Lipschitz continuous
- Projection and contraction methods
- Inertial extrapolation
- Minimum-norm solutions
- Product space formulation