Mann-Type Inertial Projection and Contraction Method for Solving Split Pseudomonotone Variational Inequality Problem with Multiple Output Sets

In this paper, we study the concept of split variational inequality problem with multiple output sets when the cost operators are pseudomonotone and non-Lipschitz. We introduce a new Mann-type inertial projection and contraction method with self-adaptive step sizes for approximating the solution of the problem in the framework of Hilbert spaces. Under some mild conditions on the control parameters and without prior knowledge of the operator norms, we prove a strong convergence theorem for the proposed algorithm. We point out that while the cost operators are non-Lipschitz, our proposed method does not require any linesearch method but uses a more efficient self-adaptive step size technique that generates a non-monotonic sequence of step sizes. Finally, we apply our result to study certain classes of optimization problems and we present several numerical experiments to illustrate the applicability of the proposed method. Several of the existing results in the literature could be viewed as special cases of our result in this study.


Introduction
Let H be a real Hilbert space with inner product ⟨·, ·⟩ and induced norm ||·||. Let C be a nonempty, closed and convex subset of H, and let A : H → H be a mapping. The variational inequality problem (VIP) is formulated as finding a point p ∈ C such that ⟨Ap, x − p⟩ ≥ 0, ∀ x ∈ C. (1.1) We denote the solution set of the VIP (1.1) by VI(C, A). Variational inequality theory was first introduced independently by Fichera [13] and Stampacchia [34]. The VIP is a fundamental problem in optimization theory, which unifies several important concepts in applied mathematics, such as network equilibrium problems, necessary optimality conditions, systems of nonlinear equations and complementarity problems (see, e.g., [4,5,20]). In recent years, the VIP has attracted the attention of researchers due to its numerous applications in diverse fields, such as optimization theory, economics, structural analysis, operations research, and science and engineering (see [10,17,36] and the references therein). Several authors have proposed and studied different iterative methods for approximating the solution of the VIP (see [2,7,16,25,26] and the references therein).
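To make the projection and contraction idea behind the VIP (1.1) concrete, the following minimal sketch (not the paper's Algorithm 3.1) applies the classical projection-and-contraction step to a VIP over a box. The operator A(x) = x + q, the set C = [−1, 1]^5 and all parameter values are illustrative assumptions; for this strongly monotone A the unique solution is P_C(−q).

```python
import numpy as np

# Hypothetical illustration of one projection-and-contraction step for the VIP (1.1):
# C = [-1, 1]^5, so P_C is a coordinatewise clip, and A(x) = x + q is strongly monotone.

def proj_C(x):
    return np.clip(x, -1.0, 1.0)

def pc_step(x, A, lam=0.5):
    y = proj_C(x - lam * A(x))           # projection step: y = P_C(x - lam * A(x))
    d = (x - y) - lam * (A(x) - A(y))    # contraction direction
    dd = d @ d
    if dd == 0.0:
        return y                         # x already solves the VIP
    beta = ((x - y) @ d) / dd            # adaptive relaxation parameter
    return x - beta * d

q = np.array([2.0, -2.0, 0.5, 0.0, -0.3])
A = lambda x: x + q

x = np.zeros(5)
for _ in range(100):
    x = pc_step(x, A)
# The unique solution of this VIP is P_C(-q) = [-1, 1, -0.5, 0, 0.3].
```

The residual x − P_C(x − λAx) vanishes exactly at a solution, which is the usual stopping criterion for this class of methods.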
The split inverse problem (SIP) is another area of research which has recently received great attention (see [42] and the references therein) due to its several applications in different fields, for instance, in signal processing, phase retrieval, medical image reconstruction, data compression and intensity-modulated radiation therapy (see, e.g., [8,9,18,22,29]). The SIP model is formulated as follows: find a point x* ∈ H1 that solves IP1 such that the point y* = T x* ∈ H2 solves IP2, where H1 and H2 are real Hilbert spaces, IP1 denotes an inverse problem formulated in H1, IP2 denotes an inverse problem formulated in H2, and T : H1 → H2 is a bounded linear operator.
In 1994, Censor and Elfving [9] introduced the first instance of the SIP, called the split feasibility problem (SFP), for modelling inverse problems that arise from medical image reconstruction. The SFP finds application in control theory, approximation theory, signal processing, geophysics, communications, biomedical engineering, etc. [8,23,31,32]. Let C and Q be nonempty, closed and convex subsets of Hilbert spaces H1 and H2, respectively, and let T : H1 → H2 be a bounded linear operator. The SFP is formulated as follows: find a point x* ∈ C such that T x* ∈ Q. (1.4) Several iterative algorithms for solving the SFP (1.4) have been constructed and investigated by researchers (see, e.g., [8,23,24] and the references therein). An important generalization of the SFP is the split variational inequality problem (SVIP) introduced by Censor et al. [10]. The SVIP is formulated as follows: Find x̄ ∈ C that solves ⟨A1 x̄, x − x̄⟩ ≥ 0, ∀x ∈ C (1.5) such that ŷ = T x̄ ∈ H2 solves ⟨A2 ŷ, y − ŷ⟩ ≥ 0, ∀y ∈ Q, (1.6) where A1 : H1 → H1 and A2 : H2 → H2 are single-valued operators. Several authors have studied and proposed different iterative methods for approximating the solution of the SVIP (see [19,21,37] and the references therein).
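The SFP is often solved by the classical CQ iteration x_{n+1} = P_C(x_n − γ T*(Tx_n − P_Q(Tx_n))) with γ ∈ (0, 2/||T||²). The sketch below runs this iteration on assumed toy data (C and Q are boxes, T is a small illustrative matrix); it is a hedged illustration, not the paper's method.

```python
import numpy as np

# Sketch of the CQ iteration for the SFP: find x in C with Tx in Q.
# C = [0, 1]^3 and Q = [0, 1]^2, so both projections are coordinatewise clips.

P_C = lambda x: np.clip(x, 0.0, 1.0)
P_Q = lambda y: np.clip(y, 0.0, 1.0)

T = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])
gamma = 1.0 / np.linalg.norm(T, 2) ** 2   # step size in (0, 2/||T||^2)

x = np.array([5.0, -3.0, 2.0])            # arbitrary starting point
for _ in range(100):
    y = T @ x
    x = P_C(x - gamma * T.T @ (y - P_Q(y)))

# When the SFP is consistent, the residual ||Tx - P_Q(Tx)|| vanishes at a solution.
residual = np.linalg.norm(T @ x - P_Q(T @ x))
```

The iteration is projected gradient descent on f(x) = (1/2)||Tx − P_Q(Tx)||², which is why the step-size bound involves ||T||².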
In 2020, Reich and Tuyen [28] introduced and studied the split feasibility problem with multiple output sets in Hilbert spaces (SFPMOS), which is formulated as follows: find a point u† ∈ C such that T_i u† ∈ Q_i for each i = 1, 2, ..., N, (1.7) where T_i : H → H_i, i = 1, 2, ..., N, are bounded linear operators, and C and Q_i are nonempty, closed and convex subsets of the Hilbert spaces H and H_i, i = 1, 2, ..., N, respectively. Moreover, Reich and Tuyen [30] proposed two iterative algorithms for approximating the solution of the SFPMOS (1.7) in Hilbert spaces. The problem studied in this paper is the corresponding split variational inequality problem with multiple output sets (SVIPMOS): find a point x* ∈ VI(C, A) such that T_i x* ∈ VI(C_i, A_i) for each i = 1, 2, ..., N, (1.10) where A : H → H and A_i : H_i → H_i are single-valued operators and C_i is a nonempty, closed and convex subset of H_i. It is clear that the SVIPMOS (1.10) generalizes the SFPMOS (1.7).
In the last couple of years, developing iterative methods with a high rate of convergence for solving optimization problems has become of great interest to researchers. One of the approaches employed to achieve this objective is the inertial technique. This technique originates from an implicit time discretization method (the heavy ball method) of second-order dynamical systems. In recent years, several authors have constructed highly efficient iterative methods by employing the inertial technique; see, e.g., [1,3,11,14,38,40].
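The inertial step adds the momentum term w_n = x_n + θ_n(x_n − x_{n−1}) before the main update. The sketch below grafts this extrapolation onto plain gradient descent for a hypothetical ill-conditioned quadratic; the matrix Q, step size and inertial parameter θ are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative inertial (heavy-ball-type) iteration for f(x) = 0.5 * x^T Q x,
# whose unique minimizer is the origin. theta is the inertial parameter.

Q = np.diag([1.0, 10.0])        # hypothetical ill-conditioned quadratic
grad = lambda x: Q @ x
step = 0.05
theta = 0.5                     # inertial parameter in [0, 1)

x_prev = x = np.array([3.0, -4.0])
for _ in range(300):
    w = x + theta * (x - x_prev)        # inertial extrapolation step
    x_prev, x = x, w - step * grad(w)   # gradient step at the extrapolated point
```

For a fixed step size, the extrapolation typically damps the zig-zagging along the ill-conditioned direction, which is the acceleration effect the inertial literature exploits.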
In this paper, we propose and analyze a new Mann-type inertial projection and contraction algorithm with self-adaptive step sizes for approximating the solution of the SVIPMOS (1.10) when the cost operators are pseudomonotone and non-Lipschitz. While the cost operators are non-Lipschitz, our proposed method does not involve any linesearch method but uses a more efficient self-adaptive step size technique which generates a non-monotonic sequence of step sizes. Furthermore, we prove that the sequence generated by our proposed method converges strongly to the minimum-norm solution of the problem in Hilbert spaces. Finally, we apply our result to study certain classes of optimization problems, and we present several numerical experiments to demonstrate the applicability of our proposed algorithm. The outline of the paper is as follows: In Sect. 2, we give some definitions and results required for the convergence analysis. In Sect. 3, we present the proposed algorithm, and in Sect. 4 we analyze its convergence. In Sect. 5, we apply our result to study certain classes of optimization problems, and in Sect. 6 we carry out several numerical experiments with graphical illustrations. Finally, we give some concluding remarks in Sect. 7.

Preliminaries
Definition 2.1. [2,16] An operator A : H → H is said to be
(i) α-strongly monotone, if there exists α > 0 such that ⟨Ax − Ay, x − y⟩ ≥ α||x − y||² for all x, y ∈ H;
(ii) monotone, if ⟨Ax − Ay, x − y⟩ ≥ 0 for all x, y ∈ H;
(iii) pseudomonotone, if for all x, y ∈ H, ⟨Ay, x − y⟩ ≥ 0 implies ⟨Ax, x − y⟩ ≥ 0;
(iv) L-Lipschitz continuous, if there exists L > 0 such that ||Ax − Ay|| ≤ L||x − y|| for all x, y ∈ H;
(v) uniformly continuous, if for every ε > 0, there exists δ = δ(ε) > 0 such that ||Ax − Ay|| < ε whenever ||x − y|| < δ.
Remark 2.2. We note that the following implications hold: (i) ⟹ (ii) ⟹ (iii), but the converses are not generally true. We also point out that uniform continuity is a weaker notion than Lipschitz continuity.
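The gap between the classes in Remark 2.2 can be seen already in one dimension. The following hedged example (our own illustration, not from the cited references) checks numerically that A(x) = x² + 1 is pseudomonotone on R but fails monotonicity.

```python
# A hypothetical scalar example separating the notions in Remark 2.2:
# A(x) = x^2 + 1 is pseudomonotone on R (A > 0 everywhere, so A(y)(x - y) >= 0
# forces x >= y, and then A(x)(x - y) >= 0 as well), but A is not monotone.

def A(x):
    return x * x + 1.0

# Monotonicity fails: (A(x) - A(y))(x - y) = (x + y)(x - y)^2 < 0 when x + y < 0.
x, y = -2.0, -1.0
monotone_gap = (A(x) - A(y)) * (x - y)

# Pseudomonotonicity holds on a sample grid of pairs (u, v).
grid = [k / 4.0 for k in range(-12, 13)]
pseudo_ok = all(
    A(u) * (u - v) >= 0
    for u in grid for v in grid
    if A(v) * (u - v) >= 0
)
```

Here `monotone_gap` is negative, witnessing the failure of monotonicity, while the grid check confirms the pseudomonotone implication on the sampled points.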
It is well known that if D is a convex subset of H, then A : D → H is uniformly continuous if and only if, for every ε > 0, there exists a constant K < +∞ such that ||Ax − Ay|| ≤ K||x − y|| + ε for all x, y ∈ D. (2.1)
Lemma 2.3. [27,39] Let H be a real Hilbert space. Then the following results hold for all x, y ∈ H and δ ∈ (0, 1):
(i) ||x + y||² ≤ ||x||² + 2⟨y, x + y⟩;
(ii) ||δx + (1 − δ)y||² = δ||x||² + (1 − δ)||y||² − δ(1 − δ)||x − y||².
Lemma 2.4. [33] Let {a_n} be a sequence of nonnegative real numbers, {α_n} be a sequence in (0, 1) with Σ_{n=1}^∞ α_n = ∞, and {b_n} be a sequence of real numbers. Assume that a_{n+1} ≤ (1 − α_n)a_n + α_n b_n for all n ≥ 1. If lim sup_{k→∞} b_{n_k} ≤ 0 for every subsequence {a_{n_k}} of {a_n} satisfying lim inf_{k→∞}(a_{n_{k+1}} − a_{n_k}) ≥ 0, then lim_{n→∞} a_n = 0.
Lemma 2.5. [35] Suppose {λ_n} and {θ_n} are two nonnegative real sequences such that λ_{n+1} ≤ λ_n + θ_n for all n ≥ 1. If Σ_{n=1}^∞ θ_n < ∞, then lim_{n→∞} λ_n exists.

Main Results
In this section, we present our proposed algorithm for solving the SVIPMOS (1.10). We analyze the convergence of the proposed method under the following conditions: Let C, C_i be nonempty, closed and convex subsets of real Hilbert spaces H, H_i, i = 1, 2, ..., N, respectively, and let T_i : H → H_i, i = 1, 2, ..., N, be bounded linear operators with adjoints T*_i. Let A : H → H, A_i : H_i → H_i, i = 1, 2, ..., N, be uniformly continuous pseudomonotone operators satisfying the following property: Moreover, we assume that the solution set Ω ≠ ∅ and that the control parameters satisfy the following conditions: Now, the algorithm is presented as follows: Step 0. Select initial points x_0, x_1 ∈ H. Let C_0 = C, T_0 = I_H, A_0 = A and set n = 1.
Step 1. Given the (n − 1)th and nth iterates, choose θ_n such that 0 ≤ θ_n ≤ θ̄_n, with θ̄_n defined by θ̄_n = min{θ, …}. Step 2. Compute Step 5. Compute Set n := n + 1 and return to Step 1. Remark 3.3. Observe that although the cost operators A_i, i = 0, 1, 2, ..., N, are non-Lipschitz, our method does not require any linesearch technique, which could be computationally too expensive to implement. Rather, we employ self-adaptive step sizes that only require simple computations of known information per iteration.
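The self-adaptive step-size rule referred to in Remark 3.3 can be sketched as follows. The notation is simplified to a single operator, and the constants c, φ_n and the operator A = tanh below are illustrative assumptions; the point is that the update never needs a Lipschitz constant or a linesearch, may increase from step to step, and still converges by Lemma 2.5 while staying bounded away from zero (Lemma 4.1).

```python
import numpy as np

# Hedged sketch of a self-adaptive step-size rule of the type used in Algorithm 3.1:
#   lambda_{n+1} = min( c*||u - v|| / ||A(u) - A(v)||, lambda_n + phi_n )  if A(u) != A(v),
#   lambda_{n+1} = lambda_n + phi_n                                        otherwise,
# with sum(phi_n) < infinity, so the sequence is non-monotonic but convergent.

def update_lambda(lam, u, v, Au, Av, c, phi):
    den = np.linalg.norm(Au - Av)
    if den > 0.0:
        return min(c * np.linalg.norm(u - v) / den, lam + phi)
    return lam + phi

A = np.tanh                      # 1-Lipschitz, so c*||u - v||/||A(u) - A(v)|| >= c
rng = np.random.default_rng(1)
lam, lams = 1.0, []
for n in range(1, 200):
    u, v = rng.standard_normal(3), rng.standard_normal(3)
    lam = update_lambda(lam, u, v, A(u), A(v), c=0.5, phi=1.0 / n**2)
    lams.append(lam)

# As in Lemma 4.1: bounded below by min(c/L, lambda_1) = 0.5 here,
# and above by lambda_1 + sum(phi_n), since each update adds at most phi_n.
```

Because each update caps growth at λ_n + φ_n with Σφ_n < ∞, Lemma 2.5 applies even though the sequence is not monotone.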

Convergence Analysis
First, we prove some lemmas needed for our strong convergence theorem. Lemma 4.1. Suppose {λ_{n,i}} is the sequence generated by Algorithm 3.1 such that Assumption A holds. Then {λ_{n,i}} is well defined for each i = 0, 1, 2, ..., N and lim_{n→∞} λ_{n,i} exists. Proof. Since A_i is uniformly continuous for each i = 0, 1, 2, ..., N, by (2.1) we have that for any given ε_i > 0, there exists where ε_i = μ_i ||T_i w_n − y_{n,i}|| for some μ_i ∈ (0, 1) and Thus, by the definition of λ_{n+1,i}, the sequence {λ_{n,i}} is bounded below by min{c_i/M_i, λ_{1,i}} and bounded above by λ_{1,i} + Φ_i. By Lemma 2.5, the limit lim_{n→∞} λ_{n,i} exists, and we denote it by λ_i; in particular, the sequence in (3.3) has a positive lower bound for each i = 0, 1, 2, ..., N.
Lemma 4.3. Suppose Assumption A of Algorithm 3.1 holds. Then, there exists a positive integer N such that By a similar argument, there exists a positive integer N_{2,i} for each i = 0, 1, 2, ..., N, such that In addition, since 0 < c_i < c̄_i < 1, lim_{n→∞} c_{n,i} = 0 and lim_{n→∞} λ_{n,i} = λ_i for each i = 0, 1, 2, ..., N, we have Therefore, for each i = 0, 1, 2, ..., N, there exists a positive integer N_{3,i} such that Then, by applying the triangle inequality, it follows from the definition of w_n that By Remark 3.2, there exists Thus, it follows from (4.1) and the property of the projection map that Moreover, since y_{n,i} ∈ C_i, i = 0, 1, 2, ..., N, we have From (4.3) and (4.4) we obtain Now, applying the definition of r_{n,i} and (4.5), we get By Lemma 4.3, there exists a positive integer N such that From the definition of β_{n,i}, if r_{n,i} ≠ 0, i = 0, 1, 2, ..., N, we have Observe that if r_{n,i} = 0, i = 0, 1, 2, ..., N, then (4.9) still holds.
Next, since the function By Lemma 4.3, there exists a positive integer N such that 0 < φ_{n,i} + φ_i < 1, i = 0, 1, 2, ..., N, for all n ≥ N. Now, from (4.10), by applying Lemma 2.3 and (4.9), we have Then, using the definition of η_{n,i}, we have (4.12). Thus, by applying (4.12) in (4.11) and substituting into (4.10), we have Observe that if T*_i(z_{n,i} − T_i w_n) = 0, then (4.13) still holds by (4.11). By the definition of x_{n+1}, we have Applying Lemma 2.3(ii) together with (4.13), we have Proof. From (4.13), we have From this, we obtain Since, by the hypothesis of the lemma, lim_{k→∞} ||w_{n_k} − b_{n_k}|| = 0, it follows from (4.17) that which implies that By the definition of η_{n,i}, we have Thus, we have By the definition of λ_{n+1,i}, it follows that From Lemma 4.1 we know that lim_{k→∞} λ_{n_k,i} = λ_i, i = 0, 1, 2, ..., N, and by Lemma 4.3, there exists a positive integer N such that for all n ≥ N, i = 0, 1, 2, ..., N. If r_{n,i} ≠ 0, then by applying the continuity of A_i and the definitions of β_{n,i}, r_{n,i} and z_{n,i}, i = 0, 1, 2, ..., N, from (4.20) we have (4.21). Thus, we have Since lim_{k→∞} c_{n_k,i} = lim_{k→∞} k_{n_k,i} = 0 and, by Lemma 4.1, lim_{k→∞} λ_{n_k,i}/λ_{n_k+1,i} = 1, i = 0, 1, 2, ..., N, from (4.22) and by applying (4.18) we have If r_{n,i} = 0, from (4.20) we know that (4.23) still holds. Since y_{n,i} = P_{C_i}(T_i w_n − λ_{n,i} A_i T_i w_n), by the property of the projection map we have From the last inequality, we get Observe that By the continuity of A_i, from (4.23) we have Next, let {Θ_{k,i}} be a decreasing sequence of positive numbers such that Θ_{k,i} → 0 as k → ∞, i = 0, 1, 2, ..., N. For each k, let N_k denote the smallest positive integer such that where the existence of N_k follows from (4.28). Then, ⟨A_i y_{N_k,i}, u_{N_k,i}⟩ = 1 for each k, i = 0, 1, 2, ..., N. From (4.29), we obtain

Using the facts that {y_{N_k,i}} is bounded and Θ_{k,i} → 0 as k → ∞, from the last inequality, by Lemma 2.6, we conclude that z ∈ Ω, as required.
Lemma 4.6. Let {x_n} be a sequence generated by Algorithm 3.1 under Assumption A. Then, the following inequality holds for all p ∈ Ω: Proof. Let p ∈ Ω. Then, by applying Lemma 2.3 together with the Cauchy–Schwarz inequality, we have where Next, by the definition of x_{n+1}, (4.13), (4.31) and applying Lemma 2.3, we have which is the required inequality. Proof. Let x̂ = P_Ω(0), that is, ||x̂|| = min{||p|| : p ∈ Ω}. Then, from Lemma 4.6 we obtain where Now, we claim that the sequence {||x_n − x̂||} converges to zero. In view of Lemma 2.4, it suffices to show that lim sup_{k→∞} d_{n_k} ≤ 0 for every subsequence {||x_{n_k} − x̂||} of {||x_n − x̂||} satisfying lim inf_{k→∞}(||x_{n_{k+1}} − x̂|| − ||x_{n_k} − x̂||) ≥ 0. Again, from Lemma 4.6, we obtain By (4.33), Remark 3.2 and the fact that lim_{k→∞} α_{n_k} = 0, we have Thus, we get It follows that By the definition of b_n and by applying (4.35), we obtain From the definition of w_n and by Remark 3.2, we get Next, from (4.36) and (4.37) we obtain Applying (4.37), (4.38) and the fact that lim_{k→∞} α_{n_k} = 0, we obtain Then, there exists a subsequence {x_{n_k}} of {x_n} such that x_{n_k} ⇀ x*. It follows from (4.37) that w_{n_k} ⇀ x*. Now, invoking Lemma 4.5 and applying (4.36), we have x* ∈ Ω. Since x* ∈ w_ω(x_n) was chosen arbitrarily, it follows that w_ω(x_n) ⊂ Ω. Next, by the boundedness of {x_{n_k}}, there exists a subsequence {x_{n_{k_j}}} of {x_{n_k}} such that x_{n_{k_j}} ⇀ q and lim sup Since x̂ = P_Ω(0), it follows from the property of the metric projection that lim sup

Split Convex Minimization Problem with Multiple Output Sets
Let C be a nonempty, closed and convex subset of a real Hilbert space H.
The convex minimization problem is formulated as finding a point x* ∈ C such that g(x*) ≤ g(x) for all x ∈ C, (5.1) where g is a real-valued convex function. We denote the solution set of Problem (5.1) by arg min g. Let C, C_i be nonempty, closed and convex subsets of real Hilbert spaces H, H_i, i = 1, 2, ..., N, respectively, and let T_i : H → H_i, i = 1, 2, ..., N, be bounded linear operators with adjoints T*_i. Let g : H → R, g_i : H_i → R be convex and differentiable functions. Here, we apply our result to approximate the solution of the following split convex minimization problem with multiple output sets (SCMPMOS): Find x* ∈ C such that x* minimizes g over C and T_i x* minimizes g_i over C_i, i = 1, 2, ..., N. We need the following lemma to establish our next result.
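The equivalence exploited in this section, namely that minimizing a convex differentiable g over C amounts to solving VI(C, ∇g), can be illustrated with a minimal projected-gradient sketch. The function g, the set C and the step size below are hypothetical choices for illustration; for g(x) = (1/2)||x − a||² over a box, the minimizer is simply P_C(a).

```python
import numpy as np

# Minimal sketch: solving VI(C, grad g) for the convex g(x) = 0.5*||x - a||^2
# over C = [0, 1]^3 via projected gradient; the minimizer is P_C(a).

a = np.array([2.0, -0.5, 0.4])
grad_g = lambda x: x - a
P_C = lambda x: np.clip(x, 0.0, 1.0)

x = np.zeros(3)
for _ in range(200):
    x = P_C(x - 0.5 * grad_g(x))    # fixed-point iteration x = P_C(x - lam*grad g(x))

# x should approach P_C(a) = [1.0, 0.0, 0.4], which also satisfies the VIP
# characterization <grad g(x*), y - x*> >= 0 for all y in C.
```

The fixed points of x ↦ P_C(x − λ∇g(x)) are exactly the solutions of VI(C, ∇g), which is the content of the projection characterization used throughout the paper.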
Lemma 5.1. [36] Let C be a nonempty, closed and convex subset of a real Banach space E, and let g be a convex and differentiable function. Then z is a solution of Problem (5.1) if and only if z ∈ VI(C, ∇g), where ∇g denotes the gradient of g. Proof. Since g_i, i = 0, 1, 2, ..., N, are convex, the gradients ∇g_i are monotone [36] and thus pseudomonotone. Consequently, the result follows by applying Lemma 5.1 and setting A_i = ∇g_i in Theorem 4.7. Step 0. Select initial points x_0, x_1 ∈ H. Let C_0 = C, T_0 = I_H, g_0 = g and set n = 1.

Generalized Split Variational Inequality Problem
Finally, we apply our result to study the generalized split variational inequality problem (see [28]). Let C_i be nonempty, closed and convex subsets of real Hilbert spaces H_i, i = 1, 2, ..., N, and let S_i : H_i → H_{i+1}, i = 1, 2, ..., N − 1, be bounded linear operators such that S_i ≠ 0. Let B_i : H_i → H_i, i = 1, 2, ..., N, be single-valued operators. The generalized split variational inequality problem (GSVIP) is formulated as finding a point x* ∈ C_1 such that (3.1), and suppose Assumption A of Theorem 4.7 holds and the solution set Γ ≠ ∅. Then, the sequence {x_n} generated by the following algorithm converges strongly to x̂ ∈ Γ, where ||x̂|| = min{||p|| : p ∈ Γ}.

Numerical experiments
In this section, we present some numerical experiments to illustrate the implementability of our proposed method (Proposed Alg. 3.1). For simplicity, in all the experiments we consider the case N = 4. All numerical computations were carried out using Matlab version R2021(b). In our computations, we choose We consider the following test examples, in both finite- and infinite-dimensional Hilbert spaces, for our numerical experiments. Example 6.1. Let H_i = R^m, i = 0, 1, ..., 4, and let A_i : R^m → R^m be a linear operator defined by A_i(x) = Sx + q, where q ∈ R^m and S = NN^T + Q + D, N is an m × m matrix, Q is an m × m skew-symmetric matrix, and D is an m × m diagonal matrix with nonnegative diagonal entries (thus S is positive semidefinite). We let C_i = {x ∈ R^m : −(i + 2) ≤ x_j ≤ i + 2, j = 1, ..., m}. In this example, we generate randomly all the entries of N and Q in [−3, 3], while D is randomly generated in [0, 3], q = 0 and T_i x = 3x i+3 . Example 6.2. For each i = 0, 1, ..., 4, we define the feasible set We note that M is a Hankel-type matrix with nonzero reverse diagonal.
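The test data of Example 6.1 can be generated as in the following sketch (ported to Python/NumPy for illustration; the seed and dimension are our own choices, and the ranges follow the text). It also verifies the monotonicity claim: the skew part Q cancels in the symmetric part of S, leaving NN^T + D, which is positive semidefinite.

```python
import numpy as np

# Sketch of the test data in Example 6.1: S = N N^T + Q + D with Q skew-symmetric
# and D diagonal nonnegative, so that A(x) = Sx + q is a monotone linear operator.

rng = np.random.default_rng(42)   # illustrative seed
m = 25
N = rng.uniform(-3, 3, (m, m))
B = rng.uniform(-3, 3, (m, m))
Q = B - B.T                       # skew-symmetric by construction
D = np.diag(rng.uniform(0, 3, m))
S = N @ N.T + Q + D

# Monotonicity of x -> Sx + q is governed by the symmetric part of S,
# which equals N N^T + D here (the skew part Q cancels).
sym_min = np.linalg.eigvalsh((S + S.T) / 2).min()
```

Since the smallest eigenvalue of the symmetric part is nonnegative, ⟨Sx, x⟩ ≥ 0 for all x, so the operators A_i of Example 6.1 are monotone, hence pseudomonotone, as required by Assumption A.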
We test Examples 6.1, 6.2, 6.3 and 6.4 under the following experiments: Experiment 6.5. In this experiment, we check the behavior of our method by fixing the other parameters and varying φ_{n,i} in Example 6.1. We do this to check the effect of this parameter and the sensitivity of our method to it. We consider φ_{n,i} ∈ {3/(n+1), 5/(n+1)², 7/(n+1)³, 9/(n+1)⁴, 11/(n+1)⁵} with m = 25, m = 50, m = 100 and m = 200. Using ||x_{n+1} − x_n|| < 10⁻³ as the stopping criterion, we plot the graphs of ||x_{n+1} − x_n|| against the number of iterations for each m. The numerical results are reported in Figs. 1, 2, 3, 4 and Table 1. Experiment 6.6. In this experiment, we check the behavior of our method by fixing the other parameters and varying c_{n,i} in Example 6.2. We do this to check the effect of this parameter and the sensitivity of our method to it. We consider c_{n,i} ∈ { Using ||x_{n+1} − x_n|| < 10⁻³ as the stopping criterion, we plot the graphs of ||x_{n+1} − x_n|| against the number of iterations in each case. The numerical results are reported in Figs. 5, 6, 7, 8 and Table 2.
Finally, we test Examples 6.3 and 6.4 under the following experiment: Experiment 6.7. In this experiment, we check the behavior of our method by fixing the other parameters and varying k_{n,i} and c_{n,i} in Examples 6.3 and 6.4. We do this to check the effects of these parameters and the sensitivity of our method to them. Remark 6.8. Using different initial values, cases of m, and varying the key parameters in Examples 6.1-6.4, we obtained the numerical results displayed in Tables 1, 2 and 3 and Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12. We note the following from our numerical experiments: (1) In all the examples, the choice of the key parameters c_{n,i}, k_{n,i} and φ_{n,i} does not affect the number of iterations, and there is no significant difference in the CPU time. Thus, our method is not sensitive to these key parameters for each initial value and case of m. (2) The number of iterations for our method remains consistent in all the examples, so the method is well behaved.

Conclusion
In this paper, we studied the split variational inequality problem with multiple output sets in the case where the cost operators are pseudomonotone and uniformly continuous. We proposed a new Mann-type inertial projection and contraction method with self-adaptive step sizes for approximating the solution of the problem in the framework of Hilbert spaces. Under some mild conditions on the control sequences and without prior knowledge of the operator norms, we obtained a strong convergence result for the proposed algorithm. Finally, we applied our result to study certain classes of optimization problems, and we presented several numerical experiments to illustrate the applicability of the proposed method.

Theorem 4.7. Let {x_n} be a sequence generated by Algorithm 3.1 such that Assumption A holds. Then, {x_n} converges strongly to x̂ ∈ Ω, where ||x̂|| = min{||p|| : p ∈ Ω}.
Suppose {w_n} and {b_n} are two sequences generated by Algorithm 3.1 with subsequences {w_{n_k}} and {b_{n_k}}, respectively, such that lim_{k→∞} ||w_{n_k} − b_{n_k}|| = 0. This implies that {x_n} is bounded; hence, {w_n}, {y_{n,i}}, {z_{n,i}}, {r_{n,i}} and {b_n} are all bounded. Lemma 4.5.
then z is a solution of Problem (5.1) if and only if z ∈ VI(C, ∇g), where ∇g is the gradient of g.

Table 1. Numerical results for Experiment 6.5.