1 Introduction

Let H be a real Hilbert space with an inner product \(\langle \cdot ,\cdot \rangle \) and induced norm \(||\cdot ||.\) Let C be a nonempty, closed and convex subset of H,  and let \(A:H\rightarrow H\) be a mapping. The variational inequality problem (VIP) is formulated as finding a point \(p\in C\) such that

$$\begin{aligned} \langle x-p, Ap \rangle \ge 0,\quad \forall ~x\in C. \end{aligned}$$
(1.1)

We denote the solution set of the VIP (1.1) by VI(C, A). Variational inequality theory was first introduced independently by Fichera [13] and Stampacchia [34]. The VIP is a fundamental problem in optimization theory which unifies several important concepts in applied mathematics, such as network equilibrium problems, necessary optimality conditions, systems of nonlinear equations and complementarity problems (see, e.g., [4, 5, 20]). In recent years, the VIP has attracted the attention of researchers due to its numerous applications in diverse fields, such as optimization theory, economics, structural analysis, operations research, the sciences and engineering (see [10, 17, 36] and the references therein). Several authors have proposed and studied different iterative methods for approximating solutions of the VIP (see [2, 7, 16, 25, 26] and the references therein).
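For instance, when \(H=\mathbb {R}^n\) and \(C=\mathbb {R}^n_+,\) it is well known that the VIP (1.1) reduces to the nonlinear complementarity problem of finding a point \(p\) such that

$$\begin{aligned} p\ge 0,\quad Ap\ge 0,\quad \langle p, Ap \rangle = 0, \end{aligned}$$

which illustrates the unifying role of the VIP mentioned above.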

The split inverse problem (SIP) is another area of research which has recently received great research attention (see [42] and the references therein) due to its several applications in different fields, for instance, in signal processing, phase retrieval, medical image reconstruction, data compression, intensity-modulated radiation therapy, etc. (e.g. see [8, 9, 18, 22, 29]). The SIP model is formulated as follows:

$$\begin{aligned} \text {Find}~~ \hat{x}\in H_1 \quad \text {that solves IP}_1 \end{aligned}$$
(1.2)

such that

$$\begin{aligned} \hat{y}:= T\hat{x}\in H_2 \quad \text {solves IP}_2, \end{aligned}$$
(1.3)

where \(H_1\) and \(H_2\) are real Hilbert spaces, IP\(_1\) denotes an inverse problem formulated in \(H_1\) and IP\(_2\) denotes an inverse problem formulated in \(H_2,\) and \(T: H_1 \rightarrow H_2\) is a bounded linear operator.

In 1994, Censor and Elfving [9] introduced the first instance of the SIP, called the split feasibility problem (SFP), for modelling inverse problems that arise in medical image reconstruction. The SFP finds applications in control theory, approximation theory, signal processing, geophysics, communications, biomedical engineering, etc. [8, 23, 31, 32]. Let C and Q be nonempty, closed and convex subsets of Hilbert spaces \(H_1\) and \(H_2,\) respectively, and let \(T:H_1\rightarrow H_2\) be a bounded linear operator. The SFP is defined as follows:

$$\begin{aligned} \text {Find}~~ \hat{x}\in C~~ \text {such that}~~ \hat{y}= T\hat{x}\in Q. \end{aligned}$$
(1.4)

Several iterative algorithms for solving the SFP (1.4) have been constructed and investigated by researchers (see, e.g. [8, 23, 24] and the references therein).

An important generalization of the SFP is the split variational inequality problem (SVIP) introduced by Censor et al. [10]. The SVIP is formulated as follows:

$$\begin{aligned} \text {Find}~ \hat{x}\in C~ \text {that solves}~ \langle A_1\hat{x},x-\hat{x} \rangle \ge 0,\quad \forall x\in C \end{aligned}$$
(1.5)

such that

$$\begin{aligned} \hat{y}=T\hat{x}\in H_2~ \text {solves}~ \langle A_2\hat{y},y-\hat{y} \rangle \ge 0,\quad \forall y\in Q, \end{aligned}$$
(1.6)

where \(A_1:H_1\rightarrow H_1, A_2:H_2\rightarrow H_2\) are single-valued operators. Several authors have studied and proposed different iterative methods for approximating the solution of SVIP (see [19, 21, 37] and the references therein).

In 2020, Reich and Tuyen [28] introduced and studied the concept of split feasibility problem with multiple output sets in Hilbert spaces (SFPMOS), which is formulated as follows: Find a point \(u^\dagger \) such that

$$\begin{aligned} u^\dagger \in \Gamma := C\cap \left( \bigcap _{i=1}^N T_{i}^{-1}(Q_i)\right) \ne \emptyset , \end{aligned}$$
(1.7)

where \(T_i:H\rightarrow H_i, ~ i=1,2,...,N\), are bounded linear operators, C and \(Q_i\) are nonempty, closed and convex subsets of Hilbert spaces H and \( H_i,i=1,2,\ldots ,N,\) respectively.

Moreover, Reich and Tuyen [30] proposed the following two algorithms for approximating the solution of SFPMOS (1.7) in Hilbert spaces:

$$\begin{aligned} x_{n+1} = P_C\left[ x_n-\gamma _n\sum _{i=1}^{N}T_i^*(I-P_{Q_i})T_ix_n\right] , \end{aligned}$$
(1.8)

and

$$\begin{aligned} x_{n+1} = \alpha _nf(x_n) + (1-\alpha _n)P_C\left[ x_n-\gamma _n\sum _{i=1}^{N}T_i^*(I-P_{Q_i})T_ix_n\right] , \end{aligned}$$
(1.9)

where \(f:C\rightarrow C\) is a strict contraction, \(\{\gamma _n\}\subset (0,\infty )\) and \(\{\alpha _n\}\subset (0,1).\) The authors obtained weak and strong convergence results for Algorithm (1.8) and Algorithm (1.9), respectively.
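To illustrate the structure of iteration (1.8), we include below a minimal numerical sketch in Python (with NumPy); it is our own illustration rather than the implementation of [30], and the user-supplied operators T, T_adj (the adjoints \(T_i^*\)), the projections proj_C, proj_Q and the step-size rule gamma are assumptions of the sketch.

import numpy as np

def reich_tuyen_iteration(x0, T, T_adj, proj_C, proj_Q, gamma, num_iter=200):
    # Sketch of iteration (1.8): x_{n+1} = P_C[x_n - gamma_n * sum_i T_i^*(I - P_{Q_i}) T_i x_n].
    # T, T_adj, proj_Q are lists of callables (one entry per operator T_i and set Q_i),
    # proj_C projects onto C, and gamma maps n to the step size gamma_n > 0.
    x = np.asarray(x0, dtype=float)
    for n in range(num_iter):
        grad = sum(T_adj[i](T[i](x) - proj_Q[i](T[i](x))) for i in range(len(T)))
        x = proj_C(x - gamma(n) * grad)
    return x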

In this paper, we study the split variational inequality problem with multiple output sets. Let \(H, H_i,i=1,2,...,N,\) be real Hilbert spaces and let \(C, C_i\) be nonempty, closed and convex subsets of real Hilbert spaces H and \( H_i,i=1,2,...,N,\) respectively. Let \(T_i:H\rightarrow H_i,i=1,2,...,N,\) be bounded linear operators and let \(A:H\rightarrow H, A_i:H_i\rightarrow H_i, i=1,2,...,N,\) be single-valued operators. The split variational inequality problem with multiple output sets (SVIPMOS) is formulated as finding a point \(x^*\in C\) such that

$$\begin{aligned} x^*\in \Omega := VI(C,A)\cap (\mathop {\cap }\limits _{i=1}^N T^{-1}_iVI(C_i,A_i))\ne \emptyset . \end{aligned}$$
(1.10)

It is clear that the SVIPMOS (1.10) generalizes the SFPMOS (1.7). Indeed, if \(A\equiv 0\) and \(A_i\equiv 0\) for each \(i,\) then \(VI(C,A)=C\) and \(VI(C_i,A_i)=C_i,\) so that (1.10) reduces to (1.7) with \(Q_i=C_i.\)

In the last couple of years, developing iterative methods with a high rate of convergence for solving optimization problems has become of great interest to researchers. One of the approaches employed by researchers to achieve this objective is the inertial technique. This technique originates from an implicit time discretization method (the heavy ball method) of second-order dynamical systems. In recent years, several authors have constructed highly efficient iterative methods by employing the inertial technique, see, e.g., [1, 3, 11, 14, 38, 40].
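For instance, for a smooth function \(f,\) the heavy ball method of Polyak takes the form

$$\begin{aligned} x_{n+1} = x_n + \theta (x_n - x_{n-1}) - \gamma \triangledown f(x_n), \end{aligned}$$

where \(\theta \in [0,1)\) is the inertial parameter and \(\gamma >0\) is a step size; the extrapolation term \(\theta (x_n - x_{n-1})\) is precisely the inertial step employed in Step 2 of Algorithm 3.1 below.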

In this paper, we propose and analyze a new Mann-type inertial projection and contraction algorithm with self-adaptive step sizes for approximating the solution of the SVIPMOS (1.10) when the cost operators are pseudomonotone and non-Lipschitz. Although the cost operators are non-Lipschitz, our proposed method does not involve any line search procedure; instead, it employs a more efficient self-adaptive step size technique that generates a non-monotonic sequence of step sizes. Furthermore, we prove that the sequence generated by our proposed method converges strongly to the minimum-norm solution of the problem in Hilbert spaces. Finally, we apply our result to study certain classes of optimization problems, and we present several numerical experiments to demonstrate the applicability of our proposed algorithm.

The outline of the paper is as follows: In Sect. 2, we give some definitions and results required for the convergence analysis. In Sect. 3, we present the proposed algorithm and in Sect. 4 we analyze the convergence of our proposed method. In Sect. 5 we apply our result to study certain classes of optimization problems, and in Sect. 6 we carry out several numerical experiments with graphical illustrations. Finally, we give some concluding remarks in Sect. 7.

2 Preliminaries

Definition 2.1

[2, 16] An operator \(A:H\rightarrow H\) is said to be

  1. (i)

    \(\alpha \)-strongly monotone, if there exists \(\alpha >0\) such that

    $$\begin{aligned} \langle x-y, Ax-Ay\rangle \ge \alpha \Vert x-y\Vert ^2,~~ \forall ~x,y \in H; \end{aligned}$$
  2. (ii)

    monotone, if

    $$\begin{aligned} \langle x-y, Ax-Ay \rangle \ge 0,\quad \forall ~ x,y\in H; \end{aligned}$$
  3. (iii)

    pseudomonotone, if

    $$\begin{aligned} \langle Ay, x-y \rangle \ge 0 \implies ~\langle Ax,x-y \rangle \ge 0,~\forall x,y \in H, \end{aligned}$$
  4. (iv)

    L-Lipschitz continuous, if there exists a constant \(L>0\) such that

    $$\begin{aligned} ||Ax-Ay||\le L||x-y||,\quad \forall ~ x,y\in H; \end{aligned}$$
  5. (v)

    uniformly continuous, if for every \(\epsilon >0,\) there exists \(\delta =\delta (\epsilon )>0,\) such that

    $$\begin{aligned} \Vert Ax-Ay\Vert<\epsilon \quad \text {whenever}\quad \Vert x-y\Vert <\delta ,\quad \forall x,y\in H; \end{aligned}$$

Remark 2.2

We note that the following implications hold: \((i)\implies (ii)\implies (iii),\) but the converses are not generally true. We also point out that uniform continuity is a weaker notion than Lipschitz continuity; for instance, the mapping \(x\mapsto \sqrt{x}\) is uniformly continuous on \([0,\infty )\) but not Lipschitz continuous there.

It is well known that if D is a convex subset of H,  then \(A:D\rightarrow H\) is uniformly continuous if and only if, for every \(\epsilon >0,\) there exists a constant \(K<+\infty \) such that

$$\begin{aligned} \Vert Ax-Ay\Vert \le K\Vert x-y\Vert + \epsilon \quad \forall x,y\in D. \end{aligned}$$
(2.1)

Lemma 2.3

[27, 39] Let H be a real Hilbert space. Then the following results hold for all \(x,y\in H\) and \(\delta \in (0, 1):\)

  1. (i)

    \(||x + y||^2 \le ||x||^2 + 2\langle y, x + y \rangle ;\)

  2. (ii)

    \(||x + y||^2 = ||x||^2 + 2\langle x, y \rangle + ||y||^2;\)

  3. (iii)

    \(||\delta x + (1-\delta ) y||^2 = \delta ||x||^2 + (1-\delta )||y||^2 -\delta (1-\delta )||x-y||^2.\)

Lemma 2.4

([33]) Let \(\{a_n\}\) be a sequence of nonnegative real numbers, \(\{\alpha _n\}\) be a sequence in (0, 1) with \(\sum _{n=1}^\infty \alpha _n = \infty \) and \(\{b_n\}\) be a sequence of real numbers. Assume that

$$\begin{aligned} a_{n+1}\le (1 - \alpha _n)a_n + \alpha _nb_n\;\;\;\; \text {for all}\,\, n\ge 1. \end{aligned}$$

If \(\limsup \nolimits _{k\rightarrow \infty }b_{n_k}\le 0\) for every subsequence \(\{a_{n_k}\}\) of \(\{a_n\}\) satisfying \(\liminf \nolimits _{k\rightarrow \infty }(a_{n_{k}+1} - a_{n_k})\ge 0,\) then \(\lim \nolimits _{n\rightarrow \infty }a_n =0.\)

Lemma 2.5

[35] Suppose \(\{\lambda _n\}\) and \(\{\theta _n\}\) are two nonnegative real sequences such that

$$\begin{aligned} \lambda _{n+1}\le \lambda _n + \phi _n,\quad \forall n\ge 1. \end{aligned}$$

If \(\sum _{n=1}^{\infty }\phi _n<\infty ,\) then \(\lim \nolimits _{n\rightarrow \infty }\lambda _n\) exists.

Lemma 2.6

[12] Consider the VIP (1.1) with C being a nonempty, closed, convex subset of a real Hilbert space H and \(A:C\rightarrow H\) being pseudomonotone and continuous. Then p is a solution of VIP (1.1) if and only if

$$\begin{aligned} \langle Ax, x-p \rangle \ge 0, \quad \forall x\in C. \end{aligned}$$

3 Main Results

In this section, we present our proposed algorithm for solving the SVIPMOS (1.10). We analyze the convergence of the proposed method under the following conditions:

Let \(C, C_i\) be nonempty, closed and convex subsets of real Hilbert spaces \(H, H_i,i=1,2,...,N,\) respectively, and let \(T_i:H\rightarrow H_i,i=1,2,...,N,\) be bounded linear operators with adjoints \(T^*_i.\) Let \(A:H\rightarrow H, A_i:H_i\rightarrow H_i, i=1,2,...,N,\) be uniformly continuous pseudomonotone operators satisfying the following property:

$$\begin{aligned} \text {whenever}~ \{T_ix_n\}\subset C_i~ \text {and}~ T_ix_n\rightharpoonup T_iz,~~ \text {then}~~ \Vert A_iT_iz\Vert \le \liminf \limits _{n\rightarrow \infty }\Vert A_iT_ix_n\Vert ,\\ i=0,1,2,\ldots ,N,~ \text {where}~ C_0=C,~A_0=A,~ T_0=I^H. \end{aligned}$$
(3.1)

Moreover, we assume that the solution set \(\Omega \ne \emptyset \) and the control parameters satisfy the following conditions:

Assumption A

  1. (A1)

    \(\{\alpha _n\} \subset (0,1), \lim \nolimits _{n\rightarrow \infty }\alpha _n=0, \sum _{n=1}^\infty \alpha _n = +\infty , \lim \nolimits _{n\rightarrow \infty }\frac{\epsilon _n}{\alpha _n}=0,\{\xi _n\}\subset [a,b]\subset (0,1-\alpha _n),\theta >0;\)

  2. (A2)

    \(0<c_i< c_i'<1,0<\phi _i< \phi _i'<1,0<k_i<k_i'<2, \{c_{n,i}\},\{\phi _{n,i}\},\{k_{n,i}\}\subset \mathbb {R}_+, \lim \nolimits _{n\rightarrow \infty }c_{n,i}=\lim \nolimits _{n\rightarrow \infty }\phi _{n,i}=\lim \nolimits _{n\rightarrow \infty }k_{n,i}=0, \lambda _{1,i}>0,~\forall ~i=0,1,2,\ldots ,N;\)

  3. (A3)

    \(\{\rho _{n,i}\}\subset \mathbb {R}_+,\sum _{n=1}^{\infty }\rho _{n,i}<+\infty ,0<a_i\le \delta _{n,i}\le b_i<1, \sum _{i=0}^{N}\delta _{n,i}=1\) for each \(n\ge 1.\)

Now, the algorithm is presented as follows:

Algorithm 3.1.

Step 0. Select initial points \(x_0, x_1 \in H.\) Let \(C_0=C,~ T_0=I^H,~ A_0=A\) and set \(n=1.\)

Step 1. Given the \((n-1)th\) and nth iterates, choose \(\theta _n\) such that \(0\le \theta _n\le \hat{\theta }_n\) with \(\hat{\theta }_n\) defined by

               \(\hat{\theta }_n = {\left\{ \begin{array}{ll} \min \Big \{\theta ,~ \frac{\epsilon _n}{\Vert x_n - x_{n-1}\Vert }\Big \}, \quad \text {if}~ x_n \ne x_{n-1},\\ \theta , \quad \quad \quad \quad \quad \text {otherwise.} \end{array}\right. }\)         (3.2)

Step 2. Compute

               \(w_n = x_n + \theta _n(x_n - x_{n-1}).\)

Step 3. Compute

               \(y_{n,i} = P_{C_i}(T_iw_n-\lambda _{n,i}A_iT_iw_n)\)

               \(\lambda _{n+1,i} = {\left\{ \begin{array}{ll} \min \{\frac{(c_{n,i}+c_i)\Vert T_iw_n - y_{n,i}\Vert }{\Vert A_iT_iw_n - A_iy_{n,i}\Vert },~~ \lambda _{n,i}+\rho _{n,i}\},&{}\text {if}\quad A_iT_iw_n \\ &{}- A_iy_{n,i}\ne 0,\\ \lambda _{n,i}+\rho _{n,i},&{} \text {otherwise.} \end{array}\right. }\)

               \(z_{n,i}=T_iw_n-\beta _{n,i}r_{n,i},\)

         where

               \(r_{n,i}=T_iw_n-y_{n,i}-\lambda _{n,i}(A_iT_iw_n - A_iy_{n,i})\)

         and

               \(\beta _{n,i}={\left\{ \begin{array}{ll} (k_i+k_{n,i})\frac{\langle T_iw_n - y_{n,i},r_{n,i}\rangle }{\Vert r_{n,i}\Vert ^2},&{}\text {if}\quad r_{n,i}\ne 0\\ 0,&{}\text {otherwise.} \end{array}\right. }\)

Step 4. Compute

               \(b_n = \sum _{i=0}^{N}\delta _{n,i}\big (w_n+\eta _{n,i}T_i^*(z_{n,i}-T_iw_n)\big ),\)

         where

      \(\eta _{n,i}={\left\{ \begin{array}{ll} \frac{(\phi _{n,i}+\phi _i)\Vert T_iw_n-z_{n,i}\Vert ^2}{\Vert T_i^*(T_iw_n-z_{n,i})\Vert ^2},&{}\text {if}\quad \Vert T_i^*(T_iw_n-z_{n,i})\Vert \ne 0,\\ 0,&{}\text {otherwise.} \end{array}\right. }\)         (3.3)

Step 5. Compute

               \(x_{n+1}= (1-\alpha _n-\xi _n)w_n+\xi _nb_n.\)

   Set \(n:= n +1 \) and return to Step 1.

Remark 3.2

By condition (A1), it follows from (3.2) that

$$\begin{aligned} \lim _{n\rightarrow \infty }\theta _n||x_n - x_{n-1}|| = 0\quad \text {and}\quad \lim _{n\rightarrow \infty }\frac{\theta _n}{\alpha _n}||x_n - x_{n-1}|| = 0. \end{aligned}$$

Remark 3.3

Observe that although the cost operators \(A_i,i=0,1,2,\ldots ,N,\) are non-Lipschitz, our method does not require any linesearch technique, which could be computationally expensive to implement. Rather, we employ self-adaptive step sizes that only require simple computations of known information per iteration.
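For concreteness, the following is a minimal Python/NumPy sketch of Algorithm 3.1; it is an illustration rather than the implementation used for the experiments in Sect. 6. The operators A[i], T[i] with adjoints T_adj[i] and the projections proj_C[i] (i.e. \(P_{C_i}\)) are supplied by the user, the vanishing sequences \(c_{n,i}, k_{n,i}, \phi _{n,i}\) are set to zero, \(\theta _n=\hat{\theta }_n\) is taken, and the remaining parameters follow the choices listed in Sect. 6.

import numpy as np

def algorithm_3_1(x0, x1, A, T, T_adj, proj_C, N, num_iter=500):
    # Sketch of Algorithm 3.1.  A, T, T_adj, proj_C are lists of callables for
    # i = 0, 1, ..., N, with T[0] the identity, A[0] = A and proj_C[0] = P_C.
    theta, c, phi, k = 0.99, 0.97, 0.98, 1.96           # theta, c_i, phi_i, k_i
    lam = np.array([i + 1.2 for i in range(N + 1)])     # lambda_{1,i}
    delta = 1.0 / (N + 1)                               # delta_{n,i}
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for n in range(1, num_iter + 1):
        alpha = 1.0 / (2 * n + 3)                       # alpha_n
        eps = alpha ** 3                                # epsilon_n
        xi = (1.0 - alpha) / 2.0                        # xi_n
        rho = 10.0 / n ** 2                             # rho_{n,i}
        # Steps 1-2: inertial extrapolation (we take theta_n = hat{theta}_n)
        diff = np.linalg.norm(x - x_prev)
        theta_n = theta if diff == 0 else min(theta, eps / diff)
        w = x + theta_n * (x - x_prev)
        # Steps 3-4: projection-and-contraction step in each image space
        b = np.zeros_like(w)
        for i in range(N + 1):
            Tw = T[i](w)
            ATw = A[i](Tw)
            y = proj_C[i](Tw - lam[i] * ATw)
            Ay = A[i](y)
            r = Tw - y - lam[i] * (ATw - Ay)
            denom = np.linalg.norm(ATw - Ay)            # self-adaptive step size lambda_{n+1,i}
            lam[i] = lam[i] + rho if denom == 0 else min(c * np.linalg.norm(Tw - y) / denom, lam[i] + rho)
            beta = 0.0 if np.linalg.norm(r) == 0 else k * np.dot(Tw - y, r) / np.dot(r, r)
            z = Tw - beta * r
            Ts = T_adj[i](Tw - z)
            eta = 0.0 if np.linalg.norm(Ts) == 0 else phi * np.linalg.norm(Tw - z) ** 2 / np.dot(Ts, Ts)
            b = b + delta * (w + eta * T_adj[i](z - Tw))
        # Step 5: Mann-type step
        x_prev, x = x, (1 - alpha - xi) * w + xi * b
    return x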

4 Convergence Analysis

First, we prove some lemmas needed for our strong convergence theorem.

Lemma 4.1

Suppose \(\{\lambda _{n,i}\}\) is the sequence generated by Algorithm 3.1 such that Assumption A holds. Then \(\{\lambda _{n,i}\}\) is well defined for each \(i=0,1,2,\ldots ,N\) and \(\lim \nolimits _{n\rightarrow \infty }\lambda _{n,i}=\lambda _{i}\in [\min \{\frac{c_i}{M_i},\lambda _{1,i}\}, \lambda _{1,i}+\Phi _i],\) where \(\Phi _i=\sum _{n=1}^{\infty }\rho _{n,i}\) and \(M_i>0\) is the constant appearing in the proof below.

Proof

Since \(A_i\) is uniformly continuous for each \(i=0,1,2,\ldots ,N,\) then by (2.1) we have that for any given \(\epsilon _i>0,\) there exists \(K_i<+\infty \) such that \(\Vert A_iT_iw_n-A_iy_{n,i}\Vert \le K_i\Vert T_iw_n-y_{n,i}\Vert +\epsilon _i.\) Hence, for the case \(A_iT_iw_n-A_iy_{n,i}\ne 0\) for all \(n\ge 1\) we have

$$\begin{aligned}{} & {} \frac{(c_{n,i}+c_i)\Vert T_iw_n-y_{n,i}\Vert }{\Vert A_iT_iw_n-A_iy_{n,i}\Vert }\ge \frac{(c_{n,i}+c_i)\Vert T_iw_n-y_{n,i}\Vert }{K_i\Vert T_iw_n-y_{n,i}\Vert +\epsilon _i} \\{} & {} \quad = \frac{(c_{n,i}+c_i)\Vert T_iw_n-y_{n,i}\Vert }{(K_i+\mu _i)\Vert T_iw_n-y_{n,i}\Vert } =\frac{(c_{n,i}+c_i)}{M_i}\ge \frac{c_i}{M_i},\ \end{aligned}$$

where \(\epsilon _i =\mu _i\Vert T_iw_n-y_{n,i}\Vert \) for some \(\mu _i\in (0,1)\) and \(M_i=K_i+\mu _i.\) Thus, by the definition of \(\lambda _{n+1,i},\) the sequence \(\{\lambda _{n,i}\}\) is bounded below by \(\min \{\frac{c_i}{M_i},\lambda _{1,i}\}\) and bounded above by \(\lambda _{1,i} + \Phi _i.\) By Lemma 2.5, the limit \(\lim \nolimits _{n\rightarrow \infty }\lambda _{n,i}\) exists, and we denote it by \(\lambda _i.\) It is clear that \(\lambda _i\in \big [\min \{\frac{c_i}{M_i},\lambda _{1,i}\},\lambda _{1,i}+\Phi _i\big ]\) for each \(i=0,1,2,\ldots ,N.\) \(\square \)

Lemma 4.2

If \(\Vert T_i^*(T_iw_n-z_{n,i})\Vert \ne 0,\) then the sequence \(\{\eta _{n,i}\}\) defined by (3.3) has a positive lower bound for each \(i=0,1,2,\ldots ,N.\)

Proof

If \(\Vert T_i^*(T_iw_n-z_{n,i})\Vert \ne 0,\) we have for each \(i=0,1,2,\ldots ,N\)

$$\begin{aligned} \eta _{n,i}=\frac{(\phi _{n,i}+\phi _i)\Vert T_iw_n-z_{n,i}\Vert ^2}{\Vert T^*_i(T_iw_n-z_{n,i})\Vert ^2}. \end{aligned}$$

Since \(T_i\) is a bounded linear operator and \(\lim \nolimits _{n\rightarrow \infty }\phi _{n,i}=0\) for each \(i=0,1,2,\ldots ,N,\) we have

$$\begin{aligned} \frac{(\phi _{n,i}+\phi _i)\Vert T_iw_n-z_{n,i}\Vert ^2}{\Vert T^*_i(T_iw_n-z_{n,i})\Vert ^2}\ge \frac{(\phi _{n,i}+\phi _i)\Vert T_iw_n-z_{n,i}\Vert ^2}{\Vert T_i\Vert ^2\Vert T_iw_n-z_{n,i}\Vert ^2}\ge \frac{\phi _i}{\Vert T_i\Vert ^2}, \end{aligned}$$

which implies that \(\frac{\phi _i}{\Vert T_i\Vert ^2}\) is a lower bound of \(\{\eta _{n,i}\}\) for each \(i=0,1,2,\ldots ,N\).

\(\square \)

Lemma 4.3

Suppose Assumption A of Algorithm 3.1 holds. Then, there exists a positive integer \(N_0\) such that

$$\begin{aligned} k_{i}+k_{n,i}\in (0,2), \quad \phi _{i}+\phi _{n,i}\in (0,1)\quad \text {and}\quad \frac{\lambda _{n,i}(c_{n,i}+c_i)}{\lambda _{n+1,i}}\in (0,1),\quad \forall n\ge N_0. \end{aligned}$$

Proof

Since \(0<k_i<k_i'<2\) and \(\lim \nolimits _{n\rightarrow \infty }k_{n,i}=0\) for each \(i=0,1,2,\ldots ,N,\) there exists a positive integer \(N_{1,i}\) such that

$$\begin{aligned} 0<k_{i}+k_{n,i}\le k_i'<2,~~\forall n\ge N_{1,i}. \end{aligned}$$

By similar argument, there exists a positive integer \(N_{2,i}\) for each \(i=0,1,2,\ldots ,N,\) such that

$$\begin{aligned} 0<\phi _{i}+\phi _{n,i}\le \phi _i'<1,~~\forall n\ge N_{2,i}. \end{aligned}$$

In addition, since \(0<c_i<c_i'< 1,\lim \nolimits _{n\rightarrow \infty }c_{n,i}=0\) and \(\lim \nolimits _{n\rightarrow \infty }\lambda _{n,i}=\lambda _i\) for each \(i=0,1,2,\ldots ,N,\) we have

$$\begin{aligned} \lim \nolimits _{n\rightarrow \infty }\Big (1-\frac{\lambda _{n,i}(c_{n,i}+c_i)}{\lambda _{n+1,i}}\Big )=1-c_i>1-c_i'>0. \end{aligned}$$

Therefore, for each \(i=0,1,2,\ldots ,N,\) there exists a positive integer \(N_{3,i}\) such that

$$\begin{aligned} 1-\frac{\lambda _{n,i}(c_{n,i}+c_i)}{\lambda _{n+1,i}}>0,~~\forall n\ge N_{3,i}. \end{aligned}$$

Now, by setting \(N_0=\max \{N_{1,i},~N_{2,i},~N_{3,i}:i=0,1,2,\ldots ,N\},\) the required result follows. \(\square \)

Lemma 4.4

Let \(\{x_n\}\) be a sequence generated by Algorithm 3.1 such that Assumption A holds. Then \(\{x_n\}\) is bounded.

Proof

Let \(p\in \Omega .\) This implies that \(T_ip\in VI(C_i,A_i),~ i=0,1,2,\ldots ,N.\) Then, by applying the triangular inequality, it follows from the definition of \(w_n\) that

$$\begin{aligned} \Vert w_n-p\Vert= & {} \Vert x_n+ \theta _n(x_n-x_{n-1})-p\Vert \nonumber \\\le & {} \Vert x_n-p\Vert +\theta _n\Vert x_n-x_{n-1}\Vert \nonumber \\= & {} \Vert x_n-p\Vert + \alpha _n\dfrac{\theta _n}{\alpha _n}\Vert x_n-x_{n-1}\Vert . \end{aligned}$$
(4.1)

By Remark 3.2, there exists \(M_1 > 0\) such that

$$\begin{aligned} \dfrac{\theta _n}{\alpha _n}\Vert x_n-x_{n-1}\Vert \le M_1, ~~~\forall ~ n\ge 1. \end{aligned}$$

Thus, it follows from (4.1) that

$$\begin{aligned} \Vert w_n-p\Vert \le \Vert x_n-p\Vert + \alpha _nM_1,~~\forall n\ge 1. \end{aligned}$$
(4.2)

Since \(y_{n,i} = P_{C_i}(T_iw_n-\lambda _{n,i}A_iT_iw_n)\) and \(T_ip\in VI(C_i,A_i),~i=0,1,2\ldots , N,\) by the property of the projection map it follows that

$$\begin{aligned} \langle y_{n,i}-T_iw_n+ \lambda _{n,i}A_iT_iw_n, y_{n,i}-T_ip \rangle \le 0. \end{aligned}$$
(4.3)

Moreover, since \(y_{n,i}\in C_i,~i=0,1,2,\ldots ,N,\) we have

$$\begin{aligned} \langle A_iT_ip, y_{n,i}-T_ip \rangle \ge 0, \end{aligned}$$

which, by the pseudomonotonicity of \(A_i,\) implies that \(\langle A_iy_{n,i}, y_{n,i}-T_ip\rangle \ge 0.\) Since \(\lambda _{n,i}>0,~i=0,1,2,\ldots ,N,\) we have

$$\begin{aligned} \langle \lambda _{n,i} A_iy_{n,i}, y_{n,i}-T_ip \rangle \ge 0. \end{aligned}$$
(4.4)

From (4.3) and (4.4) we obtain

$$\begin{aligned} \langle T_iw_n-y_{n,i}-\lambda _{n,i}(A_iT_iw_n-A_iy_{n,i}), y_{n,i}-T_ip \rangle \ge 0. \end{aligned}$$
(4.5)

Now, applying the definition of \(r_{n,i}\) and (4.5) we get

$$\begin{aligned} \langle T_iw_n-T_ip,r_{n,i} \rangle= & {} \langle T_iw_n-y_{n,i},r_{n,i} \rangle + \langle y_{n,i}-T_ip,r_{n,i} \rangle \nonumber \\= & {} \langle T_iw_n-y_{n,i},r_{n,i} \rangle \nonumber \\{} & {} + \langle T_iw_n-y_{n,i}-\lambda _{n,i}(A_iT_iw_n - A_iy_{n,i}), y_{n,i}-T_ip \rangle \nonumber \\\ge & {} \langle T_iw_n-y_{n,i},r_{n,i} \rangle . \end{aligned}$$
(4.6)

Since \(z_{n,i}=T_iw_n-\beta _{n,i}r_{n,i},\) it follows that

$$\begin{aligned} \Vert \beta _{n,i} r_{n,i}\Vert ^2=\Vert z_{n,i}-T_iw_n\Vert ^2. \end{aligned}$$
(4.7)

By Lemma 4.3, there exists a positive integer \(N_0\) such that \(0<k_i+k_{n,i}<2~\forall n\ge N_0.\) From the definition of \(\beta _{n,i},\) if \(r_{n,i}\ne 0,~ i=0,1,2,\ldots ,N,\) we have

$$\begin{aligned} \beta _{n,i}\Vert r_{n,i}\Vert ^2=(k_i+k_{n,i})\langle T_iw_n-y_{n,i},r_{n,i}\rangle . \end{aligned}$$
(4.8)

Now, by applying Lemma 2.3, (4.6), (4.7) and (4.8) we get

$$\begin{aligned} \Vert z_{n,i}-T_ip\Vert ^2= & {} \Vert T_iw_n-\beta _{n,i}r_{n,i}-T_ip\Vert ^2\nonumber \\= & {} \Vert T_iw_n-T_ip\Vert ^2 + \beta _{n,i}^2\Vert r_{n,i}\Vert ^2 - 2\beta _{n,i}\langle T_iw_n-T_ip,r_{n,i} \rangle \nonumber \\\le & {} \Vert T_iw_n-T_ip\Vert ^2 + \beta _{n,i}^2\Vert r_{n,i}\Vert ^2 - 2\beta _{n,i}\langle T_iw_n-y_{n,i},r_{n,i} \rangle \nonumber \\= & {} \Vert T_iw_n-T_ip\Vert ^2 + \beta _{n,i}^2\Vert r_{n,i}\Vert ^2 -\frac{2}{k_i+k_{n,i}}{\beta _{n,i}^2}\Vert r_{n,i}\Vert ^2\nonumber \\= & {} \Vert T_iw_n-T_ip\Vert ^2 +\Big (1-\frac{2}{k_i+k_{n,i}}\Big )\Vert z_{n,i}-T_iw_n\Vert ^2\nonumber \\\le & {} \Vert T_iw_n-T_ip\Vert ^2. \end{aligned}$$
(4.9)

Observe that if \(r_{n,i}=0,~i=0,1,2,\ldots ,N,\) (4.9) still holds.

Next, since the function \(\Vert \cdot \Vert ^2\) is convex, we have

$$\begin{aligned} \Vert b_n-p\Vert ^2= & {} \Vert \sum _{i=0}^{N}\delta _{n,i}\big (w_n+\eta _{n,i}T_i^*(z_{n,i}-T_iw_n)\big )-p\Vert ^2\nonumber \\\le & {} \sum _{i=0}^{N}\delta _{n,i}\Vert w_n+\eta _{n,i}T_i^*(z_{n,i}-T_iw_n)-p\Vert ^2. \end{aligned}$$
(4.10)

By Lemma 4.3, there exists a positive integer \(N_0\) such that \(0<\phi _{n,i}+\phi _i<1,~~i=0,1,2,\ldots ,N,\) for all \(n\ge N_0.\) Now, from (4.10) and by applying Lemma 2.3 and (4.9) we have

$$\begin{aligned}{} & {} \Vert w_n+\eta _{n,i}T_i^*(z_{n,i}-T_iw_n)-p\Vert ^2 \nonumber \\{} & {} \quad = \Vert w_n-p\Vert ^2 + \eta _{n,i}^2 \Vert T_i^*(z_{n,i}-T_iw_n)\Vert ^2 + 2\eta _{n,i}\langle w_n-p, T_i^*(z_{n,i}-T_iw_n)\rangle \nonumber \\{} & {} \quad = \Vert w_n-p\Vert ^2 + \eta _{n,i}^2 \Vert T_i^*(z_{n,i}-T_iw_n)\Vert ^2 + 2\eta _{n,i}\langle T_iw_n-T_ip, z_{n,i}-T_iw_n\rangle \nonumber \\{} & {} \quad = \Vert w_n-p\Vert ^2 + \eta _{n,i}^2 \Vert T_i^*(z_{n,i}-T_iw_n)\Vert ^2 + \eta _{n,i}[\Vert z_{n,i}-T_ip\Vert ^2-\Vert T_iw_n-T_ip\Vert ^2\nonumber \\{} & {} \qquad -\Vert z_{n,i}-T_iw_n\Vert ^2]\nonumber \\{} & {} \quad \le \Vert w_n-p\Vert ^2 + \eta _{n,i}^2 \Vert T_i^*(z_{n,i}-T_iw_n)\Vert ^2 - \eta _{n,i}\Vert z_{n,i}-T_iw_n\Vert ^2\nonumber \\{} & {} \quad = \Vert w_n-p\Vert ^2 - \eta _{n,i}[\Vert z_{n,i}-T_iw_n\Vert ^2-\eta _{n,i}\Vert T_i^*(z_{n,i}-T_iw_n)\Vert ^2]. \end{aligned}$$
(4.11)

If \(\Vert T_i^*(z_{n,i}-T_iw_n)\Vert \ne 0,\) then using the definition of \(\eta _{n,i}\) we have

$$\begin{aligned} \Vert z_{n,i}-T_iw_n\Vert ^2-\eta _{n,i}\Vert T_i^*(z_{n,i}-T_iw_n)\Vert ^2 = [1-(\phi _{n,i}+\phi _i)]\Vert T_iw_n-z_{n,i}\Vert ^2\ge 0.\nonumber \\ \end{aligned}$$
(4.12)

Thus, by applying (4.12) in (4.11) and substituting in (4.10) we have

$$\begin{aligned} \Vert b_n-p\Vert ^2\le & {} \Vert w_n-p\Vert ^2-\sum _{i=0}^{N}\delta _{n,i}\eta _{n,i}[1-(\phi _{n,i}+\phi _i)]\Vert T_iw_n-z_{n,i}\Vert ^2\nonumber \\\le & {} \Vert w_n-p\Vert ^2. \end{aligned}$$
(4.13)

Observe that if \(\Vert T_i^*(z_{n,i}-T_iw_n)\Vert =0,\) (4.13) still holds from (4.11).

By the definition of \(x_{n+1},\) we have

$$\begin{aligned} \Vert x_{n+1}-p\Vert= & {} \Vert (1-\alpha _n-\xi _n)(w_n-p) + \xi _n(b_n-p)-\alpha _np\Vert \nonumber \\\le & {} \Vert (1-\alpha _n-\xi _n)(w_n-p) + \xi _n(b_n-p)\Vert + \alpha _n\Vert p\Vert . \end{aligned}$$
(4.14)

Applying Lemma 2.3(ii) together with (4.13) we have

$$\begin{aligned}&\Vert (1-\alpha _n-\xi _n)(w_n-p) + \xi _n(b_n-p)\Vert ^2\\&\quad = (1-\alpha _n-\xi _n)^2\Vert w_n-p\Vert ^2 + 2(1-\alpha _n-\xi _n)\xi _n\langle w_n-p, b_n-p \rangle \\&\qquad + \xi _n^2\Vert b_n-p\Vert ^2\\&\quad \le (1-\alpha _n-\xi _n)^2\Vert w_n-p\Vert ^2 + 2(1-\alpha _n-\xi _n)\xi _n\Vert w_n-p\Vert \Vert b_n-p\Vert \\&\qquad + \xi _n^2\Vert b_n-p\Vert ^2\\&\quad \le (1-\alpha _n-\xi _n)^2\Vert w_n-p\Vert ^2 + (1-\alpha _n-\xi _n)\xi _n\big [\Vert w_n-p\Vert ^2 + \Vert b_n-p\Vert ^2\big ] \\&\qquad + \xi _n^2\Vert b_n-p\Vert ^2\\&\quad =(1-\alpha _n-\xi _n)(1-\alpha _n)\Vert w_n-p\Vert ^2 + \xi _n(1-\alpha _n)\Vert b_n-p\Vert ^2\\&\quad \le (1-\alpha _n-\xi _n)(1-\alpha _n)\Vert w_n-p\Vert ^2 + \xi _n(1-\alpha _n)\Vert w_n-p\Vert ^2\\&\quad =(1-\alpha _n)^2\Vert w_n-p\Vert ^2, \end{aligned}$$

which implies that

$$\begin{aligned} \Vert (1-\alpha _n-\xi _n)(w_n-p) + \xi _n(b_n-p)\Vert \le (1-\alpha _n)\Vert w_n-p\Vert . \end{aligned}$$
(4.15)

Now, applying (4.2) and (4.15) in (4.14), we have for all \(n\ge N_0\)

$$\begin{aligned} \Vert x_{n+1}-p\Vert&\le (1-\alpha _n)\Vert w_n-p\Vert + \alpha _n\Vert p\Vert \\&\le (1-\alpha _n)\big [\Vert x_n-p\Vert + \alpha _nM_1\big ] + \alpha _n\Vert p\Vert \\&\le (1-\alpha _n)\Vert x_n-p\Vert + \alpha _n\big [\Vert p\Vert + M_1 \big ]\\&\le \max \big \{\Vert x_n-p\Vert , \Vert p\Vert + M_1 \big \}\\&~~\vdots \\&\le \max \big \{\Vert x_{N_0}-p\Vert , \Vert p\Vert + M_1 \big \}, \end{aligned}$$

which implies that \(\{x_n\}\) is bounded. Hence, \(\{w_n\},\{y_{n,i}\},\{z_{n,i}\},\{r_{n,i}\}\) and \(\{b_n\}\) are all bounded. \(\square \)

Lemma 4.5

Suppose \(\{w_n\}\) and \(\{b_n\}\) are two sequences generated by Algorithm 3.1 with subsequences \(\{w_{n_k}\}\) and \(\{b_{n_k}\},\) respectively, such that \(\lim \nolimits _{k\rightarrow \infty }\Vert w_{n_k}-b_{n_k}\Vert =0.\) If \(w_{n_k}\rightharpoonup z\in H,\) then \(z\in \Omega .\)

Proof

From (4.13), we have

$$\begin{aligned} \Vert b_{n_k}-p\Vert ^2&\le \Vert w_{n_k}-p\Vert ^2-\sum \limits _{i=0}^{N}\delta _{{n_k},i}\eta _{n_k,i}[1-(\phi _{{n_k},i}+\phi _i)]\Vert T_iw_{n_k}-z_{{n_k},i}\Vert ^2.\nonumber \\ \end{aligned}$$
(4.16)

From this, we obtain

$$\begin{aligned}{} & {} \sum _{i=0}^{N}\delta _{{n_k},i}\eta _{n_k,i}[1-(\phi _{{n_k},i}+\phi _i)]\Vert T_iw_{n_k}-z_{{n_k},i}\Vert ^2\nonumber \\{} & {} \quad \le \Vert w_{n_k}-p\Vert ^2-\Vert b_{n_k}-p\Vert ^2 \nonumber \\{} & {} \quad \le \Vert w_{n_k}-b_{n_k}\Vert ^2+ 2\Vert w_{n_k}-b_{n_k}\Vert {\Vert b_{n_k}-p\Vert .} \end{aligned}$$
(4.17)

Since by the hypothesis of the lemma \(\lim \nolimits _{k\rightarrow \infty }\Vert w_{n_k}-b_{n_k}\Vert =0,\) it follows from (4.17) that

$$\begin{aligned} \sum _{i=0}^{N}\delta _{{n_k},i}\eta _{n_k,i}[1-(\phi _{{n_k},i}+\phi _i)]\Vert T_iw_{n_k}-z_{{n_k},i}\Vert ^2\rightarrow 0,\quad k\rightarrow \infty , \end{aligned}$$

which implies that

$$\begin{aligned} \delta _{{n_k},i}\eta _{n_k,i}[1-(\phi _{{n_k},i}+\phi _i)]\Vert T_iw_{n_k}-z_{{n_k},i}\Vert ^2\rightarrow 0,\quad k\rightarrow \infty ,~~\forall i=0,1,2,\ldots ,N. \end{aligned}$$

By the definition of \(\eta _{n,i},\) we have

$$\begin{aligned}{} & {} \delta _{{n_k},i}(\phi _{{n_k},i}+\phi _i)[1-(\phi _{{n_k},i}+\phi _i)]\frac{\Vert T_iw_{n_k}-z_{{n_k},i}\Vert ^4}{\Vert T_i^*(T_iw_{n_k}-z_{{n_k},i})\Vert ^2}\rightarrow 0,\quad \\{} & {} \quad k\rightarrow \infty ,~~\forall i=0,1,2,\ldots ,N, \end{aligned}$$

which implies that

$$\begin{aligned} \frac{\Vert T_iw_{n_k}-z_{{n_k},i}\Vert ^2}{\Vert T_i^*(T_iw_{n_k}-z_{{n_k},i})\Vert }\rightarrow 0,\quad k\rightarrow \infty ,~~\forall i=0,1,2,\ldots ,N, \end{aligned}$$

Since \(\{\Vert T_i^*(T_iw_{n_k}-z_{{n_k},i})\Vert \}\) is bounded, it follows that

$$\begin{aligned} \Vert T_iw_{n_k}-z_{{n_k},i}\Vert \rightarrow 0,\quad k\rightarrow \infty ,~~\forall i=0,1,2,\ldots ,N. \end{aligned}$$
(4.18)

Thus, we have

$$\begin{aligned}{} & {} \Vert T_i^*(T_iw_{n_k}-z_{{n_k},i})\Vert \le \Vert T_i^*\Vert \Vert (T_iw_{n_k}-z_{{n_k},i})\Vert = \Vert T_i\Vert \Vert (T_iw_{n_k}-z_{{n_k},i})\Vert \rightarrow 0,\quad \nonumber \\{} & {} \quad k\rightarrow \infty ,~~\forall i=0,1,2,\ldots ,N. \end{aligned}$$
(4.19)

By the definition of \(\lambda _{n+1,i},\) it follows that

$$\begin{aligned}{} & {} \langle T_iw_{n_k}-y_{n_k,i},r_{n_k,i} \rangle \nonumber \\{} & {} \quad = \langle T_iw_{n_k}-y_{n_k,i}, T_iw_{n_k}-y_{n_k,i}-\lambda _{n_k,i}(A_iT_iw_{n_k} - A_iy_{n_k,i}) \rangle \nonumber \\{} & {} \quad =\Vert T_iw_{n_k}-y_{n_k,i}\Vert ^2 -\langle T_iw_{n_k}-y_{n_k,i}, \lambda _{n_k,i}(A_iT_iw_{n_k} - A_iy_{n_k,i}) \rangle \nonumber \\{} & {} \quad \ge \Vert T_iw_{n_k}-y_{n_k,i}\Vert ^2 -\lambda _{n_k,i}\Vert T_iw_{n_k}-y_{n_k,i}\Vert \Vert A_iT_iw_{n_k} - A_iy_{n_k,i}\Vert \nonumber \\{} & {} \quad \ge \Vert T_iw_{n_k}-y_{n_k,i}\Vert ^2 - \frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i) \Vert T_iw_{n_k}-y_{n_k,i}\Vert ^2\nonumber \\{} & {} \quad =\Big (1-\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)\Big ) \Vert T_iw_{n_k}-y_{n_k,i}\Vert ^2. \end{aligned}$$
(4.20)

From Lemma 4.1 we know that \(\lim \nolimits _{k\rightarrow \infty }\lambda _{n_k,i}=\lambda _i,~~i=0,1,2,\ldots ,N,\) and by Lemma 4.3, there exists a positive integer \(N_0\) such that \(1-\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)>0,~~ \forall n\ge N_0,~ i=0,1,2,\ldots ,N.\) If \(r_{n_k,i}\ne 0,\) then by applying the continuity of \(A_i\) and the definitions of \(\beta _{n,i}, r_{n,i}\) and \(z_{n,i},~~ i=0,1,2,\ldots ,N,\) from (4.20) we have

$$\begin{aligned} \Vert T_iw_{n_k}-y_{n_k,i}\Vert ^2&\le \frac{1}{1-\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)} \langle T_iw_{n_k}-y_{n_k,i},r_{n_k,i} \rangle \\&= \frac{\beta _{n_k,i}\Vert r_{n_k,i}\Vert ^2}{(k_i+k_{n_k,i})\Big (1-\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)\Big )}\\&= \frac{\beta _{n_k,i}\Vert r_{n_k,i}\Vert }{(k_i+k_{n_k,i})\Big (1-\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)\Big )}\Vert T_iw_{n_k}-y_{n_k,i}-\lambda _{n_k,i}(A_iT_iw_{n_k}-A_iy_{n_k,i})\Vert \\&\le \frac{\beta _{n_k,i}\Vert r_{n_k,i}\Vert }{(k_i+k_{n_k,i})\Big (1-\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)\Big )}\Big (\Vert T_iw_{n_k}-y_{n_k,i}\Vert +\lambda _{n_k,i}\Vert A_iT_iw_{n_k}-A_iy_{n_k,i}\Vert \Big )\\&\le \frac{\beta _{n_k,i}\Vert r_{n_k,i}\Vert }{(k_i+k_{n_k,i})\Big (1-\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)\Big )}\Big (1+\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)\Big )\Vert T_iw_{n_k}-y_{n_k,i}\Vert \\&= \frac{1+\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)}{(k_i+k_{n_k,i})\Big (1-\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)\Big )}\Vert T_iw_{n_k}-z_{n_k,i}\Vert \,\Vert T_iw_{n_k}-y_{n_k,i}\Vert . \end{aligned}$$
(4.21)

Thus, we have

$$\begin{aligned} \Vert T_iw_{n_k}-y_{n_k,i}\Vert \le \frac{\Big (1+\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)\Big )}{(k_i+k_{n_k,i})\Big (1-\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}(c_{n_k,i}+c_i)\Big )}\Vert T_iw_{n_k}-z_{n_k,i}\Vert .\nonumber \\ \end{aligned}$$
(4.22)

Since \(\lim \nolimits _{k\rightarrow \infty }c_{n_k,i}=\lim \nolimits _{k\rightarrow \infty }k_{n_k,i}=0\) and, by Lemma 4.1, \(\lim \nolimits _{k\rightarrow \infty }\frac{\lambda _{n_k,i}}{\lambda _{n_k+1,i}}=1,~~i=0,1,2,\ldots ,N,\) it follows from (4.22) and (4.18) that

$$\begin{aligned} \Vert T_iw_{n_k}-y_{n_k,i}\Vert \rightarrow 0,\quad k\rightarrow \infty ,~~ \forall i=0,1,2,\ldots ,N. \end{aligned}$$
(4.23)

If \(r_{n_k,i}=0,\) then from (4.20) we know that (4.23) still holds.

Since \(y_{n,i} = P_{C_i}(T_iw_n-\lambda _{n,i}A_iT_iw_n),\) by the property of the projection map we have

$$\begin{aligned}&\langle T_iw_{n_k}-\lambda _{n_k,i}A_iT_iw_{n_k}-y_{n_k,i}, T_ix- y_{n_k,i}\rangle \le 0,\quad \forall ~ T_ix \in C_i,\quad \\&\quad i=0,1,2,\ldots ,N, \end{aligned}$$

which implies that

$$\begin{aligned}{} & {} \frac{1}{\lambda _{n_k,i}} \langle T_iw_{n_k}-y_{n_k,i}, T_ix- y_{n_k,i} \rangle \le \langle A_iT_iw_{n_{k}}, T_ix- y_{n_k,i} \rangle ,\quad \\{} & {} \quad \forall ~ T_ix \in C_i,\quad i=0,1,2,\ldots ,N. \end{aligned}$$

From the last inequality, we get

$$\begin{aligned}{} & {} \frac{1}{\lambda _{n_k,i}} \langle T_iw_{n_k}-y_{n_k,i}, T_ix- y_{n_k,i} \rangle + \langle A_iT_iw_{n_{k}}, y_{n_k,i} -T_iw_{n_{k}}\rangle \nonumber \\{} & {} \quad \le \langle A_iT_iw_{n_{k}}, T_ix-T_iw_{n_{k}}\rangle ,~~ \forall ~ T_ix \in C_i,~~ i=0,1,2,\ldots ,N. \end{aligned}$$
(4.24)

By applying (4.23) and the fact that \(\lim \nolimits _{k\rightarrow \infty }\lambda _{n_k,i}=\lambda _i>0,\) from (4.24) we obtain

$$\begin{aligned} \liminf _{k\rightarrow \infty }\langle A_iT_iw_{n_{k}}, T_ix-T_iw_{n_{k}}\rangle \ge 0,\quad \forall ~ T_ix \in C_i,~~ i=0,1,2,\ldots ,N.\nonumber \\ \end{aligned}$$
(4.25)

Observe that

$$\begin{aligned}{} & {} \langle A_i y_{n_k,i}, T_ix-y_{n_k,i}\rangle =\langle A_i y_{n_k,i}- A_iT_iw_{n_{k}}, T_ix- T_iw_{n_{k}} \rangle \nonumber \\{} & {} \quad + \langle A_iT_iw_{n_{k}}, T_ix -T_iw_{n_{k}} \rangle + \langle A_i y_{n_k,i}, T_iw_{n_{k}}-y_{n_k,i}\rangle . \end{aligned}$$
(4.26)

By the continuity of \(A_i,\) from (4.23) we have

$$\begin{aligned} \Vert A_iT_iw_{n_k}-A_iy_{n_k,i}\Vert \rightarrow 0,\quad k\rightarrow \infty ,~~ \forall i=0,1,2,\ldots ,N. \end{aligned}$$
(4.27)

By applying (4.23) and (4.27), we obtain from (4.25) and (4.26) that

$$\begin{aligned} \liminf _{k\rightarrow \infty }\langle A_i y_{n_k,i}, T_ix-y_{n_k,i}\rangle \ge 0,\quad \forall ~ T_ix \in C_i,~~ i=0,1,2,\ldots ,N. \end{aligned}$$
(4.28)

Next, let \(\{\Theta _{k,i}\}\) be a decreasing sequence of positive numbers such that \(\Theta _{k,i}\rightarrow 0\) as \(k\rightarrow \infty ,~i=0,1,2,\ldots ,N.\) For each k,  let \(N_{k}\) denote the smallest positive integer such that

$$\begin{aligned} \langle A_i y_{n_j,i}, T_ix-y_{n_j,i}\rangle + \Theta _{k,i} \ge 0, \quad \forall ~ j\ge N_{k}, ~ T_ix \in C_i,~~ i=0,1,2,\ldots ,N, \nonumber \\ \end{aligned}$$
(4.29)

where the existence of \(N_{k}\) follows from (4.28). Since \(\{\Theta _{k,i}\}\) is decreasing, then \(\{N_{k}\}\) is increasing. Furthermore, since \(\{y_{N_{k},i}\}\subset C_i\) for each k,  we can suppose \(A_iy_{N_{k},i}\ne 0\) (otherwise, \(y_{N_{k},i}\in VI(C_i,A_i),~i=0,1,2\ldots ,N\)) and let

$$\begin{aligned} u_{N_{k},i}=\frac{A_iy_{N_{k},i}}{\Vert A_iy_{N_{k},i}\Vert ^{2}} \end{aligned}$$

Then, \(\langle A_iy_{N_{k},i}, u_{N_{k},i} \rangle =1\) for each \(k,~i=0,1,2,\ldots ,N.\) From (4.29), we obtain

$$\begin{aligned} \langle A_iy_{N_{k},i}, T_ix+ \Theta _{k,i}u_{N_{k},i}-y_{N_{k},i} \rangle \ge 0,\quad \forall ~ T_ix\in C_i,~i=0,1,2,\ldots ,N. \end{aligned}$$

By the pseudomonotonicity of \(A_i\), we obtain

$$\begin{aligned} \langle A_i(T_ix+\Theta _{k,i}u_{N_{k},i}), T_ix + \Theta _{k,i}u_{N_{k},i}-y_{N_{k},i}\rangle \ge 0,\quad \forall ~ T_ix\in C_i,~i=0,1,2,\ldots ,N, \end{aligned}$$

which is equivalent to

$$\begin{aligned}{} & {} \langle A_iT_ix, T_ix -y_{N_{k},i}\rangle \ge \langle A_iT_ix - A_i(T_ix + \Theta _{k,i}u_{N_{k},i}), T_ix \nonumber \\{} & {} \quad + \Theta _{k,i}u_{N_{k},i}-y_{N_{k},i}\rangle -\Theta _{k,i} \langle A_iT_ix, u_{N_{k},i}\rangle ,~ \forall T_ix\in C_i,i=0,1,\ldots ,N. \nonumber \\ \end{aligned}$$
(4.30)

To complete the proof, we need to show that \(\lim \nolimits _{k\rightarrow \infty }\Theta _{k,i}u_{N_{k},i}=0.\) Since \(w_{n_k}\rightharpoonup z\) and \(T_i\) is a bounded linear operator for each \(i=0,1,2,\ldots ,N,\) we have \(T_iw_{n_k}\rightharpoonup T_iz,~~\forall i=0,1,2,\ldots ,N.\) Thus, from (4.23) we get \(y_{n_k,i}\rightharpoonup T_iz,~~\forall ~ i=0,1,2,\ldots ,N.\) Since \(\{y_{n_k,i}\}\subset C_i,~i=0,1,2,\ldots ,N,\) we have \(T_iz\in C_i.\) If \(A_iT_iz=0,~\forall ~ i=0,1,2,\ldots ,N,\) then \(T_iz\in VI(C_i,A_i)~\forall ~ i=0,1,2,\ldots ,N,\) which implies that \(z\in \Omega .\) Otherwise, we suppose \(A_iT_iz\ne 0,~\forall ~ i=0,1,2,\ldots ,N.\) Since \(A_i\) satisfies condition (3.1), we have for all \(i=0,1,2,\ldots ,N\)

$$\begin{aligned} 0<\Vert A_iT_iz\Vert \le \underset{k\rightarrow \infty }{\lim \inf }\Vert A_iy_{n_{k},i}\Vert . \end{aligned}$$

Using the facts that \(\{y_{N_{k},i}\} \subset \{y_{n_{k},i}\}\) and \(\Theta _{k,i}\rightarrow 0\) as \(k\rightarrow \infty ,~i=0,1,2\ldots ,N,\) we have

$$\begin{aligned} 0\le \underset{k\rightarrow \infty }{\limsup }\Vert \Theta _{k,i}u_{N_{k},i}\Vert =\underset{k\rightarrow \infty }{\limsup }\Big (\frac{\Theta _{k,i}}{\Vert A_iy_{N_{k},i}\Vert }\Big )\le \frac{\underset{k\rightarrow \infty }{\limsup }~\Theta _{k,i}}{\underset{k\rightarrow \infty }{\liminf }\Vert A_i y_{n_{k},i}\Vert }=0, \end{aligned}$$

which implies that \(\lim \nolimits _{k\rightarrow \infty }\Theta _{k,i}u_{N_{k},i}=0.\) Applying the facts that \(A_i\) is continuous, \(\{y_{N_{k},i}\}\) and \(\{u_{N_{k},i}\}\) are bounded and \(\lim \nolimits _{k\rightarrow \infty }\Theta _{k,i}u_{N_{k},i}=0,\) from (4.30) we obtain

$$\begin{aligned} \underset{k\rightarrow \infty }{\lim \inf }\ \langle A_iT_ix, T_ix -y_{N_{k},i}\rangle \ge 0,\quad \forall ~ T_ix\in C_i,~i=0,1,2,\ldots ,N. \end{aligned}$$

From the last inequality, we obtain

$$\begin{aligned}{} & {} \langle A_iT_ix, T_ix-T_iz\rangle = \lim _{_{k\rightarrow \infty }} \langle A_iT_ix, T_ix -y_{N_{k},i}\rangle = \underset{k\rightarrow \infty }{\lim \inf }\ \langle A_iT_ix, T_ix -y_{N_{k},i}\rangle \\{} & {} \quad \ge 0,~\forall ~ T_ix\in C_i,~i=0,1,2,\ldots ,N. \end{aligned}$$

By Lemma 2.6, we have

$$\begin{aligned} T_iz \in VI(C_i,A_i),~ i=0,1,2,\ldots ,N, \end{aligned}$$

which implies that

$$\begin{aligned} z \in T_i^{-1}\big (VI(C_i,A_i)\big ),~ i=0,1,2,\ldots ,N, \end{aligned}$$

Thus, we have \(z \in \bigcap _{i=0}^N T_i^{-1}\big (VI(C_i,A_i)\big ),\) which implies that \(z\in \Omega \) as required. \(\square \)

Lemma 4.6

Let \(\{x_n\}\) be a sequence generated by Algorithm 3.1 under Assumption A. Then, the following inequality holds for all \(p\in \Omega :\)

$$\begin{aligned} \Vert x_{n+1}-p\Vert ^2\le&(1-\alpha _n)||x_n - p||^2 \\&+\alpha _n\Big [ 3M_2(1-\alpha _n)^2\frac{\theta _n}{\alpha _n}\Vert x_n - x_{n-1}\Vert +2\langle p,p-x_{n+1} \rangle \Big ]\\&-\xi _n(1-\alpha _n)\sum _{i=0}^{N}\delta _{n,i}\eta _{n,i}[1-(\phi _{n,i}+\phi _i)]\Vert T_iw_n-z_{n,i}\Vert ^2. \end{aligned}$$

Proof

Let \(p\in \Omega .\) Then, by applying Lemma 2.3 together with the Cauchy–Schwarz inequality we have

$$\begin{aligned} \Vert w_n - p\Vert ^2= & {} \Vert x_n + \theta _n(x_n - x_{n-1}) - p\Vert ^2\nonumber \\= & {} \Vert x_n - p\Vert ^2 + \theta _n^2\Vert x_n - x_{n-1}\Vert ^2 + 2\theta _n\langle x_n - p, x_n - x_{n-1} \rangle \nonumber \\\le & {} \Vert x_n - p\Vert ^2 + \theta _n^2\Vert x_n - x_{n-1}\Vert ^2 + 2\theta _n\Vert x_n - x_{n-1}\Vert \Vert x_n - p\Vert \nonumber \\= & {} \Vert x_n - p\Vert ^2 + \theta _n\Vert x_n - x_{n-1}\Vert (\theta _n\Vert x_n - x_{n-1}\Vert + 2\Vert x_n - p\Vert )\nonumber \\\le & {} \Vert x_n - p\Vert ^2 + 3M_2\theta _n\Vert x_n - x_{n-1}\Vert \nonumber \\= & {} \Vert x_n - p\Vert ^2 + 3M_2\alpha _n\frac{\theta _n}{\alpha _n}\Vert x_n - x_{n-1}\Vert , \end{aligned}$$
(4.31)

where \(M_2:= \sup _{n\in \mathbb {N}}\{\Vert x_n - p\Vert , \theta _n\Vert x_n - x_{n-1}\Vert \}>0.\)

Next, by the definition of \(x_{n+1},\) (4.13), (4.31) and applying Lemma 2.3 we have

$$\begin{aligned} \Vert x_{n+1}-p\Vert ^2&= \Vert (1-\alpha _n-\xi _n)(w_n-p) + \xi _n(b_n-p)-\alpha _np\Vert ^2\\&\le \Vert (1-\alpha _n-\xi _n)(w_n-p) + \xi _n(b_n-p)\Vert ^2 - 2\alpha _n\langle p,x_{n+1}-p \rangle \\&=(1-\alpha _n-\xi _n)^2\Vert w_n-p\Vert ^2 + \xi _n^2\Vert b_n-p\Vert ^2 \\&\quad + 2\xi _n(1-\alpha _n-\xi _n)\langle w_n-p,b_n-p \rangle \\&\quad + 2\alpha _n\langle p,p-x_{n+1} \rangle \\&\le (1-\alpha _n-\xi _n)^2\Vert w_n-p\Vert ^2 + \xi _n^2\Vert b_n-p\Vert ^2 \\&\quad + 2\xi _n(1-\alpha _n-\xi _n)\Vert w_n-p\Vert \Vert b_n-p\Vert \\&+ 2\alpha _n\langle p,p-x_{n+1} \rangle \\&\le (1-\alpha _n-\xi _n)^2\Vert w_n-p\Vert ^2 + \xi _n^2\Vert b_n-p\Vert ^2 \\&\quad + \xi _n(1-\alpha _n-\xi _n)\big [\Vert w_n-p\Vert ^2 +\Vert b_n-p\Vert ^2\big ]\\&\quad + 2\alpha _n\langle p,p-x_{n+1} \rangle \\&=(1-\alpha _n-\xi _n)(1-\alpha _n)\Vert w_n-p\Vert ^2 + \xi _n(1-\alpha _n)\Vert b_n-p\Vert ^2 \\&\quad + 2\alpha _n\langle p,p-x_{n+1} \rangle \\&\le (1-\alpha _n-\xi _n)(1-\alpha _n)\Vert w_n-p\Vert ^2 + \xi _n(1-\alpha _n)\Big [\Vert w_n-p\Vert ^2\\&\quad -\sum _{i=0}^{N}\delta _{n,i}\eta _{n,i}[1-(\phi _{n,i}+\phi _i)]\Vert T_iw_n-z_{n,i}\Vert ^2\Big ]+2\alpha _n\langle p,p-x_{n+1} \rangle \\&=(1-\alpha _n)^2\Vert w_n-p\Vert ^2 -\xi _n(1-\alpha _n)\\&\quad \sum _{i=0}^{N}\delta _{n,i}\eta _{n,i}[1-(\phi _{n,i}+\phi _i)]\Vert T_iw_n-z_{n,i}\Vert ^2\\&\quad +2\alpha _n\langle p,p-x_{n+1} \rangle \\&\le (1-\alpha _n)^2||x_n - p||^2 + 3M_2\alpha _n(1-\alpha _n)^2\frac{\theta _n}{\alpha _n}\Vert x_n - x_{n-1}\Vert \\&\quad +2\alpha _n\langle p,p-x_{n+1} \rangle \\&\quad -\xi _n(1-\alpha _n)\sum _{i=0}^{N}\delta _{n,i}\eta _{n,i}[1-(\phi _{n,i}+\phi _i)]\Vert T_iw_n-z_{n,i}\Vert ^2\\&\le (1-\alpha _n)||x_n - p||^2 +\alpha _n\Big [ 3M_2(1-\alpha _n)^2\frac{\theta _n}{\alpha _n}\Vert x_n - x_{n-1}\Vert \\&\quad +2\langle p,p-x_{n+1} \rangle \Big ]\\&\quad -\xi _n(1-\alpha _n)\sum _{i=0}^{N}\delta _{n,i}\eta _{n,i}[1-(\phi _{n,i}+\phi _i)]\Vert T_iw_n-z_{n,i}\Vert ^2, \end{aligned}$$

which is the required inequality. \(\square \)

Theorem 4.7

Let \(\{x_n\}\) be a sequence generated by Algorithm 3.1 such that Assumption A holds. Then, \(\{x_n\}\) converges strongly to \(\hat{x}\in \Omega ,\) where \(\Vert \hat{x}\Vert = \min \{\Vert p\Vert :p\in \Omega \}.\)

Proof

Let \(\hat{x}=P_\Omega (0),\) that is, \(\Vert \hat{x}\Vert = \min \{\Vert p\Vert :p\in \Omega \}.\) Then, from Lemma 4.6 we obtain

$$\begin{aligned}{} & {} \Vert x_{n+1}-\hat{x}\Vert ^2\le (1-\alpha _n)||x_n - \hat{x}||^2\nonumber \\{} & {} \qquad +\alpha _n\Big [ 3M_2(1-\alpha _n)^2\frac{\theta _n}{\alpha _n}\Vert x_n - x_{n-1}\Vert +2\langle \hat{x},\hat{x}-x_{n+1} \rangle \Big ]\nonumber \\{} & {} \quad =(1-\alpha _n)||x_n - \hat{x}||^2 +\alpha _nd_n, \end{aligned}$$
(4.32)

where \(d_n=3M_2(1-\alpha _n)^2\frac{\theta _n}{\alpha _n}\Vert x_n - x_{n-1}\Vert +2\langle \hat{x},\hat{x}-x_{n+1} \rangle .\)

Now, we claim that the sequence \(\{\Vert x_n-\hat{x}\Vert \}\) converges to zero. In view of Lemma 2.4, it suffices to show that \(\limsup \nolimits _{k\rightarrow \infty }d_{n_k}\le 0\) for every subsequence \(\{\Vert x_{n_k}-\hat{x}\Vert \}\) of \(\{\Vert x_n-\hat{x}\Vert \}\) satisfying

$$\begin{aligned} \liminf \limits _{k \rightarrow \infty }\left( \Vert x_{{n_k}+1}-\hat{x}\Vert -\Vert x_{n_k}-\hat{x}\Vert \right) \ge 0. \end{aligned}$$
(4.33)

Suppose that \(\{\Vert x_{n_k}-\hat{x}\Vert \}\) is a subsequence of \(\{\Vert x_n-\hat{x}\Vert \}\) such that (4.33) holds. Again, from Lemma 4.6, we obtain

$$\begin{aligned} \xi _{n_k}(1-\alpha _{n_k})\sum _{i=0}^N\delta _{{n_k},i}\eta _{{n_k},i}[1&-(\phi _{{n_k},i}+\phi _i)]\Vert T_iw_{n_k}-z_{{n_k},i}\Vert ^2\\&\le (1-\alpha _{n_k})\Vert x_{n_k}-\hat{x}\Vert ^2 - \Vert x_{{n_k}+1}-\hat{x}\Vert ^2 \\&\quad + \alpha _{n_k}\Big [ 3M_2(1-\alpha _{n_k})^2\frac{\theta _{n_k}}{\alpha _{n_k}}\Vert x_{n_k} - x_{{n_k}-1}\Vert \\&\quad +2\langle \hat{x},\hat{x}-x_{{n_k}+1} \rangle \Big ]. \end{aligned}$$

By (4.33), Remark 3.2 and the fact that \(\lim \nolimits _{k\rightarrow \infty }\alpha _{n_k}=0,\) we have

$$\begin{aligned} \xi _{n_k}(1-\alpha _{n_k})\sum _{i=0}^N\delta _{{n_k},i}\eta _{{n_k},i}[1-(\phi _{{n_k},i}+\phi _i)]\Vert T_iw_{n_k}-z_{{n_k},i}\Vert ^2 \rightarrow 0,\quad k\rightarrow \infty . \end{aligned}$$

Thus, we get

$$\begin{aligned} \lim \nolimits _{k\rightarrow \infty }\Vert T_iw_{n_k}-z_{{n_k},i}\Vert = 0,\quad \forall i=0,1,2,\ldots ,N. \end{aligned}$$
(4.34)

It follows that

$$\begin{aligned} \Vert T_i^*(z_{{n_k},i}-T_iw_{n_k})\Vert \le \Vert T_i^*\Vert \Vert z_{{n_k},i}-T_iw_{n_k}\Vert \rightarrow 0,\quad k\rightarrow \infty ~~\forall i=0,1,2,\ldots ,N.\nonumber \\ \end{aligned}$$
(4.35)

By the definition of \(b_n\) and by applying (4.35), we obtain

$$\begin{aligned} \Vert b_{n_k}-w_{n_k}\Vert= & {} \Vert \sum _{i=0}^{N}\delta _{n_k,i}\big (w_{n_k}+\eta _{n_k,i}T_i^*(z_{n_k,i}-T_iw_{n_k})\big )-w_{n_k}\Vert \nonumber \\\le & {} \sum _{i=0}^{N}\delta _{n_k,i}\eta _{n_k,i}\Vert T_i^*(z_{n_k,i}-T_iw_{n_k})\Vert \rightarrow 0. \end{aligned}$$
(4.36)

From the definition of \(w_n\) and by Remark 3.2, we get

$$\begin{aligned} \Vert w_{n_k}-x_{n_k}\Vert =\theta _{n_k}\Vert x_{n_k}-x_{n_k-1}\Vert \rightarrow 0,\quad k\rightarrow \infty . \end{aligned}$$
(4.37)

Next, from (4.36) and (4.37) we obtain

$$\begin{aligned} \Vert x_{n_k}-b_{n_k}\Vert \le \Vert x_{n_k}-w_{n_k}\Vert + \Vert w_{n_k}-b_{n_k}\Vert \rightarrow 0,\quad k\rightarrow \infty . \end{aligned}$$
(4.38)

Applying (4.37), (4.38) and the fact that \(\lim \nolimits _{k\rightarrow \infty }\alpha _{n_k}=0\) we obtain

$$\begin{aligned} \Vert x_{{n_k}+1}-x_{n_k}\Vert= & {} \Vert (1-\alpha _{n_k}-\xi _{n_k})(w_{n_k}-x_{n_k}) + \xi _{n_k}(b_{n_k}-x_{n_k})-\alpha _{n_k}x_{n_k}\Vert \nonumber \\\le & {} (1-\alpha _{n_k}-\xi _{n_k})\Vert w_{n_k}-x_{n_k}\Vert + \xi _{n_k}\Vert b_{n_k}-x_{n_k}\Vert \nonumber \\{} & {} + \alpha _{n_k}\Vert x_{n_k}\Vert \rightarrow 0,\quad k\rightarrow \infty . \end{aligned}$$
(4.39)

Since \(\{x_n\}\) is bounded, \(w_{\omega }(x_n)\ne \emptyset ,\) where \(w_{\omega }(x_n)\) denotes the set of weak cluster points of \(\{x_n\}.\) Let \(x^*\in w_{\omega }(x_n)\) be an arbitrary element. Then, there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) such that \(x_{n_k}\rightharpoonup x^*.\) It follows from (4.37) that \(w_{n_k}\rightharpoonup x^*.\) Now, invoking Lemma 4.5 and applying (4.36) we have \(x^*\in \Omega .\) Since \(x^*\in w_{\omega }(x_n)\) was chosen arbitrarily, it follows that \(w_{\omega }(x_n)\subset \Omega .\)

Next, by the boundedness of \(\{x_{n_k}\},\) there exists a subsequence \(\{x_{n_{k_j}}\}\) of \(\{x_{n_k}\}\) such that \(x_{n_{k_j}}\rightharpoonup q\) and

$$\begin{aligned} \limsup \limits _{k\rightarrow \infty }\langle \hat{x},\hat{x}-x_{n_k}\rangle =\lim \nolimits _{j\rightarrow \infty }\langle \hat{x},\hat{x}-x_{n_{k_j}}\rangle . \end{aligned}$$

Since \(q\in w_{\omega }(x_n)\subset \Omega \) and \(\hat{x}=P_\Omega (0),\) it follows from the characterization of the metric projection that

$$\begin{aligned} \limsup \limits _{k\rightarrow \infty }\langle \hat{x},\hat{x}-x_{n_k}\rangle =\lim \nolimits _{j\rightarrow \infty }\langle \hat{x},\hat{x}-x_{n_{k_j}}\rangle =\langle \hat{x},\hat{x}-q\rangle \le 0, \end{aligned}$$
(4.40)

Hence, from (4.39) and (4.40) we obtain

$$\begin{aligned} \limsup \limits _{k\rightarrow \infty }\langle \hat{x},\hat{x}-x_{{n_k}+1}\rangle \le 0. \end{aligned}$$
(4.41)

Now, by Remark 3.2 and (4.41) we have \(\limsup \limits _{k\rightarrow \infty }d_{n_k}\le 0.\) Thus, by applying Lemma 2.4 it follows from (4.32) that \(\{\Vert x_n-\hat{x}\Vert \}\) converges to zero, which completes the proof. \(\square \)

5 Applications

5.1 Split Convex Minimization Problem with Multiple Output Sets

Let C be a nonempty, closed and convex subset of a real Hilbert space H. The convex minimization problem is formulated as finding a point \(x^*\in C\), such that

$$\begin{aligned} g(x^*)=\min _{x\in C}g(x), \end{aligned}$$
(5.1)

where g is a real-valued convex function. We denote the solution set of Problem (5.1) by \(\arg \min g.\)

Let \(C, C_i\) be nonempty, closed and convex subsets of real Hilbert spaces \(H, H_i,i=1,2,...,N,\) respectively, and let \(T_i:H\rightarrow H_i,i=1,2,...,N,\) be bounded linear operators with adjoints \(T^*_i.\) Let \(g:H\rightarrow \mathbb {R}, g_i:H_i\rightarrow \mathbb {R}\) be convex and differentiable functions. Here, we apply our result to approximate the solution of the following split convex minimization problem with multiple output sets (SCMPMOS): Find \(x^*\in C\) such that

$$\begin{aligned} x^*\in \Gamma :=\arg \min g\cap \big (\mathop {\cap }\limits _{i=1}^N T^{-1}_i\big (\arg \min g_i\big )\big )\ne \emptyset . \end{aligned}$$
(5.2)
Fig. 1 Experiment 6.5: \(m = 25\)

Fig. 2 Experiment 6.5: \(m = 50\)

Fig. 3 Experiment 6.5: \(m = 100\)

Fig. 4 Experiment 6.5: \(m = 2000\)

We need the following lemma to establish our next result.

Lemma 5.1

[36] Let C be a nonempty, closed and convex subset of a real Banach space E. Let g be a convex function of E into \(\mathbb {R}.\) If g is Fréchet differentiable, then z is a solution of Problem (5.1) if and only if \(z\in VI(C,\triangledown g),\) where \(\triangledown g\) is the gradient of g.

Now, by applying Theorem 4.7 and Lemma 5.1, we obtain the following strong convergence theorem for approximating the solution of the SCMPMOS (5.2) in Hilbert spaces.

Table 1 Numerical results for Experiment 6.5

Theorem 5.2

Let \(C, C_i\) be nonempty, closed and convex subsets of real Hilbert spaces \(H, H_i,i=1,2,...,N,\) respectively, and let \(T_i:H\rightarrow H_i,i=1,2,...,N,\) be bounded linear operators with adjoints \(T^*_i.\) Let \(g:H\rightarrow \mathbb {R}, g_i:H_i\rightarrow \mathbb {R}\) be Fréchet differentiable convex functions such that \(\triangledown g, \triangledown g_i\) are uniformly continuous. Suppose that Assumption A of Theorem 4.7 holds and the solution set \(\Gamma \ne \emptyset .\) Then, the sequence \(\{x_n\}\) generated by the following algorithm converges strongly to \(\hat{x}\in \Gamma ,\) where \(\Vert \hat{x}\Vert = \min \{\Vert p\Vert :p\in \Gamma \}.\)

Algorithm 5.3.

Step 0. Select initial points \(x_0, x_1 \in H.\) Let \(C_0=C,~ T_0=I^H,~\triangledown g_0=\triangledown g\) and set \(n=1.\)

Step 1. Given the \((n-1)th\) and nth iterates, choose \(\theta _n\) such that \(0\le \theta _n\le \hat{\theta }_n\) with \(\hat{\theta }_n\) defined by

      \(\hat{\theta }_n = {\left\{ \begin{array}{ll} \min \Big \{\theta ,~ \frac{\epsilon _n}{\Vert x_n - x_{n-1}\Vert }\Big \}, \quad \text {if}~ x_n \ne x_{n-1},\\ \theta , \quad \quad \text {otherwise.} \end{array}\right. }\)

Step 2. Compute

      \(w_n = x_n + \theta _n(x_n - x_{n-1}).\)

Step 3. Compute

      \(y_{n,i} = P_{C_i}(T_iw_n-\lambda _{n,i}\triangledown g_iT_iw_n)\)

      \(\lambda _{n+1,i} = {\left\{ \begin{array}{ll} \min \{\frac{(c_{n,i}+c_i)\Vert T_iw_n - y_{n,i}\Vert }{\Vert \triangledown g_iT_iw_n - \triangledown g_iy_{n,i}\Vert },~~ \lambda _{n,i}+\rho _{n,i}\},&{}\text {if}\quad \triangledown g_iT_iw_n - \triangledown g_iy_{n,i}\ne 0,\\ \lambda _{n,i}+\rho _{n,i},&{} \text {otherwise.} \end{array}\right. }\)

      \(z_{n,i}=T_iw_n-\beta _{n,i}r_{n,i},\)

   where

      \(r_{n,i}=T_iw_n-y_{n,i}-\lambda _{n,i}(\triangledown g_iT_iw_n - \triangledown g_iy_{n,i})\)

   and

      \(\beta _{n,i}={\left\{ \begin{array}{ll} (k_i+k_{n,i})\frac{\langle T_iw_n - y_{n,i},r_{n,i}\rangle }{\Vert r_{n,i}\Vert ^2},&{}\text {if}\quad r_{n,i}\ne 0\\ 0,&{}\text {otherwise.} \end{array}\right. }\)

Step 4. Compute

      \(b_n = \sum _{i=0}^{N}\delta _{n,i}\big (w_n+\eta _{n,i}T_i^*(z_{n,i}-T_iw_n)\big ),\)

   where

      \(\eta _{n,i}={\left\{ \begin{array}{ll} \frac{(\phi _{n,i}+\phi _i)\Vert T_iw_n-z_{n,i}\Vert ^2}{\Vert T_i^*(T_iw_n-z_{n,i})\Vert ^2},&{}\text {if}\quad \Vert T_i^*(T_iw_n-z_{n,i})\Vert \ne 0,\\ 0,&{}\text {otherwise.} \end{array}\right. }\)

Step 5. Compute

      \(x_{n+1}= (1-\alpha _n-\xi _n)w_n+\xi _nb_n.\)

   Set \(n:= n +1 \) and return to Step 1.

Proof

Since \(g_i,~i=0,1,2,\ldots ,N,\) are convex and differentiable, the gradients \(\triangledown g_i\) are monotone [36] and hence pseudomonotone. Consequently, the result follows by applying Lemma 5.1 and setting \(A_i=\triangledown g_i\) in Theorem 4.7. \(\square \)
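As a simple illustration of Theorem 5.2, one may take quadratic objectives \(g_i(x)=\frac{1}{2}\Vert M_ix-b_i\Vert ^2.\) The short sketch below (with hypothetical data \(M_i, b_i\)) builds the gradients \(\triangledown g_i(x)=M_i^{T}(M_ix-b_i),\) which are monotone and uniformly continuous; they can then be passed as the cost operators A[i] to the algorithm_3_1 sketch given after Remark 3.3, together with the corresponding \(T_i,\) \(T_i^*\) and \(P_{C_i}.\)

import numpy as np

def quadratic_gradient(M, b):
    # gradient of g(x) = 0.5 * ||M x - b||^2, namely grad g(x) = M^T (M x - b)
    return lambda x: M.T @ (M @ x - b)

# Hypothetical data for N = 1 output set: g on R^5 and g_1 on R^3.
rng = np.random.default_rng(1)
M0, b0 = rng.standard_normal((5, 5)), rng.standard_normal(5)
M1, b1 = rng.standard_normal((3, 3)), rng.standard_normal(3)
A = [quadratic_gradient(M0, b0), quadratic_gradient(M1, b1)]    # A_0 = grad g, A_1 = grad g_1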

Fig. 5 Experiment 6.6: \(m = 10\)

Fig. 6 Experiment 6.6: \(m = 20\)

Fig. 7 Experiment 6.6: \(m = 30\)

Fig. 8 Experiment 6.6: \(m = 40\)

5.2 Generalized Split Variational Inequality Problem

Finally, we apply our result to study the generalized split variational inequality problem (see [28]). Let \(C_i\) be nonempty, closed and convex subsets of real Hilbert spaces \(H_i,i=1,2,...,N,\) and let \(S_i:H_i\rightarrow H_{i+1},i=1,2,...,N-1,\) be bounded linear operators, such that \(S_i\ne 0.\) Let \(B_i:H_i\rightarrow H_i, i=1,2,...,N,\) be single-valued operators. The generalized split variational inequality problem (GSVIP) is formulated as finding a point \(x^*\in C_1\) such that

$$\begin{aligned} x^*\in \Gamma := VI(C_1,B_1)\cap S_1^{-1}\big (VI(C_2,B_2)\big )\cap \cdots \cap S_1^{-1}\big (S_2^{-1}\big (\cdots \big (S_{N-1}^{-1}\big (VI(C_N,B_N)\big )\big )\big )\big )\ne \emptyset ; \end{aligned}$$
(5.3)
Table 2 Numerical results for Experiment 6.6

that is, \(x^*\in C_1\) such that

$$\begin{aligned} x^*\in VI(C_1,B_1), S_1x^*\in VI(C_2,B_2),\ldots ,S_{N-1}(S_{N-2}\ldots S_1x^*)\in VI(C_N,B_N). \end{aligned}$$

Observe that if we let \(C=C_1,A=B_1, A_i=B_{i+1}, ~1\le i\le N-1, T_1=S_1, T_2=S_2S_1,~\ldots ,~\) and \(T_{N-1}=S_{N-1}S_{N-2}\ldots S_1,\) then the SVIPMOS (1.10) becomes the GSVIP (5.3). Hence, we obtain the following result for approximating the solution of GSVIP (5.3) when the cost operators are pseudomonotone and uniformly continuous.
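Indeed, since preimages of compositions are obtained by composing preimages in reverse order, for \(1\le i\le N-1\) this identification gives

$$\begin{aligned} T_i^{-1}\big (VI(C_{i+1},B_{i+1})\big )=(S_iS_{i-1}\cdots S_1)^{-1}\big (VI(C_{i+1},B_{i+1})\big )=S_1^{-1}\big (S_2^{-1}\big (\cdots S_i^{-1}\big (VI(C_{i+1},B_{i+1})\big )\cdots \big )\big ), \end{aligned}$$

so that the solution set \(\Omega \) of (1.10) coincides with \(\Gamma \) in (5.3).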

Theorem 5.4

Let \(C_i\) be nonempty, closed and convex subsets of real Hilbert spaces \(H_i,i=1,2,...,N,\) and let \(S_i:H_i\rightarrow H_{i+1},i=1,2,...,N-1,\) be bounded linear operators with adjoints \(S^*_i\) such that \(S_i\ne 0.\) Let \(B_i:H_i\rightarrow H_i,~i=1,2,...,N,\) be uniformly continuous pseudomonotone operators satisfying condition (3.1), and suppose Assumption A of Theorem 4.7 holds and the solution set \(\Gamma \ne \emptyset .\) Then, the sequence \(\{x_n\}\) generated by the following algorithm converges strongly to \(\hat{x}\in \Gamma ,\) where \(\Vert \hat{x}\Vert = \min \{\Vert p\Vert :p\in \Gamma \}.\)

Algorithm 5.5.

Step 0. Select initial points \(x_0, x_1 \in H_1.\) Let \(S_0=I^{H_1},~\hat{S}_{i-1}=S_{i-1}S_{i-2}\ldots S_0\) and \(\hat{S}_{i-1}^*=S_0^*S_1^*\ldots S_{i-1}^*,~i=1,2,\ldots ,N,\) and set \(n=1.\)

Step 1. Given the \((n-1)th\) and nth iterates, choose \(\theta _n\) such that \(0\le \theta _n\le \hat{\theta }_n\) with \(\hat{\theta }_n\) defined by

      \(\hat{\theta }_n = {\left\{ \begin{array}{ll} \min \Big \{\theta ,~ \frac{\epsilon _n}{\Vert x_n - x_{n-1}\Vert }\Big \}, \quad \text {if}~ x_n \ne x_{n-1},\\ \theta , \quad \text {otherwise.} \end{array}\right. }\)

Step 2. Compute

      \(w_n = x_n + \theta _n(x_n - x_{n-1}).\)

Step 3. Compute

      \(y_{n,i} = P_{C_i}(\hat{S}_{i-1}w_n-\lambda _{n,i}B_i\hat{S}_{i-1}w_n)\)

      \(\lambda _{n+1,i} = {\left\{ \begin{array}{ll} \min \{\frac{(c_{n,i}+c_i)\Vert \hat{S}_{i-1}w_n - y_{n,i}\Vert }{\Vert B_i\hat{S}_{i-1}w_n - B_iy_{n,i}\Vert },~~ \lambda _{n,i}+\rho _{n,i}\},&{}\text {if}\quad B_i\hat{S}_{i-1}w_n \\ &{}- B_iy_{n,i}\ne 0,\\ \lambda _{n,i}+\rho _{n,i},&{} \text {otherwise.} \end{array}\right. }\)

      \(z_{n,i}=\hat{S}_{i-1}w_n-\beta _{n,i}r_{n,i},\)

   where

      \(r_{n,i}=\hat{S}_{i-1}w_n-y_{n,i}-\lambda _{n,i}(B_i\hat{S}_{i-1}w_n - B_iy_{n,i})\)

   and

      \(\beta _{n,i}={\left\{ \begin{array}{ll} (k_i+k_{n,i})\frac{\langle \hat{S}_{i-1}w_n - y_{n,i},r_{n,i}\rangle }{\Vert r_{n,i}\Vert ^2},&{}\text {if}\quad r_{n,i}\ne 0\\ 0,&{}\text {otherwise.} \end{array}\right. }\)

Step 4. Compute

      \(b_n = \sum _{i=1}^{N}\delta _{n,i}\big (w_n+\eta _{n,i}\hat{S}_{i-1}^*(z_{n,i}-\hat{S}_{i-1}w_n)\big ),\)

   where

      \(\eta _{n,i}={\left\{ \begin{array}{ll} \frac{(\phi _{n,i}+\phi _i)\Vert \hat{S}_{i-1}w_n-z_{n,i}\Vert ^2}{\Vert \hat{S}_{i-1}^*(\hat{S}_{i-1}w_n-z_{n,i})\Vert ^2},&{}\text {if}\quad \Vert \hat{S}_{i-1}^*(\hat{S}_{i-1}w_n-z_{n,i})\Vert \ne 0,\\ 0,&{}\text {otherwise.} \end{array}\right. }\)

Step 5. Compute

      \(x_{n+1}= (1-\alpha _n-\xi _n)w_n+\xi _nb_n.\)

   Set \(n:= n +1 \) and return to Step 1.
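To make the update rule concrete, the following is a minimal NumPy sketch of one iteration of Algorithm 5.5. It assumes every \(H_i\) is finite dimensional, that the projections \(P_{C_i}\), the operators \(B_i\) and matrices representing \(\hat{S}_{i-1}\) are supplied by the caller, and that the parameter sequences are evaluated outside the function; all names are illustrative and not the authors' implementation.

```python
import numpy as np

def step(x_prev, x_curr, lam, B, proj, S_hat,
         alpha, eps, xi, rho, delta,
         theta=0.99, c=0.97, phi=0.98, k=1.96,
         c_n=0.0, phi_n=0.0, k_n=0.0):
    """One pass of Steps 1-5 given the iterates x_{n-1}, x_n.

    B, proj and S_hat are length-N lists: B[i] plays the role of B_{i+1},
    proj[i] of P_{C_{i+1}}, and S_hat[i] of the composite map written
    S-hat_{i} in the paper (S_hat[0] is the identity matrix on H_1).
    lam holds the current step sizes lambda_{n,i}; delta the weights delta_{n,i}.
    c_n, phi_n, k_n stand for c_{n,i}, phi_{n,i}, k_{n,i} (taken constant in i here).
    """
    N = len(B)

    # Steps 1-2: inertial extrapolation; we take theta_n = theta_hat_n,
    # any value in [0, theta_hat_n] is allowed.
    diff = np.linalg.norm(x_curr - x_prev)
    theta_n = theta if diff == 0 else min(theta, eps / diff)
    w = x_curr + theta_n * (x_curr - x_prev)

    # Steps 3-4: projection-contraction step in each H_i, pulled back to H_1.
    b = np.zeros_like(w)
    lam_next = list(lam)
    for i in range(N):
        u = S_hat[i] @ w                          # \hat{S}_{i-1} w_n
        Bu = B[i](u)
        y = proj[i](u - lam[i] * Bu)              # y_{n,i}
        By = B[i](y)
        denom = np.linalg.norm(Bu - By)
        lam_next[i] = (min((c_n + c) * np.linalg.norm(u - y) / denom,
                           lam[i] + rho)
                       if denom > 0 else lam[i] + rho)
        r = u - y - lam[i] * (Bu - By)            # r_{n,i}
        beta = ((k + k_n) * np.dot(u - y, r) / np.dot(r, r)
                if np.linalg.norm(r) > 0 else 0.0)
        z = u - beta * r                          # z_{n,i}
        back = S_hat[i].T @ (u - z)               # adjoint applied to (u - z)
        eta = ((phi_n + phi) * np.linalg.norm(u - z) ** 2 / np.dot(back, back)
               if np.linalg.norm(back) > 0 else 0.0)
        b += delta[i] * (w + eta * S_hat[i].T @ (z - u))

    # Step 5: Mann-type update.
    x_next = (1.0 - alpha - xi) * w + xi * b
    return x_next, lam_next
```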

Fig. 9 Experiment 6.7(1): Case I

Fig. 10 Experiment 6.7(1): Case II

6 Numerical experiments

In this section, we present some numerical experiments to illustrate the implementability of our proposed method (Algorithm 3.1). For simplicity, in all the experiments we consider the case \(N=4.\) All numerical computations were carried out using MATLAB R2021b.

In our computations, we choose \(\alpha _n = \frac{1}{2n+3},\epsilon _n = \frac{1}{(2n+3)^3},\xi _n=\frac{(1-\alpha _n)}{2},\theta =0.99,\lambda _{1,i}=i+1.2,c_i=0.97,\phi _i=0.98,k_i=1.96,\rho _{n,i}=\frac{10}{n^2},\delta _{n,i}=\frac{1}{5}.\)
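In code, these choices can be transcribed directly as parameter sequences (a sketch with illustrative names; \(n\) is the iteration index and \(i\) the block index, \(1\le i\le N=4\)):

```python
# Parameter sequences used in the experiments, transcribed from the text.
alpha = lambda n: 1.0 / (2 * n + 3)            # alpha_n
eps   = lambda n: 1.0 / (2 * n + 3) ** 3       # epsilon_n
xi    = lambda n: (1.0 - alpha(n)) / 2.0       # xi_n
theta = 0.99
lam_1 = lambda i: i + 1.2                      # lambda_{1,i}
c_i, phi_i, k_i = 0.97, 0.98, 1.96
rho   = lambda n, i: 10.0 / n ** 2             # rho_{n,i}
delta = lambda n, i: 1.0 / 5.0                 # delta_{n,i}
```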

We consider the following test examples in both finite and infinite dimensional Hilbert spaces for our numerical experiments.

Example 6.1

Let \(H_i=\mathbb {R}^{m},~i=0,1,\ldots ,4,\) and let \(A_i:\mathbb {R}^{m} \rightarrow \mathbb {R}^{m}\) be the operator defined by \(A_i(x)=Sx+q,\) where \(q \in \mathbb {R}^{m}\) and \(S=N N^{T}+Q+D,\) with \(N\) an \(m\times m\) matrix, \(Q\) an \(m\times m\) skew-symmetric matrix, and \(D\) an \(m\times m\) diagonal matrix with nonnegative diagonal entries (so that \(S\) is positive semidefinite). We let \(C_i=\{x \in \mathbb {R}^{m}:-(i+2) \le x_j\le i+2,~ j=1,...,m \}.\) In this example, all the entries of \(N\) and \(Q\) are generated randomly in \([-3,3],\) the diagonal entries of \(D\) are generated randomly in \([0,3],\) \(q=0\) and \(T_ix = \frac{3x}{i+3}.\)
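One possible way to generate this test data is sketched below (our reading of the construction; in particular, the skew-symmetric matrix is obtained here by antisymmetrizing a random matrix, which is one of several reasonable interpretations of "randomly generated in \([-3,3]\)"):

```python
import numpy as np

def make_example_6_1(m, seed=0):
    """Random data of Example 6.1: S = N N^T + Q + D with Q skew-symmetric."""
    rng = np.random.default_rng(seed)
    N = rng.uniform(-3, 3, size=(m, m))
    Q0 = rng.uniform(-3, 3, size=(m, m))
    Q = (Q0 - Q0.T) / 2                       # skew-symmetric matrix
    D = np.diag(rng.uniform(0, 3, size=m))    # nonnegative diagonal
    S = N @ N.T + Q + D
    q = np.zeros(m)
    A = lambda x: S @ x + q                              # cost operator A_i
    proj = lambda x, i: np.clip(x, -(i + 2), i + 2)      # P_{C_i} (box projection)
    T = lambda x, i: 3 * x / (i + 3)                     # T_i
    return A, proj, T
```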

Table 3 Numerical results for Experiment 6.7(1)

Example 6.2

For each \(i=0,1,\ldots ,4,\) we define the feasible set \(C_i = \mathbb {R}^m,\) \(T_ix = \frac{2x}{i+2}\) and \(A_i(x) = Mx,\) where \(M\) is an \(m\times m\) matrix whose entries are given by

$$\begin{aligned} a_{j,k} = {\left\{ \begin{array}{ll} -1, &{}\text {if}\quad k = m+1 - j \quad \text {and} \quad k>j,\\ 1, &{}\text {if} \quad k = m+1 - j\quad \text {and} \quad k\le j,\\ 0, &{}\text {otherwise.} \end{array}\right. } \end{aligned}$$

We note that M is a Hankel-type matrix with nonzero reverse diagonal.
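A short sketch of this matrix, built directly from the entrywise definition (the helper name is ours):

```python
import numpy as np

def hankel_type_matrix(m):
    """Matrix M of Example 6.2: +-1 on the reverse diagonal k = m+1-j, 0 elsewhere."""
    M = np.zeros((m, m))
    for j in range(1, m + 1):        # 1-based row index j
        k = m + 1 - j                # reverse-diagonal column index
        M[j - 1, k - 1] = -1.0 if k > j else 1.0
    return M
```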

Example 6.3

Let \(H_i=\mathbb {R}^2\) and \(C_i = [-1-i, 1+i]^2,~i=0,1,\ldots ,4.\) We define \(T_ix=\frac{4x}{i+4},\) and the cost operator \(A_i: \mathbb {R}^2\rightarrow \mathbb {R}^2\) is given by

$$\begin{aligned} A_i(x,y) = (-xe^y, y), \quad ( i=0,1,\ldots ,4 ). \end{aligned}$$
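In code, the operator and the box projection of this example can be written as follows (a sketch; the names are illustrative):

```python
import numpy as np

# Cost operator A_i(x, y) = (-x e^y, y) and projection onto C_i = [-1-i, 1+i]^2.
A = lambda v: np.array([-v[0] * np.exp(v[1]), v[1]])
proj = lambda v, i: np.clip(v, -(1 + i), 1 + i)
T = lambda v, i: 4 * v / (i + 4)
```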

We consider the next example in infinite dimensional Hilbert space.

Example 6.4

Let \(H_i= (\ell _2(\mathbb {R}), \Vert \cdot \Vert _2), i=0,1,\ldots ,4,\) where \(\ell _2(\mathbb {R}):=\{x=(x_1,x_2,\ldots ,x_j,\ldots ), x_j\in \mathbb {R}:\sum _{j=1}^{\infty }|x_j|^2<\infty \}\) and \(||x||_2=(\sum _{j=1}^{\infty }|x_j|^2)^{\frac{1}{2}}\) for all \(x\in \ell _2(\mathbb {R}).\) Let \(C_i:= \{x= (x_1,x_2,\ldots ,x_j,\ldots )\in \ell _2(\mathbb {R}): \Vert x\Vert _2\le i+1\},\) and we define \(T_ix=\frac{5x}{i+5}\) and the cost operator \(A_i:H_i\rightarrow H_i\) by \(A_ix = \big (\frac{1}{\Vert x\Vert + s} + \Vert x\Vert \big )x, \quad (s > 0;~ i = 0,1,\ldots ,4).\) Then, \(A_i\) is uniformly continuous and pseudomonotone.
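For numerical purposes this example can only be run on a finite truncation of \(\ell _2(\mathbb {R})\); the following is a sketch under that assumption (the truncation length and the concrete value of \(s\) are our choices, not specified above):

```python
import numpy as np

# Vectors in R^K stand in for the first K coordinates of ell_2 sequences.
def A(x, s=0.5):                        # A_i x = (1/(||x|| + s) + ||x||) x, with s > 0
    nx = np.linalg.norm(x)
    return (1.0 / (nx + s) + nx) * x

def proj_ball(x, i):                    # P_{C_i}: projection onto the ball of radius i + 1
    nx = np.linalg.norm(x)
    return x if nx <= i + 1 else (i + 1) * x / nx

T = lambda x, i: 5 * x / (i + 5)
```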

Fig. 11 Experiment 6.7(2): Case I

Fig. 12 Experiment 6.7(2): Case II

We test Examples 6.1, 6.2, 6.3 and 6.4 under the following experiments:

Experiment 6.5

In this experiment, we check the behavior of our method by fixing the other parameters and varying \(\phi _{n,i}\) in Example 6.1. We do this to check the effects of this parameter and the sensitivity of our method to it.

We consider \(\phi _{n,i} \in \{\frac{3}{(n + 1)}, \frac{5}{(n + 1)^2}, \frac{7}{(n + 1)^3}, \frac{9}{(n + 1)^4}, \frac{11}{(n + 1)^5}\}\) with \(m = 25,\) \(m = 50,\) \( m = 100\) and \(m = 200.\)

Using \(\Vert x_{n+1}-x_n\Vert < 10^{-3}\) as the stopping criterion, we plot the graphs of \(\Vert x_{n+1}-x_n\Vert \) against the number of iterations for each \(m\). The numerical results are reported in Figs. 1, 2, 3, 4 and Table 1.
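A driver loop of the following shape implements this stopping rule and records the residuals that are plotted (a sketch only; `step` stands for one iteration of the proposed method, as in the sketch given after Algorithm 5.5, and `params` collects the parameter sequences listed at the start of this section):

```python
import numpy as np

def run(x0, x1, lam1, B, proj, S_hat, params, tol=1e-3, max_iter=5000):
    """Iterate until ||x_{n+1} - x_n|| < tol, recording residuals for plotting."""
    x_prev, x_curr, lam = x0, x1, list(lam1)
    residuals = []
    for n in range(1, max_iter + 1):
        delta_n = [params["delta"](n, i) for i in range(1, len(B) + 1)]
        x_next, lam = step(x_prev, x_curr, lam, B, proj, S_hat,
                           alpha=params["alpha"](n), eps=params["eps"](n),
                           xi=params["xi"](n), rho=params["rho"](n, 1),
                           delta=delta_n)
        res = np.linalg.norm(x_next - x_curr)
        residuals.append(res)
        if res < tol:
            break
        x_prev, x_curr = x_curr, x_next
    return x_next, residuals
```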

Experiment 6.6

In this experiment, we check the behavior of our method by fixing the other parameters and varying \(c_{n,i}\) in Example 6.2. We do this to check the effects of this parameter and the sensitivity of our method to it.

We consider \(c_{n,i} \in \{\frac{15}{n^{0.1}},\frac{30}{n^{0.01}},\frac{45}{n^{0.001}},\frac{60}{n^{0.0001}},\frac{75}{n^{0.00001}}\}\) with \(m = 10,\) \( m = 20,\) \( m = 30\) and \( m = 40.\)

Using \(\Vert x_{n+1}-x_n\Vert < 10^{-3}\) as the stopping criterion, we plot the graphs of \(\Vert x_{n+1}-x_n\Vert \) against the number of iterations in each case. The numerical results are reported in Figs. 5, 6, 7, 8 and Table 2.

Table 4 Numerical results for Experiment 6.7(2)

Finally, we test Examples 6.3 and 6.4 under the following experiment:

Experiment 6.7

In this experiment, we check the behavior of our method by fixing the other parameters and varying \(k_{n,i}\) and \(c_{n,i}\) in Examples 6.3 and 6.4. We do this to check the effects of these parameters and the sensitivity of our method to them.

  (1) We consider \(k_{n,i} \in \{\frac{2}{(n + 1)}, \frac{4}{(2n + 1)^2}, \frac{6}{(3n + 1)^3}, \frac{8}{(4n + 1)^4}, \frac{10}{(5n + 1)^5}\}\) with the following two cases of initial values \(x_0\) and \(x_1\):

      Case I: \(x_0 = (2, 3);\) \(x_1 = (3, 4);\)

      Case II: \(x_0 = (1, 3);\) \(x_1 = (2, 0).\)

    Using \(\Vert x_{n+1}-x_n\Vert < 10^{-4}\) as the stopping criterion, we plot the graphs of \(\Vert x_{n+1}-x_n\Vert \) against the number of iterations in each case. The numerical results are reported in Figs. 9, 10 and Table 3.

  (2) We consider \(c_{n,i} \in \{\frac{15}{n^{0.1}},\frac{30}{n^{0.01}},\frac{45}{n^{0.001}},\frac{60}{n^{0.0001}},\frac{75}{n^{0.00001}}\}\) with the following two cases of initial values \(x_0\) and \(x_1\):

      Case I: \(x_0 = (3, 1, \frac{1}{3}, \cdots );\) \(x_1 = (\frac{1}{3}, \frac{1}{6},\frac{1}{12},\cdots );\)

      Case II: \(x_0 = (2, 1, \frac{1}{2},\cdots );\) \(x_1 = (\frac{1}{2}, \frac{1}{8}, \frac{1}{32}, \cdots ).\)

    Using \(\Vert x_{n+1}-x_n\Vert < 10^{-4}\) as the stopping criterion, we plot the graphs of \(\Vert x_{n+1}-x_n\Vert \) against the number of iterations in each case. The numerical results are reported in Figs. 11, 12 and Table 4.

Remark 6.8

Using different initial values, different cases of \(m\), and varying the key parameters in Examples 6.1–6.4, we obtained the numerical results displayed in Tables 1, 2, 3 and 4 and Figs. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12. We noted the following from our numerical experiments:

  1. (1)

In all the examples, the choice of the key parameters \(c_{n,i},\) \(k_{n,i}\) and \(\phi _{n,i}\) does not affect the number of iterations, and there is no significant difference in the CPU time. Thus, our method is not sensitive to these key parameters for each initial value and case of \(m\).

  2. (2)

The number of iterations for our method remains consistent across all the examples, so the method is well behaved.

7 Conclusion

In this paper, we studied the split variational inequality problem with multiple output sets when the cost operators are pseudomonotone and uniformly continuous. We proposed a new Mann-type inertial projection and contraction method with self-adaptive step sizes for approximating the solution of the problem in the framework of Hilbert spaces. Under some mild conditions on the control sequences and without prior knowledge of the operator norms, we obtained a strong convergence result for the proposed algorithm. Finally, we applied our result to study certain classes of optimization problems, and we presented several numerical experiments to illustrate the applicability of the proposed method.