1 Introduction

Fixed-point problems are widely used as mathematical programming models for solving several nonlinear problems arising in optimization, machine learning, finance, economics, networks, and transportation, as well as in applied areas such as image recovery and signal processing (see [11, 13, 30]). For this reason, several authors have studied various fixed-point problems in a more general sense as a type of Split Inverse Problem (SIP) [15]. The Generalized Split Inverse Problem (GSIP) [21] is formulated as the problem of

$$\begin{aligned} \left\{ \begin{array}{ll} \text{ find } x^{*}\in X \text{ that } \text{ solves } \text{ IP1 }\\ \text{ such } \text{ that }\\ A_{k}(x^{*})=y_{k}^{*}\in Y \text{ and } y_{k}^{*} \text{ solves } \text{ IP2 }, \quad \forall k\in \varLambda , \end{array} \right. \end{aligned}$$
(1)

where \(\varLambda \subset {\mathbb {R}}\) is an index set, IP1 and IP2 are two inverse problems formulated in the spaces X and Y, respectively, and \(A_{k}\) is a linear transformation from X to Y for each \(k\in \varLambda \). If \(A=A_{k}\) for all \(k\in \varLambda \), then the GSIP reduces to the Split Inverse Problem (SIP) [15]. Many models of inverse problems can be cast in this framework by choosing different inverse problems for IP1 and IP2. There has been considerable investigation in the framework of the SIP; see, for example, [12, 14, 24, 27, 35, 39, 43] and the many references therein.

In this paper, we consider a problem formulated in the framework of the GSIP. To be precise, we consider the generalized split system of common fixed-point problems (in short, GSSCFP), formulated as the problem of finding

$$\begin{aligned} {\bar{x}}\in \bigcap \limits _{i=1}^{N} F(U_{i}) \text{ such } \text{ that } A_{k}({\bar{x}})\in \bigcap \limits _{j=1}^{M} F(T_{j}),\quad \forall k\in \{1,\ldots ,R\}, \end{aligned}$$
(2)

where \(H_{1}\) and \(H_{2}\) are two real Hilbert spaces, \(U_{i}:H_{1}\rightarrow H_{1}\) (\(i\in \{1,\ldots ,N\}\)) and \(T_{j}:H_{2}\rightarrow H_{2}\) (\(j\in \{1,\ldots ,M\}\)) are nonlinear mappings, \(A_{k}:H_{1}\rightarrow {H_{2}}\) (\(k\in \{1,\ldots ,R\}\)) are linear transformations, and \(F(U_{i})\) and \(F(T_{j})\) are the fixed-point sets of \(U_{i}\) and \(T_{j}\), i.e., \(F(U_{i})=\{{\bar{x}}\in H_{1}:U_{i}({\bar{x}})={\bar{x}}\}\) and \(F(T_{j})=\{{\bar{y}}\in H_{2}:T_{j}({\bar{y}})={\bar{y}}\}\). There are only a few studies in the framework of (2); see, for example, [22, 23] for demimetric mappings. However, several algorithms have been introduced to solve (2) for different classes of operators in the case \(A_{k}=A\) for all \(k\in \{1,\ldots ,R\}\). If \(U=U_{i}\) for all \(i\in \{1,\ldots ,N\}\), \(T=T_{j}\) for all \(j\in \{1,\ldots ,M\}\), and \(A_{k}=A\) for all \(k\in \{1,\ldots ,R\}\), then the GSSCFP (2) reduces to the split common fixed-point problem (SCFP), i.e., the problem of finding

$$\begin{aligned} {\bar{x}}\in F(U) \text{ such } \text{ that } A({\bar{x}})\in F(T). \end{aligned}$$
(3)

The SCFP was first introduced for directed operators U and T by Censor and Segal [16], who proposed the following algorithm for solving it:

$$\begin{aligned} x_{n+1}=U(x_{n}-\tau A^{*}(I-T)Ax_{n}), \end{aligned}$$
(4)

where \(\tau \) is a properly chosen step size. If \(\tau \in (0,\frac{2}{\Vert A\Vert ^{2}})\), then the sequence \(\{x_{n}\}\) generated by (4) converges weakly to a solution of the SCFP provided that the solution set is nonempty. In particular, if \(U=P_{C}\) and \(T=P_{Q}\) are projection operators, then the SCFP reduces to the well-known split feasibility problem (SFP) [10, 14] (C and Q are nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively). Motivated by the SCFP, several GSIPs have been investigated and studied, for example, for directed operators [44], demicontractive mappings [39], asymptotically quasi-nonexpansive mappings [40], quasi-nonexpansive operators [12], and averaged mappings [48]; see also the many references therein. Although a wide range of existing methods are attractive for approximating the solution of the GSIP, several of the proposed methods are not very convenient to use. A common drawback of several studies on the GSIP is that the implementation requires a step size that depends on the operator norm, which is in general not an easy task in practice, because the operator norm is a global invariant and is often quite difficult (if not impossible) to compute; see the theorem of Hendrickx and Olshevsky in [29]. To overcome this difficulty, initiated by López et al. [31], many authors have constructed adaptive variable step sizes that do not require the operator norm (see, for example, [41, 45, 47, 48]).
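For concreteness, the following is a minimal Python sketch of iteration (4) for the SFP special case \(U=P_{C}\) and \(T=P_{Q}\); the concrete choices of C as a Euclidean ball, Q as a box, the random matrix A, and the tolerance are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    """Projection onto the ball {x : ||x|| <= radius}."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

def proj_box(y, lo=-1.0, hi=1.0):
    """Projection onto the box {y : lo <= y_i <= hi}."""
    return np.clip(y, lo, hi)

def censor_segal(A, x0, tau=None, max_iter=1000, tol=1e-8):
    """Iteration (4): x_{n+1} = U(x_n - tau * A^T (I - T) A x_n),
    here with U = P_C (ball) and T = P_Q (box)."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # tau in (0, 2/||A||^2)
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        Ax = A @ x
        x_new = proj_ball(x - tau * (A.T @ (Ax - proj_box(Ax))))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative usage
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
x_star = censor_segal(A, rng.standard_normal(6))
```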

In many disciplines, accelerated convergence is often much more desirable and important; see, for example, [18, 34]. A widely used technique for accelerating an iterative algorithm is the momentum method of Polyak [38] (inertial extrapolation). Polyak's method was used by Nesterov [36] to speed up the rate of convergence of an iterative algorithm for solving the smooth convex minimization problem; the proposed algorithm is given by

$$\begin{aligned} \left\{ \begin{array}{lr} y_{n}=x_{n}+\theta _{n} (x_{n}-x_{n-1}),\\ x_{n+1}=y_{n}-\lambda _{n}\nabla f(y_{n}), \end{array} \right. \end{aligned}$$
(5)

where \(\theta _{n}\in [0,1)\) is an extrapolation factor, \(\lambda _{n}\) is a (sufficiently small) step-size parameter, and \(\nabla f\) is the gradient of a smooth convex function f. The algorithm is more effective and converges faster because of the inertial term \(\theta _{n} (x_{n}-x_{n-1})\) in (5). Consequently, several inertial-type algorithms have been proposed in the literature; see, for example, the inertial proximal method [2], the inertial forward-backward algorithm [32], the inertial proximal ADMM [17], and the fast iterative shrinkage-thresholding algorithm (FISTA) [6]; see also [1, 42].
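As a small illustration, the following Python sketch applies scheme (5) to the smooth convex function \(f(x)=\frac{1}{2}\Vert Bx-c\Vert ^{2}\); the constant extrapolation factor, the step size, and the data are illustrative assumptions.

```python
import numpy as np

def inertial_gradient(grad_f, x0, theta=0.9, lam=1e-2, max_iter=500):
    """Scheme (5): y_n = x_n + theta*(x_n - x_{n-1});  x_{n+1} = y_n - lam*grad_f(y_n)."""
    x_prev = np.asarray(x0, float)
    x = x_prev.copy()
    for _ in range(max_iter):
        y = x + theta * (x - x_prev)          # inertial extrapolation
        x_prev, x = x, y - lam * grad_f(y)    # gradient step at the extrapolated point
    return x

# Illustrative usage on f(x) = 0.5*||B x - c||^2 with gradient B^T (B x - c)
rng = np.random.default_rng(1)
B = rng.standard_normal((20, 5))
c = rng.standard_normal(20)
sol = inertial_gradient(lambda x: B.T @ (B @ x - c), np.zeros(5),
                        lam=1.0 / np.linalg.norm(B, 2) ** 2)
```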

In this paper, motivated by the adaptive variable step-size selection method and the inertial extrapolation method, we propose an iterative method for solving the GSSCFP (2) for averaged mappings \(U_{i}\) and \(T_{j}\). We use a different kind of selection rule for the factor \(\theta _{n}\) in the inertial term \(\theta _{n} (x_{n}-x_{n-1})\). The proposed method is desirable because it is an inertially accelerated iterative algorithm, and it is efficient in practice because an adaptive variable step-size selection technique is established, so that the implementation of the algorithm does not need any prior information about the operator norms \(\Vert A_{k}\Vert \) (\(k\in \{1,\ldots ,R\}\)).

The outline of the paper is as follows. In Sect. 2, we recall some definitions and preliminary results needed for the subsequent analysis. In Sect. 3, the new proposed method for the GSSCFP (2) is introduced and its convergence is analyzed. In Sect. 4, we give some applications of our result. Finally, in Sect. 5, numerical experiments are presented to illustrate the behavior and efficiency of our result.

2 Preliminary

Let H be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and induced norm \(\Vert \cdot \Vert \).

For a sequence \(\{x_{n}\}\) in H and \(p\in H\), we write \(x_{n}\rightharpoonup p\) to indicate that the sequence \(\{x_{n}\}\) converges weakly to p, and \(x_{n}\rightarrow p\) to indicate that \(\{x_{n}\}\) converges strongly to p. The symbol \(\omega _{w}\) denotes the \(\omega \)-weak limit set of \(\{x_{n}\}\), i.e., \(\omega _{w}(x_{n})=\{p:\exists \{x_{n_{l}}\} \subset \{x_{n}\} \text{ such } \text{ that } x_{n_{l}}\rightharpoonup p \}.\)

Lemma 2.1

For a real Hilbert space H, we have the following fundamental properties:

  1. (i)

    \(\Vert x+y\Vert ^{2}=\Vert x\Vert ^{2}+\Vert y\Vert ^{2}+2\langle x,y \rangle , \quad \forall x,y\in H;\)

  2. (ii)

    \(\langle x,y\rangle = \frac{1}{2}\Vert x\Vert ^{2}+\frac{1}{2}\Vert y\Vert ^{2}-\frac{1}{2}\Vert x-y\Vert ^{2},\quad \forall x,y\in H;\)

  3. (iii)

    \(\Vert \alpha x+(1-\alpha )y\Vert ^{2}=\alpha \Vert x\Vert ^{2}+(1-\alpha )\Vert y\Vert ^{2}-\alpha (1-\alpha )\Vert x-y\Vert ^{2},\quad \forall x,y\in H, \alpha \in {\mathbb {R}}.\)

Definition 2.2

The mapping \(T:H\rightarrow H\) is called

  1. (a)

    L-Lipschitz if there exists \(L>0\), such that

    $$\begin{aligned} \Vert Tx-Ty\Vert \le L\Vert x-y\Vert ,\quad \forall x,y\in H. \end{aligned}$$

    If \(L=1\), then T is said to be a nonexpansive mapping.

  2. (b)

    \(\nu \)-inverse strongly monotone (\(\nu \)-ism), if there exists \(\nu >0\), such that

    $$\begin{aligned} \langle x-y,Tx-Ty\rangle \ge \nu \Vert Tx-Ty\Vert ^{2},\quad \forall x,y\in H. \end{aligned}$$

Inverse strongly monotone operators have been widely used to solve practical problems in various fields, for instance, in traffic assignment problems (see, for example, [7, 28]). It is easy to see that if T is \(\nu \)-ism, then T is \(\frac{1}{\nu }\)-Lipschitz.
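Indeed, combining the \(\nu \)-ism inequality with the Cauchy–Schwarz inequality gives

$$\begin{aligned} \nu \Vert Tx-Ty\Vert ^{2}\le \langle x-y,Tx-Ty\rangle \le \Vert x-y\Vert \,\Vert Tx-Ty\Vert ,\quad \forall x,y\in H, \end{aligned}$$

so that \(\Vert Tx-Ty\Vert \le \frac{1}{\nu }\Vert x-y\Vert \).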

Definition 2.3

A mapping \(T:H\rightarrow H\) is said to be firmly nonexpansive if and only if \(2T-I\) is nonexpansive or equivalently

$$\begin{aligned} \langle x-y,Tx-Ty\rangle \ge \Vert Tx-Ty\Vert ^{2},\quad \forall x,y\in H. \end{aligned}$$

Alternatively, T is firmly nonexpansive if and only if T can be expressed as

$$\begin{aligned} T=\frac{1}{2}(I+S), \end{aligned}$$

where I is the identity mapping and \(S:H\rightarrow H\) is nonexpansive.

Definition 2.4

A mapping \(T:H\rightarrow H\) is said to be an averaged mapping, if it can be written as the average of the identity mapping I and a nonexpansive mapping, that is

$$\begin{aligned} T=(1-\alpha )I+\alpha S, \end{aligned}$$
(6)

where \(\alpha \in (0,1)\) and \(S:H\rightarrow H\) is nonexpansive. More precisely, when (6) holds, we say that T is \(\alpha \)-averaged.

Firmly nonexpansive mappings (in particular, projections and proximal mappings) are \(\frac{1}{2}\)-averaged mappings. The term averaged mapping was coined by Baillon–Bruck–Reich [3]. The notion of averaged mapping is central in the study of fixed-point and optimization theory. Further properties of averaged mappings can be found in [3, 11, 19].

Proposition 2.5

[11]. A mapping \(T:H\rightarrow H\) is averaged if and only if the complement \(I-T\) is \(\nu \)-ism for some \(\nu >\frac{1}{2}\). Indeed, for \(\alpha \in (0,1)\), T is \(\alpha \)-averaged if and only if \(I-T\) is \(\frac{1}{2\alpha }\)-ism.

Lemma 2.6

[4]. Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\rightarrow C\) be a nonexpansive mapping. Then, \(I-T\) is demiclosed (i.e., if \(\{x_{n}\}\subset C\) is such that \(x_{n}\rightharpoonup {\bar{x}}\) and \(x_{n}-Tx_{n}\rightarrow 0\), then \(T{\bar{x}}={\bar{x}}\)).

The following lemma is the consequence of Opial’s theorem [37].

Lemma 2.7

[46]. Let C be a nonempty subset of a real Hilbert space H. Let \(\{x_{n}\}\) be a bounded sequence in H which satisfies the following properties:

  1. (i)

    \(\lim \limits _{n\rightarrow \infty }\Vert x_{n}-{\bar{x}}\Vert \) exists for every \({\bar{x}}\in C\),

  2. (ii)

    \(\omega _{w}(x_{n})\subset C\).

Then, \(\{x_{n}\}\) converges weakly to a point in C.

Lemma 2.8

[33]. If \(\{\psi _{n}\}\) is a sequence of non-negative real numbers, and \(\{\theta _{n}\}\) and \(\{\delta _{n}\}\) are sequences of real numbers, such that

  1. (i)

    \(\psi _{n+1}-\psi _{n}\le \theta _{n}(\psi _{n}-\psi _{n-1})+\delta _{n},\)

  2. (ii)

    \(\sum \delta _{n}<\infty \) and \(\theta _{n}\in [0,\theta ]\), where \(\theta \in (0,1),\)

then \(\{\psi _{n}\}\) is a convergent sequence and

$$\begin{aligned} \sum _{n=1}^{\infty }[\psi _{n+1}-\psi _{n}]_{+}<\infty , \end{aligned}$$

where \([t]_{+}=\max (t,0)\) for any \(t\in {\mathbb {R}}\).

3 Main result

In this section, we present an adaptive inertial algorithm to solve the GSSCFP (2) and investigate its weak convergence under the following assumptions:

(A1):

\(U_{i}:H_{1}\rightarrow H_{1}\) is \(\alpha _{i}\)-averaged for all \(i\in \{1,\ldots ,N\}\);

(A2):

\(T_{j}:H_{2}\rightarrow H_{2}\) is \(\beta _{j}\)-averaged for all \(j\in \{1,\ldots ,M\}\);

(A3):

\(A_{k}:H_{1}\rightarrow H_{2}\) is a bounded linear operator for all \(k\in \{1,\ldots ,R\}\);

(A4):

\(\varGamma \) denotes the solution set of the GSSCFP (2) and \(\varGamma \) is nonempty.

Algorithm 3.1

Choose \(x_{0},x_{1}\in H_{1}\). Let \(\theta \in [0,1)\) and \(\{\epsilon _{n}\}\), \(\{\varepsilon _{n}\}\) and \(\{\rho _{n}\}\) be positive real parameter sequences, such that \(\lim \nolimits _{n\rightarrow \infty }\epsilon _{n}=0\), \(\sum _{n=1}^{\infty }\varepsilon _{n}<\infty \), \(0<\rho _{n}<\frac{1}{\xi }\) and \(\liminf \nolimits _{n\rightarrow \infty } \rho _{n}(\frac{1}{\xi }-\rho _{n})>0\) where \(\xi =\max \{\alpha _{1},\ldots ,\alpha _{N},\beta _{1},\ldots ,\beta _{M}\}\).

STEP 1.:

Choose \(\theta _{n}\), such that \(0\le \theta _{n}\le \bar{\theta }_{n}=\min \{\bar{\alpha _{n}},\bar{\beta _{n}}\}\) where

$$\begin{aligned} \bar{\alpha }_{n}:= \left\{ \begin{array}{lr} \min \big \{\theta ,\frac{\epsilon _{n}}{\Vert x_{n-1}-x_{n}\Vert }\big \},&{} \text{ if } x_{n-1}\ne x_{n} \\ \theta , &{} \text{ otherwise }, \end{array} \right. \end{aligned}$$
(7)

and

$$\begin{aligned} \bar{\beta }_{n}:= \left\{ \begin{array}{lr} \min \big \{\theta ,\frac{\varepsilon _{n}}{\Vert x_{n-1}-x_{n}\Vert ^{2}}\big \},&{} \text{ if } x_{n-1}\ne x_{n} \\ \theta , &{} \text{ otherwise }. \end{array} \right. \end{aligned}$$
(8)
STEP 2.:

Evaluate \(y_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}).\)

STEP 3.:

Evaluate \(t_{n}^{(i)}=(I-U_{i})y_{n}\), \(w_{n}^{(j,k)}=(I-T_{j})A_{k}(y_{n})\) and

$$\begin{aligned} \vartheta _{n}=\Big \Vert \sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big \Vert . \end{aligned}$$

Stopping criterion: If \(\vartheta _{n}=0\), then Stop. Otherwise, go to STEP 4.

STEP 4.:

Evaluate \( x_{n+1}=y_{n}-\eta _{n}\Bigg (\sum _{i=1}^{N}t^{(i)}_{n}+\sum _{k=1}^{R}\sum _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Bigg ), \) where

$$\begin{aligned} \eta _{n}=\frac{\rho _{n}\Big (\sum \nolimits _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+\sum \nolimits _{k=1}^{R}\sum \nolimits _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )}{\vartheta _{n}^{2}}. \end{aligned}$$
STEP 5.:

Set \(n:=n+1\) and go to STEP 1.

Remark 3.2

Note that (7) and (8) in Algorithm 3.1 are easily implemented in numerical computations, since the value of \(\Vert x_{n-1}-x_{n}\Vert \) is known a priori before choosing \(\theta _{n}\). Moreover, since \(\epsilon _{n}\ge 0\), \(\lim \nolimits _{n\rightarrow \infty }\epsilon _{n}=0\), \(\varepsilon _{n}\ge 0\), and \(\sum _{n=1}^{\infty }\varepsilon _{n}<\infty \), conditions (7) and (8) in Algorithm 3.1 imply that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\theta _{n}\Vert x_{n-1}-x_{n}\Vert =0 \text{ and } \sum \limits _{n=1}^{\infty }\theta _{n}\Vert x_{n-1}-x_{n}\Vert ^{2}<\infty . \end{aligned}$$
(9)
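To make the steps above concrete, the following is a minimal Python sketch of Algorithm 3.1 in finite dimensions: the mappings \(U_{i}\), \(T_{j}\) are supplied as callables, the operators \(A_{k}\) as matrices, and the concrete choices of a constant relaxation \(\rho _{n}=\rho \) and \(\epsilon _{n}=\varepsilon _{n}=1/n^{2}\) are illustrative assumptions (any parameter sequences satisfying the conditions of the algorithm may be used). The paper's own experiments were implemented in MATLAB; this sketch is only meant to mirror the structure of STEPS 1–5.

```python
import numpy as np

def gsscfp(U_list, T_list, A_list, x0, x1, xi, rho=1.0,
           theta=0.5, max_iter=1000, tol=1e-10):
    """Illustrative sketch of Algorithm 3.1 with eps_n = vareps_n = 1/n^2
    and constant relaxation rho_n = rho, where 0 < rho < 1/xi."""
    assert 0.0 < rho < 1.0 / xi
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, max_iter + 1):
        # STEP 1: inertial factor theta_n satisfying (7) and (8)
        d = np.linalg.norm(x - x_prev)
        eps_n = 1.0 / n ** 2
        theta_n = theta if d == 0 else min(theta, eps_n / d, eps_n / d ** 2)
        # STEP 2: extrapolated point y_n
        y = x + theta_n * (x - x_prev)
        # STEP 3: residuals t_n^(i), w_n^(j,k) and the norm vartheta_n
        t = [y - U(y) for U in U_list]
        w = []
        for A in A_list:
            Ay = A @ y
            w.append([Ay - T(Ay) for T in T_list])
        direction = sum(t) + sum(A.T @ w_jk
                                 for A, w_k in zip(A_list, w) for w_jk in w_k)
        vartheta = np.linalg.norm(direction)
        if vartheta <= tol:        # stopping criterion: y_n solves (2)
            return y
        # STEP 4: adaptive step size eta_n (no operator norms are needed)
        num = (sum(np.linalg.norm(v) ** 2 for v in t)
               + sum(np.linalg.norm(w_jk) ** 2 for w_k in w for w_jk in w_k))
        eta = rho * num / vartheta ** 2
        x_prev, x = x, y - eta * direction   # STEP 5: next iterate
    return x
```

With the \(\frac{1}{2}\)-averaged operators used in the examples of Sects. 4 and 5, one would take, e.g., xi=0.5 and rho=1.0.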

Lemma 3.3

If the stopping criterion (in STEP 3) of Algorithm 3.1 is satisfied (i.e., \(\vartheta _{n}=0\) for some \(n\in {\mathbb {N}}\)), then \(y_{n}\in \varGamma \).

Proof

Suppose \(\vartheta _{n}=0\). Now, for \(p\in \varGamma \), we have

$$\begin{aligned} \begin{array}{l@{}l} 0\,&{}=\vartheta _{n}\Vert y_{n}-p\Vert \\ {} &{}=\Big \Vert \sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big \Vert \Vert y_{n}-p\Vert \\ {} &{}\ge \Big \langle \sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n}),y_{n}-p\Big \rangle \\ {} &{}=\sum \limits _{i=1}^{N}\langle (I-U_{i})y_{n},y_{n}-p\rangle +\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}\langle A_{k}^{*}(I-T_{j})A_{k}(y_{n}),y_{n}-p\rangle \\ {} &{}=\sum \limits _{i=1}^{N}\langle (I-U_{i})y_{n}-(I-U_{i})p,y_{n}-p\rangle \\ {} &{}\quad +\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}\langle (I-T_{j})A_{k}(y_{n})-(I-T_{j})A_{k}(p),A_{k}(y_{n})-A_{k}(p)\rangle \\ {} &{}\ge \sum \limits _{i=1}^{N}\frac{1}{2\alpha _{i}}\Vert (I-U_{i})y_{n}\Vert ^{2}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}\frac{1}{2\beta _{j}}\Vert (I-T_{j})A_{k}(y_{n})\Vert ^{2}.\end{array} \end{aligned}$$

This implies \(\Vert (I-U_{i})y_{n}\Vert =\Vert (I-T_{j})A_{k}(y_{n})\Vert =0\) for all \((i,j,k)\in \{1,\ldots ,N\}\times \{1,\ldots ,M\}\times \{1,\ldots ,R\}\). Therefore, \(y_{n}\) solves GSSCFP (2), i.e., \(y_{n}\in \varGamma \). \(\square \)

If the stopping criterion of Algorithm 3.1 is never satisfied, i.e., \(\vartheta _{n}\ne 0\) for all \(n\in {\mathbb {N}}\), we have the following convergence theorem of Algorithm 3.1 for the approximation of a solution of the GSSCFP (2).

Theorem 3.4

The sequence \(\{x_{n}\}\) generated by Algorithm 3.1 converges weakly to an element of \(\varGamma \).

Proof

First, we show that \(\lim _{n\rightarrow \infty }\Vert x_{n}-{\bar{x}}\Vert \) exists for every \({\bar{x}}\in \varGamma \) and that the sequence \(\{x_{n}\}\) is bounded. Let \({\bar{x}}\in \varGamma \). Now

$$\begin{aligned} \Vert y_{n}-{\bar{x}}\Vert ^{2}= & {} \Vert x_{n}+\theta _{n}(x_{n}-x_{n-1})-{\bar{x}}\Vert ^{2}\nonumber \\= & {} \Vert x_{n}-{\bar{x}}\Vert ^{2}+\theta ^{2}_{n}\Vert x_{n}-x_{n-1}\Vert ^{2}+2\theta _{n} \langle x_{n}-{\bar{x}},x_{n}-x_{n-1} \rangle . \end{aligned}$$
(10)

From Lemma 2.1 (ii), we have

$$\begin{aligned} \langle x_{n}-{\bar{x}},x_{n}-x_{n-1} \rangle =\frac{1}{2}\Vert x_{n}-{\bar{x}}\Vert ^{2}-\frac{1}{2}\Vert x_{n-1}-{\bar{x}}\Vert ^{2}+\frac{1}{2}\Vert x_{n}-x_{n-1}\Vert ^{2}. \end{aligned}$$
(11)

From (10) and (11) and since \(0\le \theta _{n}<1\), we get

$$\begin{aligned} \Vert y_{n}-{\bar{x}}\Vert ^{2}= & {} \Vert x_{n}-{\bar{x}}\Vert ^{2}+\theta ^{2}_{n}\Vert x_{n}-x_{n-1}\Vert ^{2}\nonumber \\&+\theta _{n}(\Vert x_{n}-{\bar{x}}\Vert ^{2}-\Vert x_{n-1}-{\bar{x}}\Vert ^{2}+\Vert x_{n}-x_{n-1}\Vert ^{2})\nonumber \\\le & {} (1+\theta _{n})\Vert x_{n}-{\bar{x}}\Vert ^{2}+2\theta _{n}\Vert x_{n}-x_{n-1}\Vert ^{2}-\theta _{n}\Vert x_{n-1}-{\bar{x}}\Vert ^{2}. \end{aligned}$$
(12)

Now, using the definition of \(x_{n+1}\) and Lemma 2.1 (i), we have

$$\begin{aligned} \Vert x_{n+1}-{\bar{x}}\Vert ^{2}= & {} \big \Vert y_{n}-\eta _{n}\Big (\sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big )-{\bar{x}}\big \Vert ^{2}\nonumber \\= & {} \Vert y_{n}-{\bar{x}}\Vert ^{2}+\Big \Vert \eta _{n}\Big (\sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big )\Big \Vert ^{2}\nonumber \\&-2\Big \langle \eta _{n}\Big (\sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big ),y_{n}-{\bar{x}} \Big \rangle . \end{aligned}$$
(13)

Notice that

$$\begin{aligned}&\Big \Vert \eta _{n}\Big (\sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R} \sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big )\Big \Vert ^{2}\nonumber \\&=\eta ^{2}_{n}\Big \Vert \sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big \Vert ^{2}\nonumber \\&=\frac{\rho ^{2}_{n}\Big (\sum _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+\sum _{k=1}^{R} \sum _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )^{2}}{\vartheta _{n}^{4}}\Big \Vert \sum \limits _{i=1}^{N}t^{(i)}_{n}+ \sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big \Vert ^{2}\nonumber \\&=\frac{\rho ^{2}_{n}\Big (\sum _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+\sum _{k=1}^{R} \sum _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )^{2}}{\vartheta _{n}^{4}}\vartheta _{n}^{2}\nonumber \\&=\frac{\rho ^{2}_{n}\Big (\sum _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+\sum _{k=1}^{R} \sum _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )^{2}}{\vartheta _{n}^{2}}. \end{aligned}$$
(14)

Furthermore, noting that \(U_{i}\) is \(\alpha _{i}\)-averaged and \(T_{j}\) is \(\beta _{j}\)-averaged, we have

$$\begin{aligned}&\Big \langle \eta _{n}\Big (\sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^ {M}A^{*}_{k}(w^{(j,k)}_{n})\Big ),y_{n}-{\bar{x}} \Big \rangle \nonumber \\&=\eta _{n}\Big (\sum \limits _{i=1}^{N}\big \langle t^{(i)}_{n},y_{n}-{\bar{x}} \big \rangle +\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}\big \langle w^{(j,k)}_{n},A_{k}(y_{n})-A_{k}({\bar{x}}) \big \rangle \Big )\nonumber \\&\ge \eta _{n}\Big (\sum \limits _{i=1}^{N}\frac{1}{2\alpha _{i}}\Vert t^{(i)}_{n}\Vert ^{2} +\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}\frac{1}{2\beta _{j}}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )\nonumber \\&\ge \frac{1}{2\xi }\eta _{n}\Big (\sum \limits _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2} +\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )\nonumber \\&=\frac{\frac{\rho _{n}}{2\xi }\Big (\sum _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+\sum _{k=1}^{R} \sum _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )}{\vartheta _{n}^{2}}\Big (\sum \limits _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2} +\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )\nonumber \\&=\frac{\frac{\rho _{n}}{2\xi }\Big (\sum _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+\sum _{k=1}^{R} \sum _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )^{2}}{\vartheta _{n}^{2}}. \end{aligned}$$
(15)

It turns out from (13), (14), and (15) that

$$\begin{aligned} \Vert x_{n+1}-{\bar{x}}\Vert ^{2}\le \Vert y_{n}-{\bar{x}}\Vert ^{2}+\rho _{n}\Big (\rho _{n}-\frac{1}{\xi }\Big ) \frac{\Big (\sum _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+\sum _{k=1}^{R} \sum _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )^{2}}{\vartheta _{n}^{2}}. \end{aligned}$$
(16)

Let \(\psi _{n}=\Vert x_{n}-{\bar{x}}\Vert ^{2}\) and \(\delta _{n}=2\theta _{n}\Vert x_{n}-x_{n-1}\Vert ^{2}\). From (12) and (16), we get that

$$\begin{aligned}&\psi _{n+1}-\psi _{n}\le \theta _{n}(\psi _{n}-\psi _{n-1})+\delta _{n}\nonumber \\&+\rho _{n}\Big (\rho _{n}-\frac{1}{\xi }\Big )\frac{\Big (\sum _{i=1}^{N}\Vert t^{(i)}_ {n}\Vert ^{2}+\sum _{k=1}^{R}\sum _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )^{2}}{\vartheta _{n}^{2}}. \end{aligned}$$
(17)

That is, since \(\rho _{n}<\frac{1}{\xi }\), \( \psi _{n+1}-\psi _{n}\le \theta _{n}(\psi _{n}-\psi _{n-1})+\delta _{n} \). Using (9), we have \(\sum \limits _{n=1}^{\infty }\delta _{n}<\infty \), and hence, by Lemma 2.8, \(\lim _{n\rightarrow \infty } \psi _{n}\) exists for every \({\bar{x}}\in \varGamma \). This implies that the sequence \(\{x_{n}\}\) is bounded, and so is the sequence \(\{y_{n}\}\). Next, we show that \(\omega _{w}(x_{n})\subset \varGamma \). From (17), we also obtain that

$$\begin{aligned}&\rho _{n}\Big (\frac{1}{\xi }-\rho _{n}\Big )\frac{\Big (\sum \nolimits _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+ \sum \nolimits _{k=1}^{R}\sum \nolimits _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )^{2}}{\vartheta _{n}^{2}}\le \psi _{n}-\psi _{n+1}+ \theta _{n}(\psi _{n}-\psi _{n-1})+\delta _{n}, \end{aligned}$$

and since \((\psi _{n}-\psi _{n+1}+ \theta _{n}(\psi _{n}-\psi _{n-1})+\delta _{n}) \rightarrow 0\) as \(n\rightarrow \infty \) (because \(\lim _{n\rightarrow \infty }\psi _{n}\) exists and \(\delta _{n}\rightarrow 0\)) and \(\liminf \limits _{n\rightarrow \infty } \rho _{n}(\frac{1}{\xi }-\rho _{n})>0\), we obtain that

$$\begin{aligned} \frac{\Big (\sum \nolimits _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+\sum \nolimits _{k=1}^{R} \sum \nolimits _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )^{2}}{\vartheta _{n}^{2}}\rightarrow 0, \quad n\rightarrow \infty . \end{aligned}$$
(18)

Since \(\{\vartheta _{n}\}\) is a bounded sequence (the sequence \(\{y_{n}\}\) is bounded, the mappings \(U_{i}\) and \(T_{j}\) are nonexpansive, and the operators \(A_{k}\) are bounded), it turns out that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Big [\sum \limits _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big ]=0, \end{aligned}$$

and hence

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Vert t^{(i)}_{n}\Vert =\lim \limits _{n\rightarrow \infty }\Vert w^{(j,k)}_{n}\Vert =0, \end{aligned}$$
(19)

for all \((i,j,k)\in \{1,\ldots ,N\}\times \{1,\ldots ,M\}\times \{1,\ldots ,R\}\). Now, using the definition of \(x_{n+1}\), we have

$$\begin{aligned} \Vert x_{n+1}-y_{n}\Vert ^{2}= & {} \Big \Vert \eta _{n}\Big (\sum \limits _{i=1}^{N}t^{(i)}_{n} +\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big )\Big \Vert ^{2} \nonumber \\= & {} \eta ^{2}_{n}\Big \Vert \sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n}) \Big \Vert ^{2}=\eta ^{2}_{n}\vartheta ^{2}_{n}\nonumber \\\le & {} \frac{\rho ^{2}_{n}\Big (\sum _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2} +\sum _{k=1}^{R}\sum _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )^{2}}{\vartheta _{n}^{2}} \nonumber \\ {}\le & {} \frac{1}{\xi ^{2}}\frac{\Big (\sum _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2} +\sum _{k=1}^{R}\sum _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )^{2}}{\vartheta _{n}^{2}}. \end{aligned}$$
(20)

Thus, (18) together with (20) gives

$$\begin{aligned} \Vert x_{n+1}-y_{n}\Vert \rightarrow 0,n\rightarrow \infty . \end{aligned}$$
(21)

Moreover, using the definition of \(y_{n}\) and (9), we have

$$\begin{aligned} \Vert x_{n}-y_{n}\Vert =\Vert x_{n}-x_{n}-\theta _{n}(x_{n}-x_{n-1})\Vert =\theta _{n}\Vert x_{n}-x_{n-1}\Vert \rightarrow 0,n\rightarrow \infty . \end{aligned}$$
(22)

By (21) and (22), we get

$$\begin{aligned} \Vert x_{n+1}-x_{n}\Vert \le \Vert x_{n+1}-y_{n}\Vert +\Vert x_{n}-y_{n}\Vert \rightarrow 0,n\rightarrow \infty . \end{aligned}$$
(23)

Let \(p\in \omega _{w}(x_{n})\) and \(\{x_{n_{l}}\}\) be a subsequence of \(\{x_{n}\}\), such that \(x_{n_{l}}\rightharpoonup p\) as \(l\rightarrow \infty \). Using (22) and (23), we have \(x_{n_{l}+1}\rightharpoonup p\) and \(y_{n_{l}}\rightharpoonup p\) as \(l\rightarrow \infty \), and hence, \(A_{k}(y_{n_{l}})\rightharpoonup A_{k}(p)\) as \(l\rightarrow \infty \) for all \(k\in \{1,\ldots ,R\}\). Thus, it follows from (19) and Lemma 2.6 that \(p\in F(U_{i})\) for all \(i\in \{1,\ldots ,N\}\), and \(A_{k}(p)\in F(T_{j})\) for all \((j,k)\in \{1,\ldots ,M\}\times \{1,\ldots ,R\}\). Hence, \(p\in \varGamma \).

The two conditions (i) and (ii) of Lemma 2.7 are therefore satisfied with \(C=\varGamma \), and hence, \(\{x_{n}\}\) converges weakly to a point in \(\varGamma \). \(\square \)

It should be noted that our iterative method, Algorithm 3.1, also works for approximating a solution of the SCFP (3) when U and T are averaged mappings.

Algorithm 3.5

Choose \(x_{0},x_{1}\in H_{1}\). Let \(\theta \in [0,1)\) and \(\{\epsilon _{n}\}\), \(\{\varepsilon _{n}\}\) and \(\{\rho _{n}\}\) be positive real parameter sequences, such that \(\lim \limits _{n\rightarrow \infty }\epsilon _{n}=0\), \(\sum _{n=1}^{\infty }\varepsilon _{n}<\infty \), \(0<\rho _{n}<\frac{1}{\xi }\) and \(\liminf \limits _{n\rightarrow \infty } \rho _{n}(\frac{1}{\xi }-\rho _{n})>0\) where \(\xi =\max \{\alpha ,\beta \}\).

STEP 1.:

Choose \(\theta _{n}\), such that \(0\le \theta _{n}\le \bar{\theta }_{n}=\min \{\bar{\alpha _{n}},\bar{\beta _{n}}\}\) where

$$\begin{aligned} \bar{\alpha }_{n}:= \left\{ \begin{array}{lr} \min \big \{\theta ,\frac{\epsilon _{n}}{\Vert x_{n-1}-x_{n}\Vert }\big \},&{} \text{ if } x_{n-1}\ne x_{n} \\ \theta , &{} \text{ otherwise }, \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} \bar{\beta }_{n}:= \left\{ \begin{array}{lr} \min \big \{\theta ,\frac{\varepsilon _{n}}{\Vert x_{n-1}-x_{n}\Vert ^{2}}\big \},&{} \text{ if } x_{n-1}\ne x_{n} \\ \theta , &{} \text{ otherwise }. \end{array} \right. \end{aligned}$$
STEP 2.:

Evaluate \(y_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}).\)

STEP 3.:

Evaluate \(t_{n}=(I-U)y_{n}\), \(w_{n}=(I-T)A(y_{n})\) and \(\vartheta _{n}=\Vert t_{n}+A^{*}(w_{n})\Vert .\) Stopping criterion: If \(\vartheta _{n}=0\), then Stop. Otherwise, go to STEP 4.

STEP 4.:

Evaluate \( x_{n+1}=y_{n}-\eta _{n}(t_{n}+A^{*}(w_{n})), \) where

$$\begin{aligned} \eta _{n}=\frac{\rho _{n}(\Vert t_{n}\Vert ^{2}+\Vert w_{n}\Vert ^{2})}{\vartheta _{n}^{2}}. \end{aligned}$$
STEP 5.:

Set \(n:=n+1\) and go to STEP 1.

Corollary 3.6

If \(U:H_{1}\rightarrow H_{1}\) is an \(\alpha \)-averaged mapping and \(T:H_{2}\rightarrow H_{2}\) is a \(\beta \)-averaged mapping, then the sequence \(\{x_{n}\}\) generated by Algorithm 3.5 converges weakly to a solution of the SCFP (3), provided that \(\{{\bar{x}}\in F(U):A{\bar{x}}\in F(T)\}\) is nonempty.

4 Application

As applications, we can obtain several new algorithms to solve problems that can be converted into the fixed-point problem of an averaged mapping. Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and let \(A_{k}:H_{1}\rightarrow {H_{2}}\) (\(k\in \{1,\ldots ,R\}\)) be bounded linear operators.

4.1 Generalized split system of minimization problem

Let H be a real Hilbert space and let \(f:H\rightarrow {\mathbb {R}}\cup \{+\infty \}\) be a proper, lower semicontinuous convex function. The proximal operator of the function f with scaling parameter \(\lambda >0\) is the mapping \(\text{ prox}_{\lambda f}:H\rightarrow H\) given by

$$\begin{aligned} \text{ prox}_{\lambda f}(x)=\arg \min \limits _{y\in H}\{f(y)+\frac{1}{2\lambda }\Vert x-y\Vert ^{2}\}. \end{aligned}$$

Proximal operators are firmly nonexpansive (\(\frac{1}{2}\)-averaged mappings), and the point \({\bar{x}}\) minimizes f if and only if \(\text{ prox}_{\lambda f}({\bar{x}})={\bar{x}}\); see [5]. Consider the generalized split system of minimization problem (GSSMP) given by
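As a small illustration, two proximal mappings with closed forms are sketched below in Python: the quadratic case \(g(y)=\frac{1}{2}y^{T}By\), for which \(\text{ prox}_{\lambda g}(x)=(I+\lambda B)^{-1}x\) (the form used later in Example 5.1), and the indicator of a closed ball, for which the proximal mapping is the metric projection (used in Sect. 4.2). The function names and the ball example are ours and only illustrative.

```python
import numpy as np

def prox_quadratic(x, B, lam=1.0):
    """prox_{lam*g}(x) for g(y) = 0.5 * y^T B y, i.e. (I + lam*B)^{-1} x."""
    return np.linalg.solve(np.eye(len(x)) + lam * B, x)

def prox_indicator_ball(x, r=1.0, lam=1.0):
    """prox of the indicator of {x : ||x|| <= r}: the projection onto the ball,
    independently of lam."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x
```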

$$\begin{aligned} {\bar{x}}\in \bigcap \limits _{i=1}^{N}(\arg \min f_{i}) \text{ such } \text{ that } A_{k}({\bar{x}})\in \bigcap \limits _{j=1}^{M}(\arg \min g_{j}),\quad \forall k\in \{1,\ldots ,R\}, \end{aligned}$$
(24)

where \(f_{i}:H_{1}\rightarrow {\mathbb {R}}\cup \{+\infty \}\) (\(i\in \{1,\ldots ,N\}\)) and \(g_{j}:H_{2}\rightarrow {\mathbb {R}}\cup \{+\infty \}\) (\(j\in \{1,\ldots ,M\}\)) are proper, lower semicontinuous convex functions. Let \(\varOmega \) be the solution set of (24), and assume that \(\varOmega \) is nonempty. If \(A_{k}=A\) for all \(k\in \{1,\ldots ,R\}\), the GSSMP (24) reduces to the split system of minimization problem (SSMP) considered in [25, 26].

By taking \(U_{i}=\text{ prox}_{\lambda f_{i}}\) and \(T_{j}=\text{ prox}_{\lambda g_{j}}\) in Algorithm 3.1, we obtain the following iterative method for approximating a solution of the GSSMP (24).

Algorithm 4.1

Choose \(x_{0},x_{1}\in H_{1}\). Let \(\theta \in [0,1)\) and \(\{\epsilon _{n}\}\), \(\{\varepsilon _{n}\}\) and \(\{\rho _{n}\}\) be positive real parameter sequences, such that \(\lim \limits _{n\rightarrow \infty }\epsilon _{n}=0\), \(\sum _{n=1}^{\infty }\varepsilon _{n}<\infty \), \(0<\rho _{n}<2\) and \(\liminf \limits _{n\rightarrow \infty } \rho _{n}(2-\rho _{n})>0\).

STEP 1.:

Choose \(\theta _{n}\), such that \(0\le \theta _{n}\le \bar{\theta }_{n}=\min \{\bar{\alpha _{n}},\bar{\beta _{n}}\}\) where

$$\begin{aligned} \bar{\alpha }_{n}:= \left\{ \begin{array}{lr} \min \big \{\theta ,\frac{\epsilon _{n}}{\Vert x_{n-1}-x_{n}\Vert }\big \},&{} \text{ if } x_{n-1}\ne x_{n} \\ \theta , &{} \text{ otherwise }, \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} \bar{\beta }_{n}:= \left\{ \begin{array}{lr} \min \big \{\theta ,\frac{\varepsilon _{n}}{\Vert x_{n-1}-x_{n}\Vert ^{2}}\big \},&{} \text{ if } x_{n-1}\ne x_{n} \\ \theta , &{} \text{ otherwise }. \end{array} \right. \end{aligned}$$
STEP 2.:

Evaluate \(y_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}).\)

STEP 3.:

Evaluate \(t_{n}^{(i)}=(I-\text{ prox}_{\lambda f_{i}})y_{n}\), \(w_{n}^{(j,k)}=(I-\text{ prox}_{\lambda g_{j}})A_{k}(y_{n})\) and

$$\begin{aligned} \vartheta _{n}=\Big \Vert \sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big \Vert . \end{aligned}$$

Stopping criterion: If \(\vartheta _{n}=0\), then Stop. Otherwise, go to STEP 4.

STEP 4.:

Evaluate \( x_{n+1}=y_{n}-\eta _{n}\Big (\sum _{i=1}^{N}t^{(i)}_{n}+\sum _{k=1}^{R}\sum _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big ), \) where

$$\begin{aligned} \eta _{n}=\frac{\rho _{n}\Big (\sum \nolimits _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+\sum \nolimits _{k=1}^{R}\sum \nolimits _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )}{\vartheta _{n}^{2}}. \end{aligned}$$
STEP 5.:

Set \(n:=n+1\) and go to STEP 1.

Theorem 4.2

The sequence \(\{x_{n}\}\) generated by Algorithm 4.1 converges weakly to a solution point of the GSSMP (24).

4.2 Generalized multiple-set split feasibility problem

The generalized multiple-set split feasibility problem (GMSSFP) is formulated as the problem of finding a point

$$\begin{aligned} {\bar{x}}\in \bigcap _{i=1}^{N}C_{i} \text{, } \text{ such } \text{ that } A_{k}({\bar{x}})\in \bigcap _{j=1}^{M}Q_{j},\quad \forall k\in \{1,\ldots ,R\}, \end{aligned}$$
(25)

where \(C_{i}\) \((i\in \{1,\ldots ,N\})\) and \(Q_{j}\) \((j\in \{1,\ldots ,M\})\) are nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Many methods have been developed to solve the GMSSFP (25) for the case \(A_{k}=A\) for all \(k\in \{1,\ldots ,R\}\); see, for example, [9, 43] and references therein. The GMSSFP (25) with \(N=M=R=1\) is known as the SFP [10, 14].

The GMSSFP (25) is a special case of the GSSMP (24), obtained by taking \(f_{i}=\delta _{C_{i}}\) and \(g_{j}=\delta _{Q_{j}}\) (the indicator functions of \(C_{i}\) and \(Q_{j}\)) in GSSMP (24), in which case \(\text{ prox}_{\lambda f_{i}}=P_{C_{i}}\) and \(\text{ prox}_{\lambda g_{j}}=P_{Q_{j}}\).

4.3 Generalized split system of inclusion problem

For a real Hilbert space H and a maximal monotone set-valued mapping \(T:H\rightarrow 2^{H}\), the resolvent operator \(J_{\lambda }^{T}\) associated with T and \(\lambda >0\) is

$$\begin{aligned} J_{\lambda }^{T}(x)=(I+\lambda T)^{-1}(x),x\in H. \end{aligned}$$
(26)

The resolvent operator \(J_{\lambda }^{T}\) is single-valued and firmly nonexpansive. Moreover, \(0\in T({\bar{x}})\) if and only if \({\bar{x}}\) is a fixed point of \(J_{\lambda }^{T}\) for all \(\lambda >0\); see [8].
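A minimal sketch, assuming the simplest case of a single-valued linear monotone operator \(T(x)=Mx\) with M positive semidefinite, in which the resolvent (26) reduces to a linear solve; the matrix and the test point are illustrative assumptions.

```python
import numpy as np

def resolvent_linear(x, M, lam=1.0):
    """J_lam^T(x) = (I + lam*M)^{-1} x for the monotone linear operator T(x) = M x."""
    return np.linalg.solve(np.eye(len(x)) + lam * M, x)

# Zeros of T are exactly the fixed points of the resolvent: here M @ x_bar = 0,
# so J_lam^T(x_bar) = x_bar for every lam > 0.
M = np.diag([2.0, 1.0, 0.0])
x_bar = np.array([0.0, 0.0, 3.0])
assert np.allclose(resolvent_linear(x_bar, M), x_bar)
```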

Let \(T_{i}:H_{1}\rightarrow 2^{H_{1}}\) \((i\in \{1,\ldots ,N\})\) and \(U_{j}:H_{2}\rightarrow 2^{H_{2}}\) \((j\in \{1,\ldots ,M\})\) be maximal monotone mappings. The generalized split system of inclusion problem is to find \({\bar{x}}\in H_{1}\), such that

$$\begin{aligned} \left\{ \begin{array}{ll} 0\in T_{i}({\bar{x}}),\forall i\in \{1,\ldots ,N\},\\ 0\in U_{j}(A_{k}({\bar{x}})),\forall (j,k)\in \{1,\ldots ,M\}\times \{1,\ldots ,R\}. \end{array} \right. \end{aligned}$$
(27)

Replacing \(U_{i}\) by \(J_{\lambda }^{T_{i}}\) and \(T_{j}\) by \(J_{\lambda }^{U_{j}}\) in Algorithm 3.1, we obtain a weak convergence theorem for solving (27).

4.4 Generalized split system of equilibrium problems

Let \(h:H\times H\rightarrow {\mathbb {R}}\) be a bifunction. Then, we say that h satisfies Condition CO on H if the following assumptions are satisfied:

  1. (A1)

    \(h(u,u) = 0\), for all \(u\in H;\)

  2. (A2)

    h is monotone on H, i.e., \(h(u,v)+h(v,u)\le 0\), for all \(u,v\in H;\)

  3. (A3)

    for each \(u,v,w\in H\), \(\limsup \limits _{\alpha \downarrow {0}}h(\alpha w+(1-\alpha )u,v)\le h(u,v);\)

  4. (A4)

    h(u, .) is convex and lower semicontinuous on H for each \(u\in H\).

Lemma 4.3

[20]. If h satisfies Condition CO on H, then for each \(r>0\) and \(u\in H\), the mapping (called the resolvent of h) given by

$$\begin{aligned} T^{h}_{r}(u)=\{w\in H: h(w,v)+\frac{1}{r}\langle v-w,w-u\rangle \ge {0}, \forall {v\in H}\} \end{aligned}$$

satisfies the following conditions:

  1. (1)

    \(T_{r}^{h}\) is single-valued and firmly nonexpansive;

  2. (2)

    Fix\((T_{r}^{h})=\{{\bar{x}}\in H:h({\bar{x}},y)\ge 0,\forall y\in H\}\);

  3. (3)

    \(\{{\bar{x}}\in H:h({\bar{x}},y)\ge 0,\forall y\in H\}\) is closed and convex.

Let \(f_{i}:H_{1}\times H_{1}\rightarrow {\mathbb {R}}\) \((i\in \{1,\ldots ,N\})\) and \(g_{j}:H_{2}\times H_{2}\rightarrow {\mathbb {R}}\) \((j\in \{1,\ldots ,M\})\) be bifunctions. Assume that each bifunction \(f_{i}\) and \(g_{j}\) satisfies Condition CO on \(H_{1}\) and \(H_{2}\), respectively. The generalized split system of equilibrium problems (GSSEP) is to find \({\bar{x}}\in H_{1}\), such that

$$\begin{aligned} \left\{ \begin{array}{ll} f_{i}({\bar{x}},x)\ge 0,\forall x\in H_{1},\forall i\in \{1,\ldots ,N\},\\ g_{j}(A_{k}({\bar{x}}),u)\ge 0,\forall u\in H_{2},\forall (j,k)\in \{1,\ldots ,M\}\times \{1,\ldots ,R\}. \end{array} \right. \end{aligned}$$
(28)

Similarly, for \(r>0\), replacing \(U_{i}\) by \(T^{f_{i}}_{r}\) and \(T_{j}\) by \(T^{g_{j}}_{r}\) in Algorithm 3.1, we obtain a weak convergence theorem for the approximation of a solution of the consistent GSSEP (28).

5 Numerical result

In this section, we provide two numerical examples to illustrate the behavior and potential applicability of our proposed algorithm (Algorithm 3.1). In the figures, the x-axis represents the number of iterations n (Iter(n)), while the y-axis gives the value of \(error(n)=\Vert x_{n}\Vert \) generated at iteration n. The convergence behavior of Algorithm 3.1 with different starting points and different parameters, where \(\theta =0.5\) and \(\theta _{n}\) is chosen so that \(0\le \theta _{n}\le \bar{\theta }_{n}\) with \(\bar{\theta }_{n}=\min \{\bar{\alpha _{n}},\bar{\beta _{n}}\}\) for \(\bar{\alpha _{n}}\) and \(\bar{\beta _{n}}\) defined by (7) and (8), is reported in the figures and tables. In the tables, we used the stopping criterion \(\frac{\Vert x_{n}-x_{n-1}\Vert }{\Vert x_{1}-x_{2}\Vert }<0.01\), where 'Iter(n)' denotes the number of iterations, 'CPU(s)' denotes the CPU time in seconds, \(D(n)=\Vert x_{n}-x_{n-1}\Vert \), and \(error(n)=\Vert x_{n}\Vert \).

All codes were written in MATLAB and run on an HP laptop with an Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz 2.70GHz and 4.00GB of RAM.

Example 5.1

In this example, we consider the problem of finding a point

$$\begin{aligned} {\bar{x}}\in \bigcap \limits _{i=1}^{3} C_{i}, \text{ such } \text{ that } A_{k}({\bar{x}})\in \bigcap \limits _{j=1}^{4}(\arg \min g_{j}),\forall k\in \{1,2,3\}, \end{aligned}$$
(29)

where \(C_{i}\) (\(i\in \{1,2,3\}\)) are closed convex subsets of \({\mathbb {R}}^{p}\), \(g_{j}:{\mathbb {R}}^{q}\rightarrow {\mathbb {R}}\cup \{+\infty \}\) (\(j\in \{1,2,3,4\}\)), and \(A_{k}=\gamma _{k}G^{(k)}_{q\times p}\) (\(k\in \{1,2,3\}\)), given by \(C_{1}=\{x\in {\mathbb {R}}^{p}:\Vert x\Vert \le 1\}\), \(C_{2}=\{x\in {\mathbb {R}}^{p}:\langle x,w_{2}\rangle \le 1\}\) and \(C_{3}=\{x\in {\mathbb {R}}^{p}:\langle x,w_{3}\rangle \le 10\}\) for \(w_{2}= w_{3}=(1,\ldots ,1)\in {\mathbb {R}}^{p}\), where each \(g_{j}\) is the quadratic function \(g_{j}(y)=\frac{1}{2}y^{T}B_{j}y\) with \(B_{j}\) an invertible symmetric positive semidefinite \(q\times q\) matrix, \(G^{(k)}_{q\times p}\) is a \(q\times p\) matrix, and \(\gamma _{k}\in {\mathbb {R}}\).

Note that each \(C_{i}\) is a closed convex subset of \({\mathbb {R}}^{p}\) and each \(g_{j}\) is a proper, lower semicontinuous convex function. Take \(U_{i}:{\mathbb {R}}^{p}\rightarrow {\mathbb {R}}^{p}\) to be the projection mapping \(U_{i}=P_{C_{i}}\), and \(T_{j}:{\mathbb {R}}^{q}\rightarrow {\mathbb {R}}^{q}\) (\(j\in \{1,\ldots ,M\}\)) to be the proximal operator \(T_{j}=\text{ prox}_{\lambda g_{j}}\) with \(\lambda =1\), and notice that \(T_{j}(y)=\text{ prox}_{\lambda g_{j}}(y)=(I+B_{j})^{-1}(y).\) Moreover, \(U_{i}\) and \(T_{j}\) are \(\frac{1}{2}\)-averaged mappings, \(F(U_{i})=F(P_{C_{i}})=C_{i}\), and \(F(T_{j})=\{{\bar{y}}\in {\mathbb {R}}^{q}:g_{j}({\bar{y}})\le g_{j}(y), \forall y\in {\mathbb {R}}^{q}\}=\arg \min g_{j}.\)
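A minimal Python sketch of these operators is given below; the halfspace projection formula is the standard one, and the dimension, the random matrices \(B_{j}\), and the starting points mimic, but are not identical to, the MATLAB setup used in the experiments.

```python
import numpy as np

def proj_ball(x, r=1.0):
    """P_{C_1}: projection onto the ball {x : ||x|| <= r}."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def proj_halfspace(x, w, b):
    """Projection onto the halfspace {x : <x, w> <= b}."""
    viol = np.dot(w, x) - b
    return x if viol <= 0 else x - (viol / np.dot(w, w)) * w

def prox_quad(y, B):
    """prox_{g_j}(y) = (I + B_j)^{-1} y for g_j(y) = 0.5 * y^T B_j y (lambda = 1)."""
    return np.linalg.solve(np.eye(len(y)) + B, y)

p = 5
rng = np.random.default_rng(0)
w = np.ones(p)
U_list = [proj_ball,
          lambda x: proj_halfspace(x, w, 1.0),     # P_{C_2}
          lambda x: proj_halfspace(x, w, 10.0)]    # P_{C_3}
Bs = []
for _ in range(4):
    Q = rng.standard_normal((p, p))
    Bs.append(Q @ Q.T + 1e-3 * np.eye(p))          # symmetric positive definite B_j
T_list = [lambda y, B=B: prox_quad(y, B) for B in Bs]
A_list = [k * np.eye(p) for k in (1, 2, 3)]        # A_k = gamma_k * I with gamma_k = k
x0 = rng.integers(-2000, 2000, size=p).astype(float)
x1 = 50.0 * x0
```

These lists can be passed directly to the gsscfp sketch given after Remark 3.2, e.g., gsscfp(U_list, T_list, A_list, x0, x1, xi=0.5, rho=1.0).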

Hence, Algorithm 3.1 can solve the problem considered in this example. To apply Algorithm 3.1 to problem (29), we use \(p=q\), each \(B_{j}\) is a randomly generated invertible symmetric positive semidefinite \(q\times q\) matrix, \(G^{(k)}_{q\times p}=I_{p\times p}\), where \(I_{p\times p}\) is the \(p\times p\) identity matrix, and \(\gamma _{k}=k\) for \(k\in \{1,2,3\}\). The numerical behavior for this example is reported in Figs. 1 and 2, and in Table 1. In Fig. 1 and Table 1, we use

  1. Case 1:

    \(\theta _{n}=\bar{\theta }_{n}\), \(\rho _{n}=1\), \(\epsilon _{n}=\varepsilon _{n}=\frac{1}{n^{2}}\),

  2. Case 2:

    \(\theta =0\), \(\rho _{n}=1\), \(\epsilon _{n}=\varepsilon _{n}=\frac{1}{n^{2}}\).

Fig. 1 For \(p=5\), \(x_{0}=randi([-2000,2000],5,1)\) and \(x_{1}=50x_{0}\)

Fig. 2 For \(p=100\), \(\theta _{n}=\bar{\theta }_{n}\), \(\epsilon _{n}=\frac{1}{n^{0.1}}\), \(\varepsilon _{n}=\frac{1}{n^{6}}\), and \(x_{1}=randi([-2000,2000],100,1)\), \(x_{0}=10x_{1}\)

Example 5.2

We consider the following problem in the space \(l_{2}\): find a point

$$\begin{aligned} {\bar{x}}\in \bigcap \limits _{i=1}^{3}F(U_{i}), \text{ such } \text{ that } A_{k}({\bar{x}})\in \bigcap \limits _{j=1}^{3}F(T_{j}),\forall k\in \{1,2,3\}, \end{aligned}$$
(30)

where \(U_{i}:l_{2}\rightarrow l_{2}\) (\(i\in \{1,2,3\}\)), \(T_{j}:l_{2}\rightarrow l_{2}\) (\(j\in \{1,2,3\}\)) and \(A_{k}:l_{2}\rightarrow l_{2}\) (\(k\in \{1,2,3\}\)) are given by \(U_{i}:x=(x^{(1)},x^{(2)},x^{(3)},\ldots )\mapsto \big (a^{(1)}_{(i)}x^{(1)},a^{(2)}_{(i)}x^{(2)},a^{(3)}_{(i)}x^{(3)},\ldots \big )\), \(T_{j}:x=(x^{(1)},x^{(2)},x^{(3)},\ldots )\mapsto b_{(j)} x= b_{(j)}\big (x^{(1)},x^{(2)},x^{(3)},\ldots \big )\), and \(A_{k}(x)=\gamma _{k}x\), where \(\{a^{(t)}_{(i)}\}^{\infty }_{t=1}\) is a sequence of real numbers with \(0<a^{(t)}_{(i)}\le a_{(i)}\le 1\) for each \(i\in \{1,2,3\}\), \(0< b_{(j)}\le 1\) for each \(j\in \{1,2,3\}\), and \(\gamma _{k}\in {\mathbb {R}}\) for \(k\in \{1,2,3\}\). Since \(0\le 1-a^{(t)}_{(i)}<1\) for every t, we have

$$\begin{aligned} \langle (I-U_{i})(x)-(I-U_{i})(y),x-y\rangle= & {} \sum \limits _{t=1}^{\infty }\big (1-a^{(t)}_{(i)}\big )(x_{t}-y_{t})^{2}\\\ge & {} \sum \limits _{t=1}^{\infty }\big (1-a^{(t)}_{(i)}\big )^{2}(x_{t}-y_{t})^{2}\\= & {} \Vert (I-U_{i})(x)-(I-U_{i})(y)\Vert ^{2}. \end{aligned}$$

Hence, \(I-U_{i}\) is 1-ism, and so, by Proposition 2.5, \(U_{i}\) is \(\frac{1}{2}\)-averaged. Similarly, since \(0\le 1-b_{(j)}<1\), \(T_{j}\) is \(\frac{1}{2}\)-averaged. We apply Algorithm 3.1 to this example [problem (30)] with \(a^{(t)}_{(i)}=\frac{i}{i+t^{i}}\), \(b_{(j)}=\frac{j}{j+1}\) and \(\gamma _{k}=k\), and with the parameters in the algorithm taken as follows (a minimal Python sketch of these operators is given below, after the parameter cases):

  1. Case 3:

    \(\theta _{n}=\bar{\theta }_{n}\), \(\rho _{n}=1.5\), \(\epsilon _{n}=\frac{1}{n^{0.5}}\), \(\varepsilon _{n}=\frac{1}{n^{3}}\),

  2. Case 4:

    \(\theta _{n}=\bar{\theta }_{n}\), \(\rho _{n}=0.25\), \(\epsilon _{n}=\frac{1}{n^{2}}\), \(\varepsilon _{n}=\frac{1}{n^{4}}\),

  3. Case 5:

    \(\theta =0\), \(\rho _{n}=0.5\), \(\epsilon _{n}=\frac{1}{n^{2}}\), \(\varepsilon _{n}=\frac{1}{n^{4}}\),

The numerical results for these cases are reported in Fig. 3 and Table 2.
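As referenced above, the following is a minimal Python sketch of the operators of this example, truncated to the first d coordinates of \(l_{2}\); the truncation is harmless here because the operators act coordinatewise and the starting points have finitely many nonzero entries, and the truncation dimension itself is an assumption of the sketch.

```python
import numpy as np

d = 100                                   # truncation dimension (assumption of the sketch)
t = np.arange(1, d + 1, dtype=float)
U_list = [lambda x, i=i: (i / (i + t ** i)) * x for i in (1, 2, 3)]   # a_(i)^(t) = i/(i + t^i)
T_list = [lambda y, j=j: (j / (j + 1.0)) * y for j in (1, 2, 3)]      # b_(j) = j/(j + 1)
A_list = [float(k) * np.eye(d) for k in (1, 2, 3)]                    # A_k = gamma_k * I

x0 = np.zeros(d)
x0[:10] = [1, -2, 3, -4, 5, -6, 7, -8, 9, -10]
x1 = 2.0 * x0
```

Since all of these operators are \(\frac{1}{2}\)-averaged, Case 3 corresponds, e.g., to gsscfp(U_list, T_list, A_list, x0, x1, xi=0.5, rho=1.5).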

Table 1 For \(p=4\), \(\rho _{n}=1.5\), \(\epsilon _{n}=\frac{1}{n^{0.5}}\), \(\varepsilon _{n}=\frac{1}{n^{3}}\), \(\rho _{n}=1\) and for different starting points \(x_{0},x_{1}\)
Fig. 3 For \(x_{0}=(1,2,3,\ldots ,100,0,0,0,\ldots )\) and \(x_{1}=100x_{0}\)

Table 2 For \(x_{0}=(1,-2,3,-4,5,-6,7,-8,9,-10,0,0,0,\ldots )\) and \(x_{1}=2x_{0}\)

The numerical results illustrated in Figs. 1, 2, and 3 and in Tables 1 and 2 confirm the computational effectiveness of the proposed algorithm. From the figures and tables, we see that Algorithm 3.1 performs better when \(\theta _{n}\ne 0\) is chosen, which confirms the efficacy of our algorithm with a nonzero inertial term.

6 Conclusions

In this paper, we introduce a new algorithm for a generalized split inverse problem involving averaged mappings. Under suitable conditions imposed on the parameters, we have proved the weak convergence of the algorithm. A new adaptive technique is developed so that the step sizes in the proposed algorithm are selected without prior knowledge of the operator norms. Some applications of our result are also given. Preliminary numerical experiments have been performed to illustrate the convergence of the proposed algorithm.