Abstract
The purpose of this paper is to propose a new inertial self-adaptive algorithm for the generalized split system of common fixed-point problems for a finite family of averaged mappings in the framework of Hilbert spaces. A weak convergence theorem for the proposed method is given, and its theoretical application to solving several generalized problems is presented. The behavior and efficiency of the proposed algorithm are illustrated by some numerical tests.
1 Introduction
Fixed-point problems are being used as mathematical programming models to solve several nonlinear problems arising in optimization, machine learning, finance, economics, networks, transportation, and in applied areas such as image recovery and signal processing (see [11, 13, 30]). Due to this, several authors have studied various fixed-point problems in a more generalized sense as a type of Split Inverse Problem (SIP) [15]. The Generalized Split Inverse Problem (GSIP) [21] is formulated as a problem of finding a point
$$\begin{aligned} \bar{x}\in X \text{ that solves } \mathrm{IP1}, \text{ such that } A_{k}(\bar{x})\in Y \text{ solves } \mathrm{IP2} \text{ for each } k\in \varLambda , \end{aligned}$$(1)
where \(\varLambda \subset {\mathbb {R}}\) is an index set, IP1 and IP2 are two inverse problems formulated in the spaces X and Y, respectively, and \(A_{k}\) is a linear transformation from X to Y for each \(k\in \varLambda \). If \(A=A_{k}\) for all \(k\in \varLambda \), then the GSIP reduces to the Split Inverse Problem (SIP) [15]. Many models of inverse problems can be cast in this framework by choosing different inverse problems for IP1 and IP2. There is considerable investigation in the framework of the SIP; see, for example, [12, 14, 24, 27, 35, 39, 43] and the many references therein.
In this paper, we consider a problem formulated in the framework of the GSIP. To be precise, we consider the generalized split system of common fixed-point problems (in short, GSSCFP), formulated as a problem of finding
$$\begin{aligned} \bar{x}\in \bigcap _{i=1}^{N}F(U_{i}) \text{ such that } A_{k}(\bar{x})\in \bigcap _{j=1}^{M}F(T_{j}) \text{ for all } k\in \{1,\ldots ,R\}, \end{aligned}$$(2)
where \(H_{1}\) and \(H_{2}\) are two real Hilbert spaces, \(U_{i}:H_{1}\rightarrow H_{1}\) (\(i\in \{1,\ldots ,N\}\)) and \(T_{j}:H_{2}\rightarrow H_{2}\) (\(j\in \{1,\ldots ,M\}\)) are nonlinear mappings, \(A_{k}:H_{1}\rightarrow {H_{2}}\) (\(k\in \{1,\ldots ,R\}\)) are linear transformations, and \(F(U_{i})\) and \(F(T_{j})\) are the fixed-point sets of \(U_{i}\) and \(T_{j}\), i.e., \(F(U_{i})=\{{\bar{x}}\in H_{1}:U_{i}({\bar{x}})={\bar{x}}\}\) and \(F(T_{j})=\{{\bar{y}}\in H_{2}:T_{j}({\bar{y}})={\bar{y}}\}\). There are few studies in the framework of (2); see [22, 23] for demimetric mappings. However, several algorithms have been introduced to solve (2) for different classes of operators in the case \(A_{k}=A\) for all \(k\in \{1,\ldots ,R\}\). If \(U=U_{i}\) for all \(i\in \{1,\ldots ,N\}\), \(T=T_{j}\) for all \(j\in \{1,\ldots ,M\}\), and \(A_{k}=A\) for all \(k\in \{1,\ldots ,R\}\), the GSSCFP (2) reduces to the split common fixed-point problem (SCFP), i.e., the problem of finding
$$\begin{aligned} \bar{x}\in F(U) \text{ such that } A(\bar{x})\in F(T). \end{aligned}$$(3)
The SCFP was first introduced for the case of directed operators U and T by Censor and Segal [16], who introduced the following algorithm for solving the SCFP:
$$\begin{aligned} x_{n+1}=U\big (x_{n}-\tau A^{*}(I-T)Ax_{n}\big ), \end{aligned}$$(4)
where \(\tau \) is a properly chosen step size. If \(\tau \in (0,\frac{2}{\Vert A\Vert ^{2}})\), then the sequence \(\{x_{n}\}\) generated by (4) converges weakly to a solution of the SCFP, provided that the solution set is nonempty. In particular, if \(U=P_{C}\) and \(T=P_{Q}\) are projection operators, then the SCFP reduces to the well-known split feasibility problem (SFP) [10, 14] (C and Q are nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively). Initiated by the SCFP, several GSIPs have been investigated and studied, for example, for directed operators [44], demicontractive mappings [39], asymptotically quasi-nonexpansive mappings [40], quasi-nonexpansive operators [12], and averaged mappings [48]; see also the many references therein. Although a wide range of existing methods are attractive for approximating the solution of the GSIP, several of them are not very convenient to use. The common drawback of several studies on the GSIP is that the implementation requires a step size that depends on the operator norm, which is in general not easy to determine in practice, because the operator norm is a global quantity that is often quite difficult (if not impossible) to compute; see the theorem of Hendrickx and Olshevsky in [29]. To overcome this difficulty, initiated by López et al. [31], many authors have constructed adaptive variable step sizes that do not require the operator norm (see, for example, [41, 45, 47, 48]).
In many disciplines, accelerated convergence is often much more desirable and important; see, for example, [18, 34]. A widely used acceleration technique for an iterative algorithm is the momentum method of Polyak [38] (inertial extrapolation). Polyak's method was used by Nesterov [36] to speed up the rate of convergence of an iterative algorithm for solving the smooth convex minimization problem; the algorithm is given by
$$\begin{aligned} \left\{ \begin{array}{l} y_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}),\\ x_{n+1}=y_{n}-\lambda _{n}\nabla f(y_{n}), \end{array} \right. \end{aligned}$$(5)
where \(\theta _{n}\in [0,1)\) is an extrapolation factor, \(\lambda _{n}\) is a (sufficiently small) step-size parameter, and \(\nabla f\) is the gradient of a smooth convex function f. The algorithm is more effective and converges faster because of the inertial term \(\theta _{n} (x_{n}-x_{n-1})\) in (5). Consequently, several inertial-type algorithms have been proposed in the literature; see, for example, the inertial proximal method [2], the inertial forward-backward algorithm [32], the inertial proximal ADMM [17], and the fast iterative shrinkage-thresholding algorithm (FISTA) [6]; see also [1, 42].
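To make the role of the inertial term concrete, the following is a minimal sketch of scheme (5) on a toy smooth function; the constant step size, the inertial factor, and the test function are illustrative choices of ours, not prescriptions from [36, 38].

```python
import numpy as np

def inertial_gradient(grad_f, x0, lam=0.1, theta=0.9, n_iters=300):
    """Inertial (momentum) gradient scheme:
    y_n = x_n + theta*(x_n - x_{n-1});  x_{n+1} = y_n - lam*grad_f(y_n)."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        y = x + theta * (x - x_prev)        # inertial extrapolation step
        x_prev, x = x, y - lam * grad_f(y)  # gradient step taken at y_n
    return x

# Toy run: f(x) = ||x||^2/2, so grad_f(x) = x and the minimizer is 0
x_star = inertial_gradient(lambda v: v, np.array([5.0]))
```

On this quadratic the iterates contract toward the minimizer 0; the inertial term does not change the fixed point, only the speed of approach.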
In this paper, motivated by the adaptive variable step-size selection method and the inertial extrapolation method, we propose an iterative method for solving GSSCFP (2) for averaged mappings \(U_{i}\) and \(T_{j}\). We use a different kind of selection method for \(\theta _{n}\) in the inertial term \(\theta _{n} (x_{n}-x_{n-1})\). Our proposed method is desirable because it is an inertially accelerated iterative algorithm, and it is efficient in practice because an adaptive variable step-size selection technique is established, so that the implementation of our proposed algorithm does not need any prior information about the operator norms \(\Vert A_{k}\Vert \) (\(k\in \{1,\ldots ,R\}\)).
The outline of the paper is as follows. In Sect. 2, we recall some definitions and preliminary results for further analysis. In Sect. 3, the new proposed method for GSSCFP (2) is introduced and its convergence is analyzed. In Sect. 4, we give some of the applications of our result. Finally, in Sect. 5, the numerical experiments are given to illustrate the behavior and efficiency of our result.
2 Preliminary
Let H be a real Hilbert space with inner product \(\langle \cdot ,\cdot \rangle \) and induced norm \(\Vert \cdot \Vert \).
For a sequence \(\{x_{n}\}\) in H and \(p\in H\), we write \(x_{n}\rightharpoonup p\) to indicate that the sequence \(\{x_{n}\}\) converges weakly to p, and \(x_{n}\rightarrow p\) to indicate that \(\{x_{n}\}\) converges strongly to p. The symbol \(\omega _{w}\) denotes the \(\omega \)-weak limit set of \(\{x_{n}\}\), i.e., \(\omega _{w}(x_{n})=\{p:\exists \{x_{n_{l}}\} \subset \{x_{n}\} \text{, }{ such that } x_{n_{l}}\rightharpoonup p \}.\)
Lemma 2.1
For a real Hilbert space H, we have the following fundamental properties:
-
(i)
\(\Vert x+y\Vert ^{2}=\Vert x\Vert ^{2}+\Vert y\Vert ^{2}+2\langle x,y \rangle , \quad \forall x,y\in H;\)
-
(ii)
\(\langle x,y\rangle = \frac{1}{2}\Vert x\Vert ^{2}+\frac{1}{2}\Vert y\Vert ^{2}-\frac{1}{2}\Vert x-y\Vert ^{2},\quad \forall x,y\in H;\)
-
(iii)
\(\Vert \alpha x+(1-\alpha )y\Vert ^{2}=\alpha \Vert x\Vert ^{2}+(1-\alpha )\Vert y\Vert ^{2}-\alpha (1-\alpha )\Vert x-y\Vert ^{2},\quad \forall x,y\in H, \alpha \in {\mathbb {R}}.\)
Definition 2.2
The mapping \(T:H\rightarrow H\) is called
-
(a)
L-Lipschitz if there exists \(L>0\), such that
$$\begin{aligned} \Vert Tx-Ty\Vert \le L\Vert x-y\Vert ,\quad \forall x,y\in H. \end{aligned}$$If \(L=1\), then T is said to be a nonexpansive mapping.
-
(b)
\(\nu \)-inverse strongly monotone (\(\nu \)-ism), if there exists \(\nu >0\), such that
$$\begin{aligned} \langle x-y,Tx-Ty\rangle \ge \nu \Vert Tx-Ty\Vert ^{2},\quad \forall x,y\in H. \end{aligned}$$
Inverse strongly monotone operators have been widely used to solve practical problems in various fields, for instance, in traffic assignment problems (see, for example, [7, 28]). It is easy to see, by the Cauchy–Schwarz inequality, that if T is \(\nu \)-ism, then T is \(\frac{1}{\nu }\)-Lipschitz.
Definition 2.3
A mapping \(T:H\rightarrow H\) is said to be firmly nonexpansive if and only if \(2T-I\) is nonexpansive or, equivalently,
$$\begin{aligned} \Vert Tx-Ty\Vert ^{2}\le \langle x-y,Tx-Ty\rangle ,\quad \forall x,y\in H. \end{aligned}$$
Alternatively, T is firmly nonexpansive if and only if T can be expressed as
$$\begin{aligned} T=\frac{1}{2}(I+S), \end{aligned}$$
where I is the identity mapping and \(S:H\rightarrow H\) is nonexpansive.
Definition 2.4
A mapping \(T:H\rightarrow H\) is said to be an averaged mapping if it can be written as the average of the identity mapping I and a nonexpansive mapping, that is,
$$\begin{aligned} T=(1-\alpha )I+\alpha S, \end{aligned}$$(6)
where \(\alpha \in (0,1)\) and \(S:H\rightarrow H\) is nonexpansive. More precisely, when (6) holds, we say that T is \(\alpha \)-averaged.
Firmly nonexpansive mappings (in particular, projections and proximal mappings) are \(\frac{1}{2}\)-averaged mappings. The term averaged mapping was coined by Baillon, Bruck, and Reich [3]. The notion of averaged mapping is central in the study of fixed-point and optimization theory. Further properties of averaged mappings can be found in [3, 11, 19].
Proposition 2.5
[11]. A mapping \(T:H\rightarrow H\) is averaged if and only if the complement \(I-T\) is \(\nu \)-ism for some \(\nu >\frac{1}{2}\). Indeed, for \(\alpha \in (0,1)\), T is \(\alpha \)-averaged if and only if \(I-T\) is \(\frac{1}{2\alpha }\)-ism.
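As a quick numerical spot-check of this equivalence (an illustration of ours, not part of the original text): take \(S=0\) (the zero map, which is nonexpansive), so \(T=(1-\alpha )I\) is \(\alpha \)-averaged, and verify the \(\frac{1}{2\alpha }\)-ism inequality for \(I-T\) on random points.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5
T = lambda x: (1 - alpha) * x        # T = (1-alpha)*I + alpha*S with S = 0

for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    u = (x - T(x)) - (y - T(y))      # (I-T)x - (I-T)y = alpha*(x - y)
    # Proposition 2.5: <x - y, u> >= (1/(2*alpha)) * ||u||^2
    assert np.dot(x - y, u) >= (1.0 / (2 * alpha)) * np.dot(u, u) - 1e-12
```

Here the inequality holds with equality up to the factor \(\alpha \ge \alpha ^{2}/\alpha \), consistent with the proposition.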
Lemma 2.6
[4]. Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let \(T:C\rightarrow C\) be a nonexpansive mapping. Then, \(I-T\) is demiclosed (i.e., \(\{x_{n}\}\subset H\), such that \(x_{n}\rightharpoonup {\bar{x}}\) and \(x_{n}-Tx_{n}\rightarrow 0\), then \(T{\bar{x}}={\bar{x}}\)).
The following lemma is a consequence of Opial’s theorem [37].
Lemma 2.7
[46]. Let C be a nonempty subset of real Hilbert space H. Let \(\{x_{n}\}\) be a bounded sequence which satisfies the following properties:
-
(i)
\(\lim \limits _{n\rightarrow \infty }\Vert x_{n}-{\bar{x}}\Vert \) exists for every \({\bar{x}}\in C\),
-
(ii)
\(\omega _{w}(x_{n})\subset C\).
Then, \(\{x_{n}\}\) converges weakly to a point in C.
Lemma 2.8
[33]. If \(\{\psi _{n}\}\) is a sequence of non-negative real numbers, and \(\{\theta _{n}\}\) and \(\{\delta _{n}\}\) are sequences of real numbers, such that
-
(i)
\(\psi _{n+1}-\psi _{n}\le \theta _{n}(\psi _{n}-\psi _{n-1})+\delta _{n},\)
-
(ii)
\(\sum \delta _{n}<\infty \) and \(\theta _{n}\in [0,\theta ]\), where \(\theta \in (0,1),\)
then \(\{\psi _{n}\}\) is a convergent sequence and
$$\begin{aligned} \sum _{n=1}^{\infty }[\psi _{n+1}-\psi _{n}]_{+}<\infty , \end{aligned}$$
where \([t]_{+}=\max (t,0)\) for any \(t\in {\mathbb {R}}\).
3 Main result
In this section, we present an adaptive inertial algorithm to solve the GSSCFP (2) and investigate its weak convergence under the following assumptions:
- (A1):
-
\(U_{i}:H_{1}\rightarrow H_{1}\) is \(\alpha _{i}\)-averaged for all \(i\in \{1,\ldots ,N\}\);
- (A2):
-
\(T_{j}:H_{2}\rightarrow H_{2}\) is \(\beta _{j}\)-averaged for all \(j\in \{1,\ldots ,M\}\);
- (A3):
-
\(A_{k}:H_{1}\rightarrow H_{2}\) is a bounded linear operator for all \(k\in \{1,\ldots ,R\}\);
- (A4):
-
\(\varGamma \) denotes the solution set of the GSSCFP (2) and \(\varGamma \) is nonempty.
Algorithm 3.1
Choose \(x_{0},x_{1}\in H_{1}\). Let \(\theta \in [0,1)\) and \(\{\epsilon _{n}\}\), \(\{\varepsilon _{n}\}\) and \(\{\rho _{n}\}\) be positive real parameter sequences, such that \(\lim \nolimits _{n\rightarrow \infty }\epsilon _{n}=0\), \(\sum _{n=1}^{\infty }\varepsilon _{n}<\infty \), \(0<\rho _{n}<\frac{1}{\xi }\) and \(\liminf \nolimits _{n\rightarrow \infty } \rho _{n}(\frac{1}{\xi }-\rho _{n})>0\) where \(\xi =\max \{\alpha _{1},\ldots ,\alpha _{N},\beta _{1},\ldots ,\beta _{M}\}\).
- STEP 1.:
-
Choose \(\theta _{n}\), such that \(0\le \theta _{n}\le \bar{\theta }_{n}=\min \{\bar{\alpha _{n}},\bar{\beta _{n}}\}\) where
$$\begin{aligned} \bar{\alpha }_{n}:= \left\{ \begin{array}{lr} \min \big \{\theta ,\frac{\epsilon _{n}}{\Vert x_{n-1}-x_{n}\Vert }\big \},&{} \text{ if } x_{n-1}\ne x_{n} \\ \theta , &{} \text{ otherwise }, \end{array} \right. \end{aligned}$$(7)and
$$\begin{aligned} \bar{\beta }_{n}:= \left\{ \begin{array}{lr} \min \big \{\theta ,\frac{\varepsilon _{n}}{\Vert x_{n-1}-x_{n}\Vert ^{2}}\big \},&{} \text{ if } x_{n-1}\ne x_{n} \\ \theta , &{} \text{ otherwise }. \end{array} \right. \end{aligned}$$(8) - STEP 2.:
-
Evaluate \(y_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}).\)
- STEP 3.:
-
Evaluate \(t_{n}^{(i)}=(I-U_{i})y_{n}\), \(w_{n}^{(j,k)}=(I-T_{j})A_{k}(y_{n})\) and
$$\begin{aligned} \vartheta _{n}=\Big \Vert \sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big \Vert . \end{aligned}$$Stopping criterion: If \(\vartheta _{n}=0\), then Stop. Otherwise, go to STEP 4.
- STEP 4.:
-
Evaluate \( x_{n+1}=y_{n}-\eta _{n}\Bigg (\sum _{i=1}^{N}t^{(i)}_{n}+\sum _{k=1}^{R}\sum _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Bigg ), \) where
$$\begin{aligned} \eta _{n}=\frac{\rho _{n}\Big (\sum \nolimits _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+\sum \nolimits _{k=1}^{R}\sum \nolimits _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )}{\vartheta _{n}^{2}}. \end{aligned}$$ - STEP 5.:
-
Set \(n:=n+1\) and go to STEP 1.
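For readers who wish to experiment, the following is a minimal finite-dimensional sketch of Algorithm 3.1. It is an illustration only: the choices \(\epsilon _{n}=\varepsilon _{n}=1/n^{2}\), the constant \(\rho _{n}\equiv \rho \), and the test operators below are ours, and \(\rho \) must lie in \((0,\frac{1}{\xi })\) for the averagedness constants of the operators supplied.

```python
import numpy as np

def algorithm_3_1(U_list, T_list, A_list, x0, x1,
                  theta=0.5, rho=1.0, n_iters=200):
    """Sketch of Algorithm 3.1 with eps_n = varepsilon_n = 1/n**2
    (so eps_n -> 0 and sum varepsilon_n < infinity)."""
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for n in range(1, n_iters + 1):
        eps_n = eps2_n = 1.0 / n**2
        # STEP 1: inertial factor theta_n <= min(alpha_bar_n, beta_bar_n)
        d = np.linalg.norm(x - x_prev)
        theta_n = theta if d == 0 else min(theta, eps_n / d, eps2_n / d**2)
        # STEP 2: inertial extrapolation
        y = x + theta_n * (x - x_prev)
        # STEP 3: residuals t_n^(i), w_n^(j,k) and vartheta_n
        t = [y - U(y) for U in U_list]
        w = [[A @ y - T(A @ y) for T in T_list] for A in A_list]
        direction = sum(t) + sum(A.T @ wjk for A, row in zip(A_list, w)
                                 for wjk in row)
        vartheta = np.linalg.norm(direction)
        if vartheta == 0:          # stopping criterion: y_n is a solution
            return y
        # STEP 4: self-adaptive step size (no operator norms needed)
        num = (sum(np.linalg.norm(v)**2 for v in t)
               + sum(np.linalg.norm(v)**2 for row in w for v in row))
        x_prev, x = x, y - (rho * num / vartheta**2) * direction
    return x
```

For instance, with the \(\frac{1}{2}\)-averaged maps \(U(x)=T(x)=x/2\) and \(A=I\) (so \(\xi =\frac{1}{2}\) and \(\rho =1\in (0,2)\)), the iterates approach the unique solution 0.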
Remark 3.2
Note also that (7) and (8) in Algorithm 3.1 are easily implemented in numerical computation, since the value of \(\Vert x_{n-1}-x_{n}\Vert \) is known prior to choosing \(\theta _{n}\). Moreover, \(\epsilon _{n}\ge 0\), \(\lim \nolimits _{n\rightarrow \infty }\epsilon _{n}=0\), \(\varepsilon _{n}\ge 0\), \(\sum _{n=1}^{\infty }\varepsilon _{n}<\infty \), together with (7) and (8) in Algorithm 3.1, imply that
$$\begin{aligned} \lim _{n\rightarrow \infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert =0 \quad \text{ and }\quad \sum _{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert ^{2}<\infty . \end{aligned}$$(9)
Lemma 3.3
If the stopping condition (in STEP 3) of Algorithm 3.1 is satisfied (i.e., \(\vartheta _{n}=0\) for some \(n\in {\mathbb {N}}\)), then \(y_{n}\in \varGamma \).
Proof
Suppose \(\vartheta _{n}=0\). Now, for \(p\in \varGamma \), we have
This implies \(\Vert (I-U_{i})y_{n}\Vert =\Vert (I-T_{j})A_{k}(y_{n})\Vert =0\) for all \((i,j,k)\in \{1,\ldots ,N\}\times \{1,\ldots ,M\}\times \{1,\ldots ,R\}\). Therefore, \(y_{n}\) solves GSSCFP (2), i.e., \(y_{n}\in \varGamma \). \(\square \)
If the stopping criterion of Algorithm 3.1 is not satisfied for any n, i.e., \(\vartheta _{n}\ne 0\) for all \(n\in {\mathbb {N}}\), we have the following convergence theorem of Algorithm 3.1 for approximating a solution of GSSCFP (2).
Theorem 3.4
The sequence \(\{x_{n}\}\) generated by Algorithm 3.1 converges weakly to an element of \(\varGamma \).
Proof
First, we show that \(\lim _{n\rightarrow \infty } \psi _{n}\) exists for all \({\bar{x}}\in \varGamma \) and the sequence \(\{x_{n}\}\) is bounded. Let \({\bar{x}}\in \varGamma \). Now
From Lemma 2.1 (ii), we have
From (10) and (11) and since \(0\le \theta _{n}<1\), we get
Now, using the definition of \(x_{n+1}\) and Lemma 2.1 (i), we have
Notice that
Furthermore, noting that \(U_{i}\) is \(\alpha _{i}\)-averaged and \(T_{j}\) is \(\beta _{j}\)-averaged, we have
It turns out from (13), (14), and (15) that
Let \(\psi _{n}=\Vert x_{n}-{\bar{x}}\Vert ^{2}\) and \(\delta _{n}=2\theta _{n}\Vert x_{n}-x_{n-1}\Vert ^{2}\). From (12) and (16), we get that
That is, \( \psi _{n+1}-\psi _{n}\le \theta _{n}(\psi _{n}-\psi _{n-1})+\delta _{n} \). Using (9), we have \(\sum \limits _{n=1}^{\infty }\delta _{n}<\infty \), and hence, by Lemma 2.8, \(\lim _{n\rightarrow \infty } \psi _{n}\) exists for all \({\bar{x}}\in \varGamma \). This implies that the sequence \(\{x_{n}\}\) is bounded, and so is the sequence \(\{y_{n}\}\). Next, we show that \(\omega _{w}(x_{n})\subset \varGamma \). Now, from (17), we also obtain that
and since \((\psi _{n}-\psi _{n+1}+ \theta _{n}\psi _{n}+\delta _{n}) \rightarrow 0\) as \(n\rightarrow \infty \) and \(\liminf \limits _{n\rightarrow \infty } \rho _{n}(\frac{1}{\xi }-\rho _{n})>0\), we obtain that
Since \(\{\vartheta _{n}\}\) is a bounded sequence, it turns out that
and hence
for all \((i,j,k)\in \{1,\ldots ,N\}\times \{1,\ldots ,M\}\times \{1,\ldots ,R\}\). Now, using the definition of \(x_{n+1}\), we have
Thus, (18) together with (20) gives
Moreover, using the definition of \(y_{n}\) and (9), we have
Let \(p\in \omega _{w}(x_{n})\) and \(\{x_{n_{l}}\}\) be a subsequence of \(\{x_{n}\}\), such that \(x_{n_{l}}\rightharpoonup p\) as \(l\rightarrow \infty \). Using (22) and (23), we have \(x_{n_{l}+1}\rightharpoonup p\) and \(y_{n_{l}}\rightharpoonup p\) as \(l\rightarrow \infty \), and hence, \(A_{k}(y_{n_{l}})\rightharpoonup A_{k}(p)\) as \(l\rightarrow \infty \) for all \(k\in \{1,\ldots ,R\}\). Thus, it follows from (19) and Lemma 2.6 that \(p\in F(U_{i})\) for all \(i\in \{1,\ldots ,N\}\), and \(A_{k}(p)\in F(T_{j})\) for all \((j,k)\in \{1,\ldots ,M\}\times \{1,\ldots ,R\}\). Hence, \(p\in \varGamma \).
The two conditions (i) and (ii) of Lemma 2.7 are satisfied, and hence, \(\{x_{n}\}\) converges weakly to a point in \(\varGamma \). \(\square \)
It should be noted that our iterative method, Algorithm 3.1, also works for approximating a solution of SCFP (3) when U and T are averaged mappings.
Algorithm 3.5
Choose \(x_{0},x_{1}\in H_{1}\). Let \(\theta \in [0,1)\) and \(\{\epsilon _{n}\}\), \(\{\varepsilon _{n}\}\) and \(\{\rho _{n}\}\) be positive real parameter sequences, such that \(\lim \limits _{n\rightarrow \infty }\epsilon _{n}=0\), \(\sum _{n=1}^{\infty }\varepsilon _{n}<\infty \), \(0<\rho _{n}<\frac{1}{\xi }\) and \(\liminf \limits _{n\rightarrow \infty } \rho _{n}(\frac{1}{\xi }-\rho _{n})>0\) where \(\xi =\max \{\alpha ,\beta \}\).
- STEP 1.:
-
Choose \(\theta _{n}\), such that \(0\le \theta _{n}\le \bar{\theta }_{n}=\min \{\bar{\alpha _{n}},\bar{\beta _{n}}\}\) where
$$\begin{aligned} \bar{\alpha }_{n}:= \left\{ \begin{array}{lr} \min \big \{\theta ,\frac{\epsilon _{n}}{\Vert x_{n-1}-x_{n}\Vert }\big \},&{} \text{ if } x_{n-1}\ne x_{n} \\ \theta , &{} \text{ otherwise }, \end{array} \right. \end{aligned}$$and
$$\begin{aligned} \bar{\beta }_{n}:= \left\{ \begin{array}{lr} \min \big \{\theta ,\frac{\varepsilon _{n}}{\Vert x_{n-1}-x_{n}\Vert ^{2}}\big \},&{} \text{ if } x_{n-1}\ne x_{n} \\ \theta , &{} \text{ otherwise }. \end{array} \right. \end{aligned}$$ - STEP 2.:
-
Evaluate \(y_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}).\)
- STEP 3.:
-
Evaluate \(t_{n}=(I-U)y_{n}\), \(w_{n}=(I-T)A(y_{n})\) and \(\vartheta _{n}=\Vert t_{n}+A^{*}(w_{n})\Vert .\) Stopping criterion: If \(\vartheta _{n}=0\), then Stop. Otherwise, go to STEP 4.
- STEP 4.:
-
Evaluate \( x_{n+1}=y_{n}-\eta _{n}(t_{n}+A^{*}(w_{n})), \) where
$$\begin{aligned} \eta _{n}=\frac{\rho _{n}(\Vert t_{n}\Vert ^{2}+\Vert w_{n}\Vert ^{2})}{\vartheta _{n}^{2}}. \end{aligned}$$ - STEP 5.:
-
Set \(n:=n+1\) and go to STEP 1.
Corollary 3.6
If \(U:H_{1}\rightarrow H_{1}\) is an \(\alpha \)-averaged mapping and \(T:H_{2}\rightarrow H_{2}\) is a \(\beta \)-averaged mapping, then the sequence \(\{x_{n}\}\) generated by Algorithm 3.5 converges weakly to a solution of the SCFP (3), provided that \(\{{\bar{x}}\in F(U):A{\bar{x}}\in F(T)\}\) is nonempty.
4 Application
As applications, we can obtain several new algorithms to solve problems that can be converted to the fixed-point problem of an averaged mapping. Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and let \(A_{k}:H_{1}\rightarrow {H_{2}}\) (\(k\in \{1,\ldots ,R\}\)) be bounded linear operators.
4.1 Generalized split system of minimization problem
Let H be a real Hilbert space and let \(f:H\rightarrow {\mathbb {R}}\cup \{+\infty \}\) be a proper, lower semicontinuous, convex function. The proximal operator of the function f with scaling parameter \(\lambda >0\) is the mapping \(\text{ prox}_{\lambda f}:H\rightarrow H\) given by
$$\begin{aligned} \text{ prox}_{\lambda f}(x)=\arg \min _{y\in H}\Big \{f(y)+\frac{1}{2\lambda }\Vert y-x\Vert ^{2}\Big \}. \end{aligned}$$
Proximal operators are firmly nonexpansive (\(\frac{1}{2}\)-averaged mappings), and a point \({\bar{x}}\) minimizes f if and only if \(\text{ prox}_{\lambda f}({\bar{x}})={\bar{x}}\); see [5]. Consider the generalized split system of minimization problem (GSSMP) of finding
$$\begin{aligned} \bar{x}\in \bigcap _{i=1}^{N}\arg \min f_{i} \text{ such that } A_{k}(\bar{x})\in \bigcap _{j=1}^{M}\arg \min g_{j} \text{ for all } k\in \{1,\ldots ,R\}, \end{aligned}$$(24)
where \(f_{i}:H_{1}\rightarrow {\mathbb {R}}\cup \{+\infty \}\) (\(i\in \{1,\ldots ,N\}\)) and \(g_{j}:H_{2}\rightarrow {\mathbb {R}}\cup \{+\infty \}\) (\(j\in \{1,\ldots ,M\}\)) are proper, lower semicontinuous convex functions. Let \(\varOmega \) be the solution set of (24), and assume that \(\varOmega \) is nonempty. If \(A_{k}=A\) for all \(k\in \{1,\ldots ,R\}\), GSSMP (24) is reduced to split system of minimization problem (SSMP) considered in [25, 26].
Taking \(U_{i}=\text{ prox}_{\lambda f_{i}}\) and \(T_{j}=\text{ prox}_{\lambda g_{j}}\) in Algorithm 3.1, we obtain the following iterative method to approximate a solution of GSSMP (24).
Algorithm 4.1
Choose \(x_{0},x_{1}\in H_{1}\). Let \(\theta \in [0,1)\) and \(\{\epsilon _{n}\}\), \(\{\varepsilon _{n}\}\) and \(\{\rho _{n}\}\) be positive real parameter sequences, such that \(\lim \limits _{n\rightarrow \infty }\epsilon _{n}=0\), \(\sum _{n=1}^{\infty }\varepsilon _{n}<\infty \), \(0<\rho _{n}<2\) and \(\liminf \limits _{n\rightarrow \infty } \rho _{n}(2-\rho _{n})>0\).
- STEP 1.:
-
Choose \(\theta _{n}\), such that \(0\le \theta _{n}\le \bar{\theta }_{n}=\min \{\bar{\alpha _{n}},\bar{\beta _{n}}\}\) where
$$\begin{aligned} \bar{\alpha }_{n}:= \left\{ \begin{array}{lr} \min \big \{\theta ,\frac{\epsilon _{n}}{\Vert x_{n-1}-x_{n}\Vert }\big \},&{} \text{ if } x_{n-1}\ne x_{n} \\ \theta , &{} \text{ otherwise }, \end{array} \right. \end{aligned}$$and
$$\begin{aligned} \bar{\beta }_{n}:= \left\{ \begin{array}{lr} \min \big \{\theta ,\frac{\varepsilon _{n}}{\Vert x_{n-1}-x_{n}\Vert ^{2}}\big \},&{} \text{ if } x_{n-1}\ne x_{n} \\ \theta , &{} \text{ otherwise }. \end{array} \right. \end{aligned}$$ - STEP 2.:
-
Evaluate \(y_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}).\)
- STEP 3.:
-
Evaluate \(t_{n}^{(i)}=(I-\text{ prox}_{\lambda f_{i}})y_{n}\), \(w_{n}^{(j,k)}=(I-\text{ prox}_{\lambda g_{j}})A_{k}(y_{n})\) and
$$\begin{aligned} \vartheta _{n}=\Big \Vert \sum \limits _{i=1}^{N}t^{(i)}_{n}+\sum \limits _{k=1}^{R}\sum \limits _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big \Vert . \end{aligned}$$Stopping criterion: If \(\vartheta _{n}=0\), then Stop. Otherwise, go to STEP 4.
- STEP 4.:
-
Evaluate \( x_{n+1}=y_{n}-\eta _{n}\Big (\sum _{i=1}^{N}t^{(i)}_{n}+\sum _{k=1}^{R}\sum _{j=1}^{M}A^{*}_{k}(w^{(j,k)}_{n})\Big ), \) where
$$\begin{aligned} \eta _{n}=\frac{\rho _{n}\Big (\sum \nolimits _{i=1}^{N}\Vert t^{(i)}_{n}\Vert ^{2}+\sum \nolimits _{k=1}^{R}\sum \nolimits _{j=1}^{M}\Vert w^{(j,k)}_{n}\Vert ^{2}\Big )}{\vartheta _{n}^{2}}. \end{aligned}$$ - STEP 5.:
-
Set \(n:=n+1\) and go to STEP 1.
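The proximal operators appearing in Algorithm 4.1 have closed forms for many common choices of \(f_{i}\) and \(g_{j}\); two standard instances are sketched below (the quadratic case is the one used later in Sect. 5; the \(\ell _{1}\) case is an extra illustration of ours).

```python
import numpy as np

def prox_quadratic(y, B, lam=1.0):
    # prox_{lam*f}(y) for f(y) = 0.5 * y^T B y solves (I + lam*B) p = y
    return np.linalg.solve(np.eye(len(y)) + lam * B, y)

def prox_l1(y, lam=1.0):
    # prox of lam*||.||_1 is componentwise soft-thresholding
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```

Both maps are firmly nonexpansive, hence \(\frac{1}{2}\)-averaged, so they are admissible as \(U_{i}\) or \(T_{j}\) in the algorithm.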
Theorem 4.2
The sequence \(\{x_{n}\}\) generated by Algorithm 4.1 converges weakly to a solution of GSSMP (24).
4.2 Generalized multiple-set split feasibility problem
The generalized multiple-set split feasibility problem (GMSSFP) is formulated as a problem of finding a point
$$\begin{aligned} \bar{x}\in \bigcap _{i=1}^{N}C_{i} \text{ such that } A_{k}(\bar{x})\in \bigcap _{j=1}^{M}Q_{j} \text{ for all } k\in \{1,\ldots ,R\}, \end{aligned}$$(25)
where \(C_{i}\) \((i\in \{1,\ldots ,N\})\) and \(Q_{j}\) \((j\in \{1,\ldots ,M\})\) are nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Many methods have been developed to solve GMSSFP (25) for \(A_{k}=A\) for all \(k\in \{1,\ldots ,R\}\); see, for example, in [9, 43] and references therein. The GMSSFP (25) with \(N=M=R=1\) is known as the SFP [10, 14].
The GMSSFP (25) is a special case of GSSMP (24), i.e., take \(f_{i}=\delta _{C_{i}}\) and \(g_{j}=\delta _{Q_{j}}\) (the indicator functions) in GSSMP (24).
4.3 Generalized split system of inclusion problem
For a real Hilbert space H and a maximal monotone set-valued mapping \(T:H\rightarrow 2^{H}\), the resolvent operator \(J_{\lambda }^{T}\) associated with T and \(\lambda >0\) is
$$\begin{aligned} J_{\lambda }^{T}=(I+\lambda T)^{-1}. \end{aligned}$$
The resolvent operator \(J_{\lambda }^{T}\) is single-valued and firmly nonexpansive. Moreover, \(0\in T({\bar{x}})\) if and only if \({\bar{x}}\) is a fixed point of \(J_{\lambda }^{T}\) for all \(\lambda >0\); see [8].
Let \(T_{i}:H_{1}\rightarrow 2^{H_{1}}\) \((i\in \{1,\ldots ,N\})\) and \(U_{j}:H_{2}\rightarrow 2^{H_{2}}\) \((j\in \{1,\ldots ,M\})\) be maximal monotone mappings. The generalized split system of inclusion problem is to find \({\bar{x}}\in H_{1}\), such that
$$\begin{aligned} 0\in T_{i}({\bar{x}})\ \forall i\in \{1,\ldots ,N\} \quad \text{ and }\quad 0\in U_{j}(A_{k}({\bar{x}}))\ \forall (j,k)\in \{1,\ldots ,M\}\times \{1,\ldots ,R\}. \end{aligned}$$(27)
Replacing \(U_{i}\) by \(J_{\lambda }^{T_{i}}\) and \(T_{j}\) by \(J_{\lambda }^{U_{j}}\) in Algorithm 3.1, we obtain a weak convergence theorem for solving (27).
4.4 Generalized split system of equilibrium problems
Let \(h:H\times H\rightarrow {\mathbb {R}}\) be a bifunction. We say that h satisfies Condition CO on H if the following assumptions hold:
-
(A1)
\(h(u,u) = 0\), for all \(u\in H;\)
-
(A2)
h is monotone on H, i.e., \(h(u,v)+h(v,u)\le 0\), for all \(u,v\in H;\)
-
(A3)
for each \(u,v,w\in H\), \(\limsup \limits _{\alpha \downarrow {0}}h(\alpha w+(1-\alpha )u,v)\le h(u,v);\)
-
(A4)
h(u, .) is convex and lower semicontinuous on H for each \(u\in H\).
Lemma 4.3
[20]. If h satisfies Condition CO on H, then, for each \(r>0\) and \(u\in H\), the mapping (called the resolvent of h) given by
$$\begin{aligned} T_{r}^{h}(u)=\Big \{w\in H:h(w,v)+\frac{1}{r}\langle v-w,w-u\rangle \ge 0,\ \forall v\in H\Big \} \end{aligned}$$
satisfies the following conditions:
-
(1)
\(T_{r}^{h}\) is single-valued and firmly nonexpansive;
-
(2)
Fix\((T_{r}^{h})=\{{\bar{x}}\in H:h({\bar{x}},y)\ge 0,\forall y\in H\}\);
-
(3)
\(\{{\bar{x}}\in H:h({\bar{x}},y)\ge 0,\forall y\in H\}\) is closed and convex.
Let \(f_{i}:H_{1}\times H_{1}\rightarrow {\mathbb {R}}\) \((i\in \{1,\ldots ,N\})\) and \(g_{j}:H_{2}\times H_{2}\rightarrow {\mathbb {R}}\) \((j\in \{1,\ldots ,M\})\) be bifunctions. Assume that each bifunction \(f_{i}\) and \(g_{j}\) satisfies Condition CO on \(H_{1}\) and \(H_{2}\), respectively. The generalized split system of equilibrium problem (GSSEP) is to find \({\bar{x}}\in H_{1}\), such that
$$\begin{aligned} f_{i}({\bar{x}},y)\ge 0\ \forall y\in H_{1},\ \forall i\in \{1,\ldots ,N\} \quad \text{ and }\quad g_{j}(A_{k}({\bar{x}}),z)\ge 0\ \forall z\in H_{2},\ \forall (j,k)\in \{1,\ldots ,M\}\times \{1,\ldots ,R\}. \end{aligned}$$(28)
Similarly, for \(r>0\), replacing \(U_{i}\) by \(T^{f_{i}}_{r}\) and \(T_{j}\) by \(T^{g_{j}}_{r}\) in Algorithm 3.1, we obtain a weak convergence theorem for approximating a solution of the consistent GSSEP (28).
5 Numerical result
In this section, we provide two numerical examples to illustrate the behavior and potential applicability of our proposed algorithm (Algorithm 3.1). In the figures, the x-axis represents the number of iterations n (Iter(n)), while the y-axis gives the value of \(error(n)=\Vert x_{n}\Vert \) generated at each iteration n. The convergence behavior of Algorithm 3.1 with different starting points and different parameters is given in the figures and tables, where \(\theta =0.5\) and \(\theta _{n}\) is chosen so that \(0\le \theta _{n}\le \bar{\theta }_{n}=\min \{\bar{\alpha }_{n},\bar{\beta }_{n}\}\) with \(\bar{\alpha }_{n}\) and \(\bar{\beta }_{n}\) defined by (7) and (8). In the tables, we used the stopping criterion \(\frac{\Vert x_{n}-x_{n-1}\Vert }{\Vert x_{1}-x_{2}\Vert }<0.01\), where ‘Iter(n)’ denotes the number of iterations, ‘CPU(s)’ denotes the CPU time in seconds, \(D(n)=\Vert x_{n}-x_{n-1}\Vert \), and \(error(n)=\Vert x_{n}\Vert \).
All codes were written in MATLAB and run on an HP laptop with an Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz 2.70GHz and 4.00GB of RAM.
Example 5.1
In this example, we consider the problem of finding a point
$$\begin{aligned} \bar{x}\in \bigcap _{i=1}^{3}C_{i} \text{ such that } A_{k}(\bar{x})\in \bigcap _{j=1}^{4}\arg \min g_{j} \text{ for all } k\in \{1,2,3\}, \end{aligned}$$(29)
where \(C_{i}\) (\(i\in \{1,2,3\}\)) is a closed convex subset of \({\mathbb {R}}^{p}\), \(g_{j}:{\mathbb {R}}^{q}\rightarrow {\mathbb {R}}\cup \{+\infty \}\) (\(j\in \{1,2,3,4\}\)), and \(A_{k}=\gamma _{k}G^{(k)}_{q\times p}\) (\(k\in \{1,2,3\}\)), given by \(C_{1}=\{x\in {\mathbb {R}}^{p}:\Vert x\Vert \le 1\}\), \(C_{2}=\{x\in {\mathbb {R}}^{p}:\langle x,w_{2}\rangle \le 1\}\), and \(C_{3}=\{x\in {\mathbb {R}}^{p}:\langle x,w_{3}\rangle \le 10\}\) for \(w_{2}= w_{3}=(1,\ldots ,1)\in {\mathbb {R}}^{p}\); each \(g_{j}\) is the quadratic function \(g_{j}(y)=\frac{1}{2}y^{T}B_{j}y\), where \(B_{j}\) is an invertible symmetric positive semidefinite \(q\times q\) matrix; \(G^{(k)}_{q\times p}\) is a \(q\times p\) matrix; and \(\gamma _{k}\in {\mathbb {R}}\).
Note that each \(C_{i}\) is a closed convex subset of \({\mathbb {R}}^{p}\) and each \(g_{j}\) is a proper, lower semicontinuous convex function. Take \(U_{i}:{\mathbb {R}}^{p}\rightarrow {\mathbb {R}}^{p}\) to be the projection mapping \(U_{i}=P_{C_{i}}\), and \(T_{j}:{\mathbb {R}}^{q}\rightarrow {\mathbb {R}}^{q}\) (\(j\in \{1,2,3,4\}\)) to be the proximal operator \(T_{j}=\text{ prox}_{\lambda g_{j}}\) with \(\lambda =1\), and notice that \(T_{j}(y)=\text{ prox}_{\lambda g_{j}}(y)=(I+B_{j})^{-1}(y).\) Moreover, \(U_{i}\) and \(T_{j}\) are \(\frac{1}{2}\)-averaged mappings, \(F(U_{i})=F(P_{C_{i}})=C_{i}\), and \(F(T_{j})=\{{\bar{y}}\in {\mathbb {R}}^{q}:g_{j}({\bar{y}})\le g_{j}(y), \forall y\in {\mathbb {R}}^{q}\}=\arg \min g_{j}.\) Hence, Algorithm 3.1 can solve the problem considered in this example. To apply Algorithm 3.1 to problem (29), we use \(p=q\), each \(B_{j}\) a randomly generated invertible symmetric positive semidefinite \(q\times q\) matrix, \(G^{(k)}_{q\times p}=I_{p\times p}\), where \(I_{p\times p}\) is the \(p\times p\) identity matrix, and \(\gamma _{k}=k\) for \(k\in \{1,2,3\}\). The numerical behavior of this example is reported in Figs. 1 and 2, and in Table 1. In Fig. 1 and Table 1, we use
-
Case 1:
\(\theta _{n}=\bar{\theta }_{n}\), \(\rho _{n}=1\), \(\epsilon _{n}=\varepsilon _{n}=\frac{1}{n^{2}}\),
-
Case 2:
\(\theta =0\), \(\rho _{n}=1\), \(\epsilon _{n}=\varepsilon _{n}=\frac{1}{n^{2}}\).
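The projections \(P_{C_{i}}\) needed in this example have closed forms; a short sketch for the unit ball and a half-space \(\{x:\langle x,w\rangle \le b\}\) (standard formulas, stated here for convenience):

```python
import numpy as np

def proj_ball(x, r=1.0):
    # projection onto {x : ||x|| <= r}
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def proj_halfspace(x, w, b):
    # projection onto {x : <x, w> <= b}
    s = np.dot(w, x) - b
    return x if s <= 0 else x - (s / np.dot(w, w)) * w
```

With \(r=1\), \(w=(1,\ldots ,1)\), and \(b\in \{1,10\}\), these give \(P_{C_{1}}\), \(P_{C_{2}}\), and \(P_{C_{3}}\) above.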
Example 5.2
We consider the following problem given in the \(l_{2}\) space: finding a point
$$\begin{aligned} \bar{x}\in \bigcap _{i=1}^{3}F(U_{i}) \text{ such that } A_{k}(\bar{x})\in \bigcap _{j=1}^{3}F(T_{j}) \text{ for all } k\in \{1,2,3\}, \end{aligned}$$(30)
where \(U_{i}:l_{2}\rightarrow l_{2}\) (\(i\in \{1,2,3\}\)), \(T_{j}:l_{2}\rightarrow l_{2}\) (\(j\in \{1,2,3\}\)), and \(A_{k}:l_{2}\rightarrow l_{2}\) (\(k\in \{1,2,3\}\)) are given by \(U_{i}:x=(x^{(1)},x^{(2)},x^{(3)},\ldots )\mapsto \big (a^{(1)}_{(i)}x^{(1)},a^{(2)}_{(i)}x^{(2)},a^{(3)}_{(i)}x^{(3)},\ldots \big )\), \(T_{j}:x=(x^{(1)},x^{(2)},x^{(3)},\ldots )\mapsto b_{(j)} x= b_{(j)}\big (x^{(1)},x^{(2)},x^{(3)},\ldots \big )\), and \(A_{k}(x)=\gamma _{k}x\), where \(\{a^{(t)}_{(i)}\}^{\infty }_{t=1}\) is a sequence of positive real numbers with \(\max \{a^{(t)}_{(i)}:t\in {\mathbb {N}}\}=a_{(i)}<2\) for each \(i\in \{1,2,3\}\), \(0< b_{(j)}<2\) for each \(j\in \{1,2,3\}\), and \(\gamma _{k}\in {\mathbb {R}}\) for \(k\in \{1,2,3\}\). It is clear that \(\frac{1}{a_{(i)}}>\frac{1}{2}\) and
$$\begin{aligned} \langle x-y,U_{i}(x)-U_{i}(y)\rangle =\sum _{t=1}^{\infty }a^{(t)}_{(i)}\big (x^{(t)}-y^{(t)}\big )^{2}\ge \frac{1}{a_{(i)}}\sum _{t=1}^{\infty }\big (a^{(t)}_{(i)}\big )^{2}\big (x^{(t)}-y^{(t)}\big )^{2}=\frac{1}{a_{(i)}}\Vert U_{i}(x)-U_{i}(y)\Vert ^{2}. \end{aligned}$$
Hence, \(U_{i}\) is \(\frac{1}{a_{(i)}}\)-ism, i.e., \(U_{i}\) is \(\frac{a_{(i)}}{2}\)-averaged. Similarly, \(T_{j}\) is \(\frac{b_{(j)}}{2}\)-averaged. The numerical results of Algorithm 3.1 in solving this example [problem (30)] are reported in Fig. 3 and Table 2, where \(a^{(t)}_{(i)}=\frac{i}{i+t^{i}}\), \(b_{(j)}=\frac{j}{j+1}\), and \(\gamma _{k}=k\), and the parameters in the algorithm are taken as
-
Case 3:
\(\theta _{n}=\bar{\theta }_{n}\), \(\rho _{n}=1.5\), \(\epsilon _{n}=\frac{1}{n^{0.5}}\), \(\varepsilon _{n}=\frac{1}{n^{3}}\),
-
Case 4:
\(\theta _{n}=\bar{\theta }_{n}\), \(\rho _{n}=0.25\), \(\epsilon _{n}=\frac{1}{n^{2}}\), \(\varepsilon _{n}=\frac{1}{n^{4}}\),
-
Case 5:
\(\theta =0\), \(\rho _{n}=0.5\), \(\epsilon _{n}=\frac{1}{n^{2}}\), \(\varepsilon _{n}=\frac{1}{n^{4}}\),
The numerical results illustrated in Figs. 1, 2, and 3 and in Tables 1 and 2 confirm the computational effectiveness of the proposed algorithm. From the figures and tables, we see that Algorithm 3.1 has better performance when \(\theta _{n}\ne 0\), which confirms the efficacy of our algorithm with a nonzero inertial term.
6 Conclusions
In this paper, we introduced a new algorithm for a generalized split inverse problem of averaged mappings. Under suitable conditions imposed on the parameters, we proved the weak convergence of the algorithm. A new adaptive technique is developed, so that the step sizes in the proposed algorithm are selected without prior knowledge of the operator norms. Some applications of our result are also given. Preliminary numerical experiments have been performed to illustrate the convergence of the proposed algorithm.
References
Abubakar, J.; Kumam, P.; Hassan Ibrahim, A.; ur Rehman, H.: Inertial iterative schemes with variable step sizes for variational inequality problem involving pseudomonotone operator. Mathematics 8(4), 609 (2020)
Alvarez, F.; Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9(1–2), 3–11 (2001)
Baillon, J.B.; Bruck, R.E.; Reich, S.: On the asymptotic behavior of nonexpansive mappings and semigroups in Banach spaces. Houston J. Math. 4(3), 1–9 (1978)
Bauschke, H.H.: The approximation of fixed points of compositions of nonexpansive mappings in Hilbert space. J. Math. Anal. Appl. 202(1), 150–159 (1996)
Bauschke, H.H.; Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, vol. 408. Springer, New York (2011)
Beck, A.; Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Bertsekas, D.P.; Gafni, E.M.: Projection methods for variational inequalities with application to the traffic assignment problem. In: Nondifferential and Variational Techniques in Optimization, pp. 139–159. Springer, New York (1982)
Brezis, H.: Operateurs Maximaux Monotones et Semi-groupes de Contractions dans les Espaces de Hilbert. Elsevier, Amsterdam (1973)
Buong, N.: Iterative algorithms for the multiple-sets split feasibility problem in Hilbert spaces. Numer. Algorithms 76(3), 783–798 (2017)
Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Prob. 18(2), 441 (2002)
Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Prob. 20(1), 103–120 (2004)
Cegielski, A.: General method for solving the split common fixed point problem. J. Optim. Theory Appl. 165(2), 385–404 (2015)
Censor, Y.; Chen, W.; Combettes, P.L.; Davidi, R.; Herman, G.T.: On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints. Comput. Optim. Appl. 51(3), 1065–1088 (2012)
Censor, Y.; Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8(2), 221–239 (1994)
Censor, Y.; Gibali, A.; Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59(2), 301–323 (2012)
Censor, Y.; Segal, A.: The split common fixed point problem for directed operators. J. Convex Anal. 16(2), 587–600 (2009)
Chen, C.; Chan, R.H.; Ma, S.; Yang, J.: Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 8(4), 2239–2267 (2015)
Chen, P.; Huang, J.; Zhang, X.: A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Prob. 29(2), 025011 (2013)
Combettes, P.L.: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 53(5–6), 475–504 (2004)
Combettes, P.L.; Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6(1), 117–136 (2005)
Gebrie, A.G.: Dual variable inertial accelerated algorithm for split system of null point equality problems. Numer. Funct. Anal. Optim. 42(7), 1–27 (2021)
Gebrie, A.G.: A novel low-cost method for generalized split inverse problem of finite family of demimetric mappings. Comput. Appl. Math. 40(2), 1–18 (2021)
Gebrie, A.G.: Weak and strong convergence adaptive algorithms for generalized split common fixed point problems. Optimization, pp. 1–26 (2021)
Gebrie, A.G.; Bekele, B.: Viscosity self-adaptive method for generalized split system of variational inclusion problem. Bull. Iran. Math. Soc., pp. 1–21 (2020)
Gebrie, A.G.; Wangkeeree, R.: Proximal method of solving split system of minimization problem. J. Appl. Math. Comput., pp. 1–26 (2019)
Gebrie, A.G.; Wangkeeree, R.: An iterative scheme for solving split system of minimization problems. J. Comput. Anal. Appl. 28(1) (2020)
Gebrie, A.G.; Wangkeeree, R.: Parallel proximal method of solving split system of fixed point set constraint minimization problems. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Math. RACSAM 114(1), 13 (2020)
Han, D.; Lo, H.K.: Solving non-additive traffic assignment problems: a descent method for co-coercive variational inequalities. Eur. J. Oper. Res. 159(3), 529–544 (2004)
Hendrickx, J.M.; Olshevsky, A.: Matrix \(p\)-norms are NP-hard to approximate if \(p\ne 1,2,\infty \). SIAM J. Matrix Anal. Appl. 31(5), 2802–2812 (2010)
Ibrahim, A.H.; Kumam, P.; Kumam, W.: A family of derivative-free conjugate gradient methods for constrained nonlinear equations and image restoration. IEEE Access 8, 162714–162729 (2020)
López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K.: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Prob. 28(8), 085004 (2012)
Lorenz, D.A.; Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51(2), 311–325 (2015)
Maingé, P.E.: Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 219(1), 223–236 (2008)
Micchelli, C.A.; Shen, L.; Xu, Y.: Proximity algorithms for image models: denoising. Inverse Prob. 27(4), 045009 (2011)
Moudafi, A.: The split common fixed-point problem for demicontractive mappings. Inverse Prob. 26(5), 055007 (2010)
Nesterov, Y.E.: A method for solving the convex programming problem with convergence rate \(O(1/k^{2})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)
Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73(4), 591–597 (1967)
Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)
Qin, L.J.; Wang, G.: Multiple-set split feasibility problems for a finite family of demicontractive mappings in Hilbert spaces. Math. Inequal. Appl. 16(4), 1151–1157 (2013)
Qin, L.J.; Wang, L.; Chang, S.: Multiple-set split feasibility problem for a finite family of asymptotically quasi-nonexpansive mappings. Pan. Am. Math. J. 22(1), 37–45 (2012)
Suparatulatorn, R.; Suantai, S.; Phudolsitthiphat, N.: Reckoning solution of split common fixed point problems by using inertial self-adaptive algorithms. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Math. RACSAM 113(4), 3101–3114 (2019)
Taddele, G.H.; Kumam, P.; Gebrie, A.G.: An inertial extrapolation method for multiple-set split feasibility problem. J. Inequal. Appl. 2020(1), 1–22 (2020)
Taddele, G.H.; Kumam, P.; Gebrie, A.G.; Sitthithakerngkiet, K.: Half-space relaxation projection method for solving multiple-set split feasibility problem. Math. Comput. Appl. 25(3), 47 (2020)
Tang, Y.C.; Liu, L.W.: Several iterative algorithms for solving the split common fixed point problem of directed operators with applications. Optimization 65(1), 53–65 (2016)
Wang, F.: A new method for split common fixed-point problem without priori knowledge of operator norms. J. Fixed Point Theory Appl. 19(4), 2427–2436 (2017)
Xu, H.K.: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Prob. 26(10), 105018 (2010)
Yao, Y.; Liou, Y.C.; Postolache, M.: Self-adaptive algorithms for the split problem of the demicontractive operators. Optimization 67(9), 1309–1319 (2018)
Zhao, J.; Hou, D.: A self-adaptive iterative algorithm for the split common fixed point problems. Numer. Algorithms 82(3), 1047–1063 (2019)
Acknowledgements
We would like to thank the anonymous reviewers and editor of the journal for their invaluable comments on an earlier draft of this paper.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Gebrie, A.G., Bedane, D.S. A new self-adaptive accelerated method for generalized split system of common fixed-point problem of averaged mappings. Arab. J. Math. 11, 261–275 (2022). https://doi.org/10.1007/s40065-021-00354-w