Abstract
In this paper, we introduce an inertial Halpern-type iterative algorithm for approximating a zero of the sum of two monotone operators in the setting of real Banach spaces that are 2-uniformly convex and uniformly smooth. Strong convergence of the sequence generated by our proposed algorithm is established by means of some new geometric inequalities proved in this paper that are of independent interest. Furthermore, numerical simulations in image restoration and compressed sensing problems are also presented. Finally, the performance of the proposed method is compared with that of some existing methods in the literature.
1 Introduction
Let H be a real Hilbert space. Let \(A: H \to H\) and \(B: H \to 2^{H}\) be single- and multi-valued monotone operators, respectively. Consider the problem:
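Written out, with the operators A and B above, the monotone inclusion problem (1) reads:

$$ \text{find } u\in H \text{ such that } 0\in Au+Bu. $$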
The vast applicability of the monotone inclusion problem (1) in solving problems such as convex minimization, variational inequality, image restoration, and signal processing has made it a problem of contemporary interest (see, e.g., [1–5]). Many mathematical algorithms have been developed for solving problem (1). Early methods include, for example, the forward-backward algorithm (FBA) of Passty [6], the Peaceman-Rachford algorithm [7], and the Douglas-Rachford algorithm [8] to mention but a few. However, these methods do not guarantee strong convergence to a solution of problem (1). The FBA is an iterative procedure that starts at a point \(x_{1}\in H\), and generates a sequence \(\{x_{n}\}\subset H\) by:
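In its classical form, the FBA update reads:

$$ x_{n+1}= (I+\lambda _{n}B )^{-1} (x_{n}-\lambda _{n}Ax_{n} ),\quad n\geq 1, $$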
where \(\{\lambda _{n}\}\) is a sequence of positive real numbers. Due to the nonexpansive nature of the resolvent operator \((I+\lambda _{n}B )^{-1}\) appearing in the backward step in the FBA, the algorithm has been studied extensively by many authors. Over the years, modified versions of the FBA using the idea of Halpern-type or viscosity-type approximation technique have been proposed to obtain strong convergence. For example, in 2012, Takahashi et al. [9] introduced and studied a Halpern-type modification of the FBA in real Hilbert spaces and proved strong convergence of the sequence generated by their algorithm to a solution of (1). Also, in 2020, using the idea of viscosity approximation, Kitkuan et al. [10] introduced and studied a viscosity-type algorithm for approximating solutions of problem (1) and proved a strong convergence theorem in the setting of real Hilbert spaces.
It is well known that iterative methods involving monotone operators tend to converge slowly. In the literature, the study of convergence properties of iterative algorithms has become an area of contemporary interest (see, e.g., [11–17]). One method that is now studied extensively is the inertial extrapolation technique, which dates back to the early result of Polyak [18] in the context of convex minimization. An algorithm of inertial type is an iterative procedure in which each subsequent term is obtained using the preceding two terms. Many authors have shown numerically that adding an inertial extrapolation term to an existing algorithm improves its performance (see, e.g., [19–24]). The inertial technique has successfully been employed as an acceleration process for the FBA and its modifications. For example, in [25], Lorenz and Pock introduced and studied an inertial version of the FBA in the setting of real Hilbert spaces and proved weak convergence. Later, Cholamjiak et al. [26] introduced and studied an inertial Halpern-type FBA and proved strong convergence in the setting of real Hilbert spaces. Recently, in 2021, Adamu et al. [27] used the idea of the viscosity approximation technique to introduce a three-step inertial modified viscosity-type FBA in the setting of a real Hilbert space and proved strong convergence.
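As a toy illustration of the inertial idea (an illustrative sketch with assumed data, not the algorithm proposed in this paper), the following compares plain gradient descent with a Polyak-style inertial variant on a simple quadratic in \(\mathbb{R}^{2}\):

```python
import numpy as np

# Toy comparison (illustrative assumptions throughout, not the paper's
# algorithm): minimize 0.5*x'Qx - b'x over R^2 by gradient descent, with
# and without a Polyak-style inertial extrapolation term.
Q = np.array([[10.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(Q, b)                   # unique minimizer

def grad(x):
    return Q @ x - b

def run(theta, steps, lr=0.1):
    """Gradient method with inertial weight theta (theta = 0: no inertia)."""
    x_prev = x = np.zeros(2)
    for _ in range(steps):
        y = x + theta * (x - x_prev)             # inertial extrapolation term
        x_prev, x = x, y - lr * grad(y)
    return np.linalg.norm(x - x_star)

err_plain = run(theta=0.0, steps=60)
err_inertial = run(theta=0.5, steps=60)
print(err_inertial < err_plain)                  # inertia wins on this problem
```

On this ill-conditioned quadratic the inertial iterates contract geometrically with a smaller rate than the plain ones, which mirrors the numerical behavior reported in the inertial literature.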
It is worth mentioning that all the results mentioned above owe their validity to the setting of real Hilbert spaces. However, the following indicates that some real-life problems do not reside only in Hilbert spaces. Consider the 3D Navier-Stokes equation:

$$ \textstyle\begin{cases} \frac{\partial u}{\partial t}+(u\cdot \nabla )u-\nu \Delta u+\nabla \rho =0, \\ \operatorname{div} u=0, \end{cases} $$

where u denotes the velocity of the fluid, ρ denotes the scalar pressure, and \(\nu >0\) denotes the kinematic viscosity. In 1934, Leray [28] proved the existence of a weak solution to the corresponding Cauchy problem with initial data from \(L_{2}(\mathbb{R}^{3})\). Unfortunately, the well-posedness of the problem remains a major open problem to date. On the other hand, Miyakawa [29] showed that if the initial data is taken from \(L_{p}(\mathbb{R}^{3})\), \(3< p<\infty \), there exists a unique solution to the 3D Navier-Stokes equation (which is known to exist locally in time). In order to approximate the solution guaranteed by Miyakawa, algorithms in \(L_{p}(\mathbb{R}^{3})\) may be needed. This, perhaps, can serve as a motivation for introducing and studying iterative algorithms in real Banach spaces more general than real Hilbert spaces.
The concept of monotonicity defined on Hilbert spaces can be extended to Banach spaces in either of the following senses. When the operator is a self-map of a Banach space E and satisfies the analogous property, it is called accretive. However, if the operator maps E to its dual space \(E^{*}\) and satisfies the same property as in the setting of real Hilbert spaces, the name monotone is retained. In the literature, extensions of the inclusion problem (1) involving accretive operators have been studied by many authors (see, e.g., [30–35]). However, there are only a few results in the monotone sense. It is worth mentioning that a motivation for the study of monotone operators on real Banach spaces stems mainly from their firm connection with optimization problems (see, e.g., [36]). Early results concerning extensions of the inclusion problem (1) involving monotone operators on real Banach spaces include, for example, the result of Shehu [37]. He proved the following theorem:
Theorem 1.1
Let E be a uniformly smooth and 2-uniformly convex real Banach space. Let \(A: E \to E^{*}\) be a monotone and L-Lipschitz continuous mapping and \(B: E \to 2^{E^{*}}\) be a maximal monotone mapping. Suppose the solution set \((A+B)^{-1}0\) is nonempty and the normalized duality mapping on E is weakly sequentially continuous. Let \(\{x_{n}\}\) be a sequence defined by:
where \(\{\lambda _{n}\}\) satisfies the following condition: \(0< a<\lambda _{n}<\frac{1}{\sqrt{2}\mu \kappa L}\), μ is the 2-uniform convexity constant of E; κ is the 2-uniform smoothness constant of \(E^{*}\); and L is the Lipschitz constant of A. Then, the sequence \(\{x_{n}\}\) generated by (2) converges weakly to a solution of problem (1).
Also, in the same year, Kimura and Nakajo [38] proved the following strong convergence theorem:
Theorem 1.2
Let C be a nonempty closed and convex subset of a uniformly smooth and 2-uniformly convex real Banach space E. Let \(A: C \to E^{*}\) be an α-inverse strongly monotone mapping and \(B: E \to 2^{E^{*}}\) be a maximal monotone mapping. Suppose the solution set \((A+B)^{-1}0 \) is nonempty. Let \(u\in E\) and \(\{x_{n}\}\subset C\) be a sequence defined by:
where Π is the generalized projection, \(\{\lambda _{n}\}\subset (0,\infty )\) and \(\{\gamma _{n}\}\subset [0,1]\) are such that \(\lim_{n\to \infty} \gamma _{n}=0\) and \(\sum_{n=1}^{\infty}\gamma _{n}=\infty \). Then, the sequence \(\{x_{n}\}\) generated by (3) converges strongly to a solution of problem (1).
Our main interest in this work is to introduce a new inertial Halpern-type FBA involving monotone operators in the setting of 2-uniformly convex and uniformly smooth real Banach spaces and to prove strong convergence of the sequence generated by our algorithm to a solution of the inclusion problem (1). Strong convergence is achieved by means of some new geometric inequalities established here, which are of independent interest. Furthermore, some interesting numerical implementations of our proposed method in solving image restoration and compressed sensing problems are presented.
2 Preliminaries
Let E be a real normed space and let \(J: E \to 2^{E^{*}} \) be the normalized duality map (see, e.g., [39] for the explicit definition of J and its properties on certain Banach spaces). The functional \(\phi :E\times E \to \mathbb{R}\) defined on a smooth real Banach space by

$$ \phi (x,y)= \Vert x \Vert ^{2}-2\langle x, Jy\rangle + \Vert y \Vert ^{2}, \quad \forall x,y\in E, $$

will be needed in our estimations in the sequel. It was introduced by Alber [40] and has been studied by many authors (see, e.g., [41–43]). For any \(x,y,z\in E \) and \(\tau \in [0,1]\), using the definition of ϕ, one can easily deduce the following (see, e.g., Nilsrakoo and Saejung [41]):
-
P1:
\((\Vert x \Vert -\Vert y\Vert )^{2} \leq \phi (x,y)\leq (\Vert x\|+ \Vert y\Vert )^{2}\),
-
P2:
\(\phi \bigl(x, J^{-1}(\tau Jy+(1-\tau )Jz)\bigr)\leq \tau \phi (x,y) + (1-\tau ) \phi (x,z) \),
-
P3:
\(\phi (x,y)=\phi (x,z)+\phi (z,y)+2\langle z-x,Jy-Jz\rangle \),
where J and \(J^{-1}\) are the duality maps on E and \(E^{*}\), respectively.
We shall use interchangeably ϕ and the functional \(V: E\times E^{*} \to \mathbb{R}\) defined by

$$ V \bigl(x,y^{*} \bigr)= \Vert x \Vert ^{2}-2 \bigl\langle x,y^{*} \bigr\rangle + \bigl\Vert y^{*} \bigr\Vert ^{2}, $$

since \(V(x,y^{*})=\phi (x,J^{-1}y^{*})\).
Definition 2.1
Let E be a reflexive, strictly convex, and smooth real Banach space and let \(B: E \to 2^{E^{*}}\) be a maximal monotone operator. Then for any \(\lambda >0\) and \(u\in E\), there exists a unique element \(u_{\lambda}\in E\) such that \(Ju \in (Ju_{\lambda}+\lambda Bu_{\lambda})\). The element \(u_{\lambda}\) is called the resolvent of B, and it is denoted by \(J_{\lambda}^{B}u\). Alternatively, \(J_{\lambda}^{B}= (J+\lambda B)^{-1}J\), for all \(\lambda >0\). It is easy to verify that \(B^{-1}0=F(J_{\lambda}^{B})\), \(\forall \lambda >0\), where \(F(J_{\lambda}^{B})\) denotes the set of fixed points of \(J_{\lambda}^{B}\).
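In the Hilbert-space case, where J is the identity, the resolvent of Definition 2.1 reduces to \((I+\lambda B)^{-1}\); for \(B=\partial \Vert \cdot \Vert _{1}\) it has the well-known closed form of componentwise soft-thresholding. The following sketch (an illustration under these Hilbert-space assumptions, not part of the paper's Banach-space setting) checks the fixed-point property \(B^{-1}0=F(J_{\lambda}^{B})\) numerically:

```python
import numpy as np

# Hedged illustration in the Hilbert space R^n, where J is the identity and
# the resolvent of Definition 2.1 reduces to (I + lam*B)^{-1}. For
# B = subdifferential of the l1-norm, this resolvent is soft-thresholding.
def resolvent_l1(u, lam):
    """Evaluate (I + lam * d||.||_1)^{-1} u by componentwise soft-thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.array([3.0, -0.2, 0.5])
print(resolvent_l1(u, 1.0))      # components with |u_i| <= 1.0 collapse to 0

# B^{-1}0 = F(J_lam^B): for B = d||.||_1 the only zero is x = 0, and x = 0
# is indeed a fixed point of the resolvent for every lam > 0.
print(np.allclose(resolvent_l1(np.zeros(3), 0.7), np.zeros(3)))
```

The nonexpansiveness of this map is what makes the backward step of the FBA well behaved.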
Lemma 2.2
([39])
Let C be a nonempty closed and convex subset of a smooth, strictly convex, and reflexive real Banach space E and let \(\Pi _{C} : E \to C\) be the generalized projection. For any \(x\in E\), \(\tilde{x} =\Pi _{C}x\) if and only if \(\langle \tilde{x}-y, Jx-J\tilde{x} \rangle \geq 0\) for all \(y\in C\).
Lemma 2.3
([40])
Let E be a reflexive, strictly convex, and smooth Banach space with \(E^{*}\) as its dual. Then,

$$ V \bigl(u,u^{*} \bigr)+2 \bigl\langle J^{-1}u^{*}-u, v^{*} \bigr\rangle \leq V \bigl(u,u^{*}+v^{*} \bigr) $$

for all \(u\in E\) and \(u^{*},v^{*}\in E^{*}\).
Lemma 2.4
([44])
Let E be a 2-uniformly smooth real Banach space. Then, there exists a constant \(\gamma >0\) such that \(\forall x,y\in E\)
In a real Hilbert space, \(\gamma =1\).
Lemma 2.5
([45])
Let E be a uniformly convex and smooth real Banach space, and let \(\{u_{n}\}\) and \(\{v_{n}\}\) be two sequences in E. If either \(\{u_{n}\}\) or \(\{v_{n}\}\) is bounded and \(\phi (u_{n},v_{n} )\to 0 \), then \(\Vert u_{n}-v_{n}\Vert \to 0 \).
Lemma 2.6
([41])
Let E be a uniformly smooth Banach space and \(r > 0\). Then, there exists a continuous, strictly increasing, and convex function \(g : [0, 2r] \rightarrow [0, 1)\) such that \(g(0) = 0\) and

$$ \phi \bigl(u, J^{-1} \bigl(\beta Jx+(1-\beta )Jy \bigr) \bigr)\leq \beta \phi (u,x)+(1-\beta )\phi (u,y)-\beta (1-\beta )g \bigl( \Vert Jx-Jy \Vert \bigr) $$

for all \(\beta \in [0, 1]\), \(u \in E\), and \(x, y \in B_{r}\).
Lemma 2.7
([46])
Let \(\{ a_{n} \}\) be a sequence of nonnegative real numbers satisfying the condition

$$ a_{n+1}\leq (1-\alpha _{n})a_{n}+\alpha _{n}\beta _{n}+c_{n},\quad n\geq 1, $$

where \(\{ \alpha _{n} \}\), \(\{ \beta _{n} \}\), and \(\{c_{n}\}\) are sequences of real numbers such that \(\{\alpha _{n}\}\subset [0,1]\) with \(\sum_{n=1}^{\infty}\alpha _{n}=\infty \), \(\limsup_{n\to \infty}\beta _{n}\leq 0\), and \(c_{n}\geq 0\) with \(\sum_{n=1}^{\infty}c_{n}<\infty \). Then, \(\lim_{n \to \infty } a_{n}=0 \).
Lemma 2.8
([47])
Let \(\{\Gamma _{n}\}\) be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence \(\{\Gamma _{n_{j}}\}_{j\geq 0}\) of \(\{\Gamma _{n}\}\) which satisfies \(\Gamma _{n_{j}}<\Gamma _{n_{j}+1}\) for all \(j\geq 0\). Also, consider the sequence of integers \(\{ \tau (n)\}_{n\geq n_{0}}\) defined by

$$ \tau (n)=\max \{ k\leq n : \Gamma _{k}<\Gamma _{k+1} \} . $$

Then, \(\{\tau (n)\}_{n\geq n_{0}}\) is a nondecreasing sequence verifying \(\lim_{n\to \infty}\tau (n)=\infty \) and, for all \(n \geq n_{0}\), it holds that \(\Gamma _{\tau (n)}\leq \Gamma _{\tau (n)+1}\) and

$$ \Gamma _{n}\leq \Gamma _{\tau (n)+1}. $$
Lemma 2.9
([48])
Let \(\{\Gamma _{n}\}\), \(\{\delta _{n}\}\), and \(\{\alpha _{n}\}\) be sequences in \([0,\infty ) \) such that

$$ \Gamma _{n+1}\leq \Gamma _{n}+\alpha _{n}(\Gamma _{n}-\Gamma _{n-1})+\delta _{n} $$

for all \(n \geq 1\), \(\sum_{n=1}^{\infty} \delta _{n} < +\infty \), and there exists a real number α with \(0 \leq \alpha _{n} \leq \alpha <1\) for all \(n \in \mathbb{N}\). Then, the following hold:
-
(i)
\(\sum_{n \geq 1}[\Gamma _{n} - \Gamma _{n-1}]_{+} < + \infty \), where \([t]_{+}=\max \{t,0\} \);
-
(ii)
there exists \(\Gamma ^{*} \in [0, \infty )\) such that \(\lim_{n \rightarrow \infty} \Gamma _{n}= \Gamma ^{*}\).
Lemma 2.10
([38])
Let E be a 2-uniformly convex and uniformly smooth real Banach space with dual space \(E^{*}\). Let \(A: E \to E^{*}\) be an α-inverse strongly monotone mapping and \(B: E \to 2^{E^{*}}\) be a maximal monotone mapping. Let \(T_{\lambda}x= J_{\lambda}^{B}J^{-1}(Jx-\lambda Ax)\), for all \(\lambda >0\) and \(x\in E\). Then, the following hold:
-
(i)
\(F(T_{\lambda})=(A+B)^{-1}0\) and \((A+B)^{-1}0\) is closed and convex.
-
(ii)
\(\phi (u, T_{\lambda}x) \leq \phi (u,x)-(\gamma -\lambda \beta ) \|x-T_{\lambda}x\|^{2}-\lambda (2\alpha -\frac{1}{\beta} )\|Ax-Au\|^{2}\),
for all \(\beta >0\), where γ is the constant appearing in Lemma 2.4.
Remark 1
Observe that given \(\alpha >0\), there exists \(\lambda _{0}>0\) such that \(\frac{\gamma}{\lambda _{0}}>\frac{1}{2\alpha}\). Thus, one can choose \(\beta >0\) such that \(\frac{1}{2\alpha} <\beta < \frac{\gamma}{\lambda _{0}}\). Hence, from (ii) we have
3 Main result
The following lemmas will play a crucial role in the proof of our main theorem.
Lemma 3.1
Let E be a uniformly smooth real Banach space, and let \(u,x,y,z \in E\), and let \(a,b,c\in (0,1)\) with \(a+b+c=1\). Then
Proof
Using P2, we estimate as follows:
establishing the lemma. □
Lemma 3.2
Let E be a uniformly smooth Banach space and \(r > 0\). Then, there exists a continuous, strictly increasing, and convex function \(g : [0, 2r] \rightarrow [0, 1)\) such that \(g(0) = 0\) and
for all \(a,b,c\in (0,1)\) with \(a+b+c=1\), \(u \in E\) and \(x, y,z \in B_{r}\).
Proof
Observe that for any \(x,y\in B_{r}\) and \(a,b,c\in (0,1)\) with \(a+b+c=1\), \((\frac{a}{1-c}x+\frac{b}{1-c}y )\in B_{r}\). Using Lemma 2.6, we estimate as follows:
establishing the lemma. □
Theorem 3.3
Let E be a 2-uniformly convex and uniformly smooth real Banach space with dual space \(E^{*}\). Let \(A: E \to E^{*}\) be an α-inverse strongly monotone mapping and \(B: E \to 2^{E^{*}}\) be a maximal monotone mapping. Assume that the solution set \(\Omega = (A+B)^{-1}0\) is nonempty. Given \(w_{1}, v \in E\), let \(\{w_{n}\}\) be a sequence in E defined by:
where \(0<\mu _{n}\leq \bar{\mu _{n}}\) and
\(\mu \in (0,1)\) and \(\{\vartheta _{n}\}\subset (0,1)\) such that \(\sum_{n=1}^{\infty}\vartheta _{n}<\infty \), \(0<\lambda _{n}< 2\alpha \gamma \), \(\{a_{n}\}, \{b_{n}\}, \{c_{n}\}\subset (0,1)\) with \(a_{n}+b_{n}+c_{n}=1\) and \(\lim_{n\to \infty} a_{n}=0\). Then, \(\{w_{n}\}\) converges strongly to \(w\in \Omega \).
Proof
First, we show that the sequence \(\{w_{n}\}\) is bounded. To prove this, we start by estimating the inertial term \(y_{n}\) using property P3.
Also, by Lemma 2.4, one can estimate \(y_{n}\) as follows:
Putting together equation (8) and inequality (10), we get
From (9), this implies that
Having obtained this estimate of \(\phi (w,y_{n})\), we can now use it to prove the boundedness of \(\{w_{n}\}\). By Lemmas 3.1 and 2.10, and inequality (11), we get
Suppose the maximum is \(\phi (w,u)\). Then the conclusion follows trivially. Otherwise, there exists an \(n_{0}\in \mathbb{N}\) such that for all \(n\geq n_{0}\),
By Lemma 2.9, this implies that \(\{\phi (w,w_{n})\}\) is convergent. Moreover, by property P1, we deduce that \(\{w_{n}\}\) is bounded. Thus, \(\{w_{n}\}\), \(\{y_{n}\}\), \(\{z_{n}\}\) and \(\{u_{n}\}\) are bounded.
Having established the boundedness of \(\{w_{n}\}\), the next task is to prove that the sequence \(\{w_{n}\}\) converges to the point \(w=\Pi _{\Omega}v\). Now, using Lemmas 3.2 and 2.10,
Setting \(d_{n}=c_{n}(\gamma -\lambda _{n}\beta )\|y_{n}-z_{n}\|\), \(e_{n}=c_{n}\lambda _{n} (2\alpha -\frac{1}{\beta} )\|Ay_{n}-Aw \|\) and \(l_{n}=b_{n}c_{n}g(\|Jy_{n}-Jz_{n}\|)\), we deduce from inequality (13) that
To complete the proof, it is important to consider the following cases:
Case 1. Assume that for some \(n_{0}\in \mathbb{N}\), the following inequality holds:
Then, by the boundedness of \(\{w_{n}\}\), this implies that \(\lim_{n\to \infty} \phi (w,w_{n})\) exists. Thus, from inequality (14), we obtain that
Furthermore, we deduce that \(\lim_{n\to \infty}\|Jy_{n}-Jz_{n}\|=0\). Also,
Hence, it is not difficult to establish that
In addition, observe that
As usual, the next thing to show now is that the set of all weak subsequential limits of \(\{w_{n}\}\) is contained in \((A+B)^{-1}0\), which is a standard proof (see, e.g., [38] for this proof).
Now we have all the tools to prove that \(\{w_{n}\}\) converges strongly to \(w=\Pi _{\Omega}v\). Let \(w^{*}\) be a weak limit of \(\{w_{n}\}\). Then, there exists a subsequence \(\{w_{n_{k}}\}\subset \{w_{n}\}\) such that
Since \(w=\Pi _{\Omega}v\) and \(w^{*}\in \Omega \), this implies that
Thus, by (15), we deduce that
Finally, we conclude Case 1 using Lemma 2.3 and what we have established so far.
By Lemma 2.7, we deduce from (17) that \(\lim_{n\to \infty}\phi (w,w_{n})=0\), which implies that \(\lim_{n\to \infty} w_{n}=w\) as a consequence of Lemma 2.5.
Case 2. If the hypothesis of Case 1 fails, since every sequence in \(\mathbb{R}\) has a monotone subsequence, one can construct a subsequence \(\{w_{m_{j}}\}\subset \{w_{n}\}\) that will satisfy
By Lemma 2.8, we have that
From inequality (14), using this index \(\{m_{k}\}\subset \mathbb{N}\), we have
Following a similar argument as in Case 1, one can establish the following
Furthermore, from (16), we have
By Lemma 2.7, we deduce from (18) that \(\lim_{k\to \infty} \phi (w,w_{m_{k}})=0\). Thus,
Therefore, \(\limsup_{n\to \infty} \phi (w,w_{n})=0\), and so, by Lemma 2.5, \(\lim_{n\to \infty} w_{n}=w\). □
Remark 2
The following results can be deduced from Theorem 3.3.
-
In Theorem 3.3, setting \(\mu _{n}\equiv 0\), one gets the unaccelerated version of our proposed algorithm (7).
-
One can also get an alternated inertial version of our proposed algorithm (7) by modifying the inertial term in the following way:
$$ \hat{y_{n}}:= \textstyle\begin{cases} w_{n} & \text{if } n \text{ is even}; \\ J^{-1} ( Jw_{n}+\mu _{n}(Jw_{n}-Jw_{n-1}) )& \text{if } n \text{ is odd}; \end{cases} $$ and then replace \(y_{n}\) by \(\hat{y_{n}}\) in algorithm (7) to get the following algorithm:
$$ \textstyle\begin{cases} z_{n}= J_{\lambda _{n}}^{B}J^{-1} (J\hat{y_{n}}-\lambda _{n} A \hat{y_{n}} ), & \\ w_{n+1}= J^{-1} ( a_{n}Jv+b_{n}J\hat{y_{n}}+c_{n}Jz_{n} ), \end{cases} $$ (19), which is the so-called alternated inertial algorithm. This simple modification was first considered for the case of the proximal point algorithm by Mu and Peng [49]. For the motivation and relevance of the alternated inertial algorithm, interested readers may see [49].
4 Applications and numerical illustrations
4.1 Application to convex minimization problem
Consider the structured nonsmooth convex minimization problem:
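In symbols, the problem is:

$$ \min_{x\in E} \bigl\{ f(x)+g(x) \bigr\} , $$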
where E is a real Banach space, \(f: E \to \mathbb{R}\cup \{+\infty \}\) is proper, convex and lower semi-continuous (lsc) and \(g: E \to \mathbb{R}\) is smooth and convex with gradient ∇g, which is L-Lipschitz continuous. As we shall see in Sects. 4.2 and 4.3, problem (20) is suitable for modeling problems coming from image deblurring and denoising, and compressed sensing.
Observe that a solution of problem (20) is equivalent to a solution of the following inclusion problem:
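By Fermat's rule, written in terms of the subdifferential ∂f and the gradient ∇g, the inclusion is: find \(x\in E\) such that

$$ 0\in \partial f(x)+\nabla g(x). $$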
Since ∂f is maximal monotone, imposing the condition that ∇g is α-inverse strongly monotone allows the FBA and its modifications to be used to approximate solutions of (21), which are minimizers of (20). Just as in the case of arbitrary monotone operators, acceleration has been an active topic in nonsmooth convex minimization. In this context, the inertial extrapolation technique of Polyak [18] has been employed as an acceleration process. A particular case of the inertial FBA introduced by Lorenz and Pock [25] that captured the attention of many authors is the fast iterative shrinkage-thresholding algorithm (FISTA) developed by Beck and Teboulle [50]. The algorithm is defined by:

$$ \textstyle\begin{cases} t_{n}= \frac{1+\sqrt{1+4t_{n-1}^{2}}}{2}, \\ y_{n}= x_{n}+ (\frac{t_{n-1}-1}{t_{n}} ) (x_{n}-x_{n-1} ), \\ x_{n+1}= (I+\lambda \partial f )^{-1} (y_{n}-\lambda \nabla g (y_{n} ) ), \end{cases} $$ (22) where \(t_{0}=1\), \(\lambda =\frac{1}{L}\), \(x_{0}=x_{1}\in H\), and f and g are as defined in problem (20) in the setting of real Hilbert spaces. Beck and Teboulle [50] proved an \(O (\frac{1}{n^{2}} )\) rate of convergence of the function values of the sequence generated by (22) toward the minimum value of problem (20) in real Hilbert spaces.
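To make the scheme concrete, here is a minimal numerical sketch of FISTA applied to a lasso instance \(f(x)=\mu \Vert x\Vert _{1}\), \(g(x)=\frac{1}{2}\Vert Ax-b\Vert ^{2}\); the data (A, b), the weight μ, and the iteration count are illustrative assumptions:

```python
import numpy as np

# Illustrative lasso instance: f(x) = mu*||x||_1, g(x) = 0.5*||Ax - b||^2.
# The data (A, b), mu, and the iteration count are assumptions for the demo.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 60]] = [1.5, -2.0, 1.0]           # sparse ground truth
b = A @ x_true
mu = 0.1
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of grad g
lam = 1.0 / L                                    # FISTA step size 1/L

def soft(u, t):
    """Proximal map of t*||.||_1 (soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x_prev = x = np.zeros(80)
t = 1.0                                          # t_0 = 1, x_0 = x_1
for _ in range(500):
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x + ((t - 1.0) / t_next) * (x - x_prev)  # inertial extrapolation step
    x_prev, x = x, soft(y - lam * (A.T @ (A @ y - b)), lam * mu)
    t = t_next

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + mu * np.abs(x).sum()
print(obj)                                       # small objective value
```

The backward step is the resolvent of \(\lambda \partial f\), which for the \(l_{1}\)-norm is the soft-thresholding map used above.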
Remark 3
Modifications of the sequence \(\{t_{n}\}\) in FISTA that take care of the oscillatory behavior of the scheme abound in the literature; interested readers may see, for example, [51].
Setting \(A=\nabla g\) and \(B=\partial f\) in our proposed algorithm (7), one gets an algorithm for solving problem (20).
4.2 Application to image restoration problems
Images are produced to record or display useful information. Due to imperfections that may occur in the capturing process, the recorded images may invariably represent a degraded version of the original scene. The aim of image restoration techniques is to restore the original image from a degraded observation of it. The convex optimization problem in image restoration is modeled as
where F is a convex differentiable functional on a real Hilbert space H. Since the solution may not be unique for any degraded image, problem (23) inherits ill-posedness. To restore well-posedness, regularization techniques are employed. That is, a stable solution can be obtained by recasting problem (23) as
where \(\lambda >0\) is a regularization parameter and G is a regularization function that may be smooth or nonsmooth. In the literature, the \(l_{1}\)-regularization is usually used for image denoising and deblurring problems. The model is given by:
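In terms of the quantities described next, the observation model and its \(l_{1}\)-regularized least-squares formulation take the standard form:

$$ b=Lx+y, \qquad \min_{x} \biggl\{ \frac{1}{2} \Vert Lx-b \Vert ^{2}+\lambda \Vert x \Vert _{1} \biggr\} , $$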
where b is an observed image, x is an unknown image, y is noise, and L is a linear operator that depends on the image recovery problem concerned, \(\|\cdot \|\) denotes the Euclidean norm, λ is a positive regularization parameter, and \(\|\cdot \|_{1}\) is the \(l_{1}\)-regularization term. By setting \(Ax:=\nabla (\frac {1}{2}\|Lx-b\|^{2} )=L^{T}(Lx-b )\) and \(Bx:=\partial (\lambda \|x\|_{1} )\), one can use the FBA and its modifications to approximate solutions of (24).
In our numerical experiments, we used the MATLAB blur function “P=fspecial(’motion’,20,30)” and added random noise. We initialized the vectors \(x_{0}\) and u to be zeros and \(x_{1}\) to be ones in \(\mathbb{R}^{n}\). In algorithm (2) of Shehu [37], we set \(\lambda = 0.0001\); in algorithm (3) of Kimura and Nakajo [38], we set \(\lambda =0.00001\), \(\gamma _{n}=\frac{1}{n+1}\), and \(C=\mathbb{R}^{256}\). In the FISTA (22), we set \(t_{0}=1\) and \(\lambda =0.03\), and in our proposed algorithm (7), we chose \(a_{n}=0\), \(b_{n}=0.75\), \(c_{n}=0.25\), \(\mu _{n}=0.95\), and \(\lambda _{n}=0.00001\). Finally, we used a tolerance of \(10^{-4}\) and a maximum of 100 iterations for all the algorithms. The original test images (Abubakar, Barbra, Duangkamon, and peppers), their degraded versions, and their restorations are presented in Figs. 1 and 2.
The signal-to-noise ratio (SNR) is a metric used to measure the performance of algorithms in the restoration of degraded images. It is defined as:

$$ \mathrm{SNR}=20\log_{10} \biggl(\frac{ \Vert x \Vert _{2}}{ \Vert x-x_{n} \Vert _{2}} \biggr), $$

where x and \(x_{n}\) are the original image and the estimated image at iteration n, respectively. Under this metric, the higher the SNR value of a restored image, the better the restoration achieved by the algorithm. In Fig. 3, we present a chart showing the performance of algorithms (2), (3), (22), and our proposed algorithm (7) in restoring the test images.
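The metric is straightforward to compute; the following snippet assumes the usual norm-ratio definition of SNR in decibels:

```python
import numpy as np

# SNR in decibels, assuming the usual norm-ratio definition
# SNR = 20*log10(||x|| / ||x - x_n||); a higher value means the estimate
# x_n is a better restoration of the original image x.
def snr(x, x_n):
    return 20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_n))

x = np.array([1.0, 2.0, 3.0])
good = x + 0.01                     # small restoration error -> high SNR
bad = x + 1.0                       # large restoration error -> low SNR
print(snr(x, good) > snr(x, bad))   # -> True
```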
4.3 Application to signal processing
In this subsection, we give an application of our method to compressed sensing, which is an aspect of signal processing that has to do with reconstructing a sparse signal from measured data. Precisely, our goal here is to recover a sparse signal \(x\in \mathbb{R}^{N}\) from the following observation model:
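With the notation described below, the observation model is the standard noisy linear measurement model:

$$ y=Lx+z, $$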
where L is an \(M \times N\) sensing matrix with Gaussian entries and \(M< N\), \(z\in \mathbb{R}^{M}\) is Gaussian noise, and \(y\in \mathbb{R}^{M}\) is the observed or measured data. Since the system (25) is underdetermined and noisy, regularization methods are used to recover x. The approach is similar to that of the image restoration problem introduced in Sect. 4.2.
Just as we did for the image restoration problem, here we also compared the performance of algorithms (3), (2), (22), and (7) in the recovery of the sparse vector \(x\in \mathbb{R}^{N}\) with m nonzero entries. We considered a signal of length \(N=4096\) and \(M=2048\) observations, and we studied the behavior of the algorithms and their mean square errors (MSE) for \(m=50\) and \(m=100\). In the experiment, for algorithm (3) of Kimura and Nakajo, we chose \(\gamma _{n}=\frac{1}{n+1}\) and \(\lambda _{n}=0.001\); in algorithm (2) of Shehu, we chose \(\lambda _{n}=0.001\); in the FISTA (22) of Beck and Teboulle, we chose \(\lambda =0.03\) and \(t_{0}=1\); and in our proposed algorithm (7), we chose \(\mu _{n}=0.95\), \(a_{n}=0\), \(b_{n}=0.75\), \(c_{n}=0.25\), and \(\lambda _{n}=0.019\). We used a stopping criterion of \(10^{-4}\) for all the algorithms. The numerical results of the simulations are presented in Figs. 4, 5, 6, and 7.
4.4 Discussion of the numerical simulations and conclusion
Discussion
-
For the test images considered in the image restoration problem, as shown in Figs. 1, 2, and 3, our proposed algorithm (7) outperforms algorithms (3), (2), and (22) in the restoration process. Furthermore, as we can see clearly from the SNR plots in Fig. 3, our proposed algorithm (7) restored the test images with the highest quality (highest SNR value).
-
For the recovery of the sparse vector in the compressed sensing problem: algorithm (3) of Kimura and Nakajo required 181 iterations to recover the signal when the number of spikes was 50, and 271 iterations when the number of spikes was increased to 100. Algorithm (2) of Shehu required 187 iterations for 50 spikes and 272 iterations for 100 spikes. The FISTA (22) required 117 iterations for 50 spikes and 143 iterations for 100 spikes, which is already an improvement over the algorithms of Kimura and Nakajo [38] and Shehu [37]. However, our proposed algorithm (7) outperforms the FISTA in the recovery process, since it took just 83 iterations for 50 spikes and 94 iterations for 100 spikes.
Conclusion
This paper presents some new geometric inequalities in certain real Banach spaces. The new inequalities were used to prove strong convergence of a sequence generated by an inertial Halpern-type algorithm to a solution of a monotone inclusion problem. Furthermore, some interesting applications of the theorem to convex minimization, image restoration, and signal processing problems were presented. Finally, numerical simulations to restore test images degraded by random noise and motion blur, and to recover a sparse signal, were presented. From the results of the experiments, the proposed method, algorithm (7), appears to be competitive and promising.
Availability of data and materials
Data sharing is not applicable to this article.
References
Kitkuan, D., Kumam, P., Martínez-Moreno, J.: Generalized Halpern-type forward–backward splitting methods for convex minimization problems with application to image restoration problems. Optimization 69, 1557–1581 (2020)
Adamu, A., Kitkuan, D., Padcharoen, A., Chidume, C.E., Kumam, P.: Inertial viscosity-type iterative method for solving inclusion problems with applications. Math. Comput. Simul. 194, 445–459 (2022)
Yodjai, P., Kumam, P., Kitkuan, D., Jirakitpuwapat, W., Plubtieng, S.: The Halpern approximation of three operators splitting method for convex minimization problems with an application to image inpainting. Bangmod Int. J. Math. Comput. Sci. 5(2), 58–75 (2019)
Ibrahim, A.H., Deepho, J., Abubakar, A.B., Adamu, A.: A three-term Polak–Ribière–Polyak derivative-free method and its application to image restoration. Sci. Afr. 13, e00880 (2021)
Abubakar, A.B., Kumam, P., Awwal, A.M.: A modified self-adaptive conjugate gradient method for solving convex constrained monotone nonlinear equations with applications to signal recovery problems. Bangmod Int. J. Math. Comput. Sci. 5(2), 1–26 (2019)
Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979). https://doi.org/10.1016/0022-247X(79)90234-8. http://www.sciencedirect.com/science/article/pii/0022247X79902348
Peaceman, D.W., Rachford, H.H. Jr.: The numerical solution of parabolic and elliptic differential equations. J. Soc. Ind. Appl. Math. 3(1), 28–41 (1955)
Douglas, J., Rachford, H.H.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82(2), 421–439 (1956)
Takahashi, W., Wong, N.-C., Yao, J.-C., et al.: Two generalized strong convergence theorems of Halpern’s type in Hilbert spaces and applications. Taiwan. J. Math. 16(3), 1151–1172 (2012)
Kitkuan, D., Kumam, P., Martínez-Moreno, J., Sitthithakerngkiet, K.: Inertial viscosity forward–backward splitting algorithm for monotone inclusions and its application to image restoration problems. Int. J. Comput. Math. 97(1–2), 482–497 (2020)
Ogwo, G.N., Alakoya, T.O., Mewomo, O.T.: Inertial iterative method with self-adaptive step size for finite family of split monotone variational inclusion and fixed point problems in Banach spaces. Demonstr. Math. (2021)
Alakoya, T.O., Mewomo, O.T.: Viscosity s-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed point problems. Comput. Appl. Math. 41(1), 1–31 (2022)
Chidume, C.E., Adamu, A., Nnakwe, M.O.: Strong convergence of an inertial algorithm for maximal monotone inclusions with applications. Fixed Point Theory Appl. 2020(1), 13 (2020)
Jiang, B., Wang, Y., Yao, J.-C.: Multi-step inertial regularized methods for hierarchical variational inequality problems involving generalized Lipschitzian mappings. Mathematics 9(17), 2103 (2021). https://doi.org/10.3390/math9172103. https://www.mdpi.com/2227-7390/9/17/2103
Wang, Y., Li, X., Jiang, B.: Two new inertial relaxed gradient CQ algorithms on the split equality problem. J. Appl. Anal. Comput. 12(1), 436–454 (2022)
Phairatchatniyom, P., ur Rehman, H., Abubakar, J., Kumam, P., Martínez-Moreno, J.: An inertial iterative scheme for solving split variational inclusion problems in real Hilbert spaces. Bangmod Int. J. Math. Comput. Sci. 7(2), 35–52 (2021)
Chidume, C.E., Ikechukwu, S.I., Adamu, A.: Inertial algorithm for approximating a common fixed point for a countable family of relatively nonexpansive maps. Fixed Point Theory Appl. 2018(1), 9 (2018)
Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)
Chidume, C.E., Adamu, A., Nnakwe, M.O.: An inertial algorithm for solving Hammerstein equations. Symmetry 13(3), 376 (2021)
Ibrahim, A.H., Kumam, P., Abubakar, A.B., Adamu, A.: Accelerated derivative-free method for nonlinear monotone equations with an application. Numer. Linear Algebra Appl. 29, e2424 (2022)
Pan, C., Wang, Y.: Convergence theorems for modified inertial viscosity splitting methods in Banach spaces. Mathematics 7(2), 156 (2019)
Taddele, G.H., Gebrie, A.G., Abubakar, J.: An iterative method with inertial effect for solving multiple-set split feasibility problem. Bangmod Int. J. Math. Comput. Sci. 7(2), 53–73 (2021)
Chidume, C.E., Kumam, P., Adamu, A.: A hybrid inertial algorithm for approximating solution of convex feasibility problems with applications. Fixed Point Theory Appl. 2020(1), 12 (2020)
Adamu, A., Adam, A.A.: Approximation of solutions of split equality fixed point problems with applications. Carpath. J. Math. 37(3), 381–392 (2021)
Lorenz, D.A., Pock, T.: An inertial forward–backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51(2), 311–325 (2015)
Cholamjiak, W., Cholamjiak, P., Suantai, S.: An inertial forward–backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 20(1), 1–17 (2018)
Adamu, A., Deepho, J., Ibrahim, A.H., Abubakar, A.B.: Approximation of zeros of sum of monotone mappings with applications to variational inequality problem and image processing. Nonlinear Funct. Anal. Appl. 26(2), 411–432 (2021)
Leray, J.: Sur le mouvement d’un liquide visqueux emplissant l’espace. Acta Math. 63(1), 193–248 (1934)
Miyakawa, T.: On the initial value problem for the Navier–Stokes equations in \(L^{p}\) spaces. Hiroshima Math. J. 11(1), 9–20 (1981)
Pholasa, N., Cholamjiak, P., Cho, Y.J.: Modified forward–backward splitting methods for accretive operators in Banach spaces. J. Nonlinear Sci. Appl. 9, 2766–2778 (2016)
Shahzad, N., Zegeye, H.: Strong convergence theorems for a common zero for a finite family of m-accretive mappings. Nonlinear Anal. 66, 1161–1169 (2007)
Qin, X., Cho, S.Y., Yao, J.-C.: Weak and strong convergence of splitting algorithms in Banach spaces. Optimization 69(2), 243–267 (2020)
Luo, Y.: Weak and strong convergence results of forward–backward splitting methods for solving inclusion problems in Banach spaces. J. Nonlinear Convex Anal. 21(2), 341–353 (2020)
Chidume, C.E., Adamu, A., Kumam, P., Kitkuan, D.: Generalized hybrid viscosity-type forward–backward splitting method with application to convex minimization and image restoration problems. Numer. Funct. Anal. Optim. 42, 1586–1607 (2021)
Chidume, C.E., Adamu, A., Minjibir, M.S., Nnyaba, U.V.: On the strong convergence of the proximal point algorithm with an application to Hammerstein equations. J. Fixed Point Theory Appl. 22(3), 1–21 (2020)
Chidume, C.E., Adamu, A., Okereke, L.C.: Strong convergence theorem for some nonexpansive-type mappings in certain Banach spaces. Thai J. Math. 18(3), 1537–1548 (2020)
Shehu, Y.: Convergence results of forward–backward algorithms for sum of monotone operators in Banach spaces. Results Math. 74(4), 1–24 (2019)
Kimura, Y., Nakajo, K.: Strong convergence for a modified forward–backward splitting method in Banach spaces. J. Nonlinear Var. Anal. 3(1), 5–18 (2019)
Alber, Y.I.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lecture Notes in Pure and Applied Mathematics, pp. 15–50 (1996)
Alber, Y., Ryazantseva, I.: Nonlinear Ill-Posed Problems of Monotone Type. Springer, Berlin (2006). ISBN 9781402043963
Nilsrakoo, W., Saejung, S.: Strong convergence theorems by Halpern–Mann iterations for relatively nonexpansive mappings in Banach spaces. Appl. Math. Comput. 217(14), 6577–6586 (2011)
Chidume, C.E., Adamu, A.: Solving split equality fixed point problem for quasi-phi-nonexpansive mappings. Thai J. Math. 19(4), 1699–1717 (2021)
Chidume, C.E., Adamu, A., Okereke, L.C.: Iterative algorithms for solutions of Hammerstein equations in real Banach spaces. Fixed Point Theory Appl. 2020(1), 4 (2020)
Xu, Z.-B., Roach, G.F.: Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces. J. Math. Anal. Appl. 157(1), 189–210 (1991)
Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(3), 938–945 (2002)
Xu, H.-K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(1), 240–256 (2002)
Maingé, P.E.: The viscosity approximation process for quasi-nonexpansive mappings in Hilbert spaces. Comput. Math. Appl. 59(1), 74–79 (2010)
Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 14(3), 773–782 (2004)
Mu, Z., Peng, Y.: A note on the inertial proximal point method. Stat. Optim. Inf. Comput. 3(3), 241–248 (2015)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Liang, J., Luo, T., Schönlieb, C.-B.: Improving “Fast iterative shrinkage-thresholding algorithm”: faster, smarter and greedier. arXiv preprint (2018). arXiv:1811.01430
Acknowledgements
The authors would like to thank the referees for their esteemed comments and suggestions. The first author acknowledges with thanks the King Mongkut’s University of Technology Thonburi Postdoctoral Fellowship and the Center of Excellence in Theoretical and Computational Science (TaCS-CoE) for their financial support. Moreover, this research was funded by the National Science, Research and Innovation Fund (NSRF), King Mongkut’s University of Technology North Bangkok, under Contract no. KMUTNB-FF-65-13.
Funding
This project was funded by the National Research Council of Thailand (NRCT) under the Research Grants for Talented Mid-Career Researchers (Contract no. N41A640089).
Author information
Contributions
AA and PK formulated the problem and discussed the formulation with DK, AP and TS. Analysis, proof of the main theorem and writing the draft manuscript were done jointly by AA, DK, PK, AP and TS. Proofreading and writing of the original manuscript were done jointly by AA and DK. Software and numerical simulations were done jointly by AP, DK and AA. Finally, TS and PK secured the grant for the research. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Adamu, A., Kitkuan, D., Kumam, P. et al. Approximation method for monotone inclusion problems in real Banach spaces with applications. J Inequal Appl 2022, 70 (2022). https://doi.org/10.1186/s13660-022-02805-0