Two-step inertial forward–reflected–anchored–backward splitting algorithm for solving monotone inclusion problems

The main purpose of this paper is to propose and study a two-step inertial anchored version of the forward–reflected–backward splitting algorithm of Malitsky and Tam in a real Hilbert space. Our proposed algorithm converges strongly to a zero of the sum of a set-valued maximal monotone operator and a single-valued monotone Lipschitz continuous operator. It involves only one forward evaluation of the single-valued operator and one backward evaluation of the set-valued operator at each iteration, a feature that is absent from many other strongly convergent splitting methods in the literature. Finally, we perform numerical experiments involving an image restoration problem and compare our algorithm with known related strongly convergent splitting algorithms in the literature.


Introduction
Let H be a real Hilbert space, and let A : H → 2^H and B : H → H be two monotone operators. The problem of finding a zero of the sum of A and B, also known as the monotone inclusion problem, is defined as

find x ∈ H such that 0 ∈ (A + B)x. (1.1)

We denote the solution set of this problem by (A + B)^{-1}(0). The monotone inclusion is an important problem in optimization as well as in signal processing, image recovery, and machine learning. For instance, consider the minimization problem

min_{x ∈ H} f(x) + g(x), (1.2)

where f : H → (−∞, +∞] is proper, convex and lower semicontinuous, and g : H → R is convex and continuously differentiable. The solutions to (1.2) are solutions to the problem

find x ∈ H such that 0 ∈ ∂f(x) + ∇g(x),

where ∂f denotes the subdifferential of f and ∇g is the gradient of g. Thus, the minimization of the sum of two convex functions is a special case of the monotone inclusion problem (1.1). Problems of the form (1.1) are often solved by splitting algorithms, which evaluate A and B separately, by means of a forward evaluation of B and a backward evaluation of A, rather than evaluating their sum (A + B). These algorithms have undergone tremendous study, motivated by the desire to devise faster, computationally inexpensive and more widely applicable methods. Among these splitting algorithms is the following forward–backward splitting algorithm (Lions and Mercier 1979; Passty 1979):

x_{n+1} = J^A_λ(x_n − λBx_n) := (I_H + λA)^{-1}(x_n − λBx_n), (1.3)

where I_H is the identity operator on H and λ is a positive constant. This algorithm involves only one forward evaluation of B and one backward evaluation of A per iteration, and it is known to converge weakly to a solution of the inclusion problem (1.1) when B is L^{-1}-cocoercive (which is a strict assumption), λ ∈ (0, 2L^{-1}), A is maximal monotone and (A + B)^{-1}(0) is nonempty. Strongly convergent variants of Algorithm (1.3) have been studied under the strict cocoercivity assumption on B in Takahashi et al. (2010); Wang and Wang (2018).
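To make the forward–backward update (1.3) concrete, the following is a minimal numerical sketch (a toy example of ours, not taken from the paper): we take A = ∂(μ‖·‖₁), whose resolvent J^A_λ is the soft-thresholding map, and B = ∇(½‖· − b‖²), which is 1-cocoercive, so any λ ∈ (0, 2) is admissible.

```python
import numpy as np

def soft_threshold(x, t):
    # Resolvent of t * d(||.||_1): J(x) = sign(x) * max(|x| - t, 0)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(b, mu, lam=1.0, iters=200):
    # Forward-backward iteration (1.3): x_{n+1} = J^A_lam(x_n - lam * B x_n),
    # applied to A = d(mu*||.||_1) and B(x) = x - b (gradient of 0.5*||x - b||^2)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = soft_threshold(x - lam * (x - b), lam * mu)
    return x

b = np.array([3.0, -0.5, 1.2])
x_star = forward_backward(b, mu=1.0)
# the exact minimizer of mu*||x||_1 + 0.5*||x - b||^2 is soft_threshold(b, mu)
```

With λ = 1 the iteration lands on the closed-form solution soft_threshold(b, μ) immediately; smaller admissible step sizes approach it geometrically.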
The strict cocoercivity assumption on B in Algorithm (1.3) (and its strongly convergent variants) was removed in Tseng (2000), where the author proposed the following forward–backward–forward splitting algorithm (also known as Tseng's splitting method):

y_n = J^A_λ(x_n − λBx_n),
x_{n+1} = y_n − λ(By_n − Bx_n). (1.4)

The main advantage of this algorithm is that it converges weakly to a solution of (1.1) under the much weaker assumption that B is monotone and L-Lipschitz continuous, λ ∈ (0, L^{-1}), A is maximal monotone and (A + B)^{-1}(0) ≠ ∅. However, its main disadvantage is that it involves two forward evaluations of B per iteration, and this might affect the efficiency of the algorithm, especially when it is applied to optimization problems arising from large-scale applications and optimal control theory, where computations of the pertinent operators are often very expensive (see Lions (1971)). Strongly convergent variants of Algorithm (1.4) were studied in Gibali and Thong (2018); Thong and Cholamjiak (2019), and they also have the disadvantage of requiring two forward evaluations of B.
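Tseng's method (1.4) handles operators that are Lipschitz but not cocoercive. A minimal sketch (again our own toy example, not from the paper) uses the 90-degree rotation B(x) = Mx, which is monotone and 1-Lipschitz but not cocoercive, and takes A = 0 so that the resolvent is the identity:

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric: monotone, 1-Lipschitz, not cocoercive

def tseng_fbf(x0, lam=0.5, iters=100):
    # Tseng's splitting (1.4) with A = 0 (so J^A_lam is the identity):
    #   y_n     = x_n - lam * B(x_n)
    #   x_{n+1} = y_n - lam * (B(y_n) - B(x_n))
    # Note the TWO forward evaluations of B per iteration.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Bx = M @ x
        y = x - lam * Bx
        x = y - lam * (M @ y - Bx)
    return x

x = tseng_fbf([1.0, 1.0])
# iterates contract toward the unique zero of B (the origin) for lam in (0, 1/L)
```

By contrast, the plain forward–backward step x − λMx grows the norm by a factor of √(1 + λ²) on this skew-symmetric operator, which illustrates why (1.3) needs cocoercivity.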
This disadvantage was recently overcome by Malitsky and Tam (2020), who proposed the following forward–reflected–backward splitting algorithm:

x_{n+1} = J^A_λ(x_n − 2λBx_n + λBx_{n−1}). (1.5)

The main advantage of Algorithm (1.5) is that it requires only one forward evaluation of B per iteration even when B is merely monotone and Lipschitz continuous. The reflexive Banach space variant of Algorithm (1.5) was studied in Izuchukwu et al. (2022), and when B is linear, Algorithm (1.5) has the same structure as the reflected–forward–backward splitting algorithm proposed and studied in Cevher and Vũ (2021). However, due to the computational structure of Algorithm (1.5) (unlike Algorithm (1.3) and Algorithm (1.4)), its strongly convergent variants are very rare in the literature, despite the fact that, in infinite-dimensional spaces, strong convergence results are much more desirable than weak convergence results.
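For comparison with Tseng's method, here is a sketch of the forward–reflected–backward update (1.5) on the same kind of toy skew-symmetric operator (our own example, with A = 0): the previously computed value Bx_{n−1} is reused, so each iteration performs only one forward evaluation.

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])  # monotone, 1-Lipschitz (L = 1), not cocoercive

def frb(x0, lam=0.4, iters=150):
    # Malitsky-Tam update (1.5) with A = 0 (J^A_lam = identity):
    #   x_{n+1} = x_n - 2*lam*B(x_n) + lam*B(x_{n-1}),   lam in (0, 1/(2L))
    x = np.asarray(x0, dtype=float)
    Bx_prev = M @ x                       # B(x_{-1}), taking x_{-1} = x_0
    for _ in range(iters):
        Bx = M @ x                        # the single forward evaluation per iteration
        x, Bx_prev = x - 2 * lam * Bx + lam * Bx_prev, Bx
    return x

x = frb([1.0, 1.0])
# iterates tend to the unique zero of A + B (the origin)
```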
Recently, fast convergent algorithms for solving optimization problems have been of great interest to many researchers. On one hand, the anchored extrapolation step is known to be one of the most important ingredients for improving the convergence rate of optimization algorithms (see Qi and Xu (2021); Yoon and Ryu (2021) for details). On the other hand, the inertial technique, which is based upon a discrete analog of a second-order dissipative dynamical system, is also known for its efficiency in improving the convergence speed of iterative algorithms. The one-step inertial extrapolation x_n + ϑ(x_n − x_{n−1}) is the most commonly used technique for this purpose. It originates from the heavy ball method for the second-order dynamical system

ẍ(t) + γẋ(t) + ∇f(x(t)) = 0

for minimizing a smooth convex function f, where γ > 0 is a damping or friction parameter. Polyak (1964) was the first author to propose the heavy ball method, and Alvarez and Attouch (2001) extended it to the setting of a general maximal monotone operator. In Bing and Cho (2021), the authors proposed a one-step inertial viscosity-type forward–backward–forward splitting algorithm (Bing and Cho 2021, Algorithm 3.4), labeled (1.6), where f is a contraction mapping and ϑ_n is the inertial parameter. They proved that the sequence {x_n} generated by Algorithm (1.6) converges strongly to a solution of the monotone inclusion problem (1.1) (see also Bing and Cho (2021)). However, one-step inertial extrapolation does not always provide acceleration; consider, for instance, the fixed point problem (1.7). It was shown in [25, Section 4] that for problem (1.7), the two-step inertial fixed point iteration

x_{n+1} = T(x_n + ϑ(x_n − x_{n−1}) + β(x_{n−1} − x_{n−2})),

where T is the underlying fixed point operator, converges faster, in terms of both the number of iterations and the CPU time, than the one-step inertial fixed point iteration

x_{n+1} = T(x_n + ϑ(x_n − x_{n−1})). (1.8)

Furthermore, it was shown, using problem (1.7), that the sequences generated by the one-step inertial fixed point iteration (1.8) converge more slowly than those generated by its non-inertial version. This shows that the one-step inertial fixed point iteration (1.8) may fail to provide acceleration. But as discussed in Liang (2016, Chapter 4), employing a multi-step inertial term, such as the two-step inertial term ϑ(x_n − x_{n−1}) + β(x_{n−1} − x_{n−2}), can remedy this. Building on Polyak (1987), several authors recently studied multi-step inertial algorithms and showed that multi-step inertial terms (e.g., the two-step inertial term) enhance the speed of optimization algorithms.
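The acceleration effect of inertia can be checked numerically. The sketch below (our own toy example, unrelated to problem (1.7)) compares plain gradient descent with Polyak's heavy-ball method, i.e. a one-step inertial gradient iteration, on an ill-conditioned quadratic:

```python
import numpy as np

# f(x) = 0.5 * x^T diag(1, 100) x, so mu = 1 and L = 100 (condition number 100)
H = np.array([1.0, 100.0])

def run(inertia, step, tol=1e-6, max_iters=5000):
    # x_{n+1} = x_n - step * grad f(x_n) + inertia * (x_n - x_{n-1});
    # inertia = 0 recovers plain gradient descent
    x = x_prev = np.array([1.0, 1.0])
    for n in range(max_iters):
        if np.linalg.norm(x) < tol:
            return n                      # iterations needed to approach the minimizer 0
        x, x_prev = x - step * H * x + inertia * (x - x_prev), x
    return max_iters

gd_iters = run(inertia=0.0, step=2.0 / 101.0)      # optimal plain gradient step 2/(L+mu)
hb_iters = run(inertia=(9.0 / 11.0) ** 2,          # Polyak tuning ((sqrt(k)-1)/(sqrt(k)+1))^2
               step=4.0 / 121.0)                   # 4/(sqrt(L)+sqrt(mu))^2
# the inertial run needs far fewer iterations on this problem
```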
In this paper, we investigate the strong convergence of a two-step inertial anchored variant of the forward–reflected–backward splitting algorithm (1.5). In other words, we propose a two-step inertial forward–reflected–anchored–backward splitting algorithm and prove that it converges strongly to a solution of Problem (1.1). The proposed algorithm involves only one forward evaluation of the monotone Lipschitz continuous operator B and one backward evaluation of the maximal monotone operator A at each iteration, a feature that is absent from many other available strongly convergent inertial splitting algorithms in the literature. Furthermore, we perform numerical experiments on problems emanating from image restoration, and these experiments confirm that our proposed algorithm is efficient and faster than other related strongly convergent splitting algorithms in the literature.

Preliminaries
The operator B : H → H is said to be L-Lipschitz continuous if there exists L > 0 such that ‖Bx − By‖ ≤ L‖x − y‖ for all x, y ∈ H, and monotone if ⟨Bx − By, x − y⟩ ≥ 0 for all x, y ∈ H. A set-valued operator A : H → 2^H is monotone if ⟨u − v, x − y⟩ ≥ 0 for all x, y ∈ H, u ∈ Ax and v ∈ Ay. The monotone operator A is called maximal if its graph G(A), defined by G(A) := {(x, u) ∈ H × H : u ∈ Ax}, is not properly contained in the graph of any other monotone operator. In other words, A is a maximal monotone operator if, for (x, u) ∈ H × H, ⟨u − v, x − y⟩ ≥ 0 for all (y, v) ∈ G(A) implies u ∈ Ax. For a set-valued operator A and λ > 0, the resolvent associated with A is the mapping J^A_λ : H → 2^H defined by

J^A_λ := (I_H + λA)^{-1}.

If A is maximal monotone and B is single-valued, then both J^A_λ and J^A_λ(I_H − λB) are single-valued and everywhere defined on H. Furthermore, the resolvent J^A_λ is nonexpansive. The following hold in a real Hilbert space H:

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩ for all x, y ∈ H, (2.1)

and

‖αx + (1 − α)y‖² = α‖x‖² + (1 − α)‖y‖² − α(1 − α)‖x − y‖² for all x, y ∈ H and α ∈ R. (2.2)

Lemma 2.1 (Saejung and Yotkaew 2012; Maingé 2007) Suppose that {p_n} and {r_n} are sequences of nonnegative real numbers such that, for all n ≥ 1,

p_{n+1} ≤ (1 − α_n)p_n + α_n q_n + r_n,

where {α_n} ⊂ (0, 1), {q_n} is a real sequence and Σ_{n=1}^∞ r_n < ∞. If Σ_{n=1}^∞ α_n = ∞ and lim sup_{i→∞} q_{n_i} ≤ 0 for every subsequence {p_{n_i}} of {p_n} satisfying lim inf_{i→∞}(p_{n_i+1} − p_{n_i}) ≥ 0, then lim_{n→∞} p_n = 0.
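As a concrete illustration of the resolvent (ours, not from the paper): for A = ∂‖·‖₁, the resolvent J^A_λ = (I + λA)^{-1} is the soft-thresholding map, and its nonexpansiveness can be checked numerically:

```python
import numpy as np

def resolvent_abs(lam, x):
    # J^A_lam = (I + lam*A)^{-1} for A = subdifferential of the l1 norm:
    # single-valued and everywhere defined, as maximal monotonicity guarantees
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
x, y = rng.normal(size=100), rng.normal(size=100)
lhs = np.linalg.norm(resolvent_abs(0.7, x) - resolvent_abs(0.7, y))
rhs = np.linalg.norm(x - y)
# lhs <= rhs: the resolvent is nonexpansive
```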
On the other hand, since β ≤ 0 and c_1 > 0, we get from (3.22) that p_n ≥ 0 for all n ≥ n_0. Using these facts in (3.19), we obtain that {p_n} is bounded. It then follows from (3.22) that the sequence {x_n} is indeed bounded, as claimed.
We now state and prove the main convergence theorem of this paper.

Theorem 3.5 Let {x_n} be generated by Algorithm 3.1 when Assumption 3.2 holds. If lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞, then {x_n} converges strongly to P_{(A+B)^{-1}(0)} v.
Proof Let x = P_{(A+B)^{-1}(0)} v. Then by (2.1), we get (3.23). Again, using (2.1), we get (3.24), where M := sup_{n≥1} ‖x_{n+1} − v‖, which exists due to the boundedness of {x_n} proved in Lemma 3.4. Now, using (3.23) and (3.24) in (3.7), we see that (3.25) holds. Next, putting (3.12) and (3.17) in (3.25), we obtain (3.26). Thus, there exists n_1 ≥ n_0 such that c_n > 0 for all n ≥ n_1. Also, from (3.21), there exists n_2 ≥ n_0 such that d_n > 0 for all n ≥ n_2. Therefore, using (3.28) and (3.29), we obtain (3.30). In the light of Lemma 3.4, we see that {x_{n_i}} is bounded. Thus, we can choose a subsequence {x_{n_{i_j}}} of {x_{n_i}} such that {x_{n_{i_j}}} converges weakly to some x* ∈ H and (3.31) holds. Using this, (3.3) and the monotonicity of A, we find that (3.34) holds. Thus, using the monotonicity of B, we obtain (3.35). Letting j → ∞ in (3.35) and using (3.32) and (3.33), we obtain that ⟨v, u − x*⟩ ≥ 0. By Lemma 2.2, A + B is maximal monotone. Hence, we get that x* ∈ (A + B)^{-1}(0). Since x = P_{(A+B)^{-1}(0)} v, it follows from (3.34) and the characterization of the metric projection that the limit superior in (3.36) is nonpositive. Using (3.29), (3.30) and (3.36), we obtain that lim sup_{i→∞} q_{n_i} ≤ 0. Thus, in view of the condition Σ_{n=1}^∞ α_n = ∞, Lemma 2.1 and (3.27), we see that lim_{n→∞} t_n = 0. This, together with (3.22), implies that {x_n} converges strongly to x = P_{(A+B)^{-1}(0)} v, as asserted.
The step size defined in (3.1) makes it possible for Algorithm 3.1 to be applied in practice even when the Lipschitz constant L of B is not known. However, when this constant is known or can be computed, we may simply adopt the following variant of Algorithm 3.1:

Algorithm 3.6 Let λ ∈ (0, 1/(2L)), ϑ ∈ [0, 1), β ≤ 0, and choose the sequence {α_n} in (0, 1). For arbitrary v, x_{−1}, x_0, x_1 ∈ H, let the sequence {x_n} be generated by the update of Algorithm 3.1 with the constant step size λ_n ≡ λ.

Remark 3.7 Using arguments similar to those in Lemma 3.4 and Theorem 3.5, we can establish that the sequence {x_n} generated by Algorithm 3.6 converges strongly to P_{(A+B)^{-1}(0)} v.
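Since the display for Algorithm 3.6 did not survive this extraction, the following is only a plausible sketch and NOT the paper's exact update: we assume a step that combines the two-step inertial extrapolation w_n = x_n + ϑ(x_n − x_{n−1}) + β(x_{n−1} − x_{n−2}) with a Halpern-type anchor α_n v + (1 − α_n)w_n and the reflected term of (1.5). It is run on the same toy skew-symmetric operator with A = 0:

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # monotone, 1-Lipschitz, not cocoercive

def anchored_two_step_frb(v, lam=0.4, theta=0.1, beta=-0.05, iters=3000):
    # HYPOTHETICAL update (our assumption, not the paper's exact Algorithm 3.6):
    #   w_n     = x_n + theta*(x_n - x_{n-1}) + beta*(x_{n-1} - x_{n-2})
    #   x_{n+1} = alpha_n*v + (1 - alpha_n)*w_n - 2*lam*B(x_n) + lam*B(x_{n-1})
    # with A = 0, anchor v, and alpha_n = 1/(n+2) -> 0 with sum alpha_n = infinity
    v = np.asarray(v, dtype=float)
    x2 = x1 = x = v.copy()                # take x_{-1} = x_0 = x_1 = v
    Bx_prev = M @ x
    for n in range(iters):
        alpha = 1.0 / (n + 2)
        Bx = M @ x                        # single forward evaluation per iteration
        w = x + theta * (x - x1) + beta * (x1 - x2)
        x_next = alpha * v + (1 - alpha) * w - 2 * lam * Bx + lam * Bx_prev
        x2, x1, x, Bx_prev = x1, x, x_next, Bx
    return x

x = anchored_two_step_frb([1.0, 1.0])
# as alpha_n -> 0, the iterates drift toward the unique zero of A + B (the origin)
```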
Remark 3.8 (a) We obtained strong convergence results for Algorithm 3.1 without assuming that either A or B is strongly monotone (a condition that is quite restrictive). Rather, we modified the forward–reflected–backward splitting algorithm of Malitsky and Tam (2020) appropriately to obtain our strong convergence results. (b) Compared to Izuchukwu et al. (2023), we proved the strong convergence of Algorithm 3.1 using the double inertial technique.
Remark 3.9 A more careful examination of Algorithm 3.1 and its convergence analysis can help us relax the Lipschitz continuity assumption on B to uniform continuity (see, for example, Thong et al. (2023), page 1114). In a finite-dimensional space, B can even be merely continuous (see Izuchukwu and Shehu (2021, Section 3)). However, as seen in these papers, this relaxation may come at the cost of stricter restrictions on the step size {λ_n} (e.g., through some linesearch techniques). We therefore intend to investigate these restrictions in detail in a future project.

Numerical illustrations
In this section, using test examples originating from an image restoration problem, as well as an academic example, we compare Algorithm 3.1 with other strongly convergent algorithms in the literature, including (Alakoya et al. 2022, Algorithm 3.2) and (Bing and Cho 2021, Algorithm 3.4).

Example 4.1 We consider the image restoration problem formulated as the l1-regularized least-squares problem

min_{x ∈ R^m} λ‖x‖_1 + (1/2)‖Dx − ĉ‖²_2,

where λ > 0 (we take λ = 1), x ∈ R^m is the original image to be restored, ĉ ∈ R^N is the observed (blurred and noisy) image, and D : R^m → R^N is the blurring operator. The quality of the restored image is measured by

SNR = 20 log_10 ( ‖x‖_2 / ‖x − x*‖_2 ),

where SNR means signal-to-noise ratio and x* is the restored image. For the implementation, we take x_0 = 0 ∈ R^{m×m} and x_{−1} = x_1 = 1 ∈ R^{m×N}, and use the following images found in the MATLAB Image Processing Toolbox:
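The SNR quality measure is, in this literature, typically computed as 20 log₁₀(‖x‖₂ / ‖x − x*‖₂); since the paper's display did not survive extraction, that exact formula is an assumption on our part. A minimal sketch of how it would be computed:

```python
import numpy as np

def snr(original, restored):
    # ASSUMED standard formula: SNR = 20*log10(||x||_2 / ||x - x*||_2), in dB.
    # Larger values indicate a better restoration.
    return 20.0 * np.log10(np.linalg.norm(original)
                           / np.linalg.norm(original - restored))

x_true = np.array([1.0, 2.0, 3.0])
good = snr(x_true, 0.9 * x_true)   # residual is 10% of the signal -> 20 dB
bad = snr(x_true, 0.5 * x_true)    # residual is 50% of the signal -> about 6 dB
```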

Example 4.2 Let
Then, A is maximal monotone and B is Lipschitz continuous and monotone with Lipschitz constant L = 1.
For the implementation, we take the starting points as specified in each case (Case 1, etc.) below. We then use the stopping criterion TOL_n := 0.5‖x_n − J^A(x_n − Bx_n)‖² < ε for all algorithms, where ε is the predetermined error tolerance.
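The stopping criterion is cheap to evaluate, since it reuses the same resolvent and forward operator as the iteration itself. A minimal sketch (our own toy instance, with A = ∂‖·‖₁ and B(x) = x − b, so the solution is known in closed form):

```python
import numpy as np

b = np.array([3.0, -0.5])

def resolvent(x):
    # J^A for A = subdifferential of ||.||_1 (unit step): soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)

def B(x):
    return x - b            # gradient of 0.5*||x - b||^2

def tol(x):
    # TOL_n = 0.5 * || x_n - J^A(x_n - B x_n) ||^2 : vanishes exactly at solutions
    r = x - resolvent(x - B(x))
    return 0.5 * float(r @ r)

x_star = resolvent(b)       # closed-form zero of A + B for this toy instance
# tol(x_star) is exactly 0, and tol grows with the distance to the solution
```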
All computations are performed using MATLAB R2016b running on a personal computer with an Intel(R) Core(TM) i5-2600 CPU at 2.30 GHz and 8.00 GB RAM.
In Tables 1 and 2, "Iter" and "CPU" denote the number of iterations and the CPU time in seconds, respectively.

Remark 4.3 Figures 1, 2 and 3 can be understood better by looking at the graphs of "SNR" versus the iteration number (n), together with Table 1. Note that the larger the SNR, the better the quality of the restored image.

Conclusion
We have proposed a two-step inertial forward–reflected–anchored–backward splitting algorithm for solving the monotone inclusion problem (1.1) in a real Hilbert space. We have also proved that the sequence generated by this algorithm converges strongly to a solution of the monotone inclusion problem. The algorithm inherits the attractive features of the forward–reflected–backward splitting algorithm (1.5); namely, it involves only one forward evaluation of B per iteration even though B is not required to be cocoercive. However, unlike the forward–reflected–backward splitting algorithm (1.5), our algorithm converges strongly. Numerical results show that the proposed algorithm is efficient and faster than other related strongly convergent splitting algorithms in the literature. We remark that our proposed algorithm involves a restrictive condition on {e_n}; for example, the sequence {1/n} does not satisfy this condition. We therefore intend to relax the restriction on {e_n} in our ongoing projects.
From (3.28) and the Lipschitz continuity of B, we find that lim_{i→∞} ‖Bx_{n_i+1} − Bx_{n_i}‖ = 0. (3.33)

Test images: (a) Tire image of size 205 × 232; to create the blurred and noisy (observed) image, we use a Gaussian blur of size 9 × 9 with standard deviation σ = 4. (b) Cameraman image of size 256 × 256; we use a Gaussian blur of size 7 × 7 with standard deviation σ = 4. (c) Magnetic Resonance Imaging (MRI) image of size 128 × 128; we use a Gaussian blur of size 7 × 7 with standard deviation σ = 4.

Fig. 1 Numerical results for Tire

Table 1 Numerical results for Example 4.1