Abstract
In this paper, we study the contraction-proximal point algorithm for approximating a zero of a maximal monotone mapping. We establish the norm convergence of this algorithm under two new conditions, thereby extending a recent result of Ceng, Wu and Yao to a more general setting.
MSC:47J20, 49J40, 65J15, 90C25.
1 Introduction
We consider the problem of finding $z \in \mathcal{H}$ so that
$$0 \in A(z),$$
where $\mathcal{H}$ is a Hilbert space and $A: \mathcal{H} \to 2^{\mathcal{H}}$ is a given maximal monotone mapping. This problem is important owing to its various applications in concrete disciplines, including convex programming and variational inequalities. A classical way to solve such a problem is the proximal point algorithm (PPA) [1]. For any initial guess $x_0 \in \mathcal{H}$, the PPA generates an iterative sequence as
$$x_{n+1} = J_{c_n}(x_n + e_n),$$
where $J_{c_n} = (I + c_n A)^{-1}$ stands for the resolvent of $A$ and $(e_n)$ is the error sequence. In general, the following accuracy criterion on the error sequence:
$$\sum_{n=0}^{\infty} \|e_n\| < \infty \quad (\mathrm{I})$$
is needed to ensure the convergence of the PPA. In [1], Rockafellar also presented another accuracy criterion on the error sequence:
$$\|e_n\| \le \eta_n \|x_{n+1} - x_n\|,$$
where $\sum_{n=0}^{\infty} \eta_n < \infty$.
This criterion was then improved by Han and He [2] to the square-summable requirement
$$\|e_n\| \le \eta_n \|x_{n+1} - x_n\|, \quad \sum_{n=0}^{\infty} \eta_n^2 < \infty. \quad (\mathrm{II})$$
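As a concrete illustration (not taken from the paper), here is a minimal Python sketch of the exact PPA (zero errors) for the one-dimensional operator $A = \partial|\cdot|$, whose resolvent is the soft-thresholding map; the function names and parameter choices are ours:

```python
# Exact PPA (e_n = 0) for the maximal monotone operator A = d|.| on the
# real line.  Its resolvent J_c = (I + cA)^{-1} is soft-thresholding,
# and the unique zero of A is 0.

def soft_threshold(x, c):
    """Resolvent J_c of A = subdifferential of |.|."""
    if x > c:
        return x - c
    if x < -c:
        return x + c
    return 0.0

def ppa(x0, c=1.0, iterations=50):
    x = x0
    for _ in range(iterations):
        x = soft_threshold(x, c)   # x_{n+1} = J_{c_n}(x_n), with c_n = c
    return x

print(ppa(10.0))   # reaches the zero of A after finitely many steps
```

For this toy operator the iteration shrinks the iterate by $c$ per step until it lands exactly on the zero, so convergence is even finite; in general one only obtains weak convergence, which motivates the modifications below.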
It is well known that the PPA does not necessarily converge strongly [3]. The question of how to modify the PPA so that strong convergence is guaranteed has therefore attracted the attention of many researchers (see, e.g., [4–8]). In particular, one method for doing this has the following scheme:
$$x_{n+1} = \alpha_n u + (1 - \alpha_n) J_{c_n}(x_n + e_n),$$
where $u \in \mathcal{H}$ is fixed and $(\alpha_n) \subset (0, 1)$ is a real sequence. This algorithm, introduced independently by Xu [8] and Kamimura-Takahashi [5], is known as the contraction-proximal point algorithm (CPPA) [9]; it is indeed a combination of Halpern's iteration and the PPA. There are various conditions that ensure the norm convergence of the CPPA with criterion (I) (cf. [7, 10–12]), and the weakest one so far may be the following [13]:
- (i) $\lim_{n\to\infty} \alpha_n = 0$;
- (ii) $\sum_{n=0}^{\infty} \alpha_n = \infty$, $\inf_{n} c_n > 0$;
- (iii) $\sum_{n=0}^{\infty} \|e_n\| < \infty$, or $\lim_{n\to\infty} \|e_n\|/\alpha_n = 0$.
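To make the scheme concrete, here is a hedged Python sketch of the CPPA for the linear monotone operator $A(x) = x$ on the real line, so that $J_c(x) = x/(1+c)$ and the zero set is $\{0\}$. The parameter choices $\alpha_n = 1/(n+2)$, $c_n \equiv 1$, $e_n \equiv 0$ are ours and are chosen only to satisfy conditions of the type stated above:

```python
# CPPA: x_{n+1} = alpha_n * u + (1 - alpha_n) * J_{c_n}(x_n + e_n)
# for A(x) = x, whose resolvent is J_c(x) = x / (1 + c).

def cppa(u, x0, iterations=2000):
    x = x0
    for n in range(iterations):
        alpha = 1.0 / (n + 2)   # alpha_n -> 0 and sum alpha_n = infinity
        c = 1.0                 # c_n bounded below away from zero
        jx = x / (1.0 + c)      # resolvent step with e_n = 0
        x = alpha * u + (1 - alpha) * jx
    return x

print(abs(cppa(u=5.0, x0=3.0)) < 0.02)   # iterates approach 0, the zero of A
```

The anchor term $\alpha_n u$ vanishes, while the resolvent step contracts toward the zero set; the iterates track a slowly moving quasi-fixed point of order $\alpha_n$, which is why the residual above is small but nonzero after finitely many steps.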
Let us now turn our attention to the CPPA under criterion (II). In this situation, Ceng, Wu and Yao [14] obtained the norm convergence under the following conditions:
- (i) $\lim_{n\to\infty} \alpha_n = 0$;
- (ii) $\sum_{n=0}^{\infty} \alpha_n = \infty$, $\sum_{n=0}^{\infty} \eta_n^2 < \infty$;
- (iii) $\lim_{n\to\infty} c_n = \infty$ with $\|e_n\| \le \eta_n \|x_{n+1} - x_n\|$.
In the hypotheses above, the sequence $(c_n)$ is assumed to tend to infinity, so it is natural to ask whether norm convergence is still guaranteed for bounded $(c_n)$, especially for a constant sequence. In the present paper, we answer this question affirmatively and relax the condition on $(c_n)$ to a more general one:
$$\liminf_{n\to\infty} c_n > 0;$$
that is, we only need to assume that the sequence $(c_n)$ is bounded below away from zero. The paper is organized as follows. In Section 2, we prove two lemmas that are useful for establishing the boundedness of the iteration. In Section 3, we prove the norm convergence of the CPPA under two different conditions, thereby extending the corresponding result of [14].
2 Some lemmas
We denote by '→' strong convergence and by '⇀' weak convergence. An operator $A: \mathcal{H} \to 2^{\mathcal{H}}$ is called monotone if
$$\langle x - y, u - v \rangle \ge 0$$
for any $u \in A(x)$, $v \in A(y)$; it is maximal monotone if its graph
$$G(A) = \{(x, u) \in \mathcal{H} \times \mathcal{H} : u \in A(x)\}$$
is not properly contained in the graph of any other monotone operator.
Let $C$ be a nonempty, closed and convex subset of $\mathcal{H}$. We use $P_C$ to denote the projection from $\mathcal{H}$ onto $C$; namely, for $x \in \mathcal{H}$, $P_C x$ is the unique point in $C$ with the property
$$\|x - P_C x\| = \min_{y \in C} \|x - y\|.$$
It is well known that $P_C x$ is characterized by
$$\langle x - P_C x, y - P_C x \rangle \le 0, \quad \forall y \in C. \quad (3)$$
A mapping $T: \mathcal{H} \to \mathcal{H}$ is called firmly nonexpansive if
$$\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle$$
for all $x, y \in \mathcal{H}$. Here and hereafter, we denote by
$$J_c = (I + cA)^{-1}$$
the resolvent of $A$, where $c > 0$ and $I$ is the identity operator. The zero set of $A$ is denoted by $S = A^{-1}(0)$. The resolvent operator has the following properties (see [15]).
Lemma 1 Let $A$ be a maximal monotone operator. Then
- (i) the resolvent $J_c$ is defined on the whole space $\mathcal{H}$;
- (ii) $J_c$ is single-valued and firmly nonexpansive;
- (iii) $S = \mathrm{Fix}(J_c)$, where $\mathrm{Fix}(J_c)$ denotes the fixed point set of $J_c$;
- (iv) the graph of $A$ is weak-to-strong closed in $\mathcal{H} \times \mathcal{H}$.
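Firm nonexpansiveness of a concrete resolvent is easy to check numerically. Below is a sketch of our own construction, using the soft-thresholding resolvent of $A = \partial|\cdot|$ on the real line (where the inner product is the ordinary product):

```python
import random

def resolvent(x, c=1.0):
    """J_c for A = subdifferential of |.| (soft-thresholding)."""
    return x - c if x > c else x + c if x < -c else 0.0

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    jx, jy = resolvent(x), resolvent(y)
    # firm nonexpansiveness: |Jx - Jy|^2 <= <Jx - Jy, x - y>
    assert (jx - jy) ** 2 <= (jx - jy) * (x - y) + 1e-12
print("firm nonexpansiveness verified on 1000 random pairs")
```

The check passes because soft-thresholding moves any two points no further apart than their arguments and never reverses their order, which in one dimension is exactly the firmly nonexpansive inequality.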
Since $J_c$ is firmly nonexpansive, this implies that
$$\|J_c x - z\|^2 + \|x - J_c x\|^2 \le \|x - z\|^2 \quad (4)$$
for all $x \in \mathcal{H}$ and all $z \in S$. In what follows, we present two lemmas that are useful for proving the boundedness of the iterative sequence.
Lemma 2 Given , let be a nonnegative real sequence satisfying
where and are real sequences. Then is bounded; more precisely,
Proof
We first show the following estimates:
For , we have
Assume . Since , we have
We thus verify inequality (5) by induction. Hence,
where the last inequality follows from the basic inequality $1 + t \le e^t$, valid for all $t \in \mathbb{R}$. □
Lemma 3 Given , let be a nonnegative real sequence satisfying
where and are real sequences. If , then is bounded; more precisely, .
Proof Let . Then . It follows that
Since , we have
which implies that
By induction, we obtain the desired result. □
We end this section with two more useful lemmas. The first is due to Maingé [16] and the second to Xu [8].
Lemma 4 Let $(\Gamma_n)$ be a real sequence that does not decrease at infinity, in the sense that there exists a subsequence $(\Gamma_{n_j})$ so that
$$\Gamma_{n_j} < \Gamma_{n_j + 1} \quad \text{for all } j \ge 0.$$
For every $n \ge n_0$, define an integer sequence $(\tau(n))$ as
$$\tau(n) = \max\{ k \le n : \Gamma_k < \Gamma_{k+1} \}.$$
Then $\tau(n) \to \infty$ as $n \to \infty$ and, for all $n \ge n_0$,
$$\max(\Gamma_{\tau(n)}, \Gamma_n) \le \Gamma_{\tau(n)+1}. \quad (6)$$
Lemma 5 Let $(s_n)$, $(\alpha_n)$, $(b_n)$ and $(c_n)$ be real sequences such that $s_n \ge 0$, $\alpha_n \in (0, 1)$, and
$$s_{n+1} \le (1 - \alpha_n) s_n + \alpha_n b_n + c_n.$$
If $\sum_{n=0}^{\infty} \alpha_n = \infty$, $\limsup_{n\to\infty} b_n \le 0$, and $\sum_{n=0}^{\infty} c_n < \infty$, then $\lim_{n\to\infty} s_n = 0$.
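Xu's lemma can be illustrated numerically. In the sketch below, all the concrete sequences (a divergent-series step size, a vanishing perturbation, and a summable error) are our own choices, and the recursion is run with equality rather than inequality:

```python
# Illustration of Xu's lemma: s_{n+1} <= (1 - a_n) s_n + a_n b_n + c_n
# with sum a_n = infinity, limsup b_n <= 0 and sum c_n < infinity
# forces s_n -> 0.

def run(s0=10.0, steps=100000):
    s = s0
    for n in range(1, steps + 1):
        a = 1.0 / (n + 1)     # divergent series: sum a_n = infinity
        b = 1.0 / n           # tends to 0, so limsup b_n <= 0
        c = 1.0 / n ** 2      # summable series
        s = (1 - a) * s + a * b + c
    return s

print(run() < 1e-3)   # the sequence has essentially vanished
```

Here the homogeneous part decays like the product of the $(1 - a_n)$ factors, i.e. like $1/n$, while the forcing terms contribute at most on the order of $(\log n)/n$, so the printed check succeeds.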
3 Convergence analysis
In what follows, we assume that $A$ is a maximal monotone mapping whose zero set $S$ is nonempty. To establish convergence, we need the following lemma, which was in fact proved in [2]. We present here a different proof, based mainly on the properties of firmly nonexpansive mappings.
Lemma 6 Let , and . If , then
Proof Since , it follows from (4) that
By using inequality , we have
Substituting this into (8) and noting , we see that
from which it follows that
Consequently, the desired inequality (7) follows from the fact . □
We are now ready to prove our main results.
Theorem 1 For any $u, x_0 \in \mathcal{H}$, the sequence $(x_n)$ generated by
$$x_{n+1} = \alpha_n u + (1 - \alpha_n) J_{c_n}(x_n + e_n) \quad (9)$$
converges strongly to $P_S(u)$, provided that
- (i) $\lim_{n\to\infty} \alpha_n = 0$;
- (ii) $\sum_{n=0}^{\infty} \alpha_n = \infty$, $\liminf_{n\to\infty} c_n > 0$;
- (iii) $\|e_n\| \le \eta_n \|x_{n+1} - x_n\|$, $\sum_{n=0}^{\infty} \eta_n^2 < \infty$.
Proof Let . By our hypothesis, we may assume without loss of generality that . Then by Lemma 6, we have
where satisfying . It then follows from (9) that
which together with (10) yields
Applying Lemma 2 to the last inequality, we conclude that is bounded.
It follows from the subdifferential inequality that
Combining this with (10) yields
where is a sufficiently large number. Since , we assume that
and define . Setting , we rewrite (11) as
It is obvious that .
We next consider two possible cases on the sequence .
Case 1. is eventually decreasing (i.e., there exists such that is decreasing for ). In this case, must be convergent, and from (12) it follows that
from which we have . Extract a subsequence from so that converges weakly to and
By noting the fact that , this implies
and . Hence, the weak-to-strong closedness of implies , i.e., . Consequently, we have
where the inequality follows from (3). Again it follows from (12) that
By using Lemma 5, we conclude that .
Case 2. is not eventually decreasing. Hence, we can find a subsequence so that for all . In this case, we may define an integer sequence as in Lemma 4. Since for all , it follows again from (12) that
so that as . Analogously,
On the other hand, we deduce from (9) that
which together with (13) gives
Noting and dividing by in (12), we arrive at
for all , which together with (14) yields
In view of (6), we have
Since and implies , this together with the fact immediately yields . □
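As a numerical illustration of a convergence result of this type, consider the following toy instance of our own (not from the paper): let $A$ be the normal cone operator of $C = [1, 3]$ in the real line, so that $J_c = P_C$ for every $c > 0$ and $S = C$; with $u = 10$ the iterates should converge to $P_S(u) = 3$. Zero errors trivially satisfy the accuracy criterion:

```python
def project(x, lo=1.0, hi=3.0):
    """Resolvent of the normal cone operator of C = [lo, hi] is P_C."""
    return min(max(x, lo), hi)

def cppa(u=10.0, x0=0.0, iterations=5000):
    x = x0
    for n in range(iterations):
        alpha = 1.0 / (n + 2)   # alpha_n -> 0 and sum alpha_n = infinity
        x = alpha * u + (1 - alpha) * project(x)   # exact resolvent step
    return x

print(round(cppa(), 2))   # converges to P_S(u) = 3
```

After the first step the iterate stays above 3, so the recursion reduces to $x_{n+1} = 3 + 7\alpha_n$, which converges to the projection of the anchor $u$ onto the zero set, as the theorem predicts.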
For criterion (I), Boikanyo and Morosanu [10] introduced a new condition:
$$\lim_{n\to\infty} \|e_n\|/\alpha_n = 0$$
to ensure the convergence of the CPPA. In the following theorem, we shall present a similar condition under the accuracy criterion (II).
Theorem 2 For any $u, x_0 \in \mathcal{H}$, the sequence $(x_n)$ generated by
$$x_{n+1} = \alpha_n u + (1 - \alpha_n) J_{c_n}(x_n + e_n)$$
converges strongly to $P_S(u)$, provided that
- (i) $\lim_{n\to\infty} \alpha_n = 0$;
- (ii) $\sum_{n=0}^{\infty} \alpha_n = \infty$, $\liminf_{n\to\infty} c_n > 0$;
- (iii) $\|e_n\| \le \eta_n \|x_{n+1} - x_n\|$, $\lim_{n\to\infty} \eta_n^2/\alpha_n = 0$.
Proof Let . Similarly, we have
where satisfying , so we assume without loss of generality that . Applying Lemma 3, we conclude that is bounded.
From inequality (11), we also obtain
where we define .
To show , we consider two possible cases for .
Case 1. is eventually decreasing (i.e., there exists such that is decreasing for ). In this case, must be convergent, and from (12) it follows that
where is a sufficiently large number. As in the previous theorem,
Rearranging terms in (17) yields that
We note that by our hypothesis goes to zero, and thus apply Lemma 5 to the previous inequality to conclude that .
Case 2. is not eventually decreasing. In this case, we may define an integer sequence as in Lemma 4. Since for all , it follows again from (17) that
so that , and furthermore . Analogously,
It follows from (17) that for all
By combining the last two inequalities, we have
from which we arrive at
Consequently, follows from (6) immediately. □
References
Rockafellar RT: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14: 877–898. 10.1137/0314056
Han D, He BS: A new accuracy criterion for approximate proximal point algorithms. J. Math. Anal. Appl. 2001, 263: 343–354. 10.1006/jmaa.2001.7535
Güler O: On the convergence of the proximal point algorithm for convex optimization. SIAM J. Control Optim. 1991, 29: 403–419. 10.1137/0329022
Bauschke HH, Combettes PL: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26: 248–264. 10.1287/moor.26.2.248.10558
Kamimura S, Takahashi W: Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 2000, 106: 226–240. 10.1006/jath.2000.3493
Solodov MV, Svaiter BF: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 2000, 87: 189–202.
Wang F: A note on the regularized proximal point algorithm. J. Glob. Optim. 2011, 50: 531–535. 10.1007/s10898-010-9611-z
Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66: 240–256. 10.1112/S0024610702003332
Marino G, Xu HK: Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 2004, 3: 791–808.
Boikanyo OA, Morosanu G: A proximal point algorithm converging strongly for general errors. Optim. Lett. 2010, 4: 635–641. 10.1007/s11590-010-0176-z
Boikanyo OA, Morosanu G: Four parameter proximal point algorithms. Nonlinear Anal. 2011, 74: 544–555. 10.1016/j.na.2010.09.008
Xu HK: A regularization method for the proximal point algorithm. J. Glob. Optim. 2006, 36: 115–125. 10.1007/s10898-006-9002-7
Wang F, Cui H: On the contraction-proximal point algorithms with multi-parameters. J. Glob. Optim. 2012, 54: 485–491. 10.1007/s10898-011-9772-4
Ceng LC, Wu SY, Yao JC: New accuracy criteria for modified approximate proximal point algorithms in Hilbert space. Taiwan. J. Math. 2008, 12: 1691–1705.
Bauschke HH, Combettes PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York; 2011.
Maingé PE: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16: 899–912. 10.1007/s11228-008-0102-z
Acknowledgements
The authors thank the referees for their useful comments and suggestions. This work is supported by the National Natural Science Foundation of China, Tianyuan Foundation (11226227), and the Basic Science and Technological Frontier Project of Henan (122300410268, 122300410375).
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
Both authors contributed equally and significantly to writing this manuscript. Both authors read and approved the manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Tian, C., Wang, F. The contraction-proximal point algorithm with square-summable errors. Fixed Point Theory Appl 2013, 93 (2013). https://doi.org/10.1186/1687-1812-2013-93