Abstract
The purpose of this paper is to present accelerations of the Mann and CQ algorithms. We first apply the Picard algorithm to the smooth convex minimization problem and point out that the Picard algorithm is then the steepest descent method for solving the minimization problem. Next, we provide an accelerated Picard algorithm that uses the ideas of conjugate gradient methods, which accelerate the steepest descent method. Then, based on the accelerated Picard algorithm, we present accelerations of the Mann and CQ algorithms. Under certain assumptions, we show that the new algorithms converge to a fixed point of a nonexpansive mapping. Finally, we demonstrate the efficiency of the accelerated Mann algorithm by comparing it numerically with the Mann algorithm. A numerical example is also provided to illustrate that the acceleration of the CQ algorithm is ineffective.
1 Introduction
Let H be a real Hilbert space with inner product \(\langle\cdot,\cdot \rangle\) and induced norm \(\|\cdot\|\). Suppose that \(C \subset H\) is nonempty, closed and convex. A mapping \(T:C\rightarrow C \) is said to be nonexpansive if
\(\|Tx-Ty\|\leq\|x-y\|\)

for all \(x, y \in C\). The set of fixed points of T is defined by \(\operatorname{Fix}(T):=\{x\in C : Tx=x\}\).
In this paper, we consider the following fixed point problem.
Problem 1.1
Suppose that \(T: C\rightarrow C\) is nonexpansive with \(\operatorname{Fix}(T)\neq \emptyset\). Then find \(x^{*}\in C\) such that \(x^{*}\in\operatorname{Fix}(T)\).
The fixed point problems for nonexpansive mappings [1–4] capture various applications in diverse areas, such as convex feasibility problems, convex optimization problems, problems of finding the zeros of monotone operators, and monotone variational inequalities (see [1, 5] and the references therein). The Picard algorithm [6], the Mann algorithm [7, 8], and the CQ method [9] are useful algorithms for solving such fixed point problems. Meanwhile, to guarantee that practical systems and networks (see, e.g., [10–13]) are stable and reliable, the fixed point has to be found quickly. Recently, Sakurai and Iiduka [14] accelerated the Halpern algorithm and obtained a fast algorithm with strong convergence. Inspired by their work, we focus on the Mann and CQ algorithms and present new algorithms that accelerate the approximation of a fixed point of a nonexpansive mapping.
We first apply the Picard algorithm to the smooth convex minimization problem and illustrate that the Picard algorithm is the steepest descent method [15] for solving the minimization problem. Since conjugate gradient methods [15] have been widely seen as an efficient accelerated version of most gradient methods, we introduce an accelerated Picard algorithm by combining the conjugate gradient methods and the Picard algorithm. Finally, based on the accelerated Picard algorithm, we present accelerations of the Mann and CQ algorithms.
In this paper, we propose two accelerated algorithms for finding a fixed point of a nonexpansive mapping and prove their convergence. Finally, numerical examples are presented to demonstrate the effectiveness and fast convergence of the accelerated Mann algorithm and the ineffectiveness of the accelerated CQ algorithm.
2 Mathematical preliminaries
2.1 Picard algorithm and our algorithm
The Picard algorithm generates the sequence \(\{x_{n}\}_{n=0}^{\infty}\) as follows: given \(x_{0}\in H\),

\(x_{n+1}:=T(x_{n}),\quad n\geq0.\) (1)
The Picard algorithm (1) converges to a fixed point of the mapping T if \(T:C\rightarrow C\) is contractive (see, e.g., [1]).
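As a standalone illustration (ours, not taken from the paper), the Picard iteration is only a few lines of code; here \(T=\cos\) on \(\mathbb{R}\), whose Picard iterates are known to converge to its unique fixed point (≈0.739085) from any starting point:

```python
import math

def picard(T, x0, n_iter=200):
    """Picard iteration: x_{n+1} = T(x_n)."""
    x = x0
    for _ in range(n_iter):
        x = T(x)
    return x

# T = cos has a unique fixed point on the real line, and the
# Picard iterates converge to it from any starting point.
x_star = picard(math.cos, 1.0)
```

The returned `x_star` satisfies \(x^{*}=\cos x^{*}\) to high accuracy.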
When \(\operatorname{Fix}(T)\) is the set of all minimizers of a convex, continuously Fréchet differentiable functional f over H, algorithm (1) is the steepest descent method [15] for solving the minimization problem. Suppose that the gradient of f, denoted by ∇f, is Lipschitz continuous with a constant \(L >0\), and define \(T^{f}:H\rightarrow H\) by

\(T^{f}:=I-\lambda\nabla f,\) (2)
where \(\lambda\in(0,2/L)\) and \(I:H\rightarrow H\) stands for the identity mapping. Accordingly, \(T^{f}\) satisfies the contraction condition (see, e.g., [10]) and \(\operatorname{Fix}(T^{f})=\operatorname*{argmin}_{x\in H}f(x)\).
Therefore, algorithm (1) with \(T := T^{f}\) can be expressed as follows:

\(x_{n+1}=x_{n}-\lambda\nabla f(x_{n}),\quad n\geq0.\) (3)
The conjugate gradient methods [15] are popular acceleration methods for the steepest descent method. The conjugate gradient direction of f at \(x_{n}\) (\(n\geq0\)) is \(d^{f,CGD}_{n+1}:= -\nabla f(x_{n})+ \beta_{n}d^{f,CGD}_{n}\), where \(d^{f,CGD}_{0}:= -\nabla f(x_{0})\) and \(\{\beta_{n}\}_{n=0}^{\infty}\subset (0,\infty)\), which, together with (2), implies that

\(d^{f,CGD}_{n+1}=\frac{1}{\lambda}\bigl(T^{f}(x_{n})-x_{n}\bigr)+\beta_{n}d^{f,CGD}_{n}.\) (4)
By replacing \(d^{f}_{n+1}:= -\nabla f (x_{n})\) in algorithm (3) with \(d^{f,CGD}_{n+1}\) defined by (4), we get the accelerated Picard algorithm as follows:

\(x_{n+1}:=x_{n}+\lambda d^{f,CGD}_{n+1},\quad n\geq0.\) (5)
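To make the acceleration concrete, here is a small numerical sketch (ours, not from the paper) of the steepest descent iteration \(x_{n+1}=x_{n}-\lambda\nabla f(x_{n})\) from algorithm (3) and its conjugate-gradient-style variant \(d_{n+1}=-\nabla f(x_{n})+\beta_{n}d_{n}\), \(x_{n+1}=x_{n}+\lambda d_{n+1}\) on the quadratic \(f(x)=\frac{1}{2}x^{\top}Ax-b^{\top}x\); the matrix, step size \(\lambda<2/L\), and summable weights \(\beta_{n}\) are choices made for this demo:

```python
import numpy as np

A = np.diag([1.0, 10.0])      # grad f is L-Lipschitz with L = 10
b = np.array([1.0, 10.0])     # minimizer is x* = A^{-1} b = (1, 1)
grad = lambda x: A @ x - b

lam = 0.1                     # lambda in (0, 2/L)
x_sd = np.zeros(2)            # steepest descent iterate
x_cg = np.zeros(2)            # accelerated iterate
d = -grad(x_cg)               # d_0 := -grad f(x_0)

for n in range(300):
    # steepest descent: x_{n+1} = x_n - lambda * grad f(x_n)
    x_sd = x_sd - lam * grad(x_sd)
    # accelerated direction: d_{n+1} = -grad f(x_n) + beta_n d_n
    beta = 1.0 / (n + 1) ** 2  # summable beta_n (demo choice)
    d = -grad(x_cg) + beta * d
    x_cg = x_cg + lam * d

x_star = np.array([1.0, 1.0])
```

Both iterations approach the minimizer; the summable \(\beta_{n}\) makes the extra momentum term a vanishing perturbation of the descent step.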
The convergence condition of the Picard algorithm is restrictive, and the algorithm does not converge for general nonexpansive mappings (see, e.g., [16]). So, in 1953 Mann [8] introduced the Mann algorithm

\(x_{n+1}:=\alpha_{n}x_{n}+(1-\alpha_{n})T(x_{n}),\quad n\geq0,\) (6)

where \(\{\alpha_{n}\}_{n=0}^{\infty}\subset(0,1)\),
and showed that the sequence generated by it converges to a fixed point of a nonexpansive mapping. In this paper we combine (5)-(6) and the CQ algorithm to present two novel algorithms.
2.2 Some lemmas
We will use the following notation:
(1) \(x_{n}\rightharpoonup x\) means that \(\{x_{n}\}\) converges weakly to x and \(x_{n}\rightarrow x\) means that \(\{x_{n}\}\) converges strongly to x.
(2) \(\omega_{w}(x_{n}):=\{x:\exists x_{n_{j}}\rightharpoonup x\}\) denotes the weak ω-limit set of \(\{x_{n}\}\).
Lemma 2.1
Let H be a real Hilbert space. There hold the following identities:
(i) \(\|x-y\|^{2}=\|x\|^{2}-\|y\|^{2}-2\langle x-y, y\rangle\), \(\forall x, y \in H\),
(ii) \(\|tx+(1-t)y\|^{2}=t\|x\|^{2}+(1-t)\|y\|^{2}-t(1-t)\|x-y\|^{2}\), \(t\in[0, 1]\), \(\forall x, y \in H\).
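The two identities are easy to sanity-check numerically; the following snippet (illustrative only) verifies both for random vectors in \(\mathbb{R}^{3}\):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
t = 0.3

# (i)  ||x - y||^2 = ||x||^2 - ||y||^2 - 2 <x - y, y>
lhs_i = (x - y) @ (x - y)
rhs_i = x @ x - y @ y - 2 * (x - y) @ y

# (ii) ||t x + (1-t) y||^2
#        = t ||x||^2 + (1-t) ||y||^2 - t (1-t) ||x - y||^2
w = t * x + (1 - t) * y
lhs_ii = w @ w
rhs_ii = t * (x @ x) + (1 - t) * (y @ y) - t * (1 - t) * ((x - y) @ (x - y))
```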
Lemma 2.2
Let K be a closed convex subset of a real Hilbert space H, and let \(P_{K}\) be the (metric or nearest point) projection from H onto K (i.e., for \(x\in H\), \(P_{K}x\) is the only point in K such that \(\|x-P_{K}x\|=\inf\{\|x-z\|: z \in K\}\)). Given \(x \in H\) and \(z\in K\). Then \(z=P_{K}x\) if and only if there holds the relation

\(\langle x-z, z-y\rangle\geq0\quad\text{for all } y \in K.\)
Lemma 2.3
(See [17])
Let K be a closed convex subset of H. Let \(\{x_{n}\}\) be a sequence in H and \(u\in H\). Let \(q=P_{K}u\). Suppose that \(\{x_{n}\}\) is such that \(\omega_{w}(x_{n})\subset K\) and satisfies the condition

\(\|x_{n}-u\|\leq\|u-q\|\quad\text{for all } n.\)
Then \(x_{n}\rightarrow q\).
Lemma 2.4
(See [2])
Let C be a closed convex subset of a real Hilbert space H, and let \(T : C \rightarrow C\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq\emptyset\). If a sequence \(\{x_{n}\}\) in C is such that \(x_{n}\rightharpoonup z\) and \(x_{n} - T x_{n} \rightarrow0\), then \(z = T z\).
Lemma 2.5
(See [18])
Assume that \(\{a_{n}\}\) is a sequence of nonnegative real numbers satisfying the property

\(a_{n+1}\leq a_{n}+u_{n},\quad n\geq1,\)
where \(\{u_{n}\}\) is a sequence of nonnegative real numbers such that \(\sum_{n=1}^{\infty}u_{n}<\infty\). Then \(\lim_{n\rightarrow\infty} a_{n}\) exists.
3 The accelerated Mann algorithm
In this section, we present the accelerated Mann algorithm and give its convergence.
Algorithm 3.1
Choose \(\mu\in(0,1]\), \(\lambda>0\), and \(x_{0}\in{H}\) arbitrarily, and set \(\{\alpha_{n}\}_{n=0}^{\infty}\subset(0,1)\), \(\{\beta_{n}\}_{n=0}^{\infty}\subset[0,\infty)\). Set \(d_{0}:=(T(x_{0})-x_{0})/\lambda\). Compute \(d_{n+1}\) and \(x_{n+1}\) as follows:

\(d_{n+1}:=\frac{1}{\lambda}\bigl(T(x_{n})-x_{n}\bigr)+\beta_{n}d_{n}, \qquad y_{n}:=x_{n}+\lambda d_{n+1}, \qquad x_{n+1}:=\mu\alpha_{n}x_{n}+(1-\mu\alpha_{n})y_{n}.\) (7)
We can check that Algorithm 3.1 coincides with the Mann algorithm (6) when \(\beta_{n}\equiv0\) and \(\mu:= 1\).
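As an illustrative sketch (ours, not the paper's code), the update \(d_{n+1}=(T(x_{n})-x_{n})/\lambda+\beta_{n}d_{n}\), \(y_{n}=x_{n}+\lambda d_{n+1}\), \(x_{n+1}=\mu\alpha_{n}x_{n}+(1-\mu\alpha_{n})y_{n}\) can be run on a planar rotation, a nonexpansive mapping with \(\operatorname{Fix}(T)=\{0\}\) for which the pure Picard iteration never converges. The mapping and the parameter choices (constant \(\alpha_{n}\), summable \(\beta_{n}\), consistent with (C1)-(C2)) are assumptions made for this demo:

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x            # nonexpansive, Fix(T) = {0}

# demo parameters satisfying (C1)-(C2): constant alpha_n, summable beta_n
mu, lam = 1.0, 1.0
alpha = lambda n: 0.5
beta = lambda n: 1.0 / (n + 1) ** 2

x = np.array([1.0, 0.0])
d = (T(x) - x) / lam           # d_0 := (T(x_0) - x_0)/lambda
for n in range(60):
    d = (T(x) - x) / lam + beta(n) * d                 # d_{n+1}
    y = x + lam * d                                    # y_n
    x = mu * alpha(n) * x + (1 - mu * alpha(n)) * y    # x_{n+1}
```

The averaging drives the iterates to the fixed point 0, while the Picard iterates \(x\mapsto Rx\) would circle forever.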
In this section we make the following assumptions.
Assumption 3.1
The sequences \(\{\alpha_{n}\}_{n=0}^{\infty}\) and \(\{\beta_{n}\}_{n=0}^{\infty}\) satisfy
-
(C1)
\(\sum_{n=0}^{\infty}\mu\alpha_{n}(1-\mu\alpha_{n})=\infty\),
-
(C2)
\(\sum_{n=0}^{\infty}\beta_{n}<\infty\).
Moreover, \(\{x_{n}\}_{n=0}^{\infty}\) satisfies
-
(C3)
\(\{T(x_{n})-x_{n}\}_{n=0}^{\infty}\) is bounded.
Before analyzing the convergence of Algorithm 3.1, we first establish two lemmas.
Lemma 3.1
Suppose that \(T: H\rightarrow H\) is nonexpansive with \(\operatorname{Fix}(T)\neq \emptyset\) and that Assumption 3.1 holds. Then \(\{d_{n}\}_{n=0}^{\infty}\) and \(\{\|x_{n}-p\|\}_{n=0}^{\infty}\) are bounded for any \(p\in \operatorname{Fix}(T)\). Furthermore, \(\lim_{n\rightarrow\infty} \|x_{n}-p\|\) exists.
Proof
We have from (C2) that \(\lim_{n\rightarrow\infty} \beta_{n}=0\). Accordingly, there exists \(n_{0}\in\mathbb{N}\) such that \(\beta_{n}\leq 1/2 \) for all \(n\geq n_{0}\). Define \(M_{1}:=\max\{\max_{1\leq k\leq n_{0}}\| d_{k}\|, (2/\lambda)\sup_{n\in\mathbb{N}}\|T(x_{n})-x_{n}\|\}\). Then (C3) implies that \(M_{1}< \infty\). Assume that \(\|d_{n}\|\leq M_{1}\) for some \(n\geq n_{0}\). The triangle inequality ensures that

\(\|d_{n+1}\|\leq\frac{1}{\lambda}\|T(x_{n})-x_{n}\|+\beta_{n}\|d_{n}\| \leq\frac{M_{1}}{2}+\frac{M_{1}}{2}=M_{1},\) (8)
which means that \(\|d_{n}\|\leq M_{1}\) for all \(n\geq0\), i.e., \(\{ d_{n}\}_{n=0}^{\infty}\) is bounded.
The definition of \(\{y_{n}\}_{n=0}^{\infty}\) implies that

\(y_{n}=x_{n}+\lambda d_{n+1}=T(x_{n})+\lambda\beta_{n}d_{n}.\) (9)
The nonexpansivity of T and (9) imply that, for any \(p\in \operatorname{Fix}(T)\) and for all \(n\geq n_{0}\),

\(\|y_{n}-p\|\leq\|x_{n}-p\|+\lambda\beta_{n}\|d_{n}\| \leq\|x_{n}-p\|+\lambda\beta_{n}M_{1}.\) (10)
Therefore, we find

\(\|x_{n+1}-p\|\leq\mu\alpha_{n}\|x_{n}-p\|+(1-\mu\alpha_{n})\|y_{n}-p\| \leq\|x_{n}-p\|+(1-\mu\alpha_{n})\lambda\beta_{n}M_{1},\) (11)
which implies

\(\|x_{n}-p\|\leq\|x_{n_{0}}-p\|+\lambda M_{1}\sum_{k=n_{0}}^{\infty}\beta_{k}<\infty.\)
So, we get that \(\{x_{n}\}_{n=0}^{\infty}\) is bounded. From (10) it follows that \(\{y_{n}\}_{n=0}^{\infty}\) is bounded.
In addition, using Lemma 2.5, (C2), and (11), we obtain that \(\lim_{n\rightarrow\infty} \|x_{n}-p\|\) exists. □
Lemma 3.2
Suppose that \(T: H\rightarrow H\) is nonexpansive with \(\operatorname{Fix}(T)\neq \emptyset\) and that Assumption 3.1 holds. Then \(\lim_{n\rightarrow\infty}\|x_{n}-T(x_{n})\|=0\) and \(\omega_{w}(x_{n})\subset\operatorname{Fix}(T)\).
Proof
By (7)-(9) and the nonexpansivity of T, we have
which, with (C2) and Lemma 2.5, yields that the limit of \(\|x_{n}-T(x_{n})\| \) exists.
On the other hand, for any \(p\in \operatorname{Fix}(T)\) and for all \(n\geq n_{0}\), using the triangle inequality, the Cauchy-Schwarz inequality, and Lemma 2.1(ii), we obtain
We have from Lemma 3.1 that \(M_{2}:=\sup_{k\geq0}(1-\mu\alpha_{k})\lambda \{2M_{1}\|x_{k}-p\|+(1-\mu\alpha_{k})\lambda\beta_{k} M_{1}^{2}\}\) is bounded. Therefore, using (C2), we obtain
which, with (C1), implies that

\(\liminf_{n\rightarrow\infty}\|T(x_{n})-x_{n}\|=0.\)
Due to the existence of the limit of \(\|T(x_{n})-x_{n}\|\), we have

\(\lim_{n\rightarrow\infty}\|T(x_{n})-x_{n}\|=0,\)
which with Lemma 2.4 implies that \(\omega_{w}(x_{n})\subset \operatorname{Fix}(T)\). □
Theorem 3.1
Suppose that \(T: H\rightarrow H\) is nonexpansive with \(\operatorname{Fix}(T)\neq \emptyset\) and that Assumption 3.1 holds. Then the sequence \(\{x_{n}\}\) generated by Algorithm 3.1 weakly converges to a fixed point of T.
Proof
To see that \(\{x_{n}\}\) is actually weakly convergent, we need to show that \(\omega_{w}(x_{n})\) consists of exactly one point. Take \(p, q\in \omega_{w}(x_{n})\) and let \(\{x_{n_{i}}\}\) and \(\{x_{m_{j}}\}\) be two subsequences of \(\{x_{n}\}\) such that \(x_{n_{i}}\rightharpoonup p\) and \(x_{m_{j}}\rightharpoonup q\), respectively. Using Lemma 2.7 of [19] and Lemma 3.1, we have \(p=q\). Hence, the proof is complete. □
4 The accelerated CQ algorithm
In general, the Mann algorithm (6) has only weak convergence (see [20] for an example). However, strong convergence is often much more desirable than weak convergence in many problems that arise in infinite dimensional spaces (see [21] and the references therein). In 2003, Nakajo and Takahashi [9] introduced the following modification of the Mann algorithm:

\(x_{0}\in C\) chosen arbitrarily, \(y_{n}=\alpha_{n}x_{n}+(1-\alpha_{n})T(x_{n})\), \(C_{n}=\{z\in C:\|y_{n}-z\|\leq\|x_{n}-z\|\}\), \(Q_{n}=\{z\in C:\langle x_{n}-z, x_{0}-x_{n}\rangle\geq0\}\), \(x_{n+1}=P_{C_{n}\cap Q_{n}}(x_{0}),\) (12)
where C is a nonempty closed convex subset of a Hilbert space H and \(T :C\rightarrow C\) is a nonexpansive mapping, and \(P_{K}\) denotes the metric projection from H onto a closed convex subset K of H.
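To make the scheme concrete, here is an illustrative sketch (ours, not from the paper) of the CQ iteration in \(\mathbb{R}^{N}\) with \(C=\mathbb{R}^{N}\): each \(C_{n}\) and \(Q_{n}\) is then a halfspace, and the projection onto the intersection of two halfspaces admits an exact case analysis. The toy mapping \(T(x)=-x\), nonexpansive with \(\operatorname{Fix}(T)=\{0\}\), is our choice for the demo:

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Project x onto the halfspace {z : <a, z> <= b}."""
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

def proj_two_halfspaces(x, a1, b1, a2, b2):
    """Exact projection onto {<a1,z> <= b1} ∩ {<a2,z> <= b2}.

    The projection is x itself, the projection onto one halfspace
    (if it lands in the other), or the projection onto the
    intersection of the two boundary hyperplanes (this last branch
    assumes linearly independent normals)."""
    tol = 1e-12
    if a1 @ x <= b1 + tol and a2 @ x <= b2 + tol:
        return x
    p1 = proj_halfspace(x, a1, b1)
    if a2 @ p1 <= b2 + tol:
        return p1
    p2 = proj_halfspace(x, a2, b2)
    if a1 @ p2 <= b1 + tol:
        return p2
    A = np.vstack([a1, a2])
    rhs = np.array([b1, b2])
    lam = np.linalg.solve(A @ A.T, A @ x - rhs)
    return x - A.T @ lam

def cq_algorithm(T, x0, alpha, n_iter=30):
    """CQ iteration with C = R^N, so C_n and Q_n are halfspaces."""
    x = x0.copy()
    for n in range(n_iter):
        y = alpha(n) * x + (1 - alpha(n)) * T(x)
        # C_n: ||y - z|| <= ||x - z||  <=>  2<z, x - y> <= ||x||^2 - ||y||^2
        a1, b1 = 2 * (x - y), x @ x - y @ y
        # Q_n: <x - z, x0 - x> >= 0   <=>  <z, x0 - x> <= <x, x0 - x>
        a2, b2 = x0 - x, x @ (x0 - x)
        x = proj_two_halfspaces(x0, a1, b1, a2, b2)
    return x

# toy run: T(x) = -x, so P_{Fix(T)}(x_0) = 0
x_cq = cq_algorithm(lambda v: -v, np.array([1.0, 2.0]),
                    lambda n: 1.0 / (n + 2), n_iter=10)
```

For this toy mapping the iterates collapse onto \(P_{\operatorname{Fix}(T)}(x_{0})=0\) very rapidly.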
Here, we introduce an acceleration of the CQ algorithm based on Algorithm 3.1 and show its strong convergence.
Theorem 4.1
Let C be a bounded, closed and convex subset of a Hilbert space H and \(T: C\rightarrow C\) be nonexpansive with \(\operatorname{Fix}(T)\neq\emptyset\). Assume that \(\{\alpha_{n}\}_{n=0}^{\infty}\) is a sequence in \((0, a]\) for some \(0< a<1\) and \(\{\beta_{n}\}_{n=0}^{\infty}\subset[0,\infty)\) is such that \(\lim_{n\rightarrow\infty}\beta_{n}=0\). Define a sequence \(\{x_{n}\}_{n=0}^{\infty}\) in C by the following algorithm:

\(x_{0}\in C\) chosen arbitrarily, \(d_{0}=(T(x_{0})-x_{0})/\lambda\), \(d_{n+1}=\frac{1}{\lambda}\bigl(T(x_{n})-x_{n}\bigr)+\beta_{n}d_{n}\), \(z_{n}=\alpha_{n}x_{n}+(1-\alpha_{n})(x_{n}+\lambda d_{n+1})\), \(C_{n}=\{z\in C:\|z_{n}-z\|^{2}\leq\|x_{n}-z\|^{2}+\theta_{n}\}\), \(Q_{n}=\{z\in C:\langle x_{n}-z, x_{0}-x_{n}\rangle\geq0\}\), \(x_{n+1}=P_{C_{n}\cap Q_{n}}(x_{0}),\) (13)

where \(\theta_{n}:=\lambda\beta_{n}M_{4}[\lambda\beta_{n}M_{4}+2M_{3}]\rightarrow0\) \((n\rightarrow\infty)\),
\(M_{3}:=\operatorname{diam} C\) and \(M_{4}:=\max\{\max_{1\leq k\leq n_{0}}\|d_{k}\|, (2/\lambda )M_{3}\}\), and \(n_{0}\) is chosen such that \(\beta_{n}\leq1/2\) for all \(n\geq n_{0}\). Then \(\{x_{n}\}_{n=0}^{\infty}\) converges in norm to \(P_{\operatorname{Fix}(T)}(x_{0})\).
Proof
First observe that \(C_{n}\) is convex. Indeed, the inequality defining \(C_{n}\) can be rewritten as

\(2\langle z, x_{n}-z_{n}\rangle\leq\|x_{n}\|^{2}-\|z_{n}\|^{2}+\theta_{n},\)
which is affine (and hence convex) in z. Next we show that \(\operatorname{Fix}(T)\subset C_{n}\) for all n. Similar to the proof of (8), we get \(\|d_{n}\|\leq M_{4}\). Due to \(x_{n}\in C\), we have, for all \(p\in \operatorname{Fix}(T)\subset C\),
and
which comes from (10). Thus we get
and consequently,

\(\|z_{n}-p\|^{2}\leq\|x_{n}-p\|^{2}+\theta_{n},\)
where \(\theta_{n}=\lambda\beta_{n}M_{4}[\lambda\beta_{n}M_{4}+2M_{3}]\). So \(p\in C_{n}\) for all n. Next we show that

\(\operatorname{Fix}(T)\subset Q_{n}\quad\text{for all } n\geq0.\) (15)
We prove this by induction. For \(n=0\), we have \(\operatorname{Fix}(T)\subset C= Q_{0}\). Assume that \(\operatorname{Fix}(T)\subset Q_{n}\). Since \(x_{n+1}\) is the projection of \(x_{0}\) onto \(C_{n}\cap Q_{n} \), we have, by Lemma 2.2,

\(\langle x_{n+1}-z, x_{0}-x_{n+1}\rangle\geq0\quad\text{for all } z\in C_{n}\cap Q_{n}.\)
As \(\operatorname{Fix}(T)\subset C_{n}\cap Q_{n}\), the last inequality holds, in particular, for all \(z\in \operatorname{Fix}(T)\). This together with the definition of \(Q_{n+1}\) implies that \(\operatorname{Fix}(T)\subset Q_{n+1}\). Hence (15) holds for all \(n\geq0\).
Notice that the definition of \(Q_{n}\) actually implies \(x_{n}=P_{Q_{n}}(x_{0})\). This together with the fact that \(\operatorname{Fix}(T)\subset Q_{n}\) further implies

\(\|x_{n}-x_{0}\|\leq\|p-x_{0}\|\quad\text{for all } p\in \operatorname{Fix}(T) \text{ and } n\geq0.\)
Due to \(q=P_{\operatorname{Fix}(T)}(x_{0})\in \operatorname{Fix}(T)\), we have

\(\|x_{n}-x_{0}\|\leq\|q-x_{0}\|\quad\text{for all } n\geq0.\) (16)
The fact that \(x_{n+1}\in Q_{n}\) asserts that \(\langle x_{n+1}-x_{n}, x_{n}-x_{0}\rangle\geq0\). This together with Lemma 2.1(i) implies

\(\|x_{n+1}-x_{n}\|^{2}\leq\|x_{n+1}-x_{0}\|^{2}-\|x_{n}-x_{0}\|^{2}.\) (17)
This implies that the sequence \(\{\|x_{n}-x_{0}\|\}_{n=0}^{\infty}\) is increasing. Recalling (14), we get that \(\lim_{n\rightarrow\infty }\|x_{n}-x_{0}\|\) exists. It turns out from (17) that

\(\lim_{n\rightarrow\infty}\|x_{n+1}-x_{n}\|=0.\)
By the fact \(x_{n+1}\in C_{n}\) we get
and thus
On the other hand, since \(z_{n}=\alpha_{n}x_{n}+(1-\alpha_{n})T(x_{n})+(1-\alpha _{n})\lambda\beta_{n} d_{n}\) and \(\alpha_{n}\leq a\), we have
where the last inequality comes from (18).
Lemma 2.4 and (19) then guarantee that every weak limit point of \(\{x_{n}\}_{n=0}^{\infty}\) is a fixed point of T. That is, \(\omega _{w}(x_{n})\subset \operatorname{Fix} (T)\). This fact, with inequality (16) and Lemma 2.3, ensures the strong convergence of \(\{x_{n}\}_{n=0}^{\infty}\) to \(q=P_{\operatorname{Fix}(T)}x_{0}\). □
5 Numerical examples and conclusion
In this section, we compare the original algorithms with the accelerated algorithms. The codes were written in Matlab 7.0 and run on a personal computer.
Firstly, we apply the Mann algorithm (6) and Algorithm 3.1 to the following convex feasibility problem (see [1, 14]).
Problem 5.1
(From [14])
Given nonempty, closed convex sets \(C_{i}\subset\mathbb{R}^{N} \) (\(i= 0, 1, \ldots, m\)), find a point \(x^{*}\in C:=\bigcap_{i=0}^{m}C_{i}\),
where one assumes that \(C\neq\emptyset\). Define a mapping \(T:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\) by
where \(P_{i}:=P_{C_{i}} \) (\(i= 0, 1, \ldots, m\)) stands for the metric projection onto \(C_{i}\). Since each \(P_{i} \) (\(i= 0, 1, \ldots, m\)) is nonexpansive, T defined by (20) is also nonexpansive. Moreover, we find that \(\operatorname{Fix}(T)=C\).
Set \(\lambda:=2\), \(\mu:=0.05\), \(\alpha_{n}:=1/(n+1) \) (\(n\geq0\)), and \(\beta_{n}:=1/(n+1) \) in Algorithm 3.1, and \(\alpha_{n}:={\mu}/(n+1)\) in the Mann algorithm (6). In the experiment, we set each \(C_{i}\) (\(i=0,1,\ldots, m\)) as a closed ball with center \(c_{i}\in\mathbb{R}^{N}\) and radius \(r_{i} >0\). Thus, \(P_{i} \) (\(i=0,1,\ldots, m\)) can be computed with

\(P_{i}(x)=c_{i}+\dfrac{x-c_{i}}{\max\{1,\|x-c_{i}\|/r_{i}\}}.\)
We set \(r_{i}:=1 \) (\(i=0,1,\ldots, m\)) and \(c_{0}:=0\), and the centers \(c_{i}\in({-1}/{\sqrt {N}}, {1}/{\sqrt{N}})^{N} \) (\(i=1,\ldots, m\)) were chosen randomly. Set \(e:=(1,1,\ldots,1)\). In Table 1, ‘Iter.’ and ‘Sec.’ denote the number of iterations and the CPU time in seconds, respectively. We used \(\|T(x_{n})-x_{n}\| <\varepsilon=10^{-6}\) as the stopping criterion.
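The experiment above can be sketched as follows (illustrative only: the exact operator T from [14] is not reproduced here, so we assume the common composition \(T:=P_{0}((1/m)\sum_{i=1}^{m}P_{i})\), and we use the summable choice \(\beta_{n}=1/(n+1)^{2}\), consistent with (C2), rather than the \(\beta_{n}=1/(n+1)\) listed above):

```python
import numpy as np

def proj_ball(x, c, r):
    """Metric projection onto the closed ball with center c, radius r."""
    dist = np.linalg.norm(x - c)
    return x if dist <= r else c + (r / dist) * (x - c)

rng = np.random.default_rng(1)
N, m, r = 5, 3, 1.0
c0 = np.zeros(N)
cs = [rng.uniform(-1 / np.sqrt(N), 1 / np.sqrt(N), size=N) for _ in range(m)]

def T(x):
    """Assumed operator: P_0 applied to the mean of P_1, ..., P_m."""
    avg = sum(proj_ball(x, c, r) for c in cs) / m
    return proj_ball(avg, c0, r)

mu, lam = 0.05, 2.0
x_mann = 2 * np.ones(N)
x_acc = 2 * np.ones(N)
d = (T(x_acc) - x_acc) / lam                # d_0
res0 = np.linalg.norm(T(x_mann) - x_mann)   # initial residual

for n in range(500):
    # Mann (6) with alpha_n = mu/(n+1)
    a = mu / (n + 1)
    x_mann = a * x_mann + (1 - a) * T(x_mann)
    # Algorithm 3.1 sketch with alpha_n = 1/(n+1), summable beta_n
    beta = 1.0 / (n + 1) ** 2
    d = (T(x_acc) - x_acc) / lam + beta * d
    y = x_acc + lam * d
    aa = mu / (n + 1)
    x_acc = aa * x_acc + (1 - aa) * y

res_mann = np.linalg.norm(T(x_mann) - x_mann)
res_acc = np.linalg.norm(T(x_acc) - x_acc)
```

Both residuals \(\|T(x_{n})-x_{n}\|\) shrink toward the stopping threshold; iteration counts and timings can then be compared as in Table 1.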
Table 1 illustrates that, with a few exceptions, Algorithm 3.1 significantly reduces the running time and the number of iterations needed to find a fixed point compared with the Mann algorithm. The advantage becomes more pronounced as the parameters N and m grow. The reason for the few exceptions deserves further study.
Next, we apply the CQ algorithm (12) and the accelerated CQ algorithm (13) to the following problem.
Problem 5.2
(From [22])
Let C be the closed unit ball \(S(0,1)=\{x \in\mathbb{R}^{3} : \|x\| \leq1\}\) and let \(T:S(0,1)\rightarrow S(0,1)\) be defined by \(T:(x,y,z)^{T}\mapsto(\frac{1}{\sqrt{3}}\sin(x+z),\frac{1}{\sqrt {3}}\sin(x+z),\frac{1}{\sqrt{3}}(x+y))^{T}\).
He and Yang [22] showed that T is nonexpansive and has at least a fixed point in \(S(0,1)\).
Take \(\alpha_{n}=\frac{1}{n}\) in (12) and (13), and \(\beta_{n}=\frac{1}{66\times n^{3}}\), \(\lambda=1.2\) in (13). We tested four different initial points, and the numerical results are listed in Table 2.
Table 2 shows that the acceleration of the CQ algorithm is ineffective; that is, the accelerated CQ algorithm does not in fact improve on the CQ algorithm in either running time or number of iterations. The effect of the acceleration may be cancelled out by the projection onto the sets \(C_{n}\) and \(Q_{n}\).
6 Concluding remarks
In this paper, we accelerated the Mann and CQ algorithms to obtain the accelerated Mann and CQ algorithms, respectively. We then established the weak convergence of the accelerated Mann algorithm and the strong convergence of the accelerated CQ algorithm under some conditions. The numerical examples illustrate that the acceleration of the Mann algorithm is effective, whereas the acceleration of the CQ algorithm is ineffective.
References
Bauschke, HH, Combettes, PL: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, Berlin (2011)
Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (1990)
Goebel, K, Reich, S: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Dekker, New York (1984)
Takahashi, W: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38(3), 367-426 (1996)
Picard, E: Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives. J. Math. Pures Appl. 6, 145-210 (1890)
Krasnosel’skii, MA: Two remarks on the method of successive approximations. Usp. Mat. Nauk 10, 123-127 (1955)
Mann, WR: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506-510 (1953)
Nakajo, K, Takahashi, W: Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 279, 372-379 (2003)
Iiduka, H: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J. Optim. Theory Appl. 148, 580-592 (2011)
Iiduka, H: Fixed point optimization algorithm and its application to power control in CDMA data networks. Math. Program. 133, 227-242 (2012)
Iiduka, H: Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM J. Optim. 22(3), 862-878 (2012)
Iiduka, H: Fixed point optimization algorithms for distributed optimization in networked systems. SIAM J. Optim. 23, 1-26 (2013)
Sakurai, K, Iiduka, H: Acceleration of the Halpern algorithm to search for a fixed point of a nonexpansive mapping. Fixed Point Theory Appl. 2014, 202 (2014)
Nocedal, J, Wright, SJ: Numerical Optimization, 2nd edn. Springer Series in Operations Research and Financial Engineering. Springer, Berlin (2006)
Yao, Y, Marino, G, Muglia, L: A modified Korpelevich’s method convergent to the minimum-norm solution of a variational inequality. Optimization 63(4), 1-11 (2012)
Martinez-Yanes, C, Xu, HK: Strong convergence of the CQ method for fixed point processes. Nonlinear Anal. 64, 2400-2411 (2006)
Tan, KK, Xu, HK: Approximating fixed points of nonexpansive mapping by the Ishikawa iteration process. J. Math. Anal. Appl. 178, 301-308 (1993)
Suantai, S: Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings. J. Math. Anal. Appl. 311, 506-517 (2005)
Genel, A, Lindenstrauss, J: An example concerning fixed points. Isr. J. Math. 22, 81-86 (1975)
Bauschke, HH, Combettes, PL: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26(2), 248-264 (2001)
He, S, Yang, Z: Realization-based method of successive projection for nonexpansive mappings and nonexpansive semigroups (submitted)
Acknowledgements
The authors express their thanks to the reviewers, whose constructive suggestions led to improvements in the presentation of the results. Supported by the National Natural Science Foundation of China (No. 61379102) and Fundamental Research Funds for the Central Universities (No. 3122013D017), in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Dong, QL., Yuan, Hb. Accelerated Mann and CQ algorithms for finding a fixed point of a nonexpansive mapping. Fixed Point Theory Appl 2015, 125 (2015). https://doi.org/10.1186/s13663-015-0374-6