1 Introduction

Let H be a real Hilbert space with inner product \(\langle\cdot ,\cdot\rangle\) and norm \(\|\cdot\|\), and let C be a nonempty closed convex subset of H. Recall that a mapping \(T : C\rightarrow C\) is said to be nonexpansive if \(\|Tx-Ty\|\leq\|x-y\|\) holds for all \(x, y \in C\). We denote by \(\operatorname{Fix}(T)\) the set of fixed points of T, i.e., \(\operatorname{Fix}(T) = \{x \in C : Tx = x\}\).

Recently, a great deal of literature on iterative algorithms for approximating fixed points of nonexpansive mappings has been published, since such algorithms have a variety of applications in inverse problems, image recovery, and signal processing; see [1–7]. Mann’s iteration process [8] is often used to approximate a fixed point of such operators, but it converges only weakly in general (see [9] for an example). However, strong convergence is often much more desirable than weak convergence in many problems that arise in infinite-dimensional spaces (see [10] and the references therein). Attempts have therefore been made to modify Mann’s iteration process so that strong convergence is guaranteed. Let \(T:C\rightarrow C\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq \emptyset\). Nakajo and Takahashi [11] first introduced the following hybrid algorithm.

Algorithm 1

$$ \left \{ \begin{array}{@{}l} x_{0}\in C \mbox{ chosen arbitrarily},\\ y_{n}=\alpha_{n}x_{n}+(1-\alpha_{n})Tx_{n}, \\ C_{n}=\{z\in C: \|y_{n}-z\|\leq\|x_{n}-z\|\},\\ Q_{n}=\{z\in C:\langle x_{n}-z,x_{n}-x_{0}\rangle\leq0\},\\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0}, \end{array} \right . $$
(1)

where \(P_{K}\) denotes the metric projection onto the set K and \(\{\alpha _{n}\}\subset[0,\sigma]\) for some \(\sigma\in[0,1)\). Thereafter, many hybrid algorithms have been studied extensively since they enjoy strong convergence; see [12–19]. As far as we know, most hybrid algorithms can be seen as modifications of weakly convergent algorithms.

Inspired by the recent work of Malitsky and Semenov [20], we propose the following algorithm.

Algorithm 2

$$ \left \{ \begin{array}{@{}l} x_{0},z_{0}\in C \mbox{ chosen arbitrarily},\\ z_{n+1}=\alpha_{n}z_{n}+(1-\alpha_{n})Tx_{n}, \\ C_{n}=\{z\in C: \|z_{n+1}-z\|^{2}\leq\alpha_{n}\|z_{n}-z\|^{2}+(1-\alpha_{n})\| x_{n}-z\|^{2}\},\\ Q_{n}=\{z\in C:\langle x_{n}-z,x_{n}-x_{0}\rangle\leq0\},\\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0}, \end{array} \right . $$
(2)

where \(\{\alpha_{n}\}\subset[0, \sigma]\) for some \(\sigma\in [0, \frac{1}{2})\). It is easy to see that Algorithm 2 is not a modification of any weakly convergent algorithm.

The paper is organized as follows. In the next section, we present some lemmas which will be used in the main results. In Section 3, the strong convergence theorem and its proof are given. In the final section, Section 4, some numerical results are provided, which show the advantages of our algorithm.

2 Preliminaries

We will use the following notation:

  1. ⇀ for weak convergence and → for strong convergence.

  2. \(\omega_{w}(x_{n}) = \{x : \exists x_{n_{j}}\rightharpoonup x\}\) denotes the weak ω-limit set of \(\{x_{n}\}\).

We need some facts and tools in a real Hilbert space H which are listed as lemmas below.

Lemma 2.1

The following identity in a real Hilbert space H holds:

$$\|u-v\|^{2}=\|u\|^{2}-\|v\|^{2}-2\langle u-v,v \rangle, \quad u,v\in H. $$
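As a quick numerical sanity check, the identity can be verified for random vectors; a small NumPy sketch (the dimension and seed are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(5)
v = rng.standard_normal(5)

# Left-hand side: ||u - v||^2
lhs = np.linalg.norm(u - v) ** 2
# Right-hand side: ||u||^2 - ||v||^2 - 2<u - v, v>
rhs = np.linalg.norm(u) ** 2 - np.linalg.norm(v) ** 2 - 2 * np.dot(u - v, v)
```

The two sides agree up to floating-point rounding, reflecting the expansion \(\|u-v\|^{2}=\|u\|^{2}-2\langle u,v\rangle+\|v\|^{2}\).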

Lemma 2.2

(Goebel and Kirk [21])

Let C be a closed convex subset of a real Hilbert space H, and let \(T : C \rightarrow C\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq\emptyset\). If a sequence \(\{x_{n}\}\) in C is such that \(x_{n}\rightharpoonup z\) and \(x_{n} - T x_{n} \rightarrow0\), then \(z = T z\).

Lemma 2.3

Let K be a closed convex subset of a real Hilbert space H, and let \(P_{K}\) be the (metric or nearest point) projection from H onto K (i.e., for \(x\in H\), \(P_{K}x\) is the only point in K such that \(\|x-P_{K}x\|=\inf\{\|x-z\|: z \in K\}\)). Given \(x \in H\) and \(z\in K\), we have \(z=P_{K}x\) if and only if the following relation holds:

$$\langle x-z,y-z\rangle\leq0 \quad\textit{for all } y\in K. $$
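For a half-space \(K=\{z:\langle a,z\rangle\leq b\}\), the projection has a closed form, and the characterization in Lemma 2.3 can be checked numerically. The sketch below (function and variable names are our own) samples random points of K and tests the variational inequality:

```python
import numpy as np

def project_halfspace(x, a, b):
    """Metric projection of x onto the half-space K = {z : <a, z> <= b}."""
    slack = np.dot(a, x) - b
    if slack <= 0:                      # x already lies in K
        return x.copy()
    return x - (slack / np.dot(a, a)) * a

a, b = np.array([1.0, 2.0]), 0.5
x = np.array([3.0, 4.0])
z = project_halfspace(x, a, b)

# Lemma 2.3: <x - z, y - z> <= 0 for every y in K
rng = np.random.default_rng(1)
violations = 0
for _ in range(1000):
    y = 5.0 * rng.standard_normal(2)
    if np.dot(a, y) <= b:               # keep only sample points of K
        if np.dot(x - z, y - z) > 1e-10:
            violations += 1
```

Since x lies outside K here, the projection lands on the bounding hyperplane \(\langle a,z\rangle=b\), and no sampled point of K violates the inequality.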

Lemma 2.4

(Martinez-Yanes and Xu [22])

Let K be a closed convex subset of H. Let \(\{x_{n}\}\) be a sequence in H and \(u\in H\). Let \(q = P_{K}u\). If \(\{x_{n}\}\) is such that \(\omega_{w}(x_{n})\subset K\) and satisfies the condition

$$\|x_{n}-u\|\leq\|u-q\| \quad\textit{for all } n, $$

then \(x_{n}\rightarrow q\).

Lemma 2.5

Let \(\{a_{n}\}\) and \(\{b_{n}\}\) be nonnegative real sequences, \(\alpha\in [0,1)\), \(\beta\in\mathbb{R}^{+}\), and suppose that for all \(n\in\mathbb{N}\) the following inequality holds:

$$ a_{n+1}\leq\alpha a_{n}+\beta b_{n}. $$
(3)

If \(\sum_{n=1}^{\infty}b_{n}<+\infty\), then \(\lim_{n\rightarrow\infty}a_{n}=0\).

Proof

Using inequality (3) for \(n=1,2,\ldots,N-1\), we obtain

$$\begin{aligned}& a_{2}\leq\alpha a_{1}+\beta b_{1}, \\& a_{3}\leq\alpha a_{2}+\beta b_{2}, \\& \vdots \\& a_{N}\leq\alpha a_{N-1}+\beta b_{N-1}. \end{aligned}$$

Adding all these inequalities and rearranging yields

$$\sum_{n=1}^{N}a_{n}\leq \frac{1}{1-\alpha} \Biggl(a_{1}-\alpha a_{N}+\beta\sum _{n=1}^{N-1}b_{n} \Biggr) \leq \frac{1}{1-\alpha} \Biggl(a_{1}+\beta\sum_{n=1}^{\infty}b_{n} \Biggr). $$

Since N is arbitrary, we see that the series \(\sum_{n=1}^{\infty}a_{n}\) is convergent and hence \(a_{n} \rightarrow0\). □
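Lemma 2.5 can be illustrated numerically. In the sketch below we choose, for illustration, \(\alpha=0.8\), \(\beta=1\), and the summable sequence \(b_{n}=1/n^{2}\), and we run recursion (3) with equality, the worst case the bound allows:

```python
alpha, beta = 0.8, 1.0
a_n = 1.0                      # arbitrary nonnegative starting value
for n in range(1, 10001):
    b_n = 1.0 / n ** 2         # summable: the series of 1/n^2 equals pi^2/6
    a_n = alpha * a_n + beta * b_n   # inequality (3) taken with equality
```

After 10,000 steps the iterate is far below any fixed tolerance, consistent with \(a_{n}\rightarrow0\).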

3 Algorithm and its convergence

In this section, we present the strong convergence theorem and its proof for Algorithm 2.

Theorem 3.1

Let C be a closed convex subset of a Hilbert space H, and let \(T: C \rightarrow C\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq\emptyset \). Assume that \(\{\alpha_{n}\}\subset[0,\sigma]\) holds for some \(\sigma \in [0, \frac{1}{2})\). Then \(\{x_{n}\}\) and \(\{z_{n}\}\) generated by Algorithm 2 converge strongly to \(P_{\operatorname{Fix}(T)}x_{0}\).

Proof

It is easy to see that \(C_{n}\) is convex (see Lemma 1.3 in [22]). Next we show that \(\operatorname{Fix}(T)\subset C_{n}\) for all \(n \geq0\). To see this, take \(p\in \operatorname{Fix}(T)\) arbitrarily. Then we have

$$\begin{aligned} \|z_{n+1}-p\|^{2}&=\bigl\| \alpha_{n}z_{n}+(1- \alpha_{n})Tx_{n}-p\bigr\| ^{2} \\ &\leq\alpha_{n}\|z_{n}-p\|^{2}+(1- \alpha_{n})\|x_{n}-p\|^{2}, \end{aligned}$$

which implies \(\operatorname{Fix}(T)\subset C_{n}\) for all \(n \geq0\). Next we show

$$ \operatorname{Fix}(T)\subset Q_{n} \quad\mbox{for all } n \geq0 $$
(4)

by induction. For \(n=0\), we have \(\operatorname{Fix}(T)\subset C=Q_{0}\). Assume \(\operatorname{Fix}(T)\subset Q_{n}\). Since \(x_{n+1} \) is the projection of \(x_{0}\) onto \(C_{n}\cap Q_{n}\), by Lemma 2.3 we have

$$\langle x_{n+1}-z,x_{n+1}-x_{0}\rangle\leq0 \quad \forall z\in C_{n}\cap Q_{n}. $$

As \(\operatorname{Fix}(T)\subset C_{n}\cap Q_{n}\), by the induction assumption, the last inequality holds, in particular, for all \(z\in \operatorname{Fix}(T)\). This together with the definition of \(Q_{n+1}\) implies that \(\operatorname{Fix}(T) \subset Q_{n+1}\). Hence (4) holds for all \(n\geq0\).

Since \(\operatorname{Fix}(T)\) is a nonempty closed convex subset of C, there exists a unique element \(q\in \operatorname{Fix}(T)\) such that \(q=P_{\operatorname{Fix}(T)}x_{0}\). From \(x_{n} = P_{Q_{n}}x_{0}\) (which follows from the definition of \(Q_{n}\) and Lemma 2.3) and \(\operatorname{Fix}(T)\subset Q_{n}\), we have \(\|x_{n}-x_{0}\|\leq\|p-x_{0}\|\) for all \(p \in \operatorname{Fix}(T)\). Since \(q\in \operatorname{Fix}(T)\), we get

$$ \|x_{n}-x_{0}\|\leq\|q-x_{0}\|, $$
(5)

which implies that \(\{x_{n}\}\) is bounded.

The fact that \(x_{n+1}\in Q_{n}\) implies that \(\langle x_{n+1}-x_{n},x_{n}-x_{0}\rangle\geq0\). This together with Lemma 2.1 implies

$$ \|x_{n+1}-x_{n}\|^{2} \leq \|x_{n+1}-x_{0}\|^{2}-\|x_{n}-x_{0} \|^{2}. $$
(6)

From (5) and (6) we obtain

$$\begin{aligned} \sum_{n=1}^{N}\|x_{n+1}-x_{n} \|^{2}&\leq\sum_{n=1}^{N} \bigl( \|x_{n+1}-x_{0}\|^{2}-\| x_{n}-x_{0} \|^{2} \bigr) \\ &=\|x_{N+1}-x_{0}\|^{2}-\|x_{1}-x_{0} \|^{2} \\ &\leq\|q-x_{0}\|^{2}-\|x_{1}-x_{0} \|^{2}. \end{aligned}$$

So it follows that \(\sum_{n=1}^{\infty}\|x_{n+1}-x_{n}\|^{2}\) is convergent and thus \(\|x_{n+1}-x_{n}\|\rightarrow0\) as \(n\rightarrow\infty\). The fact that \(x_{n+1}\in C_{n}\) implies that

$$\begin{aligned} \|z_{n+1}-x_{n+1}\|^{2}\leq{}&\alpha_{n} \|z_{n}-x_{n+1}\|^{2}+(1-\alpha_{n})\| x_{n}-x_{n+1}\|^{2} \\ ={}&\alpha_{n}\bigl(\|z_{n}-x_{n} \|^{2}+2\langle z_{n}-x_{n}, x_{n}-x_{n+1} \rangle+\| x_{n}-x_{n+1}\|^{2}\bigr) \\ &{} +(1-\alpha_{n})\|x_{n}-x_{n+1} \|^{2} \\ \leq{}&2\alpha_{n}\bigl(\|z_{n}-x_{n} \|^{2}+\|x_{n}-x_{n+1}\|^{2}\bigr)+(1- \alpha_{n})\| x_{n}-x_{n+1}\|^{2} \\ \leq{}&2\sigma\|z_{n}-x_{n}\|^{2}+2 \|x_{n}-x_{n+1}\|^{2}, \end{aligned}$$

where the second inequality follows from the Cauchy-Schwarz and AM-GM inequalities, and the last inequality follows from \(\alpha_{n}\leq\sigma\) and \(1+\alpha_{n}\leq2\). Applying Lemma 2.5 with \(a_{n}=\|z_{n}-x_{n}\|^{2}\), \(\alpha=2\sigma<1\) (recall that \(\sigma\in[0,\frac{1}{2})\)), and the summable sequence \(b_{n}=\|x_{n}-x_{n+1}\|^{2}\), we obtain

$$ \|z_{n}-x_{n}\|\rightarrow0. $$
(7)

For this reason, we have

$$ \|z_{n+1}-x_{n}\|\leq\|z_{n+1}-x_{n+1} \|+\|x_{n+1}-x_{n}\|\rightarrow0. $$
(8)

Noting that \((1-\alpha_{n})(Tx_{n}-x_{n})=(z_{n+1}-x_{n})+\alpha_{n}(x_{n}-z_{n})\), we obtain

$$\|Tx_{n}-x_{n}\|\leq\frac{1}{1-\alpha_{n}}\|z_{n+1}-x_{n} \|+\frac{\alpha _{n}}{1-\alpha_{n}}\|x_{n}-z_{n}\|. $$

Since \(\alpha_{n}\leq\sigma\) and by (7) and (8), we get

$$ \|Tx_{n}-x_{n}\|\rightarrow0. $$
(9)

By Lemma 2.2, we obtain that \(\omega_{w}(x_{n}) \subset \operatorname{Fix}(T)\). This, together with (5) and Lemma 2.4, guarantees strong convergence of \(\{x_{n}\}\) to \(P_{\operatorname{Fix}(T)}x_{0}\). From (7), strong convergence of \(\{z_{n}\}\) to \(P_{\operatorname{Fix}(T)}x_{0}\) is obtained. □

Changing the definitions of \(z_{n+1}\) and \(C_{n}\) in Algorithm 2, we get the following algorithm:

$$ \left \{ \begin{array}{@{}l} x_{0},z_{0}\in C \mbox{ chosen arbitrarily},\\ z_{n+1}=\alpha_{n}x_{n}+(1-\alpha_{n})Tz_{n}, \\ C_{n}=\{z\in C: \|z_{n+1}-z\|^{2}\leq\alpha_{n}\|x_{n}-z\|^{2}+(1-\alpha_{n})\| z_{n}-z\|^{2}\},\\ Q_{n}=\{z\in C:\langle x_{n}-z,x_{n}-x_{0}\rangle\leq0\},\\ x_{n+1}=P_{C_{n}\cap Q_{n}}x_{0}, \end{array} \right . $$
(10)

where \(\{\alpha_{n}\}_{n=0}^{\infty}\subset[a,b]\) for some \(a,b\in (\frac{1}{2}, 1)\). Following the lines of the proof of Theorem 3.1, we can prove the following theorem.

Theorem 3.2

Let C be a closed convex subset of a Hilbert space H, and let \(T: C \rightarrow C\) be a nonexpansive mapping such that \(\operatorname{Fix}(T)\neq\emptyset \). Assume \(\{\alpha_{n}\}\subset[a,b]\) for some \(a,b\in (\frac{1}{2}, 1)\). Then \(\{x_{n}\}\) and \(\{z_{n}\}\) generated by the iteration process (10) strongly converge to \(P_{\operatorname{Fix}(T)}x_{0}\).

4 Numerical experiments

In this section, we first present the specific expression of \(P_{C_{n}\cap Q_{n}} x_{0}\) in Algorithm 2 and then compare Algorithms 1 and 2 through numerical examples.

He et al. [23] pointed out that it is difficult to realize hybrid algorithms in actual computing programs because the specific expression of \(P_{C_{n}\cap Q_{n}} x_{0}\) cannot be obtained in general. For the special case \(C = H\), in which \(C_{n}\) and \(Q_{n}\) are two half-spaces, they obtained the specific expression of \(P_{C_{n}\cap Q_{n}} x_{0}\) and realized Algorithm 1.

In the case \(C = H\), following the ideas of He et al. [23], we obtain the specific expression of \(P_{C_{n}\cap Q_{n}}x_{0}\) of Algorithm 2 as follows:

$$ \left \{ \begin{array}{@{}l} x_{0}, z_{0}\in H \mbox{ chosen arbitrarily},\\ z_{n+1}=\alpha_{n}z_{n}+(1-\alpha_{n})Tx_{n}, \\ u_{n}=\alpha_{n} z_{n}+(1-\alpha_{n})x_{n}-z_{n+1},\\ v_{n}=(\alpha_{n}\|z_{n}\|^{2}+(1-\alpha_{n})\|x_{n}\|^{2}-\|z_{n+1}\|^{2})/2,\\ C_{n}=\{z\in C: \langle u_{n},z\rangle\leq v_{n}\},\\ Q_{n}=\{z\in C:\langle x_{n}-z,x_{n}-x_{0}\rangle\leq0\},\\ x_{n+1}=p_{n}, \quad\mbox{if } p_{n}\in Q_{n},\\ x_{n+1}=q_{n}, \quad\mbox{if } p_{n}\notin Q_{n}, \end{array} \right . $$
(11)

where

$$\begin{aligned}& p_{n}=x_{0}-\frac{\langle u_{n},x_{0}\rangle-v_{n}}{\|u_{n}\|^{2}}u_{n}, \\& q_{n}= \biggl(1-\frac{\langle x_{0}-x_{n},x_{n}-p_{n}\rangle}{\langle x_{0}-x_{n},w_{n}-p_{n}\rangle} \biggr)p_{n} + \frac{\langle x_{0}-x_{n},x_{n}-p_{n}\rangle}{\langle x_{0}-x_{n},w_{n}-p_{n}\rangle }w_{n}, \\& w_{n}=x_{n}-\frac{\langle u_{n},x_{n}\rangle-v_{n}}{\|u_{n}\|^{2}}u_{n}. \end{aligned}$$
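For \(C=H=R^{d}\), scheme (11) can be implemented directly from the formulas above. The sketch below is our own transcription (function and variable names are assumptions), with a positive-part safeguard in \(p_{n}\) for the case \(x_{0}\in C_{n}\) and an early exit when the residual \(\|Tx_{n}-x_{n}\|\) is small:

```python
import numpy as np

def hybrid_algorithm_2(T, x0, z0, alpha=0.1, tol=1e-4, max_iter=2000):
    """Scheme (11): Algorithm 2 with explicit projections, valid when C = H = R^d."""
    x0 = np.asarray(x0, dtype=float)
    x, z = x0.copy(), np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        if np.linalg.norm(T(x) - x) < tol:   # residual-based stopping rule
            break
        z_new = alpha * z + (1 - alpha) * T(x)
        u = alpha * z + (1 - alpha) * x - z_new
        v = (alpha * np.dot(z, z) + (1 - alpha) * np.dot(x, x)
             - np.dot(z_new, z_new)) / 2.0
        # p_n: projection of x0 onto the half-space C_n = {z : <u_n, z> <= v_n}
        p = x0 - max(np.dot(u, x0) - v, 0.0) / np.dot(u, u) * u
        if np.dot(x - p, x - x0) <= 0:       # p already belongs to Q_n
            x_new = p
        else:
            # w_n: projection of x_n onto the bounding hyperplane of C_n
            w = x - (np.dot(u, x) - v) / np.dot(u, u) * u
            t = np.dot(x0 - x, x - p) / np.dot(x0 - x, w - p)
            x_new = (1 - t) * p + t * w      # q_n in the notation above
        x, z = x_new, z_new
    return x
```

Note that \(u_{n}=(1-\alpha_{n})(x_{n}-Tx_{n})\), so \(u_{n}=0\) only at a fixed point, which the early exit rules out before the division by \(\|u_{n}\|^{2}\).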

Let \(R^{2}\) be the two-dimensional Euclidean space with the usual inner product \(\langle v^{(1)},v^{(2)}\rangle=v^{(1)}_{1}v^{(2)}_{1}+v^{(1)}_{2}v^{(2)}_{2}\) (\(\forall v^{(1)}=(v^{(1)}_{1},v^{(1)}_{2})^{T}, v^{(2)}=(v^{(2)}_{1},v^{(2)}_{2})^{T}\in R^{2}\)) and the norm \(\|v\|=\sqrt {v_{1}^{2}+v_{2}^{2}}\) (\(v=(v_{1},v_{2})^{T}\in R^{2}\)). He et al. [23] defined a mapping

$$ T:v=(v_{1},v_{2})^{T}\mapsto \biggl(\sin\frac{v_{1}+v_{2}}{\sqrt{2}},\cos\frac {v_{1}+v_{2}}{\sqrt{2}} \biggr)^{T}, $$
(12)

and showed that it is nonexpansive. It is easy to see that T has a fixed point in the unit disk, although this fixed point is difficult to compute explicitly.
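The nonexpansiveness of T can also be spot-checked numerically on random pairs of points (a sanity check on finitely many samples, not a proof; sample size and seed are arbitrary):

```python
import numpy as np

def T(v):
    """The mapping (12)."""
    s = (v[0] + v[1]) / np.sqrt(2.0)
    return np.array([np.sin(s), np.cos(s)])

rng = np.random.default_rng(42)
max_ratio = 0.0
for _ in range(1000):
    v, w = 10.0 * rng.standard_normal(2), 10.0 * rng.standard_normal(2)
    dvw = np.linalg.norm(v - w)
    if dvw > 1e-9:                      # skip nearly coincident pairs
        max_ratio = max(max_ratio, np.linalg.norm(T(v) - T(w)) / dvw)
```

The largest observed ratio \(\|Tv-Tw\|/\|v-w\|\) never exceeds 1, as expected.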

Next, we compare Algorithms 1 and 2 for the nonexpansive mapping T defined in (12). In the numerical results listed in Table 1, ‘Iter.’ and ‘Sec.’ denote the number of iterations and the CPU time in seconds, respectively. We took \(E(x)<\varepsilon\) as the stopping criterion with \(\varepsilon=10^{-4}\). We set \(x_{0}=z_{0}\) in Algorithm 2 and took \(\alpha_{n}=0.1\) for both Algorithms 1 and 2. The algorithms were coded in Matlab 7.1 and run on a personal computer.

Table 1 Comparison of Algorithms 1 and 2 with different initial values

Table 1 illustrates that, in our examples, Algorithm 2 has competitive performance. We caution, however, that this study is a very preliminary one.