1 Introduction

Iteration schemes for the numerical reckoning of fixed points of various classes of nonlinear operators are available in the literature, and the class of contractive mappings has been studied extensively in this regard via iteration methods. In 1952, Plunkett published a paper on the rate of convergence for relaxation methods [1]. In 1953, Bowden presented a talk entitled ‘Faster than thought’ at a symposium on digital computing machines [2]. Since then, this basic idea has been used in engineering, statistics, numerical analysis, approximation theory, and physics (see, for example, [3–9] and [10]). In 1991, Argyros published a paper on iterations converging faster than Newton’s method to the solutions of nonlinear equations in Banach spaces [11, 12]. In 1997, Lucet presented a method faster than the fast Legendre transform [13]. In 2004, Berinde used the notion of rate of convergence for iteration methods and showed that the Picard iteration converges faster than the Mann iteration for a class of quasi-contractive operators [14]; later, he provided further results in this area [15, 16]. In 2006, Babu and Vara Prasad showed that the Mann iteration converges faster than the Ishikawa iteration for the class of Zamfirescu operators [17]. In 2007, Popescu showed that the Picard iteration converges faster than the Mann iteration for the class of quasi-contractive operators [18]. Recently, some papers have been published that introduce new iterations and compare the rates of convergence of some iteration methods (see, for example, [19–22] and [23]).

In this paper, we compare the rates of convergence of some iteration methods for contractions and show that the coefficients involved in such methods play an important role in determining the rate of convergence. During the preparation of this work, we found that the effect of the coefficients had already been considered in [24] and [25]. However, we obtained our results independently, before reading those works, as a comparison of the respective results shows.

2 Preliminaries

The Picard iteration has been used extensively in the literature from different points of view. Let \((X, d)\) be a metric space, \(x_{0}\in X\), and \(T\colon X\to X\) a selfmap. The Picard iteration is defined by

$$ x_{n+1}=Tx_{n} $$

for all \(n\geq0\). Let \(\{\alpha_{n}\}_{n\geq0}\), \(\{\beta_{n}\}_{n\geq 0}\), and \(\{\gamma_{n}\}_{n\geq0}\) be sequences in \([0, 1]\). Then the Mann iteration method is defined by

$$ x_{n+1}=\alpha_{n} x_{n} + (1- \alpha_{n}) Tx_{n} $$
(2.1)

for all \(n\geq0\) (for more information, see [26]). Also, the Ishikawa iteration method is defined by

$$ \begin{aligned} &x_{n+1} =(1- \alpha_{n})x_{n}+\alpha_{n}Ty_{n}, \\ &y_{n} = (1-\beta_{n})x_{n} +\beta_{n}Tx_{n} \end{aligned} $$
(2.2)

for all \(n\geq0\) (for more information, see [27]). The Noor iteration method is defined by

$$\begin{aligned}& x_{n+1} =(1-\alpha_{n})x_{n} + \alpha_{n} Ty_{n}, \\& y_{n} =(1-\beta_{n})x_{n} + \beta_{n} Tz_{n}, \\& z_{n} =(1-\gamma_{n})x_{n} +\gamma_{n} Tx_{n} \end{aligned}$$
(2.3)

for all \(n\geq0\) (for more information, see [28]). In 2007, Agarwal et al. defined their new iteration method by

$$ \begin{aligned} &x_{n+1} = (1 - \alpha_{n})Tx_{n} + \alpha_{n}Ty_{n}, \\ &y_{n} = (1 - \beta_{n})x_{n} + \beta_{n}Tx_{n} \end{aligned} $$
(2.4)

for all \(n\geq0\) (for more information, see [29]). In 2014, Abbas et al. defined their new iteration method by

$$\begin{aligned}& x_{n+1} =(1 - \alpha_{n})Ty_{n} + \alpha_{n}Tz_{n} , \\& y_{n} = (1-\beta_{n})Tx_{n} +\beta_{n}Tz_{n}, \\& z_{n} = (1-\gamma_{n})x_{n} + \gamma_{n}Tx_{n} \end{aligned}$$
(2.5)

for all \(n\geq0\) (for more information, see [30]). In 2014, Thakur et al. defined their new iteration method by

$$\begin{aligned}& x_{n+1} =(1 - \alpha_{n})Tx_{n} + \alpha_{n}Ty_{n}, \\& y_{n} = (1-\beta_{n})z_{n} +\beta_{n}Tz_{n}, \\& z_{n} = (1-\gamma_{n})x_{n} + \gamma_{n}Tx_{n} \end{aligned}$$
(2.6)

for all \(n\geq0\) (for more information, see [23]). Also, the Picard S-iteration was defined by

$$\begin{aligned}& x_{n+1} = Ty_{n}, \\& y_{n} = (1-\beta_{n})Tx_{n} +\beta_{n}Tz_{n}, \\& z_{n} = (1-\gamma_{n})x_{n} + \gamma_{n}Tx_{n} \end{aligned}$$
(2.7)

for all \(n\geq0\) (for more information, see [20] and [22]).
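
Before comparing rates, it may help to see the schemes above in executable form. The following Python helpers are a minimal sketch of the Picard, Mann, and Ishikawa schemes for a scalar selfmap; the function names and the scalar setting are our own illustrative choices, not part of the cited papers.

```python
def picard(T, x0, steps):
    """Picard iteration: x_{n+1} = T(x_n)."""
    x = x0
    for _ in range(steps):
        x = T(x)
    return x

def mann(T, x0, alpha, steps):
    """Mann iteration (2.1): x_{n+1} = alpha_n * x_n + (1 - alpha_n) * T(x_n)."""
    x = x0
    for n in range(steps):
        a = alpha(n)
        x = a * x + (1 - a) * T(x)
    return x

def ishikawa(T, x0, alpha, beta, steps):
    """Ishikawa iteration (2.2):
    y_n     = (1 - beta_n)  * x_n + beta_n  * T(x_n),
    x_{n+1} = (1 - alpha_n) * x_n + alpha_n * T(y_n)."""
    x = x0
    for n in range(steps):
        a, b = alpha(n), beta(n)
        y = (1 - b) * x + b * T(x)
        x = (1 - a) * x + a * T(y)
    return x
```

For instance, for the contraction \(Tx = x/2\) (with fixed point 0), all three sequences tend to the fixed point.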

3 Self-comparison of iteration methods

Now, we are ready to provide our main results for contractive maps. In this respect, we assume that \((X, \|\cdot\|)\) is a normed space, \(x_{0}\in X\), \(T\colon X\to X\) is a selfmap and \(\{\alpha_{n}\}_{n\geq 0}\), \(\{\beta_{n}\}_{n\geq0}\) and \(\{\gamma_{n}\}_{n\geq0}\) are sequences in \((0, 1)\).

The Mann iteration is given by \(x_{n+1}= (1-\alpha_{n})x_{n}+\alpha_{n} Tx_{n}\) for all \(n\geq0\).

Note that we can rewrite it as \(x_{n+1}= \alpha_{n} x_{n}+(1-\alpha_{n}) Tx_{n}\) for all \(n\geq0\).

We call these cases the first and second forms of the Mann iteration method.

In the next result we show that choosing a type of sequence \(\{\alpha_{n}\}_{n\geq0}\) in the Mann iteration has a notable role to play in the rate of convergence of the sequence \(\{x_{n}\}_{n\geq0}\).

Let \(\{u_{n}\}_{n\geq0}\) and \(\{v_{n}\}_{n\geq0}\) be two fixed point iteration procedures that converge to the same fixed point p with \(\|u_{n}-p\|\leq a_{n}\) and \(\|v_{n}-p\|\leq b_{n}\) for all \(n\geq0\). If the sequences \(\{a_{n}\}_{n\geq0}\) and \(\{b_{n}\}_{n\geq0}\) converge to a and b, respectively, and \(\lim_{n\to\infty}\frac{|a_{n}-a|}{|b_{n}-b|}=0\), then we say that \(\{u_{n}\}_{n\geq0}\) converges faster than \(\{v_{n}\}_{n\geq0}\) to p (see [14] and [23]).
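
As a numerical sanity check of this definition, one can evaluate the ratio at a single large index; the helper `ratio_at`, the sample geometric sequences, and the tolerances below are our own illustrative choices (with \(a = b = 0\)), not part of the definition.

```python
def ratio_at(a_seq, b_seq, n, a=0.0, b=0.0):
    """Evaluate |a_n - a| / |b_n - b| at a single large index n."""
    return abs(a_seq(n) - a) / abs(b_seq(n) - b)

# Both geometric bounds tend to 0, but (1/2)^n does so faster than
# (9/10)^n: the ratio (5/9)^n tends to 0 as n grows.
fast = lambda n: 0.5 ** n
slow = lambda n: 0.9 ** n
```

Here `ratio_at(fast, slow, n)` shrinks toward 0 as n grows, so an iteration with bound \((1/2)^{n}\) would be declared faster than one with bound \((9/10)^{n}\).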

Proposition 3.1

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\), and p a fixed point of T. Consider the first form of the Mann iteration. If the coefficients of \(Tx_{n}\) are greater than the coefficients of \(x_{n}\), that is, \(1-\alpha_{n} < \alpha_{n}\) for all \(n\geq0\) or, equivalently, \(\{\alpha_{n}\}_{n\geq0}\) is a sequence in \((\frac{1}{2}, 1)\), then this iteration converges faster than the Mann iteration in which the coefficients of \(x_{n}\) are greater than the coefficients of \(Tx_{n}\).

Proof

Let \(\{x_{n}\}\) be the sequence generated by the Mann iteration in which the coefficients of \(Tx_{n}\) are greater than the coefficients of \(x_{n}\), that is,

$$ x_{n+1}=(1-\alpha_{n}) x_{n} + \alpha_{n} Tx_{n} $$
(3.1)

for all n. In this case, we have

$$\begin{aligned} \begin{aligned} \Vert x_{n+1}-p\Vert &=\bigl\Vert (1-\alpha_{n})x_{n}+ \alpha_{n} Tx_{n}-p\bigr\Vert \leq(1-\alpha_{n}) \Vert x_{n}-p\Vert +\alpha_{n}\Vert Tx_{n}-p \Vert \\ &\leq \bigl(1-\alpha_{n}(1-k) \bigr) \Vert x_{n}-p\Vert \end{aligned} \end{aligned}$$

for all n. Since \(\alpha_{n} \in(\frac{1}{2}, 1)\), \(1-\alpha_{n}(1-k) < 1-\frac{1}{2}(1-k)\). Put \(a_{n} = (1-\frac{1}{2}(1-k) )^{n} \Vert x_{1}-p\Vert \) for all n. Now, let \(\{x_{n}\}\) be the sequence generated by the second form of the Mann iteration, in which the coefficients of \(x_{n}\) are greater than the coefficients of \(Tx_{n}\). In this case, we have

$$\begin{aligned} \Vert x_{n+1}-p\Vert =&\bigl\Vert \alpha_{n} x_{n}+(1- \alpha _{n})Tx_{n}-p\bigr\Vert \leq \alpha_{n} \Vert x_{n}-p\Vert +(1-\alpha_{n}) \Vert Tx_{n}-p\Vert \\ \leq& \bigl(1-(1-\alpha_{n}) (1-k) \bigr) \Vert x_{n}-p \Vert \end{aligned}$$

for all n. Since \(1-\alpha_{n} < \alpha_{n}\) for all \(n\geq0\), we get \(1-(1-\alpha_{n})(1-k) < 1\) for all \(n\geq0\). Put \(b_{n}=\Vert x_{1}-p\Vert \) for all n. Note that \(\lim\frac{a_{n}}{b_{n}}=\lim\frac{ (1-\frac{1}{2}(1-k) )^{n} \Vert x_{1}-p\Vert }{ \Vert x_{1}-p\Vert }=0\). This completes the proof. □

Note that the condition \(1-\alpha_{n} < \alpha_{n}\) for all \(n\geq0\) can be relaxed to \(1-\alpha_{n} < \alpha_{n}\) for all sufficiently large n. The conditions used in our subsequent results can be relaxed similarly.
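
The effect in Proposition 3.1 is easy to observe numerically. The sketch below is a hypothetical illustration with \(Tx = x/2\) (so \(k=\frac{1}{2}\) and \(p=0\)) and the constant sequence \(\alpha_{n}=0.9\in(\frac{1}{2},1)\); these concrete choices are ours, not part of the proposition.

```python
def mann_form1(T, x0, alpha, steps):
    """First form (3.1): x_{n+1} = (1 - alpha_n) x_n + alpha_n T(x_n)."""
    x = x0
    for n in range(steps):
        a = alpha(n)
        x = (1 - a) * x + a * T(x)
    return x

def mann_form2(T, x0, alpha, steps):
    """Second form: x_{n+1} = alpha_n x_n + (1 - alpha_n) T(x_n)."""
    x = x0
    for n in range(steps):
        a = alpha(n)
        x = a * x + (1 - a) * T(x)
    return x

T = lambda x: x / 2       # contraction with k = 1/2 and fixed point p = 0
alpha = lambda n: 0.9     # alpha_n in (1/2, 1)
err1 = abs(mann_form1(T, 1.0, alpha, 50))   # error factor 0.55 per step
err2 = abs(mann_form2(T, 1.0, alpha, 50))   # error factor 0.95 per step
```

With these choices the first form shrinks the error by the factor \(0.55\) per step and the second only by \(0.95\), so after 50 steps the first form is far closer to the fixed point.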

As we know, the Ishikawa iteration method can be written in four ways. In the next result, we label each case with its own equation number. Similarly to the last result, we compare the Ishikawa iteration method with itself in the four possible cases. Again, we show that the coefficient sequences \(\{\alpha_{n}\}_{n\geq 0}\) and \(\{\beta_{n}\}_{n\geq0}\) play effective roles in the rate of convergence of the sequence \(\{x_{n}\}_{n\geq0}\) in the Ishikawa iteration method.

Proposition 3.2

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C\) a contraction with constant \(k\in(0, 1)\), and p a fixed point of T. Consider the following cases of the Ishikawa iteration method:

$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} x_{n+1} =(1 - \alpha_{n})x_{n} + \alpha_{n}Ty_{n}, \\ y_{n} = (1-\beta_{n})x_{n} +\beta_{n}Tx_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.2)
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} x_{n+1} =\alpha_{n} x_{n} + (1-\alpha_{n})Ty_{n} , \\ y_{n} = \beta_{n} x_{n} +(1-\beta_{n})Tx_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.3)
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} x_{n+1} = \alpha_{n}x_{n} + (1-\alpha_{n})Ty_{n} , \\ y_{n} = (1-\beta_{n})x_{n} +\beta_{n}Tx_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.4)

and

$$ \left \{ \textstyle\begin{array}{l} x_{n+1} =(1 - \alpha_{n})x_{n} + \alpha_{n}Ty_{n} , \\ y_{n} = \beta_{n} x_{n} +(1-\beta_{n})Tx_{n} \end{array}\displaystyle \right . $$
(3.5)

for all \(n\geq0\). If \(1-\alpha_{n} < \alpha_{n}\) and \(1-\beta_{n} < \beta_{n}\) for all \(n\geq0\), then the case (3.2) converges faster than the others. In fact, the Ishikawa iteration method is faster whenever the coefficients of \(Ty_{n}\) and \(Tx_{n}\) are simultaneously greater than the corresponding coefficients of \(x_{n}\) for all \(n\geq0\).

Proof

Let \(\{x_{n}\}_{n\geq0}\) be the sequence in the case (3.2). Then we have

$$\begin{aligned} \Vert y_{n}-p\Vert &= \bigl\Vert (1-\beta_{n}) x_{n} + \beta_{n}Tx_{n}-p\bigr\Vert \\ &\leq(1 -\beta_{n})\Vert x_{n}-p\Vert + \beta_{n}\Vert Tx_{n}-p\Vert \\ & \leq \bigl((1-\beta_{n}) + \beta_{n} k \bigr) \Vert x_{n} -p \Vert \end{aligned}$$

and

$$\begin{aligned} \Vert x_{n+1}-p\Vert &= \bigl\Vert (1-\alpha_{n}) x_{n} + \alpha_{n}Ty_{n}-p\bigr\Vert \\ &\leq(1-\alpha_{n})\Vert x_{n}-p\Vert + \alpha_{n} \Vert Ty_{n}-p\Vert \\ &\leq(1-\alpha_{n})\Vert x_{n}-p\Vert + k \alpha_{n} \Vert y_{n}-p\Vert \\ & \leq \bigl(1-\alpha_{n} + k\alpha_{n}\bigl[(1- \beta_{n}) +\beta_{n} k\bigr] \bigr) \Vert x_{n} -p\Vert \\ & \leq \bigl(1-\alpha_{n} + \alpha_{n} k- \alpha_{n}\beta_{n} k +\alpha _{n} \beta_{n} k^{2} \bigr) \Vert x_{n} -p\Vert \\ & \leq \bigl(1 - \alpha_{n}(1-k)-\alpha_{n} \beta_{n} k (1 - k) \bigr) \Vert x_{n} -p\Vert \end{aligned}$$

for all \(n\geq0\). Since \(\alpha_{n}, \beta_{n} \in(\frac{1}{2}, 1)\), \(1-\alpha_{n}(1-k)-\alpha_{n}\beta_{n} k (1-k)< 1-\frac{1}{2}(1-k)-\frac {1}{4}k(1-k)\) for all \(n\geq0\). Put \(a_{n}= (1-\frac {1}{2}(1-k)-\frac{1}{4}k(1-k) ) ^{n} \Vert x_{1}-p\Vert \) for all \(n\geq0\). If \(\{x_{n}\}_{n\geq0}\) is the sequence in the case (3.3), then we get

$$\begin{aligned} \Vert y_{n}-p\Vert &= \bigl\Vert \beta_{n} x_{n} + (1-\beta _{n})Tx_{n}-p\bigr\Vert \\ &\leq\beta_{n}\Vert x_{n}-p\Vert + (1- \beta_{n})\Vert Tx_{n}-p\Vert \\ & \leq \bigl(1-(1-\beta_{n}) (1-k) \bigr) \Vert x_{n} -p \Vert \end{aligned}$$

and

$$\begin{aligned} \Vert x_{n+1}-p\Vert &= \bigl\Vert \alpha_{n} x_{n} + (1-\alpha_{n})Ty_{n}-p\bigr\Vert \\ &\leq\alpha_{n}\Vert x_{n}-p\Vert + (1- \alpha_{n}) \Vert Ty_{n}-p\Vert \\ &\leq\alpha_{n}\Vert x_{n}-p\Vert + k (1- \alpha_{n}) \Vert y_{n}-p\Vert \\ &\leq \bigl(\alpha_{n}+k(1-\alpha_{n}) \bigl(1-(1- \beta_{n}) (1-k)\bigr)\bigr) \Vert x_{n}-p\Vert \\ &= \bigl(\alpha_{n}+(1-\alpha_{n})k-k(1- \alpha_{n}) (1-\beta_{n}) (1-k) \bigr) \Vert x_{n}-p\Vert \\ &= \bigl(1-(1-\alpha_{n}) (1-k)-(1-\alpha_{n}) (1- \beta_{n})k(1-k) \bigr) \Vert x_{n}-p\Vert \end{aligned}$$

for all \(n\geq0\). Since \(\alpha_{n}, \beta_{n} \in(\frac{1}{2}, 1)\), \(1-(1-\alpha_{n}) (1-k)-(1-\alpha_{n})(1-\beta_{n})k(1-k)< 1\) for all \(n\geq0\). Put \(b_{n} =\Vert x_{1}-p\Vert \) for all \(n\geq0\). Since

$$0< 1-\frac{1}{2}(1-k)-\frac{1}{4}k(1-k) < 1, $$

we get \(\lim\frac{a_{n}}{b_{n}}=\lim\frac{ (1-\frac{1}{2}(1-k)-\frac {1}{4}k(1-k) )^{n} \Vert x_{1}-p\Vert }{\Vert x_{1}-p\Vert }=0\) and so the iteration (3.2) converges faster than the case (3.3). Now, let \(\{x_{n}\}_{n\geq0}\) be the sequence in the case (3.4). Then

$$\begin{aligned} \Vert y_{n}-p\Vert &= \bigl\Vert \beta_{n} x_{n} +(1- \beta _{n})Tx_{n}-p\bigr\Vert \\ &\leq\beta_{n}\Vert x_{n}-p\Vert + (1 - \beta_{n}) \Vert Tx_{n}-p\Vert \\ &\leq \bigl(\beta_{n}+k(1-\beta_{n}) \bigr) \Vert x_{n} -p\Vert \\ &= \bigl(1-(1-\beta_{n}) (1-k) \bigr) \Vert x_{n} -p \Vert \end{aligned}$$

and

$$\begin{aligned} \Vert x_{n+1}-p\Vert &= \bigl\Vert (1-\alpha_{n}) x_{n} + \alpha_{n}Ty_{n}-p\bigr\Vert \\ &\leq(1-\alpha_{n})\Vert x_{n}-p\Vert + \alpha_{n} \Vert Ty_{n}-p\Vert \\ &\leq \bigl(1-\alpha_{n} + k\alpha_{n}\bigl[ \bigl(1-(1- \beta_{n}) (1-k) \bigr)\bigr] \bigr) \Vert x_{n} -p\Vert \\ &= \bigl(1-\alpha_{n} + k\alpha_{n}-\alpha_{n}(1- \beta_{n})k(1-k) \bigr) \Vert x_{n}-p\Vert \\ &= \bigl(1-\alpha_{n}(1-k)-\alpha_{n}(1- \beta_{n})k(1-k) \bigr) \Vert x_{n}-p\Vert \end{aligned}$$

for all \(n\geq0\). Since \(\alpha_{n}, \beta_{n}\in(\frac{1}{2}, 1)\) for all \(n\geq0\), \(-(1-k)<-\alpha_{n}(1-k)< -\frac{1}{2}(1-k)\) and \(\frac{-1}{2}k(1-k)<-\alpha_{n}(1-\beta_{n})k(1-k)<0\) for all n. Hence,

$$1-\alpha_{n}(1-k)-\alpha_{n}(1-\beta_{n})k(1-k)< 1- \frac{1}{2}(1-k) $$

for all \(n\geq0\). Put \(c_{n} = (1-\frac{1}{2}(1-k) )^{n} \Vert x_{1}-p\Vert \) for all \(n\geq0\). Thus, we obtain

$$ \lim\frac{a_{n}}{c_{n}}=\lim\frac{ (1-\frac{1}{2}(1-k)-\frac {1}{4}k(1-k) )^{n} \Vert x_{1} -p\Vert }{ (1-\frac{1}{2}(1-k) )^{n} \Vert x_{1} -p\Vert }=0 $$

and so the iteration (3.2) converges faster than the case (3.4). Now, let \(\{x_{n}\}_{n\geq0}\) be the sequence in the case (3.5). Then we have

$$\begin{aligned} \Vert y_{n}-p\Vert &= \bigl\Vert (1-\beta_{n})x_{n} + \beta _{n} Tx_{n}-p\bigr\Vert \\ &\leq(1-\beta_{n})\Vert x_{n}-p\Vert + \beta_{n}\Vert Tx_{n}-p\Vert \\ & \leq \bigl(1-\beta_{n}(1-k) \bigr) \Vert x_{n} -p\Vert \end{aligned}$$

and

$$\begin{aligned} \Vert x_{n+1}-p\Vert &= \bigl\Vert \alpha_{n} x_{n} + (1-\alpha_{n})Ty_{n}-p\bigr\Vert \\ &\leq\alpha_{n}\Vert x_{n}-p\Vert + (1- \alpha_{n}) \Vert Ty_{n}-p\Vert \\ &\leq\alpha_{n}\Vert x_{n}-p\Vert + k (1- \alpha_{n}) \Vert y_{n}-p\Vert \\ &\leq \bigl(\alpha_{n} + k(1-\alpha_{n})\bigl[1- \beta_{n}(1-k)\bigr] \bigr) \Vert x_{n} -p\Vert \\ &\leq \bigl(\alpha_{n} + k(1-\alpha_{n})-(1- \alpha_{n})\beta_{n}k(1-k) \bigr) \Vert x_{n} -p \Vert \\ &\leq \bigl(1-(1-\alpha_{n} ) + k(1-\alpha_{n})-(1- \alpha_{n})\beta _{n}k(1-k) \bigr) \Vert x_{n} -p \Vert \\ &\leq \bigl(1 - (1-\alpha_{n}) (1-k)-(1-\alpha_{n}) \beta_{n}k(1-k) \bigr) \Vert x_{n} -p\Vert \end{aligned}$$

for all \(n\geq0\). Since \(\alpha_{n}, \beta_{n}\in(\frac{1}{2}, 1)\) for all n, \((1-\alpha_{n})(1-k)>0\) and \((1-\alpha_{n})\beta_{n}k(1-k)>0\) and so

$$1-(1-\alpha_{n}) (1-k)-(1-\alpha_{n})\beta_{n}k(1-k)< 1 $$

for all \(n\geq0\). Put \(d_{n} = \Vert x_{1}-p \Vert \) for all \(n\geq0\). Then we have

$$ \lim\frac{a_{n}}{d_{n}}=\lim\frac{ (1-\frac{1}{2}(1-k)-\frac {1}{4}k(1-k) )^{n} \Vert x_{1}-p\Vert }{ \Vert x_{1}-p\Vert }=0 $$

and so the iteration (3.2) converges faster than the case (3.5). □

By using a similar argument, one can show that the iteration (3.5) converges faster than the case (3.3).
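
The four Ishikawa cases can likewise be compared empirically. The sketch below is our own illustration, again with the hypothetical contraction \(Tx = x/2\) and constant coefficients \(0.9\); it runs case (3.2) against case (3.3), the slowest of the four under the hypotheses of Proposition 3.2.

```python
def ishikawa_case(T, x0, a, b, steps, flip_outer=False, flip_inner=False):
    """Ishikawa step; the flips swap the weights of x_n and the T-term
    in the outer (x_{n+1}) and inner (y_n) lines, respectively."""
    x = x0
    for _ in range(steps):
        wb = b if flip_inner else (1 - b)   # weight on x_n in y_n
        y = wb * x + (1 - wb) * T(x)
        wa = a if flip_outer else (1 - a)   # weight on x_n in x_{n+1}
        x = wa * x + (1 - wa) * T(y)
    return x

T = lambda x: x / 2
# Case (3.2): the T-terms carry weight 0.9; case (3.3): they carry weight 0.1.
err_32 = abs(ishikawa_case(T, 1.0, 0.9, 0.9, 40))
err_33 = abs(ishikawa_case(T, 1.0, 0.9, 0.9, 40, flip_outer=True, flip_inner=True))
```

With these choices case (3.2) contracts the error by \(0.3475\) per step and case (3.3) only by \(0.9475\), matching the ordering in Proposition 3.2.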

There are eight cases for writing the Noor iteration method. Under a similar condition, we show that the coefficient sequences \(\{\alpha_{n}\}_{n\geq0}\), \(\{\beta_{n}\}_{n\geq0}\), and \(\{\gamma_{n}\}_{n\geq0}\) play effective roles in the rate of convergence of the sequence \(\{x_{n}\}_{n\geq0}\) in the Noor iteration method. We enumerate the cases of the Noor iteration method during the proof of our next result.

Theorem 3.1

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\) and p a fixed point of T. Consider the case (2.3) of the Noor iteration method

$$ \left \{ \textstyle\begin{array}{l} x_{n+1} =(1-\alpha_{n})x_{n}+\alpha_{n} Ty_{n}, \\ y_{n} = (1-\beta_{n})x_{n} +\beta_{n}Tz_{n}, \\ z_{n} = (1-\gamma_{n})x_{n} + \gamma_{n}Tx_{n} \end{array}\displaystyle \right . $$

for all \(n\geq0\). If \(1-\alpha_{n} <\alpha_{n}\), \(1-\beta_{n} <\beta_{n}\), and \(1-\gamma_{n}<\gamma_{n}\) for all \(n\geq0\), then the iteration (2.3) is faster than the other possible cases.

Proof

First, we compare the case (2.3) with the following Noor iteration case:

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} = (1-\alpha_{n})u_{n}+\alpha_{n} Tv_{n}, \\ v_{n} = (1-\beta_{n})u_{n} +\beta_{n}Tw_{n}, \\ w_{n} = \gamma_{n}u_{n} +(1- \gamma_{n}) Tu_{n} \end{array}\displaystyle \right . $$
(3.6)

for all \(n\geq0\). Note that

$$\begin{aligned} \Vert z_{n}-p\Vert &= \bigl\Vert (1-\gamma_{n})x_{n} +\gamma _{n}Tx_{n} -p\bigr\Vert \\ &\leq(1-\gamma_{n})\Vert x_{n}-p\Vert + k \gamma_{n} \Vert x_{n}-p\Vert \\ &= \bigl(1-(1-k)\gamma_{n}\bigr) \Vert x_{n}-p\Vert \end{aligned}$$

and

$$\begin{aligned} \Vert y_{n}-p\Vert &= \bigl\Vert (1-\beta_{n})x_{n} +\beta _{n}Tz_{n} -p\bigr\Vert \\ &\leq(1-\beta_{n})\Vert x_{n}-p\Vert + k \beta_{n} \Vert z_{n}-p\Vert \\ &\leq\bigl((1-\beta_{n}) +k \beta_{n} \bigl(1-(1-k) \gamma_{n}\bigr)\bigr) \Vert x_{n}-p\Vert \\ &\leq\bigl[1-\beta_{n}(1-k)-\beta_{n}\gamma_{n} k(1-k)\bigr]\Vert x_{n}-p \Vert \end{aligned}$$

for all \(n\geq0\). Also, we have

$$\begin{aligned} \Vert x_{n+1}-p\Vert &= \bigl\Vert (1-\alpha _{n})x_{n}+ \alpha_{n}Ty_{n} -p\bigr\Vert \\ &\leq(1-\alpha_{n}) \Vert x_{n}-p\Vert + k \alpha_{n} \Vert y_{n}-p\Vert \\ &\leq(1-\alpha_{n} ) \Vert x_{n}-p\Vert + k \alpha_{n} \bigl[1-\beta_{n}(1-k)-\beta_{n} \gamma_{n} k(1-k)\bigr] \Vert x_{n}-p \Vert \\ &\leq\bigl(1-\alpha_{n} + k\alpha_{n} \bigl(1- \beta_{n}(1-k)-\beta_{n}\gamma_{n} k(1-k)\bigr) \bigr)\Vert x_{n}-p\Vert \\ &\leq\bigl(1-\alpha_{n} + k\alpha_{n} -k(1-k) \beta_{n}\alpha_{n}-\alpha_{n}\beta _{n} \gamma_{n} k^{2} (1-k)\bigr) \Vert x_{n}-p\Vert \\ &\leq\bigl(1 - (1-k)\alpha_{n} -k(1-k)\beta_{n} \alpha_{n}-\alpha_{n}\beta _{n}\gamma_{n} k^{2} (1-k)\bigr) \Vert x_{n}-p\Vert \end{aligned}$$

for all \(n\geq0\). Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac {1}{2}, 1)\) for all n, \(-(1-k)< -\alpha_{n}(1-k)< -\frac{1}{2}(1 -k)\), \(-k(1-k)<-\alpha_{n}\beta_{n}k(1-k)< -\frac{1}{4}k(1-k)\), and

$$-k^{2}(1-k) < -\alpha_{n}\beta_{n} \gamma_{n} k^{2} (1-k) < -\frac{1}{8}k^{2}(1-k) $$

for all n. This implies that

$$1 - (1-k)\alpha_{n} -k(1-k)\beta_{n}\alpha_{n}- \alpha_{n}\beta_{n}\gamma_{n} k^{2} (1-k)< 1-\frac{1}{2}(1-k)-\frac{1}{4}k(1-k)-\frac{1}{8}k^{2}(1-k) $$

for all n. Put \(a_{n} = (1-\frac{1}{2}(1-k)-\frac{1}{4}k(1-k) -\frac {1}{8}k^{2}(1-k))^{n}\Vert x_{1}-p\Vert \) for all \(n\geq0\). Now for the sequences \(\{u_{n}\}_{n\geq0}\) with \(u_{1}=x_{1}\) and \(\{v_{n}\} _{n\geq0}\) in (3.6), we have

$$\begin{aligned} \Vert w_{n}-p\Vert &= \bigl\Vert \gamma_{n} u_{n} +(1-\gamma _{n})Tu_{n} -p\bigr\Vert \\ &\leq\gamma_{n}\Vert u_{n}-p\Vert + k(1- \gamma_{n} ) \Vert u_{n}-p\Vert \\ &= \bigl(1-(1-\gamma_{n}) (1-k)\bigr) \Vert u_{n}-p\Vert \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \Vert v_{n}-p\Vert &= \bigl\Vert (1-\beta_{n})u_{n} +\beta _{n}Tw_{n} -p\bigr\Vert \\ &\leq(1-\beta_{n})\Vert u_{n}-p\Vert + k \beta_{n} \Vert w_{n}-p\Vert \\ &\leq\bigl((1-\beta_{n}) + k\beta_{n} \bigl(1-(1- \gamma_{n}) (1-k)\bigr)\bigr) \Vert u_{n}-p\Vert \\ &\leq\bigl(1-\beta_{n} + k\beta_{n} -\beta_{n}(1- \gamma_{n})k(1-k)\bigr) \Vert u_{n}-p\Vert \\ & \leq\bigl(1-\beta_{n} (1-k) -\beta_{n}(1- \gamma_{n})k(1-k)\bigr) \Vert u_{n}-p\Vert \end{aligned} \end{aligned}$$

for all \(n\geq0\). Hence,

$$\begin{aligned} \Vert u_{n+1}-p\Vert &= \bigl\Vert (1-\alpha_{n})u_{n} +\alpha_{n} Tv_{n} -p\bigr\Vert \\ &\leq(1 - \alpha_{n}) \Vert u_{n}-p\Vert + k \alpha_{n} \Vert v_{n}-p\Vert \\ &\leq(1-\alpha_{n}) \Vert u_{n}-p\Vert + k \alpha_{n} \bigl(1-\beta_{n} (1-k) -\beta_{n}(1- \gamma_{n})k(1-k)\bigr)\Vert u_{n}-p \Vert \\ &\leq\bigl( (1-\alpha_{n}) + k\alpha_{n} - \alpha_{n}\beta_{n}k(1-k) - \alpha_{n} \beta_{n}(1- \gamma_{n}) k^{2}(1-k) \bigr)\Vert u_{n}-p\Vert \\ &\leq\bigl( 1-\alpha_{n} (1-k) -\alpha_{n} \beta_{n}k(1-k) - \alpha_{n}\beta _{n}(1-\gamma_{n}) k^{2}(1-k) \bigr)\Vert u_{n}-p\Vert \end{aligned}$$

for all n. Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac{1}{2}, 1)\) for all n, \(-k(1-k)<-\alpha_{n}\beta_{n}k(1-k)< -\frac{1}{4}k (1-k)\) and \(-\frac{1}{2}k^{2}(1-k)< -\alpha_{n}\beta_{n}(1-\gamma _{n})k^{2}(1-k) < 0 \) for all n. Hence,

$$1-\alpha_{n} (1-k) -\alpha_{n}\beta_{n}k(1-k) - \alpha_{n}\beta_{n}(1-\gamma _{n}) k^{2}(1-k)< 1- \frac{1}{2}(1-k) -\frac{1}{4}k(1-k) $$

for all n. Put \(b_{n}=(1-\frac{1}{2}(1-k) -\frac{1}{4}k(1-k))^{n}\| u_{1}-p\|\) for all \(n\geq0\). Then we have

$$\begin{aligned} \lim_{n\to\infty} \frac{a_{n}}{b_{n}}= \frac{(1-\frac {1}{2}(1-k)-\frac{1}{4}k(1-k)-\frac{1}{8}k^{2}(1-k))^{n}\Vert x_{1}-p\Vert }{ {(1-\frac{1}{2}(1-k) -\frac{1}{4}k(1-k))}^{n} \Vert u_{1} -p\Vert }=0. \end{aligned}$$

Thus, \(\{x_{n}\}_{n\geq0}\) converges faster than the sequence \(\{u_{n}\} _{n\geq0}\). Now, we compare the case (2.3) with the following Noor iteration case:

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n} )u_{n} + \alpha_{n}Tv_{n}, \\ v_{n} = \beta_{n} u_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.7)

for all \(n\geq0\). Note that

$$\begin{aligned} \Vert w_{n}-p\Vert &= \bigl\Vert (1-\gamma_{n}) u_{n} +\gamma _{n}Tu_{n} -p\bigr\Vert \\ &\leq(1-\gamma_{n})\Vert u_{n}-p\Vert + k \gamma_{n} \Vert u_{n}-p\Vert \\ &= \bigl(1-(1 - k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert \end{aligned}$$

and

$$\begin{aligned} \Vert v_{n}-p\Vert &= \bigl\Vert \beta_{n}u_{n} +(1-\beta _{n})Tw_{n} -p\bigr\Vert \\ &\leq\beta_{n}\Vert u_{n}-p\Vert + k(1- \beta_{n}) \Vert w_{n}-p\Vert \\ &\leq\bigl(\beta_{n} + k(1-\beta_{n} )-(1-\beta_{n})\gamma_{n} k(1-k)\bigr) \Vert u_{n}-p\Vert \\ &\leq\bigl(1-(1-k) (1-\beta_{n} )-(1-\beta_{n})\gamma_{n} k(1-k)\bigr) \Vert u_{n}-p\Vert \end{aligned}$$

for all \(n\geq0\). Hence,

$$\begin{aligned} \Vert u_{n+1}-p\Vert &= \bigl\Vert (1-\alpha_{n})u_{n} +\alpha_{n} Tv_{n} -p\bigr\Vert \\ &\leq(1 - \alpha_{n}) \Vert u_{n}-p\Vert + k \alpha_{n} \Vert v_{n}-p\Vert \\ &\leq(1-\alpha_{n}) \Vert u_{n}-p\Vert + k \alpha_{n} \bigl(1-(1-k) (1-\beta_{n} )-(1-\beta_{n}) \gamma_{n} k(1-k)\bigr) \Vert u_{n}-p \Vert \\ &\leq\bigl((1-\alpha_{n}) + k\alpha_{n}-k(1-k) \alpha_{n}(1-\beta_{n} )-\alpha _{n}(1-\beta_{n})\gamma_{n} k^{2}(1-k)\bigr) \Vert u_{n}-p\Vert \\ &\leq\bigl(1-(1-k)\alpha_{n} -\alpha_{n}(1- \beta_{n} )k(1-k)-\alpha_{n}(1-\beta_{n})\gamma_{n} k^{2}(1-k) \bigr)\Vert u_{n}-p\Vert \end{aligned}$$

for all \(n\geq0\). Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac {1}{2}, 1)\) for all n, \(-\frac{1}{2}k(1-k)<-\alpha_{n}(1-\beta _{n})k(1-k)< 0 \) and \(-\frac{1}{2}k^{2}(1-k)< -\alpha_{n}(1-\beta_{n})\gamma_{n}k^{2}(1-k) < 0\) and so

$$1-(1-k)\alpha_{n} -\alpha_{n}(1-\beta_{n} )k(1-k)- \alpha_{n}(1-\beta_{n})\gamma _{n} k^{2}(1-k)< 1- \frac{1}{2}(1-k) $$

for all n. Put \(c_{n}=(1-\frac{1}{2}(1-k))^{n}\Vert u_{1}-p\Vert \) for all \(n\geq0\). Then we have

$$ \lim_{n\to\infty}\frac{a_{n}}{c_{n}}= \frac{(1-\frac{1}{2}(1-k)-\frac {1}{4}k(1-k)-\frac{1}{8}k^{2}(1-k))^{n}\Vert x_{1}-p\Vert }{ {(1-\frac{1}{2}(1-k))}^{n} \Vert u_{1} -p\Vert }=0. $$

Thus, \(\{x_{n}\}_{n\geq0}\) converges faster than the sequence \(\{u_{n}\} _{n\geq0}\). Now, we compare the case (2.3) with the following Noor iteration case:

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n}) u_{n} + \alpha_{n}Tv_{n}, \\ v_{n} = \beta_{n}u_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = \gamma_{n}u_{n} +(1-\gamma_{n} )Tu_{n} \end{array}\displaystyle \right . $$
(3.8)

for all \(n\geq0\). Note that

$$\begin{aligned} \Vert w_{n}-p\Vert &= \bigl\Vert \gamma_{n} u_{n} +(1-\gamma _{n})Tu_{n} -p\bigr\Vert \\ &\leq\gamma_{n}\Vert u_{n}-p\Vert + k(1- \gamma_{n} ) \Vert u_{n}-p\Vert \\ &= \bigl(1-(1-\gamma_{n}) (1-k)\bigr) \Vert u_{n}-p\Vert \end{aligned}$$

and

$$\begin{aligned} \Vert v_{n}-p\Vert &= \bigl\Vert \beta_{n}u_{n} +(1-\beta _{n})Tw_{n} -p\bigr\Vert \\ &\leq\beta_{n}\Vert u_{n}-p\Vert + k(1- \beta_{n}) \Vert w_{n}-p\Vert \\ &\leq\bigl(\beta_{n} + k(1-\beta_{n}) \bigl(1-(1- \gamma_{n}) (1-k)\bigr)\bigr) \Vert u_{n}-p\Vert \\ &\leq\bigl(\beta_{n} + k(1-\beta_{n}) -(1-\beta_{n}) (1- \gamma_{n})k(1-k)\bigr) \Vert u_{n}-p\Vert \\ &\leq\bigl(1-(1-\beta_{n}) (1-k) -(1-\beta_{n}) (1- \gamma_{n})k(1-k)\bigr) \Vert u_{n}-p\Vert \end{aligned}$$

and so

$$\begin{aligned} \Vert u_{n+1}-p\Vert &= \bigl\Vert (1-\alpha_{n})u_{n} +\alpha_{n} Tv_{n} -p\bigr\Vert \\ &\leq(1 - \alpha_{n}) \Vert u_{n}-p\Vert + k \alpha_{n} \Vert v_{n}-p\Vert \\ &\leq(1-\alpha_{n}) \Vert u_{n}-p\Vert + k \alpha_{n} \bigl(1-(1-\beta_{n}) (1-k) -(1-\beta_{n}) (1- \gamma_{n})k(1-k)\bigr) \Vert u_{n}-p \Vert \\ &\leq\bigl(1-\alpha_{n} + k\alpha_{n} -\alpha_{n}(1-\beta_{n}) k(1-k) - \alpha _{n}(1-\beta_{n}) (1-\gamma_{n} )k^{2}(1-k) \bigr)\Vert u_{n}-p\Vert \\ &\leq\bigl(1-(1-k)\alpha_{n} -\alpha_{n}(1-\beta_{n}) k(1-k) - \alpha_{n}(1-\beta _{n}) (1-\gamma_{n}) k^{2}(1-k) \bigr)\Vert u_{n}-p\Vert \end{aligned}$$

for all n. Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac{1}{2}, 1)\) for all n, \(-\frac {1}{2}k(1-k)<-\alpha_{n}(1-\beta_{n})k(1-k)< 0\), and \(-\frac{1}{4}k^{2}(1-k)< -\alpha_{n}(1-\beta_{n}) (1-\gamma _{n})k^{2}(1-k) < 0\) for all n. This implies that

$$1-(1-k)\alpha_{n} -\alpha_{n}(1-\beta_{n}) k(1-k)- \alpha_{n}(1-\beta_{n}) (1-\gamma _{n}) k^{2}(1-k)< 1-\frac{1}{2}(1-k) $$

for all n. Put \(d_{n}=(1-\frac{1}{2}(1-k))^{n} \Vert u_{1}-p\Vert \) for all \(n\geq0\). Then we get

$$ \lim_{n\to\infty}\frac{a_{n}}{d_{n}}=\lim_{n\to\infty}\frac{(1-\frac{1}{2}(1-k)-\frac {1}{4}k(1-k)-\frac{1}{8}k^{2}(1-k))^{n}\Vert x_{1}-p\Vert }{( 1-\frac{1}{2}(1-k))^{n} \Vert u_{1} -p \Vert }=0 $$

and so the sequence \(\{x_{n}\}_{n\geq0}\) converges faster than the sequence \(\{u_{n}\}_{n\geq0}\). By using similar proofs, one can show that the case (2.3) is faster than the following cases of the Noor iteration method:

$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} u_{n+1} = \alpha_{n}u_{n} +(1-\alpha_{n}) Tv_{n}, \\ v_{n} = (1-\beta_{n})u_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} +\gamma_{n} Tu_{n} , \end{array}\displaystyle \right . \end{aligned}$$
(3.9)
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} u_{n} +(1- \alpha_{n})Tv_{n}, \\ v_{n} = (1-\beta_{n})u_{n} +\beta_{n}Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n} , \end{array}\displaystyle \right . \end{aligned}$$
(3.10)
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} u_{n+1} = \alpha_{n}u_{n} +(1-\alpha_{n}) Tv_{n}, \\ v_{n} = \beta_{n}u_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} =(1 - \gamma_{n}) u_{n} +\gamma_{n} Tu_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.11)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n}u_{n}+(1-\alpha_{n}) Tv_{n}, \\ v_{n} = \beta_{n} u_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n} \end{array}\displaystyle \right . $$
(3.12)

for all \(n\geq0\). This completes the proof. □

By using similar conditions, one can show that the case (3.7) converges faster than (3.8), (3.9) converges faster than (3.11), (3.11) converges faster than (3.10) and (3.10) converges faster than (3.12).
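
The ordering in Theorem 3.1 can also be observed numerically. Below is a hypothetical illustration with \(Tx = x/2\) and constant coefficients \(\alpha_{n}=\beta_{n}=\gamma_{n}=0.9\) (our own choices), comparing the case (2.3) with the fully swapped case (3.12).

```python
def noor_case(T, x0, a, b, c, steps, flips=(False, False, False)):
    """Noor step; each flip swaps the weights of x_n and the T-term in the
    corresponding line (outer x_{n+1}, middle y_n, inner z_n)."""
    fa, fb, fc = flips
    x = x0
    for _ in range(steps):
        wc = c if fc else (1 - c)   # weight on x_n in z_n
        z = wc * x + (1 - wc) * T(x)
        wb = b if fb else (1 - b)   # weight on x_n in y_n
        y = wb * x + (1 - wb) * T(z)
        wa = a if fa else (1 - a)   # weight on x_n in x_{n+1}
        x = wa * x + (1 - wa) * T(y)
    return x

T = lambda x: x / 2
err_23  = abs(noor_case(T, 1.0, 0.9, 0.9, 0.9, 40))                      # case (2.3)
err_312 = abs(noor_case(T, 1.0, 0.9, 0.9, 0.9, 40, (True, True, True)))  # case (3.12)
```

With these choices case (2.3) contracts the error by roughly \(0.256\) per step, while case (3.12) contracts it only by roughly \(0.947\).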

As we know, the Agarwal iteration method can be written in the following four cases:

$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} x_{n+1} =(1 - \alpha_{n})Tx_{n} + \alpha_{n}Ty_{n}, \\ y_{n} = (1-\beta_{n})x_{n} +\beta_{n}Tx_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.13)
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} x_{n+1} =\alpha_{n} Tx_{n} + (1-\alpha_{n})Ty_{n}, \\ y_{n} = \beta_{n} x_{n} +(1-\beta_{n})Tx_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.14)
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} x_{n+1} = \alpha_{n}Tx_{n} + (1-\alpha_{n})Ty_{n}, \\ y_{n} = (1-\beta_{n})x_{n} +\beta_{n}Tx_{n}, \end{array}\displaystyle \right . \end{aligned}$$
(3.15)

and

$$ \left \{ \textstyle\begin{array}{l} x_{n+1} =(1 - \alpha_{n})Tx_{n} + \alpha_{n}Ty_{n}, \\ y_{n} = \beta_{n} x_{n} +(1-\beta_{n})Tx_{n} \end{array}\displaystyle \right . $$
(3.16)

for all \(n\geq0\). One can easily show that the case (3.13) converges faster than the other ones for contractive maps. We record it as the next lemma.

Lemma 3.1

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\) and p a fixed point of T. If \(1-\alpha_{n}<\alpha_{n}\) and \(1-\beta_{n} <\beta_{n}\) for all \(n\geq0\), then the case (3.13) converges faster than (3.14), (3.15), and (3.16).
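
As a hypothetical numerical check of Lemma 3.1, one can run the case (3.13) against the case (3.15); the concrete map \(Tx = x/2\) and the constant coefficients \(\alpha_{n}=\beta_{n}=0.9\) below are our own illustrative choices.

```python
def agarwal_case(T, x0, a, b, steps, flip_outer=False, flip_inner=False):
    """Agarwal-type step; case (3.13) with no flips. The flips swap the
    weights in the x_{n+1} and y_n lines, respectively."""
    x = x0
    for _ in range(steps):
        wb = b if flip_inner else (1 - b)   # weight on x_n in y_n
        y = wb * x + (1 - wb) * T(x)
        wa = a if flip_outer else (1 - a)   # weight on T(x_n) in x_{n+1}
        x = wa * T(x) + (1 - wa) * T(y)
    return x

T = lambda x: x / 2
err_313 = abs(agarwal_case(T, 1.0, 0.9, 0.9, 40))                   # case (3.13)
err_315 = abs(agarwal_case(T, 1.0, 0.9, 0.9, 40, flip_outer=True))  # case (3.15)
```

Both cases converge, but with these choices case (3.13) contracts the error by \(0.2975\) per step and case (3.15) only by \(0.4775\).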

Also, by using a similar condition, one can show that the case (3.16) converges faster than (3.14). As in Theorem 3.1, one can prove that, for contractive maps, one case of the Abbas iteration method converges faster than the other possible cases whenever the elements of the sequences \(\{\alpha_{n}\}_{n\geq0}\), \(\{\beta_{n}\}_{n\geq0}\), and \(\{\gamma_{n}\}_{n\geq0}\) lie in \((\frac{1}{2}, 1)\) for sufficiently large n. The same holds for the case (2.6) of the Thakur-Thakur-Postolache iteration method. We record these results as follows.

Lemma 3.2

Let C be a nonempty, closed, and convex subset of a Banach space X, \(u_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\), and p a fixed point of T. Consider the following case in the Abbas iteration method:

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tv_{n} + (1-\alpha_{n})Tw_{n}, \\ v_{n} = (1-\beta_{n})Tu_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.17)

for all n. If \(1-\alpha_{n} <\alpha_{n}\), \(1-\beta_{n} <\beta_{n}\), and \(1-\gamma_{n}< \gamma_{n}\) for sufficiently large n, then the case (3.17) converges faster than the other possible cases.

Also by using similar conditions in the Abbas iteration method, one can show that the cases

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tv_{n} + (1-\alpha_{n})Tw_{n}, \\ v_{n} = \beta_{n}Tu_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.18)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tv_{n} + (1-\alpha_{n})Tw_{n}, \\ v_{n} = (1-\beta_{n})Tu_{n} +\beta_{n}Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n} \end{array}\displaystyle \right . $$
(3.19)

converge faster than the case

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tv_{n} + (1-\alpha_{n})Tw_{n}, \\ v_{n} = \beta_{n}Tu_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = \gamma_{n}u_{n} +(1-\gamma_{n} )Tu_{n}. \end{array}\displaystyle \right . $$
(3.20)

Also the case

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} = (1-\alpha_{n} )Tv_{n}+\alpha_{n}Tw_{n}, \\ v_{n} =(1- \beta_{n})Tu_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.21)

converges faster than the cases

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n}) Tv_{n} + \alpha_{n}Tw_{n}, \\ v_{n} = \beta_{n}Tu_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} =(1 - \gamma_{n}) u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.22)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n}) Tv_{n} + \alpha_{n}Tw_{n}, \\ v_{n} = (1-\beta_{n})Tu_{n} +\beta_{n}Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n}, \end{array}\displaystyle \right . $$
(3.23)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n}) Tv_{n} + \alpha_{n}Tw_{n}, \\ v_{n} = \beta_{n}Tu_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n}. \end{array}\displaystyle \right . $$
(3.24)

Lemma 3.3

Let C be a nonempty, closed, and convex subset of a Banach space X, \(u_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\) and p a fixed point of T. If \(1-\alpha_{n} <\alpha_{n}\), \(1-\beta_{n} <\beta_{n}\), and \(1-\gamma_{n}< \gamma_{n}\) for sufficiently large n, then the case (2.6) in the Thakur-Thakur-Postolache iteration method converges faster than the other possible cases.

Also by using similar conditions, one can show that the cases

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n} )Tu_{n} + \alpha_{n}Tv_{n}, \\ v_{n} = \beta_{n} w_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.25)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} = (1-\alpha_{n})Tu_{n} +\alpha_{n} Tv_{n}, \\ v_{n} = (1-\beta_{n})w_{n} +\beta_{n}Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n} \end{array}\displaystyle \right . $$
(3.26)

converge faster than the case

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n}) Tu_{n} + \alpha_{n}Tv_{n}, \\ v_{n} = \beta_{n}w_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = \gamma_{n}u_{n} +(1-\gamma_{n} )Tu_{n}. \end{array}\displaystyle \right . $$
(3.27)

Also the case

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tu_{n} +(1- \alpha_{n})Tv_{n}, \\ v_{n} = (1-\beta_{n} )w_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n}) u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.28)

converges faster than the cases

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tu_{n} +(1- \alpha_{n})Tv_{n}, \\ v_{n} = \beta_{n} w_{n} +(1-\beta_{n}) Tw_{n}, \\ w_{n} =(1- \gamma_{n}) u_{n} +\gamma_{n} Tu_{n} \end{array}\displaystyle \right . $$
(3.29)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n} Tu_{n} +(1- \alpha_{n})Tv_{n}, \\ v_{n} = (1-\beta_{n})w_{n} +\beta_{n}Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n}, \end{array}\displaystyle \right . $$
(3.30)

and

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n}Tu_{n}+(1-\alpha_{n}) Tv_{n}, \\ v_{n} = \beta_{n}w_{n} +(1-\beta_{n})Tw_{n}, \\ w_{n} = \gamma_{n} u_{n} +(1-\gamma_{n}) Tu_{n}. \end{array}\displaystyle \right . $$
(3.31)

Finally, we have a similar situation for the Picard S-iteration, which we record here.

Lemma 3.4

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\) and p a fixed point of T. If \(1-\alpha_{n} <\alpha_{n}\) and \(1-\beta_{n} <\beta_{n}\) for sufficiently large n, then the case (2.7) in the Picard S-iteration method converges faster than the other possible cases.

4 Comparing different iteration methods

In this section, we compare the rates of convergence of some different iteration methods for contractive maps. Our goal is to show that the rate of convergence depends essentially on the coefficients.

Theorem 4.1

Let C be a nonempty, closed, and convex subset of a Banach space X, \(u_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\) and p a fixed point of T. Consider the case (2.5) in the Abbas iteration method

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n}) Tv_{n} + \alpha_{n}Tw_{n}, \\ v_{n} = (1-\beta_{n})Tu_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} + \gamma_{n}Tu_{n}, \end{array}\displaystyle \right . $$

the case (3.17) in the Abbas iteration method

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =\alpha_{n}Tv_{n} +(1-\alpha_{n})Tw_{n}, \\ v_{n} = (1-\beta_{n})Tu_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} + \gamma_{n}Tu_{n}, \end{array}\displaystyle \right . $$

and the case (2.6) in the Thakur-Thakur-Postolache iteration method

$$ \left \{ \textstyle\begin{array}{l} u_{n+1} =(1-\alpha_{n})Tu_{n}+\alpha_{n} Tv_{n}, \\ v_{n} = (1-\beta_{n})w_{n} +\beta_{n}Tw_{n}, \\ w_{n} = (1-\gamma_{n})u_{n} + \gamma_{n}Tu_{n} \end{array}\displaystyle \right . $$

for all \(n\geq0\). If \(1-\alpha_{n} <\alpha_{n}\), \(1-\beta_{n} <\beta_{n}\), and \(1-\gamma_{n}< \gamma_{n}\) for sufficiently large n, then the case (3.17) in the Abbas iteration method converges faster than the case (2.6) in the Thakur-Thakur-Postolache iteration method. Also, the case (2.6) in the Thakur-Thakur-Postolache iteration method is faster than the case (2.5) in the Abbas iteration method.

Proof

Let \(\{u_{n}\}_{n\geq0}\) be the sequence in the case (3.17). Then we have

$$\begin{aligned}& \Vert w_{n}-p\Vert = \bigl\Vert (1-\gamma_{n})u_{n} +\gamma _{n}Tu_{n} -p\bigr\Vert \\& \hphantom{\Vert w_{n}-p\Vert }\leq(1-\gamma_{n})\Vert u_{n}-p\Vert + k \gamma_{n} \Vert u_{n}-p\Vert \\& \hphantom{\Vert w_{n}-p\Vert }= \bigl(1-(1-k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert , \\& \Vert v_{n}-p\Vert = \bigl\Vert (1-\beta_{n})Tu_{n} +\beta _{n}Tw_{n} -p\bigr\Vert \\& \hphantom{\Vert v_{n}-p\Vert }\leq k(1-\beta_{n})\Vert u_{n}-p\Vert + k \beta_{n} \Vert w_{n}-p\Vert \\& \hphantom{\Vert v_{n}-p\Vert }\leq k \bigl[ (1-\beta_{n}) + \beta_{n} \bigl(1-(1-k)\gamma_{n}\bigr)\bigr] \Vert u_{n}-p\Vert \\& \hphantom{\Vert v_{n}-p\Vert }\leq k\bigl[1 - \beta_{n}\gamma_{n}(1-k) \bigr]\Vert u_{n}-p\Vert , \end{aligned}$$

and

$$\begin{aligned} \Vert u_{n+1}-p\Vert &= \bigl\Vert \alpha_{n}Tv_{n} +(1-\alpha_{n})Tw_{n} -p\bigr\Vert \\ &\leq\alpha_{n} k\Vert v_{n}-p\Vert + k (1-\alpha_{n}) \Vert w_{n}-p\Vert \\ &\leq\alpha_{n} k^{2} \bigl(1 - \beta_{n} \gamma_{n} (1-k)\bigr) \Vert u_{n}-p\Vert + k(1 - \alpha_{n}) \bigl(1-(1-k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert \\ &\leq k\bigl[ k\alpha_{n} -\alpha_{n}\beta_{n} \gamma_{n} k(1-k) + (1-\alpha_{n}) \bigl(1-(1-k) \gamma_{n}\bigr)\bigr]\Vert u_{n}-p\Vert \\ &=k\bigl[k\alpha_{n} -\alpha_{n}\beta_{n} \gamma_{n} k(1-k)+1-\alpha_{n} - (1-\alpha_{n}) \gamma_{n}(1-k)\bigr]\Vert u_{n}-p\Vert \\ &=k\bigl[1 -\alpha_{n}(1-k)- (1-\alpha_{n}) \gamma_{n}(1-k)-\alpha_{n}\beta _{n} \gamma_{n} k(1-k) \bigr]\Vert u_{n}-p\Vert \end{aligned}$$

for all n. Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac{1}{2}, 1)\) for sufficiently large n, we have

$$-(1-k)< -\alpha_{n}(1-k)< -\frac{1}{2}(1-k), $$

\(-\frac{1}{2}(1-k)<-(1-\alpha_{n})\gamma_{n}(1-k)<0\), and \(-k(1-k)<-\alpha _{n}\beta_{n}\gamma_{n} k (1-k) <-\frac{1}{8}k(1-k)\) for sufficiently large n. Hence,

$$1-\alpha_{n}(1-k)-(1-\alpha_{n})\gamma_{n}(1-k)- \alpha_{n}\beta_{n}\gamma_{n} k(1-k)< 1- \frac{1}{2}(1-k)-\frac{1}{8}k(1-k) $$

for sufficiently large n. Put \(a_{n} =k^{n} (1-\frac{1}{2}(1-k)-\frac {1}{8}k(1-k))^{n}\Vert u_{1}-p\Vert \) for all n. Now, let \(\{ u_{n}\}_{n\geq0}\) be the sequence in the case (2.6). Then we have

$$\begin{aligned}& \Vert w_{n}-p\Vert = \bigl\Vert (1-\gamma_{n})u_{n} +\gamma _{n}Tu_{n} -p\bigr\Vert \\& \hphantom{\Vert w_{n}-p\Vert } \leq(1-\gamma_{n})\Vert u_{n}-p\Vert + k \gamma_{n} \Vert u_{n}-p\Vert \\& \hphantom{\Vert w_{n}-p\Vert } = \bigl(1-(1-k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert , \\& \Vert v_{n}-p\Vert = \bigl\Vert (1-\beta_{n})w_{n} +\beta _{n}Tw_{n} -p\bigr\Vert \\& \hphantom{\Vert v_{n}-p\Vert } \leq(1-\beta_{n})\Vert w_{n}-p\Vert + k \beta_{n} \Vert w_{n}-p\Vert \\& \hphantom{\Vert v_{n}-p\Vert } \leq\bigl[(1-\beta_{n}) + k\beta_{n}\bigr] \bigl(1-(1-k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert \\& \hphantom{\Vert v_{n}-p\Vert } = \bigl[1-\beta_{n}(1-k)\bigr] \bigl[1- \gamma_{n}(1-k)\bigr]\Vert u_{n}-p\Vert , \end{aligned}$$

and

$$\begin{aligned} \Vert u_{n+1}-p\Vert =& \bigl\Vert (1-\alpha _{n})Tu_{n}+\alpha_{n}Tv_{n} -p\bigr\Vert \\ \leq&(1-\alpha_{n}) k\Vert u_{n}-p\Vert + k \alpha_{n} \Vert v_{n}-p\Vert \\ \leq& k(1-\alpha_{n} ) \Vert u_{n}-p\Vert + k \alpha_{n} \bigl[1-\beta_{n}(1-k)\bigr] \bigl[1- \gamma_{n}(1-k)\bigr] \Vert u_{n}-p\Vert \\ \leq& k\bigl[ 1-\alpha_{n} + \alpha_{n} \bigl(1- \beta_{n}(1-k)\bigr) \bigl(1-\gamma _{n}(1-k)\bigr)\bigr] \Vert u_{n}-p\Vert \\ \leq& k\bigl[ 1-\alpha_{n} + \bigl(\alpha_{n} -(1-k) \beta_{n}\alpha_{n}\bigr) \bigl((1-\gamma _{n})+k \gamma_{n}\bigr)\bigr] \Vert u_{n}-p\Vert \\ \leq& k\bigl[ 1-\alpha_{n} + \alpha_{n} (1- \gamma_{n}) + \alpha_{n}\gamma_{n} k - \beta_{n}\alpha_{n}(1-\gamma_{n}) (1-k) \\ &{} -\alpha_{n}\beta_{n}\gamma_{n} k(1-k)\bigr] \Vert u_{n}-p\Vert \\ \leq& k\bigl[ 1 -\alpha_{n}\gamma_{n}(1-k) - \alpha_{n}\beta_{n}(1-\gamma _{n}) (1-k)- \alpha_{n}\beta_{n}\gamma_{n}k(1-k)\bigr] \Vert u_{n}-p \Vert \end{aligned}$$

for all n. Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac{1}{2}, 1)\) for sufficiently large n, we have

$$-(1-k)< -\alpha_{n}\gamma_{n}(1-k)< -\frac{1}{4}(1-k), $$

\(-\frac{1}{2}(1-k)<-\alpha_{n}\beta_{n}(1-\gamma_{n})(1-k)<0\), and \(-k(1-k)<-\alpha_{n}\beta_{n}\gamma_{n} k (1-k) <-\frac{1}{8}k(1-k)\) for sufficiently large n. Hence,

$$1-\alpha_{n}\gamma_{n}(1-k)-\alpha_{n} \beta_{n}(1-\gamma_{n}) (1-k)-\alpha _{n} \beta_{n}\gamma_{n}k(1-k)< 1-\frac{1}{4}(1-k) - \frac{1}{8}k(1-k) $$

for sufficiently large n. Put \(b_{n} =k^{n} (1-\frac{1}{4}(1-k) -\frac {1}{8}k(1-k))^{n}\Vert u_{1}-p\Vert \) for all n. Then

$$ \lim_{n\to\infty}\frac{a_{n}}{b_{n}}=\frac{k^{n}(1-\frac{1}{2}(1-k) -\frac{1}{8}k(1-k))^{n}\Vert u_{1}-p\Vert }{k^{n} (1-\frac {1}{4}(1-k) -\frac{1}{8}k(1-k))^{n} \Vert u_{1} -p\Vert }=0. $$

Thus, the case (3.17) in the Abbas iteration method converges faster than the case (2.6) in the Thakur-Thakur-Postolache iteration method.

Now for the case (2.5), we have

$$\begin{aligned}& \Vert w_{n}-p\Vert = \bigl\Vert (1-\gamma_{n}) u_{n} +\gamma _{n}Tu_{n} -p\bigr\Vert \\& \hphantom{\Vert w_{n}-p\Vert } \leq(1-\gamma_{n})\Vert u_{n}-p\Vert + k \gamma_{n} \Vert u_{n}-p\Vert \\& \hphantom{\Vert w_{n}-p\Vert } = \bigl(1-(1-k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert , \\& \Vert v_{n}-p\Vert = \bigl\Vert (1-\beta_{n})Tu_{n} +\beta _{n}Tw_{n} -p\bigr\Vert \\& \hphantom{\Vert v_{n}-p\Vert } \leq k(1-\beta_{n})\Vert u_{n}-p\Vert + k\beta_{n} \Vert w_{n}-p\Vert \\& \hphantom{\Vert v_{n}-p\Vert } \leq k \bigl[ (1-\beta_{n}) + \beta_{n} \bigl(1 -(1-k)\gamma_{n}\bigr)\bigr] \Vert u_{n}-p\Vert \\& \hphantom{\Vert v_{n}-p\Vert } \leq k\bigl[1 - \beta_{n}\gamma_{n}(1-k) \bigr]\Vert u_{n}-p\Vert , \end{aligned}$$

and

$$\begin{aligned} \Vert u_{n+1}-p\Vert &= \bigl\Vert (1-\alpha_{n})Tv_{n} +\alpha_{n}Tw_{n} -p\bigr\Vert \\ &\leq(1-\alpha_{n}) k\Vert v_{n}-p\Vert + k \alpha_{n} \Vert w_{n}-p\Vert \\ &\leq(1 - \alpha_{n}) k^{2} \bigl(1-\beta_{n} \gamma_{n} (1-k)\bigr) \Vert u_{n}-p\Vert + k \alpha_{n} \bigl(1-(1-k)\gamma_{n}\bigr) \Vert u_{n}-p\Vert \\ &\leq k\bigl[(1- \alpha_{n})k -(1-\alpha_{n}) \beta_{n}\gamma_{n} k(1-k) + \alpha _{n} - \alpha_{n}\gamma_{n}(1-k)\bigr]\Vert u_{n}-p \Vert \\ &\leq k\bigl[1-(1-\alpha_{n}) (1-k) - \alpha_{n} \gamma_{n}(1-k)-(1-\alpha _{n})\beta_{n} \gamma_{n} k(1-k)\bigr]\Vert u_{n}-p\Vert \end{aligned}$$

for all n. Since \(\alpha_{n}, \beta_{n}, \gamma_{n}\in(\frac{1}{2}, 1)\) for sufficiently large n, \(-\frac{1}{2}(1-k)<-(1-\alpha _{n})(1-k)< 0\), \(-(1-k)<-\alpha_{n}\gamma_{n}(1-k)<-\frac{1}{4}(1-k)\), and \(-\frac {1}{2}k(1-k)<-(1-\alpha_{n})\beta_{n}\gamma_{n} k (1-k) <0\) for sufficiently large n. Hence,

$$1-(1-\alpha_{n}) (1-k)-\alpha_{n}\gamma_{n}(1-k)-(1- \alpha_{n})\beta _{n}\gamma_{n} k(1-k)< 1- \frac{1}{4}(1-k) $$

for sufficiently large n. Put \(c_{n} =k^{n} (1-\frac{1}{4}(1-k) )^{n} \Vert u_{1}-p\Vert \) for all n. Then we have

$$ \lim_{n\to\infty}\frac{b_{n}}{c_{n}}=\frac{k^{n}(1-\frac {1}{4}(1-k)-\frac{1}{8}k(1-k))^{n}\Vert u_{1}-p\Vert }{k^{n}(1-\frac{1}{4}(1-k))^{n} \Vert u_{1}-p\Vert }=0 $$

and so the case (2.6) in the Thakur-Thakur-Postolache iteration method is faster than the case (2.5) in the Abbas iteration method. □
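The ordering in Theorem 4.1 can be observed numerically. For a linear contraction the per-step error factors coincide exactly with the bracketed expressions computed in the proof, so case (3.17) should produce the smallest error, then (2.6), then (2.5). The map \(T(x)=x/2+1\), the constant coefficients, and the starting point below are illustrative choices, not taken from the paper.

```python
# Numerical sketch of Theorem 4.1: run cases (3.17), (2.6), and (2.5) side by
# side for a linear contraction (k = 1/2, fixed point p = 2) and compare errors.
# T, the coefficients, and the starting point are illustrative assumptions.

def step_317(T, u, a, b, g):  # Abbas case (3.17)
    w = (1 - g) * u + g * T(u)
    v = (1 - b) * T(u) + b * T(w)
    return a * T(v) + (1 - a) * T(w)

def step_26(T, u, a, b, g):   # Thakur-Thakur-Postolache case (2.6)
    w = (1 - g) * u + g * T(u)
    v = (1 - b) * w + b * T(w)
    return (1 - a) * T(u) + a * T(v)

def step_25(T, u, a, b, g):   # Abbas case (2.5)
    w = (1 - g) * u + g * T(u)
    v = (1 - b) * T(u) + b * T(w)
    return (1 - a) * T(v) + a * T(w)

T, p = (lambda x: 0.5 * x + 1.0), 2.0
a = b = g = 0.8               # all coefficients > 1/2, as the theorem requires
u317 = u26 = u25 = 20.0
for _ in range(15):
    u317 = step_317(T, u317, a, b, g)
    u26 = step_26(T, u26, a, b, g)
    u25 = step_25(T, u25, a, b, g)
errs = [abs(u - p) for u in (u317, u26, u25)]
# errs should be strictly increasing: (3.17) fastest, then (2.6), then (2.5)
```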

By using a similar proof, we can compare the Thakur-Thakur-Postolache and the Agarwal iteration methods as follows.

Theorem 4.2

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\) and p a fixed point of T. If \(1-\alpha_{n} <\alpha_{n}\), \(1-\beta_{n} <\beta_{n}\), and \(1-\gamma_{n}< \gamma_{n}\) for sufficiently large n, then the case (2.6) in the Thakur-Thakur-Postolache iteration method converges faster than the case (2.4) in the Agarwal iteration method and the case (2.4) in the Agarwal iteration method is faster than the cases (3.29) and (3.30) in the Thakur-Thakur-Postolache iteration method.

Also, by using similar proofs, we can compare some other iteration methods; we record the results as follows.

Theorem 4.3

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\), and p a fixed point of T. If \(1-\alpha_{n} <\alpha_{n}\), \(1-\beta_{n} <\beta_{n}\), and \(1-\gamma_{n}< \gamma_{n}\) for sufficiently large n, then the case (2.3) in the Abbas iteration method converges faster than the case (2.2) in the Ishikawa iteration method and the case (2.2) in the Ishikawa iteration method is faster than the cases (3.11) and (3.12) in the Abbas iteration method.

It is notable that there are some cases in which the coefficients play no effective role in the rate of convergence. By using similar proofs, one can check the next result, and one can obtain further similar cases. This shows that researchers should pay close attention to the possible effect of the coefficients on the rate of convergence of iteration methods.

Theorem 4.4

Let C be a nonempty, closed, and convex subset of a Banach space X, \(x_{1}\in C\), \(T\colon C \to C \) a contraction with constant \(k\in(0, 1)\), p a fixed point of T, and \(\alpha_{n}, \beta_{n}, \gamma_{n} \in(0, 1)\) for all \(n\geq0\). Then each of the cases (2.4) in the Agarwal iteration method, (2.5) in the Abbas iteration method, and (2.6) in the Thakur-Thakur-Postolache iteration method is faster than the case (2.1) in the Mann iteration method, and each of these three cases is also faster than the case (2.2) in the Ishikawa iteration method.
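A numerical sketch of one of these comparisons (Agarwal vs. Mann). We assume the standard forms \(u_{n+1}=(1-\alpha_{n})u_{n}+\alpha_{n}Tu_{n}\) for Mann and \(u_{n+1}=(1-\alpha_{n})Tu_{n}+\alpha_{n}Tv_{n}\), \(v_{n}=(1-\beta_{n})u_{n}+\beta_{n}Tu_{n}\) for Agarwal; the paper's cases (2.1) and (2.4) may arrange the coefficients differently. The contraction, coefficients, and starting point are illustrative.

```python
# Sketch of one comparison in Theorem 4.4: Agarwal-type S-iteration vs Mann,
# for an illustrative linear contraction (k = 1/2, fixed point p = 2).
# The exact coefficient placement in the paper's cases (2.1)/(2.4) is assumed.

T, p = (lambda x: 0.5 * x + 1.0), 2.0
a, b = 0.7, 0.7
um = ua = 20.0
for _ in range(15):
    um = (1 - a) * um + a * T(um)       # Mann step
    v = (1 - b) * ua + b * T(ua)
    ua = (1 - a) * T(ua) + a * T(v)     # Agarwal step
```

After the same number of steps the Agarwal iterate sits much closer to the fixed point, consistent with the theorem.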

5 Examples and figures

In this section, we provide some examples to illustrate our results.

Example 1

Let \(X = \mathbb{R}\), \(C=[1, 60]\), \(x_{0}=20\), \(\alpha_{n}=0.7\), and \(\beta_{n}=0.85\) for all \(n\geq0\). Define the map \(T\colon C \to C\) by the formula \(T(x)=(3x+18)^{\frac{1}{3}}\) for all \(x\in C\). It is easy to see that T is a contraction. In Tables 1-3, we compare two cases of the Mann iteration method and four cases each of the Ishikawa and Agarwal iteration methods. From a mathematical point of view, the Mann iteration (3.1) is more than 2.82 times faster than the Mann iteration (2.1); the Ishikawa iteration (3.2) is more than 1.07 times faster than the Ishikawa iteration (3.4), more than 11.33 times faster than the Ishikawa iteration (3.3), and more than 11 times faster than the Ishikawa iteration (3.5); the Ishikawa iteration (3.4) is more than 8.75 times faster than the Ishikawa iteration (3.5); and the Agarwal iteration (3.13) is 1.22 times faster than the Agarwal iteration (3.14), 1.11 times faster than the Agarwal iteration (3.15), and 1.22 times faster than the Agarwal iteration (3.16). We also record our CPU time in Tables 1-3 for each iteration method, and we provide Figure 1 by computing the CPU time at least 30 times for the faster cases in each method. From a computational point of view, we get a different answer: as the CPU time table shows, the Agarwal iteration (3.13) and the Mann iteration (3.1) are faster than the Ishikawa iteration (3.2). This emphasizes the difference between mathematical results and computational results, which has appeared many times in the literature.

Figure 1
figure 1

CPU time.

Table 1 Cases of Mann iteration
Table 2 Cases of Ishikawa iteration
Table 3 Cases of Agarwal iteration
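The Mann computation of Example 1 can be sketched as follows, assuming the standard Mann form \(x_{n+1}=(1-\alpha_{n})x_{n}+\alpha_{n}Tx_{n}\) (the paper's cases (2.1) and (3.1) may place the coefficients differently). Since the fixed point solves \(x=(3x+18)^{1/3}\), i.e. \(x^{3}-3x-18=0\), we have \(p=3\).

```python
# Sketch of Example 1: Mann-type iteration for T(x) = (3x + 18)^(1/3) on [1, 60]
# with x0 = 20 and alpha_n = 0.7, converging to the fixed point p = 3.
# The standard Mann form below is an assumption about the paper's case (2.1).

T = lambda x: (3 * x + 18) ** (1 / 3)
x, a = 20.0, 0.7
for n in range(50):
    x = (1 - a) * x + a * T(x)   # Mann step: convex combination of x_n and T(x_n)
```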

The next example illustrates Lemma 3.2.

Example 2

Let \(X = \mathbb{R}\), \(C = [0, 2000]\), \(x_{0}=1000\), \(\alpha_{n}=0.85\), \(\beta_{n}=0.65\), and \(\gamma_{n}=0.75\) for all \(n\geq0\). Define the map \(T\colon C\to C\) by the formula \(T(x) =\sqrt[3]{x^{2}}\) for all \(x\in C\). Table 4 shows that the Abbas iteration (3.17) converges faster than the other cases, the Abbas iteration (3.18) is 1.1 times faster than the Abbas iteration (3.20), the Abbas iteration (3.19) is 1.05 times faster than the Abbas iteration (3.20), and the Abbas iteration (3.21) is 1.04 times faster than the Abbas iteration (3.22) and 1.3 times faster than the Abbas iterations (3.23) and (3.24). One can draw similar conclusions about the difference between the mathematical and computational points of view for this example.

Table 4 Cases of Abbas iteration
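A sketch of the computation behind Example 2 for the case (3.17). Note that \(T(x)=\sqrt[3]{x^{2}}\) has the fixed point \(p=1\) on \((0, 2000]\); iterates started at \(x_{0}=1000\) stay at or above 1, where \(|T'(x)|\le 2/3\), so they converge to 1. The step count below is our own choice.

```python
# Sketch of Example 2: Abbas-type case (3.17) for T(x) = x^(2/3) on [0, 2000]
# with the stated coefficients, starting from u = 1000 and converging to p = 1.

T = lambda x: x ** (2 / 3)
a, b, g = 0.85, 0.65, 0.75
u = 1000.0
for n in range(100):
    w = (1 - g) * u + g * T(u)
    v = (1 - b) * T(u) + b * T(w)
    u = a * T(v) + (1 - a) * T(w)
```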

The next example illustrates Theorem 3.1.

Example 3

Let \(X = \mathbb{R}\), \(C = [1, 60]\), \(x_{0}=40\), \(\alpha_{n}=0.9\), \(\beta _{n}=0.6\), and \(\gamma_{n}=0.8\) for all \(n\geq0\). Define the map \(T\colon C\to C\) by \(T(x) =\sqrt{x^{2}-8x+40}\) for all \(x\in C\) (see [23]). Table 5 shows that, from the mathematical point of view, the Abbas iteration (3.17) converges 1.09 times faster than the Thakur-Thakur-Postolache iteration (2.6), and the Thakur-Thakur-Postolache iteration (2.6) is 1.16 times faster than the Abbas iteration (2.5). Again, we get different results from the computational point of view, as one can check in Table 5 and Figures 2 and 3.

Figure 2
figure 2

Convergence behavior of the iteration methods of Thakur equation (2.6), Abbas equation (2.5), and Abbas equation (3.17).

Figure 3
figure 3

CPU time.

Table 5 Comparison between Thakur iteration and Abbas iteration
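A sketch of the comparison in Example 3. The fixed point of \(T(x)=\sqrt{x^{2}-8x+40}\) solves \(x^{2}=x^{2}-8x+40\), so \(p=5\). We run the Abbas case (3.17) and the Thakur-Thakur-Postolache case (2.6) from the same starting point and record the errors after ten steps; the step counts are our own choices.

```python
# Sketch of Example 3: compare the Abbas case (3.17) with the Thakur case (2.6)
# for T(x) = sqrt(x^2 - 8x + 40) on [1, 60], fixed point p = 5, x0 = 40.

import math

T = lambda x: math.sqrt(x * x - 8 * x + 40)
a, b, g = 0.9, 0.6, 0.8
u317 = u26 = 40.0
errs = {}
for n in range(60):
    w = (1 - g) * u317 + g * T(u317)
    v = (1 - b) * T(u317) + b * T(w)
    u317 = a * T(v) + (1 - a) * T(w)      # Abbas case (3.17)
    w = (1 - g) * u26 + g * T(u26)
    v = (1 - b) * w + b * T(w)
    u26 = (1 - a) * T(u26) + a * T(v)     # Thakur case (2.6)
    if n == 9:                            # record errors after ten steps
        errs = {"abbas": abs(u317 - 5), "thakur": abs(u26 - 5)}
```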

The next example shows that choosing the coefficients is very important in the rate of convergence of an iteration method.

Example 4

Let \(X=\mathbb{R}\), \(C=[0, 30]\), and \(x_{0}=20\). Define the map \(T\colon C\to C\) by \(T(x) = \frac{x}{2}+1\) for all \(x\in C\). Consider the following coefficients separately in the Thakur-Thakur-Postolache iteration (2.6):

(a) \(\alpha_{n}=\beta_{n}=\gamma_{n}= 1-\frac{1}{(n+1)^{10}}\),

(b) \(\alpha_{n}=\beta_{n}=\gamma_{n}= 1-\frac{1}{n+1}\),

(c) \(\alpha_{n}=\beta_{n}=\gamma_{n}= 1-\frac{1}{(n+1)^{\frac{1}{2}}}\),

(d) \(\alpha_{n}=\beta_{n}=\gamma_{n}= 1-\frac{1}{(n+1)^{\frac{1}{5}}}\)

for all \(n\geq0\). Table 6 shows that the Thakur-Thakur-Postolache iteration (2.6) with coefficients (a) is 1.25 times faster than with coefficients (b), 1.6 times faster than with coefficients (c), and 2.16 times faster than with coefficients (d). Of course, a similar observation holds for the other iteration methods from the mathematical point of view. Here we find a slightly different computational result in the CPU time table for this example, which one can check in Figure 4.

Figure 4
figure 4

CPU time.

Table 6 Cases of Thakur iteration
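A sketch of Example 4, running the Thakur-Thakur-Postolache case (2.6) under the four coefficient families (a)-(d). Since \(T(x)=x/2+1\) is linear, coefficients closer to 1 give a smaller per-step error factor, so family (a) should be fastest; the step count below is our own choice.

```python
# Sketch of Example 4: Thakur-Thakur-Postolache case (2.6) for T(x) = x/2 + 1
# (fixed point p = 2) under the four coefficient families of the example.

T = lambda x: x / 2 + 1

def run_26(coeff, steps, x0=20.0):
    """Run case (2.6) with alpha_n = beta_n = gamma_n = coeff(n); return the error."""
    u = x0
    for n in range(steps):
        c = coeff(n)
        w = (1 - c) * u + c * T(u)
        v = (1 - c) * w + c * T(w)
        u = (1 - c) * T(u) + c * T(v)
    return abs(u - 2.0)

families = {
    "a": lambda n: 1 - 1 / (n + 1) ** 10,
    "b": lambda n: 1 - 1 / (n + 1),
    "c": lambda n: 1 - 1 / (n + 1) ** 0.5,
    "d": lambda n: 1 - 1 / (n + 1) ** 0.2,
}
errs = {name: run_26(f, 10) for name, f in families.items()}
# errors should increase from (a) through (d)
```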