
Journal of Mathematical Chemistry, Volume 56, Issue 7, pp 2117–2131

Ball convergence of a sixth-order Newton-like method based on means under weak conditions

  • Á. A. Magreñán
  • I. K. Argyros
  • J. J. Rainer
  • J. A. Sicilia
Original Paper

Abstract

We study the local convergence of a Newton-like method of convergence order six to approximate a locally unique solution of a nonlinear equation. Earlier studies show convergence under hypotheses on the seventh derivative or even higher, although only the first derivative appears in these methods. The convergence in this study is shown under hypotheses on the first derivative alone. Hence, the applicability of the method is expanded. Finally, we solve the problem of the fractional conversion in the ammonia process, showing the applicability of the theoretical results.

Keywords

Newton-like method · Local convergence · Stolarsky means · Gini means · Efficiency index

Mathematics Subject Classification

65D10 65D99 65G99 90C30 

1 Introduction

In this study we are concerned with the problem of approximating a locally unique solution \(\xi \) of equation
$$\begin{aligned} F(x) =0, \end{aligned}$$
(1.1)
where F is a differentiable function defined on a convex subset D of S with values in S, where S is \(\mathbb {R}\) or \(\mathbb {C}\).

Many problems from applied sciences, including engineering, can be solved by finding the solutions of equations of the form (1.1) using mathematical modelling [4, 5, 31, 34]. For example, dynamic systems are mathematically modeled by difference or differential equations, and their solutions usually represent the states of the systems. Except in special cases, the solutions of these equations cannot be found in closed form. This is the main reason why the most commonly used solution methods are iterative. The convergence analysis of iterative methods is usually divided into two categories: semilocal and local convergence analysis. The semilocal convergence analysis gives criteria, based on information around an initial point, ensuring the convergence of the iteration procedure. A very important problem in the study of iterative procedures is the convergence domain. In general, the convergence domain is small. Therefore, it is important to enlarge the convergence domain without additional hypotheses. Another important problem is to find more precise error estimates on the distances \(\Vert x_{n+1}-x_n\Vert , \Vert x_n- \xi \Vert \).

The most popular method for approximating a simple solution \(\xi \) of equation (1.1) is undoubtedly Newton’s method defined for all \(n=0,1,2,\ldots \) by
$$\begin{aligned} x_{n+1}=x_n-\dfrac{F(x_n)}{F'(x_n)}, \end{aligned}$$
(1.2)
where \(x_0\) is an initial point. Newton’s method converges quadratically to \(\xi \) [4, 5] provided that \(F'\) does not vanish in D and \(x_0\) is close enough to \(\xi \). To obtain a higher order of convergence, many third-order methods have been proposed [1–35].
These methods look like
$$\begin{aligned} x_{n+1} = x_n-\dfrac{F(x_n)}{F'(x_n)}{\varGamma }(s(x_n)), \end{aligned}$$
(1.3)
where
$$\begin{aligned} s = s(x) = \dfrac{{F'\left( {x - \frac{{F(x)}}{{F'(x)}}} \right) }}{{F'(x)}} \end{aligned}$$
and only differ in the choice of function \({\varGamma }\).
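In scalar form, the whole family (1.3) fits in a few lines. The following Python sketch is our own illustration (the names mean_newton_step, F, dF and gamma are placeholders for the function, its derivative and the weight \({\varGamma }\)); with gamma = lambda s: 2/(1+s) it performs one step of the Weerakoon–Fernando method mentioned below.

    def mean_newton_step(F, dF, x, gamma):
        # one step of family (1.3): x - gamma(s(x)) * F(x)/F'(x)
        newton_correction = F(x) / dF(x)
        s = dF(x - newton_correction) / dF(x)   # s(x) as defined above
        return x - gamma(s) * newton_correction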
If
$$\begin{aligned} {\varGamma }(s)= \dfrac{2}{{1 + s}}, \end{aligned}$$
(1.4)
in the method (1.3) we obtain a third order method from Weerakoon and Fernando [3]. If
$$\begin{aligned} {\varGamma }(s) = \left\{ \begin{array}{l@{\quad }l} \root q - p \of {{\frac{{q\left( {1 - {s^p}} \right) }}{{p\left( {1 - {s^q}} \right) }}}}, &{} qp\left( {q - p} \right) \ne 0,\\ \exp \left( {\frac{1}{q} + \frac{{{s^q}\ln s}}{{1 - {s^q}}}} \right) , &{} q = p \not = 0,\\ \root q \of {{\frac{{q\ln s}}{{{s^q} - 1}}}}, &{} q \ne 0,p = 0,\\ \frac{1}{{\sqrt{s} }}, &{} q = p = 0,\\ \end{array} \right. \end{aligned}$$
(1.5)
in method (1.3), then we obtain the Stolarsky mean Newton’s method (SN). The SN method contains the p-power mean method for \(q=2p\); the geometric (\(p=q=0\)), logarithmic (\(q=1\), \(p=0\)), identric (\(q=p=1\)) and arithmetic (\(q=2\), \(p=1\)) mean methods are obtained as special cases. Notice that the SN method is not defined for \(q=0\), \(p \ne 0\). If
$$\begin{aligned} {\varGamma }(s) = \left\{ \begin{array}{l@{\quad }l} \root p - q \of {{\frac{{1 + {s^q}}}{{1 + {s^p}}}}} &{} q \ne p,\\ \exp \left( { - \frac{{{s^q}\ln s}}{{1 - {s^q}}}} \right) &{} q = p \ne 0,\\ \frac{1}{{\sqrt{s} }} &{} q = p = 0,\\ \end{array} \right. \end{aligned}$$
(1.6)
in method (1.3), we obtain the Gini mean Newton’s method (GN). The GN method contains the p-power mean Newton’s method [9] if \(q=0\) and \(p \ne 0\). The arithmetic (\(q=0\), \(p=1\)), harmonic (\(q=0\), \(p=-1\)) and geometric (\(q=0\), \(p=0\)) mean Newton’s methods are special cases of the GN method. In order to avoid \(\exp \), \(\ln \) or mth roots, we can replace (1.5) and (1.6), respectively, by the second degree Taylor polynomials of \({\varGamma }\) in s about \(s=1\), defined by
$$\begin{aligned} {\varGamma }(s) = 1 + \frac{{1 - s}}{2} + \frac{{{{\left( {1 - s} \right) }^2}}}{{24}}\left( {9 - p - q} \right) , \end{aligned}$$
(1.7)
and
$$\begin{aligned} {\varGamma }(s) =1 + \dfrac{{1 - s}}{2} + \frac{{{{\left( {1 - s} \right) }^2}}}{8}\left( {3 - p - q} \right) , \end{aligned}$$
(1.8)
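For instance, for \(q=2\) and \(p=1\) the Stolarsky weight (1.5) reduces to the Weerakoon–Fernando weight (1.4), since
$$\begin{aligned} {\varGamma }(s)=\left( \dfrac{q\left( 1-s^p\right) }{p\left( 1-s^q\right) }\right) ^{\frac{1}{q-p}}=\dfrac{2(1-s)}{1-s^2}=\dfrac{2}{1+s}. \end{aligned}$$
The weights (1.5)–(1.8) are straightforward to evaluate numerically. The following Python sketch is our own illustration (the function names are ours, not from the original papers), valid for real \(s>0\); the removable singularity of (1.5) and (1.6) at \(s=1\) is not treated here.

    import math

    def gamma_stolarsky(s, p, q):
        # Stolarsky-mean weight (1.5); not defined for q = 0, p != 0
        if q * p * (q - p) != 0:
            return (q * (1 - s ** p) / (p * (1 - s ** q))) ** (1.0 / (q - p))
        if q == p != 0:
            return math.exp(1.0 / q + s ** q * math.log(s) / (1 - s ** q))
        if q != 0 and p == 0:
            return (q * math.log(s) / (s ** q - 1)) ** (1.0 / q)
        if q == p == 0:
            return 1.0 / math.sqrt(s)
        raise ValueError("(1.5) is not defined for q = 0, p != 0")

    def gamma_gini(s, p, q):
        # Gini-mean weight (1.6)
        if q != p:
            return ((1 + s ** q) / (1 + s ** p)) ** (1.0 / (p - q))
        if q == p != 0:
            return math.exp(-s ** q * math.log(s) / (1 - s ** q))
        return 1.0 / math.sqrt(s)            # q = p = 0

    def gamma_taylor_sn(s, p, q):
        # second degree Taylor substitute (1.7) for the Stolarsky weight
        return 1 + (1 - s) / 2 + (1 - s) ** 2 * (9 - p - q) / 24

    def gamma_taylor_gn(s, p, q):
        # second degree Taylor substitute (1.8) for the Gini weight
        return 1 + (1 - s) / 2 + (1 - s) ** 2 * (3 - p - q) / 8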
Recently the method defined for all \(n=0,1,2,\ldots \) by
$$\begin{aligned} {x_{n + 1}} = {x_n} - B({x_n})\frac{{F({x_n})}}{{F'({x_n})}}, \end{aligned}$$
(1.9)
where
$$\begin{aligned} B({x_0}) = {\varGamma } \left( {s({x_0})} \right) + \frac{{F\left( {{x_0} - \frac{{F({x_0})}}{{F'({x_0})}}\Gamma \left( {s({x_0})} \right) } \right) \left[ {F'\left( {{x_0} - \frac{{F({x_0})}}{{F'({x_0})}}} \right) + F'({x_0})} \right] }}{{F({x_0})\left[ {3F'\left( {{x_0} - \frac{{F({x_0})}}{{F'({x_0})}}} \right) - F'({x_0})} \right] }} \end{aligned}$$
was studied in [19], where the function \({\varGamma }\) is given by (1.5), (1.6), (1.7) or (1.8). The sixth order of convergence was shown under hypotheses reaching up to the seventh derivative of the function F. In terms of computational cost, method (1.9) requires two function evaluations and two first derivative evaluations per iteration. Therefore, the efficiency index is \(6^{1/4} \approx 1.56508\), provided that evaluations of functions such as \(\exp \), \(\ln \) or mth roots are neglected.

Notice that, in particular, there is a plethora of iterative methods for approximating solutions of nonlinear equations [4, 5]. These results show that if the initial point \(x_0\) is sufficiently close to the solution \(\xi \), then the sequence \(\{x_n\}\) converges to \(\xi \). But how close to the solution \(\xi \) should the initial guess \(x_0\) be? These local results give no information on the radius of the convergence ball for the corresponding method. The same technique can be applied to other methods.

In the present study we analyze the local convergence of method (1.9) using hypotheses only up to the first derivative of function F. We also provide the radius of the convergence ball, computable error bounds on the distances involved and a uniqueness-of-the-solution result. Such results were not given in [19] or the earlier related studies [20, 21]. In this way we expand the applicability of method (1.9). It is convenient for us to simplify method (1.9) and study the equivalent method defined for all \(n=0,1,2,\ldots \) by
$$\begin{aligned} {y_n}= & {} {x_n} - \dfrac{{F({x_n})}}{{F'({x_n})}},\nonumber \\ {z_n}= & {} {x_n} - \dfrac{{F({x_n})}}{{F'({x_n})}}\Gamma \left( {s({x_n})} \right) ,\nonumber \\ {x_{n + 1}}= & {} {z_n} - \dfrac{{s({x_n}) + 1}}{{3s({x_n}) - 1}}\dfrac{{F({z_n})}}{{F'({x_n})}}, \end{aligned}$$
(1.10)
where \(s(x_n)\) is defined in method (1.3).
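To make the three substeps concrete, here is a minimal scalar Python sketch of method (1.10). It is our own illustration (the names sixth_order_method, gamma_17, F, dF and x0 are placeholders, not notation from the paper), with \({\varGamma }\) taken from (1.7) with \(p=q=1\); the test function \(e^x-1\) in the usage line is the same one that appears later in Remark 1.

    import math

    def gamma_17(s, p=1, q=1):
        # weight (1.7)
        return 1 + (1 - s) / 2 + (1 - s) ** 2 * (9 - p - q) / 24

    def sixth_order_method(F, dF, x0, gamma=gamma_17, tol=1e-12, max_iter=50):
        x = x0
        for _ in range(max_iter):
            dFx = dF(x)
            y = x - F(x) / dFx                        # first substep (Newton)
            s = dF(y) / dFx                           # s(x_n) as in (1.3)
            z = x - gamma(s) * F(x) / dFx             # second substep
            x_next = z - (s + 1) / (3 * s - 1) * F(z) / dFx   # third substep
            if abs(x_next - x) < tol:
                return x_next
            x = x_next
        return x

    # example usage: F(x) = exp(x) - 1 with solution 0, started from x0 = 0.5
    root = sixth_order_method(lambda t: math.exp(t) - 1, math.exp, 0.5)

Note that near the solution \(s(x_n)\) is close to 1, so the factor \(3s(x_n)-1\) in the third substep stays away from zero.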

The dynamical properties of an iterative method applied to polynomials give important information about its stability and reliability. In recent studies, authors such as Amat et al. [1, 2, 3], Gutiérrez et al. [16], Chun et al. [11] and many others [9] have found interesting dynamical planes, including periodic behavior and other anomalies. One of our main interests in this paper is the study of the parameter spaces associated with a family of iterative methods, which allow us to distinguish between good and bad methods in terms of their numerical properties.

The rest of the paper is structured as follows: we present the local convergence analysis of method (1.10) in Sect. 2. The application of method (1.10) to a chemical problem is given in Sect. 3.

2 Local convergence

Let \(F\,{:}\,D\subset X\rightarrow Y\) be a continuously Fréchet-differentiable operator, where X, Y are Banach spaces and D is an open, convex subset of X. Consider method (1.10) rewritten for this setting as
$$\begin{aligned} {y_n}= & {} {x_n} - F'({x_n})^{-1}F({x_n}),\nonumber \\ {z_n}= & {} {x_n} - \Gamma \left( F'(x_n)^{-1}F'\left( x_n-F'(x_n)^{-1}F(x_n)\right) \right) F'(x_n)^{-1}F(x_n),\nonumber \\ {x_{n + 1}}= & {} z_n-(3s(x_n)-I)^{-1}(s(x_n)+I)F'(x_n)^{-1}F(z_n), \end{aligned}$$
(2.1)
where \(s(x)=F'(x)^{-1}F'(x-F'(x)^{-1}F(x))\) and \(\Gamma (\cdot )\,{:}\,X\rightarrow \mathbb {L}(Y,X)\) is a linear operator.
In this section we present the local convergence analysis of method (2.1) in the more general setting of a Banach space. Let \(L_0>0, L>0, M\ge 1\) be given parameters. Let \(A\,{:}\,\left[ {0,\frac{1}{{{L_0}}}} \right) \rightarrow \left[ {0,\frac{1}{M}} \right) \) be a continuous and non-decreasing function. It is convenient for the local convergence analysis of method (1.10) that follows to introduce some functions and parameters. Define functions on the interval \(\left[ {0,\frac{1}{{{L_0}}}} \right) \) by
$$\begin{aligned} g_1(t)= & {} \dfrac{{Lt}}{{2\left( {1 - {L_0}t} \right) }}\\ g_2(t)= & {} \dfrac{{Lt + 2MA(t)}}{{2\left( {1 - {L_0}t} \right) }}\\ h_2(t)= & {} {g_2}(t) - 1 \end{aligned}$$
and parameter
$$\begin{aligned} {r_1} = \dfrac{2}{{2{L_0} + L}} < \frac{1}{{{L_0}}} \end{aligned}$$
(2.2)
By the definition of functions \(g_1, g_2\) and A we have that \(g_1(r_1)=1\), \(h_2(0)=MA(0)-1<0\) and \({h_2}(t) \rightarrow \infty \) as \(t \rightarrow {\left( {\frac{1}{{{L_0}}}} \right) ^ - }\). It follows from the intermediate value theorem that function \(h_2\) has zeros in the interval \(\left( {0,\frac{1}{{{L_0}}}} \right) \). Denote by \(r_2\) the smallest such zero. Moreover, define functions on the interval \([0,\infty )\) by
$$\begin{aligned} {g_0}(t) = \dfrac{{{L_0}}}{2}\left( {\frac{3}{2}Lt + 1} \right) t \end{aligned}$$
and
$$\begin{aligned} {h_0}(t) = {g_0}(t) - 1 \end{aligned}$$
Then, the parameter
$$\begin{aligned} {r_0} = \dfrac{{4{L_0}L}}{{{L_0} + \sqrt{L_0^2 + 12{L_0}L} }} \end{aligned}$$
(2.3)
is the only positive zero of function \(h_0\). Finally define functions \(g_3\) and \(h_3\) on the interval \([0, \min \{{\dfrac{1}{{{L_0}}},{r_0}}\})\) by
$$\begin{aligned} {g_3}(t)= & {} \left[ {1 + \frac{{M{L_0}\left( {1 + {g_1}(t)} \right) t}}{{2(1 - {L_0}t)(1 - {g_0}(t))}}} \right] {g_2}(t)\\= & {} \frac{1}{{2(1 - {L_0}t)}}\left[ {1 + \frac{{M{L_0}\left( {1 + {g_1}(t)} \right) t}}{{2(1 - {L_0}t)(1 - {g_0}(t))}}} \right] \left( {Lt + 2MA(t)} \right) \end{aligned}$$
and
$$\begin{aligned} h_3(t)=g_3(t)-1. \end{aligned}$$
We have by the definition of functions \(g_1, g_2, g_3\) and A that \(h_3(0)=MA(0)-1<0\) and \({h_3}(t) \rightarrow \infty \) as t \(\rightarrow {\left( {\frac{1}{{{L_0}}}} \right) ^ - }\) or t \(\rightarrow r_0^ - \). It follows again from the intermediate value theorem that function \(h_3\) has zeros in the interval \((0, \min \{{\dfrac{1}{{{L_0}}},{r_0}}\})\). Denote by \(r_3\) the smallest such zero. Set
$$\begin{aligned} r=\min \left\{ r_i \right\} ,\quad i=0,1,2,3. \end{aligned}$$
(2.4)
Then, we have that
$$\begin{aligned} 0\le g_i(t)<1,\quad \text { for each }\,\, t\in [0,r). \end{aligned}$$
(2.5)
Let \(U(x,{\varrho }),\bar{U}\left( {x,\varrho } \right) \) stand for the open and closed balls in X, respectively, with center \(x \in X\) and radius \({\varrho } > 0\). Moreover, let \(R>0\). Define
$$\begin{aligned} R_0:={ sup}\{t\in [0,R):U(\xi ,t)\subseteq D\}. \end{aligned}$$
(2.6)
Next, using the preceding notation we can show the local convergence result for method (1.10).

Theorem 1

Let F: \(U(\xi ,R_0) \subseteq X \rightarrow Y\) be Fréchet-differentiable. Suppose that there exist \(\xi \in D, L_0>0\) such that for each \(x\in U(\xi ,R_0)\):
$$\begin{aligned}&F\left( {{\xi }} \right) = 0\end{aligned}$$
(2.7)
$$\begin{aligned}&F'\left( {{\xi }} \right) \ne 0,\end{aligned}$$
(2.8)
$$\begin{aligned}&\left\| {F'{{\left( {{\xi }} \right) }^{ - 1}}\left( {F'\left( x \right) - F'\left( {{\xi }} \right) } \right) } \right\| \leqslant {L_0}\left\| {x - {\xi }} \right\| . \end{aligned}$$
(2.9)
Moreover, suppose that there exist \(L>0, M\ge 1, \left\{ {{s_n}} \right\} \) and functions \(\Gamma (\cdot )\,{:}\,X \rightarrow L(Y,X)\), A: \(\left[ {0,\frac{1}{{{L_0}}}} \right) \rightarrow \left[ {0,\frac{1}{M}} \right) \) continuous and non-decreasing such that for each \(x,y \in U(\xi ,\frac{1}{L_0})\cap U(\xi ,R_0)\)
$$\begin{aligned}&\left\| {F'{{\left( {{\xi }} \right) }^{ - 1}}\left( {F'\left( x \right) - F'\left( y \right) } \right) } \right\| \leqslant L\left\| {x - y} \right\| ,\end{aligned}$$
(2.10)
$$\begin{aligned}&\left\| {F'{{\left( {{\xi }} \right) }^{ - 1}}F'(x)} \right\| \leqslant M, \end{aligned}$$
(2.11)
and
$$\begin{aligned} \left| {1 - \Gamma \left( {{s(x)}} \right) } \right| \leqslant A{(\Vert x-\xi \Vert ){\left\| {{x} - {\xi }} \right\| }}. \end{aligned}$$
(2.12)
Then, sequence \(\left\{ {{x_n}} \right\} \) generated by method (1.10) for \({x_0} \in U\left( {{\xi },r} \right) {\setminus }\{\xi \}\) remains in \(U\left( {{\xi },r} \right) \) for each \(n=0,1,2,\ldots \) and converges to \(\xi \), where r is defined in (2.4). Moreover, the following estimates hold for each \(n=0,1,2,\ldots \)
$$\begin{aligned}&\left\| {{y_n} - {\xi }} \right\| \leqslant {g_1}\left( {\left\| {{x_n} - {\xi }} \right\| } \right) \left\| {{x_n} - {\xi }} \right\| \le \left\| {{x_n} - {\xi }} \right\| <r,\end{aligned}$$
(2.13)
$$\begin{aligned}&\left\| {{z_n} - {\xi }} \right\| \leqslant {g_2}\left( {\left\| {{x_n} - {\xi }} \right\| } \right) \left\| {{x_n} - {\xi }} \right\| \le \left\| {{x_n} - {\xi }} \right\| \end{aligned}$$
(2.14)
$$\begin{aligned}&\left\| {{{\left( {3F'({y_n}) - F'({x_n})} \right) }^{ - 1}}\left( {2F'\left( {{\xi }} \right) } \right) } \right\| \leqslant \frac{1}{{1 - {g_0}(\left\| {{x_n} - {\xi }} \right\| )}} \end{aligned}$$
(2.15)
and
$$\begin{aligned} \left\| {{x_{n + 1}} - {\xi }} \right\| \leqslant {g_3}\left( {\left\| {{x_n} - {\xi }} \right\| } \right) \left\| {{x_n} - {\xi }} \right\| \le \left\| {{x_n} - {\xi }} \right\| , \end{aligned}$$
(2.16)
where the \(g_i, i=0,1,2,3\) functions are defined previously. Furthermore, for \(T\in [r,\frac{2}{L_0})\), the point \(\xi \) is the only solution of equation \(F(x)=0\) in \(U\left( \xi ,R_0\right) \cap \overline{U}(\xi ,T)\).

Proof

By hypothesis \({x_0} \in U\left( {{\xi },r} \right) \), the definition of r and (2.9) we get that
$$\begin{aligned} \left\| {F'{{\left( {{\xi }} \right) }^{ - 1}}\left( {F'\left( {{x_0}} \right) - F'\left( {{\xi }} \right) } \right) } \right\| \leqslant {L_0}\left\| {{x_0} - {\xi }} \right\|< {L_0}r < 1 \end{aligned}$$
(2.17)
It follows from (2.17) and the Banach lemma on invertible functions [4, 5, 33] that \(F'{\left( {{x_0}} \right) ^{ - 1}} \in L\left( Y,X \right) \) and
$$\begin{aligned} \left\| {F'{{\left( {{x_0}} \right) }^{ - 1}}F'\left( {{\xi }} \right) } \right\| \leqslant \frac{1}{{1 - {L_0}\left\| {{x_0} - {\xi }} \right\| }} < \frac{1}{{1 - {L_0}r}}. \end{aligned}$$
(2.18)
Hence, \(y_0\) is well defined by the first substep of method (1.10) for \(\mathrm{n}=0\). Using the first substep of method (1.10) for \(\mathrm{n}=0\) and (2.7), we have that
$$\begin{aligned} {y_0} - {\xi }= & {} {x_0} - {\xi } - F'{({x_0})^{ - 1}}F({x_0})\nonumber \\= & {} - F'{({x_0})^{ - 1}}F'\left( {{\xi }} \right) \int \limits _0^1 {F'{{\left( {{\xi }} \right) }^{ - 1}}\left[ {F'\left( {{\xi } + \theta \left( {{x_0} - {\xi }} \right) } \right) - F'({x_0})} \right] \left( {{x_0} - {\xi }} \right) } d\theta \nonumber \\ \end{aligned}$$
(2.19)
Then, using (2.4), (2.5), (2.10), (2.18) and (2.19), we obtain that
$$\begin{aligned} \left\| {{y_0} - {\xi }} \right\|\leqslant & {} \left\| {F'{{({x_0})}^{ - 1}}F'\left( {{\xi }} \right) } \right\| \left\| {\int \limits _0^1 {F'{{\left( {{\xi }} \right) }^{ - 1}}\left[ {F'\left( {{\xi } + \theta \left( {{x_0} - {\xi }} \right) } \right) - F'({x_0})} \right] \left( {{x_0} - {\xi }} \right) } d\theta } \right\| \nonumber \\\leqslant & {} \dfrac{{L{{\left\| {{x_0} - {\xi }} \right\| }^2}}}{{2\left( {1 - {L_0}\left\| {{x_0} - {\xi }} \right\| } \right) }} = {g_1}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) \left( {\left\| {{x_0} - {\xi }} \right\| } \right) \le {\left\| {{x_0} - {\xi }} \right\| } < r \end{aligned}$$
(2.20)
which shows (2.13) for \(n=0\) and \(y_0\in U(\xi ,r)\).
Then, using the second substep of method (1.10) for \(\mathrm{n}=0\), (2.13), (2.18), (2.20), (2.11) and (2.12), we obtain in turn that
$$\begin{aligned} {z_0} - {\xi }= & {} {x_0} - {\xi } - F'{\left( {{x_0}} \right) ^{ - 1}}F\left( {{x_0}} \right) + F'{\left( {{x_0}} \right) ^{ - 1}}F\left( {{x_0}} \right) - \Gamma \left( {s\left( {{x_0}} \right) } \right) F'{\left( {{x_0}} \right) ^{ - 1}}F\left( {{x_0}} \right) \nonumber \\= & {} {y_0} - {\xi } + \left[ {1 - \Gamma \left( {s\left( {{x_0}} \right) } \right) } \right] F'{\left( {{x_0}} \right) ^{ - 1}}F\left( {{x_0}} \right) \end{aligned}$$
(2.21)
so,
$$\begin{aligned} \left\| {{z_0} - {\xi }} \right\|\leqslant & {} \left\| {{y_0} - {\xi }} \right\| + \left\| {\left[ {1 - \Gamma \left( {s\left( {{x_0}} \right) } \right) } \right] } \right\| \left\| {F'{{\left( {{x_0}} \right) }^{ - 1}}F'\left( {{\xi }} \right) } \right\| \left\| {F'{{\left( {{\xi }} \right) }^{ - 1}}F\left( {{x_0}} \right) } \right\| \\\leqslant & {} {g_1}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) \left\| {{x_0} - {\xi }} \right\| + \frac{{A\left( {\left\| {{x_0} - {\xi }} \right\| } \right) M\left\| {{x_0} - {\xi }} \right\| }}{{1 - {L_0}\left\| {{x_0} - {\xi }} \right\| }}\\= & {} {g_2}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) \left\| {{x_0} - {\xi }} \right\| \le \left\| {{x_0} - {\xi }} \right\| < r, \end{aligned}$$
which shows (2.14) for \(\mathrm{n}=0\) and \(z_0\in U(\xi ,r)\). Notice that we used the estimates \( \left\| {{\xi } + \theta \left( {{x_0} - {\xi }} \right) - {\xi }} \right\| \leqslant \theta \left\| {{x_0} - {\xi }} \right\| \leqslant \left\| {{x_0} - {\xi }} \right\| < r \)
$$\begin{aligned} F\left( {{x_0}} \right)= & {} F\left( {{x_0}} \right) - F\left( {{\xi }} \right) = \int \limits _0^1 {F'\left( {{\xi } + \theta \left( {{x_0} - {\xi }} \right) } \right) \left( {{x_0} - {\xi }} \right) d\theta ,} \nonumber \\&\text {and } \left\| {F'{{\left( {{\xi }} \right) }^{ - 1}}\int \limits _0^1 {F'\left( {{\xi } + \theta \left( {{x_0} - {\xi }} \right) } \right) \left( {{x_0} - {\xi }} \right) } d\theta } \right\| \leqslant M\left\| {{x_0} - {\xi }} \right| \nonumber \\ \end{aligned}$$
(2.22)
Next, we need estimates on \(\left\| {F'\left( {{y_0}} \right) + F'\left( {{x_0}} \right) } \right\| \) and \(\left\| {{{\left( {3F'({y_0}) - F'({x_0})} \right) }^{ - 1}}} \right\| \).
Using (2.9) and (2.13) we get that
$$\begin{aligned} \left\| {{{\left( {2F'\left( {{\xi }} \right) } \right) }^{ - 1}}\left( {F'\left( {{y_0}} \right) + F'\left( {{x_0}} \right) } \right) } \right\|\leqslant & {} \frac{1}{2}\left[ \left\| {F'{{\left( {{\xi }} \right) }^{ - 1}}\left( {F'\left( {{y_0}} \right) - F'\left( {{\xi }} \right) } \right) } \right\| \right. \nonumber \\&\left. +\,\left\| {F'{{\left( {{\xi }} \right) }^{ - 1}}\left( {F'\left( {{x_0}} \right) - F'\left( {{\xi }} \right) } \right) } \right\| \right] \nonumber \\\leqslant & {} \frac{1}{2}{L_0}\left( {\left\| {{y_0} - {\xi }} \right\| + \left\| {{x_0} - {\xi }} \right\| } \right) \nonumber \\\leqslant & {} \frac{1}{2}{L_0}\left( {{g_1}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) + 1} \right) \left\| {{x_0} - {\xi }} \right| \qquad \qquad \end{aligned}$$
(2.23)
and
$$\begin{aligned}&\left\| {{{\left( {2F'\left( {{\xi }} \right) } \right) }^{ - 1}}{{\left( {3F'({y_0}) - F'({x_0}) - 2F'\left( {{\xi }} \right) } \right) }}} \right\| \nonumber \\&\quad \leqslant \frac{1}{2}\left[ {3\left\| {F'{{\left( {{\xi }} \right) }^{ - 1}}\left( {F'\left( {{y_0}} \right) - F'\left( {{\xi }} \right) } \right) } \right\| + \left\| {F'{{\left( {{\xi }} \right) }^{ - 1}}\left( {F'\left( {{x_0}} \right) - F'\left( {{\xi }} \right) } \right) } \right\| } \right] \nonumber \\&\quad \leqslant \frac{1}{2}\left[ {3{L_0}\left\| {{y_0} - {\xi }} \right\| + {L_0}\left\| {{x_0} - {\xi }} \right\| } \right] \nonumber \\&\quad \leqslant \frac{{{L_0}}}{2}\left[ {3{g_1}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) + 1} \right] \left\| {{x_0} - {\xi }} \right\| = {g_0}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) < 1. \end{aligned}$$
(2.24)
It follows from (2.24) that \({3F'({y_0}) - F'({x_0})}\) is invertible and
$$\begin{aligned} \left\| {{{\left( {3F'({y_0}) - F'({x_0})} \right) }^{ - 1}}\left( {2F'\left( {{\xi }} \right) } \right) } \right\| \leqslant \frac{1}{{1 - {g_0}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) }} \end{aligned}$$
(2.25)
which shows (2.15) for \(\mathrm{n}=0\). Then, using the third substep of method (1.10) for \(\mathrm{n}=0\), (2.4), (2.5), (2.14), (2.18), (2.22) (for \(z_0\) replacing \(x_0\)), (2.23) and (2.25) we obtain in turn that
$$\begin{aligned} \left\| {{x_1} - {\xi }} \right\|\leqslant & {} \left\| {{z_0} - {\xi }} \right\| + \frac{{M\left\| {{z_0} - {\xi }} \right\| }}{{1 - {L_0}\left\| {{x_0} - {\xi }} \right\| }}\left\| {{{\left( {2F'\left( {{\xi }} \right) } \right) }^{ - 1}}\left( {F'\left( {{y_0}} \right) + F'\left( {{x_0}} \right) } \right) } \right\| \\&\quad \left\| {{{\left( {3F'({y_0}) - F'({x_0})} \right) }^{ - 1}}2F'\left( {{\xi }} \right) } \right\| \\\leqslant & {} \left[ {1 + \frac{{M{L_0}\left( {1 + {g_1}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) } \right) \left\| {{x_0} - {\xi }} \right\| }}{{2\left( {1 - {L_0}\left\| {{x_0} - {\xi }} \right\| } \right) \left( {1 - {g_0}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) } \right) }}} \right] \left\| {{z_0} - {\xi }} \right\| \\\leqslant & {} \left[ {1 + \frac{{M{L_0}\left( {1 + {g_1}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) } \right) \left\| {{x_0} - {\xi }} \right\| }}{{2\left( {1 - {L_0}\left\| {{x_0} - {\xi }} \right\| } \right) \left( {1 - {g_0}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) } \right) }}} \right] {g_2}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) \left\| {{x_0} - {\xi }} \right\| \\= & {} {g_3}\left( {\left\| {{x_0} - {\xi }} \right\| } \right) \left\| {{x_0} - {\xi }} \right\|< \left\| {{x_0} - {\xi }} \right\| < r \end{aligned}$$
which shows (2.16) for \(\mathrm{n}=0\) and \(x_1 \in U(\xi ,r)\). By simply replacing \(x_0, y_0, z_0, x_1\) by \(x_k, y_k, z_k, x_{k+1}\) in the preceding estimates we arrive at (2.13)–(2.16). Using the estimate \(\left\| {{x_{k + 1}} - {\xi }} \right\| \le c \left\| {{x_k} - {\xi }} \right\| <r\), \(c=g_3(\Vert x_0-\xi \Vert )\in [0,1)\), we deduce that \(\mathop {\lim }\nolimits _{k \rightarrow \infty } {x_k} = {\xi }\) and \({x_{k + 1}} \in U\left( {{\xi },r} \right) \). Finally, to show the uniqueness part, let \(Q= \int \nolimits _0^1 {F'\left( {{y^*} + \theta \left( {{\xi } - {y^*}} \right) } \right) d\theta ,}\) for some \({y^*} \in U\left( \xi ,R_0\right) \cap \overline{U}(\xi ,T)\) with \(F(y^*)=0\). Using (2.9) we get that
$$\begin{aligned} \left\| {F'{{\left( {{\xi }} \right) }^{ - 1}}\left( {Q - F'\left( {{\xi }} \right) } \right) } \right\|\le & {} \int \limits _0^1 {{L_0}\left\| {{y^*} + \theta \left( {{\xi } - {y^*}} \right) - {\xi }} \right\| } d\theta \nonumber \\\le & {} {L_0}{\int \limits _0^1 {\left( {1 - \theta } \right) \left\| {{\xi } - {y^*}} \right\| } d\theta } \le {\frac{{{L_0}}}{2}T < 1} \end{aligned}$$
(2.26)
It follows from (2.26) that Q is invertible. Then, in view of the identity \(0=F(\xi )-F(y^*)=Q(\xi -y^*)\), we conclude that \(\xi =y^*\).\(\square \)

Remark 1

  (1)
    In view of (2.9) and the estimate
    $$\begin{aligned} \Vert F'(\xi )^{-1}F'(x)\Vert= & {} \Vert F'(\xi )^{-1}(F'(x)-F'(\xi ))+I\Vert \\\le & {} 1+\Vert F'(\xi )^{-1}(F'(x)-F'(\xi ))\Vert \\\le & {} 1+L_0\Vert x-\xi \Vert \end{aligned}$$
    condition (2.11) can be dropped and M can be replaced by
    $$\begin{aligned} M(t)=1+L_0t, \end{aligned}$$
    or by \(M(t)=M=2\), since \(t\in \left[ 0,\frac{1}{L_0}\right) \).
     
  (2)
    The results obtained here can be used for operators F satisfying the autonomous differential equation [4, 5] of the form
    $$\begin{aligned} F'(x)=P(F(x)), \end{aligned}$$
    where \(P\,{:}\,\mathbb {R}\rightarrow \mathbb {R}\) is a known continuous operator, when e.g. \(X=Y=\mathbb {R}\). We can apply the results without actually knowing the solution \(\xi \), since \(F'(\xi )= P(F(\xi ))=P(0)\). Let as an example \(F(x)=e^x-1.\) Then, we can choose \(P(x)=x+1\).
     
  (3)
    The radius \(r_A=\frac{2}{2L_0+L_1}\) was shown in [4, 5] to be the convergence radius for Newton’s method
    $$\begin{aligned} x_{n+1}=x_n-F'(x_n)^{-1}F(x_n),\quad \text { for each }\,\,n=0,1,2\ldots . \end{aligned}$$
    (2.27)
    under conditions (2.7)–(2.9) and
    $$\begin{aligned} \Vert F'(\xi )^{-1}(F'(x)-F'(y))\Vert \le L_1\Vert x-y\Vert \quad \text { for each }\,\,x,y\in U(\xi ,R_0). \end{aligned}$$
    Notice that \(L_0\le L_1\) and \(L\le L_1\). It follows from (2.4) and the definition of \(r_1\) that the convergence radius r of method (1.10) cannot be larger than the convergence radius \(r_1\) of the second order Newton’s method (2.27). As already noted in [4, 5], \(r_A\) is at least as large as the convergence ball given by Rheinboldt [33] or Traub [34]
    $$\begin{aligned} r_R=\frac{2}{3L_1}. \end{aligned}$$
    (2.28)
    In particular, for \(L_0<L<L_1\) we have that
    $$\begin{aligned}&r_R<r_A<r_1,\\&\dfrac{r_R}{r_A}\rightarrow \dfrac{1}{3}\quad \text {as}\quad \dfrac{L_0}{L_1}\rightarrow 0\quad \text { and }\,\,\dfrac{r_A}{r_1}\rightarrow \dfrac{L}{L_1}<1. \end{aligned}$$
     
  (4)
    It is worth noticing that method (1.10) does not change if we use the conditions of Theorem 1 instead of the stronger conditions given in [19]. Moreover, for the error bounds in practice we can use the computational order of convergence (COC) [4, 5]
    $$\begin{aligned} \psi =\sup \frac{\ln \frac{\Vert x_{n+2}-\xi \Vert }{\Vert x_{n+1}-\xi \Vert }}{\ln \frac{\Vert x_{n+1}-\xi \Vert }{\Vert x_{n}-\xi \Vert }},\quad \text { for all }\,\,n=0,1,2,\ldots \end{aligned}$$
    or the approximate computational order of convergence (ACOC) [13]
    $$\begin{aligned} \psi ^*=\sup \frac{\ln \frac{\Vert x_{n+2}-x_{n+1}\Vert }{\Vert x_{n+1}-x_{n}\Vert }}{\ln \frac{\Vert x_{n+1}-x_{n}\Vert }{\Vert x_{n}-x_{n-1}\Vert }},\quad \text { for all }\,\,n=1,2,\ldots \end{aligned}$$
    This way we obtain in practice the order of convergence in a way that avoids the bounds involving estimates higher than the first Fréchet derivative. Notice that the computation of \(\psi ^*\) does not require the knowledge of \(\xi \) (a short computational sketch is given at the end of this remark).
     
  (5)
    Let us show how to choose function A in the case when function \(\Gamma \) is given by (1.7) and \(X=Y=\mathbb {R}\). We have in turn the estimate
    $$\begin{aligned} |1-\Gamma (s(x_n))|= & {} \left| 1-1-\dfrac{1-\dfrac{F'(y_n)}{F'(x_n)}}{2}-\dfrac{\left( \dfrac{F'(y_n)}{F'(x_n)}-1\right) ^2}{24}(9-p-q)\right| \\= & {} \left| \dfrac{F'(x_n)-F'(y_n)}{2F'(x_n)}+\dfrac{(F'(x_n)-F'(y_n))^2}{24(F'(x_n))^2}(9-p-q)\right| \\\le & {} \dfrac{1}{2}\dfrac{|F'(\xi )^{-1}(F'(x_n)-F'(y_n))|}{|F'(\xi )^{-1}F'(x_n)|}\\&\quad +\,\dfrac{|9-p-q|}{24}\dfrac{|F'(\xi )^{-1}(F'(y_n)-F'(x_n))|^2}{|F'(\xi )^{-1}F'(x_n)|^2}\\\le & {} \dfrac{1}{2}\dfrac{L_0(|x_n-\xi |+|y_n-\xi |)}{1-L_0|x_n-\xi |} +\,\dfrac{|9-p-q|L_0^2}{24}\dfrac{(|x_n-\xi |+|y_n-\xi |)^2}{(1-L_0|x_n-\xi |)^2}\\\le & {} \dfrac{1}{2}\dfrac{L_0(1+g_1(|x_n-\xi |))|x_n-\xi |}{1-L_0|x_n-\xi |}\\&\quad +\,\dfrac{|9-p-q|L_0^2}{24}\dfrac{(1+g_1(|x_n-\xi |))^2|x_n-\xi |^2}{(1-L_0|x_n-\xi |)^2}\\= & {} A(|x_n-\xi |),\\ \end{aligned}$$
    where
    $$\begin{aligned} A(t)=\dfrac{L_0}{24(1{-}L_0 t)^2}\left[ 12(1{-}L_0t)(1{+}g_1(t))+|9-p-q|L_0(1+g_1(t))^2t\right] t.\nonumber \\ \end{aligned}$$
    (2.29)
    It is worth noticing that the condition \(MA(0)-1<0\) is satisfied, since by the definition of function \(A, MA(0)-1=0-1=-\,1\). Moreover, we must have \(s(x_n)\in D\). Let us define function \(g_4\) on the interval \([0,\frac{1}{L_0})\) by \(g_4(t)=\frac{MLt}{2(1-L_0t)^2}\) and parameter
    $$\begin{aligned} 0<r_4=\dfrac{2}{2L_0+\frac{ML}{2}+\sqrt{\left( 2L_0+\frac{ML}{2}\right) ^2-4L_0^2}}< \dfrac{1}{L_0}. \end{aligned}$$
    Notice that \(r_4\in (0,\frac{1}{L_0})\). Then, we have that \(0\le g_4(t)< 1\) for each \(t\in [0,r_4)\). Set
    $$\begin{aligned} r={ min}\{r_0,r_2,r_3,r_4\}. \end{aligned}$$
    (2.30)
    Then, we get
    $$\begin{aligned} |s_n|= & {} \left| \dfrac{F'(y_n)}{F'(x_n)}\right| =\left| \dfrac{F'(\xi )^{-1}F'(y_n)}{F'(\xi )^{-1}F'(x_n)}\right| \le \dfrac{M|y_n-\xi |}{1-L_0|x_n-\xi |}\\\le & {} \dfrac{ML|x_n-\xi |^2}{2(1-L_0|x_n-\xi |)^2}|x_n-\xi |\\= & {} g_4(|x_n-\xi |)|x_n-\xi |\le |x_n-\xi |. \end{aligned}$$
    Then, condition (2.12) is satisfied for these choices of function A and radius r. Similarly, we can define function A by using the other choices of function \(\Gamma \) given in the introduction of this study.
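    Returning to part (4) of this remark, both \(\psi \) and \(\psi ^*\) can be computed directly from a stored list of iterates. The following Python sketch is our own illustration (the function names coc and acoc are ours, and all errors and differences are assumed to be nonzero); each function returns the list of quotients whose supremum gives \(\psi \) or \(\psi ^*\).

        import math

        def coc(xs, xi):
            # quotients for the computational order of convergence (needs the solution xi)
            e = [abs(x - xi) for x in xs]
            return [math.log(e[n + 2] / e[n + 1]) / math.log(e[n + 1] / e[n])
                    for n in range(len(e) - 2)]

        def acoc(xs):
            # quotients for the approximate computational order of convergence (solution-free)
            d = [abs(xs[n + 1] - xs[n]) for n in range(len(xs) - 1)]
            return [math.log(d[n + 1] / d[n]) / math.log(d[n] / d[n - 1])
                    for n in range(1, len(d) - 1)]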
     

3 Application

Now, in order to show the applicability of our theory in a real problem, we consider the following quartic equation, which describes the fraction of the nitrogen–hydrogen feed that gets converted to ammonia (the so-called fractional conversion); the ammonia process is shown in Fig. 1.
For 250 atm. and \(500\,^{\circ }\mathrm{C}\), this equation takes the form:
$$\begin{aligned} f(x)=x^4-7.79075x^3+14.7445x^2+2.511x-1.674 \end{aligned}$$
Let \(S = \mathbb {R}\), \(D = [0,1]\), \(\xi =0\), and define the function F on D accordingly. Then, we get
$$\begin{aligned} L_0= & {} 2.59403\\ L= & {} 3.28225 \end{aligned}$$
and
$$\begin{aligned} M = 1.44197. \end{aligned}$$
Then, by the definition of the \(g_i, i=0,1,2,3,4\) functions and by choosing \(\Gamma \) as in (1.7) with \(p = q = 1\) and A(t) as in (2.29), we obtain
$$\begin{aligned} r_0= & {} 2.61383\ldots ,\quad r_1=0.236119\ldots ,\quad r_2=0.115116\ldots ,\\ r_3= & {} 0.0579031\ldots ,\quad r_4=0.153306\ldots . \end{aligned}$$
Consequently,
$$\begin{aligned} r = r_3 =0.0579031\ldots . \end{aligned}$$
Hence, we can guarantee the convergence of the method (1.10) by means of Theorem 1.
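The closed-form radii above can be checked with a few lines of code. The following Python sketch is our own illustration based on the formulas of Sect. 2: it evaluates \(r_0\), \(r_1\) and \(r_4\) from (2.3), (2.2) and Remark 1(5), and locates \(r_2\) as the smallest zero of \(h_2=g_2-1\) by bisection, with A(t) given by (2.29) for \(p=q=1\); \(r_3\) is obtained in the same way from \(h_3=g_3-1\).

    import math

    L0, L, M = 2.59403, 3.28225, 1.44197      # constants reported above
    p = q = 1                                  # weight (1.7) with p = q = 1

    def g1(t):
        return L * t / (2 * (1 - L0 * t))

    def A(t):                                  # formula (2.29)
        return L0 * t / (24 * (1 - L0 * t) ** 2) * (
            12 * (1 - L0 * t) * (1 + g1(t))
            + abs(9 - p - q) * L0 * (1 + g1(t)) ** 2 * t)

    def g2(t):
        return (L * t + 2 * M * A(t)) / (2 * (1 - L0 * t))

    def smallest_zero(h, a, b, steps=400, iters=60):
        # scan (a, b] for the first sign change of h, then bisect
        lo = a
        for k in range(1, steps + 1):
            t = a + (b - a) * k / steps
            if h(t) > 0:
                hi = t
                break
            lo = t
        else:
            raise ValueError("no sign change found")
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if h(mid) > 0 else (mid, hi)
        return 0.5 * (lo + hi)

    r1 = 2 / (2 * L0 + L)                                        # formula (2.2)
    r0 = 4 * L0 * L / (L0 + math.sqrt(L0 ** 2 + 12 * L0 * L))    # formula (2.3) as stated
    r4 = 2 / (2 * L0 + M * L / 2
              + math.sqrt((2 * L0 + M * L / 2) ** 2 - 4 * L0 ** 2))
    r2 = smallest_zero(lambda t: g2(t) - 1, 0.0, 0.999 / L0)
    print(r0, r1, r2, r4)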


Acknowledgements

The research of this author was supported by Universidad Internacional de La Rioja (UNIR, http://www.unir.net), under the Plan Propio de Investigación, Desarrollo e Innovación 4 [2017–2019]. Research group: Modelación matemática aplicada a la ingeniería (MOMAIN), by Programa de Apoyo a la investigación de la fundación Séneca-Agencia de Ciencia y Tecnología de la Región de Murcia 19374/PI714 and by Ministerio de Ciencia y Tecnología MTM2014-52016-C2-01-P.

References

  1. S. Amat, S. Busquier, S. Plaza, Dynamics of the King and Jarratt iterations. Aequ. Math. 69(3), 212–223 (2005)
  2. S. Amat, M.A. Hernández, N. Romero, A modified Chebyshev’s iterative method with at least sixth order of convergence. Appl. Math. Comput. 206(1), 164–174 (2008)
  3. S. Amat, S. Busquier, S. Plaza, Chaotic dynamics of a third-order Newton-type method. J. Math. Anal. Appl. 366(1), 24–32 (2010)
  4. I.K. Argyros, Convergence and Application of Newton-Type Iterations (Springer, Berlin, 2008)
  5. I.K. Argyros, S. Hilout, Numerical Methods in Nonlinear Analysis (World Scientific, Singapore, 2013)
  6. I.K. Argyros, Á.A. Magreñán, Iterative Methods and Their Dynamics with Applications (CRC Press, London, 2017)
  7. I.K. Argyros, S. George, Á.A. Magreñán, Local convergence for multi-point-parametric Chebyshev–Halley-type methods of high convergence order. J. Comput. Appl. Math. 282, 215–224 (2015)
  8. D.D. Bruns, J.E. Bailey, Nonlinear feedback control for operating a nonisothermal CSTR near an unstable steady state. Chem. Eng. Sci. 32, 257–264 (1977)
  9. D. Budzko, A. Cordero, J.R. Torregrosa, A new family of iterative methods widening areas of convergence. Appl. Math. Comput. 252, 405–417 (2015)
  10. V. Candela, A. Marquina, Recurrence relations for rational cubic methods I: the Halley method. Computing 44, 169–184 (1990)
  11. V. Candela, A. Marquina, Recurrence relations for rational cubic methods II: the Chebyshev method. Computing 45(4), 355–367 (1990)
  12. C. Chun, Some improvements of Jarratt’s method with sixth-order convergence. Appl. Math. Comput. 190(2), 1432–1437 (1990)
  13. A. Cordero, J.R. Torregrosa, Á.A. Magreñán, C. Quemada, Stability study of eighth-order iterative methods for solving nonlinear equations. J. Comput. Appl. Math. 291, 348–357 (2016)
  14. J.A. Ezquerro, M.A. Hernández, Recurrence relations for Chebyshev-type methods. Appl. Math. Optim. 41(2), 227–236 (2000)
  15. J.A. Ezquerro, M.A. Hernández, On the R-order of the Halley method. J. Math. Anal. Appl. 303, 591–601 (2005)
  16. J.A. Ezquerro, M.A. Hernández, New iterations of R-order four with reduced computational cost. BIT Numer. Math. 49, 325–342 (2009)
  17. J.M. Gutiérrez, M.A. Hernández, Recurrence relations for the super-Halley method. Comput. Math. Appl. 36(7), 1–8 (1998)
  18. J.M. Gutiérrez, Á.A. Magreñán, J.L. Varona, The “gauss-seidelization” of iterative methods for solving nonlinear equations in the complex plane. Appl. Math. Comput. 218(6), 2467–2479 (2011)
  19. D. Herceg, D.J. Herceg, Sixth order modifications of Newton’s method based on Stolarsky and Gini means. J. Comput. Appl. Math. 267, 244–253 (2014)
  20. M.A. Hernández, Chebyshev’s approximation algorithms and applications. Comput. Math. Appl. 41(3–4), 433–455 (2001)
  21. M.A. Hernández, M.A. Salanova, Sufficient conditions for semilocal convergence of a fourth order multipoint iterative method for solving equations in Banach spaces. Southwest J. Pure Appl. Math. 1, 29–40 (1999)
  22. P. Jarratt, Some fourth order multipoint methods for solving equations. Math. Comput. 20(95), 434–437 (1966)
  23. J. Kou, Y. Li, An improvement of the Jarratt method. Appl. Math. Comput. 189, 1816–1821 (2007)
  24. J. Kou, X. Wang, Semilocal convergence of a modified multi-point Jarratt method in Banach spaces under general continuity conditions. Numer. Algorithms 60, 369–390 (2012)
  25. D. Li, P. Liu, J. Kou, An improvement of the Chebyshev–Halley methods free from second derivative. Appl. Math. Comput. 235, 221–225 (2014)
  26. Á.A. Magreñán, Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 233, 29–38 (2014)
  27. Á.A. Magreñán, A new tool to study real dynamics: the convergence plane. Appl. Math. Comput. 248, 215–224 (2014)
  28. Á.A. Magreñán, J.M. Gutiérrez, Real dynamics for damped Newton’s method applied to cubic polynomials. J. Comput. Appl. Math. 275, 527–538 (2015)
  29. Á.A. Magreñán, A. Cordero, J.M. Gutiérrez, J.R. Torregrosa, Real qualitative behavior of a fourth-order family of iterative methods by using the convergence plane. Math. Comput. Simul. 105, 49–61 (2014)
  30. S.K. Parhi, D.K. Gupta, Recurrence relations for a Newton-like method in Banach spaces. J. Comput. Appl. Math. 206(2), 873–887 (2007)
  31. L.B. Rall, Computational Solution of Nonlinear Operator Equations (Robert E. Krieger, New York, 1979)
  32. H. Ren, Q. Wu, W. Bi, New variants of Jarratt method with sixth-order convergence. Numer. Algorithms 52(4), 585–603 (2009)
  33. W.C. Rheinboldt, An adaptive continuation process for solving systems of nonlinear equations. Pol. Acad. Sci. Banach Ctr. Publ. 3, 129–142 (1978)
  34. J.F. Traub, Iterative Methods for the Solution of Equations. Prentice-Hall Series in Automatic Computation (Prentice-Hall, Englewood Cliffs, 1964)
  35. X. Wang, J. Kou, C. Gu, Semilocal convergence of a sixth-order Jarratt method in Banach spaces. Numer. Algorithms 57, 441–456 (2011)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Á. A. Magreñán (1)
  • I. K. Argyros (2)
  • J. J. Rainer (1)
  • J. A. Sicilia (1)
  1. Escuela de Ingeniería, Universidad Internacional de La Rioja, Logroño, Spain
  2. Department of Mathematics Sciences, Cameron University, Lawton, USA
