Ball convergence theorems for eighth-order variants of Newton’s method under weak conditions

We present a local convergence analysis for eighth-order variants of Newton's method for approximating a solution of a nonlinear equation. We use hypotheses only up to the first derivative, in contrast to earlier studies such as Amat et al. (Appl Math Comput 206(1):164–174, 2008), Amat et al. (Aequationes Math 69:212–213, 2005), Chun et al. (Appl Math Comput 227:567–592, 2014), Petković et al. (Multipoint methods for solving nonlinear equations. Elsevier, Amsterdam, 2013), Potra and Pták (Nondiscrete induction and iterative processes. Pitman, Boston, 1984), Rall (Computational solution of nonlinear operator equations. Robert E. Krieger, New York, 1979), Ren et al. (Numer Algorithms 52(4):585–603, 2009), Rheinboldt (An adaptive continuation process for solving systems of nonlinear equations. Banach Center, Warsaw, 1975), Traub (Iterative methods for the solution of equations. Prentice Hall, Englewood Cliffs, 1964), Weerakoon and Fernando (Appl Math Lett 13:87–93, 2000), and Wang and Kou (J Differ Equ Appl 19(9):1483–1500, 2013), which use hypotheses up to the seventh derivative. In this way the applicability of these methods is extended under weaker hypotheses. Moreover, the radius of convergence and computable error bounds on the distances involved are given, and numerical examples are presented.

The semi-local convergence problem is, based on information around an initial point, to give conditions ensuring the convergence of the iterative procedure, while the local one is, based on information around a solution, to find estimates of the radii of the convergence balls [3,4,20–22,24,25].
Other single- and multipoint methods can be found in [2,3,20,25] and the references therein. The local convergence of the preceding methods has been shown under hypotheses up to the seventh derivative (or even higher), although only the first derivative appears in method (1.2). These hypotheses restrict the applicability of these methods. As a motivational example, one can define a function f on a domain D whose third derivative f''' is unbounded on D; then the earlier results, which require hypotheses on derivatives of order up to seven, cannot be applied, even though the first derivative is well behaved. In the present paper, we use hypotheses only on the first derivative. Moreover, we provide a radius of convergence and computable error estimates on the distances |x_n − x*| using Lipschitz constants; such estimates were not provided in the earlier studies based on Taylor expansions. In this way we expand the applicability of method (1.2). The rest of the paper is organized as follows: Sect. 2 contains the local convergence analysis of method (1.2). The numerical examples are presented in the concluding Sect. 3.
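To illustrate numerically why hypotheses on high-order derivatives can fail, the following sketch evaluates a function of the kind used as motivation in this literature, f(x) = x^3 ln x^2 + x^5 − x^4 on D = [−1/2, 3/2] with solution x* = 1 (this specific choice is an assumption made here for illustration only; it is not necessarily the example elided from the text). Its third derivative f'''(x) = 6 ln x^2 + 60x^2 − 24x + 22 is unbounded as x → 0, so any analysis requiring a bound on f''' over D breaks down:

```python
import math

def f(x):
    # illustrative function: the solution is x* = 1, since f(1) = 0
    return x**3 * math.log(x**2) + x**5 - x**4

def f3(x):
    # third derivative of f, computed by hand
    return 6.0 * math.log(x**2) + 60.0 * x**2 - 24.0 * x + 22.0

print(f(1.0))                  # 0.0 at the solution
for x in (1e-2, 1e-4, 1e-6):
    print(x, f3(x))            # f''' -> -infinity as x -> 0
```

Since only f' appears in the method itself, an analysis based solely on Lipschitz conditions for f' remains applicable to such functions.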

Local convergence for method (1.2)
We present the local convergence analysis of method (1.2) in this section. Let U(v, ρ) and Ū(v, ρ) stand for the open and closed balls in S, respectively, with center v ∈ S and radius ρ > 0.
For the local convergence analysis that follows we define some functions and parameters. Let L_i > 0, i = 0, 1, 2, 3, 4, and M ∈ (0, 3) be given parameters. Define functions g_0, g_1 and g_2 on the interval [0, 1/L_0) and the corresponding parameters r_0 and r_1. Notice that r_1 > 0 and g_1(r_1) = 1.

Moreover, define functions h_0 and h_2 on the interval [0, 1/L_0). We have that h_0(0) = −1 < 0 and h_0(t) → +∞ as t → (1/L_0)^−, and that h_2(0) = −1 < 0 and h_2(t) → +∞ as t → r_0^−. Hence, function h_2 has zeros in the interval (0, r_0); denote the smallest such zero by r_2.

Furthermore, define functions p, p_1 and p_2 on the interval (0, r_0). We have that p̄_1(0) = −1 < 0 and p̄_1(t) → +∞ as t → r_0^−. Hence, function p̄_1 has a smallest zero in (0, r_0), denoted by r_{p̄_1}. Similarly, function p̄_2 has a smallest zero in (0, r_0), denoted by r_{p̄_2}. Define function p_0 and, finally, define function h_3 on the interval [0, r_0). Function h_3 has a smallest zero in (0, r_0), denoted by r_3. Set r = min{r_1, r_2, r_3}; then r > 0.

Function p_0 is defined in terms of L_0, L_1, L_2, g_0, g_1 and g_2 (i.e., as a function of p_1), or in terms of L_0, L_1, L_4, g_0, g_1 and g_2 (i.e., as a function of p_2). In practice, we shall make the choice of p_0 leading to the larger of the radii r_{p̄_1} and r_{p̄_2}, since we need to obtain the largest possible convergence ball.

Next, using the above notation, we can present the local convergence analysis of method (1.2).

Theorem 2.1 Suppose that the hypotheses stated above hold and that Ū(x*, r) ⊆ D, where r is defined above. Then, the sequence {x_n} generated for x_0 ∈ U(x*, r) by method (1.2) is well defined, remains in U(x*, r) for each n = 0, 1, 2, . . . and converges to x*. Moreover, the following estimates hold for each n = 0, 1, 2, . . ., where the "g" functions are defined above Theorem 2.1.
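In practice the radii r_1, r_2, r_3 are obtained as the smallest zeros of scalar functions such as h_2 and h_3 on (0, r_0). A minimal sketch of how such a radius can be computed numerically is given below; the particular function h used here is a hypothetical stand-in with the same qualitative behavior (h(0) = −1 < 0 and h(t) → +∞ as t approaches the right endpoint), since the actual "g" and "h" functions depend on the constants L_i and M:

```python
def smallest_zero(h, a, b, steps=10_000, tol=1e-12):
    """Smallest zero of h on (a, b): scan for the first sign change,
    then refine it by bisection."""
    xs = [a + (b - a) * k / steps for k in range(steps + 1)]
    lo = hi = None
    for x0, x1 in zip(xs, xs[1:]):
        if h(x0) < 0.0 <= h(x1):
            lo, hi = x0, x1
            break
    if lo is None:
        raise ValueError("no sign change found on (a, b)")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# hypothetical stand-in: h(0) = -1 < 0 and h(t) -> +infinity
# as t -> (1/L0)^-, with illustrative constants L0, L
L0, L = 2.0, 3.0
h = lambda t: L * t / (2.0 * (1.0 - L0 * t)) - 1.0
print(smallest_zero(h, 0.0, 1.0 / L0 - 1e-9))  # exact zero is 2/(2*L0 + L)
```

Taking the minimum of the radii computed this way for each of h_2 and h_3, together with r_1, yields r.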
Remark 2.2
1. In view of (2.9) and the estimate ‖F′(x*)^{−1} F′(x)‖ ≤ 1 + L_0 ‖x − x*‖, condition (2.14) can be dropped and M can be replaced by M(t) = 1 + L_0 t.
2. The results obtained here can be used for operators F satisfying autonomous differential equations [3] of the form F′(x) = P(F(x)), where P is a continuous operator. Then, since F′(x*) = P(F(x*)) = P(0), we can apply the results without actually knowing x*. For example, let F(x) = e^x − 1; then we can choose P(x) = x + 1.
3. The radius r_A given by (2.1) was shown by us to be the convergence radius of Newton's method [2–4],
x_{n+1} = x_n − F′(x_n)^{−1} F(x_n) for each n = 0, 1, 2, . . . , (2.32)
under the conditions (2.9) and (2.10). It follows from (2.2) and r < r_A that the convergence radius r of method (1.2) cannot be larger than the convergence radius r_A of the second-order Newton's method (2.32). As already noted in [2,3], r_A is at least as large as the convergence ball given by Rheinboldt [24],
r_R = 2/(3L). (2.33)
In particular, for L_0 < L we have r_R < r_A, and r_R/r_A → 1/3 as L_0/L → 0. That is, our convergence ball r_A is at most three times larger than Rheinboldt's. The same value for r_R was given by Traub [25].
4. It is worth noticing that method (1.2) does not change when we use the conditions of Theorem 2.1 instead of the stronger conditions used in [1,2,8–23,25–27]. Moreover, we can compute the computational order of convergence (COC), defined by
ξ = ln(|x_{n+1} − x*| / |x_n − x*|) / ln(|x_n − x*| / |x_{n−1} − x*|),
or the approximate computational order of convergence (ACOC),
ξ_1 = ln(|x_{n+1} − x_n| / |x_n − x_{n−1}|) / ln(|x_n − x_{n−1}| / |x_{n−1} − x_{n−2}|).
This way we obtain in practice the order of convergence without resorting to estimates involving derivatives higher than the first Fréchet derivative of operator F.
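The radius comparison in Remark 2.2 can be checked numerically. The sketch below assumes, for illustration, the expression r_A = 2/(2L_0 + L) commonly derived for Newton's method under a center-Lipschitz constant L_0 and a Lipschitz constant L for F′ (an assumption here, since (2.1) is not reproduced in the text), together with the example F(x) = e^x − 1, x* = 0, for which one may take L_0 = e − 1 and L = e:

```python
import math

def r_A(L0, L):
    # convergence radius of Newton's method under a center-Lipschitz
    # constant L0 and a Lipschitz constant L for F' (assumed form of (2.1))
    return 2.0 / (2.0 * L0 + L)

def r_R(L):
    # Rheinboldt's (and Traub's) radius, which uses only L
    return 2.0 / (3.0 * L)

L0, L = math.e - 1.0, math.e    # constants for F(x) = exp(x) - 1, x* = 0
print(r_R(L), r_A(L0, L))       # r_A > r_R whenever L0 < L
print(r_A(1e-9, L) / r_R(L))    # ratio approaches 3 as L0 -> 0
```

This makes concrete the claim that r_A can be up to three times larger than r_R, with the gain coming entirely from distinguishing L_0 from L.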

Numerical example
We present a numerical example in this section; the results are displayed in Table 1.
It is well known that, due to rounding errors and since higher-order derivatives do not appear in the definitions of ξ or ξ_1, the computations may not necessarily lead to exactly ξ_1 = 8, as indicated by Example 3.1.
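The quantity ξ_1 can be estimated directly from the iterates, with no knowledge of x* or of higher derivatives. A minimal sketch is given below; for illustration it applies the ACOC formula of Remark 2.2 to plain Newton's method on f(x) = x^2 − 2 (an assumed stand-in, since method (1.2) itself is not reproduced here), so the expected order is 2 rather than 8:

```python
import math

def acoc(xs):
    """Approximate computational order of convergence (ACOC) from the
    last four iterates; requires no knowledge of the solution x*."""
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

# Newton's method on f(x) = x^2 - 2 (second order, for illustration)
f, df = lambda x: x * x - 2.0, lambda x: 2.0 * x
xs = [1.5]
for _ in range(4):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
print(acoc(xs))  # close to 2
```

Once the errors approach machine precision the differences are dominated by rounding, which is exactly why computed values of ξ_1 only approximate the theoretical order.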

Conclusion
We presented a new local convergence analysis for an eighth-order method for solving equations, based on contraction techniques and Lipschitz constants, under hypotheses only on the first derivative. In this way we expanded the applicability of method (1.2), whose convergence was previously shown using hypotheses up to the seventh derivative [8,20]. Moreover, we provided a computable radius of convergence as well as error estimates not given in earlier studies [8,20]. The same advantages can be obtained if our technique is applied to similar eighth-order methods listed in the references (see [8,20] and the references therein).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.