Sharp estimation of local convergence radius for the Picard iteration

We investigate the local convergence radius of a general Picard iteration in the framework of a real Hilbert space. We propose a new algorithm to estimate the local convergence radius. Numerical experiments show that the proposed procedure gives sharp estimates (i.e., close to, or even identical with, the best ones) for several well-known or recent iterative methods and for various nonlinear mappings. In particular, we apply the proposed algorithm to the classical Newton method, to the multi-step Newton method (in particular to the third-order Potra-Ptak method) and to the fifth-order "M5" method. We also present a new formula to estimate the local convergence radius for the multi-step Newton method.


Introduction
The problem of estimating the local radius of convergence for various iterative methods has been considered by several authors, and numerous results have been obtained, particularly for the Newton method and its variants. However, "... effective, computable estimates for convergence radii are rarely available" [1]. Similar remarks were made in more recent papers: "... no estimate for the size of an attraction ball is known" [2]; "The location of starting approximations, from which the iterative methods converge to a solution of the equation, is a difficult problem to solve" [3].
We propose a new algorithm to estimate the local convergence radius for a general Picard iteration in the framework of a real Hilbert space, and we apply it to the classical Newton method, to the multi-step Newton method (in particular to the third-order Potra-Ptak method) and to the fifth-order M5 method [4,5].
The multi-step Newton method (other terms: "modified Newton method", "multi-step frozen Jacobian version of the Newton method", etc.) is, in its essence, the classical Newton method in which the first derivative is re-evaluated periodically, after every m steps. It can be defined as follows. Let F : C → B be a nonlinear mapping, where B is a Banach space and C is an open subset of B. Suppose that F satisfies some common conditions (for example, F is Fréchet differentiable and F'(x)^{-1} exists on C). If x denotes the current iterate, then the iteration function T is defined by

y_1 = x - F'(x)^{-1} F(x),
y_{k+1} = y_k - F'(x)^{-1} F(y_k),  k = 1, ..., m - 1,      (1.1)
T(x) = y_m.

The method has high convergence order, equal to m + 1, and the computational cost per iteration is due to one LU factorization and the m inner steps for the computation of y_k. The semilocal convergence of (1.1) and the study of its efficiency and dynamics are presented in [6].
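As an illustration, one outer step of (1.1) can be sketched in Python/NumPy as follows (in practice the frozen Jacobian F'(x) would be LU-factored once per outer step; for brevity we simply reuse the matrix in repeated solves). The test function f(x) = x^5 - 2x^2 + x with root p = 1 is taken from Section 5; the function names are our own.

```python
import numpy as np

def multistep_newton_step(F, J, x, m):
    """One outer step of the multi-step (frozen Jacobian) Newton method (1.1).

    The Jacobian J is evaluated only once, at the current iterate x, and
    reused for all m inner corrections; m = 1 gives classical Newton and
    m = 2 the Potra-Ptak method.
    """
    Jx = J(x)                              # frozen Jacobian F'(x)
    y = x - np.linalg.solve(Jx, F(x))      # y_1
    for _ in range(m - 1):                 # y_{k+1} = y_k - F'(x)^{-1} F(y_k)
        y = y - np.linalg.solve(Jx, F(y))
    return y                               # T(x) = y_m

# Potra-Ptak (m = 2) on f(x) = x^5 - 2x^2 + x, root p = 1 (Section 5)
F = lambda x: np.array([x[0]**5 - 2*x[0]**2 + x[0]])
J = lambda x: np.array([[5*x[0]**4 - 4*x[0] + 1]])
x = np.array([1.05])
for _ in range(10):
    x = multistep_newton_step(F, J, x, m=2)
```

In practice one would stop on a residual tolerance; the fixed iteration count keeps the sketch short.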
The particular case m = 2 was considered by Potra and Ptak [7]. Using non-discrete induction, they proved third-order convergence and gave sharp a priori and a posteriori error bounds for this particular case. The method is often called the "Potra-Ptak" method [8,9]. Ortega and Rheinboldt [10] proved third-order convergence of the Potra-Ptak method in n-dimensional spaces (Theorem 10.2.4 in [10]). Note that the Potra-Ptak method is a particular case of a multipoint iterative process with third-order convergence considered by Ezquerro and Hernandez [11].
We will also discuss the fifth-order M5 method, introduced relatively recently (2015) by Cordero et al. [4,5]. The M5 method is similar to the multi-step Newton method (1.1) with m = 3, differing only in some coefficients in the inner steps and in the outer iteration. It is worth noticing that with this very simple modification (which does not affect the computational effort) the convergence order grows from 4 to 5. The M5 method was studied recently (2016) by Sharma and Guha [12], who showed that "in general, the new method is more efficient than the existing counterparts".
The advantages of such methods (higher order of convergence and improved efficiency index) are partly offset by a considerable reduction of the domain of convergence (attraction basin). Indeed, the attraction basin of these iterations, as commonly occurs for high-order methods, is an unpredictable and sophisticated set. Therefore, finding a local convergence ball (or a good starting point) for these methods is a very difficult task.
In Sect. 4, we are concerned with the estimation of the convergence ball for the Picard iteration. Based on the convergence properties of this type of iteration for the class of demicontractive mappings [13,14], we propose an algorithm for estimating the convergence radius of the Picard iteration. Numerical experiments show that the proposed algorithm gives a convergence radius close to, or even identical with, the best one. The Potra-Ptak and M5 methods are examples of this remarkable property of the proposed algorithm. Recently, Hernandez and Romero [3] gave the following algorithm (formula) to estimate the local convergence radius for the Ezquerro-Hernandez method. Suppose that x* is a solution of the equation and that ζ_0 is the positive real root of a polynomial equation of degree three (in the particular case of the Potra-Ptak iteration this equation is t^3 + 4t^2 - 8 = 0). Then r̂ estimates the local radius of convergence.
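For the Potra-Ptak case mentioned above, ζ_0 can be computed by a few bisection steps; in fact the cubic t^3 + 4t^2 - 8 has the exact positive root √5 - 1 (substituting t = √5 - 1 gives (8√5 - 16) + (24 - 8√5) - 8 = 0), so the short Python sketch below can be checked in closed form.

```python
def zeta0():
    """Positive real root of t^3 + 4t^2 - 8 = 0 (Potra-Ptak case) by bisection."""
    g = lambda t: t**3 + 4*t**2 - 8
    lo, hi = 0.0, 2.0                  # g(0) = -8 < 0, g(2) = 16 > 0
    for _ in range(60):                # ~machine precision after 60 halvings
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# zeta0() agrees with the exact root sqrt(5) - 1 = 1.2360679...
root = zeta0()
```

Bisection suffices here because g is strictly increasing on (0, ∞) (g'(t) = 3t^2 + 8t > 0), so the positive root is unique.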
In [2], Catinas proposes a simple and elegant formula to estimate the radius of convergence for the general Picard iteration, and the algorithm presumably gives a sharp value. More precisely, let T : D ⊂ R^m → D be a nonlinear mapping and x* a fixed point of T. Suppose that T is differentiable on some ball B(x*, r_1) centered at x*, and that the derivative of T satisfies a suitable bound on this ball; then r = min{r_1, r_2} is an estimate of the local convergence radius.
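The explicit bound on the derivative of T is not reproduced above; the underlying principle, however, is that the Picard iteration contracts wherever sup ||T'|| < 1 on a ball around x*. A crude one-dimensional numerical rendering of this idea (Python; the sampling scheme and function names are our own illustration, not the formula of [2]) is:

```python
import numpy as np

def contraction_radius(T_prime, x_star, r_max=5.0, n_r=500, n_s=401):
    """Largest sampled r <= r_max with sup_{|x - x*| <= r} |T'(x)| < 1.

    A sampled sufficient-condition check only, not the exact estimate
    from [2]: balls of increasing radius are scanned until the sampled
    supremum of |T'| reaches 1.
    """
    best = 0.0
    for r in np.linspace(r_max / n_r, r_max, n_r):
        xs = np.linspace(x_star - r, x_star + r, n_s)
        if np.max(np.abs(T_prime(xs))) < 1.0:
            best = r
        else:
            break
    return best

# Newton map for f(x) = x - cos(x): T(x) = x - f/f', so T' = f f'' / (f')^2
Tp = lambda x: (x - np.cos(x)) * np.cos(x) / (1.0 + np.sin(x))**2
r = contraction_radius(Tp, 0.7390851332)
```

On this example the sampled radius comes out noticeably smaller than the basin-based radius reported in Section 5, illustrating the conservatism that such derivative-based sufficient conditions can introduce.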

Preliminary lemmas
Lemma 1. Let {r_k}, k = 1, ..., m, be a numerical sequence defined recursively, with r_1 = r > 0. Then {r_k} is strictly decreasing if α < 2/3 and strictly increasing if α > 2/3. The proof can be done easily by induction on k. Define the function g_m : R_+ → R_+ in terms of the polynomial sequence {f_k}. It is easy to prove that g_m(0) = 0 and g_m(1) > 1. Let η be such that 0 < η < 1. Then the polynomial equation g_m(y) - η = 0 has at least one solution in (0, 1).

Lemma 2.
For any η with 0 < η < 1, the equation g_m(y) - η = 0 has a unique positive solution α_m. The polynomial sequence {f_k} and the function g_m have the following two properties (Lemma 3), whose proofs can be done easily by induction on m.

A simple formula for the local convergence radius of the multi-step Newton method
Let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ||·||, and let C be an open subset of H. We assume that F : C → H is Fréchet differentiable on C and that the set of solutions of the equation F(x) = 0 is nonempty (equivalently, that the set of fixed points of T is nonempty). Theorem 1 states that T has a unique fixed point p in B(p, r). Proof. We first prove that y_k ∈ C, k = 1, ..., m. Let x be a point in B(p, r).
We prove by induction that ||y_k - p|| ≤ r_k. We have ||y_1 - p|| = ||x - p|| ≤ r = r_1. Suppose that ||y_k - p|| ≤ r_k; then the recursive estimate gives ||y_{k+1} - p|| ≤ r_{k+1}. We can conclude that y_k ∈ C, k = 1, ..., m.
We now prove that η is a bound for ||Δ(x)|| for any x ∈ B(p, r). Successively replacing y_k - p by its previous value and using ||F'(x)^{-1}|| ≤ β together with (3.1), we obtain from Lemma 3 that β||F'(x) - Δ_k|| ≤ f_k(βLr), k = 1, ..., m. From (2.1) and (3.2) it follows that ||Δ(x)|| ≤ g_m(βLr), and from βLr < α_m it results that ||Δ(x)|| ≤ g_m(βLr) < η. Therefore, T is quasi-expansive and p is the unique fixed point of T in B(p, r), and the convergence of the sequence generated by x_{n+1} = T(x_n) follows.

Radius of convergence
It is worth mentioning that, in general, the estimate of the convergence radius given by Theorem 1, r ≤ α_m/(βL), though satisfactory, is not very sharp (see the example in Table 1, Section 5). More interesting estimates can be obtained by applying the convergence property of the Picard iteration in the case of demicontractive mappings [16].
A mapping T : C → H is said to be demicontractive if the set of fixed points of T is nonempty, Fix(T) ≠ ∅, and

||T(x) - p||^2 ≤ ||x - p||^2 + L||x - T(x)||^2, for all x ∈ C, p ∈ Fix(T),      (4.1)

where 0 < L < 1. This condition is equivalent to

⟨x - T(x), x - p⟩ ≥ λ||x - T(x)||^2, for all x ∈ C, p ∈ Fix(T),      (4.2)

where λ = (1 - L)/2. Note that (4.2) is often more suitable in Hilbert spaces, allowing easier handling of the scalar products and norms. Condition (4.2) was considered in [17] to prove the T-stability of the Picard iteration for this class of mappings. Note also that the set of fixed points of a demicontractive mapping is closed and convex [18]. We say that the mapping T is quasi-expansive if

||x - p|| ≤ β||x - T(x)||, for all x ∈ C,      (4.3)

where β > 0. If β < 1, then ||x - p|| ≤ (β/(1 - β))||T(x) - p||, which justifies the terminology. It is also obvious that the set of fixed points of a mapping T satisfying (4.3) consists of a unique element p of C. The main idea of the proposed algorithm is to find a ball, as large as possible, on which the conditions of Theorem 2 are satisfied. In finite-dimensional spaces the condition of quasi-expansivity is superfluous; the first two conditions are sufficient for the convergence of the Picard iteration. Therefore, supposing that condition (i) is fulfilled, we can develop the following algorithm to estimate the local radius of convergence in finite-dimensional spaces: find the largest value of r such that the minimum m given by the constrained optimization problem (4.4) over B(p, r) satisfies m > 0.5. This procedure involves the following main processing: 1. Apply a line-search algorithm (for example, of half-step type) on the positive real axis to find the largest value of r; 2. At every step of 1, solve the constrained optimization problem (4.4) and verify the condition m > 0.5.
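A minimal numerical rendering of this procedure is sketched below (Python). Since the exact objective of (4.4) is not reproduced in this excerpt, we use, as an assumed stand-in, the demicontractivity quotient ⟨x - T(x), x - p⟩/||x - T(x)||^2 suggested by (4.2), minimized by crude random sampling instead of a proper constrained optimizer, with a bisection (half-step) line search in r.

```python
import numpy as np

def radius_estimate(T, p, r_hi=2.0, bisect_iters=30, samples=4000, seed=0):
    """Bisection on r: largest r with  min_{x in B(p,r)} q(x) > 0.5,
    where q(x) = <x - T(x), x - p> / ||x - T(x)||^2 (stand-in for (4.4)).

    The inner constrained minimization is replaced by random sampling,
    so the result is only an optimistic estimate of the true minimum.
    """
    rng = np.random.default_rng(seed)
    p = np.atleast_1d(np.asarray(p, dtype=float))

    def min_q(r):
        d = rng.normal(size=(samples, p.size))
        d /= np.linalg.norm(d, axis=1, keepdims=True)      # random directions
        xs = p + d * (r * rng.random((samples, 1)))        # points in B(p, r)
        vals = []
        for x in xs:
            dx = x - T(x)
            n2 = dx @ dx
            if n2 > 1e-24:                                 # skip x ~ fixed point
                vals.append((dx @ (x - p)) / n2)
        return min(vals) if vals else 1.0

    lo, hi = 0.0, r_hi
    for _ in range(bisect_iters):                          # half-step search
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if min_q(mid) > 0.5 else (lo, mid)
    return lo

# Sanity checks on affine maps with fixed point 0:
r_good = radius_estimate(lambda x: 0.5 * x, [0.0])    # q = 2 everywhere
r_bad = radius_estimate(lambda x: -1.5 * x, [0.0])    # q = 0.4 everywhere
```

For T(x) = 0.5x the criterion holds on every ball, so the search climbs to r_hi; for T(x) = -1.5x it fails on every ball, so the estimate collapses to 0.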

Remark 2.
Solving the constrained optimization problem repeatedly is the most expensive computation of the proposed algorithm. However, the computation is eased by the fact that the cost function is defined in terms of a vector norm and a scalar product. To the best of our knowledge, only a few algorithms possess this good computability characteristic, for example the algorithm of Deuflhard and Potra [19].

Numerical experiments
The numerical experiments in this section are devoted to illustrating the sharpness of the proposed algorithm. We performed a significant number of numerical experiments for different iterative methods and for mappings in one or several variables. We present here the results for the classical Newton method, for the Potra-Ptak method and for the M5 method, applied to the following two functions in one variable, f(x) = x^5 - 2x^2 + x, p = 1, and f(x) = x - cos(x), p = 0.739..., and to three functions in two variables (referred to in the rest of the paper as Examples 1, 2 and 3). Experiment 1. We investigated the sharpness of the proposed algorithm for functions in one variable. In most of the considered cases, it gives radii of convergence very close to the best ones. For example, in the case of the Newton method and the function f(x) = x - cos(x), the estimate is 2.5671684091 and the best radius is 2.5671684098. In the case of the Potra-Ptak method the two radii (estimated and best) are practically identical. For example, for the function f(x) = x^5 - 2x^2 + x, the estimate and the best radius (computed with 15 decimal digits) are identical, r = 0.080959069788847.
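The one-variable cases above are easy to reproduce; a minimal Newton sketch (Python, with helper names of our own) applied to f(x) = x - cos(x), started inside the reported convergence ball of radius ~2.567 around p = 0.739...:

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration; returns the last iterate."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# x0 = 1.5 lies well inside the ball of radius ~2.567 around p = 0.739...
root = newton(lambda x: x - math.cos(x), lambda x: 1.0 + math.sin(x), 1.5)
```

The iteration settles on the unique fixed point of cos, p = 0.7390851332..., in a handful of steps.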

Experiment 2.
In this experiment, we investigated the sharpness of the proposed algorithm for the classical Newton method and for functions in two variables. For the considered test functions, the results are presented graphically in Fig. 1.
The black area represents the entire attraction basin and the white circle the local convergence ball. It can be seen that the proposed algorithm gives satisfactorily good convergence radii. In our experiments, the attraction basin was computed by directly checking the convergence of the iteration process started from every point of a given net of points. The attraction basin (hence the maximum convergence radius) computed in this way has only relative precision. Nevertheless, we obtain significant information about the attraction basins, and the characteristics of the proposed algorithm can be evaluated.
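The grid-based basin computation described above can be sketched as follows (Python/NumPy). The paper's two-variable test systems are not reproduced in this excerpt, so the system below, F(x, y) = (x^2 + y^2 - 1, x - y) with root (√2/2, √2/2), is our own illustrative stand-in, not one of Examples 1-3.

```python
import numpy as np

def basin_grid(F, J, root, lo=-2.0, hi=2.0, n=81, iters=40, tol=1e-8):
    """Boolean grid: True where Newton started at that net point converges to `root`.

    Directly checks convergence from every point of a regular net, as in the
    experiments; the resulting basin has only the resolution of the net.
    """
    xs = np.linspace(lo, hi, n)
    grid = np.zeros((n, n), dtype=bool)
    for i, a in enumerate(xs):
        for j, b in enumerate(xs):
            x = np.array([a, b], dtype=float)
            try:
                for _ in range(iters):
                    x = x - np.linalg.solve(J(x), F(x))
                    if not np.all(np.isfinite(x)):     # iteration blew up
                        break
            except np.linalg.LinAlgError:              # singular Jacobian on the net
                continue
            grid[i, j] = np.linalg.norm(x - root) < tol
    return grid

# Illustrative stand-in system (not one of the paper's Examples 1-3)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
g = basin_grid(F, J, np.array([2**-0.5, 2**-0.5]))
```

The largest circle centered at the root that fits inside the True region then gives the maximal convergence radius, to within the resolution of the net.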

Experiment 3.
In this experiment, we investigated the sharpness of the proposed algorithm for the Potra-Ptak method and for functions in several variables. For the test functions, the results are given in Fig. 2.
The numerical values of convergence radii for the Potra-Ptak method and for Example 1 computed with six decimal digits are given in Table 1.
For comparison, Table 1 presents the values computed with several algorithms/formulas: the Hernandez-Romero algorithm, the formula from Theorem 1, the Catinas formula, the proposed algorithm, and the best convergence radius.
Remark 3. It is worth noticing that in several cases the proposed algorithm provides convergence radii identical with the best ones (as in the first part of Experiment 1 and in Example 1, Table 1). In other cases the radius computed with this algorithm is close to the best one, but not identical with it. For example, in the case of the Potra-Ptak method and Example 3, the radius given by the proposed algorithm (computed with six decimal digits) is 0.274267 and the best radius is 0.275904.

Experiment 4.
In this experiment, we computed the convergence radius with the proposed algorithm for the M5 method and for functions in several variables. For the three test systems, the results are presented in Fig. 3.
It can be seen that the proposed algorithm gives radii of convergence very close to the best ones.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/ by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.