Simultaneous roots for vectorial problems

In this manuscript, we design an iterative step that can be added to any numerical process for solving systems of nonlinear equations. By means of this addition, the resulting iterative scheme obtains, simultaneously, all the solutions of the vectorial problem. Moreover, the order of convergence of the new iterative procedure doubles that of its original partner. We apply this step to some known methods and analyse the behaviour of the new algorithms, obtaining simultaneously the roots of several nonlinear systems.

Many problems in science and engineering require approximating the solutions of systems of nonlinear equations, since in general there is no possibility of solving them exactly. These systems are described as F(x) = 0, where F : D ⊆ R^m → R^r is a vectorial function of m variables. One way to obtain these approximations is by using iterative methods, which create a sequence of approximations starting from a seed that, under specific conditions, tends to the solution.
But what if, instead of searching for only one of the solutions, we aim to obtain more? One option would be to try to obtain different solutions by using several different initial approximations. But this poses a problem: what happens if both estimates converge to the same root? In the following, we illustrate this problem by representing the dynamical planes of Newton's and Steffensen's methods. What we observe in these planes is whether or not the initial points converge to the roots of our problem. In this case, both methods are applied to $p(x) = x^2 - 1$, with roots 1 and −1.
We choose a mesh of 400 × 400 points to create the dynamical planes, and we apply our methods to each one of these points, taking it as an initial estimation. The X and Y axes correspond to the real and imaginary parts of the initial estimate, respectively. In addition, we set the maximum number of iterations to 80, and we consider that an initial point converges to a solution if its distance to that root is less than $10^{-3}$. In orange, we mark the initial points that converge to −1; in green, those that converge to 1; and in black, those that do not converge to any root.

Fig. 1  Dynamical planes of (a) Newton's scheme and (b) Steffensen's scheme
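As an illustration of the procedure just described, the following is a minimal Python sketch for Newton's method on $p(z) = z^2 - 1$. The viewing window [−2, 2] × [−2, 2] is an assumption (the window of the figures is not stated here), the plotting itself is omitted, and, for brevity, convergence is checked only after the last iteration instead of at every step.

```python
# Minimal sketch of the dynamical-plane construction described above, for
# Newton's method applied to p(z) = z^2 - 1. The viewing window
# [-2, 2] x [-2, 2] is an illustrative choice.
import numpy as np

roots = [1.0, -1.0]
tol, max_iter = 1e-3, 80

re = np.linspace(-2.0, 2.0, 400)   # X axis: real part of the seed
im = np.linspace(-2.0, 2.0, 400)   # Y axis: imaginary part of the seed
X, Y = np.meshgrid(re, im)
Z = X + 1j * Y

with np.errstate(divide="ignore", invalid="ignore"):
    for _ in range(max_iter):
        Z = Z - (Z**2 - 1.0) / (2.0 * Z)   # Newton step: z - p(z)/p'(z)

# Label each seed by the root it approaches; -1 marks no convergence (black).
basin = np.full(Z.shape, -1)
for k, r in enumerate(roots):
    basin[np.abs(Z - r) < tol] = k
```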
In this case, if we take two different initial approximations in Fig. 1 and both lie in the same basin of attraction, we obtain two sequences converging to the same solution.
Therefore, if we design a method that simultaneously calculates as many sequences of approximations as solutions the problem has, we avoid this problem. This is because the process of calculating each iterate takes into account the values of the other iterates.
Schemes to find roots of scalar equations simultaneously are usually defined for polynomial functions. Starting with an initial set of estimations, they generate sequences of approximations that simultaneously tend to several roots. See, for instance, Petković et al. (1997), Proinov and Ivanov (2019), Proinov and Vassileva (2019) and the references therein. Starting from Ehrlich's method (Ehrlich 1967), the authors in Cordero et al. (2022) design an iterative procedure that can be added to any root-finding scheme for solving arbitrary nonlinear equations. If the original iterative method has order of convergence p, the new scheme doubles it to 2p. In addition, the new procedure can be used to obtain the solutions of the equation simultaneously.
Next, we build the dynamical planes of the modified Newton's and Steffensen's methods, applied to the same equation with the same convergence criterion. In this case, each axis corresponds to one of the initial estimations, $x_1^{(0)}$ and $x_2^{(0)}$. In Fig. 2, we show Newton's and modified Newton's procedures applied to the polynomial. The basins of attraction cover the entire plane in the case of Newton's scheme, and also for its variant for finding roots simultaneously.
In Fig. 3, the dynamical planes are shown for Steffensen's and modified Steffensen's methods. In this case, as we can see, there is no convergence for Steffensen's method in some black zones, for example at z = −5, but its variant converges to the roots from any point, except in a small zone where $x_1^{(0)} = x_2^{(0)}$. As we can see in the dynamical planes of the modified Newton's and Steffensen's methods, taking suitable initial estimations $x_1^{(0)} = 2$ and $x_2^{(0)} = 5$, we converge to both roots. This cannot be achieved by iterating both non-simultaneous methods in parallel, since both initial points lie in the basin of attraction of the root 1.
In this paper, we generalize this idea to nonlinear systems, designing an iterative procedure that can be added to any iterative method for solving systems of nonlinear equations. The resulting method doubles the order of convergence and can obtain several solutions simultaneously. In Sect. 2, we design the mentioned step and study the order of convergence of the resulting method. In the third section, we carry out some numerical experiments with known iterative procedures to which we add the simultaneous step, in order to assess the performance of the new schemes. We finish the work in Sect. 4 with some conclusions derived from the study.

Convergence analysis
Let F(x) = 0, F : R^m → R^r, be a system of nonlinear equations, where the number of unknowns is m and the number of equations is r. Note that the system is written as a column vector of size r × 1. Suppose that this system has n solutions, which we denote by $\alpha_1, \alpha_2, \ldots, \alpha_n$. We design an iterative scheme to obtain all the solutions simultaneously. Therefore, we take a set of n initial approximations, which we denote by $x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)}$, and we define the iterative step as

$$x_i^{(k+1)} = x_i^{(k)} - \left[ F'\big(x_i^{(k)}\big) - \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{F\big(x_i^{(k)}\big)\big(x_i^{(k)} - x_j^{(k)}\big)^T}{\big\|x_i^{(k)} - x_j^{(k)}\big\|^2} \right]^{-1} F\big(x_i^{(k)}\big), \qquad (1)$$

for i = 1, . . . , n and k = 0, 1, . . .

As we can see, the size of the matrix $F\big(x_i^{(k)}\big)\big(x_i^{(k)} - x_j^{(k)}\big)^T$, $j \neq i$, is r × m, as the above vectors are a column vector of size r × 1 and a row vector of size 1 × m, respectively. We denote this method by PS. Now, we prove that this iterative procedure has quadratic order.
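To make the construction concrete, the following Python sketch implements one iteration of the PS step (1). It assumes r = m, so that the bracketed matrix is square and (near the roots) invertible; the names F, JF and ps_step are placeholders introduced here, and this is an illustration rather than the implementation used in the experiments below.

```python
# A sketch of one iteration of the PS step (1), assuming r = m. F returns
# the vector F(x) and JF its Jacobian; both are supplied by the user.
import numpy as np

def ps_step(F, JF, xs):
    """Apply one PS iteration to the list xs of n approximations in R^m."""
    new_xs = []
    for i, xi in enumerate(xs):
        Fi = F(xi)                        # F(x_i), column vector of size r
        M = JF(xi).astype(float)          # Jacobian F'(x_i), an r x m matrix
        for j, xj in enumerate(xs):
            if j != i:
                d = xi - xj
                # (r x 1) times (1 x m) rank-one correction from formula (1)
                M -= np.outer(Fi, d) / np.dot(d, d)
        new_xs.append(xi - np.linalg.solve(M, Fi))
    return new_xs
```

Iterating ps_step from n distinct initial approximations produces the n sequences of (1) at once.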
Theorem 1  Let F : R^m → R^r be a sufficiently differentiable function in a neighbourhood $D_i \subset R^m$ of $\alpha_i$. Let us also assume that $F(\alpha_i) = 0$, for i = 1, ..., n, and that $F'(\alpha_i)$ is nonsingular, for i = 1, ..., n. Then, by taking an initial estimation $x_i^{(0)}$ close enough to $\alpha_i$, the sequence $\{x_i^{(k)}\}$ of approximations generated by the PS scheme converges to $\alpha_i$ with order 2.
Proof  Let us denote by $e_{i,k} = x_i^{(k)} - \alpha_i$ the error of the i-th sequence. The proof is based on Taylor expansions of F and F' about $\alpha_i$. Differentiating the corresponding expression with respect to the variable $x_{i_q}$, with q = 1, 2, . . . , m, and rewriting the result componentwise for $j_1 \in \{1, 2, \ldots, m\}$, we obtain the entries $A_{i_q, j_1}$ of a row vector $A_{i_q} := (A_{i_q,1}, A_{i_q,2}, \ldots, A_{i_q,m})$. If we then write A for the matrix whose rows are $A_{i_1}, A_{i_2}, \ldots, A_{i_m}$, we obtain that the matrix inverted in (1) coincides with $F'(x_i^{(k)})$ up to terms of order $e_{i,k}$, since $F(x_i^{(k)}) = O(e_{i,k})$ and the remaining factors are bounded when the roots are distinct. Applying this relation to the error equation, $e_{i,k+1}$ takes the form $e_{i,k+1} \sim e_{i,k}^2$. It is, therefore, proven that PS has convergence order 2.
Let φ(x) be the iteration function of an iterative method. We define the scheme PSφ as follows:

$$y_i^{(k)} = \phi\big(x_i^{(k)}\big), \qquad x_i^{(k+1)} = y_i^{(k)} - \left[ F'\big(y_i^{(k)}\big) - \sum_{\substack{j=1 \\ j \neq i}}^{n} \frac{F\big(y_i^{(k)}\big)\big(y_i^{(k)} - y_j^{(k)}\big)^T}{\big\|y_i^{(k)} - y_j^{(k)}\big\|^2} \right]^{-1} F\big(y_i^{(k)}\big), \qquad (2)$$

for i = 1, . . . , n and k = 0, 1, . . ., that is, an iterative scheme in which we first perform the method described by φ and then the PS step.

Theorem 2  Under the hypotheses of Theorem 1, the sequence $\{x_i^{(k)}\}$ generated by the PSφ method converges to $\alpha_i$ with order 2p, where p is the order of convergence of the scheme described by φ.

Proof  By Theorem 1, $e_{i,k+1} \sim \hat{e}_{i,k}^2$, where $\hat{e}_{i,k} = y_i^{(k)} - \alpha_i$ is the error of the predictor. Since φ has order p, we have $\hat{e}_{i,k} \sim e_{i,k}^p$. Substituting the second relation into the first, we obtain $e_{i,k+1} \sim e_{i,k}^{2p}$. Thus, it is proven that procedure PSφ has order 2p.
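As a concrete instance of Theorem 2, taking φ as Newton's method (p = 2), the error relations combine as

$$\hat{e}_{i,k} \sim e_{i,k}^{2}, \qquad e_{i,k+1} \sim \hat{e}_{i,k}^{2} \sim e_{i,k}^{4},$$

so the composed scheme has order 4; with two Newton steps as predictor (p = 4), the order becomes 8. These are precisely the schemes PSN and PSN2 used in the numerical section.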

Numerical experimentation
We use Matlab R2021b with variable-precision arithmetic of one thousand digits for the computational calculations. As a stopping criterion, we compare the mean of the norms of the function evaluated at the last iterates with a tolerance of $10^{-50}$; that is, if we try to find n solutions, we stop when

$$\frac{1}{n} \sum_{i=1}^{n} \big\| F\big(x_i^{(k+1)}\big) \big\| < 10^{-50}.$$

We also use a maximum of 100 iterations as a stopping criterion. For the numerical experiments, we use three different algorithms. The first one is the method PS, which we defined in (1). The other ones are PSN and PSN2, which are the compositions of PS with N and N2, where N denotes Newton's scheme and N2 denotes Newton's method composed with itself, that is,

$$y_i^{(k)} = x_i^{(k)} - \big[F'\big(x_i^{(k)}\big)\big]^{-1} F\big(x_i^{(k)}\big), \qquad z_i^{(k)} = y_i^{(k)} - \big[F'\big(y_i^{(k)}\big)\big]^{-1} F\big(y_i^{(k)}\big),$$

for i = 1, . . . , n and k = 0, 1, . . .
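Reusing the ps_step sketch from Sect. 2 (with the same placeholder names F and JF), a possible implementation of one PSN iteration would be:

```python
# A sketch of one PSN iteration: a Newton predictor on every approximation,
# followed by the PS corrector step (ps_step as sketched in Sect. 2).
import numpy as np

def psn_step(F, JF, xs):
    ys = [x - np.linalg.solve(JF(x), F(x)) for x in xs]   # Newton predictor
    return ps_step(F, JF, ys)                             # PS corrector
```

PSN2 is obtained analogously, applying the Newton predictor twice before the PS step.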
The numerical results with which we compare the methods in the different examples are:
• the obtained approximation,
• the mean of the norm of the system evaluated at that set of approximations,
• the distance between the last two sets of iterates,
• the number of iterations needed to achieve the tolerance,
• the computational time,
• and the ACOC (approximated computational order of convergence), defined by Cordero and Torregrosa (2007) as

$$p \approx \mathrm{ACOC} = \frac{\ln\big(\|x^{(k+1)} - x^{(k)}\| \,/\, \|x^{(k)} - x^{(k-1)}\|\big)}{\ln\big(\|x^{(k)} - x^{(k-1)}\| \,/\, \|x^{(k-1)} - x^{(k-2)}\|\big)}.$$
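A sketch of how the ACOC can be estimated from the stored iterates of one sequence (mirroring the formula above; at least four iterates are required):

```python
# ACOC estimate from the last iterates of one sequence.
import numpy as np

def acoc(iterates):
    d = [np.linalg.norm(b - a) for a, b in zip(iterates, iterates[1:])]
    return np.log(d[-1] / d[-2]) / np.log(d[-2] / d[-3])
```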
We are going to solve the following nonlinear problems.

Example 1  The exact solutions of this problem are known; we take as initial estimations $x_3^{(0)} = (0.5, -1)$ and $x_4^{(0)} = (-0.5, 1)$. As we can see in Table 1, all the methods obtain good results for the chosen initial points. The approximated computational order of convergence coincides with the theoretical one or even exceeds it.

Example 2
The second academic problem to be solved is to obtain all the critical points of the function

$$g(x, y) = \frac{1}{3}x^3 + y^2 + 2xy - 6x - 3y + 4.$$
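The critical points of g are the solutions of the nonlinear system ∇g(x, y) = 0, that is,

$$\frac{\partial g}{\partial x} = x^2 + 2y - 6 = 0, \qquad \frac{\partial g}{\partial y} = 2x + 2y - 3 = 0.$$

Eliminating $y = (3 - 2x)/2$ from the second equation gives $x^2 - 2x - 3 = 0$, so the two critical points are $(3, -3/2)$ and $(-1, 5/2)$.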
As we can see in Table 2, all the methods obtain good results for the chosen initial points. The approximated computational order of convergence matches the theoretical one.
Example 3  Now, we solve the following nonlinear system of size 200 × 200:

$$F_i(x) = x_i\, x_{i+1} - 1, \quad i = 1, 2, \ldots, 199, \qquad F_{200}(x) = x_{200}\, x_1 - 1.$$

This system has infinitely many solutions, but we aim to find two of them. As initial points we choose $x_1^{(0)} = 0.8\,(1, 1, \ldots, 1)$ and $x_2^{(0)} = -0.8\,(1, 1, \ldots, 1)$. In this case, we use Matlab R2021b with arithmetic precision of 10 digits and $10^{-5}$ as tolerance. As we can see in Table 3, the method that does not use a predictor needs 17 iterations to achieve that tolerance.
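For completeness, the system and the two initial points can be set up in a few lines; this sketch assumes the cyclic form written above. Note that every vector of the form (t, 1/t, t, 1/t, . . .) with t ≠ 0 solves that system, which explains the infinitely many solutions.

```python
# Set-up of Example 3 in the cyclic form written above (a sketch).
import numpy as np

def F(x):
    return np.roll(x, -1) * x - 1.0   # F_i = x_i * x_{i+1} - 1, x_201 := x_1

x1 = 0.8 * np.ones(200)    # first initial estimation
x2 = -0.8 * np.ones(200)   # second initial estimation
```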
We conclude that in many cases it is very useful to use a predictor scheme, since PSN2 obtains the exact solution in only 2 iterations. It is clear that methods with a higher order of convergence converge faster.

Outcomes and conclusions
In this paper, we have defined an iterative step that can be added to any iterative scheme for nonlinear systems, obtaining a new iterative method that simultaneously finds several roots and has twice the order of convergence of the original scheme.
Different known iterative procedures were selected to add this step to, and several numerical experiments were carried out to confirm the good properties of the new iterative procedures.