1 Introduction

Newton's method, first proposed around 1669 by Newton and modified in 1690 by Raphson, is one of the most widely known numerical algorithms for finding real roots of polynomials. Simpson (1740) extended the method to solving nonlinear equations, and Cayley (1879) used it for finding complex roots of polynomials. Cayley encountered a strange and unpredictable chaotic behaviour of Newton's method, which he could not explain, in the case of the simple equation \(z^3-1=0\) in the complex plane. The solution to that problem was found in 1919 by Julia. Julia sets subsequently led Mandelbrot in the 1970s to the discovery of the famous Mandelbrot set and fractals. Newton's method is also applied in optimisation for finding a local extremum of a differentiable function, which is a root of its gradient [28]. Depending on the starting point, Newton's method might be convergent or divergent; it has local quadratic convergence and is undefined at critical points (i.e. points for which the first derivative of a function equals 0). The method is sensitive to the starting point and may find stable or unstable cyclic values. Thus, the dynamic behaviour of Newton's method is very rich [16, 34].

Many modifications and improvements of Newton's method are known in the literature [15, 23, 24, 30, 35]. Here, we would like to point out some of the most recent results in this area. In [10], various modifications of Newton's method with arithmetic, harmonic, and contraharmonic means are investigated. In [3, 8, 14], Newton's method was modified by using fractional derivatives instead of the classical ones. Wang and Tao [33] proposed a new way to construct a self-accelerating parameter in Newton's method with memory.

The most striking effects that can be observed when finding the roots of the polynomial \(z^3 - 1\) in the complex plane are the fractal boundaries between the basins of attraction, caused by chaos and instability. In the case of complex polynomials, Newton's method, which produces Newton fractals, is an important model of deterministic chaos [35], similarly to the logistic equation in the real case [22]. Basins of attraction form stunning images that are among the most beautiful fractals. Fractal boundaries in the form of braids can be reduced or even, to some extent, eliminated using the damped Newton method [11]. Recently, Kalantari [19] proposed the Robust Newton Method (RNM), which radically smooths the boundaries between the basins of attraction so that they form sharp lines. This result is obtained by precisely controlling the step length in Newton's method. Unfortunately, the RNM usually needs a very large number of iterations. Kalantari [19] showed that for a given accuracy \(\varepsilon > 0\) the number of iterations needed by the RNM to converge to a root or critical point is \(\mathcal {O}(1/\varepsilon ^2)\). Using this bound, for example, for \(\varepsilon = 0.01\) we obtain \(1/\varepsilon ^2 = 10{,}000\). Thus, even for not very accurate computations we need to perform a large number of iterations for some points, and this number grows as the accuracy increases. This can be treated as a drawback of the RNM.

In this paper, we propose a modification of the RNM that replaces the Picard iteration used in the RNM with the Mann iteration. This modification will decrease the average number of iterations needed to find the solution and, at the same time, will preserve the other features of the method. We will analyse numerically different aspects of the proposed modification, e.g. basins of attraction, dynamics, and the average number of iterations. Moreover, by using various sequences of parameters in the Mann iteration, we will generate very complex and intriguing patterns from the dynamics of the modified method. These patterns might have artistic applications.

The paper is organised as follows. Section 2 presents the description of the RNM. In Sect. 3, the modification of the RNM with the Mann iteration is given. Then, in Sect. 4, the plots of the average number of iterations (ANI), the convergence area index (CAI) and the generation time characterising the RNM with the Mann iteration are given. Also, polynomiographs presenting basins of attraction and dynamic properties of the considered root-finding processes for a given set of polynomials are shown. Section 5 is devoted to artistic aspects of the RNM with the Mann iteration, related to the generation of very complex and beautiful polynomiographs from its dynamics. The last section, Sect. 6, concludes the paper and points out future directions of research.

2 The robust Newton’s method

The robust Newton method (RNM) was proposed to overcome the main drawback of Newton's method: the lack of definition at critical points [19]. The RNM guarantees the reduction of the polynomial's modulus in successive iterations and differs from the classical Newton's method in several respects. Firstly, it converges globally, whereas the classical Newton's method converges only locally. Secondly, it finds both roots and critical points of polynomials, whereas the classical Newton's method finds only the roots.

The RNM is based on the following facts:

  • Instead of finding zeros of the polynomial p(z), the equivalent minimisation of the modulus \(F(z) = |p(z)|^2 = p(z)\overline{p(z)}\) is considered. Observe that minimising |p(z)| and minimising F(z) are equivalent.

  • In the classical Newton's method applied to F(z), at every iteration a descent direction is chosen according to the geometric modulus principle (GMP) [18], and the step size is chosen on the basis of the modulus reduction theorem (MRT) [19].

The GMP gives a complete characterisation of all descent and ascent directions for \(|p(z_0)|\) at an arbitrary point \(z_0\) and additionally reveals a surprising geometric pattern: the partition of a unit disc, centred at \(z_0\), into sectors of ascent and descent directions. The GMP for polynomials [18] says that if a non-constant polynomial p(z) equals zero at \(z_0\), then every direction at \(z_0\) is an ascent direction for \(|p(z_0)|\). If \(p(z_0) \ne 0\), then the cones of ascent and descent directions at \(z_0\) divide the unit disc centred at \(z_0\) into alternating ascent and descent sectors of equal angle \(\pi /k\), where \(k \ge 1\) is the smallest index with \(p^{(k)}(z_0) \ne 0\). For example, if \(p(z_0) \ne 0\) and \(p'(z_0) \ne 0\), then \(k = 1\) and the disc splits into one ascent and one descent half-disc. Such typical disc partitionings, up to possible rotations, for several values of k are presented in Fig. 1.

Fig. 1

Sectors of ascent (grey colour) and descent (orange colour) directions for various values of k. (Color figure online)

Next, the MRT assures a reduction in the modulus of p(z), when moving from \(z_0\) to \(z_1\), by an amount proportional to \(|p(z_0)p'(z_0)|^2\). Summing up, the GMP enables one to consistently choose a descent direction, whereas the MRT guarantees the decrease of F(z) along the chosen descent direction at every iteration step of the RNM. It is worth adding that a random direction at \(z_0\) has only a fifty per cent chance of being a descent (or ascent) direction, which stresses the great importance of the GMP.

In the sequel, we recall the formulas defining the RNM, following [19].

Let us consider a complex polynomial p:

$$\begin{aligned} p(z) = a_n z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0, \end{aligned}$$
(1)

where \(z \in \mathbb {C}\), \(a_n, a_{n-1}, \ldots , a_0 \in \mathbb {C}\), and \(n \in \mathbb {N}\).

Assume that \(p(z) \ne 0\), \(z \in \mathbb {C}\) and let us define [19]:

$$\begin{aligned} k \equiv k(z) = \min \{ j \ge 1 : p^{(j)}(z) \ne 0 \}, \end{aligned}$$
(2)
$$\begin{aligned} A(z) = \max \left\{ \frac{|p^{(j)}(z)|}{j!} : j = 0, \ldots , n \right\} , \end{aligned}$$
(3)
$$\begin{aligned} u_{k} \equiv u_{k}(z) = \frac{1}{k!} p(z) \overline{p^{(k)}(z)}. \end{aligned}$$
(4)

Moreover, let us define:

$$\begin{aligned} \gamma _k = 2 \cdot {\mathrm{Re}}\left( u^{k-1}_{k}\right) , \end{aligned}$$
(5)
$$\begin{aligned} \delta _k = -2 \cdot {\mathrm{Im}}\left( u^{k-1}_{k}\right) , \end{aligned}$$
(6)
$$\begin{aligned} c_{k} = \max \{ |\gamma _k|, |\delta _k| \}. \end{aligned}$$
(7)

Additionally, let \(\theta _k\) be the angle given by the following formula:

$$\begin{aligned} \theta _k = {\left\{ \begin{array}{ll} 0, & \text {if } c_{k} = |\gamma _k| \wedge \gamma _k < 0, \\ \pi /k, & \text {if } c_{k} = |\gamma _k| \wedge \gamma _k > 0, \\ \pi /(2k), & \text {if } c_{k} = |\delta _k| \wedge \delta _k < 0, \\ 3\pi /(2k), & \text {if } c_{k} = |\delta _k| \wedge \delta _k > 0. \end{array}\right. } \end{aligned}$$
(8)

Now, the RNM for the starting point \(z_0 \in \mathbb {C}\) is defined as:

$$\begin{aligned} z_{i+1} = N_{p}(z_{i}), \quad i = 0, 1, 2, \ldots , \end{aligned}$$
(9)

where

$$\begin{aligned} N_{p}(z_{i}) = z_i + \frac{C_{k}(z_i)}{3} \frac{u_{k}}{|u_{k}|} e^{\mathbf {i} \theta _k} \end{aligned}$$
(10)

and \(\mathbf {i}\) denotes the imaginary unit, i.e. \(\mathbf {i} = \sqrt{-1}\), and

$$\begin{aligned} C_{k}(z_i) = \frac{c_{k} |u_{k}|^{2-k}}{6 A^{2}(z_{i})}. \end{aligned}$$
(11)

The term \((u_k / |u_k|) e^{\mathbf {i} \theta _k}\) is called the normalised robust Newton direction at \(z_i\) [19], and \(C_k(z_i) / 3\) is the step size.

Because the RNM finds both roots and critical points, the stopping criterion has a different form than the one for the classical Newton's method and is given by the following condition:

$$\begin{aligned} |p(z_i)|< \varepsilon \; \vee \; |p'(z_i)| < \varepsilon , \end{aligned}$$
(12)

where \(\varepsilon > 0\) is the accuracy.

To summarise the RNM, in Algorithm 1, we present its pseudocode, and in Algorithm 2, the pseudocode for the \(N_p(z)\) computation.
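To make formulas (2)–(11) more tangible, the sketch below shows one RNM step in C++ (the implementation language mentioned in Sect. 4). It is a minimal illustration written for this description, not the authors' code: the polynomial is stored by its coefficients, the derivatives are evaluated directly, and degenerate situations (e.g. \(p(z) = 0\) or \(u_k = 0\), for which the stopping criterion (12) would already apply) are not handled.

```cpp
#include <algorithm>
#include <cmath>
#include <complex>
#include <vector>

using cplx = std::complex<double>;

// p(z) = a[0] + a[1] z + ... + a[n] z^n, stored by its coefficients.
struct Poly { std::vector<cplx> a; };

// j-th derivative of p at z (j = 0 gives p(z)), evaluated by Horner's scheme.
cplx deriv(const Poly& p, cplx z, int j) {
    const int n = static_cast<int>(p.a.size()) - 1;
    cplx s = 0.0;
    for (int m = n; m >= j; --m) {
        double fall = 1.0;                       // m (m-1) ... (m-j+1)
        for (int t = 0; t < j; ++t) fall *= (m - t);
        s = s * z + fall * p.a[m];
    }
    return s;
}

// One step of the RNM, N_p(z), following Eqs. (2)-(11).
cplx robust_newton_step(const Poly& p, cplx z) {
    const double PI = std::acos(-1.0);
    const int n = static_cast<int>(p.a.size()) - 1;

    int k = 1;                                   // Eq. (2): smallest k with p^(k)(z) != 0
    cplx pk = deriv(p, z, k);
    while (k < n && std::abs(pk) == 0.0) pk = deriv(p, z, ++k);

    double A = 0.0, jfact = 1.0;                 // Eq. (3): A(z) = max_j |p^(j)(z)| / j!
    for (int j = 0; j <= n; ++j) {
        if (j > 0) jfact *= j;
        A = std::max(A, std::abs(deriv(p, z, j)) / jfact);
    }

    double kfact = 1.0;
    for (int t = 2; t <= k; ++t) kfact *= t;
    const cplx u = deriv(p, z, 0) * std::conj(pk) / kfact;          // Eq. (4)

    const cplx upow = std::pow(u, static_cast<double>(k - 1));
    const double gamma = 2.0 * std::real(upow);                     // Eq. (5)
    const double delta = -2.0 * std::imag(upow);                    // Eq. (6)
    const double c = std::max(std::abs(gamma), std::abs(delta));    // Eq. (7)

    double theta;                                                   // Eq. (8)
    if (c == std::abs(gamma)) theta = (gamma < 0.0) ? 0.0 : PI / k;
    else                      theta = (delta < 0.0) ? PI / (2.0 * k) : 3.0 * PI / (2.0 * k);

    const double C = c * std::pow(std::abs(u), 2.0 - k) / (6.0 * A * A);      // Eq. (11)
    return z + (C / 3.0) * (u / std::abs(u)) * std::exp(cplx(0.0, theta));    // Eq. (10)
}
```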


3 The robust Newton’s method with the Mann iteration

The problem of finding a fixed point of a given mapping is well studied in the literature [1, 6]. One of the research directions in these studies is the use of iterative approximation methods for finding the fixed points. The most famous method is the Picard iteration [27]:

$$\begin{aligned} z_{i+1} = T(z_i), \quad i = 0, 1, 2, \ldots , \end{aligned}$$
(13)

where T is a given mapping for which we search for fixed points and \(z_0 \in \mathbb {C}\) is a starting point.

Another known iteration is the Mann iteration (defined in 1953 by William Robert Mann) [21]:

$$\begin{aligned} z_{i+1} = (1 - \alpha _i) z_i + \alpha _i T(z_i), \quad i = 0, 1, 2, \ldots , \end{aligned}$$
(14)

where \(\alpha _i \in (0, 1]\). Depending on the type of the operator T (contractive, expanding, etc.) and the space (Banach, hyperbolic, CAT(0), etc.), the sequence \(\alpha _i\) has to fulfil some additional conditions in order for (14) to converge to a fixed point.

Let us notice that the Mann iteration with \(\alpha _i = 1\) for all i reduces to the Picard iteration. In the literature, one can find many other iterations; a review of 18 different iterations and their dependencies can be found in [13].
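The two schemes can be written down in a few lines. The sketch below is generic and not tied to any particular operator: it assumes that T and the sequence \(\alpha _i\) are supplied by the caller and shows the Picard iteration (13) next to the Mann iteration (14).

```cpp
#include <complex>
#include <functional>

using cplx = std::complex<double>;

// Picard iteration (13): z_{i+1} = T(z_i).
cplx picard(const std::function<cplx(cplx)>& T, cplx z0, int steps) {
    cplx z = z0;
    for (int i = 0; i < steps; ++i) z = T(z);
    return z;
}

// Mann iteration (14): z_{i+1} = (1 - alpha_i) z_i + alpha_i T(z_i).
// For alpha(i) == 1 for all i this reduces to the Picard iteration.
cplx mann(const std::function<cplx(cplx)>& T,
          const std::function<double(int)>& alpha, cplx z0, int steps) {
    cplx z = z0;
    for (int i = 0; i < steps; ++i) {
        const double a = alpha(i);
        z = (1.0 - a) * z + a * T(z);
    }
    return z;
}
```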


At this point, we combine the RNM with the Mann iteration. We do so by using \(N_p\) as T in (14), obtaining:

$$\begin{aligned} z_{i+1} = (1 - \alpha _i) z_i + \alpha _i N_p(z_i), \quad i = 0, 1, 2, \ldots . \end{aligned}$$
(15)

From now on, we will refer to the RNM with the Mann iteration as the M-RNM for short.

Now, let us write (15) in the following form:

$$\begin{aligned} z_{i+1}&= (1 - \alpha _i) z_i + \alpha _i N_p(z_i) \\&= z_i + \alpha _i (N_p(z_i) - z_i). \end{aligned}$$
(16)

Looking at this formula, we can observe that at the \((i+1)\)-th iteration the M-RNM moves \(z_i\) in the direction given by the vector \(N_p(z_i) - z_i\), scaled by \(\alpha _i\). Let us note that \(N_p(z_i)\) moves \(z_i\) towards the solution, assuring a reduction in the modulus of p, so \(N_p(z_i) - z_i\) points in the direction of modulus reduction. Now, everything depends on \(\alpha _i\). If \(\alpha _i < 0\), then the point moves in the direction opposite to the modulus reduction. If \(\alpha _i > 0\), the point moves towards the modulus reduction, but, depending on the value of \(\alpha _i\), it can move faster or slower, i.e. the modulus reduction is larger or smaller; if the value of \(\alpha _i\) is too high, one can even obtain an increase in the modulus.

After substituting (10) into (16), we obtain the following:

$$\begin{aligned} \begin{aligned} z_{i+1}&= z_i + \alpha _i (N_p(z_i) - z_i) \\&= z_i + \alpha _i \left( z_i + \frac{C_k(z_i)}{3} \frac{u_k}{|u_k|} e^{\mathbf {i} \theta _k} - z_i \right) \\&= z_i + \alpha _i \frac{C_k(z_i)}{3} \frac{u_k}{|u_k|} e^{\mathbf {i} \theta _k}. \end{aligned} \end{aligned}$$
(17)

From this form of the M-RNM, it is easily seen that we have obtained the RNM with an additional damping factor \(\alpha _i\). If we combine \(\alpha _i\) with the term representing the step size, i.e. \(C_k(z_i)/3\), we see that \(\alpha _i\) changes the step size of the original RNM.
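In code, the whole modification therefore reduces to damping the RNM step. The sketch below, which reuses the Poly, deriv and robust_newton_step helpers from the sketch in Sect. 2, implements the update (15)/(17) together with an orbit loop stopped by criterion (12); it is an illustration, not the authors' implementation.

```cpp
// One M-RNM update, Eq. (15)/(17): a damped RNM step with factor alpha.
cplx mrnm_step(const Poly& p, cplx z, double alpha) {
    return z + alpha * (robust_newton_step(p, z) - z);
}

// Orbit of the M-RNM for a constant alpha, stopped by criterion (12).
std::vector<cplx> mrnm_orbit(const Poly& p, cplx z0, double alpha,
                             int max_iter, double eps) {
    std::vector<cplx> orbit{z0};
    cplx z = z0;
    for (int i = 0; i < max_iter; ++i) {
        if (std::abs(deriv(p, z, 0)) < eps || std::abs(deriv(p, z, 1)) < eps) break;
        z = mrnm_step(p, z, alpha);
        orbit.push_back(z);
    }
    return orbit;
}
```

For example, running mrnm_orbit for \(p(z) = z^3 - 1\) from \(z_0 = 0.9 + \mathbf {i}\) with a maximum of 500 iterations and various constant values of alpha produces orbits of the kind discussed in the example below and shown in Fig. 2.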

To see how \(\alpha _i\) affects the root-finding process of the RNM, let us consider the following example. We search for the roots of \(p(z) = z^3 - 1\) and use \(\alpha _i = {\mathrm{const}} = \alpha \). The orbits for the starting point \(z_0 = 0.9 + \mathbf {i}\) are presented in Fig. 2 for various values of \(\alpha \). We fixed the maximum number of iterations to 500. According to our previous observation, the orbit for \(\alpha = 1\) is the orbit of the RNM introduced in [19]. When we look at the various orbits, we can see that for \(\alpha = 1\) the orbit forms a smooth curve. If we increase the value of \(\alpha \), the curve becomes a polyline. In the plot presented in Fig. 2a, we can see that for \(\alpha > 1.0\) we obtain a greater modulus reduction and the corresponding orbit is shorter (see Table 1) than in the case of \(\alpha = 1\). However, if the value of \(\alpha \) is too high, then the orbit's points jump from the neighbourhood of one root to the neighbourhood of another root and, as a consequence, the method does not find the solution (see Fig. 2b). Therefore, from this example, we see that by a proper selection of \(\alpha \) we can make (15) faster than the original method proposed in [19].

Fig. 2

Contour plots of \(|p(z)| = |z^3 - 1|\) and the orbits for the M-RNM for various values of \(\alpha \)

Table 1 Lengths of the orbits presented in Fig. 2

The selection of \(\alpha \) is not an easy task, because the best value of \(\alpha \) is different for different polynomials and there is no obvious dependency between \(\alpha \), the polynomial p and the orbit's length. In each step, we could try to find the best value of \(\alpha \) by minimising the modulus, i.e.

$$\begin{aligned} \alpha _i&= {\mathop {{{\,\mathrm{arg\;min}\,}}}\limits _{\alpha > 0}} |p((1 - \alpha ) z_i + \alpha N_p(z_i))| \\&= {\mathop {{{\,\mathrm{arg\;min}\,}}}\limits _{\alpha > 0}} |p(z_{i+1})|. \end{aligned}$$
(18)

This is an equation similar to the one in the gradient descent method with exact line search [31]. By using (18), we can always find an \(\alpha \) that assures modulus reduction. This is because for \(\alpha = 1\) we get the RNM, which reduces the modulus [19]. Therefore, if, in the current step, we do not find an \(\alpha \ne 1\) that reduces the modulus better than the RNM, then we use the RNM (\(\alpha = 1\)). This approach may give the best value of \(\alpha \), but it is computationally too expensive in practice. Instead, we can use a method similar to the backtracking line search known in optimisation [31]. The proposed method is presented in Algorithm 3.
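Since Algorithm 3 is given only as a figure, the sketch below shows one plausible backtracking-style selection of \(\alpha\), under the assumption that the candidate \(\alpha\) is shrunk by the factor s until the modulus is reduced, with the RNM step (\(\alpha = 1\)) as the fallback described above; the exact rule used in Algorithm 3 may differ. It reuses the helpers from the earlier sketches.

```cpp
// Backtracking-style selection of alpha (an assumed rule, see the note above):
// starting from alpha_start, shrink alpha by the factor s until the candidate
// point reduces |p|; otherwise fall back to the plain RNM step (alpha = 1),
// which is known to reduce the modulus [19].
double select_alpha(const Poly& p, cplx z, double alpha_start, double s) {
    const cplx Np = robust_newton_step(p, z);
    const double current = std::abs(deriv(p, z, 0));
    double alpha = alpha_start;
    while (alpha > 1.0) {
        const cplx candidate = z + alpha * (Np - z);
        if (std::abs(deriv(p, candidate, 0)) < current) return alpha;
        alpha *= s;                                   // shrink the step, s in (0, 1)
    }
    return 1.0;
}
```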


Using the same example as in the case of the constant \(\alpha \), i.e. the contour plot of \(p(z) = z^3 - 1\), let us see how the orbits look when we use Algorithm 3. In this example, we take 19.65 as the starting value of \(\alpha \), and we generate orbits for various values of s. The obtained orbits are presented in Fig. 3. The starting value of \(\alpha \) is the same as the constant \(\alpha \) used in Fig. 2b, where the orbit jumped between the neighbourhoods of the roots. After using Algorithm 3, we see that the orbits do not jump. Instead, they tend towards the root. For the various values of s, we can see that the first step is the same, but the orbits differ from the second step onwards. The lengths of the orbits are gathered in Table 2. From the data, we can see that the orbits for s equal to 0.1, 0.3, 0.5 and 0.7 are much shorter than in the case of the RNM (orbit length equal to 80). Only for \(s = 0.9\) do we obtain a longer orbit, but the method converged, in contrast to the constant \(\alpha \) value equal to 19.65.

Fig. 3

Contour plots of \(|p(z)| = |z^3 - 1|\) and the orbits for the M-RNM for various values of s, and the starting value of \(\alpha = 19.65\) according to Algorithm 3

Table 2 Lengths of the orbits presented in Fig. 3

Having introduced the Mann iteration into the RNM, we can ask whether the new method is convergent or not. The answer is affirmative if we assume that \(\alpha _i\) is selected so that in each iteration we get a modulus reduction. We can make such an assumption, because if in the current iteration we do not find an \(\alpha \ne 1\) that reduces the modulus, then we can use \(\alpha = 1\), i.e. the standard RNM, for which it has been proven that it reduces the modulus [19]. In this situation, by using the M-RNM for any starting point, we obtain a decreasing sequence of moduli that is bounded below by zero, so it is convergent.

4 Numerical results

In this section, we present the numerical results obtained with the help of the method proposed in Sect. 3 for \(\alpha _i = \mathrm{const} = \alpha \) and for \(\alpha _i\) computed by using Algorithm 3. The results will be divided into two categories. The first category consists of images showing basins of attraction and dynamics, the so-called polynomiography [17]. In the second category, we will present various numerical measures, such as the average number of iterations [4], convergence area index [5] and polynomiograph’s generation time [12].

Nowadays, polynomiography is an essential part of the modern analysis of the quality of root-finding methods [26]. Depending on the selected colouring method, we can visualise various aspects of the considered root-finding method with the help of polynomiography. There are two standard methods of colouring polynomiographs.

In the first method, called basins of attraction, each root has its own distinct colour, different from some fixed colour, usually black, that represents the case in which the method has not converged to any root. Then, for the last point \(z_{i+1}\), we search for the closest root (using the modulus metric) and colour the starting point with the colour of that root if the number of performed iterations was less than M, and with the fixed colour (black) otherwise. For the RNM and M-RNM, we add one more colour to mark the critical points. The algorithm for generating basins of attraction is presented in Algorithm 4.
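A minimal sketch of this colouring procedure for a single starting point is given below (Algorithm 4 itself is given only as a figure). It reuses the helpers from the sketches in Sects. 2 and 3; the returned colour indices (0 to the number of roots minus 1 for the roots, the number of roots for a critical point, and -1 for black) and the order of the two stopping tests are conventions of this sketch.

```cpp
// Basin-of-attraction colouring of a single starting point z0 for the M-RNM.
int basin_colour(const Poly& p, cplx z0, const std::vector<cplx>& roots,
                 double alpha, int M, double eps) {
    cplx z = z0;
    int i = 0;
    for (; i < M; ++i) {
        if (std::abs(deriv(p, z, 1)) < eps)          // converged to a critical point
            return static_cast<int>(roots.size());
        if (std::abs(deriv(p, z, 0)) < eps) break;   // converged to a root
        z = mrnm_step(p, z, alpha);
    }
    if (i == M) return -1;                           // no convergence: black
    int best = 0;                                    // closest root in the modulus metric
    for (int r = 1; r < static_cast<int>(roots.size()); ++r)
        if (std::abs(z - roots[r]) < std::abs(z - roots[best])) best = r;
    return best;
}
```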


In the second method, the so-called iteration colouring, which depicts the speed of convergence and the dynamics of the considered method, the starting point is coloured by linearly mapping the number of performed iterations onto a colour in the given colour map. The algorithm for generating this type of polynomiograph is presented in Algorithm 5.
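The corresponding iteration colouring can be sketched as a loop over the image pixels. The code below maps each pixel of a W × H image onto the square area A, runs the M-RNM for up to M iterations with the stopping test (12), and stores the number of performed iterations, which is later mapped linearly onto the colour map. The pixel-to-plane mapping and the output buffer layout are conventions of this sketch, and the helpers come from the earlier sketches (Algorithm 5 itself is given only as a figure).

```cpp
// Per-pixel iteration counts for the iteration colouring (dynamics) polynomiograph.
void render_dynamics(const Poly& p, double alpha, int M, double eps,
                     int W, int H, double lo, double hi,
                     std::vector<int>& iters) {       // output: W * H counts
    iters.assign(W * H, 0);                           // int indexing suffices for these sizes
    for (int py = 0; py < H; ++py) {
        for (int px = 0; px < W; ++px) {
            // map pixel (px, py) to a point of the square area A = [lo, hi]^2
            cplx z(lo + (hi - lo) * px / (W - 1.0),
                   lo + (hi - lo) * py / (H - 1.0));
            int i = 0;
            while (i < M && std::abs(deriv(p, z, 0)) >= eps
                         && std::abs(deriv(p, z, 1)) >= eps) {
                z = mrnm_step(p, z, alpha);
                ++i;
            }
            // i is mapped linearly onto the colour map when the image is drawn
            iters[py * W + px] = i;
        }
    }
}
```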


As we mentioned at the beginning of this section, we used three numerical measures in our comparisons. The first measure is the average number of iterations (ANI) [4]. To compute ANI, we need a polynomiograph generated with the iteration colouring. In this polynomiograph, each point corresponds to the number of iterations needed to find a root. As the name suggests, ANI is computed as the average number of iterations in the given polynomiograph. The second measure we use is the convergence area index (CAI) [5]. This measure can be computed from basins of attraction or a polynomiograph generated with the iteration colouring. CAI is the ratio of the number of convergent points \(N_c\) to the number of all points N in the considered polynomiograph, i.e.

$$\begin{aligned} {\mathrm{CAI}} = \frac{N_c}{N}. \end{aligned}$$
(19)

From the definition of CAI, we can see that it lies between 0 and 1 and gives us information about the fraction of the polynomiograph's area that has converged to the roots. The last considered measure is the generation time of the polynomiograph [12]. It gives us information about the real time of the computations.
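The ANI and CAI measures can be computed directly from the per-pixel iteration counts produced by a dynamics rendering such as the sketch above; here a point is counted as convergent when its iteration count is smaller than M, matching the colouring rule described earlier.

```cpp
#include <utility>
#include <vector>

// ANI and CAI (Eq. (19)) from the per-pixel iteration counts.
std::pair<double, double> ani_cai(const std::vector<int>& iters, int M) {
    double sum = 0.0;
    double converged = 0.0;
    for (int it : iters) {
        sum += it;
        if (it < M) converged += 1.0;            // the point converged before reaching M
    }
    const double ani = sum / iters.size();       // average number of iterations
    const double cai = converged / iters.size(); // convergence area index, Eq. (19)
    return {ani, cai};
}
```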

In the numerical examples presented in this section, we used the following three polynomials, which are among the most commonly used in the literature:

  • \(p(z) = z^3 - 1\) with the roots: \(-\frac{1}{2} - \frac{\sqrt{3}}{2} \mathbf {i}\), \(-\frac{1}{2} + \frac{\sqrt{3}}{2} \mathbf {i}\), 1,

  • \(p(z) = z^4 - 1\) with the roots: \(-1\), 1, \(-\mathbf {i}\), \(\mathbf {i}\),

  • \(p(z) = z^5 + z\) with the roots: 0, \(-\frac{1 + \mathbf {i}}{\sqrt{2}}\), \(\frac{1 + \mathbf {i}}{\sqrt{2}}\), \(-\frac{1 - \mathbf {i}}{\sqrt{2}}\), \(\frac{1 - \mathbf {i}}{\sqrt{2}}\).

To generate the polynomiographs, we used the following parameters: \(A = [-3, 3]^2\), \(M = 250\), \(\varepsilon = 0.001\), and an image resolution of \(800 \times 800\) pixels. The colour map used for the iteration colouring is presented in Fig. 4. In the basins of attraction, the lack of convergence is depicted by the black colour, the critical points by the yellow colour, and the roots by the remaining colours. In the experiments with the constant \(\alpha \) parameter used in the Mann iteration, \(\alpha \) was taken from (0, 50] with a step equal to 0.01. In the case of the variable \(\alpha \) parameter, i.e. computed with Algorithm 3, several values of the starting \(\alpha \) were investigated and the s parameter was taken from (0, 1) with a step equal to 0.01.

Fig. 4

Colour map used in the dynamics polynomiographs, with the corresponding numbers of iterations marked

All experiments were performed on a computer with the following specifications: Intel i5-9600K (@ 3.70 GHz), 32 GB DDR4 RAM, NVIDIA GeForce GTX 1660 Ti graphics card with 6 GB GDDR6 SDRAM, and Windows 10 (64 bit). The software for polynomiograph generation was implemented in the C++ programming language with OpenGL (Open Graphics Library) and GLSL (OpenGL Shading Language). The computations of the polynomiographs were performed on the graphics card with the use of shaders written in GLSL.

4.1 \(p(z) = z^3 - 1\)

4.1.1 Constant \(\alpha \) sequence

In Fig. 5, the basins of attraction for \(p(z) = z^3 - 1\) for different values of \(\alpha \) are presented. Since this polynomial has three roots and one critical point, we can see four corresponding basins. Let us note that for \(\alpha = 1.0\) we deal with the Picard iteration; thus, we obtain the polynomiograph for the RNM. The boundaries of the basins are sharp, not fractal-like as in the case of the classical Newton's method. Moreover, in the centre of this polynomiograph (point \(0 + 0\mathbf {i}\)) there is a yellow dot, which is the critical point of p. Furthermore, we can see that decreasing \(\alpha \) causes fewer points to converge to the roots. Indeed, when \(\alpha \) tends to zero, the black colour floods outwards from the proximity of the critical point until the red, green, and blue areas vanish completely. The yellow dot in the centre stays forever. On the other hand, increasing \(\alpha \) causes a different effect, and the dynamics of the changes is also different. We can observe that, starting from about \(\alpha = 10.4\), the regularity of the basins is broken at the borders, and this irregularity tends to grow. From \(\alpha = 17.78\), the number of convergent points changes dramatically and, finally, these points disappear. Instead, some points converge to the critical point (yellow areas). From about \(\alpha = 34.0\), the points that converge to the critical point start to appear randomly.

Fig. 5

Examples of basins of attraction for \(p(z) = z^{3} - 1\) for various values of \(\alpha \)

In Fig. 6, the dynamics for the same polynomial and the same values of \(\alpha \) are presented. In this way, we can observe how fast the proposed method converges for the plotted starting points, according to the colour map from Fig. 4. Because we have chosen a colour map that is not monotonic (we did so to emphasise the dynamics and to see the differences clearly), it is a little difficult to judge the increase in the speed of convergence for particular points without consulting the colour map. Nevertheless, one can observe that, as in the case of the basins of attraction, for the same values of \(\alpha \) the same points are non-convergent (black areas). Additionally, for \(\alpha = 1.0\) one can see that the closer a point is to one of the roots, the faster the convergence (which is in accordance with intuition). When \(\alpha \) tends to zero, the colours move to the right on the colour map (see Fig. 4), which means that the overall convergence becomes slower. On the other hand, when we increase \(\alpha \), we move to the left on the colour map, so the overall convergence becomes faster. However, from about \(\alpha = 10.4\), the convergence becomes slower again. This visually confirms the observation from Sect. 3 that the speed of convergence changes depending on the value of \(\alpha \), and we see here that this happens globally for all points.

Fig. 6

Examples of the speed of convergence and dynamics polynomiographs for \(p(z) = z^{3} - 1\) for various values of \(\alpha \)

Fig. 7

Numerical results for \(p(z) = z^3 - 1\) and the various measures

Fig. 8

Examples of basins of attraction for \(p(z) = z^{3} - 1\) for various values of starting parameters \(\alpha \) and various values of s

Fig. 9

Examples of the speed of convergence and dynamics polynomiographs for \(p(z) = z^{3} - 1\) for various values of starting parameters \(\alpha \) and various values of s

To make the considerations complete, in Fig. 7 the plots of the commonly used measures ANI, CAI and generation time (in seconds) are presented. From Fig. 7a, one can see that the average number of iterations is the smallest for \(\alpha = 9.32\) and equals 5.63, which is considerably smaller than in the case of the RNM, for which the value of ANI is equal to 84.89. For \(\alpha \in [ 17.9, 33.74]\), we get nearly the maximal number of iterations. Then, for \(\alpha > 33.74\), the average number of iterations changes randomly and with quite a large amplitude. By observing the CAI index (representing the normalised convergence area), one can see that for the RNM method (\(\alpha = 1.0\)) we have a high CAI value, i.e. about 0.99, which means that some points have not converged. From \(\alpha = 1.74\) up to \(\alpha = 17.75\), all points converge, i.e. CAI equals 1.0, so the convergence is better than in the case of the RNM. For \(\alpha \in [17.75, 33.74]\), only a small percentage (more or less \(1 \%\)) of the points converge. For \(\alpha > 33.74\), the number of converging points changes in a random way. By observing the time plot, one can see a similar tendency as for ANI. Since each iteration takes nearly the same amount of time, the time plot is highly correlated with the average number of iterations. When \(\alpha \) increases, the computation time gets shorter, down to its minimum of 0.021 s attained for \(\alpha = 9.29\). Then, the time gets longer up to \(\alpha = 17.9\). For \(\alpha > 17.9\), the time oscillates around 0.76 s. When we compare the times obtained by the M-RNM, especially the minimal value (0.021 s), to the time obtained by the RNM, i.e. 0.27 s, we see that the M-RNM can generate the polynomiograph much faster (using fewer iterations). As one can expect, all these plots confirm the visual results presented in Figs. 5 and 6.

Fig. 10

Numerical results for \(p(z) = z^3 - 1\) for the various measures and various starting values of \(\alpha \)

Fig. 11

Examples of basins of attraction for \(p(z) = z^{4} - 1\) for various values of \(\alpha \)

4.1.2 Variable \(\alpha \) sequence

In this subsection, we analyse the M-RNM algorithm in the case of a non-constant \(\alpha \) sequence. We take 20 and 50 as the starting values of \(\alpha \), and discrete values of the scale parameter s from the interval (0, 1). As a result of the experiments for \(z^3 -1\), we obtained the basins of attraction presented in Fig. 8, the dynamics images shown in Fig. 9, and the numerical characteristics of ANI, CAI and generation time given in Fig. 10. From the sets of the obtained images, the most representative ones were chosen. In the images in Fig. 8a, e, the black colour dominates, which means non-convergence and indicates very poor behaviour of the M-RNM algorithm. The remaining images, Fig. 8b–d, show that the M-RNM method is convergent. In these images, mostly three colours are clearly seen, red, green, and blue, all denoting basins of attraction, together with small yellow areas related to the critical point. The obtained basins look different compared to those generated using the RNM or the classical Newton's method. They have complicated shapes with rotational symmetry, split into several bigger and smaller areas. Their structure is neither fractal with typical braids, as in the classical Newton's method, nor similar to the three triangle-like basins obtained with the RNM algorithm. Thus, in the considered cases of variable \(\alpha \) sequences, the M-RNM is less robust in comparison with the RNM or with the M-RNM with constant \(\alpha \) sequences.

Fig. 12

Examples of the speed of convergence and dynamics polynomiographs for \(p(z) = z^{4} - 1\) for various values of \(\alpha \)

For the same choice of the \(\alpha \) and s parameters as for the basins of attraction, the dynamics of the M-RNM root-finding process with a variable \(\alpha \) sequence is presented in Fig. 9. In Fig. 9a, e, the black colour dominates, which means the lack of convergence, whereas in Fig. 9b–d a bright bluish colour is mostly seen, which, according to the used colour map (Fig. 4), means a very low number of iterations, i.e. a very high speed of convergence.

Fig. 13

Numerical results for \(p(z) = z^4 - 1\) and the various measures

Fig. 14

Examples of basins of attraction for \(p(z) = z^{4} - 1\) for various values of starting parameters \(\alpha \) and various values of s

Fig. 15

Examples of the speed of convergence and dynamics polynomiographs for \(p(z) = z^{4} - 1\) for various values of starting parameters \(\alpha \) and various values of s

In these experiments, we obtained the following best results: ANI = 5.54, CAI = 1.0 and generation time = 0.027 s for the starting \(\alpha = 20\) and \(s = 0.46\), and ANI = 7.34, CAI = 1.0 and generation time = 0.040 s for the starting \(\alpha = 50\) and \(s = 0.43\). When comparing them to the constant case for \(\alpha = 9.32\) (ANI = 5.62, CAI = 1.0, generation time = 0.021 s), we see similar numerical results in both cases, but the basins of attraction are very different: they are more complicated in the case of the variable \(\alpha \) sequence. More details concerning the M-RNM with a non-constant \(\alpha \) sequence can be extracted from the plots shown in Fig. 10. Namely, from Fig. 10a, b, one can see that the blue lines for the starting \(\alpha = 20\) are more regular, whereas the lines for the starting \(\alpha = 30\), and even more so for the starting \(\alpha = 50\), are chaotic. The investigated M-RNM with a variable \(\alpha \) sequence behaves best for values of the s parameter taken from the middle of the interval (0, 1), whereas when s is close to 0 or 1 the algorithm performs poorly. Moreover, a suitable choice of the starting \(\alpha \) is essential because it influences the shape of the basins of attraction. The considered algorithm seems to work best (i.e. it is robust, with low ANI and CAI close to one) for the starting \(\alpha \) equal to 20.0 and s close to 0.5.

4.2 \(p(z) = z^4 - 1\)

4.2.1 Constant \(\alpha \) sequence

In Fig. 11, the sequence of polynomiographs showing the evolution of the five basins (four for the roots and one for the critical point \(0 + 0\mathbf {i}\)) of \(z^4 -1\) depending on \(\alpha \) is given. The basins of the roots are coloured by four colours: red, green, blue, and magenta. The black colour denotes the lack of convergence, whereas yellow denotes convergence to the critical point. In Fig. 11c–f, four almost triangular basins fill the whole area of the polynomiographs except for the yellow critical point. When \(\alpha \) decreases, starting from \(\alpha = 1.0\) (corresponding to the RNM), the basins become smaller. At \(\alpha = 0.1\) they disappear entirely, and almost all points are non-convergent except for the yellow critical one. When the parameter \(\alpha \) increases from 30.0 to 34.5, yellow artefacts gradually appear on the basins' boundaries; they are related to points converging to the critical point and open new areas of convergence to the roots inside the old basins. A further increase in \(\alpha \) causes the new areas of convergence to the roots to grow and the yellow area to expand, whereas the old basins shrink until they disappear. The regular symmetric areas visible in the polynomiographs are coloured according to an easily understandable scheme. With a further increase in \(\alpha \), one can observe that the areas of yellow colour are gradually replaced by new areas of convergence to the roots. For \(\alpha = 40.0\), a new symmetric pattern appears in the polynomiograph, in which one can observe many areas of convergence to the roots or the critical point and areas of non-convergence. Finally, for \(\alpha \ge 40.73\), only convergence to the critical point or non-convergence is observed. In this case, a regular pattern occurs or a randomly distributed mix of yellow and black points is visible.

Further information concerning the speed of convergence of the method can be extracted from the polynomiographs presented in Fig. 12, which show the dynamics of the considered root-finding process for the same sequence of \(\alpha \) values as in Fig. 11. Figure 12 presents the symmetric reference image for \(\alpha = 1.0\), i.e. the case of the RNM. The four roots and the critical point are clearly visible. Yellow, violet, blue, and light blue colours are seen in the image, which corresponds to a high number of iterations. No chaos or fractal structures are seen on the diagonals. If \(\alpha \) decreases from 1.0, more and more non-convergence appears (Fig. 12a, b). When \(\alpha \) increases from 1.0 up to about 32.0, the colours in Fig. 12e–h move to the left part of the colour map (Fig. 4), which means that the number of iterations goes down. A further increase of \(\alpha \), as in Fig. 12i–l, reveals intriguing symmetric patterns coloured by colours from the left part of the colour map (Fig. 4), which corresponds to a relatively small number of iterations. For \(\alpha \ge 40.0\) (Fig. 12m–o), more and more random points are observed and a large number of iterations is performed, which is coded by the colours from the right part of the colour map.

The plots of ANI, CAI and generation time (in seconds) versus \(\alpha \) are presented in Fig. 13. This completes the analysis of the considered root-finding method in the case of \(z^4 -1\). Observe that for \(\alpha = 1.0\) (the RNM) the following values were obtained: ANI \(= 179.9\), CAI \(= 0.98\), time \(= 0.68\) s, whereas for \(\alpha = 20.47\) the ANI measure attained its minimal value of 5.41, with CAI \(= 1.0\) and time \(= 0.025\) s. This means that the M-RNM for \(\alpha = 20.47\) attained the best improvement in comparison with the RNM: the number of iterations was reduced about 33 times and the time about 27 times, with CAI close to 1.0. Similar behaviour occurs on the flat parts of the ANI and CAI plots for \(\alpha \) values roughly in the range [10, 30]. When \(\alpha > 40.0\), random fading oscillations are seen, which means that the M-RNM loses its stability and stops working properly.

Summing up, the M-RNM essentially improves the RNM in terms of the number of iterations in the case of the polynomial \(z^4 -1\) for \(\alpha = 20.47\). Less improvement is observable for \(\alpha \) values smaller or greater than the optimal value \(\alpha = 20.47\). Both algorithms converge globally. The M-RNM algorithm does not lose its robustness for the parameter \(\alpha \) varying in a relatively wide range. We cannot see the typical fractal braids that are observed in polynomiographs generated via the classical Newton's method.

4.2.2 Variable \(\alpha \) sequence

In this subsection, we show some examples of the M-RNM with a variable \(\alpha \) sequence. As in Sect. 4.1.2, we fixed the starting values of \(\alpha \) at 20 and 50, since they are the most representative. In Fig. 14, examples of basins of attraction for the considered polynomial are presented for these starting values of \(\alpha \) and different values of the parameter s. As in the case of the previous polynomial, the values of s were chosen to give the most diverse presentation. It is worth mentioning that in the case of the starting \(\alpha = 20\) the polynomiograph looks the same for all values of s, whereas for the starting \(\alpha = 50\) it changes from one with many non-convergent points (black areas, Fig. 14b), through one without them (Fig. 14c), and back again (Fig. 14d).

Similarly to the previous subsection, in Fig. 15, the dynamics polynomiographs are presented for different starting values of \(\alpha \) and s (the same as in Fig. 14). For the starting \(\alpha = 20\), the dynamics remains the same for all values of s. On the other hand, for the starting \(\alpha = 50\) we can see a diversity of convergence behaviour for different values of s.

Finally, in Fig. 16, the plots of ANI, CAI and generation time are presented for the three different starting values of \(\alpha \). As one can see, for the starting \(\alpha \) equal to 20 and 30 the number of iterations is constant, and so are the convergence area index and the generation time. In the case of the starting \(\alpha = 50\), we can observe some peaks in the plots. These peaks represent the polynomiographs with many non-convergent points; the higher the peak, the more non-convergent points. Moreover, we can observe that the best choice of s is about 0.4, since it assures the shortest generation time and the best convergence for all tested starting values of \(\alpha \). In more detail, for the variable \(\alpha \) sequence the generation times were as follows: for \(\alpha = 20\), the time oscillated around 0.033 s, for \(\alpha = 30\), the time oscillated around 0.067 s, and for \(\alpha = 50\), the shortest time was 0.040 s.

Fig. 16

Numerical results for \(p(z) = z^4 - 1\) for the various measures and various starting values of \(\alpha \)

Fig. 17

Examples of basins of attraction for \(p(z) = z^{5} + z\) for various values of \(\alpha \)

Fig. 18

Examples of the speed of convergence and dynamics polynomiographs for \(p(z) = z^{5} + z\) for various values of \(\alpha \)

4.3 \(p(z) = z^5 + z\)

4.3.1 Constant \(\alpha \) sequence

In the last example, we consider the polynomial \(p(z) = z^5 + z\). It has five distinct roots and four critical points, i.e. \(-0.4728 + 0.4728\mathbf {i}\), \(-0.4728 - 0.4728\mathbf {i}\), \(0.4728 - 0.4728\mathbf {i}\), \(0.4728 + 0.4728\mathbf {i}\). The basins of attraction obtained for this polynomial are presented in Fig. 17. The roots are coloured by red, green, blue, cyan, and magenta colours. Firstly, let us look at the basins of attraction for the RNM with the Picard iteration (Fig. 17c). We can easily observe that many of the points have not converged to any root or critical point. Most of the converging points converge to one of the roots, namely to \(0 + 0\mathbf {i}\) (cyan colour). For the Picard iteration, we can increase the number of converging points by simply increasing the value of M; all points in the considered area converge for \(M = 1255\), which is a high value. If we decrease the value of \(\alpha \) in the Mann iteration below 1.0, then we can observe that the M-RNM converges to only one of the roots and that the basin of this root shrinks. Thus, the method behaves worse than the RNM. If we increase the value of \(\alpha \) above 1.0, then we see the opposite behaviour, i.e. the points that are non-convergent for \(\alpha = 1.0\) start to converge to the four roots other than \(0 + 0\mathbf {i}\). For \(\alpha = 3.0\), we see almost full coverage of the area with convergent points (see Fig. 17e), and for \(\alpha = 10.0\), we obtain the situation in which all points converge (see Fig. 17f). We obtain this by using \(M = 250\) in the Mann iteration, which is considerably smaller than \(M = 1255\) for the Picard iteration. By increasing \(\alpha \) further, we can observe that some points lose convergence to the root represented by the cyan colour. For \(\alpha = 22.0\), we obtain the situation in which none of the points converges to the \(0 + 0\mathbf {i}\) root. By increasing \(\alpha \) even further, we see that some chaotic behaviour occurs. We still see some visible basins of attraction for the roots, but they are much smaller than for the lower values of \(\alpha \). Most of the points converge to a seemingly random root.

When we look at the dynamics presented in Fig. 18, we can observe a very interesting behaviour of the M-RNM. Similarly to the basins of attraction, we start with the polynomiograph for the RNM with the Picard iteration (Fig. 18c). From this polynomiograph, we can observe that a very fast convergence is obtained near the \(0 + 0\mathbf {i}\) root. Moreover, the farther away a point is from this root, the more iterations the method needs to perform to reach it. When we decrease the value of \(\alpha \) in the Mann iteration below 1.0, the speed of convergence also decreases, i.e. we need to perform more iterations to find the root. The number of iterations, depicted by colour, changes radially from the \(0 + 0 \mathbf {i}\) root. If we take values of \(\alpha \) greater than 1.0, then we can observe a considerable reduction in the number of performed iterations. For instance, in the central part, we see a darker pink colour, which means that we need fewer iterations to reach the root than in the case of \(\alpha = 1.0\). At the same time, we can observe that the dynamics increases. From Fig. 18f, we see that the M-RNM needs a very small number of iterations to find the roots (we see many points with dark pink and light violet colours). However, when we increase the value of \(\alpha \) further, we see that in the cross-like area in the centre of the polynomiograph the number of iterations increases and the method loses its convergence (the black colour represents the maximal number of iterations), see Fig. 18h. After a certain threshold, the M-RNM starts to behave increasingly chaotically. In Fig. 18i–o, we see this behaviour. Moreover, we can observe that a similar number of iterations is needed for the points that converge, i.e. we see light violet and pink colours, which means that we need less than half of the maximal number of iterations (\(M = 250\)).

The plots of ANI, CAI and polynomiograph generation time (in seconds) for \(p(z) = z^5 + z\) are presented in Fig. 19. The values of these measures for the RNM with the Picard iteration (\(\alpha = 1.0\) in the Mann iteration) are equal to 229.77, 0.20 and 1.02 s, respectively. When we look at the ANI plot (Fig. 19a), we see that for values of \(\alpha \) less than 1.0 the value of ANI increases, reaching the maximal value of 250 for \(\alpha < 0.07\). For values of \(\alpha \) greater than 1.0, ANI decreases with the increase in \(\alpha \). A local minimal value of 25.23 is attained at \(\alpha = 15.59\). Then, ANI increases, reaching a local maximal value of 197.11 for \(\alpha = 39.46\). For \(\alpha > 39.46\), we can observe another decrease in ANI. The minimal value of 23.92 (which is the global minimum) is attained at \(\alpha = 48.71\). By comparing the results for \(\alpha > 1.0\) with the result obtained for the RNM (229.77), we see that for all these values of \(\alpha \) we get a lower value of ANI. Now, let us look at the CAI plot presented in Fig. 19b. From the plot, we see that for \(\alpha < 1.0\) we obtain results worse than for the RNM. The lowest value of CAI, equal to 0.0, is obtained for \(\alpha < 0.07\). If we change \(\alpha \) in the opposite direction, i.e. we increase its value above 1.0, we see that the value increases rapidly, reaching the maximal value of 1.0 for \(\alpha = 4.16\). The value of 1.0 does not change until \(\alpha = 17.79\), where CAI starts to decrease. A local minimum of 0.31 is attained at \(\alpha = 39.46\). Then, the value of CAI starts to increase again, reaching values near 1.0 (\(\approx 0.99\)) for \(\alpha > 40.7\). Additionally, when we compare the results for \(\alpha > 1.0\), we see that the results obtained for the M-RNM are better than in the case of the RNM (0.20). In the last plot, presented in Fig. 19c, we see the polynomiographs' generation times. The overall tendency observed in this plot is very similar to the one obtained for ANI (Fig. 19a). For \(\alpha < 1.0\), we get longer times than in the case of the RNM, with the longest time of 1.27 s for \(\alpha = 0.01\), whereas for \(\alpha > 1.0\) the times get shorter. The visible local extrema of 0.12 s, 1.02 s and 0.324 s are attained at \(\alpha = 15.81\), 39.44 and 48.42, respectively. The shortest time of 0.12 s is considerably shorter than the time for the RNM (1.02 s).

Fig. 19

Numerical results for \(p(z) = z^5 + z\) and the various measures

One interesting thing that we can observe by comparing the plots of ANI (Fig. 19a) and time (Fig. 19c) is that the value of \(\alpha \) for which we get the lowest value of ANI (\(\alpha = 48.71\)) does not coincide with the value for the shortest time (\(\alpha = 15.81\)). Moreover, when we compare the CAI values for \(\alpha = 48.71\) (0.99) and \(\alpha = 15.81\) (1.0), we see that in this case the lowest value of ANI does not guarantee the best convergence in the area, i.e. some points for \(\alpha = 48.71\) do not converge to any root, whereas for \(\alpha = 15.81\) all points converge. Therefore, when we want to select the best value of \(\alpha \), one that assures good convergence and performance, we need to consider all three measures (ANI, CAI, time) simultaneously.

4.3.2 Variable \(\alpha \) sequence

In this subsection, we present some examples obtained for the M-RNM with a variable \(\alpha \) sequence. The graphical examples, as in the case of the previous polynomials, were generated for two starting values of \(\alpha \), namely 20 and 50, and variable s. The obtained basins of attraction are presented in Fig. 20. For the starting \(\alpha = 20\) (see Fig. 20a, b), the basins of attraction for the majority of the s values look like the basins presented in Fig. 20a. The method converges to all the roots, and there are no visible non-convergent points. The loss of convergence appears for s values near 1, where the method loses its convergence in the cross-like region, see Fig. 20b. For the other starting value of \(\alpha \) (see Fig. 20d, e), the basins of attraction look different, i.e. we do not see the cross-like region. However, also in this case, for the majority of the s values, the basins look like the ones presented in Fig. 20c. For values of s near 1, and for some single values of s in the rest of the (0, 1) interval, e.g. for \(s = 0.36\), we lose the convergence of the method, see Fig. 20d, e.

Fig. 20

Examples of basins of attraction for \(p(z) = z^{5} + z\) for various values of starting parameters \(\alpha \) and various values of s

The polynomiographs showing the dynamics and the speed of convergence for the same values of the parameters (starting \(\alpha \) and s) as in the case of the basins of attraction are presented in Fig. 21. When we look at the starting \(\alpha = 20\), we see that the dynamics and the speed of convergence change only in the cross-like region. In the other regions, the polynomiographs do not change. The speed of convergence, compared to the RNM (Fig. 18c), is faster for the majority of the s values. Only for values of s near 1 is the speed in the cross-like region worse. For the starting \(\alpha = 50\), we see that for the polynomiographs with good convergence (Fig. 21c) the speed of convergence is very high. For the points that converge in the other cases (Fig. 21d, e), the speed of convergence is faster than or comparable to the RNM. Moreover, when we compare the polynomiographs for the starting \(\alpha = 50\) with the one for a constant \(\alpha = 50\) (Fig. 18o), we see that in the variable case there is no chaotic behaviour of the kind visible in the constant case.

Fig. 21

Examples of the speed of convergence and dynamics polynomiographs for \(p(z) = z^{5} + z\) for various values of starting parameters \(\alpha \) and various values of s

The plots of ANI, CAI and generation time for three starting values of \(\alpha \), namely 20, 30 and 50, are presented in Fig. 22. From the ANI plot (Fig. 22a), we can see that in all three cases the method obtains larger values of ANI for s near 0 and 1. Moreover, for the starting \(\alpha \) values equal to 30 and 50 we observe some peaks, which correspond to the exemplary basins of attraction presented in Fig. 20d. When we compare the obtained results with the result for the RNM, i.e. 229.77, we see that for the variable \(\alpha \) sequence we obtain much lower values of ANI. Only in the peaks visible for the starting \(\alpha = 50\) can we see values close to 229.77. The minimal values of ANI for the three considered starting \(\alpha \) values, equal to 16.298, 9.789 and 3.671, were attained for s equal to 0.45, 0.3 and 0.18, respectively. By looking at the plots of CAI (Fig. 22b), we can observe that for most of the s values the value of CAI is equal to 1.0. Values less than 1.0 are obtained only for s close to 1 and in the visible peaks, which appear at the same points as the peaks in the ANI plot. By comparing these results with the value of CAI obtained for the RNM, i.e. 0.20, we can see that the convergence of the M-RNM with the variable \(\alpha \) sequence is much better. In the last plot (Fig. 22c), i.e. the generation time plot, we can observe a very similar tendency as in the case of the ANI plot. The only difference is that for the values of s near 1 we can observe a very large time difference compared to the other peaks. The best times in all three considered cases are much shorter than in the case of the RNM (1.02 s): for \(\alpha = 20\) the time was 0.1 s (attained at \(s = 0.45\)), for \(\alpha = 30\) the time was 0.063 s (attained at \(s = 0.3\)), and for \(\alpha = 50\) the time was 0.026 s (attained at \(s = 0.18\)).

Fig. 22

Numerical results for \(p(z) = z^5 + z\) for the various measures and various starting values of \(\alpha \)

4.4 Comments

We consider only positive values of the parameter \(\alpha \) in the M-RNM algorithm, because for negative values of \(\alpha \) convergence cannot be obtained. For \(\alpha < 1\), as \(\alpha \) gets smaller and smaller, the M-RNM behaves increasingly badly: more and more points of non-convergence appear in the polynomiographs. The M-RNM behaves better than the RNM for \(\alpha > 1\), i.e. more points converge and the number of iterations is smaller.

The optimal value of \(\alpha \) is such that the ANI measure is as low as possible and, simultaneously, the CAI index is as close as possible to 1. Such cases occur for \(z^3 - 1\) and \(z^4 -1\) when \(\alpha \) attains 9.32 and 20.47, for which the ANI measure attains its global minimum of 5.63 and 5.41, respectively. For other polynomials, the optimal value of \(\alpha \) can be taken as the one for which the ANI measure attains a local minimum and the CAI index is close to 1. Such a situation occurs for the polynomial \(z^5 + z\). That example shows that, because of the mutual nonlinear dependency between the ANI, CAI and time measures versus \(\alpha \), it is not possible to formulate general recommendations on how to choose the optimal \(\alpha \) to achieve the best improvement of the M-RNM algorithm. However, the best values of \(\alpha \) for a specific polynomial can be determined experimentally via the analysis of the ANI, CAI and time plots.

From the examples for the variable \(\alpha \) given by Algorithm 3, we can observe that in this case we also cannot give general recommendations on the choice of the starting \(\alpha \) and s values. The relationship between these parameters is also nonlinear, but one should avoid values of s near 0 and 1, because for these values the method obtains worse results than for the other values in (0, 1).

Similarly to the case of the RNM, the boundaries between the basins of attraction for the M-RNM are sharp over a wide range of \(\alpha \) values for all considered polynomials, and the method is stable. However, after some threshold value (different for each of the polynomials), the basins of attraction look chaotic and the method loses its stability.

5 Artistic patterns from dynamics of the M-RNM

Dynamics of discrete dynamical systems, besides its essential meaning in the analysis of these systems, can also have artistic applications. The graphical presentation of the phase portraits of many discrete dynamical systems can reveal artistic patterns of aesthetic value. We can find many examples of such patterns in the literature, e.g. Ouyang et al. [25] presented spiral patterns that are coloured by the dynamics of a dynamical system compatible with some symmetry group, Lu et al. [20] and Chung et al. [9] proposed methods for creating wallpaper patterns from dynamics, and patterns from the dynamics of combined root-finding methods were presented by Gdawiec [12].

When we look at the dynamics polynomiographs generated by the RNM, e.g. Figs. 6c, 12c or 18c, we can see rather boring patterns from the artistic point of view. However, when we look at the patterns generated by the Mann iteration in conjunction with the RNM (see, for example, Fig. 12), we can see that very interesting patterns can emerge. The images presented in Sects. 4.1.1, 4.2.1 and 4.3.1 were obtained for constant sequences of \(\alpha _i\). In the Mann iteration, we are not limited to constant sequences, especially if we are interested in artistic applications. Therefore, in the rest of this section, we will introduce some non-constant sequences that generate very interesting patterns from the dynamics of the M-RNM.

Before presenting examples of artistic patterns, we need to consider two technical subjects that greatly influence the appeal of the patterns presented in the polynomiographs. Let us look at Fig. 23a, which was obtained for \(p(z) = z^5 + z\), \(\alpha = 17.7\), \(M = 200\), \(A = [-3, 3]^2\) and \(\varepsilon = 0.001\). When we look closely at the cross pattern, we see many unpleasant artefacts that look like jagged shapes, noise or chaos. This unpleasant effect is caused by the discrete nature of the image: we sample a signal that contains high frequencies, i.e. the space contains many tiny details that cannot be captured by the discretization process. This is a well-known problem in computer graphics, and it is called the aliasing effect [2]. To reduce this effect, one can use one of the many methods of anti-aliasing. The simplest anti-aliasing method is supersampling [2]. In this method, we render the image at a higher resolution than the image to be displayed, usually 2, 4 or 8 times higher, and then we average the colours in non-overlapping blocks. The block size depends on the resolution's magnification factor, i.e. if f is the factor, then the block size is equal to \(f \times f\). The result of applying this anti-aliasing method to the pattern from Fig. 23a is presented in Fig. 23b, c. We see that the appeal of the pattern in the areas with high frequencies is much better. Now, the pattern does not look noisy or chaotic. In all the examples presented in this section, we use this anti-aliasing method.
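The block-averaging step of supersampling is straightforward; the sketch below downsamples a greyscale image rendered at f times the target resolution (for RGB images the same averaging is applied per channel). The flat row-major buffer layout is an assumption of this sketch.

```cpp
#include <cstddef>
#include <vector>

// Supersampling anti-aliasing: average every f x f block of the high-resolution
// image (width W, height H, row-major greyscale values) into one output pixel.
std::vector<double> downsample(const std::vector<double>& hi, int W, int H, int f) {
    const int w = W / f, h = H / f;
    std::vector<double> lo(static_cast<std::size_t>(w) * h, 0.0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double sum = 0.0;
            for (int dy = 0; dy < f; ++dy)
                for (int dx = 0; dx < f; ++dx)
                    sum += hi[static_cast<std::size_t>(y * f + dy) * W + (x * f + dx)];
            lo[static_cast<std::size_t>(y) * w + x] = sum / (f * f);
        }
    return lo;
}
```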

Fig. 23

Polynomiographs of \(p(z) = z^5 + z\) generated without anti-aliasing (a) and with anti-aliasing for two different magnification factors: (b) 2, (c) 4

The other technical subject that we need to consider when we want to use the polynomiographs generated by the proposed M-RNM is the choice of a proper colour map. In Sect. 4, in the polynomiographs with iteration colouring, we used only one colour map. However, when we think about artistic applications, we can use various colour maps, even for the same polynomiograph, to obtain very interesting results. In Fig. 24, we see an example of the same polynomiograph (\(p(z) = z^3 - 1\), \(\alpha = 17\), \(M = 200\), \(A = [-3, 3]^2\) and \(\varepsilon = 0.001\)) coloured with various colour maps. From the images, we can see that by a proper choice of the colour map we can, for example, emphasise different parts of the same pattern (e.g. see Fig. 24b, e), and show the banding effect or hide it (e.g. see Fig. 24a, f). Moreover, proper colours can create depth, movement, mood and harmony in an image [29, 32]. Therefore, in all the examples presented in this section, we use various colour maps, which greatly increase the polynomiographs' artistic value.

Fig. 24 Examples of the same polynomiograph coloured with various colour maps. The colour map is presented below the polynomiograph. (Color figure online)

Fig. 25 Dynamics polynomiographs for \(p(z) = z^3 - 1\) generated with the M-RNM and various values of \(\alpha \)

The first sequence that we consider is very simple and is based on switching cyclically among several constant values. Let \(m \in \mathbb {N}\). Then, we define \(\alpha _i\) in the following way:

$$\begin{aligned} \alpha _i = {\left\{ \begin{array}{ll} q_0, &{} \text {if } i \mod m = 0, \\ q_1, &{} \text {if } i \mod m = 1, \\ \ldots \\ q_{m-1}, &{} \text {if } i \mod m = m-1, \end{array}\right. } \end{aligned}$$
(20)

where \(q_0, q_1, \ldots , q_{m-1} \in \mathbb {R}\).
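
A minimal sketch of sequence (20), assuming the iteration index i starts at 0:

```python
from typing import Sequence

def alpha_switching(i: int, q: Sequence[float]) -> float:
    """Sequence (20): cycle through the constants q_0, q_1, ..., q_{m-1}."""
    return q[i % len(q)]

# For instance, the combination used below for Fig. 26a:
# alpha_i = alpha_switching(i, [-4.1, 19.0, 14.0])
```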

As an example of the use of (20), let us take \(m = 3\) and the three constant values \(-4.1\), 14.0 and 19.0. As we mentioned in Sect. 3, a negative value of \(\alpha \) causes the method to move the point from the previous iteration in the direction opposite to the one that reduces the modulus. If we were interested in convergence to the roots, this would be undesirable; however, in this section we are interested in obtaining artistic patterns, so such values are permissible. In our example, we generate dynamics polynomiographs for \(p(z) = z^3 - 1\), \(A = [-3.5, 3.5]^2\), \(M = 200\) and \(\varepsilon = 0.001\). First, let us see how the polynomiographs look for each of the three values of \(\alpha \) used on its own; these polynomiographs are presented in Fig. 25. From these three images, we can see that for \(\alpha = -4.1\) and 19.0 the convergence is poor and the patterns are not attractive. In the case of \(\alpha = 14.0\), the situation is better, but the pattern is still not attractive from the artistic point of view.

Now, let us use sequence (20) with the three values \(-4.1\), 14.0, 19.0 in various combinations:

(a) \(q_0 = -4.1\), \(q_1 = 19.0\), \(q_2 = 14.0\),
(b) \(q_0 = 19.0\), \(q_1 = -4.1\), \(q_2 = 14.0\),
(c) \(q_0 = -4.1\), \(q_1 = 14.0\), \(q_2 = 19.0\),
(d) \(q_0 = 14.0\), \(q_1 = -4.1\), \(q_2 = 19.0\),
(e) \(q_0 = 19.0\), \(q_1 = 14.0\), \(q_2 = -4.1\),
(f) \(q_0 = 14.0\), \(q_1 = 19.0\), \(q_2 = -4.1\).

In Fig. 26, we see the obtained polynomiographs. From these polynomiographs, we can observe that although each of the used parameter values on its own led to an unattractive pattern, their combinations in (20) allow us to obtain a variety of interesting patterns. Each of the obtained patterns has three visible braids, but they differ significantly in their details.

Fig. 26 Dynamics polynomiographs for \(p(z) = z^3 - 1\) generated with the M-RNM, where \(\alpha _i\) is given by (20)

Another way of defining \(\alpha _i\) for artistic applications is the use of linear interpolation between two given values, i.e.

$$\begin{aligned} \alpha _i = q_0 \left( 1 - \frac{i}{M} \right) + q_1 \frac{i}{M}, \end{aligned}$$
(21)

where \(q_0, q_1 \in \mathbb {R}\) and M is the maximum number of iterations.
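
A minimal sketch of sequence (21):

```python
def alpha_linear(i: int, q0: float, q1: float, max_iterations: int) -> float:
    """Sequence (21): interpolate linearly from q0 (at i = 0) towards q1 (at i = M)."""
    t = i / max_iterations
    return q0 * (1.0 - t) + q1 * t

# Example from the text: alpha_linear(i, -17.0, 30.0, 50) corresponds to Fig. 28a.
```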

Let us consider two values, \(-17.0\) and 30.0, and the following parameters for the polynomiographs: \(p(z) = z^4 + z^2 - 1\), \(A = [-3, 3]^2\), \(M = 50\) and \(\varepsilon = 0.1\). Similarly to the previous example, let us start with the polynomiographs generated for the two considered values, see Fig. 27. These two polynomiographs present two completely different patterns. In Fig. 27a, we see a very simple and small pattern in the centre of the polynomiograph, whereas in Fig. 27b the pattern occupies the whole area and is rich in detail. When we define \(\alpha _i\) by the linear interpolation (21) of these two values, (a) \(q_0 = -17.0\), \(q_1 = 30.0\), (b) \(q_0 = 30.0\), \(q_1 = -17.0\), we obtain the polynomiographs presented in Fig. 28. Both polynomiographs differ from the ones presented in Fig. 27. However, we can notice that some characteristic structures were inherited from the original patterns, i.e. in the pattern from Fig. 28a we see the small pattern from Fig. 27a, and the overall shape of the pattern from Fig. 28b is similar to that of the pattern from Fig. 27b.

Fig. 27 Dynamics polynomiographs for \(p(z) = z^4 + z^2 - 1\) generated with the M-RNM and various values of \(\alpha \)

Fig. 28 Dynamics polynomiographs for \(p(z) = z^4 + z^2 - 1\) generated with the M-RNM, where \(\alpha _i\) is given by (21)

The linear interpolation applied in (21) can be used in many different ways to define \(\alpha _i\). For instance, notice that when we generate a polynomiograph, the \(\frac{i}{M}\) term in (21) belongs to [0, 1]. We can divide [0, 1] into non-overlapping intervals and, on each of them, use linear interpolation with different values of \(q_0\) and \(q_1\). Based on the parameters used in the previous example, we present some polynomiographs that use this approach. Let us consider two sequences of \(\alpha _i\) defined in the following way:

$$\begin{aligned} \alpha _i = {\left\{ \begin{array}{ll} -17.0 \left( 1 - \frac{i}{M} \right) + 30.0 \frac{i}{M}, &{} \text {if } \frac{i}{M} < T, \\ 30.0 \left( 1 - \frac{i}{M} \right) - 17.0 \frac{i}{M}, &{} \text {if } \frac{i}{M} \ge T, \end{array}\right. } \end{aligned}$$
(22)
$$\begin{aligned} \alpha _i = {\left\{ \begin{array}{ll} 30.0 \left( 1 - \frac{i}{M} \right) - 17.0 \frac{i}{M}, &{} \text {if } \frac{i}{M} < T, \\ -17.0 \left( 1 - \frac{i}{M} \right) + 30.0 \frac{i}{M}, &{} \text {if } \frac{i}{M} \ge T, \end{array}\right. } \end{aligned}$$
(23)

where \(T \in [0, 1]\) is a threshold value that divides [0, 1] into two non-overlapping intervals. These two sequences differ only in the order in which the two interpolations are applied. Figures 29 and 30 present the polynomiographs obtained with these two sequences and varying values of T. From these polynomiographs, we can see that by changing the division of [0, 1] via the value of T, we are able to obtain a variety of interesting patterns that differ from the ones generated in Figs. 27 and 28.
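
A minimal sketch covering both (22) and (23); note that each branch interpolates over the full range \(\frac{i}{M}\) and only the pair \((q_0, q_1)\) changes at the threshold:

```python
def alpha_threshold(i: int, max_iterations: int, T: float,
                    below=(-17.0, 30.0), above=(30.0, -17.0)) -> float:
    """Sequences (22) and (23): switch the interpolation endpoints at the threshold T.

    With the default pairs this corresponds to (22); swapping `below` and `above`
    gives (23). Both branches use the full-range parameter t = i / M.
    """
    t = i / max_iterations
    q0, q1 = below if t < T else above
    return q0 * (1.0 - t) + q1 * t
```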

Fig. 29 Dynamics polynomiographs for \(p(z) = z^4 + z^2 - 1\) generated with the M-RNM, where \(\alpha _i\) is given by (22) and T takes various values

Fig. 30 Dynamics polynomiographs for \(p(z) = z^4 + z^2 - 1\) generated with the M-RNM, where \(\alpha _i\) is given by (23) and T takes various values

Recently, Bisheh–Niasar and Gdawiec [7] studied the use of periodic parameters in the S-iteration for the Bisheh–Niasar–Saadatmandi root-finding method. For instance, they used simple trigonometric functions to define the parameters and, with this approach, obtained some interesting patterns. In the case of the Mann iteration considered in this paper, we can also use trigonometric functions in a variety of ways to define \(\alpha _i\). In our example, we generated polynomiographs for the following parameters: \(p(z) = z^4 + 4\), \(A = [-3, 3]^2\), \(M = 200\) and \(\varepsilon = 0.001\). The only parameters that vary among the polynomiographs are the colour map and the Mann iteration parameter \(\alpha _i\). Figure 31 presents the polynomiographs for the following sequences of \(\alpha _i\) (a short sketch of two of these sequences as functions of i is given after the list):

(a) \(\alpha _i = 10 + 15 \cos ( 0.5 i + 3.1415 / 2 )\),
(b) \(\alpha _i = 10 + 4 \sin ( i ) \tan ( 7.9 i )\),
(c) \(\alpha _i = 15 + 31 \sin ( i ) \cos ( 5 i )\),
(d) \(\alpha _i = 16 + 10 \sin ( i ) \tan ( 19 i ) - 6 \cos ( i )\),
(e) \(\alpha _i = 24 - \frac{4}{4.5}i + 15 \sin ( i )\),
(f) \(\alpha _i = 69 - \frac{34}{4.5}i + 25 \cos ( 2i )\).
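
For completeness, a minimal sketch of two of these sequences written as Python functions of the iteration index i (the remaining ones are analogous):

```python
import math

def alpha_a(i: int) -> float:
    """Sequence (a): alpha_i = 10 + 15 cos(0.5 i + 3.1415 / 2)."""
    return 10.0 + 15.0 * math.cos(0.5 * i + 3.1415 / 2.0)

def alpha_e(i: int) -> float:
    """Sequence (e): alpha_i = 24 - (4 / 4.5) i + 15 sin(i)."""
    return 24.0 - (4.0 / 4.5) * i + 15.0 * math.sin(i)
```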

Fig. 31 Dynamics polynomiographs for \(p(z) = z^4 + 4\) generated with the M-RNM, where \(\alpha _i\) is defined by using trigonometric functions

When we look at the polynomiographs in Fig. 31, we can see that the patterns differ greatly from one another, although they were generated from the same polynomial. They contain many fine details, which makes these patterns very attractive from the artistic point of view.

In every presented example, we generated a single pattern, which can be used to create paintings, patterns on t-shirts, mugs, etc. However, when we look closely at these patterns, we can notice that, with a proper selection of the area A, they can serve as templates for wallpaper or frieze symmetry patterns. Figure 32 presents examples of wallpaper symmetry patterns obtained from templates generated by polynomiographs with the following parameters:

(a) \(p(z) = z^4 + 4\), \(A = [-3, 3]^2\), \(M = 200\), \(\varepsilon = 0.001\) and \(\alpha _i = 24 - \frac{4}{4.5}i + 15 \sin ( i )\) (see Fig. 31e),
(b) \(p(z) = z^4 - 1\), \(A = [-3, 3]^2\), \(M = 250\), \(\varepsilon = 0.001\) and \(\alpha _i = 37.75\) (see Fig. 12k),
(c) \(p(z) = z^3 - 1\), \(A = [-3, 3]^2\), \(M = 200\), \(\varepsilon = 0.001\) and \(\alpha = 17\) (see Fig. 24c),
(d) \(p(z) = z^6 - 1\), \(A = [-4.1, 4.1]^2\), \(M = 60\), \(\varepsilon = 0.1\) and \(\alpha _i = 25 + 30 \sin ( i ) \tan ( 40 i )\) (see Fig. 33a),
(e) \(p(z) = (4 + 4\mathbf {i}) z^4 + 8\mathbf {i} z^2 + 4\), \(A = [-3, 3]^2\), \(M = 200\), \(\varepsilon = 0.001\) and \(\alpha _i = 40 - \frac{13.5}{4.5} i\) (see Fig. 33b),
(f) \(p(z) = z^2 + 0.4375 + 1.5 \mathbf {i}\), \(A = [-3.7, 3.7]^2\), \(M = 200\), \(\varepsilon = 0.001\) and \(\alpha _i = 17.6 + 18.5 \sin ( 0.2 i ) \cos ( 10 i )\) (see Fig. 33c).

When we look at the templates in Figs. 31e and 12k, we see that they have fourfold symmetry and four axes of symmetry (two diagonal, one horizontal and one vertical). Thus, using this kind of template, we can create a wallpaper pattern of p4m symmetry (Fig. 32a, b) by repeating the template on a square lattice. The template from Fig. 24c, when simply repeated on a square lattice, introduces no mirror or glide symmetry. However, when we add a mirror reflection in the horizontal direction between two adjacent cells of the square lattice, we obtain a pmm symmetry pattern (Fig. 32c). The template in Fig. 33a has twofold symmetry and two axes of symmetry (one horizontal and one vertical). This type of symmetry will not give p4m symmetry when the template is repeated on a square lattice; instead, we obtain a pmm symmetry pattern (Fig. 32d). Twofold symmetry is also visible in the templates from Fig. 33b, c. These templates lack the horizontal and vertical mirror symmetries present in the template from Fig. 33a, so we cannot repeat them on a square lattice to obtain pmm symmetry. However, we can add mirror reflections in the horizontal and vertical directions between adjacent cells of the square lattice to obtain cmm symmetry patterns (Fig. 32e, f).
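
As an illustration of these tiling constructions, a minimal NumPy sketch of ours, assuming the template is stored as an (H, W, 3) RGB array; which wallpaper group results (p4m, pmm or cmm) depends on the symmetries already present in the template, as discussed above:

```python
import numpy as np

def plain_tile(template: np.ndarray, nx: int = 3, ny: int = 3) -> np.ndarray:
    """Repeat the template on a square lattice without additional reflections
    (sufficient for a p4m pattern when the template has fourfold symmetry)."""
    return np.tile(template, (ny, nx, 1))

def mirror_tile(template: np.ndarray, nx: int = 3, ny: int = 3) -> np.ndarray:
    """Tile with mirror reflections between adjacent cells: the 2 x 2 fundamental
    block consists of the template and its horizontally, vertically and doubly
    reflected copies, and this block is repeated nx x ny times."""
    top = np.concatenate([template, np.fliplr(template)], axis=1)
    block = np.concatenate([top, np.flipud(top)], axis=0)
    return np.tile(block, (ny, nx, 1))
```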

Fig. 32 Wallpaper symmetry patterns obtained from the templates generated by the M-RNM

Fig. 33 Polynomiographs generated by the M-RNM that can serve as templates for creating wallpaper or frieze symmetry patterns

Next, Fig. 34 presents examples of frieze symmetry patterns obtained from templates generated by polynomiographs with the following parameters:

(a) \(p(z) = z^3 - 1\), \(A = [-3.5, 3.5]^2\), \(M = 200\), \(\varepsilon = 0.001\) and \(\alpha _i\) given by (20) with \(m = 3\) and \(q_0 = -4.1\), \(q_1 = 19.0\), \(q_2 = 14.0\) (see Fig. 26a),
(b) \(p(z) = z^7 - 6 z^5 - 31 z^3 + 36 z\), \(A = [-3.7, 3.7]^2\), \(M = 200\), \(\varepsilon = 0.001\) and \(\alpha _i = 30 + 10 \sin ( i ) \tan ( 8 i )\) (see Fig. 33d),
(c) \(p(z) = z^4 + z^2 - 1\), \(A = [-3, 3]^2\), \(M = 50\), \(\varepsilon = 0.1\) and \(\alpha _i\) given by (22) with \(T = 0.789\) (see Fig. 29a).

Fig. 34 Frieze symmetry patterns obtained from the templates generated by the M-RNM

The template from Fig. 26a cannot be used directly to obtain a frieze symmetry pattern, because, after repeating it in the horizontal direction, the resulting strip has no symmetry beyond the translation itself (no mirror or glide symmetry). Nevertheless, if we rotate it by 90 degrees and repeat the rotated template in the horizontal direction, we obtain a pm1 symmetry pattern (Fig. 34a). When we look at the templates in Figs. 29a and 33d, we see that they have twofold symmetry and two axes of symmetry (one horizontal and one vertical). By repeating such templates in the horizontal direction, we obtain patterns of pmm symmetry (Fig. 34b, c).

6 Conclusions

The RNM is a stable global root-finding process that needs a large number of iterations to reach a root or a critical point of a polynomial. Contrary to the classical Newton's method, no chaos and no fractal structures are observable on the boundaries of the basins of attraction when solving polynomial equations in the complex plane. In this paper, we modified the RNM by replacing the standard Picard iteration with the Mann iteration.

The proposed M-RNM root-finding algorithm significantly improves on the RNM: it is much faster, and its speed of convergence depends on the parameter \(\alpha \) and the considered polynomial. The best value of \(\alpha \), which assures good convergence and performance, can be determined experimentally for a given polynomial. The M-RNM does not lose the good properties of the RNM, such as global convergence, stability and robustness, even for relatively large deviations from the best value of \(\alpha \). As \(\alpha \) increases and moves away from the best value, the patterns become increasingly dynamic and eventually more and more random and chaotic. Also, for the proposed variable sequences of \(\alpha _i\), we observe good behaviour of the M-RNM, with better results than the RNM. The appearance of the polynomiographs generated with the M-RNM can be easily modified by using various colour maps. Moreover, the artistic appeal of the images can be further enhanced with the help of specially selected sequences of \(\alpha _i\), as in Sect. 5. Furthermore, these images have been successfully used as templates for wallpaper and frieze symmetry patterns, giving nice-looking examples of applied art.

Encouraged by the speed improvement obtained by replacing the Picard iteration with the Mann iteration, we plan to check whether other types of iterations known from the literature and collected, e.g. in [13], can lead to an even greater improvement of the RNM. It also seems likely that other intriguing patterns of aesthetic value can be found by using other types of iterations. Moreover, one can try to embed a suitable template created with the M-RNM into the fundamental region of the spiral patterns introduced in [25]. These tasks will be the subject of our further research.