1 Introduction

To solve the nonlinear equation

$$ f(x)=0 $$
(1)

is one of the oldest problems of science in general and of mathematics in particular. Such nonlinear equations have diverse applications in many areas of science and engineering. To find the roots of (1), we generally turn to iterative schemes, which can be classified as methods that approximate a single root and methods that approximate all roots of (1). A further class of iterative methods in the literature solves systems of nonlinear equations. In this article, we work on all three types of iterative methods. Many iterative methods of different convergence orders for finding roots of nonlinear equations and their systems already exist in the literature (see [1–12]). The aforementioned methods approximate one root at a time, but mathematicians are also interested in finding all roots of (1) simultaneously. Simultaneous iterative methods are popular because they have a wider region of convergence, are more stable than single root finding methods, and can also be implemented for parallel computing. More details on the simultaneous determination of all roots can be found in [13–25] and the references cited therein.

The main aim of this paper is to construct a family of optimal third order iterative methods and then to convert it into simultaneous iterative methods for finding all distinct as well as multiple roots of nonlinear equation (1). We further extend this family of iterative methods to the solution of systems of nonlinear equations. Basins of attraction of the single root finding methods are also given to show their convergence behavior.

2 Constructions of a family of methods for single root and convergence analysis

Here, we first recall some well-known existing third order iterative methods.

Singh et al. [4] presented the following optimal third order method (abbreviated as E1):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}-\frac{2}{3} ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=x^{(k)}- \frac{4f(x^{(k)})}{f^{\prime }(x^{(k)})+3f^{\prime }(y^{(k)})}.\end{cases} $$

Huen et al. [26] gave the third order optimal method as follows (abbreviated as E2):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}-\frac{2}{3} ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=x^{(k)}-\frac{f(x^{(k)})}{4} ( \frac{1}{f^{\prime }(x^{(k)})}+\frac{3}{f^{\prime }(y^{(k)})} ) .\end{cases} $$

Amat et al. [5] gave the following third order optimal method in 2007 (abbreviated as E3):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}- ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=y^{(k)}- ( \frac{f(y^{(k)})}{f^{\prime }(x^{(k)})} ) .\end{cases} $$

Chun et al. [27] gave the third order optimal method as follows (abbreviated as E4):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}- ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=x^{(k)}-\frac{1}{2} ( 3- \frac{f^{\prime }(y^{(k)})}{f^{\prime }(x^{(k)})} ) ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) .\end{cases} $$

Kou et al. [28] gave the third order optimal method as follows (abbreviated as E5):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}+ ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=y^{(k)}- ( \frac{f(y^{(k)})}{f^{\prime }(x^{(k)})} ) .\end{cases} $$

Chun et al. [27] gave the following third order optimal method (abbreviated as E6):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}- ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=x^{(k)}- \frac{f(x^{(k)})(2+3(f^{\prime }(x^{(k)}))^{2}-f^{\prime }(x^{(k)})f^{\prime }(y^{(k)}))}{f^{\prime }(x^{(k)})+2(f^{\prime }(x^{(k)}))^{3}+f^{\prime }(y^{(k)})}.\end{cases} $$

Here, we propose the following families of iterative methods (abbreviated as Q1):

$$ \textstyle\begin{cases} y^{(k)}=x^{(k)}- ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) , \\ z^{(k)}=y^{(k)}- ( \frac{f^{\prime }(x^{(k)})-f^{\prime }(y^{(k)})}{\alpha f^{\prime }(y^{(k)})+(2-\alpha )f^{\prime }(x^{(k)})} ) ( \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})} ) ,\end{cases} $$
(2)

where \(\alpha \in \mathbb{R} \). For the iteration scheme (2) we have the following convergence theorem, whose error relation was obtained using the CAS Maple 18.
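
As a concrete illustration, the family (2) translates directly into code. The sketch below is ours: the function name `q1`, the test equation \(x^{3}-2=0\), and the choice \(\alpha =1\) are illustrative assumptions, not taken from the paper.

```python
def q1(f, df, x0, alpha=1.0, tol=1e-12, max_iter=50):
    """Family (2): Newton predictor followed by the weighted corrector step."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                       # y = x - f(x)/f'(x)
        dfy = df(y)
        # z = y - (f'(x)-f'(y)) / (a f'(y) + (2-a) f'(x)) * f(x)/f'(x)
        x_new = y - (dfx - dfy) / (alpha * dfy + (2 - alpha) * dfx) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# illustrative test: the real root of x^3 - 2 = 0
root = q1(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.5)
```

Any real α gives third order; it only changes the asymptotic error constant in (3).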

Theorem 1

Let \(\zeta \in I\) be a simple root of a sufficiently differentiable function \(f:I\subseteq \mathbb{R} \rightarrow \mathbb{R} \) on an open interval I. If \(x_{0}\) is sufficiently close to ζ, then the convergence order of the family of iterative methods (2) is three, and the error equation is given by

$$ e^{(k+1)}= \biggl(2c_{2}^{2}+\frac{1}{2}c_{3}- \alpha c_{2}^{2} \biggr) \bigl(e^{(k)} \bigr)^{3}+O \bigl( \bigl(e^{(k)} \bigr)^{4} \bigr), $$
(3)

where \(c_{m}=\frac{f^{(m)}(\zeta )}{m!f^{\prime }(\zeta )}\), \(m\geq 2\).

Proof

Let ζ be a simple root of f and \(x^{(k)}=\zeta +e^{(k)}\). By Taylor’s series expansion of \(f(x^{(k)})\) around \(x^{(k)}=\zeta \), taking \(f(\zeta )=0\), we get

$$ f \bigl(x^{(k)} \bigr)=f^{{\prime }}(\zeta ) \bigl(e^{(k)}+c_{2} \bigl(e^{(k)} \bigr)^{2}+c_{3} \bigl(e^{(k)} \bigr)^{3}+c_{4} \bigl(e^{(k)} \bigr)^{4}+O \bigl(e^{(k)} \bigr)^{5} \bigr) $$
(4)

and

$$ f^{\prime } \bigl(x^{(k)} \bigr)=f^{{\prime }}(\zeta ) \bigl(1+2c_{2} \bigl(e^{(k)} \bigr)+3c_{3} \bigl(e^{(k)} \bigr)^{2}+4c_{4} \bigl(e^{(k)} \bigr)^{3}+O \bigl( \bigl(e^{(k)} \bigr)^{4} \bigr) \bigr). $$
(5)

Dividing (4) by (5), we have

$$ \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})}=e^{(k)}-c_{2} \bigl(e^{(k)} \bigr)^{2}+ \bigl(2c_{2}^{2}-2c_{3} \bigr) \bigl(e^{(k)} \bigr)^{3}+O \bigl( \bigl(e^{(k)} \bigr)^{4} \bigr) $$
(6)

and

$$\begin{aligned}& y^{(k)}-\zeta =c_{2} \bigl(e^{(k)} \bigr)^{2}+ \bigl(-2c_{2}^{2}+2c_{3} \bigr) \bigl(e^{(k)} \bigr)^{3}+\cdots, \end{aligned}$$
(7)
$$\begin{aligned}& f^{\prime } \bigl(y^{(k)} \bigr)=f^{\prime }(\zeta ) \bigl(1+2c_{2}^{2} \bigl(e^{(k)} \bigr)^{2}+2c_{2} \bigl(-2c_{2}^{2}+2c_{3} \bigr) \bigl(e^{(k)} \bigr)^{3}+\cdots \bigr). \end{aligned}$$
(8)

We have

$$ \frac{f^{\prime }(x^{(k)})-f^{\prime }(y^{(k)})}{\alpha f^{\prime }(y^{(k)})+(2-\alpha )f^{\prime }(x^{(k)})}\cdot \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})}=c_{2} \bigl(e^{(k)} \bigr)^{2}- \biggl(4c_{2}^{2}-\frac{3}{2}c_{3}- \alpha c_{2}^{2} \biggr) \bigl(e^{(k)} \bigr)^{3}+\cdots . $$
(9)

From the second step of (2), we have

$$\begin{aligned}& e^{(k+1)}=y^{(k)}-\zeta - \frac{f^{\prime }(x^{(k)})-f^{\prime }(y^{(k)})}{\alpha f^{\prime }(y^{(k)})+(2-\alpha )f^{\prime }(x^{(k)})} \frac{f(x^{(k)})}{f^{\prime }(x^{(k)})}, \end{aligned}$$
(10)
$$\begin{aligned}& e^{(k+1)}= \biggl(2c_{2}^{2}+\frac{1}{2}c_{3}- \alpha c_{2}^{2} \biggr) \bigl(e^{(k)} \bigr)^{3}+O \bigl( \bigl(e^{(k)} \bigr)^{4} \bigr). \end{aligned}$$
(11)

Hence this proves third order convergence. □

3 Generalizations to simultaneous methods

Suppose that nonlinear equation (1) has n roots. Then \(f(x)\) and \(f^{\prime }(x)\) can be approximated as

$$ f(x)=\prod_{j=1}^{n} ( x-x_{j} ) \quad \text{and}\quad f^{ \prime }(x)=\sum_{k=1}^{n} \underset{\underset{j=1}{j\neq k}}{\overset{n}{\prod }} ( x-x_{j} ) . $$
(12)

This implies

$$ \frac{f^{\prime }(x)}{f(x)}=\sum_{j=1}^{n} \biggl( \frac{1}{(x-x_{j})} \biggr) ,\qquad \text{so that}\qquad x-x_{i}= \frac{1}{\frac{f^{\prime }(x)}{f(x)}-\sum_{\overset{j=1}{j\neq i}}^{n} ( \frac{1}{(x-x_{j})} ) }. $$
(13)

This gives the Aberth–Ehrlich method [29]

$$ y_{i}^{(k+1)}=x_{i}^{(k)}- \frac{1}{\frac{1}{N(x_{i}^{(k)})}-\sum_{\overset{j=1}{j\neq i}}^{n} ( \frac{1}{(x_{i}^{(k)}-x_{j}^{(k)})} ) }, $$
(14)

where \(N(x_{i}^{(k)})=\frac{f(x_{i}^{(k)})}{f^{\prime }(x_{i}^{(k)})}\) and \(i,j=1,2,3,\ldots,n\). Now from (13), an approximation of \(\frac{f(x_{i}^{(k)})}{f^{\prime }(x_{i}^{(k)})}\) is formed by replacing \(x_{j}^{(k)}\) with \(z_{j}^{(k)}\) as follows:

$$ \frac{f(x_{i}^{(k)})}{f^{\prime }(x_{i}^{(k)})}\approx \frac{1}{\frac{1}{N(x_{i}^{(k)})}-\sum_{\overset{j=1}{j\neq i}}^{n} ( \frac{1}{(x_{i}^{(k)}-z_{j}^{(k)})} ) }. $$
(15)

Using (15) in (14), we have

$$ y_{i}^{(k+1)}=x_{i}^{(k)}- \frac{1}{\frac{1}{N(x_{i}^{(k)})}-\sum_{\overset{j=1}{j\neq i}}^{n} ( \frac{1}{(x_{i}^{(k)}-z_{j}^{(k)})} ) }. $$
(16)

In case of multiple roots,

$$ y_{i}^{(k+1)}=x_{i}^{(k)}- \frac{\sigma _{i}}{\frac{1}{N(x_{i}^{(k)})}-\sum_{\overset{j=1}{j\neq i}}^{n} ( \frac{\sigma _{j}}{(x_{i}^{(k)}-z_{j}^{(k)})} ) }\quad (i,j=1,2,3,\ldots,n), $$
(17)

where \(z_{j}^{(k)}=y_{j}^{(k)}- ( \frac{f^{\prime }(x_{j}^{(k)})-f^{\prime }(y_{j}^{(k)})}{\alpha f^{\prime }(y_{j}^{(k)})+(2-\alpha )f^{\prime }(x_{j}^{(k)})} ) ( \frac{f(x_{j}^{(k)})}{f^{\prime }(x_{j}^{(k)})} ) \) and \(y_{j}^{(k)}=x_{j}^{(k)}- ( \frac{f(x_{j}^{(k)})}{f^{\prime }(x_{j}^{(k)})} ) \). Thus (17) gives a new family of simultaneous iterative methods, abbreviated as SM1, for extracting all distinct as well as multiple roots of nonlinear equation (1). For comparison, Zhang et al. [30] presented the following fifth order simultaneous methods:

$$ x_{i}^{(k+1)}=x_{i}^{(k)}- \frac{2w_{i}(x_{i}^{(k)})}{1+\sum_{\underset{j=1}{j\neq i}}^{n}\frac{w_{j}(x_{j}^{(k)})}{x_{i}^{(k)}-x_{j}^{(k)}}+\sqrt{\textstyle\begin{array}{c} ( 1+\sum_{\underset{j=1}{j\neq i}}^{n}\frac{w_{j}(x_{j}^{(k)})}{x_{i}^{(k)}-x_{j}^{(k)}} ) ^{2}+4w_{i}(x_{i}^{(k)}) \\ \sum_{\underset{j=1}{j\neq i}}^{n}\frac{w_{j}(x_{i}^{(k)})}{ ( x_{i}^{(k)}-x_{j}^{(k)} ) ( x_{i}^{(k)}-w_{i}(x_{i}^{(k)})-x_{j}^{(k)} ) }\end{array}\displaystyle }}. $$
(18)
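
Returning to the proposed scheme (16), one sweep is easy to sketch in code for distinct roots (all \(\sigma _{i}=1\)). Everything below is our illustrative setup, not from the paper: the test polynomial \(x^{3}-x\), the starting values, and the small safeguard that freezes components whose residual is already at machine level.

```python
import numpy as np

def sm1_step(f, df, x, alpha=1.0):
    """One sweep of (16): corrections z_j from method (2), then update each x_i."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    dfy = df(y)
    z = y - (dfx - dfy) / (alpha * dfy + (2 - alpha) * dfx) * fx / dfx
    x_new = np.empty_like(x)
    for i in range(len(x)):
        if abs(fx[i]) < 1e-14:          # x_i already at a root; avoid 0-division
            x_new[i] = x[i]
            continue
        s = sum(1.0 / (x[i] - z[j]) for j in range(len(x)) if j != i)
        x_new[i] = x[i] - 1.0 / (dfx[i] / fx[i] - s)  # 1/N(x_i) = f'(x_i)/f(x_i)
    return x_new

# illustrative test: all roots of p(x) = x^3 - x, i.e. -1, 0, 1
f = lambda x: x**3 - x
df = lambda x: 3 * x**2 - 1
x = np.array([-1.3 + 0.1j, 0.2 - 0.1j, 1.4 + 0.2j])   # distinct initial guesses
for _ in range(10):
    x = sm1_step(f, df, x)
```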

3.1 Convergence analysis

In this section, the convergence analysis of the family of simultaneous methods (17) is given in the form of the following theorem. The convergence of the distinct-root method (16) follows from that of (17) by taking all multiplicities \(\sigma _{i}=1\).

Theorem 2


Let \(\zeta _{{1}},\ldots,\zeta _{n}\) be the n roots, with multiplicities \(\sigma _{{1}},\ldots,\sigma _{n}\), of nonlinear equation (1). If the initial approximations \(x_{1}^{(0)},\ldots,x_{n}^{(0)}\) are sufficiently close to the respective roots, then the order of convergence of method SM1 is five.

Proof

Let

$$\begin{aligned}& \epsilon _{i} = x_{i}^{(k)}-\zeta _{i} \quad \text{and } \end{aligned}$$
(19)
$$\begin{aligned}& \epsilon _{i}^{\prime } = y_{i}^{(k+1)}- \zeta _{i} \end{aligned}$$
(20)

be the errors in the approximations \(x_{i}^{(k)}\) and \(y_{i}^{(k+1)}\) respectively. Considering method SM1, we have

$$ y_{i}^{(k+1)}=x_{i}^{(k)}- \frac{\sigma _{i}}{\frac{1}{N(x_{i}^{(k)})}-\sum_{\underset{j=1}{j\neq i}}^{n} ( \frac{\sigma _{j}}{(x_{i}^{(k)}-z_{j}^{(k)})} ) }, $$
(21)

where

$$ N \bigl(x_{i}^{(k)} \bigr)= \biggl( \frac{f(x_{i}^{(k)})}{f^{\prime }(x_{i}^{(k)})} \biggr) . $$
(22)

Then, obviously, for distinct roots we have

$$ \frac{1}{N(x_{i}^{(k)})}= \biggl( \frac{f^{\prime }(x_{i}^{(k)})}{f(x_{i}^{(k)})} \biggr) =\sum _{j=1}^{n} \biggl( \frac{1}{(x_{i}^{(k)}-\zeta _{j})} \biggr) = \frac{1}{(x_{i}^{(k)}-\zeta _{i})}+\sum_{\underset{j=1}{j\neq i}}^{n} \biggl( \frac{1}{(x_{i}^{(k)}-\zeta _{j})} \biggr) . $$
(23)

Thus, for multiple roots, we have from (17)

$$\begin{aligned}& y_{i}^{(k+1)}=x_{i}^{(k)}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{(x_{i}^{(k)}-\zeta _{i})}+\sum_{\underset{j=1}{j\neq i}}^{n} ( \frac{\sigma _{j}}{(x_{i}^{(k)}-\zeta _{j})} ) -\sum_{\underset{j=1}{j\neq i}}^{n} ( \frac{\sigma _{j}}{(x_{i}^{(k)}-z_{j}^{(k)})} ) }, \end{aligned}$$
(24)
$$\begin{aligned}& y_{i}^{(k+1)}-\zeta _{i}=x_{i}^{(k)}- \zeta _{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{(x_{i}^{(k)}-\zeta _{i})}+\sum_{\underset{j=1}{j\neq i}}^{n} ( \frac{\sigma _{j}(x_{i}^{(k)}-z_{j}^{(k)}-x_{i}^{(k)}+\zeta _{j})}{(x_{i}^{(k)}-\zeta _{j})(x_{i}^{(k)}-z_{j}^{(k)})} ) }, \end{aligned}$$
(25)
$$\begin{aligned}& \epsilon _{i}^{\prime }=\epsilon _{i}- \frac{\sigma _{i}}{\frac{\sigma _{i}}{\epsilon _{i}}+\sum_{\underset{j=1}{j\neq i}}^{n} ( \frac{-\sigma _{j}(z_{j}^{(k)}-\zeta _{j})}{(x_{i}^{(k)}-\zeta _{j})(x_{i}^{(k)}-z_{j}^{(k)})} ) }, \end{aligned}$$
(26)
$$\begin{aligned}& \hphantom{\epsilon _{i}^{\prime }}=\epsilon _{i}- \frac{\sigma _{i}\epsilon _{i}}{\sigma _{i}+\epsilon _{i}\sum_{\underset{j=1}{j\neq i}}^{n} ( \frac{-\sigma _{j}(z_{j}^{(k)}-\zeta _{j})}{(x_{i}^{(k)}-\zeta _{j})(x_{i}^{(k)}-z_{j}^{(k)})} ) }, \end{aligned}$$
(27)
$$\begin{aligned}& \hphantom{\epsilon _{i}^{\prime }}=\epsilon _{i}- \frac{\sigma _{i}\epsilon _{i}}{\sigma _{i}+\epsilon _{i}\sum_{\underset{j=1}{j\neq i}}^{n} ( E_{i}\epsilon _{j}^{3} ) }, \end{aligned}$$
(28)

where \(z_{j}^{(k)}-\zeta _{j}=O(\epsilon _{j}^{3})\) by (3) and \(E_{i}= ( \frac{-\sigma _{j}}{(x_{i}^{(k)}-\zeta _{j})(x_{i}^{(k)}-z_{j}^{(k)})} ) \).

Thus,

$$ \epsilon _{i}^{{\prime }}= \frac{\epsilon _{i}^{2}\sum_{\underset{j=1}{j\neq i}}^{n} ( E_{i}\epsilon _{j}^{3} ) }{\sigma _{i}+\epsilon _{i}\sum_{\underset{j=1}{j\neq i}}^{n} ( E_{i}\epsilon _{j}^{3} ) }. $$
(29)

If the absolute values of all errors \(\epsilon _{j}\) (\(j=1,2,3,\ldots\)) are assumed to be of the same order, say \(\vert \epsilon _{j} \vert =O( \vert \epsilon \vert )\), then from (29) we have

$$ \epsilon _{i}^{{\prime }}=O \bigl(\epsilon ^{5} \bigr). $$
(30)

Hence the theorem. □

4 Extension to a system of nonlinear equations

In this work, we consider the following system of nonlinear equations:

$$ \mathbf{F(x)=}0, $$
(31)

where \(\mathbf{F(x)}=(f_{1}(x),f_{2}(x),\ldots,f_{n}(x))^{T}\) and the functions \(f_{1}(x),f_{2}(x),\ldots,f_{n}(x)\) are the coordinate functions of F [31].

There are many approaches to solving nonlinear system (31). One of the most famous iterative methods is the Newton–Raphson method for systems of nonlinear equations

$$ \mathbf{y}^{(k)}=\mathbf{x}^{(k)}-\mathbf{F}^{\prime } \bigl( \mathbf{x}^{(k)} \bigr)^{-1}\mathbf{F} \bigl( \mathbf{x}^{(k)} \bigr), $$

where

$$ \mathbf{F}(\mathbf{x})=\mathbf{F}(x_{1},\ldots,x_{n})= \begin{pmatrix} f_{1}(x_{1},\ldots,x_{n}) \\ f_{2}(x_{1},\ldots,x_{n}) \\ \vdots \\ f_{n}(x_{1},\ldots,x_{n}) \end{pmatrix} $$

and

$$ \mathbf{F}^{\prime }(\mathbf{x})= \begin{pmatrix} \frac{\partial f_{1}}{\partial x_{1}} & \frac{\partial f_{1}}{\partial x_{2}} & \cdots & \frac{\partial f_{1}}{\partial x_{n}} \\ \frac{\partial f_{2}}{\partial x_{1}} & \frac{\partial f_{2}}{\partial x_{2}} & \cdots & \frac{\partial f_{2}}{\partial x_{n}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_{n}}{\partial x_{1}} & \frac{\partial f_{n}}{\partial x_{2}} & \cdots & \frac{\partial f_{n}}{\partial x_{n}} \end{pmatrix} . $$
(32)
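
In practice the inverse \(\mathbf{F}^{\prime }(\mathbf{x}^{(k)})^{-1}\) is never formed explicitly; the Newton step is obtained from a linear solve. The sketch below is ours; as an illustrative test problem we use \(\mathbf{F}_{1}\) from Example 2 below, whose solution \((1,1)^{T}\) is easily verified by substitution.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson for systems: solve F'(x) d = F(x) at every step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.solve(J(x), F(x))    # F'(x)^{-1} F(x) without inverting
        x = x - d
        if np.linalg.norm(d) < tol:
            break
    return x

# F_1(X) from Example 2; both equations vanish at (1, 1)
F = lambda x: np.array([x[0]**2 - 10 * x[0] + x[1]**2 + 8,
                        x[0] * x[1]**2 + x[0] - 10 * x[1] + 8])
J = lambda x: np.array([[2 * x[0] - 10, 2 * x[1]],
                        [x[1]**2 + 1, 2 * x[0] * x[1] - 10]])
sol = newton_system(F, J, [0.6, 1.4])
```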

Here, we present some well-known third order iterative methods for solving the system of nonlinear equations.

Darvishi et al. [32] presented the following third order iterative method (abbreviated as EE1):

$$\begin{aligned}& \mathbf{y}^{(k)} = \mathbf{x}^{(k)}-\mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)^{-1}\mathbf{F} \bigl( \mathbf{x}^{(k)} \bigr), \\& \mathbf{z}^{(k)} = \mathbf{y}^{(k)}-\mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)^{-1}\mathbf{F} \bigl( \mathbf{y}^{(k)} \bigr). \end{aligned}$$

The trapezoidal Newton method [33] of third order was presented as follows (abbreviated as EE2):

$$\begin{aligned}& \mathbf{y}^{(k)} = \mathbf{x}^{(k)}-\mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)^{-1}\mathbf{F} \bigl( \mathbf{x}^{(k)} \bigr), \\& \mathbf{z}^{(k)} = \mathbf{x}^{(k)}-2 \bigl[ \mathbf{F}^{\prime } \bigl( \mathbf{x}^{(k)} \bigr)+ \mathbf{F}^{\prime } \bigl(\mathbf{y}^{(k)} \bigr) \bigr] ^{-1}\mathbf{F} \bigl(\mathbf{x}^{(k)}\mathbf{ \bigr).} \end{aligned}$$

Khirallah et al. [34] presented the following third order iterative method (abbreviated as EE3):

$$\begin{aligned}& \mathbf{y}^{(k)} = \mathbf{x}^{(k)}-\frac{2}{3} \mathbf{F}^{\prime } \bigl( \mathbf{x}^{(k)}\mathbf{ \bigr)}^{-1}\mathbf{F} \bigl(\mathbf{x}^{(k)} \bigr), \\& \mathbf{z}^{(k)} = \mathbf{x}^{(k)}- \biggl[ \mathbf{F}^{\prime } \bigl( \mathbf{x}^{(k)}\mathbf{ \bigr)}^{-1}+\frac{3}{2}\mathbf{F}^{\prime } \bigl( \mathbf{y}^{(k)} \bigr)^{-1} \biggr] \mathbf{F} \bigl(\mathbf{x}^{(k)} \bigr)+3 \bigl[ \mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)+ \mathbf{F}^{\prime } \bigl( \mathbf{y}^{(k)} \bigr) \bigr] ^{-1}\mathbf{F} \bigl( \mathbf{x}^{(k)} \bigr). \end{aligned}$$

Here, we extend the family of iterative methods (2) to the solution of systems of nonlinear equations:

$$\begin{aligned}& \mathbf{y}^{(k)} = \mathbf{x}^{(k)}-\mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)^{-1}\mathbf{F} \bigl( \mathbf{x}^{(k)} \bigr), \\& \mathbf{z}^{(k)} = \mathbf{y}^{(k)}- \bigl[ \bigl( \alpha \mathbf{F}^{\prime } \bigl(\mathbf{y}^{(k)}\bigr) +(2- \alpha ) \mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr) \bigr)^{-1} \bigl( \mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr)-\mathbf{F}^{\prime } \bigl( \mathbf{y}^{(k)} \bigr) \bigr) \bigr] \mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)}\mathbf{ \bigr)}^{-1} \mathbf{F} \bigl( \mathbf{x}^{(k)}\mathbf{ \bigr),} \end{aligned}$$
(33)

where \(\alpha \in \mathbb{R} \). We abbreviate this family of iterative methods for approximating roots of the system of nonlinear equations by QQ1.
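
One step of (33) can be sketched as follows. This is our illustration: the matrix inverses are replaced by linear solves, \(\alpha =1\) is chosen, the system \(\mathbf{F}_{2}\) of Example 2 serves as the test problem, and the starting point is taken near the solution \((0,0.5)^{T}\), which satisfies both equations of \(\mathbf{F}_{2}\) exactly.

```python
import numpy as np

def qq1_step(F, J, x, alpha=1.0):
    """One step of QQ1 (33); every matrix inverse becomes a linear solve."""
    Jx = J(x)
    newton = np.linalg.solve(Jx, F(x))                 # F'(x)^{-1} F(x)
    y = x - newton                                     # predictor
    Jy = J(y)
    M = alpha * Jy + (2 - alpha) * Jx                  # a F'(y) + (2-a) F'(x)
    return y - np.linalg.solve(M, (Jx - Jy) @ newton)  # corrector

# F_2(X) from Example 2; (0, 0.5) is an exact solution
F = lambda x: np.array([x[0]**2 - 2 * x[0] - x[1] + 0.5,
                        x[0]**2 + 4 * x[1]**2 - 1])
J = lambda x: np.array([[2 * x[0] - 2, -1.0],
                        [2 * x[0], 8 * x[1]]])
x = np.array([0.1, 0.4])                               # illustrative start
for _ in range(10):
    x = qq1_step(F, J, x)
```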

Theorem 3

Let the function \(\mathbf{F}:E\subseteq \mathbb{R} ^{n}\rightarrow \mathbb{R} ^{n}\) be sufficiently Fréchet differentiable on an open set E containing the root ζ of \(\mathbf{F}(\mathbf{x})=0\). If the initial estimate \(\mathbf{x}^{(0)}\) is close to ζ, then the convergence order of the method QQ1 is at least three for any \(\alpha \in \mathbb{R} \).

Proof

Let \(\mathbf{e}^{(k)}=\mathbf{x}^{(k)}-\boldsymbol{\zeta}\) and \(\widehat{\mathbf{e}}^{(k)}=\mathbf{z}^{(k)}-\boldsymbol{\zeta}\) be the errors at the predictor and corrector steps respectively. Developing the Taylor series of \(\mathbf{F(x}^{(k)})\) in the neighborhood of ζ and assuming that \(\mathbf{F}^{\prime }(\boldsymbol{\zeta})^{-1}\) exists, we write

$$ \mathbf{F}(\mathbf{x})=\mathbf{F} \bigl(\mathbf{x}^{(k)} \bigr)+\mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr) \bigl(\mathbf{x}-\mathbf{x}^{(k)} \bigr)+ \frac{1}{2!}\mathbf{F}^{{\prime \prime }} \bigl(\mathbf{x}^{(k)} \bigr) \bigl(\mathbf{x}-\mathbf{x}^{(k)} \bigr)^{2}+\cdots $$
(34)

and

$$\begin{aligned}& \mathbf{F(x)=0,} \end{aligned}$$
(35)
$$\begin{aligned}& \mathbf{F} \bigl(\mathbf{x}^{(k)} \bigr)=\mathbf{F}^{\prime }(\boldsymbol{\zeta}) \bigl\{ \mathbf{e}^{(k)}+ \mathbf{A}_{2} \bigl(\mathbf{e}^{(k)} \bigr)^{2}+ \mathbf{A}_{3} \bigl(\mathbf{e}^{(k)} \bigr)^{3}+\cdots +\mathbf{A}_{6} \bigl( \mathbf{e}^{(k)} \bigr)^{6} \bigr\} + \bigl\Vert \mathbf{O} \bigl(\mathbf{e}^{(k)} \bigr)^{7} \bigr\Vert , \end{aligned}$$
(36)

where

$$\begin{aligned}& \mathbf{A}_{m}=\frac{1}{m!} \bigl[\mathbf{F}^{\prime }(\boldsymbol{\zeta}) \bigr]^{-1}\mathbf{F}^{(m)}(\boldsymbol{\zeta}),\quad m=2,3,\ldots, \end{aligned}$$
(37)
$$\begin{aligned}& \bigl[ \mathbf{F}^{\prime } \bigl(\mathbf{x}^{(k)} \bigr) \bigr]^{-1} \mathbf{F} \bigl(\mathbf{x}^{(k)} \bigr)= \mathbf{e}^{(k)}-\mathbf{A}_{2} \bigl(\mathbf{e}^{(k)} \bigr)^{2}+ \bigl(2\mathbf{A}_{2}^{2}-2\mathbf{A}_{3} \bigr) \bigl(\mathbf{e}^{(k)} \bigr)^{3}+ \bigl\Vert \mathbf{O} \bigl(\mathbf{e}^{(k)} \bigr)^{4} \bigr\Vert . \end{aligned}$$
(38)

Expanding \(\mathbf{F}^{\prime }\mathbf{(y}^{(k)})\) about ζ and using (38), we obtain

$$\begin{aligned}& \mathbf{y}^{(k)}-\boldsymbol{\zeta}=\mathbf{A}_{2} \bigl(\mathbf{e}^{(k)} \bigr)^{2}+ \bigl(-2\mathbf{A}_{2}^{2}+2\mathbf{A}_{3} \bigr) \bigl(\mathbf{e}^{(k)} \bigr)^{3}+\cdots, \end{aligned}$$
(39)
$$\begin{aligned}& \mathbf{F}^{\prime } \bigl(\mathbf{y}^{(k)} \bigr)=\mathbf{F}^{\prime }(\boldsymbol{\zeta}) \bigl(\mathbf{I}+2\mathbf{A}_{2}^{2} \bigl(\mathbf{e}^{(k)} \bigr)^{2}+2\mathbf{A}_{2} \bigl(-2\mathbf{A}_{2}^{2}+2\mathbf{A}_{3} \bigr) \bigl(\mathbf{e}^{(k)} \bigr)^{3}+\cdots \bigr). \end{aligned}$$
(40)

Using equations (38) and (40) in the second step of (33), we get

$$ \widehat{\mathbf{e}}^{(k)}=\mathbf{z}^{(k)}-\boldsymbol{\zeta} = \biggl( 2 \mathbf{A}_{2}^{2}+\frac{1}{2} \mathbf{A}_{3}-\alpha \mathbf{A}_{2}^{2} \biggr) \bigl(\mathbf{e}^{(k)} \bigr)^{3}+ \bigl\Vert \mathbf{O} \bigl(\mathbf{e}^{(k)} \bigr)^{4} \bigr\Vert . $$
(41)

Hence, it proves the theorem. □

5 Complex dynamical study of families of iterative methods

Here, we discuss the dynamical behavior of the iterative methods Q1, E1–E6. We investigate the region from which the initial estimates may be taken so that the iterations reach the roots of the nonlinear equation. In effect, we numerically approximate the basins of attraction of the roots as a qualitative measure of how the iterative methods depend on the choice of initial estimates. To answer these questions, we study the dynamics of method Q1 and compare it with E1–E6; for more details on the dynamical behavior of iterative methods, one can consult [3, 35, 36]. For a rational map \(\Re _{f}:\mathbb{C} \longrightarrow \mathbb{C} \), where \(\mathbb{C} \) denotes the complex plane, the orbit of a point \(x_{0}\in \mathbb{C} \) is the set \(\operatorname{orb}(x_{0})=\{x_{0},\Re _{f}(x_{0}),\Re _{f}^{2}(x_{0}),\ldots,\Re _{f}^{m}(x_{0}),\ldots \}\). The orbit is said to converge to \(x^{\ast }\) if \(\lim_{k\rightarrow \infty }\Re _{f}^{k}(x_{0})=x^{\ast }\) exists. A fixed point \(x^{\ast }\in \mathbb{C} \) is attracting if \(\vert \Re _{f}^{\prime }(x^{\ast }) \vert <1\), and its basin of attraction is the set of starting points whose orbits tend to \(x^{\ast }\). For the dynamical and graphical study, we take a \(2000\times 2000\) grid of the square \([-2.5,2.5]^{2}\subset \mathbb{C} \). To each root of (1) we assign a color, and every starting point receives the color of the root to which the corresponding orbit of the iterative method converges; the color maps Jet and Hot are used respectively, and a different color is used for each root. We use \(\vert x_{i+1}-x_{i} \vert <10^{-3}\) and \(\vert f(x_{i}) \vert <10^{-3}\) as stopping criteria, and the maximum number of iterations is taken as 20. We mark a dark blue point when using the stopping criterion \(\vert x_{i+1}-x_{i} \vert <10^{-3}\) and a dark black point when using \(\vert f(x_{i}) \vert <10^{-3}\).
Iterative methods have different basins of attraction, distinguished by their colors. We obtain basins of attraction for the following three test functions: \(f_{1}(x)=x^{4}-ix^{2}+1\), \(f_{2}(x)=(1+2i)x^{5}+1-2i\), and \(f_{3}(x)=x^{6}-ix^{3}+1\). The exact roots of \(f_{1}(x)\), \(f_{2}(x)\), and \(f_{3}(x)\) are given in Table 1. Brightness of color indicates a lower number of iteration steps.
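
The basin pictures can be reproduced with a short script. The sketch below is ours and deliberately reduced: an \(11\times 11\) grid stands in for the \(2000\times 2000\) grid, only method Q1 with \(\alpha =1\) is run, the stopping test \(\vert f(x) \vert <10^{-3}\) with at most 20 iterations is used, and color-map handling is omitted.

```python
import numpy as np

f = lambda x: (1 + 2j) * x**5 + 1 - 2j          # test function f_2
df = lambda x: 5 * (1 + 2j) * x**4
roots = np.roots([1 + 2j, 0, 0, 0, 0, 1 - 2j])  # the five exact roots of f_2

def basin_index(x, alpha=1.0, tol=1e-3, max_iter=20):
    """Index of the root the Q1 orbit of x converges to; -1 = dark point."""
    for _ in range(max_iter):
        if abs(x) > 1e60:                       # orbit escaped
            return -1
        fx, dfx = f(x), df(x)
        if dfx == 0:
            return -1
        y = x - fx / dfx
        denom = alpha * df(y) + (2 - alpha) * dfx
        if denom == 0:
            return -1
        x = y - (dfx - df(y)) / denom * fx / dfx
        if abs(f(x)) < tol:
            return int(np.argmin(np.abs(roots - x)))
    return -1

grid = [complex(a, b) for a in np.linspace(-2.5, 2.5, 11)
                      for b in np.linspace(-2.5, 2.5, 11)]
colors = [basin_index(p) for p in grid]         # one root index per pixel
```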

Table 1 Exact roots of functions \(f_{1}(x)\), \(f_{2}(x)\), and \(f_{3}(x)\)

6 Numerical results

Here, some numerical examples are considered in order to demonstrate the performance of our family of one-step third order single root finding methods (Q1), of the fifth order simultaneous methods (SM1), and of the third order family of iterative methods for solving systems of nonlinear equations (QQ1). We compare our family of single root finding methods (Q1) with the third order iterative methods E1–E6. The family of simultaneous methods (SM1) of order five is compared with the method of Zhang et al. [30] of the same order (abbreviated as ZPH). The iterative methods for finding roots of nonlinear systems (QQ1) are compared with EE1–EE3 respectively. All computations are performed using CAS Maple 18 with 2500 significant digits (64-digit floating point arithmetic in the case of simultaneous methods) and the following stopping criteria:

$$\begin{aligned}& \text{(i)}\quad e_{i}^{(k)}= \bigl\vert f \bigl( x_{i}^{(k)} \bigr) \bigr\vert < \epsilon ,\qquad \text{(ii)}\quad e_{i}^{(k)}= \bigl\vert x_{i}^{ ( k ) }- \zeta \bigr\vert < \epsilon , \\& \text{(iii)} \quad \mathbf{e}^{(k)}= \bigl\Vert \mathbf{F} \bigl(\mathbf{x}^{(k)} \bigr) \bigr\Vert < \epsilon , \qquad \text{(iv)} \quad \mathbf{e}^{(k)}= \bigl\Vert \mathbf{x}^{(k+1)}-\mathbf{x}^{(k)} \bigr\Vert < \epsilon , \end{aligned}$$

where \(e_{i}^{(k)}\) and \(\mathbf{e}^{(k)} \) represent the absolute errors. We take \(\epsilon =10^{-600}\) for the single root finding methods, \(\epsilon =10^{-30}\) for the simultaneous determination of all roots of nonlinear equation (1), and \(\epsilon =10^{-15}\) for approximating the roots of nonlinear system (31).

Numerical test examples from [32, 34, 37, 38] are provided in Tables 2–8. In Table 3 stopping criterion (i) is used, in Table 2 stopping criteria (i) and (ii) are both used, while in Tables 4–8 stopping criteria (iii) and (iv) are both used. In all tables, CO represents the convergence order, n the number of iterations, ρ the local computational order of convergence [39], and CPU the computational time in seconds. We observe that, for the same number of iterations, the numerical results of our family of iterative methods (Q1 for a single root, SM1 for the simultaneous determination of all roots, and QQ1 for approximating the roots of systems of nonlinear equations) are better than those of E1–E6, ZPH, and EE1–EE3 respectively. Figures 4(a), (b) to 6(a), (b) show the residual fall of the iterative methods (Q1, SM1, QQ1, ZPH, E1–E6, EE1–EE3): Figs. 4(a) and 4(b) show the residual fall for single root finding (Q1, E1–E6) and for the simultaneous determination of all roots (SM1, ZPH), while Figs. 5(a), (b) and 6(a), (b) show the residual fall for (QQ1, EE1–EE3) respectively. Tables 2–8 and Figs. 1–6 clearly show the dominant convergence behavior of our family of iterative methods (Q1, SM1, QQ1) over E1–E6, ZPH, and EE1–EE3.

Figure 1
figure 1

Figure 1(a), (e), (g), (i), (k), (m), (o) shows basins of attraction of iterative methods Q1, E1–E6 for the nonlinear function \(f_{1}(x)=x^{4}-ix^{2}+1\) using \(\vert x^{(k+1)}\text{-}x^{(k)} \vert <10^{-3}\). Figure 1(b), (f), (h), (j), (l), (n), (p) shows basins of attraction of iterative methods Q1, E1–E6 using \(\vert f(x^{(k)}) \vert <10^{-3}\). Figure 1(c), (d) shows the basin of attraction for \(\alpha =-0.000001\). In Fig. 1(a)–(p), brightness of color in basins of Q1 shows a lower number of iterations for convergence of iterative methods as compared to methods E1–E6.

Figure 2
figure 2

Figure 2(a), (e), (g), (i), (k), (m), (o) shows basins of attraction of iterative methods Q1, E1–E6 for the nonlinear equation \(f_{2}(x)=(1+2i)x^{5}+1-2i\) using \(\vert x^{(k+1)}-x^{(k)} \vert <10^{-3}\). Figure 2(b), (f), (h), (j), (l), (n), (p) shows basins of attraction of iterative methods Q1, E1–E6 using \(\vert f(x^{(k)}) \vert <10^{-3}\). Figure 2(c), (d) shows the basin of attraction for \(\alpha =-0.000001\). In Fig. 2(a)–(p), brightness of color in basins of Q1 shows a lower number of iterations for convergence of the iterative method as compared to methods E1–E6.

Figure 3
figure 3

Figure 3(a), (e), (g), (i), (k), (m), (o) shows basins of attraction of iterative methods Q1, E1–E6 for the nonlinear equation \(f_{3}(x)=x^{6}-ix^{3}+1\) using \(\vert x^{(k+1)}\text{-}x^{(k)} \vert <10^{-3}\). Figure 3(b), (f), (h), (j), (l), (n), (p) shows basins of attraction of iterative methods Q1, E1–E6 using \(\vert f(x^{(k)}) \vert <10^{-3}\). Figure 3(c), (d) shows the basin of attraction for \(\alpha =-0.000001\). In Fig. 3(a)–(p), brightness of color in basins of Q1 shows a lower number of iterations for convergence of iterative method as compared to methods E1–E6.

Figure 4
figure 4

Figure 4(a) shows the residual graph of the single root finding methods Q1, E1–E6, and Fig. 4(b) shows the residual graph for the simultaneous determination of all roots of \(f_{4}(x)\) using ZPH and SM1 respectively.

Table 2 Comparison of optimal 3rd order methods
Table 3 Simultaneous finding of all roots of \(f_{4}(x)\)
Table 4 Comparison of optimal 3rd order methods
Table 5 Comparison of optimal 3rd order methods
Table 6 Comparison of optimal 3rd order methods

7 Application in engineering

In this section, we discuss applications in engineering.

Example 1

(Beam designing model [38] (1-dimensional problem))

An engineer considers a problem of embedment x of a sheet-pile wall resulting in a nonlinear equation given as

$$ f_{4}(x)=\frac{x^{3}+2.87x^{2}-10.28}{4.62}-x. $$
(42)

The exact roots of (42) are \(\zeta _{1}=2.0021\), \(\zeta _{2}=-3.3304 \), \(\zeta _{3}=-1.5417\).

The initial estimates for \(f_{4}(x)\) are taken as \(x_{1}^{(0)}=2.5\), \(x_{2}^{(0)}=-7.4641\), \(x_{3}^{(0)}=-0.5359\).
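
As a quick cross-check (ours, not part of the paper), clearing the denominator in (42) gives the cubic \(x^{3}+2.87x^{2}-4.62x-10.28=0\), whose numerically computed roots can be compared with the quoted values:

```python
import numpy as np

# x^3 + 2.87 x^2 - 4.62 x - 10.28 = 0 is (42) multiplied through by 4.62
coeffs = [1.0, 2.87, -4.62, -10.28]
roots = np.sort(np.roots(coeffs).real)   # ascending: zeta_2, zeta_3, zeta_1
```

The computed values agree with the quoted roots \(\zeta _{2}=-3.3304\), \(\zeta _{3}=-1.5417\), \(\zeta _{1}=2.0021\).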

Example 2

(2-dimensional problem [32, 37])

In case of a 2-dimensional system, we consider the following systems of nonlinear equations:

$$\begin{aligned}& \mathbf{F}_{1}(\mathbf{X})=\textstyle\begin{cases} f_{1}(x_{1},x_{2})=x_{1}^{2}-10x_{1}+x_{2}^{2}+8, \\ f_{2}(x_{1},x_{2})=x_{1}x_{2}^{2}+x_{1}-10x_{2}+8,\end{cases}\displaystyle \qquad \mathbf{X}_{0} =( \mathbf{0.6},\mathbf{1.4})^{T}, \\& \mathbf{F}_{2}(\mathbf{X})=\textstyle\begin{cases} f_{1}(x_{1},x_{2})=x_{1}^{2}-2x_{1}-x_{2}+0.5, \\ f_{2}(x_{1},x_{2})=x_{1}^{2}+4x_{2}^{2}-1, \end{cases}\displaystyle \qquad \mathbf{X}_{0} =( \mathbf{1.5},\mathbf{1.0})^{T}. \end{aligned}$$

Example 3

(3-dimensional problems [34])

In case of a 3-dimensional system, we consider the following system of nonlinear equations:

$$\begin{aligned}& \mathbf{F}_{3}(\mathbf{X})=\textstyle\begin{cases} f_{1}(x_{1},x_{2},x_{3})=15x_{1}+x_{2}^{2}-4x_{3}-13, \\ f_{2}(x_{1},x_{2},x_{3})=x_{1}^{2}+10x_{2}-e^{-x_{3}}-11, \\ f_{3}(x_{1},x_{2},x_{3})=x_{2}^{2}-25x_{3}+22,\end{cases}\displaystyle \qquad \mathbf{X}_{0}=(\mathbf{0.8}, \mathbf{1},\mathbf{0.8})^{T}, \\& \mathbf{F}_{4}(\mathbf{X})=\textstyle\begin{cases} f_{1}(x_{1},x_{2},x_{3})=x_{1}^{2}+x_{2}^{2}-x_{3}^{2}-1, \\ f_{2}(x_{1},x_{2},x_{3})=2x_{1}^{2}+10x_{2}^{2}-4x_{3}-1, \\ f_{3}(x_{1},x_{2},x_{3})=3x_{1}^{2}-4x_{2}^{2}-x_{3}^{2},\end{cases}\displaystyle \qquad \mathbf{X}_{0}=(\mathbf{0.5}, \mathbf{0.5},\mathbf{0.5})^{T}. \end{aligned}$$

Example 4

(N-dimensional problem [34])

Consider the following system of nonlinear equations:

$$ \mathbf{F}_{5}: f_{i}=e^{x_{i}^{2}}-1,\quad i=1,2,3, \ldots,m, $$

the exact solution of this system is \(\mathbf{X}^{\ast }=[0,0,0,\ldots,0]^{T}\), and we take \(\mathbf{X}_{0}=[0.5,0.5,0.5,\ldots, 0.5]^{T}\) as initial estimates. The results for this system of nonlinear equations are shown in the tables.

Example 5

(N-dimensional problem)

Consider the following system of nonlinear equations:

$$ \mathbf{F}_{6}: f_{i}=x_{i}^{2}- \cos (x_{i}-1),\quad i=1,2,3,\ldots,m, $$

the exact solution of this system is \(\mathbf{X}^{\ast }=[1,1,1,\ldots,1]^{T}\), and we take \(\mathbf{X}_{0}=[2,2,2,\ldots,2]^{T}\) as initial estimates. The results for this system of nonlinear equations are shown in the tables.

7.1 Application to differential equation

Example 6

(Nonlinear BVP)

Here, we solve a nonlinear BVP defined as

$$\begin{aligned}& y^{\prime \prime } = \frac{1}{8} \bigl(32+2x^{3}-yy^{\prime } \bigr),\quad 1\leq x \leq 3, \\& y(1) = 17;\qquad y(3)=\frac{43}{3}. \end{aligned}$$
(43)

Using the finite difference method, we solve this nonlinear BVP. Taking \(h=0.1\), we discretize the interval \([1,3]\) into \(N+1=19+1=20\) equal subintervals (see Table 7). The grid points are given by \(x_{i}=a+ih\), where \(a=1\).

Table 7 Domain discretization for BVP

We use the central difference formula for both \(y^{\prime \prime }(x_{i})\) and \(y^{\prime }(x_{i})\) derived in Burden and Faires in [40] as follows:

$$\begin{aligned}& y^{\prime \prime }(x_{i}) = \frac{1}{h^{2}} \bigl(y(x_{i+1})-2y(x_{i})+y(x_{i-1}) \bigr)- \frac{h^{2}}{12}y^{(iv)}(\xi ) \quad \text{for some }\xi \in (x_{i-1},x_{i+1}), \end{aligned}$$
(44)
$$\begin{aligned}& y^{\prime }(x_{i}) = \frac{1}{2h} \bigl(y(x_{i+1})-y(x_{i-1}) \bigr)- \frac{h^{2}}{6}y^{(iii)}(\eta )\quad \text{for some }\eta \in (x_{i-1},x_{i+1}). \end{aligned}$$
(45)

Substituting the values of \(y^{\prime \prime }(x_{i})\) and \(y^{\prime }(x_{i})\) into (43), we obtain the following tridiagonal system of nonlinear equations:

$$ \mathbf{F}_{7}(\mathbf{X})=\textstyle\begin{cases} f_{1}=2x_{1}-x_{2}+0.01 ( 4+0.33275+\frac{x_{1}(x_{2}-17)}{1.6} ) -17, \\ f_{2}=-x_{1}+2x_{2}-x_{3}+0.01 ( 4+0.432+\frac{x_{2}(x_{3}-x_{1})}{1.6} ) , \\ f_{3}=-x_{2}+2x_{3}-x_{4}+0.01 ( 4+0.5495+\frac{x_{3}(x_{4}-x_{2})}{1.6} ) , \\ f_{4}=-x_{3}+2x_{4}-x_{5}+0.01 ( 4+0.686+\frac{x_{4}(x_{5}-x_{3})}{1.6} ) , \\ f_{5}=-x_{4}+2x_{5}-x_{6}+0.01 ( 4+0.84375+\frac{x_{5}(x_{6}-x_{4})}{1.6} ) , \\ f_{6}=-x_{5}+2x_{6}-x_{7}+0.01 ( 4+1.024+\frac{x_{6}(x_{7}-x_{5})}{1.6} ) , \\ f_{7}=-x_{6}+2x_{7}-x_{8}+0.01 ( 4+1.22825+\frac{x_{7}(x_{8}-x_{6})}{1.6} ) , \\ f_{8}=-x_{7}+2x_{8}-x_{9}+0.01 ( 4+1.458+\frac{x_{8}(x_{9}-x_{7})}{1.6} ) , \\ f_{9}=-x_{8}+2x_{9}-x_{10}+0.01 ( 4+1.71475+\frac{x_{9}(x_{10}-x_{8})}{1.6} ) , \\ f_{10}=-x_{9}+2x_{10}-x_{11}+0.01 ( 4+2+\frac{x_{10}(x_{11}-x_{9})}{1.6} ) , \\ f_{11}=-x_{10}+2x_{11}-x_{12}+0.01 ( 4+2.31525+\frac{x_{11}(x_{12}-x_{10})}{1.6} ) , \\ f_{12}=-x_{11}+2x_{12}-x_{13}+0.01 ( 4+2.662+\frac{x_{12}(x_{13}-x_{11})}{1.6} ) , \\ f_{13}=-x_{12}+2x_{13}-x_{14}+0.01 ( 4+3.04175+\frac{x_{13}(x_{14}-x_{12})}{1.6} ) , \\ f_{14}=-x_{13}+2x_{14}-x_{15}+0.01 ( 4+3.456+\frac{x_{14}(x_{15}-x_{13})}{1.6} ) , \\ f_{15}=-x_{14}+2x_{15}-x_{16}+0.01 ( 4+3.90625+\frac{x_{15}(x_{16}-x_{14})}{1.6} ) , \\ f_{16}=-x_{15}+2x_{16}-x_{17}+0.01 ( 4+4.394+\frac{x_{16}(x_{17}-x_{15})}{1.6} ) , \\ f_{17}=-x_{16}+2x_{17}-x_{18}+0.01 ( 4+4.92075+\frac{x_{17}(x_{18}-x_{16})}{1.6} ) , \\ f_{18}=-x_{17}+2x_{18}-x_{19}+0.01 ( 4+5.488+\frac{x_{18}(x_{19}-x_{17})}{1.6} ) , \\ f_{19}=-x_{18}+2x_{19}+0.01 ( 4+6.09725+\frac{x_{19}(14.333333-x_{18})}{1.6} ) -14.333333,\end{cases} $$
(46)

where \(x_{0}=17\) and \(x_{20}=14.333333\). We take

$$ \mathbf{X}_{0}= [ 16.86666667, 16.73333333, 16.6, 16.46666667, 16.33333333, 16.2, 16.06666667, 15.93333333, 15.8, 15.66666667, 15.53333333, 15.4, 15.26666667, 15.13333333, 15, 14.86666667, 14.73333333, 14.6, 14.46666667 ] ^{T} $$

as initial estimates. The solution to our boundary value problem of the nonlinear ordinary differential equation is

$$ \mathbf{X}= [ 17.0000, 16.7605, 16.5134, 16.2589, 15.9974, 15.7298, 15.4577, 15.1829, 14.9083, 14.6375, 14.3750, 14.1266, 13.8993, 13.7018, 13.5443, 13.4391, 13.4010, 13.4475, 13.5999, 13.8843 ] ^{T}. $$

The result of the above nonlinear system is shown in Table 8.

Table 8 Residual errors of different iterations of QQ1 for solving \(\mathbf{F}_{7}(\mathbf{X})\)
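
The discretization (44)–(45) can also be assembled generically instead of writing out the nineteen equations of \(\mathbf{F}_{7}\) by hand. The sketch below is ours: it evaluates the residuals at the interior nodes and, for brevity, drives them to zero with plain Newton steps and a forward-difference Jacobian rather than with QQ1.

```python
import numpy as np

h, a, ya, yb, N = 0.1, 1.0, 17.0, 43.0 / 3.0, 19      # BVP (43) data
xs = a + h * np.arange(1, N + 1)                      # interior nodes x_1..x_19

def F7(u):
    """Residuals of the central-difference equations at the interior nodes."""
    y = np.concatenate(([ya], u, [yb]))               # attach boundary values
    out = np.empty(N)
    for i in range(1, N + 1):
        ypp = (y[i + 1] - 2 * y[i] + y[i - 1]) / h**2
        yp = (y[i + 1] - y[i - 1]) / (2 * h)
        out[i - 1] = ypp - (32 + 2 * xs[i - 1]**3 - y[i] * yp) / 8
    return out

u = np.linspace(ya, yb, N + 2)[1:-1]                  # linear initial profile
for _ in range(20):                                   # plain Newton iteration
    F0 = F7(u)
    J = np.empty((N, N))
    for j in range(N):                                # forward-difference Jacobian
        e = np.zeros(N)
        e[j] = 1e-7
        J[:, j] = (F7(u + e) - F0) / 1e-7
    u = u - np.linalg.solve(J, F0)
```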

8 Conclusion

We have developed families of single root finding methods of convergence order three for nonlinear equations as well as for systems of nonlinear equations, and families of simultaneous methods of order five. From Tables 2–8 and Figs. 1–6, we observe that our methods (Q1, SM1, and QQ1) are superior in terms of efficiency, stability, CPU time, and residual error as compared to the methods E1–E6, ZPH, and EE1–EE3 respectively.

Figure 5
figure 5

Figure 5(a)–(b) shows a residual graph of iterative methods QQ1, EE1–EE3 for solving \(\mathbf{F}_{1}(\mathbf{X})\) and \(\mathbf{F}_{2}(\mathbf{X})\) respectively.

Figure 6
figure 6

Figure 6(a)–(b) shows a residual graph of iterative methods QQ1, EE1–EE3 for solving \(\mathbf{F}_{3}(\mathbf{X})\) and \(\mathbf{F}_{4}(\mathbf{X})\) respectively.