1 Introduction

In this article, we consider the problem of approximating a unique solution \(\Theta ^{*}\) of a nonlinear equation

$$\begin{aligned} {\mathfrak {B}}(\Theta )=0, \end{aligned}$$
(1)

where \({\mathfrak {B}}:{\mathcal {C}} \subset {\mathbb {X}} \rightarrow {\mathbb {Y}}\) is a continuous but non-differentiable nonlinear operator defined on a nonempty open convex subset \({\mathcal {C}}\) of a Banach space \({\mathbb {X}}\) with values in a Banach space \({\mathbb {Y}}\). It is well known that finding exact solutions of equations of type (1) is difficult, and iterative methods are usually used to approximate the solutions of such equations. Semilocal convergence analysis uses information around an initial point to give criteria ensuring the convergence of the iterative method, while local analysis uses information around a solution to estimate the radii of convergence balls. A recurrence relation is an equation that defines a sequence by a rule giving the next term as a function of the previous terms. Newton's method and Newton-type methods [18, 31,32,33] are the most widely used iterative methods for approximating the solution. All such methods require the differentiability of the involved operator, which is a drawback for problems in which the operator is non-differentiable. To overcome this, many researchers [6, 18, 22, 26] have established and studied iterative methods using divided differences. One such method, which does not require the differentiability of the involved operator, is Kurchatov's method, proposed by Kurchatov [37] and given by:

$$\begin{aligned} \left\{ \begin{array}{ll} \Theta _{-1},\Theta _{0},\Theta _{m+1},\Theta _{m},\Theta _{m-1}\in {\mathcal {C}},\\ \Theta _{m+1}=\Theta _{m}-[\Theta _{m-1},2\Theta _{m}-\Theta _{m-1};{\mathfrak {B}}]^{-1}{\mathfrak {B}}(\Theta _{m}) ,~~&{}\text{ if }\, m\ge 0. \end{array}\right. \end{aligned}$$
(2)

The operator \([e,f;{\mathfrak {B}}] \) is a first-order divided difference of \({\mathfrak {B}}\) at the points e and \(f~(e\ne f)\) and satisfies \([e,f;{\mathfrak {B}}](e-f)={\mathfrak {B}}(e)-{\mathfrak {B}}(f)\). In the finite-dimensional vector space \({\mathbb {R}}^{m}\), the divided difference of first order [35] is defined as \([\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}]=([\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}]_{ij})_{i,j=1}^{m}\in \mathcal {L} ({\mathbb {R}}^{m},{\mathbb {R}}^{m}), \) where \(\mathcal {L} ({\mathbb {R}}^{m},{\mathbb {R}}^{m})\) denotes the space of all bounded linear operators from \({\mathbb {R}}^{m}\) into \({\mathbb {R}}^{m}\), with \(\mathbf{e} =(e_{1},e_{2},\dots ,e_{m})^\mathrm{{T}}\), \(\mathbf{f} =(f_{1},f_{2},\dots ,f_{m})^\mathrm{{T}}\) and

$$\begin{aligned}{}[\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}]_{ij}=\frac{1}{e_{j}-f_{j}}\big ({{\mathfrak {B}}_{i}(e_{1},\dots ,e_{j},f_{j+1},\dots ,f_{m})}-{\mathfrak {B}}_{i}(e_{1},\dots ,e_{j-1},f_{j},\dots ,f_{m})\big ). \end{aligned}$$
(3)
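For readers who wish to experiment, the following minimal Python sketch (an illustrative helper of ours, not code from the cited references; the name divided_difference is hypothetical) evaluates the matrix (3) for an operator \({\mathfrak {B}}:{\mathbb {R}}^{m}\rightarrow {\mathbb {R}}^{m}\) given as a NumPy callable:

```python
import numpy as np

def divided_difference(B, e, f):
    """First-order divided difference [e, f; B] of B: R^m -> R^m, per Eq. (3).

    Column j uses the mixed points (e_1,...,e_j, f_{j+1},...,f_m) and
    (e_1,...,e_{j-1}, f_j,...,f_m); requires e_j != f_j for every j.
    """
    m = len(e)
    J = np.zeros((m, m))
    for j in range(m):
        upper = np.concatenate((e[:j + 1], f[j + 1:]))  # (e_1,...,e_j, f_{j+1},...,f_m)
        lower = np.concatenate((e[:j], f[j:]))          # (e_1,...,e_{j-1}, f_j,...,f_m)
        J[:, j] = (B(upper) - B(lower)) / (e[j] - f[j])
    return J
```

One can check numerically that the returned matrix satisfies the defining identity \([\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}](\mathbf{e} -\mathbf{f} )={\mathfrak {B}}(\mathbf{e} )-{\mathfrak {B}}(\mathbf{f} )\), up to rounding, since the column differences telescope.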

To reach the solution more quickly, some authors [6, 19, 28, 29] have also considered two-step derivative-free iterative methods. One such method is the two-step Kurchatov method [28, 29], defined with the help of divided differences:

$$\begin{aligned} \left\{ \begin{array}{ll} \Theta _{-1},\Theta _{0},\Theta _{m+1},\Theta _{m},\Theta _{m-1},\theta _{m}\in {\mathcal {C}},\\ \theta _{m}=\Theta _{m}-[\Theta _{m-1},2\Theta _{m}-\Theta _{m-1};{\mathfrak {B}}]^{-1}{\mathfrak {B}}(\Theta _{m}),\\ \Theta _{m+1}=\theta _{m}-[\Theta _{m-1},2\Theta _{m}-\Theta _{m-1};{\mathfrak {B}}]^{-1}{\mathfrak {B}}(\theta _{m}),~~&{}\text{ if }\, m\ge 0. \end{array}\right. \end{aligned}$$
(4)

Kumar and Parida [29] gave the semilocal convergence of this method under Lipschitz continuity conditions on the first-order divided difference operator by using recurrence relations. To generalize the convergence analysis, some authors [25, 26] have also considered an \(\omega \)-type condition on the first-order divided difference operator of \({\mathfrak {B}}\):

$$\begin{aligned} \Vert [\Theta ,\theta ;{\mathfrak {B}}]-[e,f;{\mathfrak {B}}]\Vert \le \omega \big (\Vert \Theta -e\Vert ,\Vert \theta -f\Vert \big ),~ \Theta , \theta ,e,f\in {\mathcal {C}}. \end{aligned}$$
(5)

Recall that Hernández and Rubio [25, 26] provided sufficient convergence conditions for the secant method by combining condition (5) with recurrence relations. Ren [36] also gave a new semilocal convergence analysis of the secant method under condition (5) with the help of recurrence relations. Argyros et al. [8], Argyros and Ren [7], Hernández and Rubio [24, 27] and Ezquerro et al. [20] also studied the semilocal convergence of secant-like methods under the same continuity condition. Amat and Busquier [5] introduced a two-step Steffensen method and analyzed its semilocal convergence with the help of recurrence relations under the \(\omega \)-continuity condition (5). Ezquerro et al. [21] likewise studied the semilocal convergence of the two-step Kurchatov method (4) with the help of recurrence relations under condition (5).

Recently, Argyros et al. [8] introduced the Chebyshev–secant-type method and gave its semilocal convergence by the use of recurrence relations under Lipschitz-type conditions on the first-order divided difference operator. Argyros et al. [9] also established the semilocal convergence of the same method under the \(\omega \)-continuity condition (5) with the help of recurrence relations. A third-order three-step Steffensen method was proposed by Ezquerro et al. [23]:

$$\begin{aligned} \left. \begin{array}{ll} \Theta _{0}, \Theta _{m+1},\Theta _{m},\theta _{m},\vartheta _{m}\in {\mathcal {C}},\\ \theta _{m}=\Theta _{m}-[\Theta _{m},\Theta _{m}+{\mathfrak {B}}(\Theta _{m}); {\mathfrak {B}}]^{-1}{\mathfrak {B}}(\Theta _{m}),\\ \vartheta _{m}=\theta _{m}-[\Theta _{m},\Theta _{m}+{\mathfrak {B}}(\Theta _{m}); {\mathfrak {B}}]^{-1}{\mathfrak {B}}(\theta _{m}), \\ \Theta _{m+1}=\vartheta _{m}-[\Theta _{m},\Theta _{m}+{\mathfrak {B}}(\Theta _{m}); {\mathfrak {B}}]^{-1}{\mathfrak {B}}(\vartheta _{m}),~~&{}\text{ if }\, m\ge 0. \end{array}\right\} \end{aligned}$$
(6)

The authors established the semilocal convergence of the method using recurrence relations under a Lipschitz continuity condition. In all of the above-mentioned methods, the divided difference operator is frozen.

In [28], the authors discussed the semilocal convergence analysis of the three-step Kurchatov method, which is defined by

$$\begin{aligned} \left. \begin{array}{ll} \Theta _{-1},\Theta _{0},\Theta _{m+1},\Theta _{m},\Theta _{m-1},\vartheta _{m},\theta _{m}\in {\mathcal {C}},\\ \theta _{m}=\Theta _{m}-[\Theta _{m-1},2\Theta _{m}-\Theta _{m-1};{\mathfrak {B}}]^{-1}{\mathfrak {B}}(\Theta _{m}),\\ \vartheta _{m}=\theta _{m}-[\Theta _{m-1},2\Theta _{m}-\Theta _{m-1};{\mathfrak {B}}]^{-1}{\mathfrak {B}}(\theta _{m}), \\ \Theta _{m+1}=\vartheta _{m}-[\Theta _{m-1},2\Theta _{m}-\Theta _{m-1};{\mathfrak {B}}]^{-1}{\mathfrak {B}}(\vartheta _{m}),~~&{}\text{ if }\, m\ge 0. \end{array}\right\} \end{aligned}$$
(7)

The authors also obtained better results for the order of convergence, the optimal computational efficiency and the efficiency index of method (7) in comparison with methods (2) and (4). An important benefit of this method is that only one divided difference of the operator is evaluated in each iteration, while the order of convergence of the method is 3.
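To make the structure of (7) concrete, here is a minimal sketch of one possible implementation (assuming the hypothetical divided_difference helper above; all names are ours, not from [28]). The point it illustrates is that the single divided difference \(T_{m}\) is LU-factorized once per iteration and the factorization is reused for the three substeps:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def kurchatov_three_step(B, theta_prev, theta0, tol=1e-15, max_iter=50):
    """Sketch of the three-step Kurchatov method (7): one divided difference
    per iteration, factorized once and reused for three solves.

    Assumes the points stay componentwise distinct, so that
    divided_difference is well defined along the iteration.
    """
    x_prev, x = np.asarray(theta_prev, float), np.asarray(theta0, float)
    for _ in range(max_iter):
        T = divided_difference(B, x_prev, 2.0 * x - x_prev)  # T_m of (7)
        lu_piv = lu_factor(T)                                # one LU decomposition
        y = x - lu_solve(lu_piv, B(x))                       # theta_m
        z = y - lu_solve(lu_piv, B(y))                       # vartheta_m
        x_prev, x = x, z - lu_solve(lu_piv, B(z))            # Theta_{m+1}
        if np.linalg.norm(x - x_prev, np.inf) < tol:
            break
    return x
```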

In addition to the semilocal convergence result mentioned above, Ezquerro et al. [21] established the R-order of convergence of the two-step Kurchatov method (4) together with its efficiency index and computational efficiency index. Dennis [15], Potra [34], Argyros [7, 8] and Ezquerro and Hernández [17] also studied semilocal convergence for such methods under the same continuity condition. In fact, if there exist \(N,M\ge 0\) such that \(\omega (s_{1},s_{2})=N+M(s_{1}+s_{2})\) for all \(s_{1},s_{2}\ge 0\), we recover the Lipschitz continuous case.

Recently, Arqub and Shawagfeh [13] analyzed an efficient reproducing kernel algorithm for the numerical solution of equations in porous media with Dirichlet boundary conditions. Arqub [11] worked on the numerical solution of the time-fractional Schrödinger equation subject to a given constraint condition, based on the generalized Taylor series formula in the Caputo sense. The author also discussed a reproducing kernel algorithm for obtaining numerical solutions of fractional-order systems of Dirichlet functions in [12], and in [10] gave numerical solutions of systems of first-order, two-point BVPs based on the reproducing kernel algorithm.

Ahmad et al. [2] found the numerical solution of three-term time-fractional-order multi-dimensional diffusion equations by using an efficient local meshless method. Kumar et al. [30] worked on fourth-order derivative-free iterative methods for finding a multiple root, and showed that the performance of the new technique makes it a good competitor to existing optimal fourth-order Newton-like techniques. Bazighifan and Cesarano [14], using the Riccati transformation technique and the integral averaging method, obtained conditions ensuring the oscillation of solutions of fourth-order neutral differential equations. Ahmad et al. [3] solved initial and boundary value problems by the use of variational iteration algorithm-I with an auxiliary parameter and discussed the effectiveness and utility of the proposed technique. Ahmad et al. [1] proposed two new modified variational iteration algorithms for the numerical solution of coupled Burgers' equations, and evaluated the error norms, the accuracy of the methods and the speed of convergence on numerical problems. Elabbasy et al. [16] studied the asymptotic behavior of a class of higher-order delay differential equations with a p-Laplacian-like operator.

The structure of this paper is as follows: in Sect. 2, we give a new semilocal convergence analysis of the three-step Kurchatov method with the help of new-type recurrence relations. In Sect. 3, we discuss the optimal computational efficiency of the Kurchatov-type method with comparison graphs. Numerical examples for differentiable and non-differentiable operators, including higher-dimensional problems, are given in Sect. 4. Finally, conclusions are given in Sect. 5.

Throughout the paper, we denote \(\overline{B(\Theta _0,{\mathfrak {R}})}=\{\theta \in {\mathcal {C}}:\Vert \theta -\Theta _{0}\Vert \le {\mathfrak {R}}\}\) and \(B(\Theta _0,{\mathfrak {R}})=\{\theta \in {\mathcal {C}}:\Vert \theta -\Theta _{0}\Vert < {\mathfrak {R}}\}\), where \({\mathfrak {R}}\) is the least positive root of Eq. (9) below and \(\overline{B(\Theta _0,{\mathfrak {R}})}, B(\Theta _0,{\mathfrak {R}})\subseteq {\mathcal {C}}\).

2 Semilocal convergence of Kurchatov three-step method

In this section, we analyze the semilocal convergence of method (7) with the help of recurrence relations. Let us assume that the operator \({\mathfrak {B}}\) satisfies the following conditions P:

$$\begin{aligned} \left. \begin{array}{l} {(P1)~\Theta _{-1}, \Theta _{0}, ~2\Theta _{0}-\Theta _{-1}\in {\mathcal {C}}~ \mathrm {such ~that~} \Vert \Theta _{-1}-\Theta _{0}\Vert =\varphi },\\ {(P2)~\Vert {\mathfrak {B}}(\Theta _{0})\Vert \le \varpi },\\ {(P3)~\mathrm {there~ exists~} T_{0}^{-1}=[\Theta _{-1},2\Theta _{0}-\Theta _{-1};{\mathfrak {B}}]^{-1}~ \mathrm {such~ that~} \Vert T_{0}^{-1}\Vert \le \phi },\\ {(P4)~ \Vert [\Theta ,\theta ;{\mathfrak {B}}]-[e,f;{\mathfrak {B}}]\Vert \le q+ \omega (\Vert \Theta -e\Vert ,\Vert \theta -f\Vert )},\\ {\quad ~\mathrm {where}~q,\varphi , \varpi ,\phi \ge 0;~\Theta ,\theta ,e,f\in {\mathcal {C}};~ \Theta \ne \theta ,~e\ne f},\\ \mathrm {~where~\omega : {\mathbb {R}}_{+}\times {\mathbb {R}}_{+}\rightarrow {\mathbb {R}}_{+} is~ a~ continuous~ nondecreasing~ function~ in~ its~ two~ arguments}. \end{array}\right\} \end{aligned}$$
(8)
  1. (P5)

    Let \(w(\varTheta )=\phi \big (q+\omega (2\varTheta +\varphi ,3\varTheta +\varphi )\big )\) and \( \psi =\phi \varpi \), and let \({\mathfrak {R}}\) be the least positive root of the equation

    $$\begin{aligned} \varTheta =\frac{2\big (1-w(\varTheta )\big )\psi }{\big (1-2w(\varTheta )\big )}, \end{aligned}$$
    (9)

    with \(w({\mathfrak {R}})<\frac{1}{2}\).
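In computations, \({\mathfrak {R}}\) can be obtained numerically. A hedged sketch (assuming SciPy and the parameter names of (P1)–(P5); the function names are ours) scans for the first sign change of the rearranged Eq. (9) and refines it with a root finder:

```python
import numpy as np
from scipy.optimize import brentq

def least_positive_root(phi, q, omega, varphi, psi, t_max=10.0, n_grid=100000):
    """Least positive root R of Eq. (9) with w(R) < 1/2, where
    w(t) = phi * (q + omega(2t + varphi, 3t + varphi)) and psi = phi * varpi."""
    w = lambda t: phi * (q + omega(2 * t + varphi, 3 * t + varphi))
    g = lambda t: t * (1 - 2 * w(t)) - 2 * (1 - w(t)) * psi   # rearranged Eq. (9)
    ts = np.linspace(0.0, t_max, n_grid)
    for a, b in zip(ts[:-1], ts[1:]):
        if g(a) * g(b) <= 0:                  # first sign change: least positive root
            R = brentq(g, a, b)
            if w(R) < 0.5:
                return R
    raise ValueError("no root with w(R) < 1/2 found on (0, t_max]")
```

For instance, with the data of Example 4.1 below (\(q=2/9\), \(\omega (p_{1},p_{2})=p_{1}+p_{2}\), \(\varphi =0.00606\), \(\phi =0.45771\), \(\psi \approx 9\times 10^{-6}\)), this procedure returns \({\mathfrak {R}}\approx 0.00002\), matching the value reported there.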

Here, we provide a lemma which plays an important role in deriving the recurrence relations for method (7).

Lemma 2.1

Let \(\{\Theta _{m}\}\), \(\{\theta _{m}\}\) and \(\{\vartheta _{m}\}\) be the sequences generated by method (7) with distinct points \(\Theta _{m},\theta _{m-1},\theta _{m},\vartheta _{m-1},\vartheta _{m}\in {\mathcal {C}}\). Suppose \(T_{m}=[\Theta _{m-1},2\Theta _{m}-\Theta _{m-1};{\mathfrak {B}}]\). Then

  1. (i)

    \({\mathfrak {B}}(\theta _{m})=\big ([\theta _{m},\Theta _{m};{\mathfrak {B}}] -T_{m}\big )(\theta _{m}-\Theta _{m})\),

  2. (ii)

    \({\mathfrak {B}}(\vartheta _{m})=\big ([\vartheta _{m},\theta _{m};{\mathfrak {B}}] -T_{m}\big )(\vartheta _{m}-\theta _{m})\),

  3. (iii)

    \({\mathfrak {B}}(\Theta _{m})=\big ([\Theta _{m},\vartheta _{m-1};{\mathfrak {B}}] -T_{m-1}\big )(\Theta _{m}-\vartheta _{m-1})\).

Proof

The proof of the above lemma is given in [28]. \(\square \)

Lemma 2.2

Suppose that the operator \({\mathfrak {B}}\) satisfies conditions (P1)–(P5) with distinct points \(\Theta _{m},\theta _{m},\theta _{0},\Theta _{0},\vartheta _{m}\in {\mathcal {C}}\). Then the following recurrence relations hold for \(m\ge 1\):

  1. (i)

    \(T_{m}^{-1}\) exists and \(\Vert T_{m}^{-1}\Vert \le \displaystyle \frac{\phi }{1-w({\mathfrak {R}})}\),

  2. (ii)

    \(\Vert \theta _{m}-\Theta _{m}\Vert <\displaystyle \left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3m}\Vert \theta _{0}-\Theta _{0}\Vert \),

  3. (iii)

    \(\Vert \theta _{m}-\Theta _{0}\Vert <\displaystyle \sum _{j=0}^{3m}\left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{j}\Vert \theta _{0}-\Theta _{0}\Vert \),

  4. (iv)

    \(\Vert {\mathfrak {B}}(\theta _{m})\Vert <\big (q+\omega (2{\mathfrak {R}},\psi )\big )\Vert \theta _{m}-\Theta _{m}\Vert \),

  5. (v)

    \(\Vert \vartheta _{m}-\theta _{m}\Vert < \displaystyle \left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3m+1}\Vert \theta _{0}-\Theta _{0}\Vert \),

  6. (vi)

    \(\Vert \vartheta _{m}-\Theta _{m}\Vert <\left( \left( \displaystyle \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3m+1}+\displaystyle \left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3m}\right) \Vert \theta _{0}-\Theta _{0}\Vert \),

  7. (vii)

    \(\Vert \vartheta _{m}-\Theta _{0}\Vert <\left( \displaystyle \sum _{j=0}^{3m+1}\left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{j}\right) \Vert \theta _{0}-\Theta _{0}\Vert \),

  8. (viii)

    \(\Vert {\mathfrak {B}}(\vartheta _{m})\Vert <\big (q+\omega (2{\mathfrak {R}},2\psi )\big )\Vert \vartheta _{m}-\theta _{m}\Vert \).

Proof

We prove this lemma by mathematical induction. It follows from the initial hypotheses (P1) and (P5) that \(\theta _{0}, \vartheta _{0}\) and \(\Theta _{1}\) are well defined, because by our assumption \(T_{0}^{-1}\) exists. We can easily conclude that \(\psi <{\mathfrak {R}}\). Also,

$$\begin{aligned} \Vert \theta _{0}-\Theta _{0}\Vert \le \Vert T_{0}^{-1}\Vert ~\Vert {\mathfrak {B}}(\Theta _{0})\Vert \le \phi \varpi =\psi <{\mathfrak {R}}, \end{aligned}$$

from (P2) and (P3). Then, using Lemma 2.1 and (P4), we get

$$\begin{aligned} \Vert {\mathfrak {B}}(\theta _{0})\Vert\le & {} \big (q+\omega (\psi +\varphi ,\varphi )\big )\Vert \theta _{0}-\Theta _{0}\Vert \\\le & {} \big (q+\omega (\psi +\varphi ,\varphi )\big )\phi \varpi <w({\mathfrak {R}})\varpi . \end{aligned}$$

So,

$$\begin{aligned} \Vert \vartheta _{0}-\theta _{0}\Vert \le \Vert T_{0}^{-1}\Vert \Vert {\mathfrak {B}}(\theta _{0})\Vert \le \phi \big (q+\omega (\psi +\varphi ,\varphi )\big )\Vert \theta _{0}-\Theta _{0}\Vert<w({\mathfrak {R}}) \Vert \theta _{0}-\Theta _{0}\Vert<\psi <{\mathfrak {R}}, \end{aligned}$$

and

$$\begin{aligned} \Vert \vartheta _{0}-\Theta _{0}\Vert \le \Big (1+\phi \big (q+\omega (\psi +\varphi ,\varphi )\big )\Big )\Vert \theta _{0} -\Theta _{0}\Vert<\Bigg (1+\frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\Bigg )\Vert \theta _{0} -\Theta _{0}\Vert <{\mathfrak {R}}. \end{aligned}$$

Using the above Lemma 2.1 and (P4), we get

$$\begin{aligned} \Vert {\mathfrak {B}}(\vartheta _{0})\Vert\le & {} \big (q+\omega ({\mathfrak {R}}+\varphi ,\psi +\varphi )\big )\Vert \vartheta _{0}-\theta _{0}\Vert \\< & {} \big (q+\omega ({\mathfrak {R}} +\varphi ,\psi +\varphi )\big )\psi<w({\mathfrak {R}})\varpi <\varpi , \end{aligned}$$

and

$$\begin{aligned} \Vert \Theta _{1}-\vartheta _{0}\Vert \le \Vert T_{0}^{-1}\Vert \Vert {\mathfrak {B}}(\vartheta _{0})\Vert<w({\mathfrak {R}})\Vert \vartheta _{0}-\theta _{0}\Vert<\displaystyle \Bigg (\frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\Bigg )^{2}\Vert \theta _{0}-\Theta _{0}\Vert<\psi <{\mathfrak {R}}. \end{aligned}$$

Again, we have

$$\begin{aligned} \Vert \Theta _{1}-\Theta _{0}\Vert\le & {} \Vert \Theta _{1}-\vartheta _{0}\Vert +\Vert \vartheta _{0} -\Theta _{0}\Vert \\< & {} \left( 1+\frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}+\Big (\frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\Big )^{2}\right) \Vert \theta _{0}-\Theta _{0}\Vert<\frac{1-w({\mathfrak {R}})}{1-2w({\mathfrak {R}})}\psi <{\mathfrak {R}}, \end{aligned}$$

then by use of Eq. (9)

$$\begin{aligned} \Vert (2\Theta _{1}-\Theta _{0})-\Theta _{0}\Vert = 2\Vert \Theta _{1}-\Theta _{0}\Vert < 2\frac{1-w({\mathfrak {R}})}{1-2w({\mathfrak {R}})}\psi ={\mathfrak {R}}. \end{aligned}$$

So, \(\Theta _{1}, ~2\Theta _{1}-\Theta _{0}\in B(\Theta _{0}, {\mathfrak {R}})\). We see that (i)–(viii) are true for \(m=1\). Now, assuming they are true for \(m\le n-1\), we have to prove that (i)–(viii) are true for \(m=n\).

Since \(\Vert I-T_{0}^{-1}T_{n}\Vert \le \Vert T_{0}^{-1}\Vert ~\Vert T_{0}-T_{n}\Vert<\phi \Big ( q+\omega \big ({\mathfrak {R}}+\varphi ,3{\mathfrak {R}}+\varphi \big )\Big )<w({\mathfrak {R}})<1,\) by the Banach lemma [35], \(T_{n}^{-1}\) exists and

$$\begin{aligned} \Vert T_{n}^{-1}\Vert \le \displaystyle \frac{\phi }{1-w({\mathfrak {R}})}. \end{aligned}$$
(10)

Hence, (i) is true for \(m=n.\) Now, we have to prove (ii), (iii) and (iv). Using Eq. (10), and assumptions (P2), (P3) and (P5), we obtain

$$\begin{aligned} \Vert \theta _{n}-\Theta _{n}\Vert \le \Vert T_{n}^{-1}\Vert \Vert {\mathfrak {B}}(\Theta _{n})\Vert< \left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3n}\Vert \theta _{0}-\Theta _{0}\Vert<\psi <{\mathfrak {R}}, \end{aligned}$$

and using (P5)

$$\begin{aligned} \begin{aligned} \Vert \theta _{n}-\Theta _{0}\Vert \le \Vert \theta _{n}-\Theta _{n}\Vert +\Vert \Theta _{n}-\Theta _{0}\Vert<\displaystyle \sum _{j=0}^{3n}\left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{j}\Vert \theta _{0}-\Theta _{0}\Vert<\frac{1-w({\mathfrak {R}})}{1-2w({\mathfrak {R}})}\psi <{\mathfrak {R}}. \end{aligned} \end{aligned}$$

Again using Lemma 2.1 and (P4),

$$\begin{aligned} \Vert {\mathfrak {B}}(\theta _{n})\Vert\le & {} \Vert [\theta _{n},\Theta _{n};{\mathfrak {B}}]-[\Theta _{n-1},2\Theta _{n}-\Theta _{n-1};{\mathfrak {B}}]\Vert ~\Vert \theta _{n}-\Theta _{n}\Vert \\< & {} \big (q+\omega (2{\mathfrak {R}},\psi )\big )\Vert \theta _{n}-\Theta _{n}\Vert . \end{aligned}$$

Using Eq. (10) and (P5), we have

$$\begin{aligned} \Vert \vartheta _{n}-\theta _{n}\Vert\le & {} \Vert T_{n}^{-1}\Vert \Vert {\mathfrak {B}}(\theta _{n})\Vert< \frac{\phi }{1-w({\mathfrak {R}})}\Big (q+\omega \big (2{\mathfrak {R}},\psi \big )\Big )\Vert \theta _{n}-\Theta _{n}\Vert \\< & {} \displaystyle \Bigg (\frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\Bigg )^{3n+1}\Vert \theta _{0}-\Theta _{0}\Vert<\psi <{\mathfrak {R}}, \end{aligned}$$

so, (v) is proved. Using (P5), we have

$$\begin{aligned} \Vert \vartheta _{n}-\Theta _{n}\Vert\le & {} \Vert \vartheta _{n}-\theta _{n}\Vert +\Vert \theta _{n}-\Theta _{n}\Vert \\ {}< & {} \left( \left( \displaystyle \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3n+1}+\displaystyle \left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3n}\right) \Vert \theta _{0}-\Theta _{0}\Vert<\psi<{\mathfrak {R}},\\ \Vert \vartheta _{n}-\Theta _{0}\Vert\le & {} \Vert \vartheta _{n}-\Theta _{n}\Vert +\Vert \Theta _{n}-\Theta _{0}\Vert \\< & {} \left( \displaystyle \sum _{j=0}^{3n+1}\left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{j}\right) \Vert \theta _{0}-\Theta _{0}\Vert<\frac{1-w({\mathfrak {R}})}{1-2w({\mathfrak {R}})}\psi <{\mathfrak {R}}, \end{aligned}$$

using Lemma 2.1, (P4) and (P5)

$$\begin{aligned} \Vert {\mathfrak {B}}(\vartheta _{n})\Vert\le & {} \Big (q+\omega \big (\Vert \vartheta _{n}-\Theta _{0}\Vert +\Vert \Theta _{n-1}-\Theta _{0}\Vert ,\Vert \theta _{n}-\Theta _{n}\Vert +\Vert \Theta _{n}-\Theta _{n-1}\Vert \big )\Big )\Vert \vartheta _{n}-\theta _{n}\Vert \\< & {} \big (q+\omega (2{\mathfrak {R}},2\psi )\big )\Vert \vartheta _{n}-\theta _{n}\Vert< w({\mathfrak {R}})\varpi <\varpi . \end{aligned}$$

Hence, (vi), (vii) and (viii) hold. Therefore, (i)–(viii) are true for \(m=n\). Hence, by mathematical induction, all the relations are true for every natural number m, and the lemma is proved. \(\square \)

Also, we obtain some important consequences of the above lemma:

$$\begin{aligned} \Vert \Theta _{n+1}-\vartheta _{n}\Vert\le & {} \Vert T_{n}^{-1}\Vert \Vert {\mathfrak {B}}(\vartheta _{n})\Vert<\displaystyle \left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) \Vert \vartheta _{n}-\theta _{n}\Vert \\ {}< & {} \displaystyle \left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3n+2}\Vert \theta _{0}-\Theta _{0}\Vert<\psi<{\mathfrak {R}},\\ \Vert \Theta _{n+1}-\Theta _{n}\Vert\le & {} \Vert \Theta _{n+1}-\vartheta _{n}\Vert +\Vert \vartheta _{n}-\Theta _{n}\Vert \\< & {} \displaystyle \left( \sum _{j=3n}^{3n+2}\left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{j}\right) \Vert \theta _{0}-\Theta _{0}\Vert<\frac{1-w({\mathfrak {R}})}{1-2w({\mathfrak {R}})}\psi <{\mathfrak {R}}, \end{aligned}$$

and

$$\begin{aligned} \Vert \Theta _{n+1}-\Theta _{0}\Vert\le & {} \Vert \Theta _{n+1}-\vartheta _{n}\Vert +\Vert \vartheta _{n}-\Theta _{0}\Vert \\ {}< & {} \left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3n+2}\Vert \theta _{0}-\Theta _{0}\Vert +\sum _{j=0}^{3n+1}\left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{j}\Vert \theta _{0}-\Theta _{0}\Vert \\= & {} \displaystyle \left( \sum _{j=0}^{3n+2}\left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{j}\right) \Vert \theta _{0}-\Theta _{0}\Vert<\frac{1-w({\mathfrak {R}})}{1-2w({\mathfrak {R}})}\psi <{\mathfrak {R}}. \end{aligned}$$

Using Lemma 2.2, we have

$$\begin{aligned} \Vert {\mathfrak {B}}(\Theta _{n+1})\Vert\le & {} \Big (q+\omega \big (\Vert \Theta _{n+1}-\Theta _{0}\Vert +\Vert \Theta _{n-1}-\Theta _{0}\Vert ,\\&\Vert \vartheta _{n}-\Theta _{n}\Vert +\Vert \Theta _{n}-\Theta _{n-1}\Vert \big )\Big )\Vert \Theta _{n+1}-\vartheta _{n}\Vert \\ {}< & {} \big (q+\omega (2{\mathfrak {R}},2\psi )\big )\Vert \Theta _{n+1}-\vartheta _{n}\Vert<w({\mathfrak {R}})\varpi <\varpi , \end{aligned}$$

and

$$\begin{aligned} \Vert (2\Theta _{n+1}-\Theta _{n})-\Theta _{0}\Vert \le \Vert \Theta _{n+1}-\Theta _{n}\Vert +\Vert \Theta _{n+1}-\Theta _{0}\Vert < \frac{2\big (1-w({\mathfrak {R}})\big )}{1-2w({\mathfrak {R}})}\psi ={\mathfrak {R}}. \end{aligned}$$

Now, we give the convergence theorem for iterative method (7).

Theorem 2.3

Let \({\mathfrak {B}}:{\mathcal {C}} \subset {\mathbb {X}} \rightarrow {\mathbb {Y}}\) be a nonlinear operator defined on a nonempty open convex domain \({\mathcal {C}}\) of a Banach space \({\mathbb {X}}\) with values in \({\mathbb {Y}}\). Suppose that \({\mathfrak {B}}\) satisfies the conditions P given in (8), and that \(B(\Theta _{0}, {\mathfrak {R}})\), \(B(\Theta _{-1}, {\mathfrak {R}})\subset {\mathcal {C}}\). Then the iterates of method (7), starting from \(\Theta _{-1}\) and \(\Theta _{0}\), are well defined, belong to \(\overline{B(\Theta _{0}, {\mathfrak {R}})}\) and converge to a unique solution \(\Theta ^{*}\) of \({\mathfrak {B}}(\Theta )=0\) in \(\overline{B(\Theta _{0}, {\mathfrak {R}})}\).

Proof

The sequence \(\{\Theta _{n}\}\) converges if it is a Cauchy sequence, so we have to show that \(\{\Theta _{n}\}\) is a Cauchy sequence. Now, by using Lemma 2.2, we have

$$\begin{aligned} \Vert \Theta _{m+n}-\Theta _{n}\Vert\le & {} \sum _{i=1}^{m}\Vert \Theta _{n+i}-\Theta _{n+i-1}\Vert \\< & {} \sum _{i=1}^{m} \left( \sum _{j=3n+3i-3}^{3n+3i-1}\left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{j}\right) \Vert \theta _{0}-\Theta _{0}\Vert<\psi <{\mathfrak {R}}. \end{aligned}$$

Therefore, \(\{\Theta _{m}\}\) is a Cauchy sequence and, consequently, it converges to some \(\Theta ^*\in \overline{B(\Theta _{0},{\mathfrak {R}})}\). Using Lemma 2.2(vi), we easily see that \(\{\vartheta _{m}\}\) also converges to \(\Theta ^*\). Also, we have

$$\begin{aligned} \Vert {\mathfrak {B}}(\Theta _{m})\Vert <\big (q+\omega (2{\mathfrak {R}},2\psi )\big )\Vert \Theta _{m}-\vartheta _{m-1}\Vert , \end{aligned}$$

and \(\Vert \Theta _{m}-\vartheta _{m-1}\Vert \rightarrow 0\) as \(m\rightarrow \infty \); hence the continuity of \({\mathfrak {B}}\) implies \({\mathfrak {B}}(\Theta ^{*})=0\). Now, we have to show the uniqueness of \(\Theta ^{*}\). Assume that \(\Theta ^{**}\) is another solution of \({\mathfrak {B}}(\Theta )=0\) in \(\overline{B(\Theta _0,{\mathfrak {R}})}\). Then, applying the properties of the divided difference,

$$\begin{aligned}{}[\Theta ^{**},\Theta ^{*};{\mathfrak {B}}] (\Theta ^{**}-\Theta ^{*})={\mathfrak {B}}(\Theta ^{**})-{\mathfrak {B}}(\Theta ^{*})=0. \end{aligned}$$

Also, \(\Vert T_{0}-[\Theta ^{**},\Theta ^{*};{\mathfrak {B}}]\Vert =\Vert [\Theta _{-1},2\Theta _{0}-\Theta _{-1};{\mathfrak {B}}]-[\Theta ^{**},\Theta ^{*};{\mathfrak {B}}]\Vert \le q+\omega \big ({\mathfrak {R}}+\varphi ,{\mathfrak {R}}+\varphi \big )\) and \(\Vert I-T_{0}^{-1}[\Theta ^{**},\Theta ^{*};{\mathfrak {B}}]\Vert \le \Vert T_{0}^{-1}\Vert ~\Vert T_{0}-[\Theta ^{**},\Theta ^{*};{\mathfrak {B}}]\Vert \le \phi \Big (q+\omega \big ({\mathfrak {R}}+\varphi ,{\mathfrak {R}}+\varphi \big )\Big )<w({\mathfrak {R}})<1,\) so, using the Banach lemma, \([\Theta ^{**},\Theta ^{*};{\mathfrak {B}}]^{-1}\) exists. Hence, \(\Theta ^{**}=\Theta ^{*}\), and the proof of uniqueness is complete. \(\square \)

3 Optimal computational efficiency

We know that the efficiency of an iterative method cannot be measured by its operational cost alone; the number of function evaluations is also needed. Traub [38] defined the efficiency index \(\varrho ^{\frac{1}{v}}\), where \(\varrho \) is the local order of convergence of the method and v represents the number of function evaluations. To compare the efficiencies of the three iterative methods (2), (4) and (7), we use the following computational efficiency index (CEI):

$$\begin{aligned} \mathrm{{CEI}}(\mu ,m)=\varrho ^{\frac{1}{{\mathcal {C}}(\mu ,m)}}, \end{aligned}$$
(11)

where \(\varrho \) is the Q-order of convergence and \({\mathcal {C}}(\mu ,m)\) the computational cost of an iterative method. We denote the CEI of the above-mentioned methods, respectively, by

$$\begin{aligned} \mathrm{{CEI}}_{i}(\mu ,m)=\varrho _{i}^{\frac{1}{{\mathcal {C}}_{i}(\mu ,m)}}, ~~i=1,2,3, \end{aligned}$$
(12)

where \(\varrho _{i}\) is the Q-order of convergence and \({\mathcal {C}}_{i}(\mu ,m)\) the computational cost of the corresponding method. The computational cost is given by

$$\begin{aligned} {\mathcal {C}}_{i}(\mu ,m)=a_{i}(m)\mu +p_{i}(m), ~~i=1,2,3, \end{aligned}$$
(13)

where \(a_{i}(m)\) is the number of scalar function evaluations, \(p_{i}(m)\) is the number of products and divisions per iteration, and \(\mu \) is the ratio of the cost of one function evaluation to the cost of one product or division, required to express \({\mathcal {C}}_{i}(\mu ,m)\) in terms of products and divisions.

We observe that each iteration of method (2) requires the evaluation of the m scalar functions \({\mathfrak {B}}_{i},~i=1,2,\dots ,m\), plus the \(m^{2}\) scalar evaluations in the first-order divided difference matrix (3), so the total number of function evaluations is \(m^{2}+m\), i.e., \(a_{1}(m)=m^{2}+m\). Note that method (2) also requires \(m^{2}\) divisions to compute \([\Theta _{n-1},2\Theta _{n}-\Theta _{n-1};{\mathfrak {B}}]\), \(\frac{m^{3}-m}{3}\) products and divisions for the LU decomposition and \(m^{2}\) products and divisions for solving two triangular systems. Therefore, \(p_{1}(m)=\frac{m}{3}(m^{2}+6m-1)\). For method (4), it is easy to obtain \(a_{2}(m)=a_{1}(m)+m=m^{2}+2m\), because one new evaluation of the function \({\mathfrak {B}}\) is needed. The linear systems to solve are the same: the LU decomposition is already available and only two more triangular systems have to be solved, so \(p_{2}({m})=p_{1}(m)+m^{2}=\frac{m}{3}(m^{2}+9m-1)\). Lastly, for method (7), we obtain \(a_{3}(m)=a_{2}(m)+m=m^{2}+3m\), consistent with the summary below, because one further evaluation of \({\mathfrak {B}}\) is needed, and \(p_{3}(m)=p_{2}(m)+m^{2}=\frac{m}{3}(m^{2}+12m-1)\), because two more triangular systems have to be solved. We summarize the costs as follows:

$$\begin{aligned} \begin{aligned}&{\mathcal {C}}_{1}(\mu ,m)=(m^{2}+m)\mu +\dfrac{m}{3}(m^{2}+6m-1),\quad \varrho _{1}=2;\\ {}&{\mathcal {C}}_{2}(\mu ,m)=(m^{2}+2m)\mu +\dfrac{m}{3}(m^{2}+9m-1),\quad \varrho _{2}=\frac{1+\sqrt{17}}{2};\\ {}&{\mathcal {C}}_{3}(\mu ,m)=(m^{2}+3m)\mu +\dfrac{m}{3}(m^{2}+12m-1),\quad \varrho _{3}=3. \end{aligned} \end{aligned}$$
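These costs, together with the orders \(\varrho _{i}\), determine the indices (12) directly. The following small sketch (illustrative only; all names are ours) compares \(\log \mathrm{{CEI}}_{i}=\log \varrho _{i}/{\mathcal {C}}_{i}(\mu ,m)\) and can be used to confirm numerically the crossover values of \(\mu \) stated in Theorem 3.1 below:

```python
import numpy as np

RHO = {1: 2.0, 2: (1 + np.sqrt(17)) / 2, 3: 3.0}   # orders of methods (2), (4), (7)

def cost(i, mu, m):
    """Computational cost C_i(mu, m) from Eq. (13) and the summary above."""
    a = {1: m**2 + m, 2: m**2 + 2*m, 3: m**2 + 3*m}[i]
    p = {1: (m/3)*(m**2 + 6*m - 1),
         2: (m/3)*(m**2 + 9*m - 1),
         3: (m/3)*(m**2 + 12*m - 1)}[i]
    return a * mu + p

def log_cei(i, mu, m):
    """log CEI_i(mu, m) = log(rho_i) / C_i(mu, m), from Eq. (12)."""
    return np.log(RHO[i]) / cost(i, mu, m)

# e.g. CEI_3 > CEI_2 holds for m = 8 once mu exceeds 0.649..., and for all mu at m >= 9:
print(log_cei(3, 0.7, 8) > log_cei(2, 0.7, 8))   # True
print(log_cei(3, 0.1, 9) > log_cei(2, 0.1, 9))   # True
```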

We can compare the corresponding \(\mathrm{{CEI}}_{i}(\mu ,m)\) using the following expressions

$$\begin{aligned} P_{ij}=\frac{\log \mathrm{{CEI}}_{i}}{\log \mathrm{{CEI}}_{j}}=\frac{{\mathcal {C}}_{j}}{{\mathcal {C}}_{i}}\frac{\log \varrho _{i}}{\log \varrho _{j}},~~~(i,j)=(2,1), (3,1), (3,2), \end{aligned}$$

and we can state the following theorem:

Theorem 3.1

We have

  1. (1)

    \(\mathrm{{CEI}}_{2}>\mathrm{{CEI}}_{1}\) for \(m\ge 3\) and for \(m=2\) with \(\mu >3.024\dots \),

  2. (2)

    \(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{1}\) for \(m\ge 5\) and for \(m=4\) with \(\mu >0.427\dots \),

  3. (3)

    \(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{2}\) for \(m\ge 9\) and for \(m=8\) with \(\mu >0.649\dots \),

  4. (4)

    \(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{2}>\mathrm{{CEI}}_{1}\) for \(m\ge 9\).

Proof

(1) For \(m=2\), we assume that \(\mathrm{{CEI}}_{2}>\mathrm{{CEI}}_{1}\). Then, to calculate the threshold value of \(\mu \), we take logarithms and obtain

$$\begin{aligned} \log \Bigg (\frac{\mathrm{{CEI}}_{2}}{\mathrm{{CEI}}_{1}}\Bigg )>0 \implies \frac{\log \mathrm{{CEI}}_{2}}{\log \mathrm{{CEI}}_{1}}>1\implies \frac{{\mathcal {C}}_{1}}{{\mathcal {C}}_{2}}\frac{\log \varrho _{2}}{\log \varrho _{1}} >1, \end{aligned}$$

or,

$$\begin{aligned} \frac{\log \varrho _{2}}{\log \varrho _{1}}>\frac{{\mathcal {C}}_{2}}{{\mathcal {C}}_{1}} \implies \mu >\frac{\frac{m}{3}\Big ((m^2+9m-1)-(m^2+6m-1)1.357018637\Big )}{(m^2+m)1.357018637-(m^2+2m)}, \end{aligned}$$
(14)

or \(\mu >3.024\dots \) for \(m=2\); also \(\mathrm{{CEI}}_{2}<\mathrm{{CEI}}_{1}\) for \(0<\mu \le 3.024\dots \) Furthermore, for \(m\ge 3\) the right-hand side of (14) is negative, while \(\mu \) is always positive. Hence, in this case, \(\mathrm{{CEI}}_{2}>\mathrm{{CEI}}_{1}\) always holds for \(m\ge 3\). We can see from Fig. 1 that the computational efficiency index of method (4) is larger than that of method (2) for \(m\ge 3\), and for \(m=2\) with \(\mu >3.024\dots \)

(2) For \(m=4\), we assume that \(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{1}\). Then, to calculate the threshold value of \(\mu \), we take logarithms and obtain

$$\begin{aligned} \log \Bigg (\frac{\mathrm{{CEI}}_{3}}{\mathrm{{CEI}}_{1}}\Bigg )>0 \implies \frac{\log \mathrm{{CEI}}_{3}}{\log \mathrm{{CEI}}_{1}}>1\implies \frac{{\mathcal {C}}_{1}}{{\mathcal {C}}_{3}}\frac{\log \varrho _{3}}{\log \varrho _{1}} >1. \end{aligned}$$

Again,

$$\begin{aligned} \frac{\log \varrho _{3}}{\log \varrho _{1}}>\frac{{\mathcal {C}}_{3}}{{\mathcal {C}}_{1}} \implies \mu >\frac{\frac{m}{3}\big ((m^2+12m-1)-(m^2+6m-1)1.584962501\big )}{(m^2+m)1.584962501-(m^2+3m)}. \end{aligned}$$
(15)

Now, putting \(m=4\), we have \(\mu >0.427\dots \), and also \(\mathrm{{CEI}}_{3}<\mathrm{{CEI}}_{1}\) for \(0<\mu \le 0.427\dots \) Furthermore, for \(m\ge 5\) the right-hand side of (15) is negative, while \(\mu \) is always positive. Hence, in this case, \(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{1}\) always holds for \(m\ge 5\). We can see from Fig. 2 that the computational efficiency index of method (7) is larger than that of method (2) for \(m\ge 5\), and for \(m=4\) with \(\mu >0.427\dots \)

(3) For \(m=8\), we assume that \(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{2}\). Then, to calculate the threshold value of \(\mu \), we take logarithms and obtain

$$\begin{aligned} \log \Bigg (\frac{\mathrm{{CEI}}_{3}}{\mathrm{{CEI}}_{2}}\Bigg )>0 \implies \frac{\log \mathrm{{CEI}}_{3}}{\log \mathrm{{CEI}}_{2}}>1\implies \frac{{\mathcal {C}}_{2}}{{\mathcal {C}}_{3}}\frac{\log \varrho _{3}}{\log \varrho _{2}} >1. \end{aligned}$$

Again,

$$\begin{aligned} \frac{\log \varrho _{3}}{\log \varrho _{2}}>\frac{{\mathcal {C}}_{3}}{{\mathcal {C}}_{2}} \implies \mu >\frac{\frac{m}{3}\big ((m^2+12m-1)-(m^2+9m-1)1.167974012\big )}{(m^2+2m)1.167974012-(m^2+3m)}. \end{aligned}$$
(16)

Now, putting \(m=8\), we have \(\mu >0.649\dots \), and also \(\mathrm{{CEI}}_{3}<\mathrm{{CEI}}_{2}\) for \(0<\mu \le 0.649\dots \) Furthermore, for \(m\ge 9\) the right-hand side of (16) is negative, while \(\mu \) is always positive. Hence, in this case, \(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{2}\) always holds for \(m\ge 9\). We can see from Fig. 3 that the computational efficiency index of method (7) is larger than that of method (4) for \(m\ge 9\), and for \(m=8\) with \(\mu >0.649\dots \) Also, we can easily see from Fig. 4 that the computational efficiency index of method (7) is larger than those of methods (2) and (4) for \(m\ge 9\). Here, red lines are used for method (2), green lines for method (4) and blue lines for method (7). \(\square \)

Fig. 1

\(\mathrm{{CEI}}_{2}>\mathrm{{CEI}}_{1}\) for \(m\ge 3\) and for \(m=2\) with \(\mu >3.024\dots \)

Fig. 2

\(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{1}\) for \(m\ge 5\) and for \(m=4\) with \(\mu >0.427\dots \)

Fig. 3

\(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{2}\) for \(m\ge 9\) and for \(m=8\) with \(\mu >0.649\dots \)

Fig. 4

\(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{2}>\mathrm{{CEI}}_{1}\) for \(m\ge 9\)

4 Numerical examples

In this section, we provide some numerical examples to demonstrate the semilocal convergence results obtained in this study.

4.1 Application to two-dimensional problems

Example 4.1

Consider the nonlinear system:

$$\begin{aligned} \left. \begin{array}{l} \Theta _{1}^{2}-\Theta _{2}+1+\frac{1}{9}|\Theta _{1}-1|=0,\\ \Theta _{1}+\Theta _{2}^{2}-7+\frac{1}{9}|\Theta _{2}|=0. \end{array}\right\} \end{aligned}$$
(17)

If we denote \({\varvec{\Theta }}=(\Theta _{1},\Theta _{2})\), \({\mathfrak {B}}_{1}({\varvec{\Theta }})=\Theta _{1}^{2}-\Theta _{2}+1+\frac{1}{9}|\Theta _{1}-1|\) and \({\mathfrak {B}}_{2}({\varvec{\Theta }})=\Theta _{1}+\Theta _{2}^{2}-7+\frac{1}{9}|\Theta _{2}|\), then this problem is equivalent to solving \({\mathfrak {B}}({\varvec{\Theta }})=0\), where \({\mathfrak {B}}:{\mathbb {R}}^{2}\rightarrow {\mathbb {R}}^{2}\) with \({\mathfrak {B}}=({\mathfrak {B}}_{1},{\mathfrak {B}}_{2})\). The divided difference operator \(\left[ \mathbf{e} , \mathbf{f} ; {\mathfrak {B}} \right] \) of \({\mathfrak {B}}\) can be represented as

$$\begin{aligned} \left[ \mathbf{e} , \mathbf{f} ; {\mathfrak {B}} \right]= & {} \left[ \begin{array}{ll} \dfrac{e_{1}^{2}-f_{1}^{2}}{e_{1}-f_{1}} &{} -1\\ 1 &{} \dfrac{e_{2}^{2}-f_{2}^{2}}{e_{2}-f_{2}}\end{array}\right] \\&+\frac{1}{9} \left[ \begin{array}{ll} \dfrac{|e_{1}-1|-|f_{1}-1|}{e_{1}-f_{1}} &{} 0\\ 0 &{} \dfrac{|e_{2}|-|f_{2}|}{e_{2}-f_{2}}\end{array}\right] , \end{aligned}$$

where \(\mathbf{e} =(e_{1},e_{2})\) and \(\mathbf{f} =(f_{1},f_{2})\). Now, taking the max-norm as both the vector and the matrix norm, we obtain

$$\begin{aligned} \Vert \left[ {\varvec{\Theta }}, \varvec{\theta }; {\mathfrak {B}} \right] -\left[ \mathbf{e} , \mathbf{f} ; {\mathfrak {B}} \right] \Vert \le \Vert {\varvec{\Theta }}-\mathbf{e} \Vert + \Vert \varvec{\theta }-\mathbf{f} \Vert +\frac{2}{9}. \end{aligned}$$

Now, by condition (P4) of the semilocal convergence section, applied to method (7), it follows that \(q=\frac{2}{9}\) and \(\omega (p_{1},p_{2})=p_{1}+p_{2}\).

Now, we apply method (7) to system (17), choosing the initial iterates \(\mathbf{r}_\mathbf {-1} =(4.2, 6.1)\) and \(\mathbf{r}_\mathbf {0} =(5.0, 3.2)\). After two iterations, we get

$$\begin{aligned} \mathbf{r} _{1}=(1.16542,2.36075),~ \mathbf{r} _{2}=(1.15936,2.36182), \end{aligned}$$

and \(\varphi =0.00606\). Now, choosing \({\varvec{\Theta }}_{-1}=\mathbf{r}_{1}\) and \({\varvec{\Theta }}_{0}=\mathbf{r}_{2}\), we can easily check that conditions \((P1)-(P5)\) above are satisfied, with

$$\begin{aligned} \varpi =0.00002 \dots , \phi =0.45771 \dots , \mathrm {and}~ \psi =0.000009 \dots \end{aligned}$$

Thus, from Eq. (9), we get \({\mathfrak {R}}=0.00002 \dots \), so that \(w({\mathfrak {R}})=0.10731 \dots <\frac{1}{2}\). Eq. (9) also has another positive root \({\mathfrak {R}}=0.17159 \dots \), for which \(w({\mathfrak {R}})=0.49997 \dots <\frac{1}{2}\) holds as well.

Thus, the conditions of Theorem 2.3 are satisfied, and a unique solution \({\varvec{\Theta }}^{*}= (1.159360850 \dots , 2.361824342 \dots )\) of system (17) exists in \(\overline{B(\Theta _{0}, {\mathfrak {R}})}\). We calculate the absolute errors \(\Vert \Theta ^{*}-\Theta _{m}\Vert \) and compare them with those of the competing methods (2), (4) and the two-step secant method [4]. In Table 1, we compute and compare the absolute errors obtained by the aforementioned methods with tolerance \(10^{-8}\). It can easily be observed that method (7) converges more rapidly than the other methods.
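For reference, this computation can be reproduced with the sketches from the earlier sections (divided_difference and kurchatov_three_step are the hypothetical helpers defined there, and the componentwise distinctness required by the divided difference is assumed to hold along the iteration):

```python
import numpy as np

def B(t):
    """Non-differentiable system (17): B(Theta) = 0."""
    t1, t2 = t
    return np.array([t1**2 - t2 + 1 + abs(t1 - 1) / 9.0,
                     t1 + t2**2 - 7 + abs(t2) / 9.0])

# starting pair from the text; kurchatov_three_step is the sketch of Sect. 2
sol = kurchatov_three_step(B, np.array([4.2, 6.1]), np.array([5.0, 3.2]), tol=1e-8)
print(sol)   # approx (1.159360850..., 2.361824342...), cf. the text
```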

Table 1 Absolute errors (\(\Vert \Theta ^{*}-\Theta _{m}\Vert \)) for the system (17)

Example 4.2

Consider a differentiable system of nonlinear equations:

$$\begin{aligned} \left. \begin{array}{l} \Theta _{1}^{2}-2\Theta _{1}-\Theta _{2}-3=0,\\ \Theta _{2}^{2}-\Theta _{1}+2\Theta _{2}-\frac{3}{4}=0. \end{array}\right\} \end{aligned}$$
(18)

System (18) is equivalent to \({\mathfrak {B}}({\varvec{\Theta }})=0\), where \({\mathfrak {B}}:{\mathbb {R}}^{2}\rightarrow {\mathbb {R}}^{2}\), \({\mathfrak {B}}=({\mathfrak {B}}_{1},{\mathfrak {B}}_{2})\), \({\varvec{\Theta }}=(\Theta _{1},\Theta _{2})\in {\mathbb {R}}^2\), \({\mathfrak {B}}_{1}({\varvec{\Theta }})=\Theta _{1}^{2}-2\Theta _{1}-\Theta _{2}-3\) and \({\mathfrak {B}}_{2}({\varvec{\Theta }})=\Theta _{2}^{2}-\Theta _{1}+2\Theta _{2}-\frac{3}{4}\). For \(\mathbf{e} =(e_1,e_2),\mathbf{f} =(f_1,f_2)\in {\mathbb {R}}^2\) such that \(e_1\ne f_1\) and \(e_2\ne f_2\), the first-order divided difference operator \([\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}]\in \mathcal {L} ({\mathbb {R}}^{2},{\mathbb {R}}^{2})\) is given by

$$\begin{aligned} \left[ \mathbf{e} , \mathbf{f} ; {\mathfrak {B}} \right] = \left[ \begin{array}{ll} -2+\dfrac{e_{1}^{2}-f_{1}^{2}}{e_{1}-f_{1}} &{}\quad -1\\ -1 &{}\quad 2+\dfrac{e_{2}^{2}-f_{2}^{2}}{e_{2}-f_{2}}\end{array}\right] . \end{aligned}$$

If we use the max-norm, then we have

$$\begin{aligned} \Vert \left[ {\varvec{\Theta }},\varvec{\theta } ; {\mathfrak {B}} \right] -\left[ \mathbf{e} , \mathbf{f} ; {\mathfrak {B}} \right] \Vert \le \Vert {\varvec{\Theta }}-\mathbf{e} \Vert + \Vert \varvec{\theta }-\mathbf{f} \Vert . \end{aligned}$$

This gives us \(q=0\) and \(\omega (p_{1},p_{2})=p_{1}+p_{2}\).

Now, we apply method (7) to system (18), choosing the initial iterates \(\mathbf{r}_\mathbf {-1} =(7.2, 6.2)\) and \(\mathbf{r}_\mathbf {0} =(11.4, 1.5)\). After two iterations, we get

$$\begin{aligned} \mathbf{r} _{1}=(3.32678,1.25508),~ \mathbf{r} _{2}=(3.29020,1.24504), \end{aligned}$$

and \(\varphi =0.03658\). Now, choosing \({\varvec{\Theta }}_{-1}=\mathbf{r}_{1}\) and \({\varvec{\Theta }}_{0}=\mathbf{r}_{2}\), we can easily check that conditions \((P1) - (P5)\) of the semilocal convergence section are satisfied, with

$$\begin{aligned} \varpi =0.000023 \dots , \phi =0.285203 \dots , \psi =0.000006 \dots . \end{aligned}$$

Thus, from Eq. (9), we have two different positive roots \({\mathfrak {R}}=0.000013 \dots \) and \({\mathfrak {R}}^{*}=0.839972 \dots \), which satisfy \(w({\mathfrak {R}})=0.020877 \dots <\frac{1}{2}\) and \(w({\mathfrak {R}}^{*})=0.499995 \dots <\frac{1}{2}\).

Thus, the conditions of Theorem 2.3 are satisfied, and a unique solution \({\varvec{\Theta }}^{*}= (3.290205263 \dots , 1.245040147 \dots )\) of system (18) exists in \(\overline{B(\Theta _{0}, {\mathfrak {R}})}\). We calculate the absolute errors \(\Vert \Theta ^{*}-\Theta _{m}\Vert \) and compare them with those of the competing methods (2), (4) and the two-step secant method [4]. In Table 2, we compute and compare the absolute errors obtained by the aforementioned methods with tolerance \(10^{-8}\). It can easily be observed that method (7) converges more rapidly than the other methods.

Table 2 Absolute errors (\(\Vert \Theta ^{*}-\Theta _{m}\Vert \)) for the system (18)

4.2 Application to a higher-dimensional problem

Example 4.3

Consider

$$\begin{aligned} \Theta (x)=j(x)+\int _{p}^{q}S(x,y)T(y,\Theta (y))\mathrm{{d}}y,~~x\in [p,q], \end{aligned}$$
(19)

where \(-\infty<p<q<+\infty \), \(j\), \(S\) and \(T\) are given functions and \(\Theta \) is the unknown function to be determined. Solving (19) is equivalent to solving \({\mathfrak {B}}(\Theta )=0\), where \({\mathfrak {B}}:{\mathcal {C}}\subset {\mathcal {C}}[p,q]\rightarrow {\mathcal {C}}[p,q]\) and

$$\begin{aligned} {\mathfrak {B}}(\Theta (x))=\Theta (x)-j(x)-\int _{p}^{q}S(x,y)T(y,\Theta (y))\mathrm{{d}}y,~~x\in [p,q]. \end{aligned}$$
(20)

Let S be the Green’s function defined in \([p,q]\times [p,q]\). We use the Gauss–Legendre formula to transform (20) into a finite-dimensional nonlinear system of equation that can be given by

$$\begin{aligned} \int _{p}^{q} Q(y)\mathrm{{d}}y\simeq \sum _{k=1}^{n}W_{k}Q(y_{k}), \end{aligned}$$

where \(W_{k}\) and \(y_{k}\) denote the weights and nodes, respectively. Denoting \(\Theta (y_{k})\) and \(j(y_{k})\) by \(\Theta _{k}\) and \(j_{k}\), \(k=1,2,\dots ,n\), Eq. (20) can then be written as the following system of nonlinear equations:

$$\begin{aligned} {\mathfrak {B}}({\varvec{\Theta }})\equiv {\varvec{\Theta }}-\mathbf{j} -{\mathcal {A}}{{\mathcal {Z}}}=0, \end{aligned}$$
(21)

where \({\mathfrak {B}}:{\mathbb {R}}^{n}\rightarrow {\mathbb {R}}^{n},\) \({\varvec{\Theta }}\)=\((\Theta _{1},\Theta _{2},\dots ,\Theta _{n})^\mathrm{{T}}\), \({{\mathcal {Z}}}\)=\(\big (T(y_{1},\Theta _{1}),T(y_{2},\Theta _{2}),\dots ,T(y_{n},\Theta _{n})\big )^\mathrm{{T}}\), \({\mathcal {A}}=(a_{rs})_{r,s=1}^{n},\) \(\mathbf{j} =(j_{1},j_{2},\dots ,j_{n})^\mathrm{{T}}\).

The components of \({\mathfrak {B}}\) and \({\mathcal {A}}\) are determined by

$$\begin{aligned} {\mathfrak {B}}_r=\Theta _{r}-j_{r}-\sum _{s=1}^{n}a_{rs}T(y_{s},\Theta _{s}),~r=1,2,\dots ,n, \end{aligned}$$
(22)

and

$$\begin{aligned} a_{rs}=W_{s}S(y_{r},y_{s})= \left\{ \begin{array}{l} W_{s}\frac{(q-y_{r})(y_{s}-p)}{q-p},~~ s \le r,\\ W_{s}\frac{(q-y_{s})(y_{r}-p)}{q-p},~~ s > r. \end{array}\right. \end{aligned}$$
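A hedged sketch of this discretization (our own helper, built on NumPy's Gauss–Legendre routine; the name kernel_matrix is ours) assembles \({\mathcal {A}}\) from the nodes and weights mapped to \([p,q]\):

```python
import numpy as np

def kernel_matrix(p, q, n):
    """A = (a_rs) with a_rs = W_s * S(y_r, y_s) for the Green's kernel on [p, q]."""
    x, w = np.polynomial.legendre.leggauss(n)          # nodes/weights on [-1, 1]
    y = 0.5 * (q - p) * x + 0.5 * (q + p)              # map nodes to [p, q]
    W = 0.5 * (q - p) * w                              # scale weights
    A = np.empty((n, n))
    for r in range(n):
        for s in range(n):
            if s <= r:
                A[r, s] = W[s] * (q - y[r]) * (y[s] - p) / (q - p)
            else:
                A[r, s] = W[s] * (q - y[s]) * (y[r] - p) / (q - p)
    return A, y

A, y = kernel_matrix(0.0, 1.0, 8)
print(np.linalg.norm(A, np.inf))   # should match the value of ||A|| used below
```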

Now, consider in particular a nonlinear integral equation of the form (20):

$$\begin{aligned} {\mathfrak {B}}(\Theta (x))=\Theta (x)-2.1-\frac{1}{9}\int _{0}^{1}S(x,y)\big (\Theta (y)^{2}+|\Theta (y)|\big )\mathrm{{d}}y,~~x\in [0,1], \end{aligned}$$

where \(\Theta \in {\mathcal {C}}[0,1], y\in [0,1]\), and the kernel S is the Green’s function in \([0,1]\times [0,1]\).

By taking \(n=8\), we transform the given problem into the system of nonlinear equations

$$\begin{aligned} {\mathfrak {B}}({\varvec{\Theta }})\equiv {\varvec{\Theta }}-\mathbf{2}.1 - \frac{1}{9}{\mathcal {A}}(\mathbf{p} _{{\varvec{\Theta }}}+\mathbf{q} _{{\varvec{\Theta }}}), \end{aligned}$$
(23)

where \({\varvec{\Theta }}= (\Theta _{1},\Theta _{2},\dots ,\Theta _{8})^\mathrm{{T}}\),   \(\mathbf{2}.1 =(2.1,2.1,\dots ,2.1)^\mathrm{{T}}\),  \({\mathcal {A}}=(a_{rs})_{r,s=1}^{8}\), \(\mathbf{p} _{{\varvec{\Theta }}}=(\Theta _{1}^{2},\Theta _{2}^{2},\dots ,\Theta _{8}^{2})^\mathrm{{T}},\) \(\mathbf{q} _{{\varvec{\Theta }}}=(|\Theta _{1}|,|\Theta _{2}|,\dots ,|\Theta _{8}|)^\mathrm{{T}}.\)

The first-order divided difference of \({\mathfrak {B}}\) is \([\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}]=I-\frac{1}{9}(U+V)\), where \(U=(u_{rs})_{r,s=1}^{8}\) with \(u_{rs}=a_{rs}(e_{s}+f_{s})\) and \(V=(v_{rs})_{r,s=1}^{8}\) with \(v_{rs}=a_{rs} \frac{|e_{s}|-|f_{s}|}{e_{s}-f_{s}}\). Then, using the sup-norm, we have

$$\begin{aligned} \Vert [\Theta ,\theta ;{\mathfrak {B}}]-[e,f;{\mathfrak {B}}]\Vert \le q+\omega (\Vert \Theta -e\Vert ,\Vert \theta -f\Vert ), \end{aligned}$$

with \(q=\frac{2}{9}\Vert {\mathcal {A}}\Vert \) and \(\omega (p_{1},p_{2})=\frac{1}{9}\Vert {\mathcal {A}}\Vert (p_{1}+p_{2})\).

We choose the starting points \({\varvec{\Theta }}_{-1}=(3.4,3.4,\dots ,3.4)^\mathrm{{T}}\) and \({\varvec{\Theta }}_{0}=(1.9,1.9,\dots ,1.9)^\mathrm{{T}}\).
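Combining the sketches above, the whole experiment can be set up as follows (kernel_matrix and kurchatov_three_step are the hypothetical helpers from the previous sketches):

```python
import numpy as np

A, y = kernel_matrix(0.0, 1.0, 8)      # Gauss-Legendre discretization of the kernel

def B(t):
    """System (23): B(Theta) = Theta - 2.1 - (1/9) A (Theta^2 + |Theta|)."""
    return t - 2.1 - A @ (t**2 + np.abs(t)) / 9.0

theta_m1 = np.full(8, 3.4)             # Theta_{-1}
theta_0 = np.full(8, 1.9)              # Theta_0
sol = kurchatov_three_step(B, theta_m1, theta_0, tol=1e-15)
print(sol)                             # approximate solution of (23), cf. Table 3
```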

Using conditions \((P1)-(P5)\), we obtain \(\varphi =1.5\), \(\Vert {\mathcal {A}}\Vert _{\infty }=0.12355\dots ,~ \varpi =0.27564 \dots ,~ \phi =1.06985 \dots ,~ \psi =0.29490 \dots ,~q=0.02745\dots ,~\omega (p_{1},p_{2})=(0.01372\dots )(p_{1}+p_{2}).\)

So, for system (23), Eq. (9) gives \({\mathfrak {R}}=0.68693\dots \), so that \(w({\mathfrak {R}})=0.12388\dots <\frac{1}{2}\); another root is \({\mathfrak {R}}=5.41635 \dots \), so that \(w({\mathfrak {R}})=0.47120 \dots <\frac{1}{2}\). It can easily be seen that all the assumptions of Theorem 2.3 hold, and the existence–uniqueness ball of the solution is \(\overline{B({\varvec{\Theta }_{0}},{\mathfrak {R}})}\). Using method (7), the approximate numerical solution of system (23) is computed and summarized in Table 3. A comparison of the term-by-term errors is given in Table 4 using the tolerance \(\Vert {\varvec{\Theta }}_{\mathbf{m}}- \varvec{\Theta }_{\mathbf{m-1}}\Vert \) \(<10^{-15}\). We can see that method (7) converges rapidly compared with its competitors.

Table 3 Numerical approximate solution \({\varvec{\Theta }}^{*}\) of system (23)
Table 4 Errors (\(\Vert {\varvec{\Theta }}^{*}-{\varvec{\Theta }}_{m}\Vert _\infty \)) of the system (23)

Example 4.4

Consider the problem of solving the following system of nonlinear equations:

$$\begin{aligned} {\mathfrak {B}}({\varvec{\Theta }})\equiv {\varvec{\Theta }}-\mathbf{2}.1 -\frac{1}{7} {\mathcal {A}}(\mathbf{q} _{{\varvec{\Theta }}}),~~~{\mathfrak {B}}:{\mathbb {R}}^{8}\rightarrow {\mathbb {R}}^{8}, \end{aligned}$$
(24)

where \({\varvec{\Theta }}\) = \((\Theta _{1},\Theta _{2},\dots ,\Theta _{8})^\mathrm{{T}}\),   \(\mathbf{2}.1 \) = \((2.1,2.1,\dots ,2.1)^\mathrm{{T}},~~{\mathcal {A}}=(a_{rs})_{r,s=1}^{8},~~\mathbf{q} _{\Theta }=(\Theta _{1}^{2},\Theta _{2}^{2},\dots ,\Theta _{8}^{2})^\mathrm{{T}}.\)

Using (3), we get \([\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}]=I-\frac{1}{7}F\), where \(F=(f_{rs})_{r,s=1}^{8}\) with \(f_{rs}=a_{rs}(e_{s}+f_{s})\).

Now, we choose the initial iterates \(\mathbf{t} _{-1}=(3.8,3.8,\dots ,3.8)^\mathrm{{T}}\) and \(\mathbf{t} _{0}=(2.6,2.6,\dots ,2.8)^\mathrm{{T}}\); from conditions \((P1)-(P5)\), we obtain

$$\begin{aligned} \varphi =1.2,~\Vert {\mathcal {A}}\Vert _{\infty }=0.12355\dots ,~ \varpi =0.49041 \dots ,~ \phi =1.09963 \dots ,~ \psi =0.53927 \dots \end{aligned}$$

Also, \(q=0\) and \(\omega (p_{1},p_{2})=(0.01765\dots )(p_{1}+p_{2})\). From Eq. (9), we get the radius \({\mathfrak {R}}=1.38437\dots \), so that \(w({\mathfrak {R}})=0.18093\dots <\frac{1}{2}\). We also find another radius \({\mathfrak {R}}=3.82687 \dots \), so that \(w({\mathfrak {R}})=0.41798 \dots <\frac{1}{2}\).

This shows that the convergence conditions of Theorem 2.3 hold, and the existence–uniqueness ball is \(\overline{B({ {\varvec{\Theta }}_{\mathbf{0}}},{\mathfrak {R}})}\). Now, using method (7), we obtain the solution of system (24), which is given in Table 5. A comparison of the term-by-term errors is given in Table 6 using the tolerance \(\Vert \varvec{\Theta }_{\mathbf{m}}-\varvec{\Theta }_{\mathbf{m-1}}\Vert \) \(<10^{-15}\). We can see that method (7) converges rapidly compared with its competitors.

Table 5 Numerical approximate solution \({\varvec{\Theta }}^{*}\) of system (24)
Table 6 Errors (\(\Vert {\varvec{\Theta }}^{*}-{\varvec{\Theta }}_{m}\Vert _{\infty }\)) of the system (24)

5 Conclusions

This article was devoted to the study of the semilocal convergence of a three-step Kurchatov-type method under an \(\omega \)-condition on the first-order divided difference, using recurrence relations. We also analyzed the computational efficiency index and important relations for the Kurchatov-type method. Numerical examples have been computed for higher-dimensional problems, including Hammerstein-type integral equations, for differentiable and non-differentiable operators. We have also compared the method with the single-step Kurchatov method, the two-step Kurchatov method and the two-step secant method, and concluded that the present method converges more rapidly than the other three. In the semilocal convergence analysis, theorems are given for existence–uniqueness balls. Moreover, the domain of the parameters discussed in [28] is given to show the guaranteed convergence of the method. The Kurchatov method gives a more extended convergence region, better information on the distances involved, as well as uniqueness. The method does not require the evaluation of derivatives, as opposed to Newton's method or other well-known third-order methods such as those of Halley or Chebyshev, which makes it useful when the functions are not regular or the evaluation of their derivatives is costly. The Kurchatov method is a good derivative-free iterative method with memory, with the same order of convergence and number of new functional evaluations per iteration as Newton's procedure. The results are tested on numerical examples to show that the three-step Kurchatov method provides better results than other methods using similar information.