Abstract
The purpose of this paper is to establish the semilocal convergence analysis of a three-step Kurchatov method under weaker conditions in Banach spaces. We construct the recurrence relations under the assumption that the involved first-order divided difference operators satisfy the \(\omega \) condition. Theorems are given for the existence-uniqueness balls enclosing the unique solution. The application of the iterative method is shown by solving nonlinear systems of equations and nonlinear Hammerstein-type integral equations, which illustrates the theoretical development of this study.
1 Introduction
In this article, we consider the problem of approximating a unique solution \(\Theta ^{*}\) of a nonlinear equation
where \({\mathfrak {B}}:{\mathcal {C}} \subset {\mathbb {X}} \rightarrow {\mathbb {Y}}\) is a continuous but non-differentiable nonlinear operator defined on a nonempty open convex subset \({\mathcal {C}}\) of a Banach space \({\mathbb {X}}\) with values in a Banach space \({\mathbb {Y}}\). It is well known that finding exact solutions of equations of type (1) is difficult, and iterative methods are usually used to approximate them. Semilocal convergence analysis uses information around an initial point to give criteria ensuring the convergence of the iterative method, while local analysis uses information around a solution to estimate the radii of convergence balls. A recurrence relation is an equation that defines a sequence by a rule giving the next term as a function of the previous terms. Newton's method and Newton-type methods [18, 31,32,33] are the iterative methods most used for approximating the solution. All such methods require the differentiability of the involved operator, which is a drawback for certain problems where the operator is non-differentiable. To overcome this, many researchers [6, 18, 22, 26] have established and studied iterative methods using divided differences. One such method, which does not require differentiability of the involved operator, is Kurchatov's method, proposed by Kurchatov [37] and given by:
The operator \([e,f;{\mathfrak {B}}] \) is a first-order divided difference operator of \({\mathfrak {B}}\) at the points e and \(f~(e\ne f)\) and satisfies \([e,f;{\mathfrak {B}}](e-f)={\mathfrak {B}}(e)-{\mathfrak {B}}(f)\). In the finite-dimensional vector space \({\mathbb {R}}^{m}\), the divided difference of first order [35] is defined as \([\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}]=([\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}]_{ij})_{i,j=1}^{m}\in \mathcal {L} ({\mathbb {R}}^{m},{\mathbb {R}}^{m}), \) where \(\mathcal {L} ({\mathbb {R}}^{m},{\mathbb {R}}^{m})\) denotes the space of all bounded linear operators from \({\mathbb {R}}^{m}\) into \({\mathbb {R}}^{m}\) with \(\mathbf{e} \)=\((e_{1},e_{2},\dots ,e_{m})^\mathrm{{T}}\), \(\mathbf{f} \)=\((f_{1},f_{2},\dots ,f_{m})^\mathrm{{T}}\) and
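For intuition, the componentwise definition can be implemented directly. The sketch below is our illustration (not code from the paper) and assumes the standard convention in which the j-th column mixes the first j components of \(\mathbf{e}\) with the last \(m-j\) components of \(\mathbf{f}\); by telescoping, it satisfies the secant identity \([\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}](\mathbf{e} -\mathbf{f} )={\mathfrak {B}}(\mathbf{e} )-{\mathfrak {B}}(\mathbf{f} )\).

```python
import numpy as np

def divided_difference(B, e, f):
    """First-order divided difference matrix [e, f; B] for B: R^m -> R^m.

    Column j uses the componentwise formula
      ( B(e_1,..,e_j, f_{j+1},..,f_m) - B(e_1,..,e_{j-1}, f_j,..,f_m) ) / (e_j - f_j),
    so the columns telescope and [e, f; B](e - f) = B(e) - B(f).
    """
    e, f = np.asarray(e, dtype=float), np.asarray(f, dtype=float)
    m = e.size
    D = np.empty((m, m))
    for j in range(m):
        upper = np.concatenate([e[:j + 1], f[j + 1:]])  # e through index j
        lower = np.concatenate([e[:j], f[j:]])          # e through index j-1
        D[:, j] = (B(upper) - B(lower)) / (e[j] - f[j])
    return D
```

Summing the columns weighted by \(e_{j}-f_{j}\) telescopes to \({\mathfrak {B}}(\mathbf{e})-{\mathfrak {B}}(\mathbf{f})\), which is the defining secant identity.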
To obtain the solution more quickly, some authors [6, 19, 28, 29] have also considered two-step derivative-free iterative methods. One such method is the two-step Kurchatov method [28, 29], defined with the help of divided differences:
Kumar and Parida [29] gave the semilocal convergence of this method under Lipschitz continuity conditions on the first-order divided difference operator by using recurrence relations. To generalize the convergence analysis, some authors [25, 26] have also considered an \(\omega \)-type condition on the first-order divided difference operator of \({\mathfrak {B}}\):
Recall that Hernández and Rubio [25, 26] provided sufficient convergence conditions for the secant method by considering conditions (5) under recurrence relations. Ren [36] also gave a new semilocal convergence analysis of the secant method under condition (5) with the help of recurrence relations. Argyros et al. [8], Argyros and Ren [7], Hernández and Rubio [24, 27] and Ezquerro et al. [20] also studied the semilocal convergence of secant-like methods under the same continuity condition. Amat and Busquier [5] introduced a two-step Steffensen method and analyzed its semilocal convergence with the help of recurrence relations under the \(\omega \)-continuity condition (5). Ezquerro et al. [21] also worked on the semilocal convergence of the two-step Kurchatov method (4) with the help of recurrence relations under condition (5).
Recently, Argyros et al. [8] introduced the Chebyshev–Secant-type method and gave its semilocal convergence by the use of recurrence relations under Lipschitz-type conditions on the first-order divided difference operator. Argyros et al. [9] also established the semilocal convergence of the same method under the \(\omega \)-continuity condition (5) with the help of recurrence relations. A third-order three-step Steffensen method was proposed by Ezquerro et al. [23]:
The authors established the semilocal convergence of the method using recurrence relations under a Lipschitz continuity condition. In all the above-mentioned methods, the divided difference operator was frozen.
In [28], the authors discussed the semilocal convergence analysis of the three-step Kurchatov method, which is defined by
The authors also obtained better results for the order of convergence, optimal computational efficiency and efficiency index of method (7) in comparison with methods (2) and (4). One important benefit of this method is that only one divided difference of the operator is evaluated in each step, while the order of convergence of the method is 3.
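Equation (7) itself is not reproduced in this excerpt, but Lemma 2.1 below fixes its structure: each full step freezes one divided difference \(T_{m}=[\Theta _{m-1},2\Theta _{m}-\Theta _{m-1};{\mathfrak {B}}]\) and applies three corrections with it. A minimal sketch under that reading (the exact formulation is in [28]):

```python
import numpy as np

def kurchatov3(B, dd, x_prev, x, tol=1e-10, maxit=50):
    """Three-step Kurchatov-type iteration (a sketch consistent with Lemma 2.1).

    One frozen divided difference T_m = dd(x_{m-1}, 2 x_m - x_{m-1}) is reused
    for three corrections, so each full step solves three linear systems with
    the same matrix while evaluating only one new divided difference.
    """
    x_prev, x = np.asarray(x_prev, dtype=float), np.asarray(x, dtype=float)
    for _ in range(maxit):
        if np.linalg.norm(B(x), np.inf) < tol:
            break
        T = dd(x_prev, 2 * x - x_prev)               # frozen operator T_m
        y = x - np.linalg.solve(T, B(x))             # θ_m
        z = y - np.linalg.solve(T, B(y))             # ϑ_m
        x_prev, x = x, z - np.linalg.solve(T, B(z))  # Θ_{m+1}
    return x
```

Because the matrix is frozen, an LU factorization could be reused across the three solves; plain `solve` calls are kept here for clarity.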
Ezquerro et al. [21] also worked on the semilocal convergence of the two-step Kurchatov method (4) with the help of recurrence relations under condition (5). They established the R-order of convergence together with the efficiency index and the computational efficiency index. Ren [36] also gave a new semilocal convergence analysis under condition (5) with the help of recurrence relations. Dennis [15], Potra [34], Argyros [7, 8] and Ezquerro and Hernández [17] also studied semilocal convergence for that method under the same continuity condition. In fact, if \(\Theta _{1},\Theta _{2}\in {\mathcal {C}}\) and \(N,M\ge 0\) are such that \(\omega (\Theta _{1},\Theta _{2})=N+M(\Theta _{1}+\Theta _{2})\), we obtain the Lipschitz continuous case.
Recently, Arqub and Shawagfeh [13] analyzed an efficient reproducing kernel algorithm for the numerical solution of equations in porous media with Dirichlet boundary conditions. Arqub [11] worked on the numerical solution of the time-fractional Schrödinger equation subject to a given constraint condition, based on the generalized Taylor series formula in the Caputo sense. The author also discussed a reproducing kernel algorithm for obtaining the numerical solutions of fractional-order systems of Dirichlet function type in [12]. In [10], the author gave numerical solutions of systems of first-order, two-point BVPs based on the reproducing kernel algorithm.
Ahmad et al. [2] found the numerical solution of three-term time-fractional-order multi-dimensional diffusion equations by using an efficient local meshless method. Kumar et al. [30] worked on fourth-order derivative-free iterative methods for finding a multiple root and showed that the new technique is a good competitor to existing optimal fourth-order Newton-like techniques. Bazighifan and Cesarano [14], using the Riccati transformation technique and the integral averaging method, obtained conditions ensuring oscillation of solutions of fourth-order neutral differential equations. Ahmad et al. [3] solved initial and boundary value problems by the use of variational iteration algorithm-I with an auxiliary parameter and discussed the effectiveness and utility of the proposed technique. Ahmad et al. [1] proposed two new modified variational iteration algorithms for computing numerical solutions of coupled Burgers' equations, and evaluated the error norms, accuracy and speed of convergence on numerical problems. Elabbasy et al. [16] studied the asymptotic behavior of a class of higher-order delay differential equations with a p-Laplacian-like operator.
The structure of this paper is as follows: in Sect. 2, we give a new semilocal convergence analysis of the three-step Kurchatov method with the help of new-type recurrence relations. In Sect. 3, we discuss the optimal computational efficiency of the Kurchatov-type method with a comparison graph. Numerical examples for differentiable and non-differentiable operators, including higher-dimensional problems, are given in Sect. 4. Finally, conclusions are given in Sect. 5.
Throughout the paper, we denote \(\overline{B(\Theta _0,{\mathfrak {R}})}=\{\theta \in {\mathcal {C}}:\Vert \theta -\Theta _{0}\Vert \le {\mathfrak {R}}\}\) and \(B(\Theta _0,{\mathfrak {R}})=\{\theta \in {\mathcal {C}}:\Vert \theta -\Theta _{0}\Vert < {\mathfrak {R}}\}\), where \({\mathfrak {R}}\) is the least positive real root defined below and \(\overline{B(\Theta _0,{\mathfrak {R}})}, B(\Theta _0,{\mathfrak {R}})\subseteq {\mathcal {C}}\).
2 Semilocal convergence of Kurchatov three-step method
In this section, we analyze the semilocal convergence of method (7) with the help of recurrence relations. Let us assume that the operator \({\mathfrak {B}}\) satisfies the following condition P:
-
(P5)
Let \(w(\varTheta )=\phi \big (q+\omega (2\varTheta +\varphi ,3\varTheta +\varphi )\big )\) and \( \psi =\phi \varpi \), and let \({\mathfrak {R}}\) be the least positive root of the equation
$$\begin{aligned} \varTheta =\frac{2\big (1-w(\varTheta )\big )\psi }{\big (1-2w(\varTheta )\big )}, \end{aligned}$$ (9)
with \(w({\mathfrak {R}})<\frac{1}{2}\).
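Once \(\phi \), \(\varphi \), \(\varpi \) (hence \(\psi =\phi \varpi \)), q and \(\omega \) are known for a concrete problem, \({\mathfrak {R}}\) can be computed by any scalar root finder. A sketch via bisection, using the parameter values reported later in Example 4.3 purely as illustrative inputs:

```python
# Illustrative values taken from Example 4.3 of the text
phi, varphi, varpi = 1.06985, 1.5, 0.27564
psi = phi * varpi
q, c = 0.02745, 0.01372
omega = lambda p1, p2: c * (p1 + p2)

def w(t):
    # w(Θ) = φ ( q + ω(2Θ + varphi, 3Θ + varphi) )
    return phi * (q + omega(2 * t + varphi, 3 * t + varphi))

def g(t):
    # g(t) = 0  <=>  t = 2 (1 - w(t)) ψ / (1 - 2 w(t)), i.e. Eq. (9)
    return t - 2 * (1 - w(t)) * psi / (1 - 2 * w(t))

def least_positive_root(a=0.0, b=2.0, tol=1e-12):
    """Bisection; assumes g(a) < 0 < g(b) and 1 - 2 w(t) > 0 on [a, b]."""
    while b - a > tol:
        m = 0.5 * (a + b)
        a, b = (m, b) if g(m) < 0 else (a, m)
    return 0.5 * (a + b)

R = least_positive_root()
```

With these inputs the computed root matches the value \({\mathfrak {R}}\approx 0.687\) reported in Example 4.3, up to the rounding of the quoted constants.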
Here, we provide a lemma which plays an important role in deriving the recurrence relations for method (7).
Lemma 2.1
Let \(\{\Theta _{m}\}\), \(\{\theta _{m}\}\) and \(\{\vartheta _{m}\}\) be the sequences generated by method (7) with distinct points \(\Theta _{m},\theta _{m-1},\theta _{m},\vartheta _{m-1},\vartheta _{m}\in {\mathcal {C}}\). Suppose \(T_{m}=[\Theta _{m-1},2\Theta _{m}-\Theta _{m-1};{\mathfrak {B}}]\). Then
-
(i)
\({\mathfrak {B}}(\theta _{m})=\big ([\theta _{m},\Theta _{m};{\mathfrak {B}}] -T_{m}\big )(\theta _{m}-\Theta _{m})\),
-
(ii)
\({\mathfrak {B}}(\vartheta _{m})=\big ([\vartheta _{m},\theta _{m};{\mathfrak {B}}] -T_{m}\big )(\vartheta _{m}-\theta _{m})\),
-
(iii)
\({\mathfrak {B}}(\Theta _{m})=\big ([\Theta _{m},\vartheta _{m-1};{\mathfrak {B}}] -T_{m-1}\big )(\Theta _{m}-\vartheta _{m-1})\).
Proof
The proof of the above lemma is given in [28]. \(\square \)
Lemma 2.2
Suppose the operator \({\mathfrak {B}}\) satisfies conditions (P1)–(P5) with distinct points \(\Theta _{m},\theta _{m},\theta _{0},\Theta _{0},\vartheta _{m}\in {\mathcal {C}}\). Then the following recurrence relations hold for \(m\ge 1\):
-
(i)
\(T_{m}^{-1}\) exists and \(\Vert T_{m}^{-1}\Vert \le \displaystyle \frac{\phi }{1-w({\mathfrak {R}})}\),
-
(ii)
\(\Vert \theta _{m}-\Theta _{m}\Vert <\displaystyle \left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3m}\Vert \theta _{0}-\Theta _{0}\Vert \),
-
(iii)
\(\Vert \theta _{m}-\Theta _{0}\Vert <\displaystyle \sum _{j=0}^{3m}\left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{j}\Vert \theta _{0}-\Theta _{0}\Vert \),
-
(iv)
\(\Vert {\mathfrak {B}}(\theta _{m})\Vert <\big (q+\omega (2{\mathfrak {R}},\psi )\big )\Vert \theta _{m}-\Theta _{m}\Vert \),
-
(v)
\(\Vert \vartheta _{m}-\theta _{m}\Vert < \displaystyle \left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3m+1}\Vert \theta _{0}-\Theta _{0}\Vert \),
-
(vi)
\(\Vert \vartheta _{m}-\Theta _{m}\Vert <\left( \left( \displaystyle \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3m+1}+\displaystyle \left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{3m}\right) \Vert \theta _{0}-\Theta _{0}\Vert \),
-
(vii)
\(\Vert \vartheta _{m}-\Theta _{0}\Vert <\left( \displaystyle \sum _{j=0}^{3m+1}\left( \frac{w({\mathfrak {R}})}{1-w({\mathfrak {R}})}\right) ^{j}\right) \Vert \theta _{0}-\Theta _{0}\Vert \),
-
(viii)
\(\Vert {\mathfrak {B}}(\vartheta _{m})\Vert <\big (q+\omega (2{\mathfrak {R}},2\psi )\big )\Vert \vartheta _{m}-\theta _{m}\Vert \).
Proof
This lemma is proved using mathematical induction. It follows from initial hypotheses (P1) and from (P5) that \(\theta _{0}, \vartheta _{0}\) and \(\Theta _{1}\) are well defined because by our assumption \(T_{0}^{-1}\) exists. We can easily conclude that \(\psi <{\mathfrak {R}}\). Also,
from (P2) and (P3) and then using Lemma 2.1 and (P4) we get
So,
and
Using the above Lemma 2.1 and (P4), we get
and
Again, we have
then by use of Eq. (9)
So, \(\Theta _{1}, ~2\Theta _{1}-\Theta _{0}\in B(\Theta _{0}, {\mathfrak {R}})\). We see that (i)–(viii) are true for \(m=1\). Now, assuming they are true for \(m\le n-1\), we have to prove that (i)–(viii) are true for \(m=n\).
Since \(\Vert I-T_{0}^{-1}T_{n}\Vert \le \Vert T_{0}^{-1}\Vert ~\Vert T_{0}-T_{n}\Vert<\phi \Big ( q+\omega \big ({\mathfrak {R}}+\varphi ,3{\mathfrak {R}}+\varphi \big )\Big )<w({\mathfrak {R}})<1,\) by using Banach lemma [35], \(T_{n}^{-1}\) exists and
Hence, (i) is true for \(m=n.\) Now, we have to prove (ii), (iii) and (iv). Using Eq. (10), and assumptions (P2), (P3) and (P5), we obtain
and using (P5)
Again using Lemma 2.1 and (P4),
Using Eq. (10) and (P5), we have
so, (v) is proved. Using (P5), we have
using Lemma 2.1, (P4) and (P5)
Hence, (vi), (vii) and (viii) hold. Therefore, (i)–(viii) are true for \(m=n\). Hence, all relations are true for every natural number m by mathematical induction, and consequently the lemma is proved. \(\square \)
Also, we obtain some important results from the above lemma:
and
Using Lemma 2.2, we have
and
Now, we give the convergence theorem for iterative method (7).
Theorem 2.3
Let \({\mathfrak {B}}:{\mathcal {C}} \subset {\mathbb {X}} \rightarrow {\mathbb {Y}}\) be a nonlinear operator defined on a nonempty open convex domain \({\mathcal {C}}\) of a Banach space \({\mathbb {X}}\) with values in \({\mathbb {Y}}\). Suppose that \({\mathfrak {B}}\) satisfies the condition P given in (8) and that \(B(\Theta _{0}, {\mathfrak {R}})\), \(B(\Theta _{-1}, {\mathfrak {R}})\subset {\mathcal {C}}\). Then the iterates of method (7), starting at \(\Theta _{0}\) and \(\Theta _{-1}\), are well defined, belong to \(\overline{B(\Theta _{0}, {\mathfrak {R}})}\) and converge to a unique solution \(\Theta ^{*}\) of \({\mathfrak {B}}(\Theta )=0\) in \(\overline{B(\Theta _{0}, {\mathfrak {R}})}\).
Proof
The sequence \(\{\Theta _{n}\}\) converges if it is a Cauchy sequence, so we have to show that \(\{\Theta _{n}\}\) is a Cauchy sequence. Now, by using Lemma 2.2, we have
Therefore, \(\{\Theta _{m}\}\) is a Cauchy sequence and, consequently, it converges to \(\Theta ^*\in \overline{B(\Theta _{0},{\mathfrak {R}})}\). Using Lemma 2.2(vi), we easily see that \(\{\vartheta _{m}\}\) converges to \(\Theta ^*\). Also, we have
so \(\Vert \Theta _{m}-\vartheta _{m-1}\Vert \rightarrow 0\) as \(m\rightarrow \infty \). Hence the continuity of \({\mathfrak {B}}\) implies \({\mathfrak {B}}(\Theta ^{*})=0\). Now, we show the uniqueness of \(\Theta ^{*}\). Assume that \(\Theta ^{**}\) is another solution of \({\mathfrak {B}}(\Theta )=0\) in \(\overline{B(\Theta _0,{\mathfrak {R}})}\). Then, applying the properties of the divided difference,
Also, \(\Vert T_{0}-[\Theta ^{**},\Theta ^{*};{\mathfrak {B}}]\Vert =\Vert [\Theta _{-1},2\Theta _{0}-\Theta _{-1};{\mathfrak {B}}]-[\Theta ^{**},\Theta ^{*};{\mathfrak {B}}]\Vert \le q+\omega \big ({\mathfrak {R}}+\varphi ,{\mathfrak {R}}+\varphi \big )\) and \(\Vert I-T_{0}^{-1}[\Theta ^{**},\Theta ^{*};{\mathfrak {B}}]\Vert \le \Vert T_{0}^{-1}\Vert ~\Vert T_{0}-[\Theta ^{**},\Theta ^{*};{\mathfrak {B}}]\Vert \le \phi \Big (q+\omega \big ({\mathfrak {R}}+\varphi ,{\mathfrak {R}}+\varphi \big )\Big )<w({\mathfrak {R}})<1,\) so, using the Banach lemma, \([\Theta ^{**},\Theta ^{*};{\mathfrak {B}}]^{-1}\) exists. Hence, \(\Theta ^{**}=\Theta ^{*}\) and the proof of uniqueness is complete. \(\square \)
3 Optimal computational efficiency
We know that the efficiency of an iterative method cannot be measured by the operational cost alone; the number of function evaluations is also needed. Traub [38] defines the index \(\varrho ^{\frac{1}{v}}\), where \(\varrho \) is the local order of convergence of the method and v represents the number of function evaluations. To compare the efficiencies of the three iterative methods (2), (4) and (7), we use the following computational efficiency index (CEI):
where \(\varrho \) is the Q-order of convergence and \({\mathcal {C}}(\mu ,m)\) the computational cost of an iterative method. We denote the CEI of the above-mentioned methods, respectively, by
where \(\varrho _{i}\) is the Q-order of convergence and \({\mathcal {C}}_{i}(\mu ,m)\) the computational cost of the corresponding method. The computational cost is given by
where \(a_{i}(m)\) is the number of scalar function evaluations, \(p_{i}(m)\) is the number of products per iteration and \(\mu \) is the ratio between products and divisions and evaluations of functions, which is required to express \({\mathcal {C}}_{i}(\mu ,m)\) in terms of products and divisions.
We observe that method (2) requires the m functions \({\mathfrak {B}}_{i},~i=1,2,\dots ,m\), and the \(m^{2}\) function evaluations in the first-order divided difference matrix [35] given by \([\mathbf{p} ,\mathbf{q} ;{\mathfrak {B}}]=([\mathbf{p} ,\mathbf{q} ;{\mathfrak {B}}]_{ij})_{i,j=1}^{m}\in \mathcal {L} ({\mathbb {R}}^{m},{\mathbb {R}}^{m}), \) with
\(1\le i, j\le m\), \(\mathbf{p} =(p_{1},p_{2},\dots ,p_{m})^\mathrm{{T}}\) and \(\mathbf{q} =(q_{1},q_{2},\dots ,q_{m})^\mathrm{{T}}\), to be evaluated per iteration, so the total number of function evaluations is \(m^{2}+m\), i.e., \(a_{1}(m)=m^{2}+m\). Note that method (2) requires \(m^{2}\) divisions to compute \([\Theta _{n-1},2\Theta _{n}-\Theta _{n-1};{\mathfrak {B}}]\), \(\frac{m^{3}-m}{3}\) products and divisions in the LU decomposition and \(m^{2}\) products and divisions for solving two triangular systems. Therefore, \(p_{1}(m)=\frac{m}{3}(m^{2}+6m-1)\). For method (4), it is easy to obtain \(a_{2}(m)=a_{1}(m)+m=m^{2}+2m\), because a new evaluation of the function \({\mathfrak {B}}\) is needed. The linear systems to solve share the same matrix: the LU decomposition is already available and only two more triangular linear systems have to be solved, so \(p_{2}({m})=p_{1}(m)+m^{2}=\frac{m}{3}(m^{2}+9m-1)\). Lastly, for method (7) we obtain \(a_{3}(m)=2a_{1}(m)=2(m^{2}+m)\), because a new divided difference and a new evaluation of \({\mathfrak {B}}\) are needed, and \(p_{3}(m)=p_{2}(m)+m^{2}=\frac{m}{3}(m^{2}+12m-1)\), because two more triangular linear systems have to be solved. Now, we summarize the following costs:
We can compare the corresponding \(\mathrm{{CEI}}_{i}(\mu ,m)\) using the following expressions
and we can write the theorem:
Theorem 3.1
We have
-
(1)
\(\mathrm{{CEI}}_{2}>\mathrm{{CEI}}_{1}\) for \(m\ge 3\) and for \(m=2\) with \(\mu >3.024\dots \),
-
(2)
\(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{1}\) for \(m\ge 5\) and for \(m=4\) with \(\mu >0.427\dots \),
-
(3)
\(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{2}\) for \(m\ge 9\) and for \(m=8\) with \(\mu >0.649\dots \),
-
(4)
\(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{2}>\mathrm{{CEI}}_{1}\) for \(m\ge 9\).
Proof
(1) For \(m=2\), we assume that \(\mathrm{{CEI}}_{2}>\mathrm{{CEI}}_{1}\). Then, to calculate the threshold value of \(\mu \), we take logarithms on both sides and obtain
or,
or \(\mu >3.024\dots \) for \(m=2\); also \(\mathrm{{CEI}}_{2}<\mathrm{{CEI}}_{1}\) for \(0<\mu \le 3.024\dots \) Furthermore, for \(m\ge 3\) the threshold value of \(\mu \) obtained from (14) is always negative, while \(\mu \) itself is always positive. Hence, in this case, \(\mathrm{{CEI}}_{2}>\mathrm{{CEI}}_{1}\) is always true for \(m\ge 3\). We can see from Fig. 1 that the computational efficiency index of method (4) is larger than that of method (2) for \(m\ge 3\), and for \(m=2\) with \(\mu >3.024\dots \)
(2) For \(m=4\), we assume that \(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{1}\); then we have to calculate the threshold value of \(\mu \). Taking logarithms on both sides, we have
Again,
Now, putting \(m=4\), we have \(\mu >0.427\dots \), and also \(\mathrm{{CEI}}_{3}<\mathrm{{CEI}}_{1}\) for \(0<\mu \le 0.427\dots \) Furthermore, for \(m\ge 5\) the threshold value of \(\mu \) obtained from (15) is negative, while \(\mu \) itself is always positive. Hence, in this case, \(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{1}\) is always true for \(m\ge 5\). We can see from Fig. 2 that the computational efficiency index of method (7) is larger than that of method (2) for \(m\ge 5\), and for \(m=4\) with \(\mu >0.427\dots \)
(3) For \(m=8\), we assume that \(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{2}\); then we have to calculate the threshold value of \(\mu \). Taking logarithms on both sides, we have
Again,
Now, putting \(m=8\), we have \(\mu >0.649\dots \), and also \(\mathrm{{CEI}}_{3}<\mathrm{{CEI}}_{2}\) for \(0<\mu \le 0.649\dots \) Furthermore, for \(m\ge 9\) the threshold value of \(\mu \) obtained from (16) is negative, while \(\mu \) itself is always positive. Hence, in this case, \(\mathrm{{CEI}}_{3}>\mathrm{{CEI}}_{2}\) is always true for \(m\ge 9\). We can see from Fig. 3 that the computational efficiency index of method (7) is larger than that of method (4) for \(m\ge 9\), and for \(m=8\) with \(\mu >0.649\dots \) Also, we can easily see from Fig. 4 that the computational efficiency index of method (7) is larger than those of methods (2) and (4) for \(m\ge 9\). Here red lines are used for method (2), green lines for method (4) and blue lines for method (7).
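The cost counts from this section can be checked programmatically. In the sketch below, only the formulas \(a_{i}(m)\) and \(p_{i}(m)\) are taken from the text; the orders \(\varrho _{i}\) are left as inputs to `cei` (any numeric order passed in is a placeholder), since \(\mathrm{{CEI}}_{i}=\varrho _{i}^{1/{\mathcal {C}}_{i}(\mu ,m)}\) depends on them.

```python
def a1(m): return m * m + m                      # function evaluations, method (2)
def a2(m): return m * m + 2 * m                  # one extra evaluation of B
def a3(m): return 2 * (m * m + m)                # extra divided difference + evaluation
def p1(m): return m * (m * m + 6 * m - 1) // 3   # products/divisions, method (2)
def p2(m): return m * (m * m + 9 * m - 1) // 3   # + m^2: two more triangular solves
def p3(m): return m * (m * m + 12 * m - 1) // 3  # + m^2 again, method (7)

def cost(a, p, mu, m):
    # C_i(mu, m) = mu * a_i(m) + p_i(m): evaluations weighted by mu, plus products
    return mu * a(m) + p(m)

def cei(rho, a, p, mu, m):
    # CEI_i = rho_i ** (1 / C_i(mu, m)); rho is supplied by the caller
    return rho ** (1.0 / cost(a, p, mu, m))
```

For fixed orders, sweeping \(\mu \) and m with these functions reproduces comparison plots in the style of Figs. 1–4.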
4 Numerical examples
In this section, we provide some numerical examples to demonstrate the semilocal convergence results obtained in this study.
4.1 Application to two-dimensional problems
Example 4.1
Consider the nonlinear system:
If we denote \({\varvec{\Theta }}=(\Theta _{1},\Theta _{2})\), \({\mathfrak {B}}_{1}({\varvec{\Theta }})=\Theta _{1}^{2}-\Theta _{2}+1+\frac{1}{9}|\Theta _{1}-1|\) and \({\mathfrak {B}}_{2}({\varvec{\Theta }})=\Theta _{1}+\Theta _{2}^{2}-7+\frac{1}{9}|\Theta _{2}|\), then this problem is equivalent to solving \({\mathfrak {B}}({\varvec{\Theta }})=0\), where \({\mathfrak {B}}:{\mathbb {R}}^{2}\rightarrow {\mathbb {R}}^{2}\) with \({\mathfrak {B}}=({\mathfrak {B}}_{1},{\mathfrak {B}}_{2})\). The divided difference operator \(\left[ \mathbf{e} , \mathbf{f} ; {\mathfrak {B}} \right] \) of \({\mathfrak {B}}\) can be represented as
where \(\mathbf{e} =(e_{1},e_{2})\) and \(\mathbf{f} =(f_{1},f_{2})\). Now, we take the max-norm as vector norm and matrix norm to obtain
Now, by use of condition (P4) from the semilocal convergence section applied to method (7), it follows that \(q=\frac{2}{9}\) and \(\omega (p_{1},p_{2})=p_{1}+p_{2}\).
Now, we apply method (7) to system (17), choosing initial iterates \(\mathbf{r}_\mathbf {-1} \)=(4.2, 6.1) and \(\mathbf{r}_\mathbf {0} \)=(5.0, 3.2). After two iterations, we get
and \(\varphi =0.00606\). Now, choosing \({\varvec{\Theta }}_{-1}=\mathbf{r}_{1}\) and \({\varvec{\Theta }}_{0}=\mathbf{r}_{2}\), we can easily check that conditions (P1)–(P5) are satisfied and
Thus, from Eq. (9), we get \({\mathfrak {R}}=0.00002 \dots \), so that \(w({\mathfrak {R}})=0.10731 \dots <\frac{1}{2}\). Another root is \({\mathfrak {R}}=0.17159 \dots \), which also satisfies \(w({\mathfrak {R}})=0.49997 \dots <\frac{1}{2}\).
Thus, the conditions of Theorem 2.3 are satisfied and a unique solution \({\varvec{\Theta }}^{*}\)= \((1.159360850 \dots , 2.361824342 \dots )\) of system (17) exists in \(\overline{B(\Theta _{0}, {\mathfrak {R}})}\). We calculate the absolute errors \(\Vert \Theta ^{*}-\Theta _{m}\Vert \) and compare them with those of its competitors, methods (2), (4) and the two-step secant method [4]. In Table 1, we compute and compare the absolute errors obtained by the aforementioned methods with tolerance \(10^{-8}\). It can easily be observed that method (7) converges more rapidly than the other methods.
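This example is easy to reproduce: the divided difference has the closed form \([\mathbf{e},\mathbf{f};{\mathfrak {B}}]_{11}=e_{1}+f_{1}+\frac{1}{9}\frac{|e_{1}-1|-|f_{1}-1|}{e_{1}-f_{1}}\) and \([\mathbf{e},\mathbf{f};{\mathfrak {B}}]_{22}=e_{2}+f_{2}+\frac{1}{9}\frac{|e_{2}|-|f_{2}|}{e_{2}-f_{2}}\), with constant off-diagonal entries \(-1\) and \(1\) (our reconstruction from the secant identity). A sketch that applies the frozen-operator three-step iteration from the same starting points:

```python
import numpy as np

def B(t):
    return np.array([t[0]**2 - t[1] + 1.0 + abs(t[0] - 1.0) / 9.0,
                     t[0] + t[1]**2 - 7.0 + abs(t[1]) / 9.0])

def dd(e, f):
    # Closed-form [e, f; B]; each entry is a secant slope of the matching term,
    # so [e, f; B](e - f) = B(e) - B(f) holds exactly
    return np.array([
        [e[0] + f[0] + (abs(e[0] - 1.0) - abs(f[0] - 1.0)) / (9.0 * (e[0] - f[0])), -1.0],
        [1.0, e[1] + f[1] + (abs(e[1]) - abs(f[1])) / (9.0 * (e[1] - f[1]))]])

def kurchatov3(x_prev, x, tol=1e-10, maxit=100):
    x_prev, x = np.asarray(x_prev, float), np.asarray(x, float)
    for _ in range(maxit):
        if np.linalg.norm(B(x), np.inf) < tol:
            break
        T = dd(x_prev, 2 * x - x_prev)   # frozen divided difference T_m
        y = x - np.linalg.solve(T, B(x))
        z = y - np.linalg.solve(T, B(y))
        x_prev, x = x, z - np.linalg.solve(T, B(z))
    return x

sol = kurchatov3([4.2, 6.1], [5.0, 3.2])
```

The computed limit agrees with the solution \({\varvec{\Theta }}^{*}\) quoted above.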
Example 4.2
Consider a differentiable system of nonlinear equations:
System (18) is equivalent to \({\mathfrak {B}}({\varvec{\Theta }})=0\), where \({\mathfrak {B}}:{\mathbb {R}}^{2}\rightarrow {\mathbb {R}}^{2}\), \({\mathfrak {B}}=({\mathfrak {B}}_{1},{\mathfrak {B}}_{2})\), \({\varvec{\Theta }}\)=\((\Theta _{1},\Theta _{2})\in {\mathbb {R}}^2\), \({\mathfrak {B}}_{1}({\varvec{\Theta }})=\Theta _{1}^{2}-2\Theta _{1}-\Theta _{2}-3\) and \({\mathfrak {B}}_{2}({\varvec{\Theta }})=\Theta _{2}^{2}-\Theta _{1}+2\Theta _{2}-\frac{3}{4}\). For \(\mathbf{e} =(e_1,e_2),\mathbf{f} =(f_1,f_2)\in {\mathbb {R}}^2\) such that \(e_1\ne f_1\) and \(e_2\ne f_2\), the first-order divided difference operator \([\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}]\in \mathcal {L} (X,Y)\) is given by
If we use max-norm, then we have
This gives us \(q=0\) and \(\omega (p_{1},p_{2})=p_{1}+p_{2}\).
Now, we apply method (7) to system (18), choosing initial iterates \(\mathbf{r}_\mathbf {-1} \)=(7.2, 6.2) and \(\mathbf{r}_\mathbf {0} \)=(11.4, 1.5). After two iterations, we get
and \(\varphi =0.03658\). Now, choosing \({\varvec{\Theta }}_{-1}=\mathbf{r}_{1}\) and \({\varvec{\Theta }}_{0}=\mathbf{r}_{2}\), we can easily check that conditions (P1)–(P5) from the semilocal convergence section are satisfied and
Thus, from Eq. (9), we have two different positive solutions \({\mathfrak {R}}=0.000013 \dots \) and \({\mathfrak {R}}^{*}=0.839972 \dots \), which satisfy \(w({\mathfrak {R}})=0.020877 \dots <\frac{1}{2}\) and \(w({\mathfrak {R}}^{*})=0.499995 \dots <\frac{1}{2}\).
Thus, the conditions of Theorem 2.3 are satisfied and a unique solution \({\varvec{\Theta }}^{*}\)= \((3.290205263 \dots , 1.245040147 \dots )\) of system (18) exists in \(\overline{B(\Theta _{0}, {\mathfrak {R}})}\). We calculate the absolute errors \(\Vert \Theta ^{*}-\Theta _{m}\Vert \) and compare them with those of its competitors, methods (2), (4) and the two-step secant method [4]. In Table 2, we compute and compare the absolute errors obtained by the aforementioned methods with tolerance \(10^{-8}\). It can easily be observed that method (7) converges more rapidly than the other methods.
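With the closed-form divided difference above, this example can also be reproduced directly; a sketch under the same frozen-operator reading of method (7):

```python
import numpy as np

def B(t):
    return np.array([t[0]**2 - 2 * t[0] - t[1] - 3.0,
                     t[1]**2 - t[0] + 2 * t[1] - 0.75])

def dd(e, f):
    # Closed-form [e, f; B]: diagonal secant slopes, constant off-diagonals
    return np.array([[e[0] + f[0] - 2.0, -1.0],
                     [-1.0, e[1] + f[1] + 2.0]])

def kurchatov3(x_prev, x, tol=1e-10, maxit=100):
    x_prev, x = np.asarray(x_prev, float), np.asarray(x, float)
    for _ in range(maxit):
        if np.linalg.norm(B(x), np.inf) < tol:
            break
        T = dd(x_prev, 2 * x - x_prev)   # frozen divided difference T_m
        y = x - np.linalg.solve(T, B(x))
        z = y - np.linalg.solve(T, B(y))
        x_prev, x = x, z - np.linalg.solve(T, B(z))
    return x

sol = kurchatov3([7.2, 6.2], [11.4, 1.5])
```

Starting from the same iterates as in the text, the computed limit agrees with the solution \({\varvec{\Theta }}^{*}\) quoted above.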
4.2 Application to higher-dimensional problems
Example 4.3
Consider
where \(-\infty<p<q<+\infty , ~j, ~S\) and T are given functions and \(\Theta \) is an unknown function to be determined. Solving (19) is equivalent to solving \({\mathfrak {B}}(\Theta )=0\), where \({\mathfrak {B}}:{\mathcal {C}}\subset {\mathcal {C}}[p,q]\rightarrow {\mathcal {C}}[p,q]\) and
Let S be the Green’s function defined in \([p,q]\times [p,q]\). We use the Gauss–Legendre formula to transform (20) into a finite-dimensional nonlinear system of equation that can be given by
where \(W_{k}\) and \(y_{k}\) denote the weights and nodes, respectively. Denoting \(\Theta (y_{k})\) and \(j(y_{k})\) by \(\Theta _{k}\) and \(j_{k}\), \(k=1,2,\dots ,n\), Eq. (20) can be written as the following system of nonlinear equations:
where \({\mathfrak {B}}:{\mathbb {R}}^{n}\rightarrow {\mathbb {R}}^{n},\) \({\varvec{\Theta }}\)=\((\Theta _{1},\Theta _{2},\dots ,\Theta _{n})^\mathrm{{T}}\), \({{\mathcal {Z}}}\)=\(\big (T(y_{1},\Theta _{1}),T(y_{2},\Theta _{2}),\dots ,T(y_{n},\Theta _{n})\big )^\mathrm{{T}}\), \({\mathcal {A}}=(a_{rs})_{r,s=1}^{n},\) \(\mathbf{j} =(j_{1},j_{2},\dots ,j_{n})^\mathrm{{T}}\).
The components of \({\mathfrak {B}}\) and \({\mathcal {A}}\) are determined by
and
Now, particularly consider a nonlinear integral equation of form (20):
where \(\Theta \in {\mathcal {C}}[0,1], y\in [0,1]\), and the kernel S is the Green’s function in \([0,1]\times [0,1]\).
By taking \(n=8\), we transform the given problem into a system of nonlinear equations
where \({\varvec{\Theta }}= (\Theta _{1},\Theta _{2},\dots ,\Theta _{8})^\mathrm{{T}}\), \(\mathbf{2}.1 =(2.1,2.1,\dots ,2.1)^\mathrm{{T}}\), \({\mathcal {A}}=(a_{rs})_{r,s=1}^{8}\), \(\mathbf{p} _{{\varvec{\Theta }}}=(\Theta _{1}^{2},\Theta _{2}^{2},\dots ,\Theta _{8}^{2})^\mathrm{{T}},\) \(\mathbf{q} _{{\varvec{\Theta }}}=(|\Theta _{1}|,|\Theta _{2}|,\dots ,|\Theta _{8}|)^\mathrm{{T}}.\)
The first-order divided difference of \({\mathfrak {B}}\) is \([\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}]=I-\frac{1}{9}(U+V)\), where \(U=(u_{rs})_{r,s=1}^{8}\) with \(u_{rs}=a_{rs}(e_{s}+f_{s})\) and \(V=(v_{rs})_{r,s=1}^{8}\) with \(v_{rs}=a_{rs} \frac{|e_{s}|-|f_{s}|}{e_{s}-f_{s}}\). Then, using the sup-norm, we have
with \(q=\frac{2}{9}\Vert {\mathcal {A}}\Vert \) and \(\omega (p_{1},p_{2})=\frac{1}{9}\Vert {\mathcal {A}}\Vert (p_{1}+p_{2})\).
We choose the starting points as \({\varvec{\Theta }}_{-1}\)=\((3.4,3.4,\dots ,3.4)^\mathrm{{T}}\) and \({\varvec{\Theta }}_{0}\)=\((1.9,1.9,\dots ,1.9)^\mathrm{{T}}\).
Using conditions (P1)–(P5), we obtain \(\varphi =1.5\), \(\Vert A\Vert _{\infty }=0.12355\dots ,~ \varpi =0.27564 \dots ,~ \phi =1.06985 \dots ,~ \psi =0.29490 \dots ,~q=0.02745\dots ,~\omega (p_{1},p_{2})=(0.01372\dots )(p_{1}+p_{2}).\)
So, solving Eq. (9) gives \({\mathfrak {R}}=0.68693\), so that \(w({\mathfrak {R}})=0.12388\dots <\frac{1}{2}\). Another root is \({\mathfrak {R}}=5.41635 \dots \), so that \(w({\mathfrak {R}})=0.47120 \dots <\frac{1}{2}\). It can easily be seen that all the assumptions of Theorem 2.3 hold and the uniqueness ball of the solution is given by \(\overline{B({\varvec{\Theta }_{0}},{\mathfrak {R}})}\). Using method (7), the approximate numerical solution of system (23) is computed and summarized in Table 3. A comparison of the term-by-term errors is given in Table 4 using the tolerance \(\Vert {\varvec{\Theta }}_{\mathbf{m}}- \varvec{\Theta }_{\mathbf{m-1}}\Vert \) \(<10^{-15}\). We can see that method (7) converges rapidly compared to its competitors.
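The discretization is straightforward to reproduce. In the sketch below, the kernel is taken as the standard Green's function \(S(y,t)=t(1-y)\) for \(t\le y\) and \(y(1-t)\) otherwise (an assumption, since only "the Green's function on \([0,1]\times [0,1]\)" is stated), the matrix \({\mathcal {A}}\) is built from the 8-point Gauss–Legendre rule mapped to [0, 1], and the frozen-operator three-step iteration is run on system (23):

```python
import numpy as np

n = 8
t, wt = np.polynomial.legendre.leggauss(n)   # rule on [-1, 1]
y = 0.5 * (t + 1.0)                          # nodes mapped to [0, 1]
W = 0.5 * wt                                 # mapped weights

def S(u, v):
    # Assumed Green's function on [0,1] x [0,1]
    return np.where(v <= u, v * (1.0 - u), u * (1.0 - v))

A = W[None, :] * S(y[:, None], y[None, :])   # a_rs = W_s S(y_r, y_s)

def B(th):
    # System (23): B(Θ) = Θ - 2.1 - (1/9) A (p_Θ + q_Θ)
    return th - 2.1 - A @ (th**2 + np.abs(th)) / 9.0

def dd(e, f):
    # [e, f; B] = I - (1/9)(U + V) with u_rs = a_rs (e_s + f_s),
    # v_rs = a_rs (|e_s| - |f_s|) / (e_s - f_s)
    U = A * (e + f)
    V = A * ((np.abs(e) - np.abs(f)) / (e - f))
    return np.eye(n) - (U + V) / 9.0

x_prev, x = np.full(n, 3.4), np.full(n, 1.9)
for _ in range(100):                         # three-step Kurchatov iteration
    if np.linalg.norm(B(x), np.inf) < 1e-12:
        break
    T = dd(x_prev, 2 * x - x_prev)
    u = x - np.linalg.solve(T, B(x))
    v = u - np.linalg.solve(T, B(u))
    x_prev, x = x, v - np.linalg.solve(T, B(v))
```

Under this kernel assumption, the row-sum norm of \({\mathcal {A}}\) lands close to the value \(\Vert A\Vert _{\infty }=0.12355\dots \) quoted above, and the computed components stay slightly above 2.1, inside the uniqueness ball around \({\varvec{\Theta }}_{0}\).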
Example 4.4
Consider the problem of solving the following system of nonlinear equations:
where \({\varvec{\Theta }}\) = \((\Theta _{1},\Theta _{2},\dots ,\Theta _{8})^\mathrm{{T}}\), \(\mathbf{2}.1 \) = \((2.1,2.1,\dots ,2.1)^\mathrm{{T}},~~{\mathcal {A}}=(a_{rs})_{r,s=1}^{8},~~\mathbf{q} _{\Theta }=(\Theta _{1}^{2},\Theta _{2}^{2},\dots ,\Theta _{8}^{2})^\mathrm{{T}}.\)
Using (3), we get \([\mathbf{e} ,\mathbf{f} ;{\mathfrak {B}}]=I-\frac{1}{7}F\), where \(F=(f_{rs})_{r,s=1}^{8}\) with \(f_{rs}=a_{rs}(e_{s}+f_{s})\).
Now, we choose the initial iterates \(\mathbf{t} _{-1}=(3.8,3.8,\dots ,3.8)^\mathrm{{T}}\) and \(\mathbf{t} _{0}=(2.6,2.6,\dots ,2.8)^\mathrm{{T}}\), and from conditions (P1)–(P5) we obtain
Also, \(~q=0\dots ,~\omega (p_{1},p_{2})=(0.01765\dots )(p_{1}+p_{2})\). From Eq. (9) we get the root \({\mathfrak {R}}=1.38437\), so that \(w({\mathfrak {R}})=0.18093\dots <\frac{1}{2}\). We also find another root \({\mathfrak {R}}=3.82687 \dots \), so that \(w({\mathfrak {R}})=0.41798 \dots <\frac{1}{2}\).
This shows that the convergence conditions of Theorem 2.3 hold and the uniqueness ball is given by \(\overline{B({ {\varvec{\Theta }}_{\mathbf{0}}},{\mathfrak {R}})}\). Now, using method (7), we obtain the solution of system (24), which is given in Table 5. A comparison of the term-by-term errors is given in Table 6 using the tolerance \(\Vert \varvec{\Theta }_{\mathbf{m}}-\varvec{\Theta }_{\mathbf{m-1}}\Vert \) \(<10^{-15}\). We can see that method (7) converges rapidly compared to its competitors.
5 Conclusions
This article was devoted to the study of the semilocal convergence of a three-step Kurchatov-type method under the \(\omega \)-condition on the first-order divided difference, using recurrence relations. We also analyzed the computational efficiency index and derived important relations for the Kurchatov-type method. Numerical examples have been computed for higher-dimensional problems, including Hammerstein-type integral equations, for differentiable and non-differentiable operators. We also compared the method with the single-step Kurchatov method, the two-step Kurchatov method and the two-step secant method, and concluded that the present method converges more rapidly than the other three. In the semilocal convergence analysis, theorems are given for the existence-uniqueness balls. Moreover, the domain of the parameters discussed in [28] is given to show the guaranteed convergence of the method. The Kurchatov method gives a more extended convergence region and better information on the distances involved as well as on uniqueness. This method does not require the evaluation of derivatives, as opposed to Newton's method or other well-known third-order methods such as Halley's or Chebyshev's. It is useful when the functions are not regular or the evaluation of their derivatives is costly. The Kurchatov method is a good derivative-free iterative method with memory, with the same order of convergence and number of new functional evaluations per iteration as Newton's procedure. The results are tested on numerical examples to show that the three-step Kurchatov method provides better results than other methods using similar information.
References
Ahmad, H.; Khan, T.A.; Cesarano, C.: Numerical solutions of coupled Burgers’ equations. Axioms 8(4), 119 (2019)
Ahmad, I.; Ahmad, H.; Thounthong, P.; Chu, Y.M.; Cesarano, C.: Solution of Multi-term time-fractional PDE models arising in mathematical biology and physics by local meshless method. Symmetry 12(7), 1195 (2020)
Ahmad, H.; Rafiq, M.; Cesarano, C.; Durur, H.: Variational iteration algorithm-I with an auxiliary parameter for solving boundary value problems. Earthline J. Math. Sci. 3(2), 229–247 (2020)
Amat, S.; Busquier, S.: Convergence and numerical analysis of a family of two-step Steffensen’s methods. Comput. Math. Appl. 49(1), 13–22 (2005)
Amat, S.; Busquier, S.: A two-step Steffensen’s method under modified convergence conditions. J. Math. Anal. Appl. 324(2), 1084–1092 (2006)
Amat, S.; Busquier, S.; Ezquerro, J.A.; Hernández-Verón, M.A.: A Steffensen type method of two steps in Banach spaces with applications. J. Comput. Appl. Math. 291(C), 317–331 (2016)
Argyros, I.K.; Ren, H.: Efficient Steffensen-type algorithms for solving nonlinear equations. Int. J. Comput. Math. 90(3), 691–704 (2013)
Argyros, I.K.; Ezquerro, J.A.; Gutiérrez, J.M.; Hernández-Verón, M.A.; Hilout, S.: On the semilocal convergence of efficient Chebyshev-Secant-type methods. J. Comput. Appl. Math. 235(10), 3195–3206 (2011)
Argyros, I.K.; Ezquerro, J.A.; Gutiérrez, J.M.; Hernández-Verón, M.A.; Hilout, S.: Chebyshev-Secant-type methods for non-differentiable operators. Milan J. Math. 81(1), 25–35 (2013)
Arqub, O.A.: Numerical solutions of systems of first-order, two point BVPs based on the reproducing kernel algorithm. Calcolo 55(31), 1–28 (2018)
Arqub, O.A.: Application of residual power series method for the solution of time-fractional Schrödinger equations in One-dimensional space. Fundamenta Informaticae 166(2), 87–110 (2019)
Arqub, O.A.: Numerical algorithm for the solutions of fractional order systems of Dirichlet function types with Comparative analysis. Fundamenta Informaticae 166(2), 111–137 (2019)
Arqub, O.A.; Shawagfeh, N.: Application of reproducing kernel algorithm for solving Dirichlet time-fractional Diffusion-Gordan types equations in Porous media. J. Porous Media 22(4), 411–434 (2019)
Bazighifan, O.; Cesarano, C.: A Philos-type oscillation criteria for fourth-order neutral differential equations. Symmetry 12(3), 379 (2020)
Dennis, J.E.: Toward a unified convergence theory for Newton-like methods. In: Rall, L.B. (ed.) Nonlinear Functional Analysis and Applications, pp. 425–472. Academic Press, New York (1971)
Elabbasy, E.M.; Cesarano, C.; Bazighifan, O.; Moaaz, O.: Asymptotic and oscillatory behavior of solutions of a class of higher order differential equation. Symmetry 11(12), 1434 (2019)
Ezquerro, J.A.; Hernández-Verón, M.A.: An optimization of Chebyshev’s method. J. Complex. 25(4), 343–361 (2009)
Ezquerro, J.A.; Hernández-Verón, M.A.: Increasing the applicability of Steffensen’s method. J. Math. Anal. Appl. 418(2), 1062–1073 (2014)
Ezquerro, J.A.; Grau, A.; Grau-Sánchez, M.; Hernández-Verón, M.A.: On the efficiency of two variants of Kurchatov’s method for solving nonlinear systems. Numer. Algorithms 64, 685–698 (2013)
Ezquerro, J.A.; Grau-Sánchez, M.; Hernández-Verón, M.A.; Noguera, M.: Semilocal convergence of Secant-like methods for differentiable and nondifferentiable operator equations. J. Math. Anal. Appl. 398(1), 100–112 (2013)
Ezquerro, J.A.; Grau-Sánchez, M.; Hernández-Verón, M.A.; Noguera, M.: A Traub type result for one-point iterative methods with memory. Anal. Appl. 12(3), 323–340 (2014)
Ezquerro, J.A.; Hernández-Verón, M.A.; Velasco, A.I.: An analysis of the semilocal convergence for Secant-like methods. Appl. Math. Comput. 266, 883–892 (2015)
Ezquerro, J.A.; Grau-Sánchez, M.; Hernández-Verón, M.A.; Noguera, M.: A study of optimization for Steffensen-type methods with frozen divided differences. SeMA J. 70(1), 23–46 (2015)
Hernández, M.A.; Rubio, M.J.: On a Newton–Kurchatov-type iterative process. Numer. Funct. Anal. Optim. 37(1), 65–79 (2016)
Hernández-Verón, M.A.; Rubio, M.J.: The Secant method for nondifferentiable operators. Appl. Math. Lett. 15(4), 395–399 (2002)
Hernández-Verón, M.A.; Rubio, M.J.: Semilocal convergence of the Secant method under mild convergence conditions of differentiability. Comput. Math. Appl. 44(3–4), 277–285 (2002)
Hernández-Verón, M.A.; Rubio, M.J.: A modification of Newton’s method for nondifferentiable equations. J. Comput. Appl. Math. 164–165, 409–417 (2004)
Kumar, H.; Parida, P.K.: Three step Kurchatov method for nondifferentiable operators. Int. J. Appl. Comput. Math. 3(4), 3683–3704 (2017)
Kumar, H.; Parida, P.K.: On Semilocal convergence of Two step Kurchatov method. Int. J. Comput. Math. 96(8), 1548–1566 (2019)
Kumar, S.; Kumar, D.; Sharma, J.R.; Cesarano, C.; Agarwal, P.; Chu, Y.M.: An optimal fourth order derivative-free numerical algorithm for multiple roots. Symmetry 12(6), 1038 (2020)
Parhi, S.K.; Gupta, D.K.: Semilocal convergence of Stirling’s method under Hölder continuous first derivative in Banach spaces. Int. J. Comput. Math. 87(12), 2752–2759 (2010)
Parhi, S.K.; Gupta, D.K.: Relaxing convergence conditions for Stirling’s method. Math. Methods Appl. Sci. 33(2), 224–232 (2010)
Parhi, S.K.; Gupta, D.K.: Convergence of Stirling’s method under weak differentiability condition. Math. Methods Appl. Sci. 34(2), 168–175 (2011)
Potra, F.A.: Sharp error bounds for a class of Newton-like methods. Libertas Mathematica 5, 71–84 (1985)
Potra, F.A.; Pták, V.: Nondiscrete Induction and Iterative Processes. Pitman Publishing Limited, Boston (1984)
Ren, H.: New sufficient convergence conditions of the Secant method for nondifferentiable operators. Appl. Math. Comput. 182(2), 1255–1259 (2006)
Shakhno, S.M.: On a Kurchatov’s method of linear interpolation for solving nonlinear equations. Proc. Appl. Math. Mech. 4(1), 650–651 (2004)
Traub, J.F.: Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs (1964)
Acknowledgements
The author gratefully thanks the Central University of Jharkhand for providing research facilities.
Cite this article
Kumar, H. On semilocal convergence of three-step Kurchatov method under weak condition. Arab. J. Math. 10, 121–136 (2021). https://doi.org/10.1007/s40065-020-00308-8