On semilocal convergence of three-step Kurchatov method under weak condition

The purpose of this paper is to establish the semilocal convergence analysis of the three-step Kurchatov method under weaker conditions in Banach spaces. We construct the recurrence relations under the assumption that the involved first-order divided difference operators satisfy the ω-condition. Theorems are given for the existence-uniqueness balls enclosing the unique solution. The application of the iterative method is shown by solving nonlinear systems of equations and nonlinear Hammerstein-type integral equations, which illustrates the theoretical development of this study.


Introduction
In this article, we consider the problem of approximating a unique solution θ* of a nonlinear equation B(θ) = 0 (1), where B : C ⊂ X → Y is a continuous but non-differentiable nonlinear operator defined on a nonempty open convex subset C of a Banach space X with values in a Banach space Y. It is well known that finding exact solutions of equations of type (1) is difficult, and iterative methods are usually used to approximate them. Semilocal convergence analysis uses information around an initial point to give criteria ensuring the convergence of the iterative method, while local analysis uses information around a solution to estimate the radii of convergence balls. A recurrence relation is an equation that defines a sequence by a rule giving the next term as a function of the previous terms. Newton's method and Newton-type methods [18,31-33] are the most widely used iterative methods for approximating the solution. All such methods require the differentiability of the involved operator, which is a drawback for problems where the operator is non-differentiable. To overcome this, many researchers [6,18,22,26] have established and studied iterative methods using divided differences. One such method, which does not require differentiability of the involved operator, is Kurchatov's method, proposed by Kurchatov [37] and given by θ_{m+1} = θ_m − [2θ_m − θ_{m−1}, θ_{m−1}; B]^{−1} B(θ_m), m ≥ 0 (2), where [e, f; B] denotes a first-order divided difference of B. To reach the solution more quickly, some authors [6,19,28,29] also considered two-step derivative-free iterative methods. One such method is the two-step Kurchatov method [28,29], defined with the help of divided differences. Kumar and Parida [29] gave the semilocal convergence of this method under Lipschitz continuity conditions on the first-order divided difference operator by using recurrence relations.
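As an illustration of iteration (2), a minimal scalar sketch; the test function and starting points below are hypothetical, not taken from the paper:

```python
def kurchatov(B, x_prev, x0, tol=1e-12, max_iter=50):
    """Kurchatov's derivative-free method for a scalar equation B(x) = 0.

    Uses the divided difference on [2*x_m - x_{m-1}, x_{m-1}]
    in place of the derivative, so B need not be differentiable.
    """
    x_old, x = x_prev, x0
    for _ in range(max_iter):
        u, v = 2 * x - x_old, x_old          # symmetric points around x
        dd = (B(u) - B(v)) / (u - v)         # first-order divided difference
        x_old, x = x, x - B(x) / dd          # Kurchatov step
        if abs(x - x_old) < tol:
            break
    return x

# Hypothetical test problem: B(x) = x**2 - 2, root sqrt(2)
root = kurchatov(lambda x: x * x - 2.0, 1.0, 1.5)
```

Note that only one divided difference and one evaluation of B are needed per iteration, and no derivative of B appears anywhere.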
To generalize the convergence analysis, some authors [25,26] also considered an ω-type condition on the first-order divided difference operator of B. Recall that Hernández and Rubio [25,26] provided sufficient convergence conditions for the secant method by considering condition (5) within a recurrence-relation framework. Ren [36] also gave a new semilocal convergence analysis of the secant method under condition (5) with the help of recurrence relations. Argyros et al. [8], Argyros and Ren [7], Hernández and Rubio [24,27] and Ezquerro et al. [20] also studied the semilocal convergence of secant-like methods under the same continuity condition. Amat and Busquier [5] introduced a two-step Steffensen method and analyzed its semilocal convergence by recurrence relations under the ω-continuity condition (5). Ezquerro et al. [21] also worked on the semilocal convergence of the two-step Kurchatov method (4) by recurrence relations under condition (5).
Recently, Argyros et al. [8] introduced the Chebyshev-secant-type method and gave its semilocal convergence by recurrence relations under Lipschitz-type conditions on the first-order divided difference operator. Argyros et al. [9] also established the semilocal convergence of the same method under the ω-continuity condition (5) with the help of recurrence relations. A third-order three-step Steffensen method was proposed by Ezquerro et al. [23], who established its semilocal convergence using recurrence relations under a Lipschitz continuity condition. In all the above-mentioned methods, the divided difference operator was frozen.
In [28], the authors discussed the semilocal convergence analysis of the three-step Kurchatov method (7). They also obtained better results for the order of convergence, optimal computational efficiency and efficiency index of method (7) in comparison with methods (2) and (4). One of the important benefits of this method is that only one divided difference of the operator is evaluated in each iteration, whereas the order of convergence of the method is 3.
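The display of method (7) is not reproduced in this excerpt; the following scalar sketch assumes the frozen-operator structure described above (one divided difference per iteration, reused in three Newton-like substeps), with a hypothetical test function:

```python
def kurchatov_three_step(B, x_prev, x0, tol=1e-12, max_iter=30):
    """Sketch of a three-step Kurchatov-type iteration (scalar case).

    One divided difference is evaluated per iteration and frozen
    across the three substeps, as described for method (7).
    """
    x_old, x = x_prev, x0
    for _ in range(max_iter):
        u, v = 2 * x - x_old, x_old
        dd = (B(u) - B(v)) / (u - v)   # frozen divided difference
        y = x - B(x) / dd              # first substep
        z = y - B(y) / dd              # second substep (same operator)
        x_old, x = x, z - B(z) / dd    # third substep
        if abs(x - x_old) < tol:
            break
    return x

# Hypothetical example: B(x) = x**3 - x - 1 has a root near 1.3247
root3 = kurchatov_three_step(lambda x: x**3 - x - 1.0, 1.0, 1.5)
```

Freezing the divided difference means the three substeps share one LU factorization in the multidimensional case, which is the source of the efficiency gains discussed in Sect. 3.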
Recently, Arqub and Shawagfeh [13] analyzed an efficient reproducing kernel algorithm for the numerical solution of equations in porous media with Dirichlet boundary conditions. Arqub [11] worked on the numerical solution of the time-fractional Schrödinger equation subject to a given constraint condition, based on the generalized Taylor series formula in the Caputo sense. The same author discussed a reproducing kernel algorithm for obtaining the numerical solutions of fractional-order systems of Dirichlet type in [12], and in [10] gave the numerical solutions of systems of first-order, two-point BVPs based on the reproducing kernel algorithm.
Ahmad et al. [2] found the numerical solution of three-term time-fractional-order multi-dimensional diffusion equations by using an efficient local meshless method. Kumar et al. [30] worked on fourth-order derivative-free iterative methods for finding a multiple root and showed that the new technique is a good competitor to existing optimal fourth-order Newton-like techniques. Bazighifan and Cesarano [14], using the Riccati transformation technique and the integral averaging method, obtained conditions ensuring oscillation of solutions of fourth-order neutral differential equations. Ahmad et al. [3] solved initial and boundary value problems by means of variational iteration algorithm-I with an auxiliary parameter and discussed the effectiveness and utility of the proposed technique. Ahmad et al. [1] found two new modified variational iteration algorithms for computing numerical solutions of coupled Burgers' equations, and evaluated the error norms, accuracy and speed of convergence of the methods on numerical problems. Elabbasy et al. [16] studied the asymptotic behavior of a class of higher-order delay differential equations with a p-Laplacian-like operator.
The structure of this paper is as follows: in Sect. 2, we give a new semilocal convergence analysis of the three-step Kurchatov method with the help of a new type of recurrence relations. In Sect. 3, we discuss the optimal computational efficiency of the Kurchatov-type method with a comparison graph. Numerical examples for differentiable and non-differentiable operators in higher-dimensional problems are given in Sect. 4. Finally, conclusions are given in Sect. 5.
Throughout the paper, we denote

Semilocal convergence of Kurchatov three-step method
In this section, we shall analyze the semilocal convergence of method (7) with the help of recurrence relations. Let us assume that the operator B satisfies the following conditions (P1)-(P5), in which the first-order divided difference operator obeys an ω-condition of the form ||[e, f; B] − [u, v; B]|| ≤ ω(||e − u||, ||f − v||) for e, f, u, v ∈ C, e ≠ f, where ω : R+ × R+ → R+ is a continuous function, nondecreasing in both of its arguments.
Let R be the least positive root of the scalar equation (9) satisfying ω(R) < 1/2. We now provide a lemma that plays an important role in deriving the recurrence relations for method (7).
Proof The proof of the above lemma is given in [28].

Lemma 2.2 Suppose that conditions (P1)-(P5) hold for the operator B.
Then the following recurrence relations hold for m ≥ 1. Proof This lemma is proved by mathematical induction. It follows from the initial hypotheses (P1) and (P5) that θ_0, ϑ_0 and the first iterate are well defined, because by our assumption T_0^{−1} exists. We can easily conclude that ψ < R. Also, from (P2) and (P3), and then using Lemma 2.1 and (P4), we get the first estimates. Using Lemma 2.1 and (P4) again, and then Eq. (9), we see that (i)-(viii) are true for m = 1. Now, assume they are true for m ≤ n − 1; we have to prove that (i)-(viii) hold for m = n. By the Banach lemma on invertible operators [35], T_n^{−1} exists, and hence (i) is true for m = n. Next, we prove (ii), (iii) and (iv): using Eq. (10) and assumptions (P2), (P3) and (P5), and then Lemma 2.1 and (P4), the corresponding estimates follow. Using Eq. (10) and (P5), together with Lemma 2.1, (P4) and (P5), relations (vi), (vii) and (viii) also hold. Therefore, (i)-(viii) are true for m = n. Hence, all the relations are true for every natural number m by mathematical induction, and the lemma is proved.
From Lemma 2.2, we also obtain some important consequences. We now give the convergence theorem for iterative method (7).

Optimal computational efficiency
We know that the efficiency of an iterative method cannot be measured from operational cost alone; the number of function evaluations must also be taken into account. Traub [38] defines the efficiency index ρ^{1/v}, where ρ is the local order of convergence of the method and v is the number of function evaluations. To compare the efficiencies of the three iterative methods (2), (4) and (7), we use the computational efficiency index (CEI) CEI(μ, m) = ρ^{1/C(μ, m)}, where ρ is the Q-order of convergence and C(μ, m) is the computational cost of the iterative method. We denote the CEI of the above-mentioned methods by CEI_i(μ, m) = ρ_i^{1/C_i(μ, m)}, i = 1, 2, 3, where ρ_i is the Q-order of convergence and C_i(μ, m) the computational cost of the corresponding method. The computational cost is given by C_i(μ, m) = μ a_i(m) + p_i(m), where a_i(m) is the number of scalar function evaluations, p_i(m) is the number of products and divisions per iteration, and μ is the ratio between the cost of evaluations of functions and the cost of products and divisions, required to express C_i(μ, m) in terms of products and divisions. We observe that method (2) requires a_1(m) = m^2 + m evaluations of scalar functions, m^2 divisions for the divided difference, (m^3 − m)/3 products and divisions in the LU decomposition and m^2 products and divisions for solving two triangular systems; therefore, p_1(m) = (m/3)(m^2 + 6m − 1). Also, for method (4), it is easy to obtain a_2(m) = a_1(m) + m = m^2 + 2m, because a new evaluation of the function B is considered. The linear systems to solve are the same, the LU decomposition is already available and only two more triangular linear systems have to be solved, so p_2(m) = p_1(m) + m^2 = (m/3)(m^2 + 9m − 1). Lastly, for method (7), we obtain a_3(m) = 2a_1(m) = 2(m^2 + m), because a new divided difference and a new evaluation of B are considered, and p_3(m) = p_2(m) + m^2 = (m/3)(m^2 + 12m − 1), because two more linear systems have to be solved. Comparing the corresponding CEI_i(μ, m) with the following expressions, we can state the theorem. Proof (1) For m = 2, we assume that CEI_2 > CEI_1.
Then, taking logarithms on both sides to determine μ, we obtain μ > 3.024... for m = 2; also, CEI_2 < CEI_1 for 0 < μ ≤ 3.024.... Furthermore, for m ≥ 3 the bound on μ obtained from (14) is always negative, while μ is always positive; hence, in this case, CEI_2 > CEI_1 is always true for m ≥ 3. We can see from Fig. 1 that the computational efficiency index of method (4) takes larger values than that of method (2) for m ≥ 3, and for m = 2 with μ > 3.024....
(2) For m = 4, we assume that CEI_3 > CEI_1 and determine μ. Taking logarithms on both sides and putting m = 4, we obtain μ > 0.427..., and also CEI_3 < CEI_1 for 0 < μ ≤ 0.427.... Furthermore, for m ≥ 5 the bound on μ obtained from (15) is negative, while μ is always positive; hence, in this case, CEI_3 > CEI_1 is always true for m ≥ 5. We can see from Fig. 2 that the computational efficiency index of method (7) takes larger values than that of method (2) for m ≥ 5, and for m = 4 with μ > 0.427.... (3) Similarly, assuming CEI_3 > CEI_2, taking logarithms and putting m = 8, we obtain μ > 0.649..., and also CEI_3 < CEI_2 for 0 < μ ≤ 0.649.... Furthermore, for m ≥ 9 the bound on μ obtained from (16) is negative, while μ is always positive; hence, in this case, CEI_3 > CEI_2 is always true for m ≥ 9. We can see from Fig. 3 that the computational efficiency index of method (7) takes larger values than that of method (4) for m ≥ 9, and for m = 8 with μ > 0.649.... Also, we can easily see from Fig. 4 that the computational efficiency index of method (7) takes larger values than those of methods (2) and (4) for m ≥ 9. In the figures, red lines are used for method (2), green lines for method (4) and blue lines for method (7).
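The cost counts a_i(m), p_i(m) and the index CEI_i(μ, m) = ρ_i^{1/C_i(μ, m)} can be checked programmatically. A small sketch, in which the Q-order ρ is left as a parameter (only ρ = 3 for method (7) is stated in the text; the values μ = 1.0 and m = 4 used below are illustrative):

```python
from fractions import Fraction

def a1(m): return m * m + m          # scalar function evaluations, method (2)
def a2(m): return a1(m) + m          # one extra evaluation of B, method (4)
def a3(m): return 2 * a1(m)          # new divided difference + evaluation of B, method (7)

def p1(m): return Fraction(m, 3) * (m * m + 6 * m - 1)   # products/divisions, method (2)
def p2(m): return p1(m) + m * m                          # two extra triangular solves, method (4)
def p3(m): return p2(m) + m * m                          # two more linear systems, method (7)

def cei(rho, a, p, mu, m):
    """Computational efficiency index CEI = rho**(1/C(mu, m)),
    with cost C(mu, m) = mu * a(m) + p(m)."""
    return rho ** (1.0 / (mu * a(m) + float(p(m))))

# sanity checks against the closed forms quoted in the text
m = 8
assert p2(m) == Fraction(m, 3) * (m * m + 9 * m - 1)
assert p3(m) == Fraction(m, 3) * (m * m + 12 * m - 1)

# illustrative evaluation for method (7), whose Q-order is 3
example = cei(3.0, a3, p3, mu=1.0, m=4)
```

Exact rational arithmetic via `Fraction` keeps the (m/3)-type counts free of rounding, so the identities p_2 = p_1 + m^2 and p_3 = p_2 + m^2 can be verified symbolically.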
Example 4.2 Consider a differentiable system of nonlinear equations (18). The system is equivalent to B(r) = 0, where B : R^2 → R^2, and for e, f ∈ R^2 with e_1 ≠ f_1 and e_2 ≠ f_2 the first-order divided difference operator [e, f; B] ∈ L(X, Y) is defined componentwise. If we use the max-norm, this gives q = 0 and ω(p_1, p_2) = p_1 + p_2. Now, we apply method (7) to the solution of system (18), choosing the initial iterates r_{-1} = (7.2, 6.2) and r_0 = (11.4, 1.5). After two iterations, we get r_1 = (3.32678, 1.25508) and r_2 = (3.29020, 1.24504). We compare method (7) with methods (2), (4) and the two-step secant method [4]. In Table 2, we compute and compare the absolute errors obtained by the aforementioned methods with tolerance 10^{−8}. It can easily be observed that method (7) converges more rapidly than the other methods.
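Since the componentwise form of [e, f; B] for system (18) is not reproduced here, the sketch below uses the standard componentwise definition of a first-order divided difference in R^n and verifies it on a hypothetical linear operator, for which the divided difference must reproduce the matrix exactly:

```python
import numpy as np

def divided_difference(B, e, f):
    """First-order divided difference [e, f; B] in R^n, via the standard
    componentwise definition
      [e, f; B]_{ij} = (B_i(e_1..e_j, f_{j+1}..f_n)
                        - B_i(e_1..e_{j-1}, f_j..f_n)) / (e_j - f_j).
    Requires e_j != f_j for all j."""
    e, f = np.asarray(e, float), np.asarray(f, float)
    n = e.size
    M = np.empty((n, n))
    for j in range(n):
        hi = np.concatenate([e[:j + 1], f[j + 1:]])   # e through component j
        lo = np.concatenate([e[:j], f[j:]])           # e through component j-1
        M[:, j] = (B(hi) - B(lo)) / (e[j] - f[j])
    return M

# hypothetical check: for a linear operator the divided difference is exact
A = np.array([[2.0, 1.0], [0.5, 3.0]])
Blin = lambda x: A @ x
M = divided_difference(Blin, np.array([1.0, 2.0]), np.array([0.0, -1.0]))
```

This definition needs only function values, never derivatives, which is exactly what makes Kurchatov-type methods applicable to non-differentiable operators.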

Application to higher-dimensional problems
Consider nonlinear Hammerstein-type integral equations of the form θ(x) = j(x) + ∫_p^q S(x, y) T(y, θ(y)) dy (19), where −∞ < p < q < +∞, j, S and T are given functions and θ is an unknown function to be determined. Solving (19) is equivalent to solving B(θ) = 0, where B : C[p, q] → C[p, q] is given by B(θ)(x) = θ(x) − j(x) − ∫_p^q S(x, y) T(y, θ(y)) dy, and S is the Green's function defined on [p, q] × [p, q]. We use the Gauss-Legendre quadrature formula to transform (20) into a finite-dimensional nonlinear system, where W_k and y_k denote the weights and nodes, respectively. Denoting θ(y_k) and j(y_k) by θ_k and j_k, k = 1, 2, ..., n, Eq. (20) can be written as the following system of nonlinear equations: B(θ) = θ − j − AZ = 0, where B : R^n → R^n, θ = (θ_1, θ_2, ..., θ_n)^T, Z = (T(y_1, θ_1), T(y_2, θ_2), ..., T(y_n, θ_n))^T, A = (a_{rs})_{r,s=1}^n and j = (j_1, j_2, ..., j_n)^T.
The components of A are given by a_{rs} = W_s S(y_r, y_s), r, s = 1, 2, ..., n. Example 4.3 Consider a nonlinear integral equation of type (20), where θ ∈ C[0, 1], y ∈ [0, 1], and the kernel S is the Green's function on [0, 1] × [0, 1]. Taking n = 8, we transform the given problem into a system of nonlinear equations (23). The solution of system (23) leads to R = 0.68693, so that ω(R) = 0.12388... < 1/2; another radius is R = 5.41635..., so that ω(R) = 0.47120... < 1/2. It can easily be seen that all the assumptions of Theorem 2.3 hold, and the uniqueness ball of the solution is given by B(θ_0, R). Using method (7), the approximate numerical solution of system (23) is found and summarized in Table 3. A comparison of the term-by-term errors is given in Table 4 using the tolerance ||θ_m − θ_{m−1}|| < 10^{−15}. We can see that method (7) converges rapidly compared with its competitors. For the next example (system (24)), we have q = 0 and ω(p_1, p_2) = (0.01765...)(p_1 + p_2). From Eq. (9) we get the radius R = 1.38437, so that ω(R) = 0.18093... < 1/2; we also find another radius R = 3.82687..., so that ω(R) = 0.41798... < 1/2. This shows that the convergence conditions of Theorem 2.3 hold, and the uniqueness ball is given by B(θ_0, R). Now, using method (7), we obtain the solution of system (24), which is given in Table 5. A comparison of the term-by-term errors is given in Table 6 using the tolerance ||θ_m − θ_{m−1}|| < 10^{−15}. We can see that method (7) converges rapidly compared with its competitors.
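The Gauss-Legendre discretization of a Hammerstein equation can be sketched numerically. The kernel (Green's function of −u'' on [0, 1] with zero boundary data), the source term j ≡ 1 and the nonlinearity T(y, θ) = θ^3 are assumptions for illustration, and a plain fixed-point sweep stands in for method (7):

```python
import numpy as np

# Gauss-Legendre nodes/weights transplanted from [-1, 1] to [0, 1]
n = 8
t, w = np.polynomial.legendre.leggauss(n)
y = 0.5 * (t + 1.0)          # nodes y_k in [0, 1]
W = 0.5 * w                  # weights W_k

# assumed kernel: Green's function of -u'' with zero Dirichlet data on [0, 1]
S = np.where(y[:, None] <= y[None, :],
             y[:, None] * (1.0 - y[None, :]),
             y[None, :] * (1.0 - y[:, None]))

A = S * W[None, :]           # a_rs = W_s * S(y_r, y_s)
j = np.ones(n)               # hypothetical source term j(x) = 1
T = lambda th: th ** 3       # hypothetical nonlinearity T(y, theta) = theta^3

# discretized system: theta = j + A T(theta), i.e. B(theta) = theta - j - A T(theta)
theta = j.copy()
for _ in range(100):
    theta = j + A @ T(theta)   # simple fixed-point sweep (contractive here)

residual = np.linalg.norm(theta - j - A @ T(theta), np.inf)
```

For this kernel the row sums of A are small (about x(1−x)/2 ≤ 1/8), so the map is contractive and the sweep converges; in the paper the resulting system (23) is instead solved by the faster three-step method (7).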

Conclusions
This article was devoted to the study of the semilocal convergence of a three-step Kurchatov-type method under an ω-condition on the first-order divided difference, using recurrence relations. We also analyzed the computational efficiency index and important relations for the Kurchatov-type method. Numerical examples have been computed for higher-dimensional problems, including Hammerstein-type integral equations, for differentiable and non-differentiable operators. We also compared the method with the single-step Kurchatov method, the two-step Kurchatov method and the two-step secant method, and concluded that the present method converges more rapidly than the other three. In the semilocal convergence analysis, theorems are given for the existence-uniqueness balls. Moreover, the domain of the parameters discussed in [28] is given to show the guaranteed convergence of the method. The Kurchatov method gives a more extended convergence region, better information on the distances and a sharper uniqueness result. It does not require the evaluation of derivatives, as opposed to Newton's method or other well-known third-order methods such as Halley's or Chebyshev's, and is useful when the functions are not regular or the evaluation of their derivatives is costly. The Kurchatov method is a good derivative-free iterative method with memory, with the same order of convergence and number of new functional evaluations per iteration as Newton's procedure. The results are tested on numerical examples to show that the three-step Kurchatov method provides better results than other methods using similar information.