Introduction

The method of quasilinearization was introduced by Bellman and Kalaba [1], generalized by Lakshmikantham [2, 3], and applied to a variety of problems in [4-7]. Based on the idea of quasilinearization [1], the generalized quasilinearization technique proposed in [8] and later extended in [2] offers two monotone sequences of linear iterations that converge uniformly and quadratically to the unique solution of the initial value problem:

\[ u'(t) = f(t,u), \qquad u(t_0) = u_0. \]

Consider the nonlinear Volterra integral equation:

\[ u(t) = y(t) + \int_0^t k(t,s,u(s))\,ds. \]
(1)

When iterative processes are applied to this equation with k(t,s,u) nondecreasing in u and satisfying a Lipschitz condition, the method of successive approximations [9] yields a monotone sequence that converges uniformly to the solution of Equation (1). However, the iterations defining this sequence are all nonlinear, and the rate of convergence is only linear. The iterations employed in the monotone iterative technique [10, 11] are linear, but their convergence rate is again linear. The method of quasilinearization was developed in [12] to solve Equation (1) by the iterative scheme:

\[ u_p(t) = y(t) + \int_0^t \Big[ k(t,s,u_{p-1}(s)) + k_u(t,s,u_{p-1}(s))\big(u_p(s) - u_{p-1}(s)\big) \Big]\,ds, \]
(2)

for $p = 1, 2, \dots$, where $u_0(t)$ is a lower solution of Equation (1), defined below. This scheme is linear; under nondecreasing monotonicity and convexity conditions on $k(t,s,u)$, it converges quadratically to the unique solution of Equation (1), and hence more rapidly than the successive processes of [9-11]. The purpose of this paper is to employ numerical methods to approximate the solutions of the linear integral equations (2) in a space of piecewise continuous polynomials, and thereby to generate a sequence of approximate solutions that converges to the unique solution of the nonlinear integral equation (1) under the conditions on $k(t,s,u)$ mentioned above. For numerical integration and quadrature formulae, we employ Chebyshev polynomials and a modified Clenshaw-Curtis quadrature, together with its error analysis.
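The structure of scheme (2) can be illustrated numerically. The following is a minimal sketch, not the paper's collocation method: it applies the quasilinearization sweep to the toy equation $u(t) = 1 + \int_0^t u^2(s)\,ds$ (exact solution $u = 1/(1-t)$, with $k = u^2$ nondecreasing and convex for $u \ge 0$ and $v_0 = 1$ a lower solution), discretizing each linear sweep with the trapezoidal rule; all names are ours.

```python
import numpy as np

# Quasilinearization sketch for u(t) = 1 + int_0^t u(s)^2 ds (exact: u = 1/(1-t)).
# Each sweep solves the *linear* Volterra equation of scheme (2),
#   v_p(t) = 1 + int_0^t [2 v_{p-1}(s) v_p(s) - v_{p-1}(s)^2] ds,
# here by the trapezoidal rule (the paper uses collocation; this is only a sketch).

def quasilinearize(T=0.5, n=200, sweeps=8):
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    v = np.ones(n + 1)                      # v_0 = 1, a lower solution here
    for _ in range(sweeps):
        a = v.copy()                        # a = v_{p-1}
        w = np.ones(n + 1)                  # w = v_p, with w(0) = 1
        f = 2 * a * w - a**2                # integrand values, filled in as we go
        for i in range(1, n + 1):
            # trapezoidal sum up to t_i; the value w[i] enters implicitly,
            # so solve (1 - h a_i) w_i = 1 + h (f_0/2 + sum_{j<i} f_j) - h a_i^2 / 2
            rhs = 1.0 + h * (0.5 * f[0] + f[1:i].sum()) - 0.5 * h * a[i] ** 2
            w[i] = rhs / (1.0 - h * a[i])
            f[i] = 2 * a[i] * w[i] - a[i] ** 2
        v = w
    return t, v

t, v = quasilinearize()
err = np.max(np.abs(v - 1.0 / (1.0 - t)))   # discretization error only, O(h^2)
```

After a few sweeps the iterates stall at the discrete solution of the nonlinear equation, so the remaining error is the trapezoidal discretization error rather than the quasilinearization error.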

This paper is organized as follows: Section “Volterra integral inequalities and generalized quasilinearization” contains a general framework for the idea of quasilinearization used to solve the integral equations. Section “Piecewise polynomials space and collocation method” is devoted to the derivation of a step-by-step collocation method in a piecewise polynomial space for approximating the solution of the integral equations. In Section “Fully discretizing using quadrature formulae and their errors”, the full discretization of the algebraic system is obtained by means of a modified Clenshaw-Curtis quadrature, and its error analysis is given. In Section “Convergence of the fully discretized solution”, the convergence of the method is discussed, and numerical examples are given in Section “Numerical examples”.

Volterra integral inequalities and generalized quasilinearization

For $T \in \mathbb{R}$ with $T > 0$, let $J = [0,T]$ and $D = \{(t,s) \in J \times J : s \le t\}$, and consider

\[ u(t) = y(t) + \int_0^t k(t,s,u(s))\,ds, \]
(3)

where $y \in C[J,\mathbb{R}]$ and $k \in C[D \times \mathbb{R},\mathbb{R}]$.

Definition 1. A function $v \in C[J,\mathbb{R}]$ is called a lower solution of Equation (3) on $J$ if

\[ v(t) \le y(t) + \int_0^t k(t,s,v(s))\,ds, \qquad t \in J, \]

and an upper solution if the reversed inequality holds.

It is shown in [12] that if $v_0(t)$ and $w_0(t)$ in $C[J,\mathbb{R}]$ are lower and upper solutions of Equation (3), respectively, then $v_0(t) \le w_0(t)$ holds on $J$ whenever $k(t,s,u)$ is nondecreasing in $u$ for each fixed $(t,s) \in D$ and satisfies the one-sided Lipschitz condition

\[ k(t,s,\alpha) - k(t,s,\beta) \le L(\alpha - \beta), \qquad \alpha \ge \beta, \; L > 0, \]

provided v0(0)≤w0(0). If this holds, there exists a solution u(t) of Equation (3) such that

\[ v_0(t) \le u(t) \le w_0(t), \]

and this solution is unique.

Now, for $v_0, w_0 \in C[J,\mathbb{R}]$ with $v_0(t) \le u \le w_0(t)$ on $J$, let

\[ \Omega = \big\{ (t,s,u) \in D \times \mathbb{R} : v_0(t) \le u \le w_0(t), \; t \in J \big\}, \]

and $\|u\| = \max_{t \in J} |u(t)|$. Defining two iterative schemes as the two linear integral equations

\[ v_p(t) = y(t) + \int_0^t \Big[ k(t,s,v_{p-1}(s)) + k_u(t,s,v_{p-1}(s))\big(v_p(s) - v_{p-1}(s)\big) \Big]\,ds, \]
(4)

and

\[ w_p(t) = y(t) + \int_0^t \Big[ k(t,s,w_{p-1}(s)) + k_u(t,s,w_{p-1}(s))\big(w_p(s) - w_{p-1}(s)\big) \Big]\,ds, \]
(5)

for $p = 1, 2, \dots$, with $v_0(t), w_0(t) \in C[J,\mathbb{R}]$ lower and upper solutions of Equation (3), respectively, the following theorem from [12] establishes the quadratic convergence of the two sequences $\{v_p(t)\}$ and $\{w_p(t)\}$ derived from Equations (4) and (5) to the unique solution of Equation (3).

Theorem 1. Assume that

$(H_1)$ $v_0, w_0 \in C[J,\mathbb{R}]$ with $v_0(t) \le w_0(t)$ on $J$ are lower and upper solutions of Equation (3) on $J$, respectively;

$(H_2)$ $k \in C^2[\Omega,\mathbb{R}]$, $k_u(t,s,u) \ge 0$, $k_{uu}(t,s,u) \ge 0$

for (t,s,u)∈Ω.

Then, the two iterative schemes (4) and (5) define a nondecreasing sequence $\{v_p(t)\}$ and a nonincreasing sequence $\{w_p(t)\}$ in $C[J,\mathbb{R}]$ such that $v_p \to u$ and $w_p \to u$ uniformly on $J$, and the following quadratic convergence estimates hold:

\[ \|u - v_p\| \le A\, \|u - v_{p-1}\|^2, \qquad \|w_p - u\| \le B\, \|w_{p-1} - u\|^2 + A\, \|u - v_{p-1}\|^2, \qquad p = 1, 2, \dots, \; A > 0, \; B > 0. \]

Also, these two sequences satisfy the relation

\[ v_0 \le v_1 \le \cdots \le v_p \le w_p \le \cdots \le w_1 \le w_0. \]

The following two lemmas, from [13] and [14], will be used in this work. Throughout, $\|\cdot\|$ denotes the maximum norm of a function or matrix.

Lemma 1. Suppose that $A$ is a matrix such that $\|A\| < 1$. Then, the matrix $(I - A)$ is nonsingular and

\[ \big\| (I - A)^{-1} \big\| \le \big( 1 - \|A\| \big)^{-1}. \]

Lemma 2. (Discrete Gronwall inequality) Let the following inequality be satisfied for all $i \in N(a) = \{a, a+1, \dots\}$:

\[ u_i \le p_i + q_i \sum_{j=a}^{i-1} f_j u_j, \qquad a \in \mathbb{N}. \]

Then, for all iN(a)

\[ u_i \le p_i + q_i \sum_{j=a}^{i-1} p_j f_j \prod_{k=j+1}^{i-1} \big( 1 + q_k f_k \big). \]

Piecewise polynomials space and collocation method

We set the partition $\{0 = \tau_0 < \tau_1 < \cdots < \tau_N = T\}$ on $J$, put $h_n = \tau_n - \tau_{n-1}$ with $h = \max_n \{h_n\}$, and denote this partition by $J_h$. In interpolation theory, it is well known that the remainder term is minimized when the interpolation is carried out on the zeros of the Chebyshev polynomials [15]. The first-kind Chebyshev polynomials are defined by the relation

\[ T_m(z) = \cos(m\theta), \qquad z = \cos(\theta), \qquad m = 0, 1, 2, \dots, \]

and the zeros of these polynomials, arranged in increasing order, are

\[ z_{m-k+1} = \cos\left( \frac{2k-1}{2m}\,\pi \right), \qquad k = 1, \dots, m. \]

We define the mapping $\delta_n : [-1,1] \mapsto [\tau_{n-1}, \tau_n]$ by the relation

\[ \delta_n(z) = \frac{\tau_n - \tau_{n-1}}{2}\, z + \frac{\tau_n + \tau_{n-1}}{2}. \]
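The nodes and the subinterval map above can be sketched in a few lines; the helper names here are ours, not the paper's.

```python
import numpy as np

# Zeros of T_m in increasing order, z_{m-k+1} = cos((2k-1) pi / (2m)),
# and the affine map delta_n from [-1, 1] onto [tau_{n-1}, tau_n].

def cheb_zeros(m):
    k = np.arange(1, m + 1)
    return np.cos((2 * k - 1) * np.pi / (2 * m))[::-1]   # reversed -> increasing

def delta(z, a, b):
    """Map z in [-1, 1] to the subinterval [a, b] = [tau_{n-1}, tau_n]."""
    return 0.5 * (b - a) * z + 0.5 * (b + a)

z = cheb_zeros(5)   # the five zeros of T_5, increasing
```

Evaluating $T_5$ at these points via $T_5(z) = \cos(5 \arccos z)$ returns values at rounding level, confirming they are the zeros.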

Now, consider the linear Volterra integral equation (4). It may be written in the form

\[ v_p(t) = y(t) + \int_0^t H_p(t,s)\,ds + \int_0^t k_p(t,s)\, v_p(s)\,ds, \qquad p = 1, 2, \dots, \]
(6)

where

\[ H_p(t,s) = k(t,s,v_{p-1}(s)) - k_u(t,s,v_{p-1}(s))\, v_{p-1}(s), \]
(7)

and

\[ k_p(t,s) = k_u(t,s,v_{p-1}(s)), \qquad (t,s) \in D. \]
(8)

Definition 2. Suppose that $J_h$ is a given partition of $J$. The piecewise polynomial space $S_\mu^{(d)}(J_h)$ with $\mu \ge 0$, $-1 \le d \le \mu$ is defined by

\[ S_\mu^{(d)}(J_h) = \big\{ q(t) \in C^d[J,\mathbb{R}] : q|_{\sigma_n} \in \Pi_\mu, \; 1 \le n \le N \big\}, \]

where $\sigma_n = (\tau_{n-1}, \tau_n]$ and $\Pi_\mu$ denotes the space of polynomials of degree not exceeding $\mu$; it is easy to see that $S_\mu^{(d)}(J_h)$ is a linear vector space.

With this definition, we select the Lagrange polynomials constructed on the zeros of the Chebyshev polynomial $T_m(z)$ as a basis for $\Pi_{m-1}$ on the subinterval $\sigma_n$ and approximate the solution of the integral equation (6) in the piecewise polynomial space $S_{m-1}^{(d)}(J_h)$ by the collocation method. Then, letting $t_{n,k} = \delta_n(z_k)$ and taking $Y_h = \{t_{n,k} : n = 1, \dots, N, \; k = 1, \dots, m\}$ as the set of collocation points, the collocation solution $\hat{v}_p(t) \in S_{m-1}^{(d)}(J_h)$ on the subinterval $\sigma_n$ may be written as

\[ \hat{v}_p(t) = \hat{v}_p(\delta_n(z)) = \hat{v}_{p,n}(z) = \sum_{k=1}^m \hat{V}_{n,k}^p\, L_k(z), \qquad z \in (-1,1], \; n = 1, \dots, N, \]
(9)

for $p = 1, 2, \dots$, where $\hat{V}_{n,k}^p = \hat{v}_p(t_{n,k})$ and the $L_k(z)$ are the Lagrange polynomials defined by the relation

\[ L_k(z) = \prod_{\substack{j=1 \\ j \ne k}}^{m} \frac{z - z_j}{z_k - z_j}, \qquad k = 1, \dots, m. \]
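A direct implementation of this basis (names ours) makes its two defining properties easy to check: $L_k(z_j) = \delta_{kj}$, and the basis sums to one everywhere.

```python
import numpy as np

def lagrange_basis(z_nodes, z):
    """Evaluate all Lagrange polynomials L_k at the points z, row k = L_k(z)."""
    z_nodes = np.asarray(z_nodes, dtype=float)
    zz = np.atleast_1d(z).astype(float)
    m = len(z_nodes)
    L = np.ones((m, zz.size))
    for k in range(m):
        for j in range(m):
            if j != k:
                L[k] *= (zz - z_nodes[j]) / (z_nodes[k] - z_nodes[j])
    return L

nodes = np.cos((2 * np.arange(1, 5) - 1) * np.pi / 8)   # zeros of T_4
L_at_nodes = lagrange_basis(nodes, nodes)               # should be the identity
S = lagrange_basis(nodes, np.linspace(-1, 1, 9)).sum(axis=0)  # partition of unity
```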

We observe that this collocation solution need not be continuous on $J$. We must therefore take $d = -1$, so that $\hat{v}_p(t) \in S_{m-1}^{(-1)}(J_h)$. For $t \in \sigma_i$, Equation (6) may be written as

\[
\begin{aligned}
v_p(t) &= y(t) + \int_0^{\tau_{i-1}} H_p(t,s)\,ds + \int_{\tau_{i-1}}^t H_p(t,s)\,ds + \int_0^{\tau_{i-1}} k_p(t,s)\, v_p(s)\,ds + \int_{\tau_{i-1}}^t k_p(t,s)\, v_p(s)\,ds \\
&= y(t) + \sum_{j=1}^{i-1} \int_{\tau_{j-1}}^{\tau_j} \big[ H_p(t,s) + k_p(t,s)\, v_p(s) \big]\,ds + \int_{\tau_{i-1}}^t H_p(t,s)\,ds + \int_{\tau_{i-1}}^t k_p(t,s)\, v_p(s)\,ds \\
&= y(t) + \sum_{j=1}^{i-1} \frac{h_j}{2} \int_{-1}^1 \big[ H_p(t,\delta_j(z)) + k_p(t,\delta_j(z))\, v_{p,j}(z) \big]\,dz \\
&\quad + \frac{h_i}{2} \int_{-1}^{\delta_i^{-1}(t)} H_p(t,\delta_i(z))\,dz + \frac{h_i}{2} \int_{-1}^{\delta_i^{-1}(t)} k_p(t,\delta_i(z))\, v_{p,i}(z)\,dz,
\end{aligned}
\]
(10)

where $v_{p,i}(z) = v_p(\delta_i(z))$. The collocation equation is defined by replacing the exact solution $v_p(t)$ with the collocation solution $\hat{v}_p(t)$ in Equation (6) at the collocation points $Y_h$.

Then, using Equation (10) for ti,kY h , the collocation equation has the form

\[
\begin{aligned}
\hat{V}_{i,k}^p = \hat{v}_p(t_{i,k}) &= y(t_{i,k}) + \sum_{j=1}^{i-1} \frac{h_j}{2} \int_{-1}^1 \big[ H_p(t_{i,k},\delta_j(z)) + k_p(t_{i,k},\delta_j(z))\, \hat{v}_{p,j}(z) \big]\,dz \\
&\quad + \frac{h_i}{2} \int_{-1}^{z_k} H_p(t_{i,k},\delta_i(z))\,dz + \frac{h_i}{2} \int_{-1}^{z_k} k_p(t_{i,k},\delta_i(z))\, \hat{v}_{p,i}(z)\,dz \\
&= y(t_{i,k}) + \sum_{j=1}^{i-1} \frac{h_j}{2} \left[ \int_{-1}^1 H_p(t_{i,k},\delta_j(z))\,dz + \sum_{q=1}^m \hat{V}_{j,q}^p \int_{-1}^1 k_p(t_{i,k},\delta_j(z))\, L_q(z)\,dz \right] \\
&\quad + \frac{h_i}{2} \int_{-1}^{z_k} H_p(t_{i,k},\delta_i(z))\,dz + \frac{h_i}{2} \sum_{q=1}^m \hat{V}_{i,q}^p \int_{-1}^{z_k} k_p(t_{i,k},\delta_i(z))\, L_q(z)\,dz,
\end{aligned}
\]
(11)

for $k = 1, \dots, m$, where we have used Equation (9). Introducing the vector and matrix notations

\[
\begin{aligned}
\hat{V}_i^p &= \big[ \hat{V}_{i,1}^p, \dots, \hat{V}_{i,m}^p \big]^T, \qquad y_i = \big[ y(t_{i,1}), \dots, y(t_{i,m}) \big]^T, \qquad i = 1, \dots, N, \\
H_i^{p,j} &= \big[ h_i^{p,j}(1), \dots, h_i^{p,j}(m) \big]^T = \left[ \int_{-1}^1 H_p(t_{i,1},\delta_j(z))\,dz, \; \dots, \; \int_{-1}^1 H_p(t_{i,m},\delta_j(z))\,dz \right]^T, \\
H_i^p &= \big[ h_i^p(1), \dots, h_i^p(m) \big]^T = \left[ \int_{-1}^{z_1} H_p(t_{i,1},\delta_i(z))\,dz, \; \dots, \; \int_{-1}^{z_m} H_p(t_{i,m},\delta_i(z))\,dz \right]^T, \\
B_i^{p,j} &= \big[ b_i^{p,j}(k,q) \big] = \left[ \int_{-1}^1 k_p(t_{i,k},\delta_j(z))\, L_q(z)\,dz \right], \qquad k, q = 1, \dots, m, \\
B_i^p &= \big[ b_i^p(k,q) \big] = \left[ \int_{-1}^{z_k} k_p(t_{i,k},\delta_i(z))\, L_q(z)\,dz \right], \qquad k, q = 1, \dots, m, \quad j = 1, \dots, i-1,
\end{aligned}
\]
(12)

Equation (11) is transformed to the linear system

\[ \left( I_m - \frac{h_i}{2} B_i^p \right) \hat{V}_i^p = y_i + \frac{h_i}{2} H_i^p + \sum_{j=1}^{i-1} \frac{h_j}{2} \left( H_i^{p,j} + B_i^{p,j} \hat{V}_j^p \right), \qquad i = 1, \dots, N, \]
(13)

where $I_m$ is the $m \times m$ identity matrix.
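System (13) couples subinterval $i$ only to subintervals $j < i$, so the global collocation system is block lower triangular and can be solved step by step with one small $m \times m$ solve per subinterval. The sketch below demonstrates just that structure: random matrices stand in for the quadrature-built blocks $B_i^p$, $B_i^{p,j}$ and right-hand vectors, and the forward sweep is compared against one global block-triangular solve.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, h = 4, 3, 0.1
B_diag = [rng.standard_normal((m, m)) for _ in range(N)]          # stands in for B_i^p
B_low = [[rng.standard_normal((m, m)) for j in range(i)] for i in range(N)]
rhs = [rng.standard_normal(m) for _ in range(N)]                  # ~ y_i + (h/2) H_i^p + ...

# Forward sweep: for each i, solve (I - h/2 B_i) V_i = rhs_i + sum_{j<i} (h/2) B_ij V_j.
V = []
for i in range(N):
    b = rhs[i] + sum(0.5 * h * B_low[i][j] @ V[j] for j in range(i))
    V.append(np.linalg.solve(np.eye(m) - 0.5 * h * B_diag[i], b))

# Same thing as a single global block lower-triangular system.
A = np.zeros((N * m, N * m))
for i in range(N):
    A[i * m:(i + 1) * m, i * m:(i + 1) * m] = np.eye(m) - 0.5 * h * B_diag[i]
    for j in range(i):
        A[i * m:(i + 1) * m, j * m:(j + 1) * m] = -0.5 * h * B_low[i][j]
V_global = np.linalg.solve(A, np.concatenate(rhs))
diff = np.max(np.abs(np.concatenate(V) - V_global))
```

The step-by-step sweep is preferable in practice: it costs $N$ solves of size $m$ instead of one solve of size $Nm$.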

Fully discretizing using quadrature formulae and their errors

The fundamental theorem of orthonormal Fourier expansions in the space $C[-1,1]$ implies that any $f \in C[-1,1]$ has an expansion in Chebyshev polynomials as follows:

\[ f(z) = \sum_{r=0}^{\infty}{}' \, a_r T_r(z), \qquad a_r = \frac{2}{\pi} \int_{-1}^1 \frac{f(z)\, T_r(z)}{\sqrt{1 - z^2}}\,dz, \]

where the prime denotes that the first term is halved. Integrating both sides of this expansion gives:

\[ I f = \int_{-1}^1 f(z)\,dz = \sum_{r=0}^{\infty}{}' \, \frac{2 a_{2r}}{1 - 4r^2}. \]
(14)

We then approximate the integral (14) by the first $M + 1$ terms of the series:

\[ \int_{-1}^1 f(z)\,dz \approx \sum_{r=0}^{M}{}' \, \frac{2 a_{2r}}{1 - 4r^2}, \]

and the coefficients $a_r$ are in turn approximated by the closed Gauss-Chebyshev rule. This is the Clenshaw-Curtis quadrature for estimating the integral (14). We modify this rule and apply it to the components of the matrices $H_i^{p,j}$, $H_i^p$, $B_i^{p,j}$, and $B_i^p$ using the open Gauss-Chebyshev rule instead, because the open rule does not use the endpoints of the interval and needs one node fewer than the closed Gauss-Chebyshev rule for the same accuracy. The $M$-node open Gauss-Chebyshev rule for approximating $a_r$ has the form

\[ a_r \approx \tilde{a}_r^M = \frac{2}{M} \sum_{l=1}^{M} f\big(\xi_l^{(M)}\big) \cos\left( r\, \frac{2l-1}{2M}\,\pi \right), \qquad \xi_l^{(M)} = \cos\left( \frac{2l-1}{2M}\,\pi \right), \]
(15)

and we use the following as an approximation to the integral (14):

\[ I f \approx Q f = \sum_{r=0}^{M}{}' \, \frac{2 \tilde{a}_{2r}^M}{1 - 4r^2}. \]

For the components of the matrices $H_i^p$ and $B_i^p$, we use the change of variable $z = \theta_k(w) : [-1,1] \mapsto [-1, z_k]$ defined by

\[ \theta_k(w) = \frac{z_k + 1}{2}\, w + \frac{z_k - 1}{2}, \qquad k = 1, \dots, m. \]
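The open-node rule can be sketched directly from (15). One caveat in this sketch (ours, not the paper's): we truncate the series at $\lfloor M/2 \rfloor$ rather than $M$, since that is the range over which the $M$-node discrete coefficients $\tilde{a}_{2r}^M$ are free of aliasing; with that truncation the rule coincides with Fejér's first quadrature, which is exact for polynomials of degree below $M$.

```python
import numpy as np

def open_cc_quadrature(f, M):
    """Approximate int_{-1}^{1} f(z) dz from open Gauss-Chebyshev samples of f.

    Discrete even Chebyshev coefficients atilde_{2r} are formed as in (15) from
    the M interior nodes xi_l = cos((2l-1) pi / (2M)), then summed against the
    weights 2/(1 - 4 r^2); the series is truncated at floor(M/2) (see note above).
    """
    l = np.arange(1, M + 1)
    theta = (2 * l - 1) * np.pi / (2 * M)
    fvals = f(np.cos(theta))
    Q = 0.0
    for r in range(M // 2 + 1):
        a2r = (2.0 / M) * np.sum(fvals * np.cos(2 * r * theta))
        term = 2.0 * a2r / (1.0 - 4.0 * r**2)
        Q += 0.5 * term if r == 0 else term   # prime: first term halved
    return Q

Q_poly = open_cc_quadrature(lambda z: z**4, 8)   # exact value 2/5
Q_exp = open_cc_quadrature(np.exp, 16)           # exact value e - 1/e
```

For smooth integrands the accuracy improves rapidly with $M$, in line with the error analysis that follows.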

For the error analysis of this quadrature formula, using the equality (14) we have

\[ I f - Q f = \sum_{r=0}^{M}{}' \, \frac{2 \big( a_{2r} - \tilde{a}_{2r}^M \big)}{1 - 4r^2} + \sum_{r=M+1}^{\infty} \frac{2 a_{2r}}{1 - 4r^2}. \]
(16)

To estimate $a_r - \tilde{a}_r^M$, we utilize results of [16], which give the equality

\[ a_r - \tilde{a}_r^M = 2 \sum_{l=1}^{\infty} (-1)^l c_{r,2Ml}, \qquad c_{r,l} = \frac{2}{\pi} \int_{-1}^1 \frac{f(z)\, T_r(z)\, T_l(z)}{\sqrt{1 - z^2}}\,dz. \]
(17)

Moreover, the coefficients $c_{r,l}$ depend on the smoothness of the function $f(z)T_r(z)$ through the following lemma, which follows from results in [17].

Lemma 3. Let $f(z) \in C^p[-1,1]$, and suppose $f^{(p+1)}(z)$ exists and is continuous on $[-1,1]$ except possibly for a finite number of finite discontinuities. Then, for some $C_f$ and all $i > 0$, the Fourier coefficients $a_i$ satisfy the relation

\[ |a_i| \le C_f\, i^{-p-1}. \]

Substituting this result into (17), we find

\[ \big| a_r - \tilde{a}_r^M \big| \le \frac{2 C_f}{(2M)^{p+1}} \sum_{l=1}^{\infty} \frac{1}{l^{p+1}} \le \frac{2(p+1)\, C_f}{p\, (2M)^{p+1}}, \]
(18)

where $C_f$ is related to the function $f(z)T_r(z)$, and we have used the following relation from [18]:

\[ \sum_{l=R+1}^{\infty} (l - i)^{-r} \le \frac{(R + 1 - i)^{-r+1}}{r - 1}, \qquad r > 1. \]

Applying Equation (18) to (16) and reusing Lemma 3 for the tail coefficients $a_{2r}$, $r > M$, we obtain the bound

\[ |I f - Q f| \le \frac{4(p+1)\, C_f}{p\, (2M)^{p+1}} + \frac{C_f}{(2M+1)(2M+2)^{p+1}} = L_M = O\big( (2M)^{-p-1} \big), \]
(19)

where the constant C f is related to the function f(z).

According to this numerical quadrature, we define the fully discretized matrices related to the matrices (12) as

\[ \tilde{H}_i^{p,j} = \big[ \tilde{h}_i^{p,j}(1), \dots, \tilde{h}_i^{p,j}(m) \big]^T, \quad \tilde{H}_i^p = \big[ \tilde{h}_i^p(1), \dots, \tilde{h}_i^p(m) \big]^T, \quad \tilde{B}_i^{p,j} = \big[ \tilde{b}_i^{p,j}(k,q) \big], \quad \tilde{B}_i^p = \big[ \tilde{b}_i^p(k,q) \big], \quad k, q = 1, \dots, m, \]

and rewrite the linear system (13) in terms of them:

\[ \left( I_m - \frac{h_i}{2} \tilde{B}_i^p \right) \tilde{V}_i^p = y_i + \frac{h_i}{2} \tilde{H}_i^p + \sum_{j=1}^{i-1} \frac{h_j}{2} \left( \tilde{H}_i^{p,j} + \tilde{B}_i^{p,j} \tilde{V}_j^p \right), \qquad i = 1, \dots, N, \]
(20)

where $\tilde{V}_i^p = \big[ \tilde{V}_{i,1}^p, \dots, \tilde{V}_{i,m}^p \big]^T$. Then, we define the fully discretized solution $\tilde{v}_p(t) \in S_{m-1}^{(-1)}(J_h)$ on $\sigma_i$ by the relation

\[ \tilde{v}_p(t) = \tilde{v}_p(\delta_i(z)) = \tilde{v}_{p,i}(z) = \sum_{k=1}^m \tilde{V}_{i,k}^p\, L_k(z), \qquad z \in (-1,1]. \]

Theorem 2. There exists an $\bar{h} > 0$ such that for any partition $J_h$ with $0 < h < \bar{h}$, the linear system (20) has a unique solution $\tilde{V}_i^p$ for $1 \le i \le N$ and $p = 1, 2, \dots$.

Proof. We first give a bound for $\tilde{B}_i^p$ and then use Lemma 1. The matrix $\tilde{B}_i^p$ has the components

\[ \tilde{b}_i^p(k,q) = \sum_{r=0}^{M}{}' \, \frac{2 \tilde{a}_{2r}^M}{1 - 4r^2}, \qquad k, q = 1, \dots, m, \]
\[ \tilde{a}_r^M = \frac{1 + z_k}{M} \sum_{l=1}^{M} k_p\Big( t_{i,k},\, \delta_i\big( \theta_k(\xi_l^{(M)}) \big) \Big)\, L_q\big( \theta_k(\xi_l^{(M)}) \big) \cos\left( r\, \frac{2l-1}{2M}\,\pi \right). \]

Then,

\[ \big| \tilde{a}_r^M \big| < 2 K_u \varphi, \qquad \big\| \tilde{B}_i^p \big\| < \sum_{r=0}^{M}{}' \, \frac{4 K_u \varphi}{|1 - 4r^2|} \le 4 K_u \varphi, \]
\[ K_u = \max_{\Omega} |k_u(t,s,u)|, \qquad \varphi = \max_{q=1,\dots,m} \|L_q(z)\|. \]

Now, using Lemma 1, the matrix $I_m - \frac{h_i}{2} \tilde{B}_i^p$ is nonsingular if we take

\[ 0 < h < \frac{1}{2 K_u \varphi} = \bar{h}, \]

and for this selection

\[ \left\| \left( I_m - \frac{h_i}{2} \tilde{B}_i^p \right)^{-1} \right\| \le \frac{1}{1 - \frac{h_i}{2} \big\| \tilde{B}_i^p \big\|} \le \frac{1}{1 - 2h K_u \varphi}, \]

and the linear system (20) has the unique solution

\[ \tilde{V}_i^p = \left( I_m - \frac{h_i}{2} \tilde{B}_i^p \right)^{-1} \left[ y_i + \frac{h_i}{2} \tilde{H}_i^p + \sum_{j=1}^{i-1} \frac{h_j}{2} \left( \tilde{H}_i^{p,j} + \tilde{B}_i^{p,j} \tilde{V}_j^p \right) \right], \qquad i = 1, \dots, N. \]
(21)

It is easy to see that with this choice of $h$, the linear system (13) also has a unique solution, since

\[ \big\| B_i^p \big\| \le 2 K_u \varphi. \]

For the complete error analysis in the next section, we will need the following bounds, all of which follow from what has been established so far:

\[
\begin{aligned}
&\big\| B_i^p \big\| \le 2 K_u \varphi, \qquad \left\| \left( I_m - \frac{h_i}{2} B_i^p \right)^{-1} \right\| \le \frac{1}{1 - \frac{h_i}{2} \big\| B_i^p \big\|} \le \frac{1}{1 - h K_u \varphi} = \alpha_h, \\
&\big\| B_i^{p,j} \big\| \le 2 K_u \varphi, \qquad \big\| \tilde{B}_i^{p,j} \big\| \le 4 K_u \varphi, \qquad \big\| \tilde{B}_i^p \big\| \le 4 K_u \varphi, \\
&\big\| H_i^p \big\| \le 2\big( K + K_u \|u\| \big), \qquad \big\| H_i^{p,j} \big\| \le 2\big( K + K_u \|u\| \big), \qquad K = \max_{\Omega} |k(t,s,u)|, \\
&\big\| \tilde{H}_i^p \big\| \le 4\big( K + K_u \|u\| \big), \qquad \big\| \tilde{H}_i^{p,j} \big\| \le 4\big( K + K_u \|u\| \big), \\
&\left\| \left( I_m - \frac{h_i}{2} \tilde{B}_i^p \right)^{-1} \right\| \le \frac{1}{1 - \frac{h_i}{2} \big\| \tilde{B}_i^p \big\|} \le \frac{1}{1 - 2h K_u \varphi} = \beta_h, \\
&\big\| E_{H_i^p} \big\| = \big\| H_i^p - \tilde{H}_i^p \big\| \le L_M, \qquad \big\| E_{H_i^{p,j}} \big\| = \big\| H_i^{p,j} - \tilde{H}_i^{p,j} \big\| \le L_M, \\
&\big\| E_{B_i^p} \big\| = \big\| B_i^p - \tilde{B}_i^p \big\| \le L_M, \qquad \big\| E_{B_i^{p,j}} \big\| = \big\| B_i^{p,j} - \tilde{B}_i^{p,j} \big\| \le L_M, \qquad L_M = O\big( (2M)^{-p-1} \big).
\end{aligned}
\]
(22)

The last error bounds are obtained by applying Equation (19) to the entries of the matrices $H_i^p - \tilde{H}_i^p$, $H_i^{p,j} - \tilde{H}_i^{p,j}$, $B_i^p - \tilde{B}_i^p$, and $B_i^{p,j} - \tilde{B}_i^{p,j}$.

Convergence of the fully discretized solution

To analyze the error of the fully discretized solution, we employ the Cauchy remainder for polynomial interpolation on $\sigma_i$:

\[
\begin{aligned}
v_p(t) = v_{p,i}(z) &= P_{m-1} v_p(t) + \frac{\prod_{k=1}^m (t - t_{i,k})}{m!}\, v_p^{(m)}(\xi) \\
&= \sum_{k=1}^m V_{i,k}^p\, L_k(z) + \frac{2 T_m(z)}{m!} \left( \frac{h_i}{4} \right)^m v_p^{(m)}(\xi) \\
&= \sum_{k=1}^m V_{i,k}^p\, L_k(z) + R_{m,i}(z;\xi), \qquad z \in (-1,1],
\end{aligned}
\]
(23)

where $V_{i,k}^p = v_p(t_{i,k})$. The error function $\varepsilon_p(t) = v_p(t) - \tilde{v}_p(t)$ on $\sigma_i$ satisfies

\[ \varepsilon_p(t) = \varepsilon_{p,i}(z) = \sum_{k=1}^m \epsilon_{i,k}^p\, L_k(z) + R_{m,i}(z;\xi), \]
(24)

with $\epsilon_{i,k}^p = V_{i,k}^p - \tilde{V}_{i,k}^p$. Then,

\[ \big\| \varepsilon_{p,i}(z) \big\| \le m \varphi\, \big\| \epsilon_i^p \big\| + \frac{2}{m!} \left( \frac{h}{4} \right)^m \big\| v_p^{(m)} \big\|, \qquad i = 1, \dots, N, \]
(25)

where $\epsilon_i^p = \big[ \epsilon_{i,1}^p, \dots, \epsilon_{i,m}^p \big]^T$. Now, we need a bound for $\|\epsilon_i^p\|$. Substituting Equation (23) into Equation (10) for $t = t_{i,k}$ and applying the notation (12), we obtain

\[
\begin{aligned}
V_{i,k}^p = v_p(t_{i,k}) &= y(t_{i,k}) + \sum_{j=1}^{i-1} \frac{h_j}{2} \left[ h_i^{p,j}(k) + \sum_{q=1}^m V_{j,q}^p\, b_i^{p,j}(k,q) \right] + \frac{h_i}{2}\, h_i^p(k) + \frac{h_i}{2} \sum_{q=1}^m V_{i,q}^p\, b_i^p(k,q) + r_{i,k}^p, \\
r_{i,k}^p &= \sum_{j=1}^{i-1} \frac{h_j}{2} \int_{-1}^1 k_p(t_{i,k},\delta_j(z))\, R_{m,j}(z;\xi)\,dz + \frac{h_i}{2} \int_{-1}^{z_k} k_p(t_{i,k},\delta_i(z))\, R_{m,i}(z;\xi)\,dz,
\end{aligned}
\]
(26)

for k=1,…,m. The fully discretized version of this equation is

\[ \tilde{V}_{i,k}^p = y(t_{i,k}) + \sum_{j=1}^{i-1} \frac{h_j}{2} \left[ \tilde{h}_i^{p,j}(k) + \sum_{q=1}^m \tilde{V}_{j,q}^p\, \tilde{b}_i^{p,j}(k,q) \right] + \frac{h_i}{2}\, \tilde{h}_i^p(k) + \frac{h_i}{2} \sum_{q=1}^m \tilde{V}_{i,q}^p\, \tilde{b}_i^p(k,q). \]

From these two equations, we obtain

\[
\begin{aligned}
\epsilon_{i,k}^p &= \sum_{j=1}^{i-1} \frac{h_j}{2} \sum_{q=1}^m \left[ V_{j,q}^p\, b_i^{p,j}(k,q) - \tilde{V}_{j,q}^p\, \tilde{b}_i^{p,j}(k,q) \right] + \frac{h_i}{2} \sum_{q=1}^m \left[ V_{i,q}^p\, b_i^p(k,q) - \tilde{V}_{i,q}^p\, \tilde{b}_i^p(k,q) \right] \\
&\quad + \sum_{j=1}^{i-1} \frac{h_j}{2}\, E_{h_i^{p,j}}(k) + \frac{h_i}{2}\, E_{h_i^p}(k) + r_{i,k}^p \\
&= \sum_{j=1}^{i-1} \frac{h_j}{2} \sum_{q=1}^m b_i^{p,j}(k,q)\, \epsilon_{j,q}^p + \sum_{j=1}^{i-1} \frac{h_j}{2} \sum_{q=1}^m \tilde{V}_{j,q}^p\, E_{b_i^{p,j}}(k,q) + \frac{h_i}{2} \sum_{q=1}^m b_i^p(k,q)\, \epsilon_{i,q}^p \\
&\quad + \frac{h_i}{2} \sum_{q=1}^m \tilde{V}_{i,q}^p\, E_{b_i^p}(k,q) + \sum_{j=1}^{i-1} \frac{h_j}{2}\, E_{h_i^{p,j}}(k) + \frac{h_i}{2}\, E_{h_i^p}(k) + r_{i,k}^p, \qquad k = 1, \dots, m,
\end{aligned}
\]

where $E_{h_i^{p,j}}(k) = h_i^{p,j}(k) - \tilde{h}_i^{p,j}(k)$ and $E_{h_i^p}(k) = h_i^p(k) - \tilde{h}_i^p(k)$. This is equivalent to the linear system

\[ \left( I_m - \frac{h_i}{2} B_i^p \right) \epsilon_i^p = \sum_{j=1}^{i-1} \frac{h_j}{2} \left( E_{H_i^{p,j}} + B_i^{p,j} \epsilon_j^p + E_{B_i^{p,j}} \tilde{V}_j^p \right) + \frac{h_i}{2}\, E_{H_i^p} + \frac{h_i}{2}\, E_{B_i^p} \tilde{V}_i^p + r_i^p, \]

where $r_i^p = \big[ r_{i,1}^p, \dots, r_{i,m}^p \big]^T$. With the selection $0 < h < \bar{h}$, this system has the unique solution

\[ \epsilon_i^p = \left( I_m - \frac{h_i}{2} B_i^p \right)^{-1} \left[ \sum_{j=1}^{i-1} \frac{h_j}{2} \left( E_{H_i^{p,j}} + B_i^{p,j} \epsilon_j^p + E_{B_i^{p,j}} \tilde{V}_j^p \right) + \frac{h_i}{2}\, E_{H_i^p} + \frac{h_i}{2}\, E_{B_i^p} \tilde{V}_i^p + r_i^p \right]. \]

We use this equation to find a bound for $\|\epsilon_i^p\|$. Taking norms on both sides yields

\[ \big\| \epsilon_i^p \big\| \le \left\| \left( I_m - \frac{h_i}{2} B_i^p \right)^{-1} \right\| \left[ \sum_{j=1}^{i-1} \frac{h_j}{2} \left( \big\| E_{H_i^{p,j}} \big\| + \big\| B_i^{p,j} \big\| \big\| \epsilon_j^p \big\| + \big\| E_{B_i^{p,j}} \big\| \big\| \tilde{V}_j^p \big\| \right) + \frac{h_i}{2} \big\| E_{H_i^p} \big\| + \frac{h_i}{2} \big\| E_{B_i^p} \big\| \big\| \tilde{V}_i^p \big\| + \big\| r_i^p \big\| \right], \]

and employing the bounds (22), we get

\[ \big\| \epsilon_i^p \big\| \le \alpha_h \left[ h K_u \varphi \sum_{j=1}^{i-1} \big\| \epsilon_j^p \big\| + \frac{h}{2}\, L_M \sum_{j=1}^{i} \big\| \tilde{V}_j^p \big\| + \frac{ih}{2}\, L_M + \big\| r_i^p \big\| \right], \]
(27)

but from Equation (21), $\tilde{V}_i^p$ satisfies the inequality

\[
\begin{aligned}
\big\| \tilde{V}_i^p \big\| &\le \beta_h \left[ \|y\| + \frac{h_i}{2} \big\| \tilde{H}_i^p \big\| + \sum_{j=1}^{i-1} \frac{h_j}{2} \left( \big\| \tilde{H}_i^{p,j} \big\| + \big\| \tilde{B}_i^{p,j} \big\| \big\| \tilde{V}_j^p \big\| \right) \right] \\
&\le \beta_h \left[ \|y\| + 2ih \big( K + K_u \|u\| \big) + 2h K_u \varphi \sum_{j=1}^{i-1} \big\| \tilde{V}_j^p \big\| \right] \\
&= \beta_h \|y\| + 2i \beta_h h \big( K + K_u \|u\| \big) + (\beta_h - 1) \sum_{j=1}^{i-1} \big\| \tilde{V}_j^p \big\|;
\end{aligned}
\]

then, applying Lemma 2 to this inequality and simplifying, we deduce

\[
\begin{aligned}
\big\| \tilde{V}_i^p \big\| &\le \beta_h \|y\| + 2i \beta_h h \big( K + K_u \|u\| \big) + (\beta_h - 1) \sum_{j=1}^{i-1} \Big[ \beta_h \|y\| + 2j \beta_h h \big( K + K_u \|u\| \big) \Big] \prod_{k=j+1}^{i-1} \beta_h \\
&= \|y\|\, \beta_h^i + \frac{2h \beta_h}{\beta_h - 1} \big( K + K_u \|u\| \big) \big( \beta_h^i - 1 \big).
\end{aligned}
\]

From this relation, we get

\[ \sum_{j=1}^{i} \big\| \tilde{V}_j^p \big\| \le \|y\|\, \frac{\beta_h^i - 1}{2h K_u \varphi} + \frac{K + K_u \|u\|}{K_u \varphi} \left( \frac{\beta_h^i - 1}{2h K_u \varphi} - i \right), \]

and a bound for $\| r_i^p \|$ is obtained using Equation (26) as follows:

\[ \big\| r_i^p \big\| = \max_k \big| r_{i,k}^p \big| \le \frac{h}{2}(i-1)\, \frac{4 K_u}{m!} \left( \frac{h}{4} \right)^m \big\| v_p^{(m)} \big\| + \frac{h}{2} \big( z_m + 1 \big)\, \frac{2 K_u}{m!} \left( \frac{h}{4} \right)^m \big\| v_p^{(m)} \big\| \le \frac{2h K_u}{m!} \left( \frac{h}{4} \right)^m \big\| v_p^{(m)} \big\|\, i = C_m\, i. \]

Substituting these two results into Equation (27), simplifying, and noting that $h K_u \varphi\, \alpha_h = \alpha_h - 1$, we have

\[ \big\| \epsilon_i^p \big\| \le \left( \frac{\|y\|}{4 K_u \varphi} + \frac{K + K_u \|u\|}{(2 K_u \varphi)^2} \right) \alpha_h \big( \beta_h^i - 1 \big) L_M + \left( \alpha_h C_m + \alpha_h \frac{h}{2}\, L_M - \alpha_h \frac{h}{2}\, \frac{K + K_u \|u\|}{K_u \varphi}\, L_M \right) i + (\alpha_h - 1) \sum_{j=1}^{i-1} \big\| \epsilon_j^p \big\|. \]

We apply Lemma 2 again to this inequality and conclude

\[
\begin{aligned}
\big\| \epsilon_i^p \big\| &\le \left( \frac{\|y\|}{4 K_u \varphi} + \frac{K + K_u \|u\|}{(2 K_u \varphi)^2} \right) \alpha_h \big( \beta_h^i - 1 \big) L_M + \left( \alpha_h C_m + \alpha_h \frac{h}{2}\, L_M - \alpha_h \frac{h}{2}\, \frac{K + K_u \|u\|}{K_u \varphi}\, L_M \right) i \\
&\quad + (\alpha_h - 1)\, \alpha_h^{i-1} \left[ \left( \frac{\|y\|}{4 K_u \varphi} + \frac{K + K_u \|u\|}{(2 K_u \varphi)^2} \right) \alpha_h L_M \left( \sum_{j=1}^{i-1} \left( \frac{\beta_h}{\alpha_h} \right)^j - \sum_{j=1}^{i-1} \frac{1}{\alpha_h^j} \right) \right. \\
&\qquad \left. + \left( \alpha_h C_m + \alpha_h \frac{h}{2}\, L_M - \alpha_h \frac{h}{2}\, \frac{K + K_u \|u\|}{K_u \varphi}\, L_M \right) \sum_{j=1}^{i-1} \frac{j}{\alpha_h^j} \right].
\end{aligned}
\]
(28)

Calculating the sums separately

\[ \sum_{j=1}^{i-1} \frac{j}{\alpha_h^j} = \sum_{j=1}^{i-1} \sum_{k=j}^{i-1} \frac{1}{\alpha_h^k} = \frac{1}{\alpha_h - 1} \left[ \frac{1}{\alpha_h - 1} \left( \alpha_h - \frac{1}{\alpha_h^{i-2}} \right) - \frac{i-1}{\alpha_h^{i-1}} \right], \qquad \sum_{j=1}^{i-1} \left( \frac{\beta_h}{\alpha_h} \right)^j = \frac{\beta_h^i\, \alpha_h^{1-i} - \beta_h}{\beta_h - \alpha_h}, \qquad \sum_{j=1}^{i-1} \frac{1}{\alpha_h^j} = \frac{1 - \alpha_h^{1-i}}{\alpha_h - 1}, \]

and substituting them into Equation (28), we get

\[ \big\| \epsilon_i^p \big\| \le \left( \frac{\|y\|}{4 K_u \varphi} + \frac{K + K_u \|u\|}{(2 K_u \varphi)^2} \right) \frac{\alpha_h (\beta_h - 1)}{\beta_h - \alpha_h} \big( \beta_h^i - \alpha_h^i \big) L_M + \left( C_m + \frac{h}{2}\, L_M - \frac{h}{2}\, \frac{K + K_u \|u\|}{K_u \varphi}\, L_M \right) \frac{\alpha_h}{\alpha_h - 1} \big( \alpha_h^i - 1 \big), \]

and we use the following relations:

\[ \frac{\alpha_h}{\alpha_h - 1} = \frac{1}{h K_u \varphi}, \qquad \frac{\alpha_h (\beta_h - 1)}{\beta_h - \alpha_h} = 2. \]

Recalling the value of $C_m$, this yields

\[ \big\| \epsilon_i^p \big\| \le \frac{2 \big( \alpha_h^i - 1 \big)}{\varphi\, m!} \left( \frac{h}{4} \right)^m \big\| v_p^{(m)} \big\| + \frac{1}{2 K_u \varphi} \left[ \left( \|y\| + \frac{K + K_u \|u\|}{K_u \varphi} \right) \big( \beta_h^i - \alpha_h^i \big) + \left( 1 - \frac{K + K_u \|u\|}{K_u \varphi} \right) \big( \alpha_h^i - 1 \big) \right] L_M. \]

Setting this bound into Equation (25), we obtain the main result:

\[ \big\| \varepsilon_{p,i}(z) \big\| \le \Big[ m \big( \alpha_h^i - 1 \big) + 1 \Big] \frac{2}{m!} \left( \frac{h}{4} \right)^m \big\| v_p^{(m)} \big\| + \frac{m}{2 K_u} \left[ \left( \|y\| + \frac{K + K_u \|u\|}{K_u \varphi} \right) \big( \beta_h^i - \alpha_h^i \big) + \left( 1 - \frac{K + K_u \|u\|}{K_u \varphi} \right) \big( \alpha_h^i - 1 \big) \right] L_M. \]

In this error bound, the behavior of $\alpha_h^i$ and $\beta_h^i$ as $N$ increases must be examined. To do this, for simplicity we assume that the partition $J_h$ is uniform, that is, $h = T/N$. With this selection, for large $N$,

\[ \alpha_h^i = \left( 1 - \frac{T K_u \varphi}{N} \right)^{-i} \le \left( 1 - \frac{T K_u \varphi}{N} \right)^{-N} \longrightarrow e^{T K_u \varphi}, \]
\[ \beta_h^i - \alpha_h^i = \alpha_h^i \left[ \left( \frac{\beta_h}{\alpha_h} \right)^i - 1 \right] \le \left( 1 - \frac{T K_u \varphi}{N} \right)^{-N} \left[ \left( 1 + \frac{T K_u \varphi}{N - 2T K_u \varphi} \right)^N - 1 \right] \longrightarrow e^{T K_u \varphi} \big( e^{T K_u \varphi} - 1 \big), \]

and we deduce that, as $N$ increases, the error bound is asymptotically

\[ \big\| \varepsilon_p(z) \big\| \le \Big[ m \big( e^{T K_u \varphi} - 1 \big) + 1 \Big] \frac{2}{m!} \left( \frac{h}{4} \right)^m \big\| v_p^{(m)} \big\| + \frac{m}{2 K_u} \left[ \left( \|y\| + \frac{K + K_u \|u\|}{K_u \varphi} \right) e^{T K_u \varphi} \big( e^{T K_u \varphi} - 1 \big) + \left( 1 - \frac{K + K_u \|u\|}{K_u \varphi} \right) \big( e^{T K_u \varphi} - 1 \big) \right] L_M. \]

Having derived the fully discretized solution $\tilde{v}_p(t)$, an approximation to the solution $v_p(t)$ of the linear integral equations (6), we can write the general error bound as follows:

\[
\begin{aligned}
\big\| u(t) - \tilde{v}_p(t) \big\| &\le \big\| u(t) - v_p(t) \big\| + \big\| v_p(t) - \tilde{v}_p(t) \big\| \\
&\le \big\| u(t) - v_p(t) \big\| + \Big[ m \big( e^{T K_u \varphi} - 1 \big) + 1 \Big] \frac{2}{m!} \left( \frac{h}{4} \right)^m \big\| v_p^{(m)} \big\| \\
&\quad + \frac{m}{2 K_u} \left[ \left( \|y\| + \frac{K + K_u \|u\|}{K_u \varphi} \right) e^{T K_u \varphi} \big( e^{T K_u \varphi} - 1 \big) + \left( 1 - \frac{K + K_u \|u\|}{K_u \varphi} \right) \big( e^{T K_u \varphi} - 1 \big) \right] L_M.
\end{aligned}
\]

The first part of this general error bound is due to the quasilinearization, which was proven in Theorem 1 to be quadratically convergent. The second term is due to the collocation, which has convergence order $O(h^m)$, and the third part is due to the quadrature formula, which converges with order $O\big( (2M)^{-p-1} \big)$.

Numerical examples

In what follows, the method presented in this paper is applied to solve three numerical examples of the integral equation (3).

Example 1. The following integral equation is discussed in [19]:

\[ u(t) = \frac{1}{15} \left( -2t^6 + 5t^4 - 15t^2 + 8t + 20 \right) + \int_{-1}^{t} (2s - t)\, u^2(s)\,ds, \]
(29)

where $-1 \le t \le 1$, and $k(t,s,u(s)) = (2s - t)\, u^2(s)$ is convex and nondecreasing with respect to $u$ for $u \ge 0$. The exact solution is $u(t) = 1 - t^2$, and $v_0(t) = 0$ is a lower solution. The iterative scheme is

\[ v_p(t) = \frac{1}{15} \left( -2t^6 + 5t^4 - 15t^2 + 8t + 20 \right) - \int_{-1}^{t} (2s - t)\, v_{p-1}^2(s)\,ds + 2 \int_{-1}^{t} (2s - t)\, v_{p-1}(s)\, v_p(s)\,ds. \]

We employed the presented method with $N = 4$, $m = 3$, and $M = 4$. The absolute values of the errors for Equation (29) are shown in Table 1 and Figure 1; these results verify the convergence of the sequence $\{\hat{v}_p(t)\}$ to the exact solution.
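As a sanity check on the data of Example 1 as reconstructed here (forcing term $y(t) = (-2t^6 + 5t^4 - 15t^2 + 8t + 20)/15$ and kernel $(2s - t)u^2(s)$), one can verify numerically that $u(t) = 1 - t^2$ makes the residual of Equation (29) vanish; this uses a plain composite trapezoid rule, not the paper's method.

```python
import numpy as np

def residual(t, n=20001):
    """Residual of u = 1 - t^2 in u(t) = y(t) + int_{-1}^t (2s - t) u(s)^2 ds."""
    s = np.linspace(-1.0, t, n)
    g = (2 * s - t) * (1 - s**2) ** 2              # kernel times u(s)^2
    integral = np.sum((g[1:] + g[:-1]) * np.diff(s)) / 2   # composite trapezoid
    y = (-2 * t**6 + 5 * t**4 - 15 * t**2 + 8 * t + 20) / 15
    return y + integral - (1 - t**2)

res = max(abs(residual(t)) for t in (-0.5, 0.0, 0.3, 0.9))
```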

Figure 1

The convergence of the sequence { v ̂ p (t)} to the exact solution of Example 1.

Table 1 Absolute errors for Example 1: N = 4, m = 3 and M = 4

Example 2. The second example,

\[ u(t) = \sqrt{t}\; e^{-\sqrt{t}} + \frac{1}{2} \int_0^t \sqrt{\frac{t}{s}}\; e^{\, u(s) - \sqrt{t}}\,ds, \]

has the exact solution $u(t) = \sqrt{t}$ and the lower solution $v_0(t) = \frac{\sqrt{t}}{2}$. The absolute values of the errors are presented in Table 2 and Figure 2.
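A quick numerical check of Example 2 as read here: $u(t) = \sqrt{t}$ should make the residual of the equation $u(t) = \sqrt{t}\, e^{-\sqrt{t}} + \tfrac{1}{2} \int_0^t \sqrt{t/s}\, e^{u(s) - \sqrt{t}}\,ds$ vanish. The substitution $s = w^2$ removes the $1/\sqrt{s}$ singularity before applying a trapezoid rule; again this is only a verification sketch, not the paper's method.

```python
import numpy as np

def residual(t, n=20001):
    rt = np.sqrt(t)
    w = np.linspace(0.0, rt, n)            # s = w^2, so u(s) = w and ds = 2w dw
    g = rt * np.exp(w - rt)                # (1/2) sqrt(t/s) e^{u(s)-sqrt(t)} * 2w
    integral = np.sum((g[1:] + g[:-1]) * np.diff(w)) / 2
    return rt * np.exp(-rt) + integral - rt

res = max(abs(residual(t)) for t in (0.25, 1.0, 2.0))
```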

Figure 2

The convergence of the sequence { v ̂ p (t)} to the exact solution of Example 2.

Table 2 Absolute errors for Examples 2 and 3: N = 4

Example 3. The third example is the following integral equation:

\[ u(t) = e^t + e^t \sin(e^t) - e^t \sin(1) - \int_0^t e^{s+t} \cos(u(s))\,ds, \qquad 0 \le t \le 0.74. \]

This equation has the exact solution $u(t) = e^t$ and the lower solution $v_0(t) = 1 + t$. Table 2 shows the absolute values of the errors, and the convergence of the sequence $\{\hat{v}_p(t)\}$ to the exact solution is shown in Figure 3.
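As with the previous examples, the equation of Example 3 as reconstructed here can be checked numerically: with forcing $y(t) = e^t + e^t\sin(e^t) - e^t\sin(1)$, the residual of $u(t) = e^t$ should vanish (indeed, $\int_0^t e^{s+t}\cos(e^s)\,ds = e^t(\sin(e^t) - \sin 1)$).

```python
import numpy as np

def residual(t, n=20001):
    s = np.linspace(0.0, t, n)
    g = np.exp(s + t) * np.cos(np.exp(s))          # e^{s+t} cos(u(s)) with u = e^s
    integral = np.sum((g[1:] + g[:-1]) * np.diff(s)) / 2
    y = np.exp(t) * (1 + np.sin(np.exp(t)) - np.sin(1.0))
    return y - integral - np.exp(t)

res = max(abs(residual(t)) for t in (0.2, 0.5, 0.74))
```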

Figure 3

The convergence of the sequence { v ̂ p (t)} to the exact solution of Example 3.

Conclusions

In this paper, we derived an error bound for the numerical solution of nonlinear Volterra integral equations that accounts separately for the quasilinearization, collocation, and numerical quadrature errors, and we showed the effects of the collocation approximation and the numerical quadrature on the accuracy of the approximate solution. The numerical examples provided confirm our results.