A novel operational matrix method based on shifted Legendre polynomials for solving second-order boundary value problems involving singular, singularly perturbed and Bratu-type equations

In this article, a new operational matrix method based on shifted Legendre polynomials is presented and analyzed for obtaining numerical spectral solutions of linear and nonlinear second-order boundary value problems. The method is essentially based on reducing the differential equations, together with their boundary conditions, to systems of linear or nonlinear algebraic equations in the expansion coefficients of the sought spectral solutions. Linear differential equations are treated by applying the Petrov-Galerkin method, while nonlinear equations are treated by applying the collocation method. A convergence analysis and some illustrative examples, including singular, singularly perturbed and Bratu-type equations, are presented to ascertain the validity, wide applicability and efficiency of the proposed method. The obtained numerical results compare favorably with the analytical solutions and are more accurate than those produced by some other existing techniques in the literature.


Introduction
Spectral methods (see, for instance, [1][2][3][4][5][6]) are one of the principal discretization techniques for the numerical solution of differential equations. Their main advantage lies in the accuracy attainable for a given number of unknowns: for smooth problems in simple geometries, they offer exponential rates of convergence (spectral accuracy), whereas finite difference and finite element methods yield only algebraic convergence rates. Three variants, namely the Galerkin, collocation, and tau methods, are used extensively in the literature. Collocation methods [7,8] have become increasingly popular for solving differential equations, since they provide highly accurate solutions to nonlinear differential equations. The Petrov-Galerkin method is widely used for solving ordinary and partial differential equations; see, for example, [9][10][11][12][13]. Petrov-Galerkin methods [14] have generally come to be known as ''stabilized'' formulations, because they prevent spatial oscillations and sometimes yield nodally exact solutions where the classical Galerkin method would fail badly. The difference between the two is that in the Galerkin method the test and trial functions are the same, while in the Petrov-Galerkin method they are not.
The subject of nonlinear differential equations is a well-established part of mathematics, and its systematic development goes back to the early days of calculus. Many recent advances in mathematics, paralleled by a renewed and flourishing interaction between mathematics, the sciences, and engineering, have again shown that many phenomena in the applied sciences can be modeled by differential equations, and that these models yield a mathematical explanation of the phenomena, at least in some approximate sense.
Even-order differential equations have been extensively discussed by a large number of authors due to their great importance in various applications in many fields. For example, in the sequence of papers [12,[15][16][17], the authors dealt with such equations by the Galerkin method. They constructed suitable basis functions which satisfy the boundary conditions of the given differential equation, using compact combinations of various orthogonal polynomials. The algorithms suggested in these articles are suitable for handling one- and two-dimensional linear high even-order boundary value problems. In this paper, we aim to give algorithms for handling both linear and nonlinear second-order boundary value problems, based on introducing a new operational matrix of derivatives and then applying the Petrov-Galerkin method to linear equations and the collocation method to nonlinear equations.
Among the important classes of differential equations are the singular and singularly perturbed problems (SPPs), which arise in several branches of applied mathematics, such as quantum mechanics, fluid dynamics, elasticity, chemical reactor theory, and the theory of gas porous electrodes. The presence of a small parameter in these problems prevents standard numerical methods from obtaining satisfactory solutions. It is a well-known fact that the solutions of SPPs have a multiscale character: there are thin layer(s) where the solution varies very rapidly, while away from the layer(s) the solution behaves regularly and varies slowly.
Also, among the second-order boundary value problems is the one-dimensional Bratu problem which has a long history. Bratu's own article appeared in 1914 [19]; generalizations are sometimes called the Liouville-Gelfand or Liouville-Gelfand-Bratu problem in honor of Gelfand [20] and the nineteenth century work of the great French mathematician Liouville. In recent years, it has been a popular testbed for numerical and perturbation methods [21][22][23][24][25][26][27].
Simplification of the solid fuel ignition model in thermal combustion theory yields an elliptic nonlinear partial differential equation, namely the Bratu problem. Owing to its use in a large variety of applications, many authors have contributed to the study of this problem. Applications of the Bratu problem include the model of thermal reaction processes, the Chandrasekhar model of the expansion of the Universe, chemical reaction theory, nanotechnology, and radiative heat transfer (see [28][29][30][31][32]).
The Bratu problem is a nonlinear BVP that is extensively used as a benchmark to test the accuracy of numerical methods. It is given by

y''(x) + λ e^{y(x)} = 0, 0 < x < 1, y(0) = y(1) = 0,

where λ > 0. The Bratu problem has the analytical solution

y(x) = −2 ln[ cosh((x − 1/2) θ/2) / cosh(θ/4) ],

where θ is the solution of the nonlinear equation θ = √(2λ) cosh(θ/4). Our main objectives in the present paper are: • Introducing a new operational matrix of derivatives based on shifted Legendre polynomials and harmonic numbers. • Using the Petrov-Galerkin matrix method (PGMM) to solve linear second-order BVPs. • Using the collocation matrix method (CMM) to solve a class of nonlinear second-order BVPs, including singular, singularly perturbed and Bratu-type equations.
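As a quick sanity check on this closed form, the transcendental equation for θ can be solved numerically and the resulting y(x) verified against the ODE. The following Python sketch is our own illustration (the function names and the bisection bracket are choices made here, not part of the paper's method); it assumes the standard closed form quoted above:

```python
import math

def bratu_theta(lam, lo=0.0, hi=4.8):
    """Solve theta = sqrt(2*lam)*cosh(theta/4) by bisection.

    The bracket [0, 4.8] is an assumption that isolates the lower
    solution branch for the values of lam used in this paper
    (lam <= 3.51); it is not a universal choice.
    """
    f = lambda t: t - math.sqrt(2.0 * lam) * math.cosh(t / 4.0)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def bratu_exact(x, lam):
    """Exact solution of y'' + lam*exp(y) = 0, y(0) = y(1) = 0."""
    th = bratu_theta(lam)
    return -2.0 * math.log(math.cosh((x - 0.5) * th / 2.0) / math.cosh(th / 4.0))
```

For λ = 1 this gives θ ≈ 1.51716, matching the value used in the numerical experiments later in the paper.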
The outline of the paper is as follows. In ''Some properties and relations of shifted Legendre polynomials and harmonic numbers'', some relevant properties of shifted Legendre polynomials and of harmonic numbers are given. In ''A shifted Legendre matrix of derivatives'', with the aid of shifted Legendre polynomials, a new operational matrix of derivatives is given in terms of harmonic numbers. In ''Solution of second-order linear two point BVPs'', we use the introduced operational matrix to reduce a linear or nonlinear second-order boundary value problem to a system of algebraic equations through the application of the Petrov-Galerkin and collocation methods, and we also state and prove a convergence theorem. Some numerical examples are presented in ''Numerical results and discussions'' to show the efficiency and applicability of the suggested algorithms. Some concluding remarks are given in ''Concluding remarks''.

Some properties and relations of shifted Legendre polynomials and harmonic numbers

Shifted Legendre polynomials
The shifted Legendre polynomials L*_k(x) are defined on [a, b] as

L*_k(x) = L_k((2x − a − b)/(b − a)), k = 0, 1, 2, ...,

where L_k(x) are the classical Legendre polynomials. They may be generated by using the recurrence relation

(k + 1) L*_{k+1}(x) = (2k + 1) ((2x − a − b)/(b − a)) L*_k(x) − k L*_{k−1}(x), k ≥ 1,

starting from L*_0(x) = 1 and L*_1(x) = (2x − a − b)/(b − a). The polynomials L*_k(x) are eigenfunctions of the following singular Sturm-Liouville equation:

(d/dx)[ (x − a)(b − x) (dL*_k(x)/dx) ] + k(k + 1) L*_k(x) = 0.
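The recurrence relation translates directly into a short evaluation routine. A minimal sketch (the function name is ours): map [a, b] onto [−1, 1] and run the classical three-term Bonnet recurrence:

```python
def shifted_legendre(k, x, a=0.0, b=1.0):
    """Evaluate the shifted Legendre polynomial L*_k(x) on [a, b]
    via the three-term recurrence for the classical L_k."""
    t = (2.0 * x - a - b) / (b - a)   # affine map of [a, b] onto [-1, 1]
    if k == 0:
        return 1.0
    p_prev, p = 1.0, t                # L_0(t) = 1, L_1(t) = t
    for n in range(1, k):
        # (n+1) L_{n+1} = (2n+1) t L_n - n L_{n-1}
        p_prev, p = p, ((2 * n + 1) * t * p - n * p_prev) / (n + 1)
    return p
```

As expected from the definition, L*_k(b) = 1 and L*_k(a) = (−1)^k for every k.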

Harmonic numbers
The nth harmonic number is the sum of the reciprocals of the first n natural numbers, i.e.,

H_n = Σ_{k=1}^{n} 1/k, n ≥ 1, with H_0 = 0.

The numbers H_n satisfy the recurrence relation

H_n = H_{n−1} + 1/n, n ≥ 1, (5)

and have the integral representation

H_n = ∫_0^1 (1 − x^n)/(1 − x) dx.

The following lemma is of fundamental importance in the sequel.
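Both the recurrence and the integral representation are easy to check numerically. A small sketch (names are ours) computing H_n exactly with rational arithmetic and approximating the integral with the midpoint rule; note that the integrand (1 − x^n)/(1 − x) is just the polynomial 1 + x + ... + x^{n−1}, so the quadrature converges rapidly:

```python
from fractions import Fraction

def harmonic(n):
    """H_n = sum_{k=1}^{n} 1/k, computed exactly."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

def harmonic_integral(n, steps=20000):
    """Midpoint-rule approximation of H_n = int_0^1 (1 - x^n)/(1 - x) dx."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h             # midpoint nodes stay away from x = 1
        total += (1.0 - x ** n) / (1.0 - x)
    return total * h
```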

Lemma 1
The harmonic numbers satisfy the following three-term recurrence relation:

(n + 1) H_{n+1} − (2n + 1) H_n + n H_{n−1} = 0, n ≥ 1. (6)

Proof The recurrence relation (6) can be easily proved with the aid of relation (5): substituting H_{n+1} = H_n + 1/(n + 1) and H_{n−1} = H_n − 1/n into the left-hand side gives (n + 1)H_n + 1 − (2n + 1)H_n + nH_n − 1 = 0. □

A shifted Legendre matrix of derivatives
Consider the space (see [33])

L²_0[a, b] = { y ∈ L²[a, b] : y(a) = y(b) = 0 },

and choose the following set of basis functions:

φ_k(x) = (x − a)(b − x) L*_k(x), k = 0, 1, 2, .... (7)

It is not difficult to show that the polynomials {φ_k(x) : k = 0, 1, 2, ...} are linearly independent and orthogonal in the complete Hilbert space L²_0[a, b] with respect to the weight function w(x) = 1/((x − a)²(b − x)²), i.e.,

∫_a^b w(x) φ_i(x) φ_j(x) dx = (b − a)/(2i + 1) if i = j, and 0 otherwise.

Any function y(x) ∈ L²_0[a, b] can be expanded as

y(x) = Σ_{i=0}^∞ c_i φ_i(x), (8)

where

c_i = ((2i + 1)/(b − a)) ∫_a^b w(x) y(x) φ_i(x) dx.

If the series in Eq. (8) is approximated by the first (N + 1) terms, then

y_N(x) = Σ_{i=0}^N c_i φ_i(x) = C^T Φ(x), (9)

where

C = [c_0, c_1, ..., c_N]^T, Φ(x) = [φ_0(x), φ_1(x), ..., φ_N(x)]^T. (10)

Now, we state and prove the basic theorem, from which a new operational matrix can be introduced.
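The orthogonality relation can be verified numerically: since w(x) φ_i(x) φ_j(x) reduces to L*_i(x) L*_j(x), Gauss-Legendre quadrature (whose nodes are interior, so the weight stays finite) integrates it exactly. A sketch assuming numpy, with names of our own choosing:

```python
import numpy as np
from numpy.polynomial import legendre

def weighted_inner(i, j, a=0.0, b=1.0, n=64):
    """Numerical check of <phi_i, phi_j>_w on [a, b], where
    phi_k(x) = (x - a)(b - x) L*_k(x) and w(x) = 1/((x - a)^2 (b - x)^2).

    Expected value: (b - a)/(2i + 1) if i == j, else 0."""
    t, wq = legendre.leggauss(n)                # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (a + b)       # interior nodes of (a, b)
    Li = legendre.Legendre.basis(i, domain=[a, b])(x)   # L*_i(x)
    Lj = legendre.Legendre.basis(j, domain=[a, b])(x)   # L*_j(x)
    phi_i = (x - a) * (b - x) * Li
    phi_j = (x - a) * (b - x) * Lj
    w = 1.0 / ((x - a) ** 2 * (b - x) ** 2)
    return 0.5 * (b - a) * np.dot(wq, phi_i * phi_j * w)
```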
Theorem 1 Let φ_i(x) be as chosen in (7). Then, for all i ≥ 1, one has relation (11), where d_i(x) is given by (12).

Proof We proceed by induction on i. For i = 1, it is clear that the left-hand side of (11) equals its right-hand side, both being equal to

a − b + 6 (x − a)(b − x)/(b − a).

Assuming that relation (11) is valid for (i − 2) and (i − 1), we want to prove its validity for i. If we multiply both sides of (3) by (x − a)(x − b) and make use of relation (7), we obtain relation (13), which immediately gives (14). Application of the induction hypothesis to Dφ_{i−1}(x) and Dφ_{i−2}(x) in (14) yields (15). Substituting the recurrence relation (13), rewritten in a suitable form, into relation (15), and performing some rather lengthy algebraic manipulations, gives (17), with elements m_ij as in (18). The elements m_ij in (18) can be written in an alternative form, which can be simplified with the aid of Lemma 1. Repeated use of Lemma 1 in (17), after some further manipulation, then completes the proof of Theorem 1. □

Corollary 1 Let x ∈ [−1, 1] = [a, b] and ψ_i(x) = (1 − x²) L_i(x). Then, for all i > 1, one has the corresponding relation with

c_i(x) = −2x if i is even, and c_i(x) = −2 if i is odd.

Now, based on Theorem 1, it can be easily shown that the first derivative of the vector Φ(x) defined in (10) can be expressed in the matrix form

D Φ(x) = M Φ(x) + d(x),

where d(x) is the vector whose components are the terms d_i(x), and M is an (N + 1) × (N + 1) matrix whose nonzero elements can be given explicitly from relation (11). The second derivative of the vector Φ(x) is given by a similar matrix relation.

Solution of second-order linear two-point BVPs
In this section, both linear and nonlinear second-order two-point BVPs are handled. For linear equations, a Petrov-Galerkin method is applied, while for nonlinear equations, the typical collocation method is applied.

Linear second-order BVPs subject to homogeneous boundary conditions
Consider the linear second-order boundary value problem

y''(x) + f_1(x) y'(x) + f_2(x) y(x) = g(x), x ∈ (a, b), (25)

subject to the homogeneous boundary conditions

y(a) = y(b) = 0. (26)

If we approximate y(x) as in (9) and make use of relations (23) and (24), we obtain the approximation

y(x) ≃ C^T Φ(x), (27)

together with the corresponding approximations (28) and (29) for y'(x) and y''(x). Substituting relations (27), (28) and (29) into Eq. (25) gives the residual R(x) of this equation. The application of the Petrov-Galerkin method (see [1]) then yields the (N + 1) linear equations (31) in the unknown expansion coefficients c_i. Thus, Eq. (31) generates a set of (N + 1) linear equations which can be solved for the unknown components of the vector C, and hence the approximate spectral solution y_N(x) given in (9) can be obtained.
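To make the linear pipeline concrete, here is a hedged sketch for the model problem y'' + y = 1, y(0) = y(1) = 0, using the trial basis φ_k(x) = x(1 − x)L*_k(x) and the Legendre polynomials L*_j as test functions. The test space and all names are assumptions of this sketch (the paper's exact test functions are not reproduced in this excerpt), and derivatives are taken directly on the polynomials rather than via the operational matrix:

```python
import numpy as np
from numpy.polynomial import Polynomial, legendre

def solve_linear_bvp(N=8, quad_n=40):
    """Petrov-Galerkin-style sketch for y'' + y = 1, y(0) = y(1) = 0 on [0, 1].

    Trial space: phi_k = x(1-x)L*_k; test space: L*_j (an assumption here).
    Residual orthogonality is enforced with Gauss-Legendre quadrature."""
    t, wq = legendre.leggauss(quad_n)
    x = 0.5 * (t + 1.0)              # Gauss nodes mapped to (0, 1)
    wx = 0.5 * wq
    phis, d2 = [], []
    for k in range(N + 1):
        Lk = legendre.Legendre.basis(k, domain=[0, 1]).convert(kind=Polynomial)
        p = Polynomial([0.0, 1.0, -1.0]) * Lk    # x(1 - x) L*_k(x)
        phis.append(p)
        d2.append(p.deriv(2))
    A = np.zeros((N + 1, N + 1))
    rhs = np.zeros(N + 1)
    for j in range(N + 1):
        Lj = legendre.Legendre.basis(j, domain=[0, 1])(x)
        rhs[j] = np.dot(wx, Lj)                  # <g, L*_j> with g(x) = 1
        for k in range(N + 1):
            A[j, k] = np.dot(wx, (d2[k](x) + phis[k](x)) * Lj)
    c = np.linalg.solve(A, rhs)
    return lambda xv: sum(ck * p(xv) for ck, p in zip(c, phis))
```

Because the trial functions vanish at both endpoints, the boundary conditions are satisfied exactly and only the residual equations remain.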

Solution of second-order nonlinear two-point BVPs
Consider the nonlinear differential equation (36), subject to the homogeneous conditions y(a) = y(b) = 0. If we follow the same procedure as in ''Linear second-order BVPs subject to homogeneous boundary conditions'', approximate y(x) as in (27), and make use of the two relations (23) and (24), we obtain the nonlinear equation (37) in the unknown vector C. To find the numerical solution y_N(x), we enforce (37) to be satisfied exactly at the first (N + 1) roots of the polynomial L*_{N+1}(x). Thus, a set of (N + 1) nonlinear equations in the expansion coefficients c_i is generated. With the aid of the well-known Newton iterative method, this nonlinear system can be solved, and hence the approximate solution y_N(x) can be obtained.
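The collocation-plus-Newton step can be sketched for the Bratu equation y'' + λe^y = 0, y(0) = y(1) = 0. This is our own minimal illustration: the basis φ_k = x(1 − x)L*_k and the collocation points (roots of L*_{N+1}) follow the text, but basis derivatives are formed directly rather than through the operational matrix, and all names are assumptions:

```python
import numpy as np
from numpy.polynomial import Polynomial, legendre

def solve_bratu(lam=1.0, N=10, iters=25):
    """Collocation + Newton sketch for y'' + lam*exp(y) = 0, y(0) = y(1) = 0."""
    # Collocation nodes: the N+1 roots of L*_{N+1} on (0, 1)
    x = 0.5 * (legendre.leggauss(N + 1)[0] + 1.0)
    phis, d2 = [], []
    for k in range(N + 1):
        Lk = legendre.Legendre.basis(k, domain=[0, 1]).convert(kind=Polynomial)
        p = Polynomial([0.0, 1.0, -1.0]) * Lk        # phi_k = x(1-x) L*_k
        phis.append(p)
        d2.append(p.deriv(2))
    P = np.array([[p(xi) for p in phis] for xi in x])   # phi_k(x_i)
    D2 = np.array([[p(xi) for p in d2] for xi in x])    # phi_k''(x_i)
    c = np.zeros(N + 1)                                 # Newton start: y = 0
    for _ in range(iters):
        expy = np.exp(P @ c)
        F = D2 @ c + lam * expy                         # collocation residuals
        J = D2 + lam * expy[:, None] * P                # analytic Jacobian
        c -= np.linalg.solve(J, F)
    return lambda xv: sum(ck * p(xv) for ck, p in zip(c, phis))
```

Starting Newton from the zero vector converges to the lower solution branch for the moderate values of λ considered here.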

Remark 2
Following a similar procedure to that given in ''Linear second-order BVPs subject to nonhomogeneous boundary conditions'', the nonlinear second-order Eq. (36) subject to the nonhomogeneous boundary conditions given as in (33) can be tackled.

Convergence analysis
In this section, we state and prove a theorem for convergence of the proposed method.
We show that {y_N(x)} is a Cauchy sequence in the complete Hilbert space L²_0[a, b], and hence it converges. Now, for N > M,

‖y_N(x) − y_M(x)‖²_w = ‖Σ_{i=M+1}^{N} c_i φ_i(x)‖²_w = Σ_{i=M+1}^{N} |c_i|² ‖φ_i(x)‖²_w.

From Bessel's inequality, Σ_{i=0}^∞ |c_i|² is convergent, which yields ‖y_N(x) − y_M(x)‖²_w → 0 as M, N → ∞, and hence {y_N(x)} converges, say to β(x). We then prove that β(x) = y(x), which shows that Σ_{i=0}^∞ c_i φ_i(x) converges to y(x). □ As the convergence has been proved, consistency and stability can be easily deduced.
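The convergence argument rests on the rapid decay of the coefficients c_i for smooth y, which can be observed numerically. A sketch (assuming numpy; names are ours) computing the coefficients of y(x) = sin(πx) in the φ basis on [0, 1] via c_i = (2i + 1) ∫_0^1 w(x) y(x) φ_i(x) dx:

```python
import numpy as np
from numpy.polynomial import legendre

def phi_coeffs(N=16, quad_n=200):
    """Coefficients c_i of y(x) = sin(pi x) in phi_i(x) = x(1-x)L*_i(x) on [0, 1].

    Since w(x) phi_i(x) = L*_i(x)/(x(1-x)), the integrand reduces to the
    smooth function g(x) = sin(pi x)/(x(1-x)) times L*_i(x)."""
    t, wq = legendre.leggauss(quad_n)
    x = 0.5 * (t + 1.0)
    wx = 0.5 * wq
    g = np.sin(np.pi * x) / (x * (1.0 - x))
    return np.array([
        (2 * i + 1) * np.dot(wx, g * legendre.Legendre.basis(i, domain=[0, 1])(x))
        for i in range(N + 1)
    ])
```

For this analytic y, the odd coefficients vanish by symmetry and the even ones decay faster than any power of i, which is exactly the behavior the Cauchy-sequence argument exploits.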

Numerical results and discussions
In this section, the algorithms presented in ''Solution of second-order linear two point BVPs'' are applied to solve regular, singular, and singularly perturbed problems. As expected, the accuracy increases as the number of terms in the basis expansion increases.
The exact solution of (38) is known in closed form. In Table 1, the maximum absolute error E is listed for various values of N, while Table 2 gives a comparison between the numerical solution of problem (38) obtained by the application of CMM and the two numerical solutions obtained by the sinc-collocation and sinc-Galerkin methods of [34].
Example 2 Consider the following singular nonlinear second-order BVP (see [34]):

((4 + x^s) x^r y')' = s x^{r+s−2} (s x^s e^y − r − s + 1), 0 < x < 1,
y(0) = ln(1/4), y(1) = ln(1/5), s = 3 − r, r ∈ (0, 1),

with the exact solution y(x) = −ln(4 + x^s). In Table 3, the maximum absolute error E is listed for various values of r and N, while in Table 4 a comparison is given, for the case r = 1/4, between the solution of Example 2 obtained by our method (CMM) and the two numerical solutions obtained in [34]. In addition, Fig. 1 illustrates the absolute error resulting from the application of CMM for the two cases N = 10, r = 1/4 and N = 15, r = 1/4.

Example 3 Consider the following singularly perturbed linear second-order BVP (see [35]):

ε y''(x) + y'(x) − y(x) = 0, 0 < x < 1,

with boundary conditions and exact solution as in [35]. In Table 5, the maximum absolute error E is listed for various values of ε and N, while in Table 6 we give a comparison between the solution of Example 3 obtained by our method (PGMM) and the solution obtained by the shooting method given in [35].
Example 5 Consider the Bratu equation

y''(x) + λ e^{y(x)} = 0, 0 < x < 1, y(0) = y(1) = 0, (41)

with the analytical solution

y(x) = −2 ln[ cosh((x − 1/2) θ/2) / cosh(θ/4) ],

where θ is the solution of the nonlinear equation θ = √(2λ) cosh(θ/4). The algorithm presented in ''Solution of second-order nonlinear two-point BVPs'' is applied to solve Eq. (41) numerically for the three cases λ = 1, 2 and 3.51, which yield θ = 1.51716, 2.35755 and 4.66781, respectively. In Table 7, the maximum absolute error E is listed for various values of N, and in Table 8 we give a comparison between the best errors obtained by various methods used to solve Example 5. This table shows that our method is more accurate than the methods developed in [28][29][30][31]. In addition, Fig. 2 illustrates a comparison between the different approximate solutions obtained by our algorithm (CMM) for λ = 1 and N = 1, 2, 3.