A Functionally-Fitted Block Numerov Method for Solving Second-Order Initial-Value Problems with Oscillatory Solutions

A functionally-fitted Numerov-type method is developed for the numerical solution of second-order initial-value problems with oscillatory solutions. The basis functions are chosen among trigonometric and hyperbolic ones. The characteristics of the method are studied; in particular, it is shown that it has a third order of convergence for the general second-order ordinary differential equation, y'' = f(x, y, y'), and that it is a fourth-order convergent method for the special second-order ordinary differential equation, y'' = f(x, y). Comparison with other methods in the literature, even of higher order, shows the good performance of the proposed method.


Introduction
Conventionally, second-order differential equations (DEs), whether solved analytically or numerically, can be reduced to an equivalent system of first-order equations in order to leverage methods constructed for first-order systems (see Enright [7], Lambert [28], Brugnano et al. [5] and Fatunla [12] for a numerical point of view). According to Onumanyi et al. [31] and Awoyemi [4], the approach of reducing a second-order differential equation to a system of first-order differential equations requires considerable human effort and more storage memory during implementation. It is even more complex when writing a computer code for the method, particularly the subroutine needed to supply the starting values required by such a method, due to the large dimension of the resulting first-order system. Twizell and Khaliq [42], Coleman and Duxbury [6], Hairer and his collaborators [18,19], and Lambert and Watson [29] established that solving second-order differential equations directly saves about half of the storage space. Hence, the direct solution of second-order DEs is preferable.
In what follows we will consider a Functionally-Fitted Block Numerov Method (FFBNM) for the integration of second-order Initial-Value Problem (IVP) systems of the form

y'' = f(x, y, y'), y(x_0) = y_0, y'(x_0) = y'_0, (1.1)

whose solution is assumed to be oscillatory or periodic, and whose frequency is approximately known in advance, with f : R × R^m × R^m → R^m a smooth function that satisfies a Lipschitz condition, where m is the dimension of the system. As noticed before, solving the problem in (1.1) directly is preferable, since about half of the storage space can be saved, especially if the dimension of the system is large (see Coleman and Duxbury [6]).
The rest of the paper is organized as follows: the theoretical and basic elements of the FFBNM are discussed in Sect. 2. The basic properties of the FFBNM are studied in Sect. 3. Some numerical examples are considered in Sect. 4 to illustrate the performance of the method, and finally, Sect. 5 puts an end to the paper with some conclusions.

Development of the FFBNM
In this section, we develop a Continuous Block Method of Numerov type (CBNM) on the interval [x_n, x_{n+2}] to produce a discrete Block Numerov Method. To do this we consider m = 1, that is, the scalar case, although, as can be seen in the numerical section, the method can be applied in a component-wise formulation for solving differential systems. Three complementary formulas to approximate the first derivatives are also generated as a by-product of the CBNM formula, to complete the Block Numerov Method. The CBNM has the general form given in (2.1), where u = ωh, and α_j, β_j are coefficients to be determined uniquely, which depend on the frequency parameter ω and the step size h. As usual, y_{n+j}, y'_{n+j} are numerical approximations to the exact values y(x_{n+j}), y'(x_{n+j}), and f_{n+j} = f(x_{n+j}, y_{n+j}, y'_{n+j}). We consider that the true solution y(x) is locally approximated on the block interval [x_n, x_{n+2}] by a function τ(x) of the form

τ(x) = a_0 + a_1 sin(ωx) + a_2 cos(ωx) + a_3 sinh(ωx) + a_4 cosh(ωx), (2.2)

where the coefficients a_i are obtained by demanding that the system of equations in (2.3) is satisfied. Now we state the main result that aids the construction of the CBNMs as follows:

Theorem 2.1. Let τ(x) be the function given in (2.2) which satisfies the system in (2.3). The continuous approximation that will be used to obtain the FFBNM is given by (2.4).
Proof. Define the non-singular matrix V given in (2.5) as the matrix containing the evaluation of the basis functions at the grid points. Substituting equations (2.5) and (2.6) into equation (2.1) yields (2.8), where Δ = (a_0, ..., a_4)^T is the vector of undetermined coefficients.
Imposing the conditions in equation (2.3) on equation (2.8), a system of five equations, expressed as VΔ = Θ, is obtained, and the undetermined coefficients are ascertained simultaneously through the LU decomposition method. Factorizing V into the product of a non-singular lower triangular matrix ς and a non-singular upper triangular one Ω, and using the LU decomposition method, we obtain the coefficients Δ. It then follows from (2.8) that the continuous approximation takes the form (2.4), which is the desired result.
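The construction above reduces to solving a 5 × 5 linear system for the coefficients a_i. The following Python sketch illustrates that step; it is ours, not the paper's code, and it assumes, for illustration only, that the conditions in (2.3) are interpolation of τ at x_n, x_{n+1} together with collocation of τ'' at x_n, x_{n+1}, x_{n+2} (a standard choice for Numerov-type methods; the paper's exact conditions are those in (2.3)).

```python
import math

def lu_solve(A, rhs):
    """Solve A*x = rhs via LU decomposition with partial pivoting
    (Doolittle factorization, then forward and back substitution)."""
    n = len(A)
    M = [row[:] for row in A]
    x = rhs[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        x[k], x[p] = x[p], x[k]
        for i in range(k + 1, n):
            M[i][k] /= M[k][k]
            for j in range(k + 1, n):
                M[i][j] -= M[i][k] * M[k][j]
    for i in range(n):                        # forward substitution (unit lower part)
        x[i] -= sum(M[i][j] * x[j] for j in range(i))
    for i in reversed(range(n)):              # back substitution (upper part)
        x[i] = (x[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Basis functions of (2.2) and their second derivatives (as functions of w*x).
BASIS = [lambda t: 1.0, math.sin, math.cos, math.sinh, math.cosh]
BASIS2 = [lambda t: 0.0, lambda t: -math.sin(t), lambda t: -math.cos(t),
          math.sinh, math.cosh]

def fit_tau(xn, h, y_vals, f_vals, w):
    """Coefficients a_0..a_4 of tau in (2.2), under the assumed conditions:
    tau interpolates y at x_n, x_{n+1}, and tau'' collocates f at
    x_n, x_{n+1}, x_{n+2}."""
    xs = [xn, xn + h, xn + 2 * h]
    V = ([[phi(w * x) for phi in BASIS] for x in xs[:2]] +
         [[w * w * psi(w * x) for psi in BASIS2] for x in xs])
    return lu_solve(V, list(y_vals[:2]) + list(f_vals))
```

As a sanity check, fitting data sampled from y(x) = sin(x), which lies in the basis for ω = 1, reproduces it exactly.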
Remark 2.2. The specific form of matrices ς, Ω and V is provided in the Appendix.

Specification of FFBNM
The main formula and the three complementary formulas that form the block method FFBNM are obtained by evaluating the equation in (2.4) at x = x_{n+2}, and its first derivative at the set of points {x_n, x_{n+1}, x_{n+2}}. The resulting formulas are given in (2.9). The coefficients of these formulas, collected in (2.13), are rational expressions in sin(u), cos(u), sinh(u) and cosh(u).

Remark 2.4. We note that the two coefficients β_0 and β_2 of the main formula of the FFBNM are the same, and thus it is a symmetric formula. The plots of these coefficients are shown in Fig. 1.
For small values of u, the coefficients of the FFBNM may be subject to heavy cancellations. In that case the Taylor series expansions of the coefficients must be used (see Lambert [28]); each coefficient is expanded up to the twelfth order of approximation. It is worth noting here that as u → 0 in the power series expansions of the coefficients (or in the coefficients themselves), the fourth-order Numerov method,

y_{n+2} - 2 y_{n+1} + y_n = (h^2/12) (f_n + 10 f_{n+1} + f_{n+2}),

is recovered from the main method of the FFBNM.
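The behaviour of the recovered classical Numerov method in this u → 0 limit is easy to check numerically. The following sketch (ours, not the paper's code) integrates y'' = -y, whose exact solution is sin(x), and exhibits the expected fourth-order accuracy.

```python
import math

def numerov_linear(h, n):
    """Classical Numerov method, y_{k+2} - 2*y_{k+1} + y_k
    = (h**2/12)*(f_k + 10*f_{k+1} + f_{k+2}), applied to y'' = -y with
    y(0) = 0, y'(0) = 1 (exact solution sin x). Because f = -y is linear,
    the implicit equation for y_{k+2} is solved exactly at each step."""
    ya, yb = 0.0, math.sin(h)  # second starting value from the exact solution
    for _ in range(n - 1):
        yc = (2*yb - ya - (h*h/12) * (ya + 10*yb)) / (1 + h*h/12)
        ya, yb = yb, yc
    return yb  # approximation to y(n*h)
```

Halving h divides the error by roughly 2^4 = 16, consistent with order four.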

Basic Properties of the FFBNM
This section discusses the basic properties of the FFBNM, which include the Local Truncation Error, Order, Error Constant, Zero-Stability, Convergence, Linear Stability and Region of Stability.

Local Truncation Error, Order and Consistency of the FFBNM

Theorem 3.1. The local truncation error of the main formula in the FFBNM method has the form

LTE = C_6 h^6 (y^(6)(x_n) - ω^4 y''(x_n)) + O(h^7),
where C_6 is the so-called error constant.
Proof. For the main formula, with the assumption that y(x) is a sufficiently differentiable function, we consider the Taylor series expansions of y(x_n + jh), j = 0, 1, and y''(x_n + jh), j = 0, 1, 2. We replace the approximate values in the formula by the exact ones, that is, y_{n+j} → y(x_n + jh), f_{n+j} → y''(x_n + jh), and substitute the coefficients β_0(u), β_1(u) and β_2(u) given in (2.13) into the last formula in (2.9). After simplifying, we obtain the expression stated above. Following this procedure, the local truncation error is obtained for each of the formulas in (2.9). These errors indicate that the order of the main formula is p = 4 and the order of each complementary formula is p = 3. The order of convergence of the FFBNM will be analyzed below.

Proof. Solving the differential equation y^(6)(x) - ω^4 y''(x) = 0 results in the fundamental set of solutions {1, x, sin(ωx), cos(ωx), sinh(ωx), cosh(ωx)}, which contains the basis functions of the FFBNM, from which the statement follows immediately.

Remark 3.3. Following the definition by Lambert [28], a numerical method for solving (1.1) is said to be consistent if it has order greater than one. We thus have that the FFBNM is consistent.
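For the classical Numerov limit of the main formula, the O(h^6) size of the local truncation error (whose constant is -1/240) can be checked numerically by evaluating the formula's residual on a smooth function and halving h; the residual should shrink by about 2^6 = 64. A sketch (ours, for the limit method only, not the FFBNM coefficients):

```python
import math

def numerov_residual(y, d2y, x, h):
    """Residual of the classical Numerov formula on a smooth function y(x):
    R = y(x+2h) - 2*y(x+h) + y(x) - (h**2/12)*(d2y(x) + 10*d2y(x+h) + d2y(x+2h)).
    Theory predicts R ~ -(h**6/240)*y^(6)(x), i.e. a local order of six."""
    return (y(x + 2*h) - 2*y(x + h) + y(x)
            - (h*h/12) * (d2y(x) + 10*d2y(x + h) + d2y(x + 2*h)))

# Residual on y = sin(x) (so d2y = -sin) at x = 0.3 for two step sizes.
r1 = numerov_residual(math.sin, lambda t: -math.sin(t), 0.3, 0.1)
r2 = numerov_residual(math.sin, lambda t: -math.sin(t), 0.3, 0.05)
ratio = r1 / r2  # close to 2**6 = 64
```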

Analysis of Convergence of the FFBNM
The analysis of convergence of the FFBNM is done following the guidelines by Jain and Aziz [20], Jator and Li [24] and Abdulganiy et al. [1].

Theorem 3.4. Let Ȳ be a vector that approximates the true solution vector Y, where Ȳ is the solution of the system obtained from the FFBNM given by the equations in (2.9) on the successive block intervals.
If we denote by E = (e_1, ..., e_N, he'_1, ..., he'_N)^T the error vector, where e_j = y(x_j) - y_j and e'_j = y'(x_j) - y'_j, j = 1, 2, ..., N, and assume that the exact solution is sufficiently differentiable on [x_0, x_N], then, for sufficiently small h, the FFBNM is a third-order convergent method, that is, ‖E‖ = O(h^3).

Proof. Let the (2N × N)-matrices of coefficients obtained from the FFBNM method be defined as in the statement of the method. On the other hand, the system may be written in matrix form, and having in mind that E = (e_1, ..., e_N, he'_1, ..., he'_N)^T, applying the Mean-Value Theorem we obtain that F - F̄ = JE, where J is the corresponding (N × 2N)-Jacobian matrix. For sufficiently small h, the matrix P + M is invertible (see [20,24]). Therefore, if we denote D = (P + M)^{-1} and consider the maximum norm, we can obtain, after expanding in Taylor series the terms in D, that ‖D‖ = O(h^{-2}). Finally, since the vector of local truncation errors satisfies L(h) = O(h^5), we have that ‖E‖ ≤ ‖D‖ ‖L(h)‖ = O(h^{-2}) O(h^5) = O(h^3). Therefore, the FFBNM is a third-order convergent method.
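The convergence order established in Theorem 3.4 can be verified empirically by running the method with step sizes h, h/2, h/4, ... and computing the observed orders; a minimal helper (ours, not from the paper):

```python
import math

def observed_orders(errors):
    """Observed convergence orders p_i = log2(e_i / e_{i+1}) for a sequence of
    errors obtained with step sizes h, h/2, h/4, ...; for the FFBNM one expects
    values near 3 on a general problem and near 4 on a special second-order one."""
    return [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
```

For example, a synthetic error sequence decaying like h^3 gives observed orders of 3.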
Remark 3.6. In the case of a special second-order equation, that is, y'' = f(x, y), where the first derivative is absent, we can proceed similarly. Nevertheless, in view of the form of the vector L(h), we see that, assuming sufficient smoothness, we obtain a higher order of convergence: ‖E‖ = O(h^4).

Zero Stability of the FFBNM.
Definition 3.7. A block method is zero-stable if the roots of its first characteristic polynomial, ρ(R) = det(R A^(1) - A^(0)), are of modulus less than or equal to one, and those of modulus one have multiplicity not greater than two (see Fatunla [12]).
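As a small illustration of this root condition, the following Python sketch checks it for a quadratic characteristic polynomial; the example polynomials are ours, and the actual ρ(R) = det(R A^(1) - A^(0)) of the FFBNM is not reproduced here.

```python
import cmath

def roots_quadratic(a, b, c):
    """Complex roots of a*R**2 + b*R + c = 0."""
    d = cmath.sqrt(complex(b * b - 4 * a * c))
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def satisfies_root_condition(a, b, c, tol=1e-12):
    """Zero-stability check for a quadratic first characteristic polynomial
    rho(R) = a*R**2 + b*R + c: every root must lie in the closed unit disc.
    (For degree two, a modulus-one root has multiplicity at most two, so the
    multiplicity clause of Definition 3.7 is automatic.)"""
    return all(abs(r) <= 1 + tol for r in roots_quadratic(a, b, c))
```

For instance, ρ(R) = R^2 - R = R(R - 1) satisfies the condition (roots 0 and 1), while ρ(R) = R^2 - 3R + 2 does not (roots 1 and 2).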

Linear Stability of the FFBNM.
Applying the FFBNM specified by the formulas in (2.9) to the test equation y'' = λ^2 y and letting z = λh yields a recurrence governed by an amplification matrix M(z, u), whose spectral radius plays the role of the stability function.
Definition 3.9. The region of linear stability of the method is the domain in the z-u plane in which the spectral radius of the amplification matrix satisfies ρ(M(z, u)) ≤ 1 (see Jator [27]).
The z − u stability region constructed for the FFBNM is plotted in Fig. 2.
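In practice the region is obtained by scanning the spectral radius of the amplification matrix over a grid of parameter values. Since M(z, u) is not reproduced here, the sketch below (ours) carries out the analogous computation for the classical Numerov limit applied to the oscillatory test equation y'' = -θ^2 y, recovering its well-known interval of periodicity 0 < (θh)^2 < 6.

```python
import cmath

def spectral_radius_2x2(m11, m12, m21, m22):
    """Spectral radius of a 2x2 amplification matrix via closed-form eigenvalues."""
    tr, det = m11 + m22, m11 * m22 - m12 * m21
    s = cmath.sqrt(complex(tr * tr - 4 * det))
    return max(abs((tr + s) / 2), abs((tr - s) / 2))

def rho_numerov(v):
    """Spectral radius of the companion (amplification) matrix of the classical
    Numerov method applied to y'' = -theta**2 * y, with v = (theta*h)**2:
    y_{k+2} = 2*S(v)*y_{k+1} - y_k, where S(v) = (1 - 5*v/12)/(1 + v/12)."""
    S = (1 - 5 * v / 12) / (1 + v / 12)
    return spectral_radius_2x2(2 * S, -1.0, 1.0, 0.0)
```

Scanning v shows ρ = 1 for 0 < v < 6 and ρ > 1 beyond; for the FFBNM the same scan is performed over ρ(M(z, u)) in the z-u plane.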

Implementation of the FFBNM
The FFBNM is implemented in a block-by-block fashion for solving the problem in (1.1) on [x_0, x_N]. We use the formulas in (2.9) to obtain the approximations y_{n+1}, y_{n+2} and y'_{n+1}, y'_{n+2} simultaneously over the sequence of non-overlapping block intervals [x_0, x_2], [x_2, x_4], ..., [x_{N-2}, x_N]. For non-linear problems, the implicit equations were solved with the fsolve command in Maple 2016.2.

Numerical Examples
This section examines the effectiveness of the FFBNM. Seven well-known numerical examples with oscillatory solutions that have appeared in the literature are considered. In the numerical investigations we used two criteria: accuracy and efficiency. Accuracy is measured using the maximum error of the approximate solution, defined as ErrMax = max_n ‖y(x_n) - y_n‖_∞, where y(x_n) is the exact solution and y_n is the numerical solution obtained using the FFBNM. The computational efficiency criterion is specified in each of the considered examples.
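The ErrMax measure takes only a few lines to compute on a grid of (possibly vector-valued) solution values; a sketch with names of our choosing:

```python
def err_max(exact_grid, numerical_grid):
    """ErrMax = max over grid points of the infinity norm of y(x_n) - y_n,
    where each grid entry is a sequence of solution components."""
    return max(max(abs(e - a) for e, a in zip(ye, yn))
               for ye, yn in zip(exact_grid, numerical_grid))
```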

Example 1
Consider the nonlinear Duffing equation given by y'' + y + y^3 = B cos(Ωx), whose exact solution is unknown. An approximate theoretical solution obtained by Van Dooren [44] is given by y(x) = C_1 cos(Ωx) + C_2 cos(3Ωx) + C_3 cos(5Ωx) + C_4 cos(7Ωx), and the suitable initial conditions are y(0) = C_0, y'(0) = 0, where Ω = 1.01 and B = 0.002. This problem has been considered by Tsitouras [41], Jator [21] and Jator et al. [27] in the interval [0, 20.5π/1.01], with ω = 1.01, using an explicit eighth-order method (EM8), a seventh-order hybrid method (HLMM), and two third-derivative block methods of orders eight and ten (BTDF8 and BTDF10), respectively. Table 1 shows the maximum norm of the global error for the y-component of the FFBNM, given in the form 10^{-CD}, where CD denotes the number of correct decimal digits, in comparison with the aforementioned methods.
The accuracy of the FFBNM, together with its efficiency, measured by log_10(ErrMax) against the logarithm of the number of function evaluations, log_10(NFE), is presented by means of the efficiency curves in Fig. 3. As is evident from the results in Table 1 and the efficiency curves in Fig. 3, the FFBNM is a more efficient integrator for the considered nonlinear Duffing equation than the other methods used for comparison, even the higher-order method BTDF10 in Jator et al. [27].
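Since the exact solution of this problem is known only approximately, convergence can also be checked by step halving. The sketch below (ours) applies the classical Numerov method, the u → 0 limit of the FFBNM main formula rather than the FFBNM itself, to the Duffing equation; the initial value y(0) = 0.2 is a stand-in for Van Dooren's C_0, and the fixed-point solver for the implicit step is an assumption of this sketch.

```python
import math

B, OMEGA = 0.002, 1.01

def f(x, y):
    """Right-hand side of the Duffing equation y'' = B*cos(OMEGA*x) - y - y**3."""
    return B * math.cos(OMEGA * x) - y - y**3

def duffing_numerov(y0, h, n):
    """Classical Numerov method on the Duffing equation with y'(0) = 0.
    The implicit equation at each step is resolved by fixed-point iteration,
    since the h**2/12 factor makes it a contraction for small h."""
    d2 = f(0.0, y0)                                # y''(0)
    d4 = -B * OMEGA**2 - (1 + 3 * y0**2) * d2      # y''''(0), using y'(0) = 0
    ya, yb = y0, y0 + h*h/2 * d2 + h**4/24 * d4    # Taylor-series starting value
    for k in range(n - 1):
        x0, x1, x2 = k*h, (k + 1)*h, (k + 2)*h
        rhs = 2*yb - ya + h*h/12 * (f(x0, ya) + 10 * f(x1, yb))
        yc = yb                                    # fixed-point iteration for y_{k+2}
        for _ in range(25):
            yc_new = rhs + h*h/12 * f(x2, yc)
            if abs(yc_new - yc) < 1e-15:
                yc = yc_new
                break
            yc = yc_new
        ya, yb = yb, yc
    return yb  # approximation to y(n*h)
```

Comparing solutions at x = 2 with steps h, h/2 and h/4, the successive differences shrink by roughly 2^4 = 16, consistent with fourth order on this special second-order equation.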

Example 2
Consider the nonlinear perturbed system on the range [0, 10] with ε = 10^{-3}. The solution in closed form is given by y_1(x) = cos(5x) + ε sin(x^2), y_2(x) = sin(5x) + ε cos(x^2), which represents a periodic motion of constant frequency with a small perturbation of variable frequency. This problem was selected to show the performance of the FFBNM on a nonlinear perturbed system. We choose ω = 5 as the fitting frequency. The errors of the FFBNM were compared with a fifth-order Trigonometrically-Fitted Runge-Kutta-Nyström (TFARKN) method by Fang et al. [9], a fifth-order trigonometrically-fitted explicit method (TRI5) by Fang and Wu [8], and a sixth-order hybrid method with dissipation order seven (DIS6) in [17], as presented in Table 2. The results in Table 2 and the efficiency curves plotting log_10(ErrMax) versus log_10(NFE) in Fig. 4 reveal that the FFBNM is an efficient integrator for the nonlinear perturbed system.

Example 3
Consider the periodically forced nonlinear IVP

y'' + ε y^3 + y = ε (cos(t) + sin(10t))^3 - 99 sin(10t), 0 ≤ t ≤ 1000, y(0) = 1, y'(0) = 10,

with ε = 10^{-10}, whose analytic solution y(t) = cos(t) + sin(10t) describes a periodic motion of low frequency with a small perturbation of high frequency. The errors of the FFBNM compared with those of the methods in [15] are displayed in Table 3. The accuracy of the FFBNM with respect to the exact solution, and its efficiency in terms of the number of function evaluations compared with the aforementioned methods, are displayed in Fig. 5. It is apparent from the data in Table 3 and the plots in Fig. 5 that the FFBNM is an efficient numerical integrator for this problem when compared with the TFARKN, EFRK and EFRKN methods.

Example 4
As a fourth test problem, we consider the Duffing equation y'' + (ω^2 + κ^2) y = 2κ^2 y^3, y(0) = 0, y'(0) = ω, whose solution in closed form is given by y(t) = sn(ωt; κ/ω) and represents a periodic motion in terms of the Jacobi elliptic function sn. In our experiment, we take κ = 0.03 and ω = 5 as the fitting parameter. This problem was solved in the interval [0, 100] with step sizes h = 1/2^i for ETFFSH6S, a six-stage explicit trigonometrically-fitted method of order seven by Li et al. [30], ARK4 and ARK3/8, both adapted RK methods of fourth order and four stages by Franco [15], and RK6, Butcher's sixth-order method given in Hairer et al. [19]. Table 4 shows the -log_10(MaxErr) of the numerical results. The accuracy of the FFBNM against the exact solution, for h = 1/2^i, i = 2, 3, 4, 5, in comparison with the above-mentioned methods, is presented in Fig. 6.
Example 5
This system represents a motion on a perturbed circular orbit in the complex plane, which occurs in classical mechanics and flexible body dynamics. The numerical results of the FFBNM on this problem with ω = 1.01 are compared with the Trigonometric Fourier Collocation (TFC) method by Wang [47] and the method BNM developed by Jator and Oladejo [23] on [0, 1000]. The results given in Table 5 and the graphical representations in Fig. 7 show the good performance of the FFBNM.

Example 6
In our computations, we took the parameter δ as δ = 10^{-3} and the fitting frequency as ω = 1. The integration of the problem was carried out in the interval [0, 100]. For the comparison of the errors of the different methods, we used step lengths h = 1/2^i, i = 1, 2, 3, 4. Since the analytic solution of this problem does not exist in closed form, we used a reference numerical solution obtained by Anderson and Geer [2] and Verhulst [45]. The maximum global error, -log_10(‖y(x) - y_n‖_∞), of the FFBNM compared with the Gauss methods studied in [49] is displayed in Table 6, while the efficiency curves are shown in Fig. 8. From Table 6 and Fig. 8 it is evident that the FFBNM outperforms ExtGauss2, of the same order, and Gauss3, of order six, and competes favourably with ExtGauss3, of order six. In fact, in terms of computational time, the FFBNM requires almost half the time of ExtGauss3.

Example 7
As our last example, we investigate the linear periodic problem studied in [10,16],

y'' + ω^2 y = (ω^2 - 4x^2) cos(x^2) - 2 sin(x^2), y(0) = 1, y'(0) = ω,

whose solution in closed form, y(x) = cos(x^2) + sin(ωx), represents a periodic motion that involves a constant frequency and a variable frequency. For this problem we integrate in the interval [0, 5], taking the principal frequency as ω = 50. The numerical results stated in Table 7 and Fig. 9 have been computed with the step lengths h = 0.1/2^i, i = 1, 2, 3, 4. As can be seen in Table 7 and Fig. 9, the BNM, which is the limit method of the FFBNM, is not as accurate as the other fourth-order methods AZONNEVELD4(3), AMERSON4(3) and MAMERSON4(3) (abbreviated AZD, AME and MAM, respectively, for readability) presented in [16], but it uses fewer function evaluations. However, the FFBNM performs better than its limiting method and some of the existing methods in the literature.