Monic Chebyshev pseudospectral differentiation matrices for higher-order IVPs and BVPs: applications to certain types of real-life problems

We introduce new differentiation matrices based on the pseudospectral collocation method, with monic Chebyshev polynomials (MCPs) used as trial functions in the differentiation matrices (D-matrices). These matrices are used to approximate the solutions of higher-order ordinary differential equations (H-ODEs). Two techniques are used in this work: the first approximates the H-ODE directly, while the second transforms the H-ODE into a system of lower-order ODEs. We discuss the error analysis of these D-matrices in depth, and present the convergence of the approximation and truncation errors to strengthen that analysis. Numerical test functions and examples are given to illustrate the efficiency and accuracy of the constructed D-matrices.


Introduction
In Sect. 4, new pseudospectral D-matrices of MCPs are constructed. The error analysis estimating the error bounds of the MC approximations is derived in Sect. 5. In Sect. 6, we apply the MC approximation to some test functions and compare the obtained results with others. The proposed method and the use of the MC D-matrices are introduced in Sect. 7. In Sect. 8, two techniques are applied to solve H-ODEs: the first solves the H-ODEs directly using the MC D-matrices, while the second transforms the H-ODE into a system of lower-order ODEs. The obtained results are compared with other methods and, where possible, with the bvp5c MATLAB function to show the accuracy and efficiency of the MC D-matrices.

Preliminaries and notations
In this section, we shall introduce CPs and MCPs. Then, we give a brief summary of the pseudospectral method.
Using relation (8), we obtain the recursive formula of the MCPs. The recursive relation for the MCPs in terms of their derivatives is given in Abdelhakem et al. (2019b). The MCPs constitute an orthogonal basis w.r.t. the weight function w(x) = 1/√(1 − x²) (the same weight as the CPs).

Gauss-Lobatto quadrature
Throughout this paper we shall use the Gauss-Lobatto quadrature (GLQ) nodes as the collocation points. Let {q_n(x)}, n = 0, 1, …, defined on the interval [u, v], be orthogonal polynomials w.r.t. the weight function w(x). Then (Shen et al. 2011), α and β are given by solving the corresponding equation. Definition 2.1 (Shen et al. 2011) The inner product of the orthogonal polynomials {q_n(x)} w.r.t. the weight function w(x) over the interval [u, v], denoted by (q_n, q_n)_w = ||q_n||²_w, is defined by the weighted integral. Definition 2.2 (Shen et al. 2011) The GLQ points of the orthogonal polynomials q_n(x) are the zeros of the function in (13) together with the endpoints u and v.
Lemma 1 (Shen et al. 2011) Let {x_s}, s = 0, …, N, be the GLQ points of the orthogonal polynomials q_n(x). Then {x_s} are the zeros of the stated equation. Theorem 2 (Shen et al. 2011) Let {x_s}, s = 0, …, N, be the GLQ points of the orthogonal polynomials q_n(x). Then there is a unique set of quadrature weights (QW) {w_s}, s = 0, …, N, where k_N is the leading coefficient of the polynomial q_N(x), and ||U_{N−2}||²_ŵ is the inner product of U_{N−2} with itself with respect to ŵ.
Definition 2.3 (Shen et al. 2011) The discrete inner product of the orthogonal polynomials {q_n(x)} with respect to the weights w_j, denoted by ⟨q_n, q_n⟩_{N,w} = ||q_n||²_{N,w}, is defined by the weighted nodal sum. By using Eqs. (16)-(19), we get the Chebyshev GLQ (C-GLQ) points x_s = cos(sπ/N), s = 0, 1, …, N, and the Chebyshev QW (C-QW) w_s = θ_s π/N, where θ_0 = θ_N = 1/2 and θ_s = 1 for 0 < s < N. Using the above definitions and properties, we have the following theorem.
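As a quick numerical sanity check of the C-GLQ points and C-QW above, the following Python sketch (an illustration added here, not part of the original derivation) verifies that the Chebyshev-Gauss-Lobatto rule reproduces a weighted integral exactly:

```python
import numpy as np

N = 10
s = np.arange(N + 1)
x = np.cos(np.pi * s / N)          # C-GLQ points x_s = cos(s*pi/N)
theta = np.ones(N + 1)
theta[0] = theta[-1] = 0.5         # theta_0 = theta_N = 1/2
w = theta * np.pi / N              # C-QW: pi/(2N) at the ends, pi/N inside

# The rule is exact for polynomials of degree <= 2N - 1 against the
# Chebyshev weight 1/sqrt(1 - x^2); check it on f(x) = x^2, whose
# weighted integral over [-1, 1] equals pi/2.
approx = np.sum(w * x**2)
```

For N = 10 the rule integrates x² against the Chebyshev weight to machine precision.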
Theorem 3 (Shen et al. 2011) The discrete inner product of CPs is defined as:

Pseudospectral method
The pseudospectral method is a technique in which the unknown function f(x) of the ODE is still approximated as in a spectral method. Using the discrete inner product with the GLQ points {x_j} and the associated QW {w_j}, j = 0, …, N, the approximation is represented not by its coefficients but by the values f(x_j) of the unknown function at the (N + 1) GLQ points x_j, j = 0, 1, 2, …, N (Shen et al. 2011).
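The nodal representation can be connected back to the spectral coefficients through the discrete inner product. The sketch below (illustrative Python using the standard discrete Chebyshev transform; the variable names are our own) recovers the Chebyshev coefficients of f(x) = x³ from its values at the (N + 1) C-GLQ points:

```python
import numpy as np

N = 8
j = np.arange(N + 1)
x = np.cos(np.pi * j / N)              # C-GLQ nodes
f = x**3                               # nodal values of f(x) = x^3

cbar = np.ones(N + 1)
cbar[0] = cbar[-1] = 2.0               # end-point factors of the discrete norm
n = np.arange(N + 1)
# discrete Chebyshev transform: T_n(x_j) = cos(n*pi*j/N), so the
# discrete inner product reduces to a cosine sum over the nodes
C = np.cos(np.pi * np.outer(n, j) / N)
a = (2.0 / (N * cbar)) * (C @ (f / cbar))
```

Since x³ = (3/4)T₁(x) + (1/4)T₃(x), the recovered coefficients are a₁ ≈ 0.75 and a₃ ≈ 0.25, with all others at roundoff level.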

On monic Chebyshev polynomials
"If a single flap of a butterfly's wings can be instrumental in generating a tornado" (Lorenz 1993), Professor Edward Lorenz. As mentioned in the preceding sections, the only difference between MCPs and CPs is the leading coefficient (the single flap). But in the results we observed a huge difference (the tornado), whose effect appears in the rapid rate of convergence (Sect. 5.3). This section presents the properties of MCPs. Some theorems for MCPs will be given, covering the quadrature weights (MC-QW), the constants of the finite expansion of f(x), the zeros (MC-GLQ) of the MCPs, and the discrete inner product.

Monic Chebyshev Gauss-Lobatto quadrature weight
In this section, the MC-QW will be deduced. The importance of these weights comes from the fact that they are needed to discuss the discrete orthogonality relation of the MCPs.

Orthogonality of monic Chebyshev polynomials
The importance of this relation comes from its use to set up the D-matrices.

Theorem 7
The discrete inner product of MCPs is: Proof Straightforward using Theorem 3.

Monic Chebyshev differentiation matrices
In this section, some important theorems and lemmas have been presented. These theorems and lemmas are needed to set up the MC D-matrices.

Lemma 8 Let f(x) be a continuous function that can be approximated by the MC approximation over the N + 1 MC-GLQ points as:
Then the expansion coefficients satisfy the stated relations. According to Theorem 7, the cases n = 0, 0 < n < N, and n = N follow in turn, and hence the lemma is proved. It is clear that there is only a slight difference between the constants c_n, n = 0, 1, …, N, and those of the CPs (Elbarbary and El-Sayed Salah 2005).
Let f(x) be an (r + 1)-times differentiable function on the interval [−1, 1], where r is a positive integer, so that f^(r)(x) and f^(r+1)(x) are continuous on [−1, 1]. Then, from Eq. (47), by differentiating Eq. (54) w.r.t. x and equating the coefficients of Q_n(x) in Eqs. (55) and (56), we obtain a difference equation whose solution is Eq. (58). Proof By mathematical induction: at r = 1 the claim reduces to Eq. (58). Assume that the lemma holds for r; we must show that it holds for r + 1. From Eq. (58) at r + 1, replacing n by n + 2i − 1 and using Lemma 1 in Doha (1991), the result follows. The following theorem is the last step needed to set up the MC D-matrices.
Theorem 10 The r-th derivative of the MCPs is given by the stated expansion. Proof Substituting from Eq. (59) into Eq. (54), putting l = n + 2j + r − 2 and 2s = l + n − r, differentiating Eq. (47) r times, and equating the coefficients of a_l in Eqs. (68) and (69) proves the theorem. Finally, the following corollary constructs the MC D-matrices; the construction is straightforward thanks to the above lemmas, theorems, and steps.
where D^(r) = [d^(r)_{ij}]. Proof By differentiating Eq. (47) r times w.r.t. x and using Theorem 10, we obtain the entries d^(r)_{ij}. Another form of the matrices can be obtained by using the trigonometric identity and the periodicity of the cosine function.
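In exact arithmetic, the nodal differentiation matrix at the Chebyshev-Gauss-Lobatto points does not depend on the normalization of the basis, so the first-order case can be illustrated with the classical explicit formula (Trefethen's form). This is a sketch of the general construction, not the paper's MC-specific entries d^(r)_{ij}:

```python
import numpy as np

def cheb_D(N):
    """First-order pseudospectral differentiation matrix on the
    Chebyshev-Gauss-Lobatto nodes x_j = cos(j*pi/N)."""
    i = np.arange(N + 1)
    x = np.cos(np.pi * i / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** i                       # c_i * (-1)^i
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))            # diagonal via the negative-sum trick
    return D, x

D, x = cheb_D(12)
# the matrix differentiates any polynomial of degree <= N exactly
err = np.max(np.abs(D @ x**3 - 3 * x**2))
```

Higher-order matrices can be formed either from the r-th-order entries directly or by powers of D; the two agree in exact arithmetic but differ in roundoff, which is exactly the issue analyzed in the error-analysis section below.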

Error analysis and convergence
In this section, the error analysis and convergence discussion are divided into three subsections.

Error upper-bound for D-matrices
This section is concerned with the roundoff error in the elements of the MC D-matrices. In finite precision arithmetic, let δ = max_k |δ_k|, where δ_k denotes a small error with |δ_k| approximately equal to the machine precision ε; x*_k is the exact value and x_k the computed value, with unit roundoff ε = 2.22e−16. The absolute errors of the quantities x_k x_n are given in Baltensperger and Trummer (2003). Considering Eq. (77), the roundoff error in the matrix elements at r = 1 is of the stated order, which agrees with the order obtained in Elbarbary and El-Sayed Salah (2005). For r = 2, the error in the elements follows similarly, and so on: the roundoff error of the elements of the MC matrices can be estimated for any order, and the roundoff error of d^(r)_{ij} is observed to be O(N^{2r} δ). The obtained roundoff error for higher derivatives looks alarming, but it is not harmful in practice because of the behavior of the condition numbers.
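The roundoff discussion above can be made concrete. The diagonal entries of a differentiation matrix may be computed either from analytic formulas or by forcing zero row sums (the "negative-sum trick"). The following sketch uses the standard first-order Chebyshev matrix as a stand-in for the MC matrices and compares the residual of differentiating a constant, which should be exactly zero, under both choices:

```python
import numpy as np

N = 32
i = np.arange(N + 1)
x = np.cos(np.pi * i / N)
c = np.ones(N + 1)
c[0] = c[-1] = 2.0
c *= (-1.0) ** i
dX = x[:, None] - x[None, :]
D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
np.fill_diagonal(D, 0.0)                   # off-diagonal part only

# diagonal via the analytic formulas (prone to roundoff) ...
diag = np.zeros(N + 1)
diag[1:-1] = -x[1:-1] / (2.0 * (1.0 - x[1:-1] ** 2))
diag[0] = (2.0 * N**2 + 1.0) / 6.0
diag[-1] = -(2.0 * N**2 + 1.0) / 6.0
D_naive = D + np.diag(diag)

# ... versus the negative-sum trick, which enforces zero row sums
D_nst = D - np.diag(D.sum(axis=1))

# differentiating a constant must give zero; the residual exposes roundoff
r_naive = np.max(np.abs(D_naive @ np.ones(N + 1)))
r_nst = np.max(np.abs(D_nst @ np.ones(N + 1)))
```

Both residuals are tiny, but the negative-sum variant cancels the O(N²δ) element-level errors along each row by construction.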

The condition numbers of MC D-matrices
It is known that a system is said to be ill-conditioned if its condition number is too large. Table 1 presents the condition numbers of the MC D-matrices and the Chebyshev D-matrices (Khalil et al. 2012) for different orders and different N. As shown, the condition number decreases as the number of MC-GLQ points increases.

Convergence analysis
In this section, we investigate and introduce some essential lemmas and theorems, which will be used to prove the boundedness and convergence of the expansions. Lemma 12 |Q_n(x)| ≤ 2^(1−n) for n ≥ 0.
Proof In the case n = 0, the result follows from Definition (8); when n > 0, it follows by using property (4). The above lemma shows that the roundoff error of approximation (47) tends to zero as n tends to infinity. This result neutralizes the roundoff error of the elements of the high-derivative MC D-matrices.
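Lemma 12 is easy to verify numerically, since Q_n = 2^(1−n) T_n for n ≥ 1 and |T_n(x)| ≤ 1 on [−1, 1]. A small illustrative check:

```python
import numpy as np
from numpy.polynomial import chebyshev

xs = np.linspace(-1.0, 1.0, 2001)
max_abs = []                        # observed max of |Q_n| on [-1, 1]
bounds = []                         # the bound 2^(1-n) of Lemma 12
for n in range(15):
    coef = np.zeros(n + 1)
    coef[n] = 1.0                   # coefficient vector selecting T_n
    Tn = chebyshev.chebval(xs, coef)
    Qn = Tn if n == 0 else (2.0 ** (1 - n)) * Tn   # Q_n = 2^(1-n) T_n, n >= 1
    max_abs.append(np.max(np.abs(Qn)))
    bounds.append(2.0 ** (1 - n))
```

For n ≥ 1 the bound is attained exactly (|T_n| reaches 1 at the endpoints), while Q_0 = 1 sits strictly below the bound 2.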
The technique of the proof is as in Abd-Elhameed and Youssri (2014, 2019): from Eqs. (12) and (47), apply integration by parts twice. Then, for the second term,

E = Σ_{n=0}^{∞} a_n Q_n(x) − Σ_{n=0}^{N} a_n Q_n(x) = Σ_{n=N+1}^{∞} a_n Q_n(x).

Test functions
Now that the differentiation matrices have been constructed, we apply them to some test functions to show their efficiency. Comparisons with exact solutions and other numerical methods, in particular Elbarbary and El-Sayed Salah (2005), are made. The tests start with a power function.
The MAE is 7.3e−12 at N = 8 for the presented method, while the MAE reported in Elbarbary and El-Sayed Salah (2005) is 6.8e−09 at N = 16. This demonstrates the efficiency and accuracy of the method.

Example 2
For different values of N, the MAE of the fourth derivative of f(x) = sin x is shown in Table 3. These results are compared with those of the method in Elbarbary and El-Sayed Salah (2005).
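A comparable experiment is easy to reproduce with the classical Chebyshev differentiation matrix standing in for the MC D-matrices (the numbers come from this sketch, not from Table 3): apply the first-order matrix four times to the nodal values of sin x and measure the MAE against the exact fourth derivative, which is sin x itself.

```python
import numpy as np

def cheb_D(N):
    """First-order Chebyshev-Gauss-Lobatto differentiation matrix."""
    i = np.arange(N + 1)
    x = np.cos(np.pi * i / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** i
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 16
D, x = cheb_D(N)
D4 = np.linalg.matrix_power(D, 4)            # fourth-order differentiation
mae = np.max(np.abs(D4 @ np.sin(x) - np.sin(x)))   # (sin x)'''' = sin x
```

Even through four matrix applications, which amplify roundoff by roughly O(N^8 ε), the MAE stays small for moderate N.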
The MC D-matrices have been tested as a differentiation tool in the above section. But the essential task of the MC D-matrices is solving ODEs and real-life applications represented by BVPs.

Proposed method
The D-matrix technique is straightforward to apply. Consider an ODE of order r as follows, with appropriate initial and/or boundary conditions, where e(x) and s(x) are real functions of x and h(y) is a linear or nonlinear function of y. From Eq. (74), substitute into the ODE (Eq. (92)). A similar procedure is applied to the initial and boundary conditions. The resulting equations, (94), together with those from the initial and boundary conditions, form a system of algebraic equations in at most N + 1 unknowns (y(x_i), i = 0, 1, …, N); the number of unknowns depends on the given conditions. The algebraic system can then be solved by any solver, analytically or numerically. Algorithm (1) has been created to enable readers to code a program easily.
Step 2: Construct the elements of the MC D-matrices (72).

Numerical examples
In this section, we apply the MC D-matrices to some H-ODEs and compare with exact solutions, other numerical methods, and the bvp5c MATLAB function (where possible). Note that an H-ODE must be transformed into a system of 1st-order ODEs in order to use bvp5c; this transformation reduces the efficiency of bvp5c because of the increase in the number of variables. The parameters of bvp5c were taken as RelTol = 1e−16 and AbsTol = 1e−16. The MATLAB codes were run on an Intel i7-4500 CPU @ 1.80 GHz supported by an SSD.
Example 4 Consider the linear eighth-order BVP subject to the given boundary conditions, with the stated exact solution. In this example, the H-BVP (Eq. (99)) has been solved both directly and by transforming it into systems of lower-order (4th-, 2nd-, and 1st-order) ODEs, respectively. Table 4 presents the pointwise absolute errors (AEs) for Example 4 in comparison with those in Ogunrinde and Ojo (2018) and the bvp5c MATLAB function. This comparison demonstrates the high accuracy and efficiency of the MC D-matrices. The following two examples discuss two real-life applications.
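The second technique, reduction to a first-order system, is the same reduction bvp5c requires. As an illustration of the pattern (a toy 4th-order problem of our own, not Eq. (99)), the following sketch reduces y'''' = 24 with boundary conditions chosen so that y = x⁴ is the exact solution, and hands the system to SciPy's solve_bvp:

```python
import numpy as np
from scipy.integrate import solve_bvp

# reduction: u1 = y, u2 = y', u3 = y'', u4 = y''' gives u' = F(t, u)
def fun(t, u):
    return np.vstack([u[1], u[2], u[3], np.full_like(t, 24.0)])

# boundary conditions for the exact solution y = t^4 on [0, 1]:
# y(0) = 0, y'(0) = 0, y(1) = 1, y'(1) = 4
def bc(ua, ub):
    return np.array([ua[0], ua[1], ub[0] - 1.0, ub[1] - 4.0])

t = np.linspace(0.0, 1.0, 11)
u0 = np.zeros((4, t.size))                  # trivial initial guess
sol = solve_bvp(fun, bc, t, u0, tol=1e-8)
val = sol.sol(0.5)[0]                       # exact value: 0.5**4 = 0.0625
```

A 4th-order equation thus costs four unknown functions per node, which is the variable "magnification" that penalizes bvp5c relative to the direct D-matrix approach.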
Example 5 This example treats the general unified magnetohydrodynamics (MHD) boundary-layer flow of a viscous fluid (Karkeraa et al. 2020). The authors transformed the boundary-layer equations into a governing problem over an unbounded domain, which takes the form of a Falkner-Skan-type equation subject to the stated conditions, where: : the parameter of composite velocity; β: the moving boundary rate; M: the Hartmann number.
For more details, refer to Karkeraa et al. (2020). The authors of Karkeraa et al. (2020) discussed several cases; here we chose one of them as a sample, for = M = 0 and β = −1, with the exact solution y = √2 tanh(x/√2). After the transformation η = 1 − 2e^(−x), the MAE reaches 10^(−4) at N = 38 using the MC D-matrices, while the MAE in Karkeraa et al. (2020) only approached 10^(−4) after the 6th level of resolution, which corresponds to 2^(6+1) − 1 = 127 unknowns or iterations. This demonstrates the superior accuracy. Moreover, as a further advantage of the presented method, the MC D-matrices are much easier to apply than the techniques used in Karkeraa et al. (2020), namely Haar wavelet collocation and Haar wavelet quasilinearization. In the process of transforming the domain, the condition y^(1)(∞) = was lost. Consequently, the bvp5c MATLAB function cannot be used due to the insufficient number of conditions, whereas our method can be applied without any problems.
In the introduction, the importance of BVPs was mentioned; this importance has appeared in our lives through the COVID-19 pandemic. To understand the presented graph, some notation is explained in Table 5 (partially recovered): : the personal protection; β: the early diagnosis treatment; γ: the delay diagnosis treatment; δ: the environment spraying; x: the time in days; φ: the proportion of the infectious with a timely diagnosis; y_3: the exposed members of the population; y_1: .

Example 6 The authors in Rong et al. (2020) and Moore and Okyere (2022) discussed the spread of COVID-19. They attributed the rapid spread to diagnosis delay and lack of resources. The model of transmission of COVID-19 was investigated in Rong et al. (2020); the following model is the modified form in Moore and Okyere (2022). For the initial conditions, all numeric parameters, and the meaning of the variables and parameters, refer to Rong et al. (2020), Table 3. Moore and Okyere (2022) presented four strategies to handle the above system and described the model over a time interval of 100 days, using the fourth-order Runge-Kutta forward-backward sweep method; reaching 100 days required a considerable number of iterations. As a sample, the first case has been examined by our method. By changing the range of the independent variable (time) from [0, 100] to [−1, 1], Fig. 2 shows the exposed population (y_3) during the hundred days as approximated by the MC D-matrices and the bvp5c MATLAB function. The figure is identical to the same case in Moore and Okyere (2022), obtained with a small number of iterations. Also, MATLAB reported that the maximum error of bvp5c is 1.270e+04. This shows that our procedure is more efficient and easier to apply to the system than the method in Moore and Okyere (2022).
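The mapping of the time interval [0, 100] onto [−1, 1] used in Example 6 only rescales the differentiation matrix by the chain-rule factor 2/(b − a). The toy sketch below (a single scalar decay equation of our own choosing, not the COVID-19 system) shows the mechanics:

```python
import numpy as np

def cheb_D(N):
    """First-order Chebyshev-Gauss-Lobatto differentiation matrix."""
    i = np.arange(N + 1)
    x = np.cos(np.pi * i / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** i
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# toy IVP: y' = -0.1 y on t in [0, 100], y(0) = 1; exact y = exp(-0.1 t)
N = 32
D, x = cheb_D(N)
t = 50.0 * (x + 1.0)                 # map [-1, 1] -> [0, 100]
Dt = (2.0 / 100.0) * D               # chain rule: d/dt = (2/(b - a)) d/dx
A = Dt + 0.1 * np.eye(N + 1)         # collocation of y' + 0.1 y = 0
b = np.zeros(N + 1)
A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 1.0   # y(t = 0) = 1 at the x = -1 node
y = np.linalg.solve(A, b)
err = np.max(np.abs(y - np.exp(-0.1 * t)))
```

The same rescaling applies unchanged when y is a vector of compartments, as in the epidemic model.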
Finally, the following example will discuss a famous elastic foundation problem.
Example 7 Consider the 4th-order ODE for the ill-posed beam problem (Agarwal et al. 2020; Hussain et al. 2016; Dong et al. 2014), subject to the given conditions, where y represents the deviation of the bar.
By applying the same routine, we obtain the results shown in Table 6. These results demonstrate the accuracy and efficiency over bvp5c. Also, the MC D-matrices obtained more accurate results than those in Agarwal et al. (2020). Figure 3 shows the AE using the system of 1st-order differential equations at N = 10. Finally, we proceed to the last example, which describes the chaotic velocity nature of turbulent flows.
Example 8 Consider the 7th-order ODE (Akram and Beck 2015) with exact solution y = x(1 − x)e^x, where y is the particles' velocity for a limited time.
The following conclusions can be drawn from Table 7: • The MC D-matrices are more efficient and accurate than the method used in Akram and Beck (2015) when applied as a direct method without transformation.
• After transforming the given problem into seven 1st-order ODEs, the MC D-matrices are still more efficient and accurate than the bvp5c MATLAB function. • Since the bvp5c MATLAB function deals only with 1st-order ODEs, in our case it handles 5(N + 1) variables, whereas the direct MC D-matrix method uses only N + 1. This means high efficiency.

Conclusion
Some basic properties and concepts of the MCPs have been introduced and used to set up higher-order MC D-matrices. We then investigated the error analysis of the proposed method and D-matrices, covering three items: the upper bound on the roundoff error of the elements of the MC D-matrices, the condition numbers of the MC D-matrices, and the convergence of the approximation and the truncation error. The MC D-matrices were then tested on two different test functions and applied to several examples to prove the efficiency and power of the technique; even in cases where the bvp5c MATLAB function could not be applied, the presented method had no problems. Generally, the MC D-matrix technique is reliable and easy to apply: almost one and the same algorithm may be used to solve most problems, whether a linear ODE, a nonlinear ODE, or a system of ODEs. The MC D-matrices can be extended to partial differential equations in future work; moreover, they can be generalized and applied to fractional calculus.