1 Introduction

BVPs appear in various domains and applications, particularly in mathematical physics [1, 2]. In [3, 4], IBVPs were used in nanofluid mechanics. However, analytic methods can solve only some of the ordinary differential equations arising in these applications [5], especially the even-order BVPs that appear in several problems and applications. Many authors have introduced approximate methods to solve such problems [6]. The authors in [7] solved fourth-order BVPs for the beam equation. The Sinc-collocation method was applied in [8] to solve eighth-order BVPs. Other authors solved linear and nonlinear fourth-order BVPs [9,10,11,12,13, 40].

Spectral methods are the most prominent among approximation methods such as finite difference and finite element methods. They have many advantages: they achieve high accuracy, and in some BVPs the exact solution can even be recovered [14]. Because spectral methods converge relatively quickly in space and time, they are very efficient for solving PDEs [15]. Spectral methods are highly adaptable and can be used to solve a wide variety of problems, including linear and nonlinear systems with homogeneous or non-homogeneous boundary conditions, and their algorithms are easy to implement. They form a family of techniques used in mathematical applications to generate numerical solutions to a wide range of problems. Spectral methods come in three main variants. The first, the Galerkin method, has been used in [16,17,18,19,20]; its selected basis functions must satisfy the initial and boundary conditions. In the second, the Tau method, this requirement is unnecessary [21,22,23,24, 31]. In the third, the collocation (pseudospectral) method, the derivatives of the unknown function in the differential equation are expanded in terms of the function itself [25, 26].

The basic principle of a spectral method is to select basis functions, which may be orthogonal [27] or non-orthogonal [28]. The Chebyshev polynomials (CH-Ps) are the most widely used in spectral methods. The authors in [29] used them to solve fractional optimal control problems, while in [30] CH-Ps were used to solve fractional integro-differential equations. Mixed Volterra–Fredholm delay integro-differential equations were solved in [32].

Due to the high accuracy and precision of the results obtained with CH-Ps, a novel class of orthogonal polynomials derived from CH-Ps is introduced here; we call them enhanced shifted Chebyshev polynomials (ESCH-Ps). ESCH-Ps are constructed to satisfy the initial and boundary conditions, and they are used as basis functions in spectral methods. The suggested methods are the Galerkin and Tau methods for solving even-order BVPs. As with any weighted residual method, the proposed techniques depend on converting the BVP and its conditions into an algebraic system of equations, which is then solved to obtain the coefficients of the spectral expansion.

The remainder of this article is organized as follows. Some required relations and definitions are presented in Sect. 2. In Sect. 3, the recurrence relation and the orthogonality relation of the ESCH-Ps, together with their weight function, are derived, and the operational matrix is constructed. The two spectral algorithms for solving BVPs and handling non-homogeneous conditions are detailed in Sect. 4. The convergence and error analysis is investigated in Sect. 5. Finally, even-order boundary value problems are solved and our solutions are compared with those of other authors.

2 Some important relations

In this section, some essential properties and relations of CH-Ps are presented. The recurrence relation of CH-Ps is [33,34,35]:

$$\begin{aligned} T_{k+2}(x)=2x\,T_{k+1}(x)-T_{k}(x) \quad k=0,1,2,... \end{aligned}$$
(1)

with initial values \(T_0(x)=1\) and \(T_1(x)=x\).
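For readers who want a computational view, the following minimal Python sketch (ours, not part of the original presentation; the helper name chebyshev_T is an assumption) evaluates \(T_k(x)\) directly from the recurrence (1) with the initial values above, and checks it against the closed form \(T_k(\cos t)=\cos kt\).

```python
# A minimal sketch: evaluating T_k(x) through the three-term recurrence (1)
# with T_0(x) = 1 and T_1(x) = x.
import numpy as np

def chebyshev_T(k, x):
    """Evaluate the Chebyshev polynomial T_k at x via Eq. (1)."""
    x = np.asarray(x, dtype=float)
    T_prev, T_curr = np.ones_like(x), x          # T_0, T_1
    if k == 0:
        return T_prev
    for _ in range(k - 1):                       # builds T_2, ..., T_k
        T_prev, T_curr = T_curr, 2.0 * x * T_curr - T_prev
    return T_curr

# quick check against the closed form T_k(cos t) = cos(k t)
t = np.linspace(0.0, np.pi, 7)
assert np.allclose(chebyshev_T(5, np.cos(t)), np.cos(5 * t))
```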

The CH-Ps are orthogonal with respect to \(w(x)=\dfrac{1}{\sqrt{1-x^2}}\) as:

$$\begin{aligned} \int _{-1}^{1} T_i(x)\,T_k(x)\,w(x)\,dx={\left\{ \begin{array}{ll} 0, \quad i\ne k,\\ \pi , \quad i=k=0,\\ \frac{\pi }{2}, \quad i=k>0. \end{array}\right. } \end{aligned}$$
(2)

Some identities and inequalities of CH-Ps are:

$$\begin{aligned}{} & {} T_k(-1)=(-1)^k,\quad T_k(1)=1, \end{aligned}$$
(3)
$$\begin{aligned}{} & {} T'_k(-1)=(-1)^{k-1}k^2,\quad T'_k(1)=k^2,\end{aligned}$$
(4)
$$\begin{aligned}{} & {} \left| T_k(x)\right| \le 1,\quad \left| T_k'(x)\right| \le k^2. \end{aligned}$$
(5)

Also, the explicit series form of CH-Ps can be written as:

$$\begin{aligned} T_k(x)=k\sum _{j=0}^{\lfloor k/2\rfloor }(-1)^j \dfrac{2^{k-2j-1}(k-j-1)!}{(k-2j)!(2j)!}x^{k-2j}. \end{aligned}$$
(6)

The SCH-Ps \((T^*_k(x);\,k=0,1,\ldots;\,x \in [a,b])\) of degree k are defined as

$$\begin{aligned} T^*_k(x)=T_k\left( \dfrac{2x-b-a}{b-a}\right) ,\quad k=0,1,2,... \end{aligned}$$
(7)

Also, the polynomials \(\{T^*_k(x)\}_{k=0}^{N}\) are orthogonal with respect to \(w^*(x)=\dfrac{1}{\sqrt{(x-a)(b-x)}}\) as:

$$\begin{aligned} \int _a^b w^*(x)T^*_i(x)T^*_k(x)dx={\left\{ \begin{array}{ll} 0, \quad i\ne k,\\ \pi , \quad i=k=0,\\ \frac{\pi }{2}, \quad i=k>0. \end{array}\right. } \end{aligned}$$
(8)

The product of two SCH-Ps is linearized as:

$$\begin{aligned} T^*_i(x) T^*_j(x)=\dfrac{T^*_{i+j}(x)+T^*_{|i-j|}(x)}{2}. \end{aligned}$$
(9)
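As a quick computational illustration of Eqs. (7) and (9) (a hedged sketch; the function name shifted_T is ours), the following snippet evaluates \(T^*_k\) via the affine map and verifies the linearization identity numerically.

```python
# A minimal sketch of Eqs. (7) and (9): T*_k on [a, b] is T_k at the affinely
# mapped argument, and the product of two SCH-Ps linearizes.
import numpy as np
from numpy.polynomial.chebyshev import chebval

def shifted_T(k, x, a, b):
    """T*_k(x) = T_k((2x - b - a)/(b - a)), Eq. (7)."""
    t = (2.0 * np.asarray(x, dtype=float) - b - a) / (b - a)
    return chebval(t, [0.0] * k + [1.0])     # coefficient vector selects T_k

a, b = 0.0, 2.0
x = np.linspace(a, b, 9)
i, j = 3, 5
lhs = shifted_T(i, x, a, b) * shifted_T(j, x, a, b)
rhs = 0.5 * (shifted_T(i + j, x, a, b) + shifted_T(abs(i - j), x, a, b))
assert np.allclose(lhs, rhs)                 # linearization identity (9)
```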

3 Enhanced shifted Chebyshev polynomials and their derivatives

In this section, we shall define a new class of orthogonal polynomials from SCH-Ps. Moreover, the operational matrix of the investigated polynomials’ derivatives will be presented.

3.1 Enhanced shifted Chebyshev polynomials

Firstly, the definition of the ESCH-Ps on \([a,b]\) will be introduced.

Definition 1

The ESCH-Ps \(\left( \phi _{n,k}(x);\, k,n=0,1,2,\ldots ;\quad x \in [a,b]\right) \) of degree \((k+2n)\) will be formed as:

$$\begin{aligned} \phi _{n,k}(x)=(b-x)^n (x-a)^n T^*_k(x),\quad k=0,1,2,\ldots . \end{aligned}$$
(10)

Therefore, the first three terms of ESCH-Ps will be:

$$\begin{aligned}{} & {} \phi _{n,0}(x)=(b-x)^n (x-a)^n, \end{aligned}$$
(11)
$$\begin{aligned}{} & {} \phi _{n,1}(x)=(b-x)^n (x-a)^n \left( \dfrac{2x-b-a}{b-a}\right) , \end{aligned}$$
(12)
$$\begin{aligned}{} & {} \phi _{n,2}(x)=(b-x)^n (x-a)^n \left( \dfrac{8x^2-8(a+b)x+(a+b)^2+4ab}{(b-a)^2}\right) . \end{aligned}$$
(13)

Also, its recurrence relation can be deduced from Eq. (1) and Definition 1 as:

$$\begin{aligned} \phi _{n,k+2}(x)=2\left( \dfrac{2x-b-a}{b-a}\right) \phi _{n,k+1}(x)-\phi _{n,k}(x)\quad k=0,1,2,..., \end{aligned}$$
(14)

with the initial terms given by Eqs. (11) and (12).
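The following minimal sketch (our own helper names) compares Definition 1, Eq. (10), with the recurrence (14) started from the initial terms (11) and (12).

```python
# A minimal sketch: Definition 1 (Eq. (10)) versus the recurrence (14).
import numpy as np
from numpy.polynomial.chebyshev import chebval

def phi_direct(n, k, x, a, b):
    """phi_{n,k}(x) = (b - x)^n (x - a)^n T*_k(x), Eq. (10)."""
    x = np.asarray(x, dtype=float)
    t = (2.0 * x - b - a) / (b - a)
    return (b - x) ** n * (x - a) ** n * chebval(t, [0.0] * k + [1.0])

def phi_recurrence(n, k, x, a, b):
    """phi_{n,k}(x) built from Eq. (14) with initial terms (11) and (12)."""
    x = np.asarray(x, dtype=float)
    w = (b - x) ** n * (x - a) ** n
    p_prev, p_curr = w, w * (2.0 * x - b - a) / (b - a)   # Eqs. (11), (12)
    if k == 0:
        return p_prev
    for _ in range(k - 1):
        p_prev, p_curr = p_curr, 2.0 * (2.0 * x - b - a) / (b - a) * p_curr - p_prev
    return p_curr

a, b, n = -1.0, 1.0, 2
x = np.linspace(a, b, 11)
assert np.allclose(phi_direct(n, 4, x, a, b), phi_recurrence(n, 4, x, a, b))
```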

In addition, at the endpoints we have:

$$\begin{aligned} \phi _{n,k}(a)= & {} \phi _{n,k}(b)=0, \quad n>0, \end{aligned}$$
(15)
$$\begin{aligned} \phi ' _{n,k}(a)= & {} \phi ' _{n,k}(b)=0, \quad n>1. \end{aligned}$$
(16)

Since \(|x-a|\le (b-a)\) and \(|b-x|\le (b-a)\), and according to inequality (5), the ESCH-Ps satisfy:

$$\begin{aligned} |\phi _{n,k}(x)|\le & {} (b-a)^{2n}, \end{aligned}$$
(17)
$$\begin{aligned} |\phi '_{n,k}(x)|\le & {} (b-a)^{2n} k^2. \end{aligned}$$
(18)

The orthogonality relation of the polynomials \(\{\phi _{n,k}(x)\}_{k,n\ge 0}\) with respect to the weight function \(\hat{w}(x)=\dfrac{1}{(b-x)^{2n}(x-a)^{2n}\sqrt{(x-a)(b-x)}}\) is:

$$\begin{aligned} \int _a^b \hat{w}(x)\, \phi _{n,k}(x)\phi _{n,i}(x)dx={\left\{ \begin{array}{ll} 0, \quad i\ne k,\\ \pi , \quad i=k=0,\\ \frac{\pi }{2}, \quad i=k>0. \end{array}\right. } \end{aligned}$$
(19)
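The orthogonality relation (19) can be checked numerically. The sketch below (assuming Gauss–Chebyshev quadrature after the affine map to \([-1,1]\); the helper names are ours) reproduces the three cases of Eq. (19).

```python
# A hedged numerical check of the orthogonality relation (19), using
# Gauss-Chebyshev quadrature after mapping [a, b] to [-1, 1].
import numpy as np
from numpy.polynomial.chebyshev import chebval

def phi(n, k, x, a, b):
    t = (2.0 * x - b - a) / (b - a)
    return (b - x) ** n * (x - a) ** n * chebval(t, [0.0] * k + [1.0])

def w_hat(n, x, a, b):
    return 1.0 / ((b - x) ** (2 * n) * (x - a) ** (2 * n)
                  * np.sqrt((x - a) * (b - x)))

def inner_product(n, k, i, a, b, M=200):
    j = np.arange(1, M + 1)
    t = np.cos((2 * j - 1) * np.pi / (2 * M))        # Gauss-Chebyshev nodes
    x = 0.5 * (a + b) + 0.5 * (b - a) * t
    F = w_hat(n, x, a, b) * phi(n, k, x, a, b) * phi(n, i, x, a, b)
    # integrand rescaled so the 1/sqrt(1 - t^2) quadrature weight is absorbed
    return np.pi / M * np.sum(F * 0.5 * (b - a) * np.sqrt(1.0 - t ** 2))

a, b, n = 0.0, 1.0, 2
print(inner_product(n, 2, 3, a, b))   # ~0      (i != k)
print(inner_product(n, 0, 0, a, b))   # ~pi     (i = k = 0)
print(inner_product(n, 4, 4, a, b))   # ~pi/2   (i = k > 0)
```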

Remark 1

The linearization formula for ESCH-Ps is defined as:

$$\begin{aligned} \phi _{n,k}(x)\phi _{n,i}(x)=\left[ \dfrac{(b-x)^n(x-a)^n}{2}\right] \left[ \phi _{n,k+i}(x)+\phi _{n,|k-i|}(x)\right] \end{aligned}$$
(20)

This relation will be essential during the discussion of the tau method.

3.2 The operational matrix of ESCH-Ps for integer order derivative

In this subsection, the first derivative of \(\phi _{n,k}(x)\) will be expressed in terms of the polynomials themselves. Consequently, the first-derivative operational matrix of ESCH-Ps will be constructed. Finally, the mth-order operational matrix will be deduced.

Theorem 1

The first derivative of \(\phi _{n,k}(x)\) can be expressed as:

$$\begin{aligned} \frac{d}{dx} \phi _{n,k}(x) =\frac{2}{b-a} \sum _{i=0}^{k-1}\dfrac{2\lambda _{k+i}}{\gamma _{i}}[i+(2n+1)(k-i)]\phi _{n,i}(x)+\Delta _{k}(x), \end{aligned}$$
(21)

where

$$\begin{aligned} \lambda _{j}= & {} {\left\{ \begin{array}{ll} 0 \quad j \ even,\\ 1 \quad j \ odd, \end{array}\right. } \end{aligned}$$
(22)
$$\begin{aligned} \gamma _{j}= & {} {\left\{ \begin{array}{ll} 2 \quad j=0,\\ 1 \quad j \ne 0, \end{array}\right. } \end{aligned}$$
(23)

and

$$\begin{aligned} \Delta _{i}(x)= {\left\{ \begin{array}{ll} -n((b-x)(x-a))^{n-1}(2x-a-b) \quad i \ even,\\ -n((b-x)(x-a))^{n-1}(b-a) \quad \quad \quad i \ odd. \end{array}\right. } \end{aligned}$$
(24)

Proof

By using mathematical induction, we have the following steps:

For \(k=0\):

$$\begin{aligned} \phi '_{n,0}(x) =-n((b-x)(x-a))^{n-1}(2x-a-b). \end{aligned}$$
(25)

Then, using the derivative of Eq. (14) at \(k=j-1\), considering the assumption of Eq. (21) at \(k=j\), and with the aid of (6), we get:

$$\begin{aligned} \begin{aligned}&\frac{d}{dx} \phi _{n,j+1}(x)= \frac{4}{b-a}\phi _{n,j}(x) \\&\quad +2\left( \dfrac{2x-b-a}{b-a}\right) \left[ \sum _{i=0}^{j-1}\dfrac{4\lambda _{j+i}}{(b-a)\gamma _{i}}[i+(2n+1)(j-i)]\phi _{n,i}(x)+\Delta _{j}(x)\right] \\&\quad -\left[ \sum _{i=0}^{j-2}\dfrac{4\lambda _{j+i-1}}{(b-a)\gamma _{i}}[i+(2n+1)(j-i-1)]\phi _{n,i}(x)+\Delta _{j-1}(x)\right] \end{aligned} \end{aligned}$$
(26)

By using some algebraic manipulations on the previous equation, the relation can be proved. \(\square \)

The matrix form of the previous theorem can be written according to the following corollary.

Corollary 1

Let \(\phi (x)=[\phi _{n,0}(x),\phi _{n,1}(x),...,\phi _{n,N}(x)]^{T}\). Then the first derivative of \(\phi (x)\) can be defined as:

$$\begin{aligned} \phi '(x)=V \phi (x)+\delta (x), \end{aligned}$$
(27)

where \(\phi '(x)=[\phi '_{n,0}(x),\phi '_{n,1}(x),...,\phi '_{n,N}(x)]^{T}\), \(\delta (x)=[\Delta _0(x),\Delta _1(x),...,\Delta _N(x)]^T\), and \(V=(v_{ki})_{k,i=0}^N\) is the \((N+1) \times (N+1)\) square matrix with entries:

$$\begin{aligned} v_{ki}={\left\{ \begin{array}{ll} \dfrac{4 \lambda _{k+i}}{(b-a) \gamma _i}[i+(2n+1)(k-i)], &{} i<k,\\ 0, &{} i\ge k, \end{array}\right. }\quad \quad i,k=0,\ldots ,N. \end{aligned}$$
(28)
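A hedged computational sketch of Corollary 1 follows: it assembles V from Eq. (28), evaluates \(\delta(x)\) from Eq. (24), and checks Eq. (27) against the analytic derivative of \(\phi_{n,k}\) obtained by the product rule. The helper names are ours.

```python
# A sketch of Corollary 1: assemble V (Eq. (28)) and delta(x), then check
# phi'(x) = V phi(x) + delta(x) against the analytic derivative.
import numpy as np
from numpy.polynomial.chebyshev import chebval, chebder

def phi(n, k, x, a, b):
    t = (2.0 * x - b - a) / (b - a)
    return (b - x) ** n * (x - a) ** n * chebval(t, [0.0] * k + [1.0])

def phi_prime(n, k, x, a, b):
    """Analytic derivative of phi_{n,k} by the product rule."""
    t = (2.0 * x - b - a) / (b - a)
    Tk = chebval(t, [0.0] * k + [1.0])
    dTk = chebval(t, chebder([0.0] * k + [1.0])) * 2.0 / (b - a)
    base = (b - x) ** n * (x - a) ** n
    dbase = n * (b - x) ** (n - 1) * (x - a) ** (n - 1) * (a + b - 2.0 * x)
    return dbase * Tk + base * dTk

def V_matrix(n, N, a, b):
    V = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        for i in range(k):                       # strictly lower triangular
            if (k + i) % 2 == 1:                 # lambda_{k+i} = 1
                gamma = 2.0 if i == 0 else 1.0
                V[k, i] = 4.0 / ((b - a) * gamma) * (i + (2 * n + 1) * (k - i))
    return V

def delta(n, k, x, a, b):
    base = ((b - x) * (x - a)) ** (n - 1)
    return -n * base * (2.0 * x - a - b) if k % 2 == 0 else -n * base * (b - a)

a, b, n, N = 0.0, 1.0, 2, 5
x0 = 0.37
V = V_matrix(n, N, a, b)
phis = np.array([phi(n, k, x0, a, b) for k in range(N + 1)])
deltas = np.array([delta(n, k, x0, a, b) for k in range(N + 1)])
exact = np.array([phi_prime(n, k, x0, a, b) for k in range(N + 1)])
assert np.allclose(V @ phis + deltas, exact)     # Eq. (27)
```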

By differentiating Eq. (27):

$$\begin{aligned} \phi ''(x)=V \phi '(x)+\delta '(x), \end{aligned}$$
(29)

Then Corollary 1 is applied to obtain:

$$\begin{aligned} \phi ''(x)=V^2 \phi (x)+V\, \delta (x)+\delta '(x). \end{aligned}$$
(30)

Mathematical induction can be used to establish the following corollary:

Corollary 2

The mth order derivative of \(\phi (x)\) can be formed as:

$$\begin{aligned} \phi ^{(m)}(x)={\left\{ \begin{array}{ll} V^m\phi (x)+\sum _{j=0}^{m-1} V^{m-j-1}\delta ^{(j)}(x) \quad m=1,...,N,\\ \\ \sum _{j=0}^{m-1} V^{m-j-1}\delta ^{(j)}(x) \quad \quad \quad \quad \quad \quad m>N, \end{array}\right. } \end{aligned}$$
(31)

where \(V^0\) is the identity matrix.
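Continuing the previous sketch (it reuses phi, phi_prime, V_matrix, and delta defined there), the snippet below checks Eq. (30) numerically and verifies that \(V^{N+1}=0\), which is why the term \(V^m\phi(x)\) drops out in the \(m>N\) branch of Corollary 2.

```python
# Continuation of the previous sketch: check Eq. (30) and the nilpotency of V.
import numpy as np

a, b, n, N = 0.0, 1.0, 2, 5
V = V_matrix(n, N, a, b)
assert np.allclose(np.linalg.matrix_power(V, N + 1), 0.0)   # V^{N+1} = 0

x0, h = 0.41, 1e-5
phis   = np.array([phi(n, k, x0, a, b) for k in range(N + 1)])
deltas = np.array([delta(n, k, x0, a, b) for k in range(N + 1)])
# delta'(x0) and phi''(x0) by central differences (for verification only)
d_delta = np.array([(delta(n, k, x0 + h, a, b) - delta(n, k, x0 - h, a, b))
                    / (2 * h) for k in range(N + 1)])
d2_phi  = np.array([(phi_prime(n, k, x0 + h, a, b) - phi_prime(n, k, x0 - h, a, b))
                    / (2 * h) for k in range(N + 1)])
assert np.allclose(V @ V @ phis + V @ deltas + d_delta, d2_phi)  # Eq. (30)
```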

In the next section, the structure of the BVPs is presented. Then two methods for approximating the solutions of those problems will be presented.

4 Two spectral techniques for solving BVPs

To begin, the problem formulation is presented. Consider a BVP of even order l:

$$\begin{aligned} U^{(l)}(x)=\mathbb {F}\left( x,U(x),U'(x),...,U^{(l-1)}(x)\right) ,\quad x\in [a,b], \end{aligned}$$
(32)

while its homogeneous initial and boundary conditions are:

$$\begin{aligned} \begin{aligned} U(a)&=U'(a)=U''(a)=\dots =U^{(\frac{l}{2}-1)}(a)=0,\\ U(b)&=U'(b)=U''(b)=\dots =U^{(\frac{l}{2}-1)}(b)=0 . \end{aligned} \end{aligned}$$
(33)

The approximate spectral solution of Eq. (32) is assumed as:

$$\begin{aligned} U(x)\simeq \sum _{k=0}^{N} c_{k} \phi _{n,k}(x). \end{aligned}$$
(34)

The residual of Eq. (32) is computed using Theorem 1 and Corollary 2:

$$\begin{aligned} \mathcal {R}(x)=\sum _{k=0}^{N} c_{k} \phi _{n,k}^{(l)}(x)-\mathbb {F}\left( x,\sum _{k=0}^{N} c_{k} \phi _{n,k}(x),\sum _{k=0}^{N} c_{k} \phi _{n,k}'(x),...,\sum _{k=0}^{N} c_{k} \phi _{n,k}^{(l-1)}(x)\right) . \end{aligned}$$
(35)

4.1 Galerkin spectral method via ESCH-Ps (ESCH-Galerkin)

From the definition of the introduced functions (10), the function and its derivatives, for suitable values of n, vanish at the endpoints. This property is compatible with the BVP's homogeneous initial/boundary conditions, so the Galerkin approach can be used. Consider the collocation points \(x_r\in [a,b]\), \(r=0,1,\ldots , N\), chosen as the zeros of the SCH-P of degree \((N+1)\), as equidistant points, or as any other suitable points. Collocating Eq. (35) yields the following algebraic system of \(N+1\) equations in the unknowns \(c_k\), \(k=0,1,\ldots , N\):

$$\begin{aligned} \mathcal {R}(x_r)=0,\quad r=0,1,...N. \end{aligned}$$
(36)

The approximate solution (34) is then obtained by solving the algebraic system (36).
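To make the procedure concrete, here is a hedged end-to-end sketch of the ESCH-Galerkin steps applied to a simple manufactured second-order test problem that is not one of the paper's examples: \(U''+U=f\) on \([0,1]\) with \(U(0)=U(1)=0\) and exact solution \(U=x(1-x)\). All helper names are ours; the collocation points are the zeros of the SCH-P of degree \(N+1\), one of the choices mentioned above.

```python
# A hedged, minimal end-to-end sketch of the ESCH-Galerkin procedure for a
# manufactured test problem: U'' + U = f on [0, 1], U(0) = U(1) = 0.
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev

a, b, n, N = 0.0, 1.0, 1, 4

def esch_poly(k):
    """phi_{n,k} as an exact power-series polynomial, Eq. (10)."""
    T_star = Chebyshev.basis(k, domain=[a, b]).convert(kind=Polynomial)
    weight = (Polynomial([b, -1.0]) * Polynomial([-a, 1.0])) ** n   # (b-x)(x-a)
    return weight * T_star

basis = [esch_poly(k) for k in range(N + 1)]

# collocation points: zeros of the shifted Chebyshev polynomial of degree N+1
x_col = 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * np.arange(1, N + 2) - 1)
                                               * np.pi / (2 * (N + 1)))

f = lambda x: -2.0 + x * (1.0 - x)               # so that U = x(1 - x) is exact
# linear algebraic system from R(x_r) = 0, Eq. (36)
A = np.array([[p.deriv(2)(x) + p(x) for p in basis] for x in x_col])
c = np.linalg.solve(A, f(x_col))

U_approx = lambda x: sum(ck * p(x) for ck, p in zip(c, basis))
x_test = np.linspace(a, b, 11)
print(np.max(np.abs(U_approx(x_test) - x_test * (1 - x_test))))  # near machine precision
```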

4.2 Tau spectral method via ESCH-Ps (ESCH-Tau)

The second spectral method is the Tau method. The trial functions are chosen to be the ESCH-Ps themselves, while the weight function is \(\bar{w}(x)=\dfrac{1}{(b-x)^{n}(x-a)^{n}\sqrt{(x-a)(b-x)}}\). Applying the Tau method gives:

$$\begin{aligned} \int _{a}^{b} \mathcal {R}(x) \phi _{n,k}(x) \bar{w}(x) dx =0, \quad k=0,1,\ldots , N-v, \end{aligned}$$
(37)

where v is the number of initial and boundary conditions.

Since the considered problem’s initial/boundary conditions are homogeneous, the Tau integrals (37) reduce to:

$$\begin{aligned} \int _{a}^{b} \mathcal {R}(x) \phi _{n,k}(x) \bar{w}(x) dx =0, \quad k=0,1,\ldots , N \end{aligned}$$
(38)

The outcome of Eq. (38) is an algebraic system of \(N+1\) equations in \(N+1\) unknowns. Solving this system yields the spectral coefficients of the approximate solution (34).
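For the same manufactured test problem used above, the Tau conditions (38) can be sketched as follows (helper names are ours). Gauss–Chebyshev quadrature is convenient here because \(\bar w(x)\phi_{n,k}(x)=T^*_k(x)/\sqrt{(x-a)(b-x)}\), so the square-root factor is absorbed by the quadrature rule.

```python
# A hedged sketch of the ESCH-Tau conditions (38) for U'' + U = f on [0, 1].
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev

a, b, n, N = 0.0, 1.0, 1, 4

def esch_poly(k):
    T_star = Chebyshev.basis(k, domain=[a, b]).convert(kind=Polynomial)
    return (Polynomial([b, -1.0]) * Polynomial([-a, 1.0])) ** n * T_star

basis = [esch_poly(k) for k in range(N + 1)]
f = Polynomial([-2.0, 1.0, -1.0])                  # f(x) = -2 + x - x^2

# Gauss-Chebyshev nodes in t, mapped to x in (a, b)
M = 64
t = np.cos((2 * np.arange(1, M + 1) - 1) * np.pi / (2 * M))
x = 0.5 * (a + b) + 0.5 * (b - a) * t
wq = np.pi / M                                     # quadrature weight

def weighted_moment(g, k):
    """Integral of g(x) * phi_{n,k}(x) * w_bar(x) over [a, b]; the weight
    reduces to T*_k / sqrt((x - a)(b - x)) under the Chebyshev substitution."""
    return wq * np.sum(g(x) * Chebyshev.basis(k, domain=[a, b])(x))

# linear system from Eq. (38): rows k, columns over the basis
A = np.array([[weighted_moment(p.deriv(2) + p, k) for p in basis]
              for k in range(N + 1)])
rhs = np.array([weighted_moment(f, k) for k in range(N + 1)])
c = np.linalg.solve(A, rhs)
print(np.round(c, 10))     # approximately [1, 0, 0, 0, 0] -> U(x) = x(1 - x)
```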

Remark 2

The linearity of the algebraic systems (36) and (38) depends on whether the BVP (32) is linear. A matrix decomposition method is used to solve the linear algebraic system, while a numerical method such as the Newton–Raphson method is used for the nonlinear one.
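A generic sketch of the Newton–Raphson iteration for a nonlinear algebraic system \(F(c)=0\), with a forward-difference Jacobian, is given below; it is an illustration only, not the paper's implementation, and the toy system at the end is ours.

```python
# A minimal generic Newton-Raphson sketch with a forward-difference Jacobian,
# applicable to a nonlinear system F(c) = 0 arising from (36) or (38).
import numpy as np

def newton_solve(F, c0, tol=1e-12, max_iter=50, h=1e-7):
    c = np.array(c0, dtype=float)
    for _ in range(max_iter):
        Fc = F(c)
        if np.linalg.norm(Fc) < tol:
            break
        J = np.empty((c.size, c.size))
        for j in range(c.size):                  # forward-difference Jacobian
            cp = c.copy()
            cp[j] += h
            J[:, j] = (F(cp) - Fc) / h
        c -= np.linalg.solve(J, Fc)
    return c

# toy demonstration: solve c1^2 + c2 = 3, c1 + c2^2 = 5
F = lambda c: np.array([c[0] ** 2 + c[1] - 3.0, c[0] + c[1] ** 2 - 5.0])
print(newton_solve(F, [1.0, 1.0]))               # converges to (1, 2)
```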

Remark 3

In many cases, especially in applications, homogeneous initial/boundary conditions cannot be guaranteed. Therefore, the given conditions need to be transformed into homogeneous ones. This can be done as follows. Let:

$$\begin{aligned} u(x)=U(x)+\sum _{i=0}^{l-1} A_{i}x^i, \end{aligned}$$
(39)

such that

$$\begin{aligned} \begin{aligned} u(a)&=u'(a)=u''(a)=\dots =u^{\left( \frac{l}{2}-1\right) }(a)=0,\\ u(b)&=u'(b)=u''(b)=\dots =u^{\left( \frac{l}{2}-1\right) }(b)=0, \end{aligned} \end{aligned}$$
(40)

where the constants \(A_i\) are calculated by solving Eqs. (39) and (40). Thus, the BVP (32), (40) is solved for the unknown function u(x).
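A small sketch of this homogenization step (our own function name and illustrative boundary data) computes the constants \(A_i\) by imposing the conditions (40) on \(u(x)=U(x)+\sum_i A_i x^i\) and solving the resulting linear system.

```python
# A hedged sketch of Remark 3: compute A_i in Eq. (39) so that u = U + sum A_i x^i
# satisfies the homogeneous conditions (40). The boundary data below are
# illustrative placeholders, not taken from the paper.
import numpy as np
from math import factorial

def homogenization_constants(a, b, Ua_derivs, Ub_derivs):
    """Ua_derivs[r] = U^{(r)}(a), Ub_derivs[r] = U^{(r)}(b), r = 0..l/2-1."""
    half = len(Ua_derivs)
    l = 2 * half
    # rows: conditions u^{(r)}(a) = 0 and u^{(r)}(b) = 0; columns: A_0..A_{l-1}
    M = np.zeros((l, l))
    rhs = np.zeros(l)
    for r in range(half):
        for i in range(r, l):
            coeff = factorial(i) / factorial(i - r)     # d^r/dx^r of x^i
            M[r, i] = coeff * a ** (i - r)
            M[half + r, i] = coeff * b ** (i - r)
        rhs[r] = -Ua_derivs[r]
        rhs[half + r] = -Ub_derivs[r]
    return np.linalg.solve(M, rhs)

# illustrative fourth-order case (l = 4) on [0, 1] with made-up boundary data
A = homogenization_constants(0.0, 1.0, Ua_derivs=[1.0, 0.5], Ub_derivs=[2.0, -1.0])
print(A)
```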

It is essential to ensure the convergence of the spectral expansion before applying the method to the numerical calculation. The following section is devoted to studying the theoretical convergence, stability, and error analysis.

5 Convergence and error analysis

In this section, the convergence analysis of the proposed basis functions is covered. Two fundamental theorems are proposed and proved.

Lemma 1

[36] Let u(x) be a given function such that \(u(k) = a_k\). Suppose that the following assumptions are satisfied:

  1. u(x) is a continuous, positive, decreasing function for \(x \ge m\);

  2. \(\sum a_k\) is convergent, and \(R_m=\sum _{k=m+1}^{\infty } a_k\).

Then

$$\begin{aligned} R_m \le \int _{m} ^{\infty } u(x)dx. \end{aligned}$$

Definition 2

[14] Let \(H_{w}^{r}(a,b)\) be a Sobolev space such that

$$\begin{aligned} H_{w}^{r}(a,b)=\{v\in L_w^2 (a,b):\quad v^{(k)} \in L_w^2 (a,b), k=0,1,2,...,r\} \end{aligned}$$
(41)

Let \(H_{0,w}^{r}(a,b)\) be a Sobolev subspace of \(H_{w}^{r}(a,b)\) such that

$$\begin{aligned} H_{0,w}^{r}(a,b)=\{v\in H_w^r (a,b):\quad v^{(k)}(a)=v^{(k)}(b)=0 , k=0,1,2,...,r\} \end{aligned}$$
(42)

Theorem 2

Assume that U(x) can be written as \(U(x)=(x-a)^{n}(b-x)^{n} \bar{U}(x) \in H_{0,w}^{n}(a,b) \), with \(|\bar{U}^{(m)}(x)|\le L_{m}\), \(m\ge 1\), for some positive constants \(L_m\). Then the expansion coefficients satisfy:

$$\begin{aligned} |c_{k}|\lesssim \dfrac{(b-a)^{m} L_{m}}{2^{m} k^{m}}, \quad \forall k >1. \end{aligned}$$
(43)

Proof

Suppose the approximation of the function U(x) is given by:

$$\begin{aligned} U(x)\simeq U_N(x)=\sum _{k=0}^{N} c_k \phi _{n,k}(x), \end{aligned}$$
(44)

Using the orthogonality relation (19) and the definition of \(\phi _{n,k}(x)\) in Eq. (10), the coefficient \(c_k\) is obtained as:

$$\begin{aligned} c_{k}=\frac{1}{\Lambda _k}\int _{a}^{b}\dfrac{\bar{U}(x) T^*_k(x)}{\sqrt{(x-a)(b-x)}} dx, \end{aligned}$$
(45)

where

$$\begin{aligned} \Lambda _k={\left\{ \begin{array}{ll} \pi , \quad k=0,\\ \frac{\pi }{2}, \quad k>0. \end{array}\right. } \end{aligned}$$

Using the substitution \(x=\frac{1}{2}[b+a+(b-a)\cos \theta ]=\zeta \), \(c_k\) is expressed as:

$$\begin{aligned} c_k= \frac{1}{\Lambda _k}\int _0^{\pi } \bar{U}(\zeta )\cos k\theta d\theta . \end{aligned}$$
(46)

Applying integration by parts gives:

$$\begin{aligned} c_k=\dfrac{b-a}{2k\Lambda _k}\int _0^{\pi } \bar{U}'(\zeta )\alpha _1(\theta )d\theta , \end{aligned}$$
(47)

where

$$\begin{aligned} \alpha _1(\theta )=\sin \theta \sin k\theta . \end{aligned}$$
(48)

It is clear that \(|\alpha _1(\theta )|\le 1\). Thus,

$$\begin{aligned} |c_k|\lesssim \dfrac{(b-a)L_1}{2k} \end{aligned}$$
(49)

Similarly, applying integration by parts a second time gives:

$$\begin{aligned} c_k=\dfrac{(b-a)^2}{2^2k(k^2-1)\Lambda _k}\int _0^{\pi } \bar{U}''(\zeta )\alpha _2(\theta )d\theta , \end{aligned}$$
(50)

where \(\alpha _2(\theta )=\sin k \theta \cos \theta \sin \theta -k \cos k \theta \sin ^2 \theta \) with \(|\alpha _2 (\theta )|\le k+1\). Consequently:

$$\begin{aligned} |c_k|\lesssim \dfrac{(b-a)^2L_2}{2^2 k^2} \end{aligned}$$
(51)

Repeating these steps \(m-2\) more times completes the proof. \(\square \)
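As a purely numerical illustration of the predicted decay (not part of the proof), the sketch below evaluates the coefficients via Eq. (46) for the test choice \(U(x)=x(1-x)\sin \pi x\) on \([0,1]\), i.e. \(n=1\) and \(\bar U(x)=\sin \pi x\); since this \(\bar U\) is analytic, the observed decay is even faster than the algebraic bound (43).

```python
# A minimal numerical illustration of the coefficient decay of Theorem 2,
# using Eq. (46) with U_bar(x) = sin(pi x) on [0, 1].
import numpy as np

a, b = 0.0, 1.0
U_bar = lambda x: np.sin(np.pi * x)

theta = np.linspace(0.0, np.pi, 4001)
zeta = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(theta)

def coeff(k):
    Lam = np.pi if k == 0 else np.pi / 2.0
    vals = U_bar(zeta) * np.cos(k * theta)
    h = theta[1] - theta[0]
    # composite trapezoidal rule for the integral in Eq. (46)
    return (np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * h / Lam

for k in [2, 4, 8, 16]:
    print(k, abs(coeff(k)))   # decays rapidly, consistent with Eq. (43)
```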

Theorem 3

If U(x) satisfies the assumptions of Theorem 2 and Lemma 1, then the absolute error satisfies:

$$\begin{aligned} |U-U_N|\lesssim O\left( \frac{1}{N^{m-1}}\right) \end{aligned}$$
(52)

Proof

From Eq. (44), we have:

$$\begin{aligned} |U-U_N|=|\sum _{k=N+1}^{\infty }c_k \phi _{n,k}(x) |. \end{aligned}$$
(53)

From the inequalities Eqs. (17) and (43), we have:

$$\begin{aligned} |U-U_N|\lesssim \dfrac{(b-a)^{m+2n} L_{m}}{2^{m}}\left| \sum _{k=N+1}^{\infty }\frac{1}{k^m}\right| . \end{aligned}$$
(54)

Applying Lemma (1) to get:

$$\begin{aligned} |U-U_N|\lesssim \dfrac{(b-a)^{m+2n} L_{m}}{2^{m}N^{m-1}}. \end{aligned}$$
(55)

\(\square \)

In the next section, the theoretical convergence results will be verified numerically by solving several BVPs. The numerical examples include applications to beam models and Emden–Fowler-type equations. All simulations were executed in Mathematica 13.2 on an Intel(R) Core(TM) i7-4810MQ CPU @ 2.80 GHz with 8.00 GB RAM.

6 Solving even-order boundary value problems

Throughout this section, the introduced methods, ESCH-Galerkin and ESCH-Tau, based on our novel basis functions, are used to approximate the solutions of even-order BVPs. In particular, the beam model in its two cases, clamped-clamped and pinned-pinned, as well as Emden–Fowler-type equations, are studied. Finally, the obtained results are compared with those of other methods.

Example 1

Consider the fourth-order boundary value problem, which describes the model of bending of a beam hinged from both sides:

$$\begin{aligned} U^{(4)}(x)+4U(x)=1, \quad x\in [-1,1], \, U(\pm 1)=U''(\pm 1)=0, \end{aligned}$$
(56)

and its exact solution

\(U(x)= \frac{1}{4}\left[ 1-\dfrac{2(\sin 2 \sinh 1 \sin x \sinh x+\cos 1 \cosh 1 \cos x \cosh x)}{\cos 2+\cosh 2}\right] \).

To satisfy the homogeneous conditions, the value of n is chosen as \(n=3\). Table 1 compares the ESCH-Galerkin and ESCH-Tau methods with two other methods from [37, 38] for various values of N. The two techniques achieve high accuracy and efficiency. The authors in [37] used the Lucas polynomials as basis functions, while some quasi-orthogonal approximations were used in [38]. The log error for different values of N using the ESCH-Galerkin method is displayed in Fig. 1, which demonstrates the stability of our method.
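For completeness, a minimal sketch of how Example 1 can be set up with the ESCH-Galerkin collocation approach is given below (collocation at the zeros of \(T_{N+1}\), one of the admissible choices; the helper names and the value \(N=12\) are ours). It prints the maximum absolute error on a uniform test grid for comparison with Table 1.

```python
# A minimal sketch of ESCH-Galerkin collocation for Example 1 (Eq. (56)), n = 3.
import numpy as np
from numpy.polynomial import Polynomial, Chebyshev

a, b, n, N = -1.0, 1.0, 3, 12

def esch_poly(k):
    T_star = Chebyshev.basis(k, domain=[a, b]).convert(kind=Polynomial)
    return (Polynomial([b, -1.0]) * Polynomial([-a, 1.0])) ** n * T_star

basis = [esch_poly(k) for k in range(N + 1)]
x_col = np.cos((2 * np.arange(1, N + 2) - 1) * np.pi / (2 * (N + 1)))  # a = -1, b = 1

# collocate R(x_r) = 0 for U'''' + 4U = 1
A = np.array([[p.deriv(4)(x) + 4.0 * p(x) for p in basis] for x in x_col])
c = np.linalg.solve(A, np.ones(N + 1))

def U_exact(x):
    num = (np.sin(2) * np.sinh(1) * np.sin(x) * np.sinh(x)
           + np.cos(1) * np.cosh(1) * np.cos(x) * np.cosh(x))
    return 0.25 * (1.0 - 2.0 * num / (np.cos(2) + np.cosh(2)))

x_test = np.linspace(a, b, 201)
U_N = sum(ck * p(x_test) for ck, p in zip(c, basis))
print(np.max(np.abs(U_N - U_exact(x_test))))     # MAE on the test grid
```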

Table 1: The MAE for Example 1.

Fig. 1: Log error for Example 1 using ESCH-Galerkin.

Example 2

Consider the nonlinear fourth-order equation:

$$\begin{aligned} U^{(4)}(x)&=e^{-x}U^2(x), \quad x\in [0,1], \nonumber \\ U(0)&=1,U(1)=e, U''(0)=1,U''(1)=e, \end{aligned}$$
(57)

and its exact solution \(U(x)=e^x\).

Before solving this example, the conditions were converted to homogeneous ones using relation (39), giving \(u(x)=U(x)+\sum \nolimits _{i=0}^{5} A_{i}x^i\) with \(A_0=-1\), \(A_1=-1\), \(A_2=\frac{-1}{2}\), \(A_3=\frac{1}{2}(35 - 13 e)\), \(A_4= \frac{1}{2} (-49 + 18 e)\), and \(A_5=\frac{1}{2} (19 - 7 e)\). For \(n=3\), the MAE of the two techniques and of another method are presented in Table 2. Bernstein and Bernoulli polynomials were applied as basis functions in [39]. Double precision was achieved at \(N=6\) using the ESCH-Galerkin method. In addition, Fig. 2 shows the stability of the ESCH-Galerkin and ESCH-Tau methods.

Table 2: The MAE of Example 2 via ESCH-Galerkin.

Fig. 2: Log error for Example 2.

Example 3

Consider the nonlinear Emden–Fowler-type equation [41]:

$$\begin{aligned} U^{(4)}(x)+\frac{8}{x} U^{(3)}(x)+\frac{12}{x^2}U^{(2)}(x)+U^m(x)=0,\quad x\in (0,1), \, m\in \mathbb {N}, \end{aligned}$$
(58)

with the initial conditions \(U(0)=1\) and \(U'(0)=U''(0)=U'''(0)=0\).

The exact solution for \(m=0\) is \(U(x)=1-\frac{x^4}{360}\).

For \(n=2\), the transformation according to Eq. (39) gives \(A_0=-1\), \(A_1=0\), \(A_2=\frac{-1}{360}\), and \(A_3=\frac{1}{180}\). Applying the two proposed methods for \(N=0,1,2,\ldots \), we found the approximate solution \(u_N(x)=\sum _{k=0}^{N} c_k \phi _{n,k}(x)\) with \(c_0=\frac{-1}{360}\) and \(c_k=0\), \(k=1,2,3,\cdots \), i.e. \(u_N(x)=\frac{-x^2}{360}+\frac{x^3}{180}-\frac{x^4}{360}\), which coincides with the exact solution.

Example 4

Consider the eighth-order IBVP:

$$\begin{aligned} \begin{aligned}&U^{(8)}(x)+\frac{1}{x^4} U^{(4)}(x)+\frac{1}{1-x}U(x)=f(x),\quad x\in [0,1]\\&\quad U^{(r)}(0)=U^{(r)}(1)=0\quad r=0,1,2,3. \end{aligned} \end{aligned}$$
(59)

Its exact solution is \(U(x)=x^4(1-x)^4\), from which f(x) can be obtained.

By applying the two techniques directly with \(n=4\), we obtained the exact solution at the small value \(N=2\), whereas the author of [8] reached a MAE of \(2.6\times 10^{-12}\) at \(N=32\).

Example 5

Consider the following eighth-order BVP:

$$\begin{aligned}{} & {} U^{(8)}(x)-U(x)=-8(2x \cos x+7\sin x), \quad x\in [-1,1]\nonumber \\{} & {} U(-1)=0, \quad U'(-1)=2 \sin (1), \quad U''(-1)=-4\cos (1)-2\sin (1),\quad \nonumber \\{} & {} U'''(-1)=6\cos (1)-6\sin (1),\nonumber \\{} & {} U(1)=0,\quad U'(1)=2 \sin (1), \quad U''(1)=4\cos (1)+2\sin (1), \quad \nonumber \\{} & {} U'''(1)=6\cos (1)-6\sin (1). \end{aligned}$$
(60)

Its exact solution is \(U(x)=(x^2-1) \sin x\). Because of the non-homogeneous conditions, the unknown function is converted to \(u(x)=U(x)+\sum \nolimits _{i=0}^{7}A_i x^i\), where \(A_0=0\), \(A_1=\frac{1}{8} (-7) (\cos 1-2 \sin 1)\), \(A_2=0\), \(A_3=\frac{1}{8} (17 \cos 1-22 \sin 1)\), \(A_4=0\), \(A_5=\frac{1}{8} (10 \sin 1-13 \cos 1)\), \(A_6=0\), and \(A_7=\frac{1}{8} (3 \cos 1-2 \sin 1)\). Table 3 compares the results of the two proposed methods and the method in [42], which used the generalized Jacobi polynomials as basis functions.

Table 3: The MAE for Example 5 at various N.

Example 6

Consider the nonlinear eighth-order equation:

$$\begin{aligned} \begin{aligned}&U^{(8)}(x)=e^{-x}U^2(x), \quad x\in [0,1], \\&U(0)=U'(0)= U''(0)= U'''(0)=1,\\&U(1)=U'(1)=U''(1)=U'''(1)=e, \end{aligned} \end{aligned}$$
(61)

and its exact solution \(U(x)=e^x\).

Using a similar procedure for the non-homogeneous conditions, Table 4 presents the AE of the proposed methods for \(n=4\) and of the method in [43]. The authors in [43] used the non-orthogonal Vieta–Lucas polynomials. In addition, the bound \(O\left( \frac{1}{N^{m-1}}\right) \) for this example is \(7.8\times 10^{-3}\), and the computational time is 0.025 min. Fig. 3 presents the log error of ESCH-Galerkin.

Table 4: The AE of different methods for Example 6.

Fig. 3: Log error for Example 6 using ESCH-Galerkin.

7 Conclusion

New orthogonal polynomials have been generated from shifted Chebyshev polynomials; these polynomials are called ESCH-Ps throughout this paper. Some of the essential relations of the ESCH-Ps are investigated and proved. Then, the operational matrix of the mth derivative is formed. This matrix is applied in the Galerkin and Tau methods for solving even-order BVPs. In addition, the expansion's error analysis and convergence are discussed in depth. Finally, several even-order BVPs have been solved by the two proposed techniques. Comparing the obtained results with those of other methods confirms the effectiveness and efficiency of the presented matrices and methods. We aim to extend the presented numerical schemes to handle partial differential equations in one temporal and one/two spatial variables in the near future.