1 Introduction

There are many methods that play vital roles in numerical analysis; see for example [23, 24, 33]. Among them are the spectral methods, which are effective tools for dealing with partial differential equations, ordinary differential equations and fractional differential equations (see [18, 40]). The principal idea of spectral methods is that the approximate solution can be written as a linear combination of certain basis functions, which may be orthogonal or otherwise. Common spectral methods include the collocation, tau, and Galerkin methods, and they have been widely used by many authors. For example, the typical collocation method is utilized in Atta et al. [8] to treat multi-term fractional differential equations (FDEs). The spectral tau method is applied in Bhrawy et al. [11] along with the Jacobi operational matrix to treat the time-fractional diffusion-wave equations. Also, a spectral solution of the non-linear one-dimensional Burgers’ equation was developed in [1] based on new derivative formulas of the Chebyshev polynomials of the sixth kind together with the spectral tau method. The Galerkin approach was followed by Youssri and Abd-Elhameed [39] to solve the time-fractional telegraph equations. For other articles employing different spectral methods, see for example Refs. [10, 14, 29].

Because of their relevance in a variety of disciplines, several types of special functions have been intensively researched. The Fibonacci and Lucas polynomials, as well as their variations and extensions, are examples of these important special functions (see [27]). These polynomials have been investigated both theoretically and practically. For example, the authors in [6] established new connection formulas between the Fibonacci polynomials and some Chebyshev polynomials. Supersymmetric Fibonacci polynomials were discussed in [38]. Numerically, these polynomials have been utilized to treat several types of differential equations. For example, the authors in [2] and [5] used, respectively, the Fibonacci polynomials and their generalizations to treat some types of FDEs, while the same authors in [3] and [4] employed, respectively, the Lucas polynomials and their generalized polynomials for the same purpose. Mixed types of FDEs were treated in [31] via modified Lucas polynomials. In [21], the authors used Fibonacci polynomials to propose numerical solutions for the variable-order space-time fractional Burgers–Huxley equation. Fibonacci wavelets were utilized in [35] along with the Galerkin method to treat some types of fractional optimal control problems. In [9], the authors used the generalized Fibonacci operational tau method to treat the fractional Bagley–Torvik equation. An approximate solution of the two-dimensional Sobolev equation was proposed by employing mixed Lucas and Fibonacci polynomials in [20].

It is known that the telegraph equation is one of the most important equations because it describes many phenomena in different fields. For example, it describes energetic particle transport in the interplanetary medium [19]. Recently, telegraph-type equations have been discussed by many authors. For example, in [16], the authors proposed a numerical algorithm to solve the one-dimensional hyperbolic telegraph equation using the Galerkin algorithm. In [13], the authors discussed the one-dimensional hyperbolic telegraph equation using the collocation method. For further studies, one can see [25, 26, 32, 36].

Our main objectives in the current paper can be listed in the following items:

  • Stating and proving some new theorems concerned with the generalized Lucas polynomials (GLPs) and their modified ones.

  • Employing the modified GLPs to obtain a numerical solution of the one-dimensional hyperbolic telegraph equation.

  • Investigating carefully the convergence analysis arising from the proposed modified generalized Lucas expansion.

  • Performing some comparisons of the proposed technique with other methods to clarify the efficiency and accuracy of the presented technique.

To the best of our knowledge, the proposed technique has the following advantages:

  • Selecting the basis functions in terms of the modified GLPs enables one to obtain approximate solutions of high accuracy by retaining only a few modes. This reduces both the computational time and the computational errors.

  • The presence of two parameters in the GLPs enables one to obtain various approximate solutions for different choices of the two involved parameters.

    The above advantages motivate us to employ the GLPs. In addition, numerical investigations based on the GLPs are still few, which gives us further motivation to utilize this kind of polynomials numerically.

The paper is structured as follows: Section 2 is devoted to presenting some preliminary information and relationships of the GLPs that will be utilized throughout the work. Section 3 is devoted to transforming the linear one-dimensional telegraph type equation into its corresponding integral equation. Section 4 focuses on the selection of the basis functions and the utilization of the Galerkin method to solve the linear one-dimensional telegraph problem. Section 5 presents the convergence and error analysis of the proposed expansion. Section 6 displays the numerical results accompanied by some comparisons. Finally, in Sect. 7, some conclusions are presented.

2 Some fundamental properties of the GLPs

The following recurrence relation

$$\begin{aligned} {\phi _{i}^{a,b}}(x)=a\,x\,{\phi _{i-1}^{a,b}}(x)+b\,{\phi _{i-2}^{a,b}}(x),\quad i\ge 2, \end{aligned}$$
(2.1)

generates the sequence of GLPs with the initial values:

$$\begin{aligned} {\phi _{0}^{a,b}}(x)=2 \quad {\text {and}} \quad {\phi _{1}^{a,b}}(x)=a\,x, \end{aligned}$$

where a and b are any nonzero real constants.
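For readers who wish to experiment with the GLPs, the recurrence (2.1) translates directly into code. The following is a minimal sketch (the function name `glp` is ours, not standard notation from the paper):

```python
def glp(i, x, a=1.0, b=1.0):
    """Evaluate the generalized Lucas polynomial phi_i^{a,b}(x) via (2.1)."""
    if i == 0:
        return 2.0
    prev, cur = 2.0, a * x          # phi_0 and phi_1
    for _ in range(2, i + 1):
        prev, cur = cur, a * x * cur + b * prev
    return cur
```

With \(a=b=1\) these reduce to the classical Lucas polynomials; for instance \(\phi _{3}^{1,1}(x)=x^3+3x\), so `glp(3, 2.0)` evaluates to 14.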

The power form representation of the GLPs is given by

$$\begin{aligned} {\phi _{i}^{a,b}}(x)=i\,\sum _{r=0}^{\lfloor \frac{i}{2}\rfloor }\frac{\left( {\begin{array}{c}i-r\\ r\end{array}}\right) (a\,x)^{i-2r}(b)^r}{i-r},\quad i\ge 1, \end{aligned}$$

which can be expressed alternatively as

$$\begin{aligned} {\phi _{i}^{a,b}}(x)=2\,i\,\sum _{k=0}^{i}\frac{{a^k}\,b^{\frac{i-k}{2}}\,\delta _{i+k}\,\left( {\begin{array}{c}\frac{i+k}{2}\\ \frac{i-k}{2}\end{array}}\right) }{i+k}x^k,\quad i\ge 1, \end{aligned}$$
(2.2)

where

$$\begin{aligned} \delta _r={\left\{ \begin{array}{ll} 1, &{} \text{ if } r\, \text {even}, \\ 0, &{} \text{ if } r\, \text {odd}. \end{array}\right. } \end{aligned}$$
(2.3)
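The power form (2.2)–(2.3) can be evaluated directly; for example, it reproduces \(\phi _{2}^{a,b}(x)=a^2x^2+2b\) and \(\phi _{3}^{a,b}(x)=a^3x^3+3abx\), which follow from the recurrence (2.1). A sketch, with the helper name `glp_power` being ours:

```python
from math import comb

def glp_power(i, x, a=1.0, b=1.0):
    """phi_i^{a,b}(x) via the power-form representation (2.2), valid for i >= 1."""
    total = 0.0
    for k in range(i + 1):
        if (i + k) % 2:                      # delta_{i+k} vanishes for odd i+k
            continue
        total += (a**k * b**((i - k) // 2)
                  * comb((i + k) // 2, (i - k) // 2) / (i + k)) * x**k
    return 2 * i * total
```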

Binet’s form for the GLPs can be written as follows:

$$\begin{aligned} {\phi _{i}^{a,b}}(x)=\frac{\left( a\,x+\sqrt{a^2\,x^2+4\,b}\right) ^i+\left( a\,x-\sqrt{a^2\,x^2+4\,b}\right) ^i}{2^i}, \quad i\ge 0. \end{aligned}$$
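Binet’s form offers an independent check of the recurrence whenever \(a^2x^2+4b\ge 0\); a brief sketch (the name `glp_binet` is ours):

```python
from math import sqrt

def glp_binet(i, x, a=1.0, b=1.0):
    """phi_i^{a,b}(x) via Binet's form; requires a*a*x*x + 4*b >= 0."""
    s = sqrt(a * a * x * x + 4 * b)
    return ((a * x + s)**i + (a * x - s)**i) / 2**i
```

For \(a=b=1\) and \(x=2\), `glp_binet(3, 2.0)` agrees with \(\phi _{3}^{1,1}(2)=14\) up to roundoff.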

3 Integral form of linear one-dimensional telegraph type equation

In this section, we focus on transforming the linear one-dimensional telegraph type equation into its corresponding integral equation; after that, the integral equation is handled numerically using the technique presented in the next section.

Consider the following linear one-dimensional telegraph type equation [13]:

$$\begin{aligned} {\partial _{tt}}u(x,t)+\alpha \,{\partial _{t}}u(x,t)+\beta \,u(x,t)={\partial _{xx}}u(x,t)+f(x,t), \quad 0<x\le \ell , \quad 0<t\le \tau , \end{aligned}$$
(3.1)

subject to the initial conditions:

$$\begin{aligned} u(x,0)=g_{1}(x), \quad \partial _{t}u(x,0)=g_{2}(x), \quad 0<x\le \ell , \end{aligned}$$
(3.2)

and the nonhomogeneous boundary conditions:

$$\begin{aligned} u(0,t)=h_{0}(t), \quad u(\ell ,t)=h_{\ell }(t), \quad 0<t\le \tau , \end{aligned}$$
(3.3)

where \(f(x,t)\) represents the source term and \(\alpha \), \(\beta \) are real constants.

Integrating Eq. (3.1) twice with respect to the variable t and making use of the initial conditions (3.2), one gets

$$\begin{aligned} u(x,t)+\alpha \,\int _{0}^{t}u(x,z)\,\mathrm{d}z+\beta \,\int _{0}^{t}\int _{0}^{w}u(x,z)\,\mathrm{d}z\,\mathrm{d}w=\int _{0}^{t}\int _{0}^{w}\partial _{xx} u(x,z)\,\mathrm{d}z\,\mathrm{d}w+f_{1}(x,t), \end{aligned}$$
(3.4)

subject to the following nonhomogeneous boundary conditions:

$$\begin{aligned} u(0,t)=h_{0}(t), \quad u(\ell ,t)=h_{\ell }(t), \quad 0<t\le \tau , \end{aligned}$$
(3.5)

with

$$\begin{aligned} f_{1}(x,t)=\int _{0}^{t}\int _{0}^{w}f(x,z)\,\mathrm{d}z\,\mathrm{d}w+g_{1}(x)\,(1+\alpha \,t)+g_{2}(x)\,t. \end{aligned}$$

Therefore, Eq. (3.4) may be solved under condition (3.5) instead of solving Eq. (3.1) under the conditions (3.2) and (3.3).

To further develop our spectral algorithm for treating (3.4)–(3.5), the following transformation is suitable to transform the non-homogeneous boundary conditions into the homogeneous ones [15]:

$$\begin{aligned} v(x,t)=u(x,t)-\left( 1-\frac{x}{\ell }\right) \,h_{0}(t)-\frac{x}{\ell }\,h_{\ell }(t). \end{aligned}$$
(3.6)
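The lift (3.6) subtracts the linear interpolant of the boundary data, so that the transformed unknown vanishes at \(x=0\) and \(x=\ell \). As an illustration (a hypothetical Python sketch with names of our choosing), the data of Example 6.2 below reproduce \(v(x,t)=x^2-x\):

```python
def homogenize(u, h0, hl, ell=1.0):
    """Return v from (3.6): u minus the linear interpolant of the boundary data."""
    return lambda x, t: u(x, t) - (1 - x / ell) * h0(t) - (x / ell) * hl(t)

# Data of Example 6.2: u = x^2 + t, h_0(t) = t, h_1(t) = t + 1
v = homogenize(lambda x, t: x**2 + t, lambda t: t, lambda t: t + 1)
```

By construction `v(0, t)` and `v(1, t)` vanish for every t.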

Now, Eqs. (3.4)–(3.6) can be combined to give the following integral equation:

$$\begin{aligned} v(x,t)+\alpha \,\int _{0}^{t}v(x,z)\,\mathrm{d}z+\beta \,\int _{0}^{t}\int _{0}^{w}v(x,z)\,\mathrm{d}z\,\mathrm{d}w=\int _{0}^{t}\int _{0}^{w}\partial _{xx} v(x,z)\,\mathrm{d}z\,\mathrm{d}w+f_{2}(x,t), \end{aligned}$$
(3.7)

subject to the homogeneous boundary conditions:

$$\begin{aligned} v(0,t)= v(\ell ,t)=0,\quad 0<t\le \tau , \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} f_{2}(x,t)&=f_{1}(x,t)-\left( 1-\frac{x}{\ell }\right) \,h_{0}(t)-\frac{x}{\ell }\,h_{\ell }(t)-\alpha \,\int _{0}^{t}\left[ \left( 1-\frac{x}{\ell }\right) \,h_{0}(z)+\frac{x}{\ell }\,h_{\ell }(z)\right] \mathrm{d}z\\ {}&\quad -\beta \,\int _{0}^{t}\int _{0}^{w}\left[ \left( 1-\frac{x}{\ell }\right) \,h_{0}(z)+\frac{x}{\ell }\,h_{\ell }(z)\right] \mathrm{d}z\,\mathrm{d}w. \end{aligned} \end{aligned}$$

4 Numerical spectral treatment for linear one-dimensional telegraph type equation

This section focuses on analyzing a spectral Galerkin algorithm for numerically solving the linear one-dimensional telegraph type equation. First, we consider the following two kinds of basis functions:

$$\begin{aligned} \psi _i(x)&=x\,(\ell -x)\,{\phi _{i}^{a,b}}(x), \end{aligned}$$
(4.1)
$$\begin{aligned} \gamma _j(t)&={\phi _{j}^{a,b}}(t). \end{aligned}$$
(4.2)

Consider the following two spaces:

$$\begin{aligned}&P_M(\Omega )=\mathrm{{span}}\{{\psi _i(x)}\,{\gamma _j(t)}:i,j=0,1,\ldots ,M\},\\&P(\Omega )=\{z\in P_M(\Omega ):z(0,t)=z(\ell ,t)=0,\, 0<t\le \tau \}, \end{aligned}$$

where \(\Omega =(0,\ell )\times (0,\tau ]\).

Now, two important theorems related to the basis functions \(\psi _i(x)\) and \(\gamma _j(t)\) are stated and proved. The first theorem gives an expression for the n-times repeated integrals of \(\gamma _j(t)\), while the second introduces an expression for the nth derivative of \(\psi _i(x)\).

Lemma 4.1

The general formula that converts an n-times repeated integral into a single integral is given by [37]

$$\begin{aligned} {\int _{0}^{x}\int _{0}^{x_1}\ldots \int _{0}^{x_{n-1}}}{\,u(x_n)}\, {\mathrm{d}x_{n}\,\mathrm{d}x_{n-1}\ldots \mathrm{d}x_1}=\frac{1}{(n-1)!}\int _{0}^{x}\,(x-t)^{n-1}\,u(t)\,\mathrm{d}t. \end{aligned}$$
(4.3)
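For monomials the two sides of (4.3) can be compared explicitly, since the n-fold repeated integral of \(t^r\) is \(x^{r+n}\,r!/(r+n)!\). A numerical sketch (midpoint-rule quadrature; the function names are ours):

```python
from math import factorial

def iterated_monomial(r, n, x):
    """n-fold repeated integral from 0 of t^r, evaluated at x."""
    return x**(r + n) * factorial(r) / factorial(r + n)

def cauchy_single(r, n, x, steps=100000):
    """Right-hand side of (4.3) for u(t) = t^r, via the midpoint rule."""
    h = x / steps
    acc = sum((x - (k + 0.5) * h)**(n - 1) * ((k + 0.5) * h)**r
              for k in range(steps))
    return acc * h / factorial(n - 1)
```

For \(r=2\), \(n=3\), \(x=1\) both sides give \(1/60\).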

Theorem 4.2

The n-times repeated integrals of \(\gamma _j(x)\) are given by

$$\begin{aligned} {I^{(n)}}\gamma _j(x)=\frac{2}{(n-1)!}{\left\{ \begin{array}{ll} \sum _{k=0}^{n-1}\,\lambda _{n,k}\,x^n, &{} \text{ if } j=0, \\ \sum _{k=0}^{n-1}\displaystyle \sum _{s=0}^{j}\,\mu _{j,s,n,k}\,x^{n+s}, &{} \text{ if } j\ge 1, \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} \lambda _{n,k}=\frac{(-1)^k\,\left( {\begin{array}{c}n-1\\ k\end{array}}\right) }{k+1}, \end{aligned}$$

and

$$\begin{aligned} \mu _{j,s,n,k}=\frac{j\,(-1)^k\,\delta _{j+s}\,\left( {\begin{array}{c}n-1\\ k\end{array}}\right) \,\left( {\begin{array}{c}\frac{j+s}{2}\\ \frac{j-s}{2}\end{array}}\right) \,a^s\,b^{\frac{j-s}{2}}}{(j+s)\,(s+k+1)}. \end{aligned}$$

Proof

Making use of relation (4.3), one finds

$$\begin{aligned} {I^{(n)}}\gamma _j(x)=\frac{1}{(n-1)!}\int _{0}^{x}\,(x-t)^{n-1}\,\gamma _j(t)\,\mathrm{d}t. \end{aligned}$$

For \(j=0\), with the aid of \(\gamma _0(t)=2\), one gets

$$\begin{aligned} \begin{aligned} {I^{(n)}}\gamma _0(x)&=\frac{2}{(n-1)!}\int _{0}^{x}\,(x-t)^{n-1}\,\mathrm{d}t\\&=\frac{2}{(n-1)!}\int _{0}^{x}\,\sum _{k=0}^{n-1}\,\left( {\begin{array}{c}n-1\\ k\end{array}}\right) \,(-t)^k\,x^{n-k-1}\,\mathrm{d}t. \end{aligned} \end{aligned}$$

After integrating the previous equation, we get the desired result.

For \(j\ge 1\), one has

$$\begin{aligned} {I^{(n)}}\gamma _j(x)=\frac{1}{(n-1)!}\int _{0}^{x}\,(x-t)^{n-1}\,\gamma _j(t)\,\mathrm{d}t . \end{aligned}$$
(4.4)

Substituting formula (2.2) into (4.4) yields

$$\begin{aligned} \begin{aligned} {I^{(n)}}\gamma _j(x)&=\frac{1}{(n-1)!}\int _{0}^{x}\,(x-t)^{n-1}\,\sum _{s=0}^{j}\frac{2\,j\,{a^s}\,b^{\frac{j-s}{2}}\,\delta _{j+s}\,\left( {\begin{array}{c}\frac{j+s}{2}\\ \frac{j-s}{2}\end{array}}\right) }{j+s}t^s\,\mathrm{d}t\\&=\frac{1}{(n-1)!}\int _{0}^{x}\,\sum _{k=0}^{n-1}\,\sum _{s=0}^{j}\,\frac{2\,j\,(-1)^k\,{a^s}\,b^{\frac{j-s}{2}}\,\delta _{j+s}\,\left( {\begin{array}{c}n-1\\ k\end{array}}\right) \,\left( {\begin{array}{c}\frac{j+s}{2}\\ \frac{j-s}{2}\end{array}}\right) }{j+s}\,x^{n-k-1}\,t^{s+k}\,\mathrm{d}t, \end{aligned} \end{aligned}$$
(4.5)

and accordingly, the result for \(j\ge 1\) can be obtained after integrating Eq. (4.5). \(\square \)

As a direct consequence of Theorem 4.2, two specific integral formulas can be deduced. The following corollary exhibits these formulas.

Corollary 4.3

The following two integral formulas hold:

$$\begin{aligned}&\int _{0}^{x}{\gamma _j({x_1})}\mathrm{d}{x_1}={\left\{ \begin{array}{ll} 2\,x, &{} \text{ if } j=0, \\ \\ \sum _{s=0}^{j}\,\frac{2\,j\,\delta _{j+s}\,\left( {\begin{array}{c}\frac{j+s}{2}\\ \frac{j-s}{2}\end{array}}\right) \,a^s\,b^{\frac{j-s}{2}}}{(j+s)\,(s+1)}\,x^{s+1},&\text{ if } j\ge 1, \end{array}\right. }\\&\int _{0}^{x}\int _{0}^{x_1}{\gamma _j({x_2})}\mathrm{d}{x_2}\,\mathrm{d}{x_1}={\left\{ \begin{array}{ll} x^2, &{} \text{ if } j=0, \\ \\ \sum _{k=0}^{1}\sum _{s=0}^{j}\,\frac{2\,j\,(-1)^k\,\delta _{j+s}\,\left( {\begin{array}{c}1\\ k\end{array}}\right) \,\left( {\begin{array}{c}\frac{j+s}{2}\\ \frac{j-s}{2}\end{array}}\right) \,a^s\,b^{\frac{j-s}{2}}}{(j+s)\,(s+k+1)}\,x^{s+2},&\text{ if } j\ge 1. \end{array}\right. } \end{aligned}$$
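The first formula of Corollary 4.3 can be verified against direct quadrature of \(\gamma _j\). A sketch (all function names are ours, and the quadrature is a simple midpoint rule):

```python
from math import comb

def glp(j, t, a=1.0, b=1.0):
    """gamma_j(t) = phi_j^{a,b}(t) via the recurrence (2.1)."""
    if j == 0:
        return 2.0
    prev, cur = 2.0, a * t
    for _ in range(2, j + 1):
        prev, cur = cur, a * t * cur + b * prev
    return cur

def integral_once(j, x, a=1.0, b=1.0):
    """Closed form of int_0^x gamma_j(x1) dx1 from Corollary 4.3."""
    if j == 0:
        return 2.0 * x
    total = 0.0
    for s in range(j + 1):
        if (j + s) % 2:                      # delta_{j+s}
            continue
        total += (2 * j * comb((j + s) // 2, (j - s) // 2) * a**s
                  * b**((j - s) // 2) / ((j + s) * (s + 1))) * x**(s + 1)
    return total

def integral_numeric(j, x, steps=20000, a=1.0, b=1.0):
    """Midpoint-rule quadrature of gamma_j over [0, x] for comparison."""
    h = x / steps
    return sum(glp(j, (k + 0.5) * h, a, b) for k in range(steps)) * h
```

With \(a=b=1\) and \(j=2\), both routes give \(\int_0^x (t^2+2)\,\mathrm{d}t = x^3/3+2x\).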

Theorem 4.4

For all positive integers i and n with \(i\ge n\), the nth derivative of \(\psi _i(x)\) can be expressed in the form

$$\begin{aligned} D^{n}\psi _i(x)= \sum _{k=0\atop {(i+k+n)}\,\text {odd}}^{i-n+1}\rho _{k,i,n}\,\left[ \ell \,x^k-\frac{(k+n+1)}{(k+1)}\,x^{k+1}\right] -\sigma _{i,n}, \end{aligned}$$
(4.6)

where

$$\begin{aligned}&\rho _{k,i,n}= \frac{2\,i\,(k+n)\Gamma (\frac{i+k+n+1}{2})\,a^{k+n-1}\,b^{\frac{i-k-n+1}{2}}}{(i+k+n-1)\,\Gamma (k+1)\,\Gamma (\frac{i-k-n+3}{2})}, \\&\sigma _{i,n}=\frac{2\,i\,\delta _{i+n-2}\,\eta _{i+n}\,(n^2-n)\,\Gamma (\frac{i+n}{2})\,a^{n-2}\,b^{\frac{i-n+2}{2}}}{\Gamma (\frac{i-n+4}{2})}, \end{aligned}$$

\(\delta _{i+n-2}\) is defined in (2.3) and

$$\begin{aligned} \eta _{m}={\left\{ \begin{array}{ll} 1, &{} m=2, \\ \frac{1}{m-2}, &{} otherwise. \end{array}\right. } \end{aligned}$$

Proof

Equation (2.2) enables one to expand \(\psi _{i}(x)\) in the following form:

$$\begin{aligned} \psi _i(x)=2\,i\,\mathop {\sum }\limits _{k=0\atop {{(i+k)}\, \text {even}}}^{i}\frac{{a^k}\,b^{\frac{i-k}{2}}\,\left( {\begin{array}{c}\frac{i+k}{2}\\ \frac{i-k}{2}\end{array}}\right) }{i+k}(\ell x^{k+1}-x^{k+2}),\quad i\ge 1. \end{aligned}$$

Based on the well-known identity:

$$\begin{aligned} D^{n}x^r=\frac{r!}{(r-n)!}\,x^{r-n},\quad r\ge n,\quad n=1,2,\ldots ,r, \end{aligned}$$

the following relation can be obtained

$$\begin{aligned} D^{n}{\psi _i(x)}=2\,i\,\sum _{k=0\atop {(i+k)\, \text {even}}}^{i}\frac{{a^k}\,b^{\frac{i-k}{2}}\,\left( {\begin{array}{c}\frac{i+k}{2}\\ \frac{i-k}{2}\end{array}}\right) }{i+k}\left( \frac{\ell \,(k+1)!}{(k-n+1)!} \,x^{k-n+1}-\frac{(k+2)!}{(k-n+2)!}\,x^{k-n+2}\right) . \end{aligned}$$

After performing some rather lengthy manipulations, the last formula turns into

$$\begin{aligned} \begin{aligned} D^{(n)}{\psi _i(x)}&=2\,i\,\sum _{k={n-1}\atop {{i+k}\, \text {even}}}^{i}\frac{{a^k}\,b^{\frac{i-k}{2}}\,(k+1)\,(\frac{i+k}{2})!}{(i+k)\,(\frac{i-k}{2})!\,(k-n+1)!}\left( \ell \,x^{k-n+1}-\frac{k+2}{k-n+2}\,x^{k-n+2}\right) \\&\quad -\frac{2\,i\,\delta _{i+n-2}\,\eta _{i+n}\,(n^2-n)\,(\frac{i+n-2}{2})!\,a^{n-2}\,b^{\frac{i-n+2}{2}}}{(\frac{i-n+2}{2})!}. \end{aligned} \end{aligned}$$

Replacing \(k\) by \(k+n-1\), the desired formula (4.6) can be obtained. \(\square \)

As a direct consequence of Theorem 4.4, the following special result holds:

Corollary 4.5

For all \(i\ge 0\), one has

$$\begin{aligned} D^2\psi _i(x)={\left\{ \begin{array}{ll} -4, &{} \text{ if } i=0, \\ \\ \displaystyle \sum _{k=0}^{i-1}\frac{2\,i\,\delta _{i+k+1}\,(k+2)\Gamma (\frac{i+k+3}{2})\,a^{k+1}\,b^{\frac{i-k-1}{2}}}{(i+k+1)\,\Gamma (k+1)\,\Gamma (\frac{i-k+1}{2})}\,\left[ \ell \,x^k-\frac{(k+3)}{(k+1)}\,x^{k+1}\right] -4\,\delta _{i}\,b^{\frac{i}{2}}, &{} \text{ if } i\ge 1. \end{array}\right. } \end{aligned}$$
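Corollary 4.5 can be spot-checked by comparing the closed form with a central-difference approximation of \(\partial _{xx}\psi _i\). A sketch (the function names are ours; the parity factors keep all exponents of b integral):

```python
from math import gamma

def glp(i, x, a=1.0, b=1.0):
    """phi_i^{a,b}(x) via the recurrence (2.1)."""
    if i == 0:
        return 2.0
    prev, cur = 2.0, a * x
    for _ in range(2, i + 1):
        prev, cur = cur, a * x * cur + b * prev
    return cur

def d2_psi(i, x, a=1.0, b=1.0, ell=1.0):
    """Second derivative of psi_i(x) = x(ell - x) phi_i(x) via Corollary 4.5."""
    if i == 0:
        return -4.0
    total = 0.0
    for k in range(i):
        if (i + k + 1) % 2:                  # delta_{i+k+1}
            continue
        coef = (2 * i * (k + 2) * gamma((i + k + 3) / 2) * a**(k + 1)
                * b**((i - k - 1) / 2)
                / ((i + k + 1) * gamma(k + 1.0) * gamma((i - k + 1) / 2)))
        total += coef * (ell * x**k - (k + 3) / (k + 1) * x**(k + 1))
    if i % 2 == 0:                           # delta_i term
        total -= 4 * b**(i / 2)
    return total

def d2_numeric(i, x, h=1e-4, a=1.0, b=1.0, ell=1.0):
    """Central-difference approximation of psi_i'' for comparison."""
    f = lambda y: y * (ell - y) * glp(i, y, a, b)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2
```

For \(a=b=\ell =1\) and \(i=2\), the corollary gives \(D^2\psi _2(x)=6x-12x^2-4\), which the code reproduces.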

4.1 Galerkin technique for handling Eq. (3.7)

If we assume that the approximate solution \(v_M(x,t)\in P(\Omega )\), then it can be written as

$$\begin{aligned} v_M(x,t)=\sum _{i=0}^{M}\sum _{j=0}^{M}c_{ij}\,\psi _i(x)\,\gamma _j(t). \end{aligned}$$

Now, for the sake of applying the Galerkin method, we first compute the residual of Eq. (3.7), which can be written in the form

$$\begin{aligned} \begin{aligned} {\varvec{R}}(x,t)&=\sum _{i=0}^{M}\sum _{j=0}^{M}c_{ij}\,\psi _i(x)\,\gamma _j(t)+\alpha \,\sum _{i=0}^{M}\sum _{j=0}^{M}c_{ij}\,\psi _i(x)\,\int _{0}^{t}\gamma _j(z)\,\mathrm{d}z\\&\quad +\beta \,\sum _{i=0}^{M}\sum _{j=0}^{M}c_{ij}\,\psi _i(x)\,\int _{0}^{t}\int _{0}^{w}\gamma _j(z)\,\mathrm{d}z\,\mathrm{d}w-\sum _{i=0}^{M}\sum _{j=0}^{M}c_{ij}\,\partial _{xx}\psi _i(x)\,\int _{0}^{t}\int _{0}^{w}\gamma _j(z)\,\mathrm{d}z\,\mathrm{d}w\\ {}&-f_{2}(x,t). \end{aligned} \end{aligned}$$

In virtue of Eqs. (4.1), (4.2) and Corollaries 4.3, 4.5, the residual \({\varvec{R}}(x,t)\) can be computed explicitly. Hence, the application of the Galerkin method leads to

$$\begin{aligned} \int _{0}^{\tau }\int _{0}^{\ell }{\varvec{R}}(x,t)\,\psi _i(x)\,\gamma _j(t)\,\mathrm{d}x\,\mathrm{d}t=0,\quad 0\le {i,j}\le M. \end{aligned}$$
(4.7)

Now, Eq. (4.7) generates a linear system of dimension \((M +1)^2\) in the unknown expansion coefficients \(c_{ij}\), which may be solved via the Gaussian elimination procedure.

5 Convergence and error analysis

In this section, the convergence and error analysis of the proposed double generalized Lucas expansion are discussed. Several required lemmas are employed in this discussion.

Lemma 5.1

As shown in Abramowitz and Stegun [7], the following identity holds:

$$\begin{aligned} \sum _{i=0}^{\infty }\frac{x^{n+2\,i}}{i!\,(i+n)!}=I_n(2\,x), \end{aligned}$$

where \(I_n(x)\) is the modified Bessel function of order n of the first kind.

Lemma 5.2

As shown in Luke [28], the following inequality holds:

$$\begin{aligned} |I_n(x)|\le \frac{x^n\,\cosh (x)}{2^n\,\Gamma (n+1)}, \quad x>0. \end{aligned}$$

Lemma 5.3

As shown in Abd-Elhameed and Youssri [4], let f(x) be an infinitely differentiable function at the origin. Then

$$\begin{aligned} f(x)=\sum _{i=0}^{\infty }\sum _{j=0}^{\infty }\frac{(-1)^j\,\delta _i\,a^{-i-2\,j}\,b^j\,f^{(i+2\,j)}(0)}{j!\,(i+j)!}\,{\phi _{i}^{a,b}}(x). \end{aligned}$$

Lemma 5.4

The following inequality holds for GLPs:

$$\begin{aligned} |{\phi _{i}^{a,b}}(x)|\le 2\,\epsilon ^{i}, \quad x\in [0,\ell ], \quad \forall \ \ell >0, \end{aligned}$$
(5.1)

where \(\epsilon =\sqrt{|a|^2\,\ell ^2+2\,|b|}\).

Proof

We proceed by induction on i. The base cases hold, since \(|{\phi _{0}^{a,b}}(x)|=2\) and \(|{\phi _{1}^{a,b}}(x)|=|a\,x|\le |a|\,\ell \le \epsilon \le 2\,\epsilon \). Now suppose that the inequality (5.1) is true for \(i-1\) and \(i-2\); that is,

$$\begin{aligned} |{\phi _{i-1}^{a,b}}(x)|\le 2\,\epsilon ^{i-1}\quad and \quad |{\phi _{i-2}^{a,b}}(x)|\le 2\,\epsilon ^{i-2}, \end{aligned}$$
(5.2)

The recurrence relation (2.1) and the inequalities (5.2) then enable us to write

$$\begin{aligned} \begin{aligned} |{\phi _{i}^{a,b}}(x)|&=|a\,x\,{\phi _{i-1}^{a,b}}(x)+b\,{\phi _{i-2}^{a,b}}(x)|\\ {}&\le 2\,|a|\,\ell \,\epsilon ^{i-1}+2\,|b|\,\epsilon ^{i-2}= 2\,\epsilon ^{i-1}\left( |a|\,\ell +\frac{|b|}{\epsilon }\right) . \end{aligned} \end{aligned}$$
(5.3)

With the aid of the following identity

$$\begin{aligned} \left( \epsilon -\frac{|b|}{\epsilon }\right) ^2-|a|^2\,\ell ^2=\frac{|b|^2}{{\epsilon }^2}\ge 0, \end{aligned}$$

one finds

$$\begin{aligned} \left( |a|\,\ell +\frac{|b|}{\epsilon }\right) \le \epsilon , \end{aligned}$$

inserting the last inequality into Eq. (5.3) yields

$$\begin{aligned} |{\phi _{i}^{a,b}}(x)|\le 2\,\epsilon ^{i}. \end{aligned}$$

We obtain the desired result. \(\square \)
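The bound of Lemma 5.4 can be checked empirically on a grid; a sketch (the helper names are ours, and grid sampling only spot-checks, rather than proves, the inequality):

```python
from math import sqrt

def glp(i, x, a=1.0, b=1.0):
    """phi_i^{a,b}(x) via the recurrence (2.1)."""
    if i == 0:
        return 2.0
    prev, cur = 2.0, a * x
    for _ in range(2, i + 1):
        prev, cur = cur, a * x * cur + b * prev
    return cur

def bound_holds(i, a=1.0, b=1.0, ell=1.0, samples=500):
    """Check |phi_i(x)| <= 2 eps^i on a uniform grid over [0, ell] (Lemma 5.4)."""
    eps = sqrt(abs(a)**2 * ell**2 + 2 * abs(b))
    return all(abs(glp(i, k * ell / samples, a, b)) <= 2 * eps**i
               for k in range(samples + 1))
```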

Lemma 5.5

The following inequality holds:

$$\begin{aligned} |{\psi _{i}^{a,b}}(x)|\le \frac{\ell ^2}{2}\,\epsilon ^{i}, \quad x\in [0,\ell ], \quad \forall \ \ell >0,\quad and \quad \epsilon =\sqrt{|a|^2\,\ell ^2+2\,|b|}. \end{aligned}$$
(5.4)

Proof

Eq. (4.1) enables one to write

$$\begin{aligned} |{\psi _{i}^{a,b}}(x)|=|x\,(\ell -x)\,{\phi _{i}^{a,b}}(x)|\le \frac{\ell ^2}{4}|{\phi _{i}^{a,b}}(x)|, \end{aligned}$$

by making use of Lemma 5.4, the desired inequality (5.4) can be obtained. \(\square \)

Theorem 5.6

Let \(u(x,t)=x\,(\ell -x)\,f_{1}(x)\,f_{2}(t)\in L^{2}(\Omega )\), where \(|f_{k}^{(i)}(0)|\le Q_{k}^{i}\) for \(k=1,2\) and \(i\ge 0\), and \(Q_{k}\) is a positive constant. If  u(xt) has the expansion \(u(x,t)=\displaystyle \sum _{i=0}^{\infty }\displaystyle \sum _{j=0}^{\infty }c_{ij}\,\psi _i(x)\,\gamma _j(t),\) then the following conclusions hold:

  1. \(|c_{ij}|\le \frac{|a|^{-i-j}\,Q_{1}^{i}\,Q_{2}^{j}\,A}{i!\,j!}\), where \(A=\cosh (2\,|a|^{-1}\,|b|^{\frac{1}{2}}\,Q_{1})\,\cosh (2\,|a|^{-1}\,|b|^{\frac{1}{2}}\,Q_{2})\).

  2. The series converges absolutely.

Proof

With the aid of Lemma 5.3 and according to the assumption \(u(x,t)=x\,(\ell -x)\,f_{1}(x)\,f_{2}(t)\), we have

$$\begin{aligned} c_{ij}=\sum _{k=0}^{\infty }\sum _{s=0}^{\infty }\frac{(-1)^{k+s}\,a^{-i-j-2\,k-2\,s}\,b^{k+s}\,\delta _i\,\delta _j\,f_{1}^{i+2\,k}(0)\,f_{2}^{j+2\,s}(0)}{k!\,s!\,(i+k)!\,(j+s)!}, \end{aligned}$$

using the assumption \(|f_{k}^{(i)}(0)|\le Q_{k}^{i}\), \(k=1,2\), one can write

$$\begin{aligned} |c_{ij}|\le \sum _{k=0}^{\infty }\frac{|a|^{-i-2\,k}\,|b|^k\,Q_{1}^{i+2\,k}}{k!\,(i+k)!}\,\sum _{s=0}^{\infty }\frac{|a|^{-j-2\,s}\,|b|^s\,Q_{2}^{j+2\,s}}{s!\,(j+s)!}. \end{aligned}$$

By making use of Lemma 5.1, one gets

$$\begin{aligned} |c_{ij}|\le |b|^{\frac{-i-j}{2}}\,I_{i}\left( 2\,|a|^{-1}\,|b|^{\frac{1}{2}}\,Q_{1}\right) \,I_{j}\left( 2\,|a|^{-1}\,|b|^{\frac{1}{2}}\,Q_{2}\right) . \end{aligned}$$

Now, the application of Lemma 5.2 leads to

$$\begin{aligned} |c_{ij}|\le \frac{|a|^{-i-j}\,Q_{1}^{i}\,Q_{2}^{j}\,A}{i!\,j!},\quad where \quad A=\cosh (2\,|a|^{-1}\,|b|^{\frac{1}{2}}\,Q_{1})\,\cosh (2\,|a|^{-1}\,|b|^{\frac{1}{2}}\,Q_{2}), \end{aligned}$$

which proves the first part of Theorem 5.6.

To prove the second part of Theorem 5.6, using the inequality of the first part, we have

$$\begin{aligned} \sum _{i=0}^{\infty }\sum _{j=0}^{\infty }\left| c_{ij}\psi _i(x)\,\gamma _j(t)\right| \le \sum _{i=0}^{\infty }\sum _{j=0}^{\infty }\left| \frac{|a|^{-i-j}\,Q_{1}^{i}\,Q_{2}^{j}\,A}{i!\,j!}\psi _i(x)\,\gamma _j(t)\right| , \end{aligned}$$

with the aid of Lemmas 5.4 and 5.5, one gets

$$\begin{aligned} \begin{aligned} \sum _{i=0}^{\infty }\sum _{j=0}^{\infty }\left| c_{ij}\psi _i(x)\,\gamma _j(t)\right|&\le \sum _{i=0}^{\infty }\sum _{j=0}^{\infty }\left| \frac{|a|^{-i-j}\,Q_{1}^{i}\,Q_{2}^{j}\,A\,\ell ^2\,\epsilon ^{i+j}}{i!\,j!}\right| \\&\le A\,\ell ^2\,e^{{|a^{-1}\,Q_{1}\,\epsilon |}+{|a^{-1}\,Q_{2}\,\epsilon |}}. \end{aligned} \end{aligned}$$

Applying the comparison test implies that the series \(\displaystyle \sum _{i=0}^{\infty }\displaystyle \sum _{j=0}^{\infty }\left| c_{ij}\psi _i(x)\,\gamma _j(t)\right| \) converges absolutely. \(\square \)

Theorem 5.7

If \(u(x,t)\in L^{2}(\Omega )\) satisfies the assumptions of Theorem 5.6, then one gets

$$\begin{aligned} |u-u_M|\le \frac{C_{\xi }\,\xi ^{M+1}}{(M+1)!}, \end{aligned}$$

where \(C_{\xi }=2\,A\,\ell ^2\,e^{2\,\xi }\).

Proof

It is clear that

$$\begin{aligned} \begin{aligned} |u-u_M|&=\left| \sum _{i=0}^{\infty }\sum _{j=0}^{\infty }c_{ij}\,\psi _i(x)\,\gamma _j(t)-\sum _{i=0}^{M}\sum _{j=0}^{M}c_{ij}\,\psi _i(x)\,\gamma _j(t)\right| \\ {}&\le \left| \sum _{i=0}^{M}\sum _{j=M+1}^{\infty }c_{ij}\,\psi _i(x)\,\gamma _j(t)\right| +\left| \sum _{i=M+1}^{\infty }\sum _{j=0}^{\infty }c_{ij}\,\psi _i(x)\,\gamma _j(t)\right| . \end{aligned} \end{aligned}$$

Theorem 5.6 enables us to write

$$\begin{aligned} |u-u_M|\le {A\,\ell ^{2}}\,\left[ \sum _{i=0}^{M}\frac{[\xi _{1}]^i}{i!}\,\sum _{j=M+1}^{\infty }\frac{[\xi _{2}]^j}{j!}+\sum _{i=M+1}^{\infty }\frac{[\xi _{1}]^i}{i!}\,\sum _{j=0}^{\infty }\frac{[\xi _{2}]^j}{j!}\right] , \end{aligned}$$
(5.5)

where \(\xi _{1}={|a^{-1}\,Q_{1}\,\epsilon |}\) and \(\xi _{2}={|a^{-1}\,Q_{2}\,\epsilon |}\).

Now, inequality (5.5) may be reformulated as

$$\begin{aligned} \begin{aligned} |u-u_M|&\le {A\,\ell ^{2}\,e^{\xi _{1}}\,e^{\xi _{2}}}\,\left[ \frac{\Gamma (M+1,\xi _{1})}{\Gamma (M+1)}\,\frac{\gamma (M+1,\xi _{2})}{\Gamma (M+1)}+\frac{\gamma (M+1,\xi _{1})}{\Gamma (M+1)}\right] \\ {}&\le {A\,\ell ^{2}\,e^{\xi _{1}}\,e^{\xi _{2}}}\,\left[ \frac{\gamma (M+1,\xi _{1})}{\Gamma (M+1)}+\frac{\gamma (M+1,\xi _{2})}{\Gamma (M+1)}\right] \\ {}&\le \frac{A\,\ell ^{2}\,e^{\xi _{1}}\,e^{\xi _{2}}}{\Gamma (M+1)}\,\left[ \int _{0}^{\xi _{1}}e^{-t}\,t^M\,dt+\int _{0}^{\xi _{2}}e^{-t}\,t^M\,dt\right] , \end{aligned} \end{aligned}$$

where \(\Gamma (.)\), \(\Gamma (.,.)\) and \(\gamma (.,.)\) denote, respectively, gamma, upper incomplete gamma, and lower incomplete gamma functions [22].

In virtue of the simple inequality \(e^{-t}\le 1,\ \forall \ t\ge 0\), one gets

$$\begin{aligned} |u-u_M|\le \frac{A\,\ell ^2\,e^{\xi _{1}}\,e^{\xi _{2}}\,\left[ \xi _{1}^{M+1}+\xi _{2}^{M+1}\right] }{(M+1)!}. \end{aligned}$$

Taking \(\xi =\max \{\xi _{1},\xi _{2}\}\), the desired result can be obtained. \(\square \)

6 Illustrative examples

In this section, the generalized Lucas Galerkin method (GLGM) is applied to obtain a numerical solution of the linear one-dimensional telegraph type equation under different conditions. The accuracy of the numerical results is measured using the \(L_\infty \) and \(L_2\) error norms and the root mean square error (RMSE), defined as

$$\begin{aligned} \begin{aligned}&L_\infty ={||v-v_M||}_\infty =\max \{|v(x,t)-{v_M}(x,t)|, \quad 0\le x\le \ell , \quad 0\le t\le \tau \},\\&L_2={||v-v_M||}_2=\left( \int _{0}^{\ell }\int _{0}^{\tau }|v-v_M|^{2}\mathrm{d}t\,\mathrm{d}x\right) ^{\frac{1}{2}}\\&\mathrm{RMSE}=\sqrt{\frac{\sum _{i=0}^{M}(v(x_i,t)-v_M(x_i,t))^2}{M+1}}. \end{aligned} \end{aligned}$$
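These error measures can be approximated by sampling on a uniform grid; the following sketch (grid sampling is our simplification of the supremum and the exact integral, and all names are ours) illustrates the three definitions:

```python
from math import sqrt

def error_metrics(v, vM, ell=1.0, tau=1.0, n=50):
    """Grid-sampled L_inf error and a Riemann-sum approximation of the L_2 error."""
    xs = [i * ell / n for i in range(n + 1)]
    ts = [j * tau / n for j in range(n + 1)]
    diffs = [abs(v(x, t) - vM(x, t)) for x in xs for t in ts]
    Linf = max(diffs)
    L2 = sqrt(sum(d * d for d in diffs) * (ell / n) * (tau / n))
    return Linf, L2

def rmse(v, vM, t, M, ell=1.0):
    """Root mean square error over the M+1 grid points x_i at a fixed time t."""
    xs = [i * ell / M for i in range(M + 1)]
    return sqrt(sum((v(x, t) - vM(x, t))**2 for x in xs) / (M + 1))
```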

Remark 6.1

All results of Examples 1, 2, 3 and 4 are calculated at \(a=b=1\).

Example 6.2

As given in [16, 34], consider Eqs. (3.1)–(3.3) with the following choices:

$$\begin{aligned} \ell =1,\ \tau =1,\ \alpha =1,\ \beta =1,\ g_1(x)=x^2,\ g_2(x)=1,\ h_0(t)=t,\ h_1(t)=t+1,\ f(x,t)=x^2+t-1, \end{aligned}$$

where the exact solution is \(u(x,t)=x^2+t\). Now, applying the technique described in Section 3, one gets

$$\begin{aligned} \begin{aligned}&v(x,t)+\int _{0}^{t}v(x,z)\,\mathrm{d}z+\int _{0}^{t}\int _{0}^{w}v(x,z)\,\mathrm{d}z\,\mathrm{d}w-\int _{0}^{t}\int _{0}^{w}\partial _{xx} v(x,z)\,\mathrm{d}z\,\mathrm{d}w\\&\quad =x\,(x-1)\,\left( 1+t+\frac{t^2}{2}\right) -t^2, \end{aligned} \end{aligned}$$

subject to the homogeneous boundary conditions:

$$\begin{aligned} v(0,t)= v(1,t)=0,\quad 0<t\le 1, \end{aligned}$$

and the exact solution is: \(v(x,t)=x^2-x\).

The application of GLGM described in Sect. 4 for \(M=1\) yields the following system of equations:

$$\begin{aligned}&640\, c_{00} + 108 \,c_{01} + 160\, c_{10} + 27\,c_{11}= -160,\\&2120\, c_{00} + 396\, c_{01} + 530\, c_{10} + 99\, c_{11} = -530, \\&1120\, c_{00} + 189\, c_{01} + 384\, c_{10} + 62\, c_{11} = -280, \\&7420\, c_{00} + 1386\, c_{01} + 2600\, c_{10} + 460\, c_{11} = -1855, \end{aligned}$$

which yield \(\{c_{00}=\frac{-1}{4},\,c_{01}=0,\,c_{10}=0,\,c_{11}=0\}\), and therefore \(v_M(x,t)=x^2-x\), that is \(u_M(x,t)=x^2+t\), which is the exact solution.
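The \(4\times 4\) system above can be solved by the Gaussian elimination procedure mentioned in Sect. 4. A plain-Python sketch (the solver is a generic textbook routine, not the paper's implementation):

```python
def gauss_solve(A, b):
    """Solve A c = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (M[r][n] - sum(M[r][k] * c[k]
                              for k in range(r + 1, n))) / M[r][r]
    return c

A = [[640, 108, 160, 27],
     [2120, 396, 530, 99],
     [1120, 189, 384, 62],
     [7420, 1386, 2600, 460]]
c = gauss_solve(A, [-160, -530, -280, -1855])   # order: c00, c01, c10, c11
```

Up to roundoff this recovers \(c_{00}=-\tfrac{1}{4}\) with the remaining coefficients zero, matching the exact solution above.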

Example 6.3

As given in [30], consider Eqs. (3.1)–(3.3) with the following choices:

$$\begin{aligned}&\ell =1,\ \tau =1,\ \alpha =1,\ \beta =1,\ g_1(x)=0,\ g_2(x)=0,\ h_0(t)=0,\ h_1(t)=0,\\&\quad f(x,t)=e^{-t}\, \left[ (2-2t+t^2)(x-x^2)+2\, t^2\right] , \end{aligned}$$

where the exact solution is \(u(x,t)=(x-x^2)\,t^2\,e^{-t}\).

In Table 1, a comparison of the numerical solution with the exact solution is presented for the case corresponding to \(M=8\) at different values of time t. Table 2 presents the best \(L_2\) and \(L_\infty \) errors compared with those obtained in [30]. Also, Table 3 shows the computational (CPU) time for different values of M. In addition, Fig. 1 shows the maximum absolute errors at different values of t for the case \(M=8\). We can see from Tables 1, 2 and Fig. 1 that the proposed method is appropriate and effective.

Table 1 Comparison of the numerical solutions with the exact solution of Example 6.3
Table 2 Comparison of the best \(L_2\) and \(L_\infty \) errors of Example 6.3
Table 3 CPU time (seconds) of Example 6.3
Fig. 1
figure 1

Maximum absolute errors of Example 6.3

Example 6.4

As given in [12], consider Eqs. (3.1)–(3.3) with the following choices:

$$\begin{aligned} \ell =1,\ \tau =1,\ g_1(x)=\sin (x),\ g_2(x)=0,\ h_0(t)=0,\ h_1(t)=\cos (t)\,\sin (1), \end{aligned}$$
$$\begin{aligned} f(x,t)=\sin (x)\,\left[ \beta \,\cos (t)-\alpha \,\sin (t)\right] , \end{aligned}$$

where the exact solution is: \(u(x,t)=\cos (t)\,\sin (x)\).

The RMSEs for different values of \(\alpha \), \(\beta \) and time t at \(M=3,5,7\) are shown in Table 4. Table 5 compares the RMSEs with those obtained in [12] at \(M=7\) and \(t=0.5\). In Table 6, we present the absolute errors for \(M=7\) at different values of \(\alpha \), \(\beta \) and t. Furthermore, Fig. 2 shows the graphs of the approximate solution and the absolute error for \(\alpha =40\), \(\beta =100\) at \(M=8\). The results of Tables 4, 5, 6 and Fig. 2 show that the proposed generalized Lucas expansion yields highly accurate results using only a few of its terms.

Table 4 RMSEs of Example 6.4
Table 5 Comparison of RMSEs for Example 6.4
Table 6 The absolute errors for Example 6.4
Fig. 2
figure 2

The graphs of the approximate solution (left side) and absolute error (right side) for Example 6.4

Table 7 The absolute errors of Example 6.5
Table 8 Comparison of the best maximum absolute errors of Example 6.5
Fig. 3
figure 3

The absolute error graphs of Example 6.5

Example 6.5

As given in [17], consider Eqs. (3.1)–(3.3) with the following choices:

$$\begin{aligned}&\ell =1,\ \tau =1,\ g_1(x)=\sinh (x),\ g_2(x)=-2\,\sinh (x),\ h_0(t)=0,\ h_1(t)=e^{-2\,t}\,\sinh (1),\\&f(x,t)=(3-2\,\alpha +\beta )\,e^{-2\,t}\,\sinh (x), \end{aligned}$$

where the exact solution is \(u(x,t)=e^{-2\,t}\,\sinh (x)\).

In Table 7, the absolute errors obtained by the GLGM for different values of \(\alpha \), \(\beta \) at different values of time t are listed for \(M=8\). In Table 8, we give a comparison between the maximum absolute errors obtained from the application of the numerical scheme presented in [17] and our method for the two cases corresponding to \(\alpha =20,\, \beta =25\) at \(M=7\) and \(\alpha =40,\, \beta =100\) at \(M=8\), respectively. Figure 3 shows the absolute error for the case \(\alpha =20,\, \beta =25\) at \(M=7\). The tabulated errors of Tables 7, 8 and Fig. 3 show that the proposed method is suitable and powerful for solving the linear one-dimensional telegraph equation.

7 Concluding remarks

In this paper, a new numerical technique for solving the one-dimensional linear hyperbolic telegraph type equation using the Galerkin method was analyzed in detail. Two new kinds of basis functions built from the generalized Lucas polynomials were employed, and the spectral Galerkin method was applied to reduce the equation, together with its underlying conditions, to a linear system of equations that may be solved with the aid of a suitable numerical solver. The convergence and error analysis of the generalized Lucas expansion were investigated in depth. In addition, our numerical findings were compared with exact solutions and with the solutions obtained by some other approximate methods. These results demonstrate the good accuracy and applicability of the proposed technique.