Introduction

The telegraph equation is a second-order hyperbolic partial differential equation (HPDE). It is commonly used to model the transmission and propagation of electrical signals [1, 2], and it has many applications in fields such as microwave and radio-frequency engineering [3]. In recent years, there has been growing interest in studying the hyperbolic telegraph equation numerically. For instance, in [4], the authors presented two schemes based on Lagrange polynomials for the numerical solution of the second-order two-dimensional telegraph equation with Dirichlet boundary conditions. An approximate solution of the hyperbolic telegraph equation was proposed in [5] using the Bernoulli collocation method. In [6], the authors developed an efficient computational method for numerically solving the hyperbolic telegraph equation in two- and three-dimensional spaces using the generalized finite difference method. A Haar wavelet collocation approach was followed in [7] for treating one- and two-dimensional second-order linear and nonlinear hyperbolic telegraph equations. Other numerical algorithms based on meshless approaches were developed for the numerical solution of some types of hyperbolic telegraph equations in [8,9,10]. A Galerkin wavelet approach was proposed in [11] for solving the telegraph equation. Some other contributions regarding telegraph-type models can be found in [12,13,14].

As is well known, partial differential equations (PDEs) and fractional PDEs describe a variety of physical phenomena that are often difficult to solve analytically; therefore, the employment of numerical techniques to treat such equations becomes necessary. For some of these studies, see [15,16,17,18,19,20]. Numerical techniques based on spectral methods are efficient for solving PDEs since they provide excellent error properties and exponential rates of convergence. The main idea behind spectral methods is to build approximate solutions of differential equations as expansions in orthogonal polynomials. The three celebrated versions of spectral methods, namely the collocation, tau, and Galerkin methods, are popular techniques for determining the coefficients of such expansions. There have been very promising efforts in developing spectral methods for solving various types of differential equations. In this regard, the collocation method is applied in [21,22,23,24,25], the tau method is used in [26,27,28,29,30,31,32], and the Galerkin method is employed in [33,34,35,36,37].

Recently, there has been renewed interest in employing different orthogonal polynomials in spectral methods. In particular, there are considerable contributions concerned with the first-, second-, third-, and fourth-kinds of Chebyshev polynomials. These four kinds of Chebyshev polynomials have played important roles in the numerical solution of various types of differential equations using the different versions of spectral methods (see, [38,39,40,41,42]). One of the advantages of using Chebyshev polynomials is the good representation of smooth functions by finite Chebyshev expansions. In addition, the coefficients in the Chebyshev expansion approach zero faster than any inverse power of \(m\) as \(m\) tends to infinity. In [43], Masjed-Jamei presented two other half-trigonometric orthogonal Chebyshev polynomials, which he named the fifth- and sixth-kinds. We comment here that the authors in [28, 44] presented complete trigonometric representations for, respectively, the Chebyshev polynomials of the fifth- and sixth-kinds. In fact, these two types of polynomials are special cases of the so-called generalized ultraspherical polynomials (see, [45]). Recently, these two families of orthogonal polynomials have received considerable attention among many researchers. For instance, the authors in [44] and [28] used, respectively, the fifth- and sixth-kind Chebyshev polynomials as basis functions to solve some types of linear and nonlinear fractional-order differential equations. In [46], the reaction-diffusion-convection equation was solved using the sixth-kind Chebyshev collocation method.

Our main aim in the current paper can be summarized as follows:

  • Stating and proving new theorems concerned with the shifted sixth-kind Chebyshev polynomials and their modified ones.

  • Employing the modified shifted polynomials to obtain the proposed numerical solution of the telegraph equation.

  • Investigating carefully the convergence analysis of the proposed expansion.

  • Performing some comparisons of the proposed scheme with schemes proposed by some other researchers to clarify the efficiency and accuracy of the presented scheme.

To the best of our knowledge, the novelty of this paper comes from the following points:

  • Selecting the basis functions in terms of the modified Chebyshev polynomials of the sixth-kind is new.

  • Combining the use of the spectral tau method with Kronecker’s algebra enables us to solve the resulting linear system more easily.

The rest of the paper is organized as follows. In Sect. 2, some properties and relations of the Chebyshev polynomials of the sixth-kind and their shifted ones are presented. Section 3 is concerned with employing the shifted sixth-kind Chebyshev tau method to solve the hyperbolic telegraph-type equation. Section 4 is devoted to the discussion of the convergence and error analysis of the suggested scheme. Section 5 provides some numerical results and comparisons to illustrate the efficiency of the proposed technique. Finally, a conclusion is presented in Sect. 6.

An overview of Chebyshev polynomials of the sixth-kind

The main objective of this section is to introduce some properties and relations of Chebyshev polynomials of the sixth-kind which will be used in the current study.

Definition 1

[43] The sixth-kind Chebyshev polynomials \(\{Y_{i}(t), i=0,1,2,...\}\) are a sequence of orthogonal polynomials on \([-1,1],\) that may be defined as \(Y_{j}(t)=\tilde{S}^{-5,2,-1,1}_{j}(t),\) where

$$\begin{aligned}&\tilde{S}^{m,n,p,q}_{k}(t)\\&\quad =\left( \prod _{i=0}^{\lfloor \frac{k}{2}\rfloor -1}\frac{(2\,i+(-1)^{k+1}+2)\,{q}+n}{(2\,i+(-1)^{k+1}+2\,\lfloor \frac{k}{2}\rfloor )\,{p}+m}\right) \,S^{m,n,p,q}_{k}(t), \end{aligned}$$

and

$$\begin{aligned}&S^{m,n,p,q}_{k}(t)\\&\quad =\sum _{r=0}^{\lfloor \frac{k}{2}\rfloor }\left( \left( {\begin{array}{c}\lfloor \frac{k}{2}\rfloor \\ r\end{array}}\right) \,\left( \prod _{i=0}^{\lfloor \frac{k}{2}\rfloor -r-1}\frac{(2\,i+(-1)^{k+1}+2\,\lfloor \frac{k}{2}\rfloor )\,{p}+m}{(2\,i+(-1)^{k+1}+2)\,{q}+n}\right) \,t^{k-2\,r}\right) , \end{aligned}$$

where \(\left\lfloor z\right\rfloor\) denotes the well-known floor function.

These polynomials satisfy the following orthogonality relation ( [28]):

$$\begin{aligned} \int \limits _{-1}^{1}t^2\,\sqrt{1-t^2}\,Y_i(t)\,Y_j(t)\,dt=h_i\,\delta _{i,j}, \end{aligned}$$

where

$$\begin{aligned} h_i=\frac{\pi }{2^{2\,i+3}}\,{\left\{ \begin{array}{ll} 1 , &{} \hbox {if }i\, \text {even}, \\ \frac{i+3}{i+1} ,&{} \hbox {if }i\, \text {odd}, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \delta _{i,j}={\left\{ \begin{array}{ll} 1, &{} \hbox {if }\,i=j, \\ 0, &{} \hbox {if }\,i\ne j. \end{array}\right. } \end{aligned}$$

The polynomials \(Y_{i}(t)\) may be determined with the aid of the following recurrence relation:

$$\begin{aligned} Y_i(t)=t\,Y_{i-1}(t)-\alpha _{i}\,Y_{i-2}(t),\ Y_0(t)=1, \quad Y_1(t)=t,\quad i\ge 2, \end{aligned}$$
(1)
$$\begin{aligned} \alpha _{i}=\frac{i\,(i+1)+(-1)^{i}\,(2\,i+1)+1}{4\,i\,(i+1)}. \end{aligned}$$

The shifted orthogonal Chebyshev polynomials of the sixth-kind \(Y^{*}_{i}(t)\) are defined on \([0,\tau ]\) as

$$\begin{aligned} Y^{*}_{i}(t)=Y_i\left( \frac{2\; t}{\tau }-1\right) , \quad \tau > 0. \end{aligned}$$
(2)

The orthogonality relation of \(Y^{*}_{i}(t)\) on \([0,\tau ]\) is given by:

$$\begin{aligned} \int _{0}^{\tau }\omega (t)\, Y^{*}_{i}(t)\,Y^{*}_{j}(t)\,\,dt=h_{\tau ,i}\,\delta _{i,j}, \end{aligned}$$
(3)

where

$$\begin{aligned} \omega (t)=(2\,t-\tau )^2\,\sqrt{t\,\tau -t^2}, \end{aligned}$$

and

$$\begin{aligned} h_{\tau ,i}=\frac{\tau ^4}{4}\, h_i. \end{aligned}$$
(4)
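For readers who wish to experiment with these polynomials, the following Python sketch (our own illustration; the helper names alpha, Y, Y_shifted, h and inner_product are not part of the original formulation) evaluates \(Y^{*}_{i}(t)\) through the recurrence (1) and the shift (2), and verifies the orthogonality relation (3) together with (4) by adaptive quadrature.

```python
import numpy as np
from scipy.integrate import quad

def alpha(i):
    # Recurrence coefficient alpha_i from Eq. (1).
    return (i * (i + 1) + (-1)**i * (2 * i + 1) + 1) / (4 * i * (i + 1))

def Y(i, x):
    # Sixth-kind Chebyshev polynomial Y_i(x) on [-1, 1] via the recurrence (1).
    if i == 0:
        return 1.0
    y_prev, y_curr = 1.0, x
    for k in range(2, i + 1):
        y_prev, y_curr = y_curr, x * y_curr - alpha(k) * y_prev
    return y_curr

def Y_shifted(i, t, tau):
    # Shifted polynomial Y*_i(t) on [0, tau], Eq. (2).
    return Y(i, 2.0 * t / tau - 1.0)

def h(i):
    # Normalization constant h_i of Y_i on [-1, 1].
    return np.pi / 2**(2 * i + 3) * (1.0 if i % 2 == 0 else (i + 3) / (i + 1))

def inner_product(i, j, tau=1.0):
    # Left-hand side of the orthogonality relation (3).
    w = lambda t: (2 * t - tau)**2 * np.sqrt(t * tau - t**2)
    val, _ = quad(lambda t: w(t) * Y_shifted(i, t, tau) * Y_shifted(j, t, tau), 0.0, tau)
    return val

tau = 1.0
print(inner_product(3, 3, tau), tau**4 / 4 * h(3))  # the two numbers should agree (Eq. (4))
print(inner_product(2, 5, tau))                     # should be close to zero
```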

The problem of expressing the derivatives of several orthogonal polynomials in terms of their original polynomials is of great interest in numerical analysis and, in particular, in the numerical solution of different types of differential equations. The following theorem exhibits the derivatives of the polynomials \(Y^{*}_{i}(t)\) in terms of their original ones.

Theorem 1

[47] The following expression for the derivatives of the polynomials \(Y^{*}_{i}(t)\) is valid

$$\begin{aligned} \begin{aligned} \displaystyle \frac{d^m Y^{*}_{i}(t)}{d\, t^m}=\sum _{r=0}^{\left\lfloor \frac{i-m}{2}\right\rfloor } A_{r,i,m}\, Y^{*}_{i-m-2\, r}(t),\quad i\ge m, \end{aligned} \end{aligned}$$
(5)

with

$$\begin{aligned} \begin{aligned}&A_{r,i,m}=\frac{i!}{2^{2 r-m}\, \tau ^m\, r!\, (i-r-m+2)!} \\ &\qquad\quad \times {\left\{ \begin{array}{ll} (i-m+1) (2+i-2 r-m)\\ \quad \times \, _{4} F_{3}\left. \left( \begin{array}{cccc} -r,\frac{-i+m+1}{2},-i+r+m-2,-\left\lfloor \frac{i+1}{2}\right\rfloor -\frac{1}{2}\\ -i-1,\frac{-i+m-1}{2},\frac{1}{2}-\left\lfloor \frac{i+1}{2}\right\rfloor \end{array} \right| 1\right) ,\\ \quad \!\!\! (i+m)\ even,\\ (i-m+2) (1+i-2 r-m)\\ \quad \times \, _{4} F_{3}\left. \left( \begin{array}{cccc} -r,\frac{m-i}{2},-i+r+m-2,-\left\lfloor \frac{i+1}{2}\right\rfloor -\frac{1}{2}\\ -i-1,\frac{-i+m}{2}-1,\frac{1}{2}-\left\lfloor \frac{i+1}{2}\right\rfloor \end{array} \right| 1\right) ,\\ \quad \!\!\! (i+m)\ odd. \end{array}\right. } \end{aligned} \end{aligned}$$

Remark 1

Although the two hypergeometric functions that appear in the coefficients \(A_{r,i,m}\) are terminating, they cannot be summed in general for all \(m\). For the two cases \(m=1\) and \(m=2\), however, these hypergeometric functions can be summed, and accordingly, the expressions of the first- and second-order derivatives can be obtained in simple forms. The following two lemmas, concerning the reduction of two terminating hypergeometric functions, are required.

Lemma 1

Let \(\ell\) and i be any two nonnegative integers. One has

$$\begin{aligned} \begin{aligned}&\ _{4} F_{3}\left. \left( \begin{array}{cccc} -\ell ,-2 i+\ell ,-\frac{1}{2}-i,\frac{3}{2}-i\\ -1-2 i,\frac{1}{2}-i,\frac{1}{2}-i \end{array} \right| 1\right) \\&\quad =\displaystyle \frac{\ell !}{(2i)!\, (2i-1)} \\& \qquad\quad \times {\left\{ \begin{array}{ll} \left( -1-\ell ^2+2 i (1+\ell )\right) (2 i-\ell )!,&{} \ell \ even,\\ (1+2 i-\ell )! (1+\ell ),&{} \ell \ odd.\\ \end{array}\right. } \end{aligned} \end{aligned}$$
(6)

Proof

To obtain a reduction formula for the above terminating hypergeometric function, set

$$\begin{aligned} G_{\ell ,i}=\ _{4} F_{3}\left. \left( \begin{array}{cccc} -\ell ,-2 i+\ell ,-\frac{1}{2}-i,\frac{3}{2}-i\\ -1-2 i,\frac{1}{2}-i,\frac{1}{2}-i \end{array} \right| 1\right) , \end{aligned}$$

and we resort to the celebrated algorithm of Zeilberger ( [48]) to show that \(G_{\ell ,i}\) satisfies the following recurrence relation:

$$\begin{aligned}&-\ell (\ell -1) \left( 2\, \ell ^2-4 \ell \, i-2 \ell -2 i+1\right) \, G_{\ell -2,i}\\&\quad +4 \ell (\ell -i-1) (-2 i-2+\ell )\, G_{\ell -1,i}\\&\quad +(-2 i+\ell -1) (-2 i-2+\ell ) \left( 2 \ell ^2-4 \ell i-6 \ell +2 i+5\right) \,\\&\qquad \times G_{\ell ,i}=0, \quad G_{0,i}=1,\ G_{1,i}=\frac{2}{2i-1}. \end{aligned}$$

The exact solution of the above recurrence relation is given by

$$\begin{aligned}&G_{\ell ,i}=\displaystyle \frac{\ell !}{(2i)!\, (2i-1)}\ {\left\{ \begin{array}{ll} \left( -1-\ell ^2+2 i (1+\ell )\right) (2 i-\ell )!,\\ \quad \ell \ even,\\ (1+2 i-\ell )! (1+\ell ),\\ \quad \ell \ odd.\\ \end{array}\right. } \end{aligned}$$

\(\square\)

Lemma 2

Let \(\ell\) and i be two nonnegative integers. One has

$$\begin{aligned} \begin{aligned}&\ _{4} F_{3}\left. \left( \begin{array}{cccc} -\ell ,-1-2 i+\ell ,-\frac{3}{2}-i,\frac{1}{2}-i\\ -2-2 i,-\frac{1}{2}-i,-\frac{1}{2}-i \end{array} \right| 1\right) \\&\quad = \frac{\ell !}{(1+2 i) (2+2 i)!}\,\\&\qquad \times {\left\{ \begin{array}{ll} (1+3 \ell +2 i (1+\ell )) (2+2 i-\ell )!,\\ \quad \ell \ even,\\ \left( 4+4 i^2-2 i (-5+\ell )-3 \ell \right) (1+2 i-\ell )!\, (\ell +1),\\ \quad \ell \ odd.\\ \end{array}\right. } \end{aligned} \end{aligned}$$
(7)

Proof

First set

$$\begin{aligned} H_{\ell ,i}= \ _{4} F_{3}\left. \left( \begin{array}{cccc} -\ell ,-1-2 i+\ell ,-\frac{3}{2}-i,\frac{1}{2}-i\\ -2-2 i,-\frac{1}{2}-i,-\frac{1}{2}-i \end{array} \right| 1\right) . \end{aligned}$$

The application of Zeilberger’s algorithm again ( [48]) enables one to obtain the following recurrence relation for \(H_{\ell ,i}\):

$$\begin{aligned} \begin{aligned}&(1-\ell )\, \ell \, (-1-i+\ell )\, \left( -1-2 i-4\, \ell -4\, i\, \ell +2\, \ell ^2\right) \,\\&H_{\ell -2,i}+\ell (-3-2 i+\ell ) \\& \times \left( 5+12\, i+4\, i^2-6\, \ell -4\, i\, \ell +2\, \ell ^2\right) H_{\ell -1,i}\\&+(-3-2 i+\ell ) (-2-2 i+\ell ) (-2-i+\ell ) \\& \times \left( 5+2\, i-8\, \ell -4\, i\, \ell +2\, \ell ^2\right) \, H_{\ell ,i}=0,\\ &H_{0,i}=1,\ H_{1,i}=\frac{2 \left( 1+8 i+4 i^2\right) (2 i)!}{(1+2 i) (2+2 i)!}, \end{aligned} \end{aligned}$$

whose exact solution is given by

$$\begin{aligned} H_{\ell ,i}= & {} \frac{\ell !}{(1+2 i) (2+2 i)!}\,\\&\times{\left\{ \begin{array}{ll} (1+3 \ell +2 i (1+\ell )) (2+2 i-\ell )!,\\ \quad \ell \ even,\\ \left( 4+4 i^2-2 i (-5+\ell )-3 \ell \right) (1+2 i-\ell )!\, (\ell +1),\\ \quad \ell \ odd.\\ \end{array}\right. } \end{aligned}$$

\(\square\)

Now, we are going to establish two expressions for the first- and second-order derivatives of the shifted Chebyshev polynomials of the sixth-kind.

Corollary 1

For all \(i \ge 1\), the first derivative of \(Y^{*}_{i}(t)\) is given by

$$\begin{aligned} \displaystyle \frac{d Y^{*}_{i}(t)}{dt}= \displaystyle \sum _{r=0}^{i-1}M_{r,i}\,Y^{*}_{r}(t), \end{aligned}$$
(8)

and \(M_{r,i}\) is given by

$$\begin{aligned} M_{r,i}=\displaystyle \frac{2^{2-i+r}}{\tau }{\left\{ \begin{array}{ll} r+1,&{} i\ \text {even},\ r\ \text {odd},\\ \displaystyle \frac{i (r+2)}{i+1},&{} i\ \text {odd},\ \frac{i-r-1}{2}\ \text {even},\\ \displaystyle \frac{(i+4) (r+2)}{i+1},&{} i\ \text {odd},\ \frac{i-r-3}{2}\ \text {even},\\ 0,&{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

Proof

Setting \(m=1\) in (5) yields the following two reduced formulas ( [47]) for \(\displaystyle \frac{d Y^{*}_{i}(t)}{dt}\)

$$\begin{aligned}&\displaystyle \frac{d Y^{*}_{2i}(t)}{dt}=\frac{1}{\tau }\,\displaystyle \sum _{r=0}^{i-1} 2^{2-2\, r}\, (i-r)\, Y^{*}_{2i-2r-1}(t), \end{aligned}$$
(9)
$$\begin{aligned}&\begin{aligned} \displaystyle \frac{d Y^{*}_{2i+1}(t)}{dt}&=\frac{\sqrt{\pi }\, 2^{-2 i}}{\tau \, (i+1) \Gamma \left( i+\frac{1}{2}\right) }\,\\&\quad \displaystyle \times\sum _{r=0}^{\left\lfloor \frac{i}{2}\right\rfloor } \frac{(2 i-2 r+2)! \left( -i-\frac{1}{2}\right) _{2 r}}{(i-2 r)! (-2 i+2 r-2)_{2 r}}\, Y^{*}_{2i-4r}(t)\\&+\frac{\sqrt{\pi }\, 2^{-2 i} (2 i+5)}{\tau \, (i+1) (2 i+1) \Gamma \left( i+\frac{1}{2}\right) }\,\\&\quad \displaystyle \times \sum _{r=0}^{\left\lfloor \frac{i-1}{2}\right\rfloor } \frac{(2 i-2 r+1)! \left( -i-\frac{1}{2}\right) _{2 r+1}}{(i-2 r-1)! (-2 i+2 r-1)_{2 r+1}}\, Y^{*}_{2i-4r-2}(t). \end{aligned} \end{aligned}$$
(10)

The two formulas in (9) and (10) can be merged together to give the following unified formula for the first-order derivative:

$$\begin{aligned} \displaystyle \frac{d Y^{*}_{i}(t)}{dt}= \displaystyle \sum _{r=0}^{i-1}M_{r,i}\,Y^{*}_{r}(t), \end{aligned}$$

where

$$\begin{aligned} M_{r,i}=\displaystyle \frac{2^{2-i+r}}{\tau }{\left\{ \begin{array}{ll} r+1,&{} i\ \text {even},\ r\ \text {odd},\\ \displaystyle \frac{i (r+2)}{i+1},&{} i\ \text {odd},\ \frac{i-r-1}{2}\ \text {even},\\ \displaystyle \frac{(i+4) (r+2)}{i+1},&{} i\ \text {odd},\ \frac{i-r-3}{2}\ \text {even},\\ 0,&{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

\(\square\)

Corollary 2

For all \(i \ge 2\), the second-order derivative of the polynomials \(Y^{*}_{i}(t)\) can be expressed explicitly as:

$$\begin{aligned} \displaystyle \frac{d^2 Y^{*}_{i}(t)}{d\, t^2}=\sum _{r=0}^{i-2} \gamma _{r,i} \,Y^{*}_{r}(t), \end{aligned}$$
(11)

where

$$\begin{aligned} \gamma _{r,i}=\frac{2^{2-i+r}}{\tau ^2}\,{\left\{ \begin{array}{ll} (2+r) (-8+i (4+i)-r (4+r)),\\ \quad \hbox {if }i\ \text {even}\quad and \quad \frac{i-r-2}{2}\, \text {even}, \\ (i-r) (2+r) (4+i+r),\\ \quad \hbox {if } i\ \text {even}\quad and \quad \frac{i-r-4}{2}\, \text {even}, \\ \displaystyle \frac{(1+r) (4+i+r) \left( -4+2 i+i^2-(2+i) r\right) }{1+i},\\ \quad \hbox {if } i\ \text {odd}\quad and \quad \frac{i-r-2}{2}\, \text {even}, \\ \displaystyle \frac{(i-r) (1+r) (2 (2+r)+i (6+i+r))}{1+i},\\ \quad \hbox {if } i\ \text {odd}\quad and \quad \frac{i-r-4}{2}\, \text {even},\\ 0, \qquad\qquad \text {otherwise}. \end{array}\right. } \end{aligned}$$

Proof

Setting \(m=2\) in relation (5) yields the following two formulas:

$$\begin{aligned} \begin{aligned}&\displaystyle \frac{d^2 Y^{*}_{2i}(t)}{d\, t^2}\\&\quad =\frac{2^{4-2 i} \sqrt{\pi } (2 i)!}{\tau ^2\, \Gamma \left( -\frac{1}{2}+i\right) } \displaystyle \sum _{\ell =0}^{i-1} \frac{\left( \frac{1}{2}-i\right) _\ell }{(-1+i-\ell )! \ell ! (-2 i+\ell )_\ell } \\& \quad \times \ _{4} F_{3}\left. \left( \begin{array}{cccc} -\ell ,-\frac{1}{2}-i,\frac{3}{2}-i,-2 i+\ell \\ -1-2 i,\frac{1}{2}-i,\frac{1}{2}-i \end{array} \right| 1\right) \, Y^{*}_{2i-2\ell -2}(t), \end{aligned} \end{aligned}$$
(12)

and

$$\begin{aligned} \begin{aligned}&\displaystyle \frac{d^2 Y^{*}_{2i+1}(t)}{d\, t^2}\\&\quad =\displaystyle \frac{2i+1}{\tau ^2}\, \sum _{\ell =0}^{i-1}\frac{2^{3-2 \ell } (1+2 i) (i-\ell )}{(1+2 i-\ell )! \ell !} \\&\quad \times \ _{4} F_{3}\left. \left( \begin{array}{cccc} -\ell ,-\frac{3}{2}-i,\frac{1}{2}-i,-1-2 i+\ell \\ -2-2 i,-\frac{1}{2}-i,-\frac{1}{2}-i \end{array} \right| 1\right) \, Y^{*}_{2i-2\ell -1}(t). \end{aligned} \end{aligned}$$
(13)

If we substitute the reduction formulas (6) and (7) for the two terminating hypergeometric functions that appear in (12) and (13), then the following two formulas are obtained:

$$\begin{aligned} \begin{aligned}&\displaystyle \frac{d^2 Y^{*}_{2i}(t)}{d\, t^2}\\&\quad =\displaystyle \frac{1}{\tau ^2}\left( \sum _{\ell =0}^{\left\lfloor \frac{i-1}{2}\right\rfloor } 2^{3-4 \ell }\, (i-2 \ell )\, \left( -1-4\, \ell ^2+i\, (2+4\, \ell )\right) Y^{*}_{2 i-4 \ell -2}(t)\right. \\&\qquad\quad +\left. \sum _{\ell =0}^{\left\lfloor \frac{i-2}{2}\right\rfloor } 2^{3-4 \ell }\, (-1+i-2 \ell )\, (i-\ell )\, (1+\ell )\, Y^{*}_{2 i-4 \ell -4}(t)\right) , \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned}&\displaystyle \frac{d^2 Y^{*}_{2i+1}(t)}{d\, t^2}\\&\quad =\displaystyle \frac{1}{\tau ^2\, (i+1)}\left( \sum _{\ell =0}^{\left\lfloor \frac{i-1}{2}\right\rfloor } 2^{3-4 \ell }\, (i-2 \ell )\,\right. \\&\quad \left. \times (1+i-\ell )\, (1+6\, \ell +i\, (2+4\, \ell ))\, Y^{*}_{2\, i-4\, \ell -1}(t)\right. \\&\quad +\sum _{\ell =0}^{\left\lfloor \frac{i-2}{2}\right\rfloor }\, 2^{1-4\, \ell } (1+4\, i (2+i-\ell )-6\, \ell )\\&\qquad \times\left. (-1+i-2\, \ell ) (1+\ell )\, Y^{*}_{2 i-4 \ell -3}(t)\right.\Bigg) . \end{aligned} \end{aligned}$$

The last two formulas can be unified to give the following formula

$$\begin{aligned} \displaystyle \frac{d^2 Y^{*}_{i}(t)}{d\, t^2}=\sum _{r=0}^{i-2} \gamma _{r,i} \,Y^{*}_{r}(t), \end{aligned}$$

with

$$\begin{aligned} \gamma _{r,i}=\frac{2^{2-i+r}}{\tau ^2}\,{\left\{ \begin{array}{ll} (2+r) (-8+i (4+i)-r (4+r)),\\ \quad \hbox {if }i\ \text {even}\quad and \quad \frac{i-r-2}{2}\, \text {even}, \\ (i-r) (2+r) (4+i+r),\\ \quad \hbox {if } i\ \text {even}\quad and \quad \frac{i-r-4}{2}\, \text {even}, \\ \displaystyle \frac{(1+r) (4+i+r) \left( -4+2 i+i^2-(2+i) r\right) }{i+1},\\ \quad \hbox {if } i\ \text {odd}\quad and \quad \frac{i-r-2}{2}\, \text {even}, \\ \displaystyle \frac{(i-r) (1+r) (2 (2+r)+i (6+i+r))}{i+1},\\ \quad \hbox {if } i\ \text {odd}\quad and \quad \frac{i-r-4}{2}\, \text {even},\\ 0,\qquad\qquad \text {otherwise}. \end{array}\right. } \end{aligned}$$

This completes the proof of Corollary 2. \(\square\)
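As an independent sanity check on Corollaries 1 and 2, the following sympy sketch (the helper names Y_star, M_coef and gamma_coef are ours) builds \(Y^{*}_{i}(t)\) symbolically from the recurrence (1) and confirms that the expansions (8) and (11), with the stated coefficients, reproduce the exact first and second derivatives for the first few degrees.

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)

def alpha(i):
    return sp.Rational(i * (i + 1) + (-1)**i * (2 * i + 1) + 1, 4 * i * (i + 1))

def Y_star(i):
    # Shifted sixth-kind polynomial Y*_i(t) on [0, tau], built from the recurrence (1).
    x = 2 * t / tau - 1
    y_prev, y_curr = sp.Integer(1), x
    if i == 0:
        return y_prev
    for k in range(2, i + 1):
        y_prev, y_curr = y_curr, sp.expand(x * y_curr - alpha(k) * y_prev)
    return y_curr

def M_coef(r, i):
    # Coefficients M_{r,i} of Corollary 1.
    if i % 2 == 0 and r % 2 == 1:
        c = sp.Integer(r + 1)
    elif i % 2 == 1 and (i - r - 1) % 4 == 0:
        c = sp.Rational(i * (r + 2), i + 1)
    elif i % 2 == 1 and (i - r - 3) % 4 == 0:
        c = sp.Rational((i + 4) * (r + 2), i + 1)
    else:
        return sp.Integer(0)
    return sp.Integer(2)**(2 - i + r) / tau * c

def gamma_coef(r, i):
    # Coefficients gamma_{r,i} of Corollary 2.
    if (i - r) % 2 != 0:
        return sp.Integer(0)
    pref = sp.Integer(2)**(2 - i + r) / tau**2
    if i % 2 == 0 and (i - r - 2) % 4 == 0:
        return pref * (2 + r) * (-8 + i * (4 + i) - r * (4 + r))
    if i % 2 == 0 and (i - r - 4) % 4 == 0:
        return pref * (i - r) * (2 + r) * (4 + i + r)
    if i % 2 == 1 and (i - r - 2) % 4 == 0:
        return pref * sp.Rational((1 + r) * (4 + i + r) * (-4 + 2 * i + i**2 - (2 + i) * r), i + 1)
    if i % 2 == 1 and (i - r - 4) % 4 == 0:
        return pref * sp.Rational((i - r) * (1 + r) * (2 * (2 + r) + i * (6 + i + r)), i + 1)
    return sp.Integer(0)

for i in range(2, 8):
    d1 = sp.expand(sp.diff(Y_star(i), t)) - sp.expand(sum(M_coef(r, i) * Y_star(r) for r in range(i)))
    d2 = sp.expand(sp.diff(Y_star(i), t, 2)) - sp.expand(sum(gamma_coef(r, i) * Y_star(r) for r in range(i - 1)))
    print(i, sp.simplify(d1) == 0, sp.simplify(d2) == 0)   # both should print True
```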

The proposed numerical scheme for treating the telegraph-type equation

The main aim of this section is to introduce the modified shifted sixth-kind Chebyshev polynomials and utilize them for the numerical solution of the linear hyperbolic telegraph type equation.

Now, we introduce the modified shifted Chebyshev polynomials of the sixth-kind, defined as follows:

$$\begin{aligned} \psi _i(z)=z\,(\ell -z)\,Y^{*}_{i}(z), \end{aligned}$$
(14)

where \(Y^{*}_{i}(z)\) are the shifted Chebyshev polynomials of the sixth-kind defined in (2) (with \(\tau\) replaced by \(\ell\)).

The family of polynomials \(\left\{ \psi _i(z)\right\} _{i\ge 0}\) forms an orthogonal set of polynomials on \([0,\ell ]\) in the sense that:

$$\begin{aligned} \int _{0}^{\ell }\frac{(2\,z-\ell )^2}{z^{\frac{3}{2}}\,(\ell -z)^{\frac{3}{2}}}\,\psi _i(z)\,\psi _j(z)\,dz=h_{\ell ,i}\,\delta _{i,j}, \end{aligned}$$

where \(h_{\ell ,i}\) is given in (4).

Our main target in this section is to employ the modified polynomials \(\psi _{i}(z)\) to solve the telegraph-type equation. Explicitly, we will analyze in detail our proposed method, namely the modified shifted sixth-kind Chebyshev tau method (MS6CTM).

In order to proceed with our proposed algorithm, and based on Corollaries 1 and 2, we state and prove the following theorem, which expresses the second-order derivative of the polynomials \(\psi _i(z)\) in terms of the polynomials \(Y^{*}_{j}(z)\).

Theorem 2

The second-order derivative of \(\psi _{i}(z)\) can be explicitly expressed as:

$$\begin{aligned} \displaystyle \frac{d^2\, \psi _{i}(z)}{d\, z^2}=\sum _{j=0}^{i}\lambda _{j,i}\,Y^{*}_{j}(z), \end{aligned}$$
(15)

where

$$\begin{aligned} \lambda _{j,i}=\frac{1}{2^{i-j-1}}\,{\left\{ \begin{array}{ll} \frac{-(i+1)(i+2)}{2}, &{} \hbox {if } i=j, \\ j+2, &{} \hbox {if } i,j\, \text {even}\quad and \quad \frac{i-j+2}{2}\, \text {odd}, \\ -3\,(j+2), &{} \hbox {if } i,j\, \text {even}\quad and \quad \frac{i-j+2}{2}\, \text {even}, \\ \frac{-(j+1) (i+2\, j+6)}{i+1}, &{} \hbox {if } i,j\, \text {odd}\quad and \quad \frac{i-j+2}{2}\, \text {even}, \\ \frac{(j+1) (-i+2\, j+2)}{i+1}, &{} \hbox {if } i,j\, \text {odd}\quad and \quad \frac{i-j+2}{2}\, \text {odd},\\ 0,&{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

Proof

From the choice of the basis functions in (14), we have

$$\begin{aligned} \displaystyle \frac{d^2\, \psi _{i}(z)}{d\, z^2}= & {} -2\,Y^{*}_{i}(z)+2\,(\ell -2\,z)\,\displaystyle \frac{d \,Y^{*}_{i}(z)}{d\, z}\nonumber \\&+(z\,\ell -z^2)\,\displaystyle \frac{d^2\, Y^{*}_{i}(z)}{d\, z^2}. \end{aligned}$$
(16)

With the aid of Corollaries 1 and 2, Eq. (16) may be rewritten in the following form

$$\begin{aligned} \begin{aligned} \displaystyle \frac{d^2\, \psi _{i}(z)}{d\, z^2}=&-2\,Y^{*}_{i}(z)+2\,\ell \sum _{r=0}^{i-1} M_{r,i} \,Y^{*}_{r}(z) -4 \sum _{r=0}^{i-1} M_{r,i} \,z\,Y^{*}_{r}(z)\\&+\ell \sum _{r=0}^{i-2} \gamma _{r,i} \,z\,Y^{*}_{r}(z) -\sum _{r=0}^{i-2} \gamma _{r,i} \,z^{2}\,Y^{*}_{r}(z). \end{aligned} \end{aligned}$$
(17)

In virtue of the recurrence relation (1), and by replacing t with \((\frac{2\,z}{\ell }-1),\) the following recurrence relation for the shifted sixth-kind Chebyshev polynomials holds:

$$\begin{aligned} z\,Y^{*}_{i}(z)=\frac{\ell }{2}\,[\,Y^{*}_{i+1}(z)+Y^{*}_{i}(z)+\alpha _{i+1}\,Y^{*}_{i-1}(z)\,]. \end{aligned}$$
(18)

Furthermore, making use of the last formula leads to the following formula:

$$\begin{aligned} z^{2}\,Y^{*}_{i}(z)= & {} \frac{\ell ^{2}}{4}\,[\,Y^{*}_{i+2}(z)+2\,Y^{*}_{i+1}(z)+(1+\alpha _{i+1}\nonumber \\&+\alpha _{i+2})\,Y^{*}_{i}(z)+2\,\alpha _{i+1}\,Y^{*}_{i-1}(z)\nonumber \\&+\alpha _{i}\,\alpha _{i+1}\,Y^{*}_{i-2}(z)\,]. \end{aligned}$$
(19)

Substituting relations (18) and (19) into relation (17) and performing some computations, one finds

$$\begin{aligned} \displaystyle \frac{d^2\, \psi _{i}(z)}{d\, z^2}=\sum _{j=0}^{i}\lambda _{j,i}\,Y^{*}_{j}(z), \end{aligned}$$

where

$$\begin{aligned} \lambda _{j,i}=\frac{1}{2^{i-j-1}}\,{\left\{ \begin{array}{ll} \frac{-(i+1)(i+2)}{2}, &{} \hbox {if } i=j, \\ j+2, &{} \hbox {if } i,j\, \text {even}\quad and \quad \frac{i-j+2}{2}\, \text {odd}, \\ -3\,(j+2), &{} \hbox {if } i,j\, \text {even}\quad and \quad \frac{i-j+2}{2}\, \text {even}, \\ \frac{-(j+1) (i+2\, j+6)}{i+1}, &{} \hbox {if } i,j\, \text {odd}\quad and \quad \frac{i-j+2}{2}\, \text {even}, \\ \frac{(j+1) (-i+2\, j+2)}{i+1}, &{} \hbox {if } i,j\, \text {odd}\quad and \quad \frac{i-j+2}{2}\, \text {odd},\\ 0,&{}\text {otherwise}. \end{array}\right. } \end{aligned}$$

\(\square\)
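Theorem 2 can be checked in the same symbolic fashion. The sketch below (again with our own, hypothetical helper names) forms \(\psi _i(z)=z\,(\ell -z)\,Y^{*}_{i}(z)\) directly from the recurrence and compares its second derivative with the expansion (15).

```python
import sympy as sp

z, ell = sp.symbols('z ell', positive=True)

def alpha(i):
    return sp.Rational(i * (i + 1) + (-1)**i * (2 * i + 1) + 1, 4 * i * (i + 1))

def Y_star(i):
    # Shifted sixth-kind polynomial Y*_i(z) on [0, ell].
    x = 2 * z / ell - 1
    y_prev, y_curr = sp.Integer(1), x
    if i == 0:
        return y_prev
    for k in range(2, i + 1):
        y_prev, y_curr = y_curr, sp.expand(x * y_curr - alpha(k) * y_prev)
    return y_curr

def lam(j, i):
    # Coefficients lambda_{j,i} of Theorem 2 (note 1/2^{i-j-1} = 2^{j+1-i}).
    pref = sp.Integer(2)**(j + 1 - i)
    if i == j:
        return pref * sp.Rational(-(i + 1) * (i + 2), 2)
    if (i - j) % 2 != 0:
        return sp.Integer(0)
    if i % 2 == 0:                       # i, j both even
        return pref * ((j + 2) if (i - j) % 4 == 0 else -3 * (j + 2))
    if (i - j) % 4 == 2:                 # i, j both odd
        return pref * sp.Rational(-(j + 1) * (i + 2 * j + 6), i + 1)
    return pref * sp.Rational((j + 1) * (-i + 2 * j + 2), i + 1)

for i in range(0, 7):
    psi = sp.expand(z * (ell - z) * Y_star(i))
    lhs = sp.expand(sp.diff(psi, z, 2))
    rhs = sp.expand(sum(lam(j, i) * Y_star(j) for j in range(i + 1)))
    print(i, sp.simplify(lhs - rhs) == 0)   # should print True for every i
```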

Now, we are in a position to analyze our algorithm to solve the following linear telegraph type equation ( [29]):

$$\begin{aligned}&{\partial _{tt}}u(z,t)+\gamma \,{\partial _{t}}u(z,t)+\delta \,u(z,t)\nonumber \\&={\partial _{zz}}u(z,t)+g(z,t), \quad 0<z\le \ell , \quad 0<t\le \tau , \end{aligned}$$
(20)

subject to the initial conditions:

$$\begin{aligned} u(z,0)=p_{1}(z), \quad \partial _{t}u(z,0)=p_{2}(z), \quad 0<z\le \ell , \end{aligned}$$
(21)

and the homogeneous boundary conditions:

$$\begin{aligned} u(0,t)=u(\ell ,t)=0, \quad 0<t\le \tau , \end{aligned}$$

where \(\gamma\), \(\delta\) are real constants and \(p_{1}(z),\) \(p_{2}(z),\) \(g(z,t)\) are continuous functions. Also, the sign of the source term \(g(z,t)\) indicates whether the medium is heated (\(g(z,t)>0\)) or cooled (\(g(z,t)<0\)) at position z and time t.

Now, consider the following spaces

$$\begin{aligned} \hat{\chi }_{M}&=\mathrm{{span}}\{{\psi _{i}(z)}\,{Y^{*}_{j}(t)}:i,j=0,1,\ldots ,M\}, \\ \bar{\chi }_{M}&=\{u\in \hat{\chi }_{M}:u(0,t)=u(\ell ,t)=0,\, 0<t\le \tau \}. \end{aligned}$$

Then any function \(u_{M}(z,t)\in \bar{\chi }_{M}\) may be written as

$$\begin{aligned} u_{M}(z,t)=\sum _{i=0}^{M}\sum _{j=0}^{M}c_{ij}\,\psi _i(z)\,Y^{*}_{j}(t)=\varvec{\psi }(z)\,\mathbf {C}\,\mathbf {\overline{Y}}^{T}(t), \end{aligned}$$

where

$$\begin{aligned} \varvec{\psi }(z)= & {} [\psi _{0}(z),\psi _{1}(z),...,\psi _{M}(z)],\\ \mathbf {\overline{Y}}(t)= & {} [Y^{*}_{0}(t),Y^{*}_{1}(t),...,Y^{*}_{M}(t)], \end{aligned}$$

and \(\mathbf {C}=(c_{ij})_{0\le i,j\le M}\) is the matrix of unknowns of order \((M+1)\times (M+1)\).

The principal idea for the application of MS6CTM is based on finding \(u_{M}(z,t)\in \bar{\chi }_{M}\) such that

$$\begin{aligned} \begin{aligned}&(\partial _{tt}{u_{M}},\psi _r(z)\,Y^{*}_s(t))_{\hat{w}}+\gamma \,(\partial _{t}{u_{M}},\psi _r(z)\,Y^{*}_s(t))_{\hat{w}}\\&\quad +\delta \,({u_{M}},\psi _r(z)\,Y^{*}_s(t))_{\hat{w}}-(\partial _{zz}{u_{M}},\psi _r(z)\,Y^{*}_s(t))_{\hat{w}}\\&\qquad\quad =(g(z,t),\psi _r(z)\,Y^{*}_s(t))_{\hat{w}}, \quad 0\leqslant r\leqslant M,\, 0\leqslant s\leqslant M-2, \end{aligned} \end{aligned}$$
(22)

where \(\hat{w}=\omega (t)\,\omega (z)\) and \((u,v)_{\hat{w}}=\displaystyle \int _{0}^{\tau }\int _{0}^{\ell }\,\hat{w}\, u\,v\,dz\, dt.\) Now, let us define the following matrices:

$$\begin{aligned}&\mathbf {G}=(g_{rs})=\left( g(z,t),\psi _r(z)\,Y^{*}_s(t)\right) _{\hat{w}},\\&\quad \mathbf {F}=(f_{sj}),\ \mathbf {K}=(k_{sj}),\ \mathbf {A}=(a_{ir}),\\&\quad \mathbf {B}=(b_{ir}),\ \mathbf {D}=(d_{sj}), \end{aligned}$$

where \(0\le i,j,r\le M,\)   \(0\le s\le M-2.\)

With the aid of the above matrices, Eq. (22) can be rewritten as follows:

$$\begin{aligned} \mathbf {B}\,\mathbf {C}\,\mathbf {K}^T+\gamma \,\mathbf {B}\,\mathbf {C}\,\mathbf {F}^T+\delta \,\mathbf {B}\,\mathbf {C}\,\mathbf {D}^T-\mathbf {A}\,\mathbf {C}\,\mathbf {D}^T=\mathbf {G}. \end{aligned}$$
(23)

Assuming that \(\tilde{\varvec{F}}\) and \(\tilde{\varvec{G}}\) are two matrices of order \((n+1)\times (m+1)\) and \((p+1)\times (q+1),\) respectively, the Kronecker product \(\tilde{\varvec{F}}\otimes \tilde{\varvec{G}}\) is the \((n+1)\,(p+1) \times (m+1)\,(q+1)\) block matrix:

$$\begin{aligned} \tilde{\varvec{F}}\otimes \tilde{\varvec{G}}= \begin{bmatrix} f_{00}\,\tilde{\varvec{G}} &{} \dots &{} f_{0m}\,\tilde{\varvec{G}} \\ \vdots &{} \ddots &{} \vdots \\ f_{n0}\,\tilde{\varvec{G}} &{} \dots &{} f_{nm}\,\tilde{\varvec{G}} \end{bmatrix}, \end{aligned}$$

and the vectorization of the matrix \(\tilde{\varvec{F}}\) is a matrix of order \((n+1)(m+1)\times 1\) defined as:

$$\begin{aligned} vec(\tilde{\varvec{F}})=[f_{00},...,f_{n0},f_{01},...,f_{n1},...,f_{0m},...,f_{nm}]^{T}. \end{aligned}$$

Based on the Kronecker product and vectorization (see, [49]), the following property holds

$$\begin{aligned} vec(\tilde{\varvec{F}}\,\tilde{\varvec{G}}\,\tilde{\varvec{H}})=(\tilde{\varvec{H}}^{T}\otimes \tilde{\varvec{F}})\,vec(\tilde{\varvec{G}}), \end{aligned}$$

where \(\tilde{\varvec{F}},\) \(\tilde{\varvec{G}}\) and \(\tilde{\varvec{H}}\) are three matrices.
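This identity, with the column-major vectorization defined above, is easy to confirm numerically; a minimal NumPy check with arbitrary random matrices reads:

```python
import numpy as np

rng = np.random.default_rng(1)
F_, G_, H_ = rng.standard_normal((3, 4)), rng.standard_normal((4, 5)), rng.standard_normal((5, 2))

vec = lambda X: X.flatten(order='F')    # column-major vectorization, as defined above
lhs = vec(F_ @ G_ @ H_)
rhs = np.kron(H_.T, F_) @ vec(G_)
print(np.allclose(lhs, rhs))            # prints True
```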

Thanks to the previous property, Eq. (23) can be written as

$$\begin{aligned}{}[\mathbf {K}\otimes \mathbf {B}+\gamma \,\mathbf {F}\otimes \mathbf {B}+\delta \,\mathbf {D}\otimes \mathbf {B}-\mathbf {D}\otimes \mathbf {A}]vec(\mathbf {C})=vec(\mathbf {G}). \end{aligned}$$
(24)

In addition, the matrix form of the initial conditions becomes

$$\begin{aligned} \begin{aligned}&\varvec{\psi }\left( \frac{i+1}{M+2}\right) \,\mathbf {C}\,\mathbf {\overline{Y}}^{T}(0)-p_{1}\left( \frac{i+1}{M+2}\right) =0,\quad i=0,1,...M,\\ {}&\varvec{\psi }\left( \frac{i+1}{M+2}\right) \,\mathbf {C}\,\varvec{\chi }^{T}(0)-p_{2}\left( \frac{i+1}{M+2}\right) =0,\quad i=0,1,...M, \end{aligned} \end{aligned}$$
(25)

where \(\varvec{\chi }(t)=\left( \frac{d\,Y^{*}_{0}(t)}{d\,t},\frac{d\,Y^{*}_{1}(t)}{d\,t},...,\frac{d\,Y^{*}_{M}(t)}{d\,t}\right) .\)

Now, Eqs. (24) and (25) generate a linear system of equations in the unknown expansion coefficients \(c_{ij}\) of dimension \((M +1)^2\). This system may be solved via the Gaussian elimination technique.
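The following NumPy sketch illustrates only the structure of this step: the matrices below are random placeholders with the correct shapes (in the actual method, they carry the entries given in Theorem 3 and the inner products involving g, \(p_1\) and \(p_2\)); the point is how the tau block (24) and the initial-condition rows (25) are stacked into one square system for \(vec(\mathbf {C})\).

```python
import numpy as np

rng = np.random.default_rng(0)
M, gamma_, delta_ = 6, 1.0, 1.0

# Placeholder matrices with the shapes used in Eq. (23).
B = rng.standard_normal((M + 1, M + 1))   # b_{ir} = (psi_i, psi_r)_{w(z)}
A = rng.standard_normal((M + 1, M + 1))   # a_{ir} = (psi_i'', psi_r)_{w(z)}
D = rng.standard_normal((M - 1, M + 1))   # d_{sj} = (Y*_s, Y*_j)_{w(t)}
F = rng.standard_normal((M - 1, M + 1))   # f_{sj} = (Y*_s, dY*_j/dt)_{w(t)}
K = rng.standard_normal((M - 1, M + 1))   # k_{sj} = (Y*_s, d^2Y*_j/dt^2)_{w(t)}
G = rng.standard_normal((M + 1, M - 1))   # right-hand side inner products

vec = lambda X: X.flatten(order='F')      # column-major vectorization

# Tau block, Eq. (24): (M+1)(M-1) equations.
tau_rows = np.kron(K, B) + gamma_ * np.kron(F, B) + delta_ * np.kron(D, B) - np.kron(D, A)
tau_rhs = vec(G)

# Initial-condition block, Eq. (25): 2(M+1) equations, a pair for each point z_i = (i+1)/(M+2).
psi_at = rng.standard_normal((M + 1, M + 1))   # row i holds psi(z_i) (placeholder values)
Ybar0 = rng.standard_normal((1, M + 1))        # Y*_j(0), j = 0..M (placeholder values)
chi0 = rng.standard_normal((1, M + 1))         # dY*_j/dt at t = 0 (placeholder values)
p1 = rng.standard_normal(M + 1)                # p_1(z_i)
p2 = rng.standard_normal(M + 1)                # p_2(z_i)

# psi(z_i) C Ybar(0)^T = (Ybar(0) kron psi(z_i)) vec(C), one row per i.
ic_rows = np.vstack([np.kron(Ybar0, psi_at), np.kron(chi0, psi_at)])
ic_rhs = np.concatenate([p1, p2])

# Square system of dimension (M+1)^2, solved by Gaussian elimination.
system = np.vstack([tau_rows, ic_rows])
rhs = np.concatenate([tau_rhs, ic_rhs])
C = np.linalg.solve(system, rhs).reshape((M + 1, M + 1), order='F')
print(system.shape, C.shape)   # ((M+1)**2, (M+1)**2) and (M+1, M+1)
```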

We now present the following theorem, in which the nonzero elements of the matrices \(\mathbf {F},\) \(\mathbf {K},\) \(\mathbf {A},\) \(\mathbf {B},\) and \(\mathbf {D}\) are explicitly given.

Theorem 3

Let \(\psi _i(z)\) be the basis defined in (14), and let

$$\begin{aligned} \begin{aligned}&b_{ir}=(\psi _{i}(z),\psi _{r}(z))_{\omega (z)},\quad d_{sj}=(Y^{*}_{s}(t),Y^{*}_{j}(t))_{\omega (t)},\\&\quad f_{sj}=\left( Y^{*}_{s}(t),\frac{dY^{*}_{j}(t)}{dt}\right) _{\omega (t)},\\&\quad k_{sj}=\left( Y^{*}_{s}(t),\frac{d^{2}Y^{*}_{j}(t)}{dt^{2}}\right) _{\omega (t)},\quad a_{ir}=\left( \frac{d^{2}\psi _{i}(z)}{dz^{2}},\psi _{r}(z)\right) _{\omega (z)}. \end{aligned} \end{aligned}$$

Then the nonzero elements \(b_{ir},\) \(d_{sj},\) \(f_{sj},\) \(k_{sj}\) and \(a_{ir}\) are given by

$$\begin{aligned} b_{ir}=&{\left\{ \begin{array}{ll} \displaystyle \frac{5\,\pi \, \ell ^{8}}{8192},\, \hbox {if }\,i=r=0, \\ \displaystyle \frac{-\pi \, \ell ^{8}}{8192},\, \hbox {if }\,i=0,\,r=2, \\ \displaystyle \frac{3\,\pi \, \ell ^{8}}{16384},\, \hbox {if }\,i=r=1, \\ \displaystyle \frac{\ell ^{4}}{16}\,[\,h_{\ell ,i+2}+(1-\alpha _{i+1}-\alpha _{i+2})^{2}\,h_{\ell ,i}+\alpha _{i}^{2}\alpha _{i+1}^{2}\,h_{\ell ,i-2}\,],\\ \quad \hbox {if }\,i=r, i\ne 0,1, \\ \displaystyle \frac{-\,\ell ^{4}}{16}\,[\,(1-\alpha _{i+3}-\alpha _{i+4})\,h_{\ell ,i+2}+\alpha _{i+2}\alpha _{i+3}\,(1-\alpha _{i+1}-\alpha _{i+2})\,h_{\ell ,i}\,],\\ \quad \hbox {if }\,i=r-2,\, i\ne 0, \\ \displaystyle \frac{-\,\ell ^{4}}{16}\,[\,(1-\alpha _{i+1}-\alpha _{i+2})\,h_{\ell ,i}+\alpha _{i}\alpha _{i+1}\,(1-\alpha _{i-1}-\alpha _{i})\,h_{\ell ,i-2}\,],\\ \quad \hbox {if }\,i=r+2, \\ \displaystyle \frac{\ell ^{4}}{16}\,\alpha _{i+4}\alpha _{i+5}\,h_{\ell ,i+2},\, \hbox {if }\,i=r-4,\\ \frac{\ell ^{4}}{16}\,\alpha _{i}\alpha _{i+1}\,h_{\ell ,i-2},\, \hbox {if }\,i=r+4, \end{array}\right. } \end{aligned}$$
(26)
$$\begin{aligned} d_{sj}=\,&h_{\tau ,j}, \quad if \quad s=j,\nonumber \\ f_{sj}=\,&M_{s,j}\,h_{\tau ,s}, \quad if \quad j=s+k,\quad k\ge 1,\nonumber \\ k_{sj}=\,&\gamma _{s,j}\,h_{\tau ,s}, \quad if \quad j=s+k,\quad k\ge 2,\nonumber \\ \end{aligned}$$
$$\begin{aligned} a_{ir}=&\sum _{j=0}^{i}\lambda _{j,i}\, {\left\{ \begin{array}{ll} \displaystyle \frac{\pi \, \ell ^{6}}{256}, &{} \hbox {if }\,j=r=0, \\ \frac{\ell ^{2}}{4}\,[\,(1-\alpha _{r+1}-\alpha _{r+2})\,h_{\ell ,r}\,], &{} \hbox {if }\,j=r,\, j\ne 0, \\ \frac{-\,\ell ^{2}}{4}\,h_{\ell ,r+2}, &{} \hbox {if }\,j=r+2,\\ \frac{-\,\ell ^{2}}{4}\,\alpha _{r}\,\alpha _{r+1}\,h_{\ell ,r-2}, &{} \hbox {if }\,j=r-2. \end{array}\right. } \end{aligned}$$
(27)

Proof

Using Eqs. (15), (18) and (19) , we have

$$\begin{aligned} \begin{aligned} b_{ir}=\,&\left( \,\frac{\ell ^{2}}{4}\,[\,-\,Y^{*}_{i+2}(z)+\left( 1-\alpha _{i+1}-\alpha _{i+2}\right) \,Y^{*}_{i}(z)\right. \\&\quad -\alpha _{i}\,\alpha _{i+1}\,Y^{*}_{i-2}(z)\,],\\&\,\frac{\ell ^{2}}{4}\,[\,-\,Y^{*}_{r+2}(z)+(1-\alpha _{r+1}-\alpha _{r+2})\,Y^{*}_{r}(z)\\&\quad \left. -\alpha _{r}\,\alpha _{r+1}\,Y^{*}_{r-2}(z)\,]\,\right.\Big) _{\omega (z)}, \end{aligned} \end{aligned}$$

and

$$\begin{aligned} a_{ir}= & {} \left( \,\sum _{j=0}^{i}\lambda _{j,i}\,Y^{*}_{j}(z)\, ,\,\frac{\ell ^{2}}{4}\,[\,-\,Y^{*}_{r+2}(z)+(1-\alpha _{r+1}-\alpha _{r+2})\,\right. \\&\left.\qquad \times Y^{*}_{r}(z)-\alpha _{r}\,\alpha _{r+1}\,Y^{*}_{r-2}(z)\,]\,\right.\Big) _{\omega (z)}. \end{aligned}$$

Now, making use of the orthogonality relation (3), we get the nonzero elements (26) and (27).

Similarly, the elements of the matrices \(\mathbf {D},\) \(\mathbf {F}\) and \(\mathbf {K}\) may be obtained with the aid of Eqs. (8), (11) and the orthogonality relation (3). \(\square\)

Treatment of the non-homogeneous boundary conditions

Consider the hyperbolic telegraph equation (20) subject to the initial conditions (21) and the non-homogeneous boundary conditions

$$\begin{aligned} u(0,t)=q_{1}(t), \quad u(\ell ,t)=q_{2}(t), \quad 0<t\le \tau . \end{aligned}$$

With the aid of the following transformation:

$$\begin{aligned} y(z,t)=u(z,t)-\left( 1-\frac{z}{\ell }\right) \,u(0,t)-\frac{z}{\ell }\,u(\ell ,t), \end{aligned}$$

Eq. (20) can be turned into the following one:

$$\begin{aligned}&{\partial _{tt}}y(z,t)+\gamma \,{\partial _{t}}y(z,t)+\delta \,y(z,t)\\&\quad ={\partial _{zz}}y(z,t)+f(z,t), \quad 0<z\le \ell , \quad 0<t\le \tau . \end{aligned}$$

subject to the initial conditions:

$$\begin{aligned} \begin{aligned}&y(z,0)=p_{1}(z)-\left( 1-\frac{z}{\ell }\right) \,q_{1}(0)-\frac{z}{\ell }\,q_{2}(0), \\&\partial _{t}y(z,0)=p_{2}(z)-\partial _{t}\left( \left( 1-\frac{z}{\ell }\right) \,q_{1}(t)+\frac{z}{\ell }\,q_{2}(t)\right) _{t=0},\\&\quad 0<z\le \ell , \end{aligned} \end{aligned}$$

and the homogeneous boundary conditions:

$$\begin{aligned} y(0,t)=0, \quad y(\ell ,t)=0, \quad 0<t\le \tau , \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} f(z,t)&=g(z,t)-\partial _{tt}\left( \left( 1-\frac{z}{\ell }\right) \,q_{1}(t)+\frac{z}{\ell }\,q_{2}(t)\right) \\&\quad-\gamma \,\partial _{t}\left( \left( 1-\frac{z}{\ell }\right) \,q_{1}(t)+\frac{z}{\ell }\,q_{2}(t)\right) \\&\quad-\delta \,\left( \left( 1-\frac{z}{\ell }\right) \,q_{1}(t)+\frac{z}{\ell }\,q_{2}(t)\right) . \end{aligned} \end{aligned}$$
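A small sympy sketch of this homogenization step, written with generic symbols (the variable name lift and the script itself are ours, given as an illustration only), reads:

```python
import sympy as sp

z, t, ell, gamma_, delta_ = sp.symbols('z t ell gamma delta')
q1, q2 = sp.Function('q1')(t), sp.Function('q2')(t)
g = sp.Function('g')(z, t)

# Linear-in-z lift carrying the boundary data q1(t) at z = 0 and q2(t) at z = ell.
lift = (1 - z / ell) * q1 + (z / ell) * q2

# Shifted source term f(z, t) for the homogenized unknown y = u - lift;
# no d^2/dz^2 term appears because the lift is linear in z.
f = g - sp.diff(lift, t, 2) - gamma_ * sp.diff(lift, t) - delta_ * lift

# Shifted initial data.
y0 = sp.Function('p1')(z) - lift.subs(t, 0)
y0_t = sp.Function('p2')(z) - sp.diff(lift, t).subs(t, 0)

print(sp.simplify(f))
```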

Investigation of the convergence and error analysis

This section investigates the convergence and error analysis of the proposed double polynomial series expansion in depth. Several required lemmas are employed in this investigation. Furthermore, two theorems will be presented and proved. The first theorem estimates the expansion coefficients, whereas the second theorem estimates the truncation error.

Lemma 3

[28] The following inequality holds for \(Y^{*}_{j}(t)\):

$$\begin{aligned} |Y^{*}_{j}(t)|< \frac{j^{2}}{2^{j}}, \quad t\in [0,\tau ], \quad \forall \ j>1, \end{aligned}$$

where \(|Y^{*}_{0}(t)|=|Y^{*}_{1}(t)|\leqslant 1.\)

Lemma 4

The following estimate holds for \(\psi _{i}(z)\):

$$\begin{aligned} |\psi _{i}(z)|< \frac{{\ell }^{2}i^{2}}{2^{i}}, \quad z\in [0,\ell ], \quad \forall \ i>1, \end{aligned}$$

where \(|\psi _{0}(z)|=|\psi _{1}(z)|\leqslant \frac{\ell ^2}{4}.\)

Proof

With the aid of Eq. (14) and Lemma 3, one gets

$$\begin{aligned} \begin{aligned} |\psi _{i}(z)|&=\left| z\,(\ell -z)\,Y^{*}_{i}(z)\right| \\ {}&\le \frac{\ell ^2}{4}|Y^{*}_{i}(z)|\\ {}&<\frac{{\ell }^{2}i^{2}}{2^{i+2}}, \end{aligned} \end{aligned}$$

and hence

$$\begin{aligned} |\psi _{i}(z)|< \frac{{\ell }^{2}i^{2}}{2^{i}}, \quad \forall \ i>1. \end{aligned}$$

Also

$$\begin{aligned} |\psi _{0}(z)|=|\psi _{1}(z)|\leqslant \frac{\ell ^2}{4}. \end{aligned}$$

\(\square\)

Theorem 4

[28] Assume that a function \(f(t)\in L_{w}^{2}[0,\tau ],\, w=(2\,t-\tau )^2\,\sqrt{t\,\tau -t^2}\), satisfies \(|f^{(3)}(t)|\le L\) for some \(L>0\), and assume that it has the following expansion:

$$\begin{aligned} f(t)=\sum _{i=0}^{\infty }a_{i}\,Y^{*}_{i}(t), \end{aligned}$$
(28)

Then the series converges uniformly to f(t), and the expansion coefficients in (28) satisfy

$$\begin{aligned} |a_{i}|\lesssim \frac{1}{i^3}, \quad \forall \ i>3, \end{aligned}$$

where the expression \(A\lesssim B\) means that there exists a generic constant c, independent of M and of the function under consideration, such that \(A\le c\,B.\)

Theorem 5

A function \(u(z,t)=z\,(\ell -z)\,g_{1}(z)\,g_{2}(t)\), with \(|g_{1}^{(3)}(z)|\le M_{1}\) and \(|g_{2}^{(3)}(t)|\le M_{2}\) for some positive constants \(M_{1},\, M_{2}\), can be expanded as:

$$\begin{aligned} u(z,t)=\sum _{i=0}^{\infty }\sum _{j=0}^{\infty }c_{ij}\,\psi _i(z)\,Y^{*}_{j}(t), \end{aligned}$$
(29)

where

$$\begin{aligned} c_{ij}=\frac{1}{h_{\ell ,i}\,h_{\tau ,j}}\,\int _{0}^{\tau }\int _{0}^{\ell }\hat{w}\,u(z,t)\,\psi _i(z)\,Y^{*}_{j}(t)\,d\,z\,d\,t. \end{aligned}$$

The series in (29) converges uniformly to \(u(z,t)\), and the expansion coefficients in (29) satisfy

$$\begin{aligned} |c_{ij}|\lesssim \frac{1}{i^{3}j^{3}}, \quad \forall \ i,j>3. \end{aligned}$$

Proof

According to the hypotheses of Theorem 5 and using the two substitutions:

$$\begin{aligned} \frac{2\,z}{\ell }-1=\cos {\gamma _{1}}, \quad \frac{2\,t}{\tau }-1=\cos {\gamma _{2}}, \end{aligned}$$

we get

$$\begin{aligned} \begin{aligned} c_{ij}&=\frac{1}{4\,h_{i}}\,\int _{0}^{\pi }g_{1}\left( \frac{\ell }{2}\,(1+\cos {\gamma _{1}})\right) \,Y_{i}(\cos {\gamma _{1}})\,\sin ^{2}{(2\,\gamma _{1})}\,d\gamma _{1}\\ {}&\quad\times \frac{1}{4\,h_{j}} \int _{0}^{\pi }g_{2}\left( \frac{\tau }{2}\,(1+\cos {\gamma _{2}})\right) \,Y_{j}(\cos {\gamma _{2}})\,\sin ^{2}{(2\,\gamma _{2})}\,d\gamma _{2}. \end{aligned} \end{aligned}$$
(30)

Following the same steps as in the proof of Theorem 4 in [28], the desired result can be obtained. \(\square\)

Theorem 6

If \(u(z,t)\) satisfies the hypotheses of Theorem 5, and if \(u_{M}(z,t)=\displaystyle \sum\nolimits _{i=0}^{M}\displaystyle \sum\nolimits _{j=0}^{M}c_{ij}\,\psi _i(z)\,Y^{*}_{j}(t),\) then the following truncation error estimate is satisfied

$$\begin{aligned} \left| u-u_{M}\right| \lesssim \frac{1}{2^M}. \end{aligned}$$
(31)

Proof

The truncation error may be written as:

$$\begin{aligned} \begin{aligned}&\left| u-u_M\right| \\&\quad =\left| \sum _{i=0}^{\infty }\sum _{j=0}^{\infty }c_{ij}\,\psi _i(z)\,Y^{*}_{j}(t)-\sum _{i=0}^{M}\sum _{j=0}^{M}c_{ij}\,\psi _i(z)\,Y^{*}_{j}(t)\right| \\&\quad \le \sum _{j=M+1}^{\infty }(\,\left| c_{0j}|\,|\psi _0(z)|+|c_{1j}|\,|\psi _1(z)|+|c_{2j}|\,|\psi _2(z)|\right. \\&\qquad \left. +|c_{3j}|\,|\psi _3(z)|\,\right) \,|Y^{*}_{j}(t)|\\&\qquad +\sum _{i=M+1}^{\infty }(\,\left| c_{i0}|\,|Y^{*}_{0}(t)|+|c_{i1}|\,|Y^{*}_{1}(t)|\right. \\&\qquad +\left. |c_{i2}|\,|Y^{*}_{2}(t)|+|c_{i3}|\,|Y^{*}_{3}(t)|\,\right) \,|\psi _i(z)|\\&\qquad +\sum _{i=4}^{M}\sum _{j=M+1}^{\infty }|c_{ij}|\,|\psi _i(z)|\,|Y^{*}_{j}(t)|\\&\qquad +\sum _{i=M+1}^{\infty }\sum _{j=4}^{\infty }|c_{ij}|\,|\psi _i(z)|\,|Y^{*}_{j}(t)|. \end{aligned} \end{aligned}$$
(32)

With the aid of Eq. (30), one can write

$$\begin{aligned} \begin{aligned} c_{0j}&=\frac{1}{4\,h_{0}}\,\int _{0}^{\pi }g_{1}\left( \frac{\ell }{2}\,(1+\cos {\gamma _{1}})\right) \,Y_{0}(\cos {\gamma _{1}})\,\sin ^{2}{(2\,\gamma _{1})}\,d\gamma _{1}\\ {}&\quad \times \frac{1}{4\,h_{j}} \int _{0}^{\pi }g_{2}\left( \frac{\tau }{2}\,(1+\cos {\gamma _{2}})\right) \,Y_{j}(\cos {\gamma _{2}})\,\sin ^{2}{(2\,\gamma _{2})}\,d\gamma _{2}. \end{aligned} \end{aligned}$$

By steps similar to those used in Theorem 4 of [28], one finds

$$\begin{aligned} |c_{0j}|\lesssim \frac{1}{j^{3}}. \end{aligned}$$
(33)

Similarly, we can prove that

$$\begin{aligned} |c_{1j}|\lesssim \frac{1}{j^{3}},\quad |c_{2j}|\lesssim \frac{1}{j^{3}},\quad |c_{3j}|\lesssim \frac{1}{j^{3}},\quad \forall \ j>3, \end{aligned}$$

and

$$\begin{aligned} |c_{i0}|\lesssim \frac{1}{i^{3}},\quad |c_{i1}|\lesssim \frac{1}{i^{3}},\quad |c_{i2}|\lesssim \frac{1}{i^{3}},\quad |c_{i3}|\lesssim \frac{1}{i^{3}},\quad \forall \ i>3. \end{aligned}$$
(34)

Substituting Eqs. (33)–(34) into Eq. (32) and using Lemma 3 and the integral test (see, [50]) lead to the estimate

$$\begin{aligned} \left| u-u_{M}\right| \lesssim \frac{1}{2^M}. \end{aligned}$$

\(\square\)

Remark 2

As shown in Theorem 6, we find that the truncation error estimate (31) leads to an exponential rate of convergence.

Illustrative examples

In this section, some numerical examples are presented to test the efficiency of the MS6CTM for solving the hyperbolic telegraph equation. In this respect, three problems are solved numerically using our proposed method. In addition, to show the advantages of our method, comparisons with some other numerical algorithms used for solving the linear one-dimensional hyperbolic telegraph-type equation are displayed.

Example 1

[34] Consider the telegraph equation

$$\begin{aligned}&{\partial _{tt}}u(z,t)+2\,\gamma \,{\partial _{t}}u(z,t)+\delta ^{2}\,u(z,t)-{\partial _{zz}}u(z,t)\\&\quad =\delta ^{2} \cos (t)\,\sin (z) - 2\,\gamma \,\sin (t)\,\sin (z), \end{aligned}$$

subject to the initial conditions:

$$\begin{aligned} u(z,0)=\sin (z), \quad \partial _{t}u(z,0)=0, \quad 0<z\le \ell , \end{aligned}$$

and the non-homogeneous boundary conditions:

$$\begin{aligned} u(0,t)=0, \quad u(1,t)=\sin (1)\,\cos (t), \quad 0<t\le \tau , \end{aligned}$$

where the exact solution is: \(u(z,t)=\sin (z)\,\cos (t).\)
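As a quick consistency check (a sympy snippet of ours, not part of the proposed scheme), one may verify that the stated exact solution indeed satisfies the equation of Example 1:

```python
import sympy as sp

z, t, gamma_, delta_ = sp.symbols('z t gamma delta')
u = sp.sin(z) * sp.cos(t)   # stated exact solution of Example 1

# Residual of the telegraph equation of Example 1 with its right-hand side.
residual = (sp.diff(u, t, 2) + 2 * gamma_ * sp.diff(u, t) + delta_**2 * u - sp.diff(u, z, 2)
            - (delta_**2 * sp.cos(t) * sp.sin(z) - 2 * gamma_ * sp.sin(t) * sp.sin(z)))
print(sp.simplify(residual))   # prints 0
```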

Table 1 presents the absolute error (AE) for \(M=10,\ell =\tau =1\) and for various values of \(\gamma ,\delta\) and t. Table 2 displays a comparison of the maximum absolute error (MAE) for the case corresponding to \(M=8,\ell =\tau =1\), and for various values of \(\gamma\) and \(\delta\), while Table 3 displays the AE for the case corresponding to \(M=13,\gamma =\delta =1,\ell =1, \tau =2.\) Additionally, Fig. 1 shows the Log10(AE) for different values of M. The computational time (CPU time) of Example 1 for different values of M is presented in Tables 4 and 5. Table 6 presents the AE for \(M=18,\ell =1, \tau =10\) and for various values of \(\gamma ,\delta\) and t. Figure 2 shows the \(L_{\infty }\) error for various values of \(\gamma ,\delta\) at \(M=18,\ell =1, \tau =10\). The results in Tables 1, 2, 3, 6 and Figs. 1, 2 show that, with only a few terms of the proposed shifted sixth-kind Chebyshev expansion, our numerical results are highly accurate. This demonstrates the advantage of our method compared with some other numerical methods.

Table 1 The AE for Example 1
Table 2 Comparison of MAE for Example 1
Table 3 The AE for Example 1
Fig. 1 The Log10(AE) of Example 1 for different values of M
Table 4 CPU time (seconds) of Example 1
Table 5 CPU time (seconds) of Example 1
Table 6 The AE for Example 1
Fig. 2 The \(L_{\infty }\) Error of Example 1

Example 2

[51] Consider the telegraph equation

$$\begin{aligned} \begin{aligned}&{\partial _{tt}}u(z,t)+{\partial _{t}}u(z,t)+u(z,t)\\&\quad ={\partial _{zz}}u(z,t)+(z^{2}-z^{3})(2\,\cos (2\,t)+\sin (2\,t)+\sin ^{2}t)\\&\qquad +(6\,z-2)\,\sin ^{2}t, \end{aligned} \end{aligned}$$

governed by the following initial and boundary conditions:

$$\begin{aligned} u(z,0)=\,&\partial _{t}u(z,0)=0, \quad 0<z\le \ell ,\\ u(0,t)=\,&u(1,t)=0, \quad 0<t\le \tau , \end{aligned}$$

where the exact solution is: \(u(z,t)=z^{2}\,(1-z)\,\sin ^{2}(t).\)

In Table 7, a comparison of the AE is listed for the case corresponding to \(\ell =1, \tau =1, M=10\), while in Table 8, the AE for the case corresponding to \(M=14, \ell =1, \tau =2\) is displayed. In addition, Fig. 3 illustrates the Log10(AE) for different values of M. The CPU time of Example 2 for different values of M is presented in Table 9. We can see from Tables 7, 8 and Fig. 3 that the proposed method is appropriate and effective.

Table 7 Comparison of AE for Example 2
Table 8 The AE for Example 2
Fig. 3 The Log10(AE) of Example 2 for different values of M
Table 9 CPU time (seconds) of Example 2

Example 3

[52] Consider the telegraph equation

$$\begin{aligned}&{\partial _{tt}}u(z,t)+\gamma \,{\partial _{t}}u(z,t)+\delta \,u(z,t)-{\partial _{zz}}u(z,t)\\&\quad =e^{-t}\left( 2\,t^{2}+z\,(z-1)(-2-2\,t\,(\gamma -2)\right. \\&\qquad \left. -t^{2}\,(\delta -\gamma +1))\right) , \end{aligned}$$

governed by the following initial and boundary conditions:

$$\begin{aligned} u(z,0)=\,&\partial _{t}u(z,0)=0, \quad 0<z\le \ell ,\\ u(0,t)=\,&u(\ell ,t)=0, \quad 0<t\le \tau , \end{aligned}$$

where the exact solution is: \(u(z,t)=t^{2}\,(z-z^{2})\,e^{-t}.\)

Table 10 displays the absolute errors for the case corresponding to \(\ell =\tau =1,\ \gamma =\delta =1\) and for the two values \(M=10\) and \(M=12\). Table 11 presents the AE for the case corresponding to \(\ell =1, \tau =5\), \(\gamma =6, \delta =9, M=16\). Additionally, Fig. 4 illustrates the Log10(AE) for different values of M. The CPU time of Example 3 for different values of M is shown in Table 12. We can see from the tabulated AEs of Tables 10, 11 and Fig. 4 that the proposed method is suitable and powerful for solving the telegraph type equation.

Table 10 The AE of Example 3
Fig. 4 The Log10(AE) of Example 3 for different values of M
Table 11 The AE for Example 3
Table 12 CPU time (seconds) of Example 3

Concluding remarks

We have presented in this paper a numerical algorithm for approximating the solution of the hyperbolic telegraph-type problem based on the spectral tau approach. To derive our numerical scheme, basis functions built from suitable combinations of the shifted sixth-kind Chebyshev polynomials were introduced and utilized in a double expansion. The first- and second-order derivatives of the proposed basis functions were explicitly expressed using reduction formulas of certain terminating hypergeometric functions of unit argument; Zeilberger's algorithm is pivotal in this reduction. These derivative formulas, in conjunction with Kronecker algebra, serve in the discretization of the hyperbolic telegraph equation governed by its underlying conditions. The suggested algorithm is applicable and well-suited to computer programming; as a result, it does not require a lot of CPU time or a large number of calculations. The convergence and error analysis of the double expansion were thoroughly studied. The numerical examples in Sect. 5 show that the suggested approach is appropriate and very effective, even for large intervals. Furthermore, we expect that our proposed method is suitable for solving two- and three-dimensional problems. As future work, we aim to employ the theoretical results developed in this paper, along with suitable spectral methods, to treat numerically some other types of PDEs and fractional PDEs. All codes were written and debugged in Mathematica 11 on an HP Z420 Workstation, Processor: Intel(R) Xeon(R) CPU E5-1620 - 3.6 GHz, 16 GB RAM DDR3, and 512 GB storage.