1 Introduction

Partial differential equations (PDEs) have important applications in various physical, chemical, and biological processes. These processes are mostly modeled in the form of heat, Laplace, and wave-type differential equations. Finding analytical solutions of these PDEs is not an easy task. Therefore, researchers have been trying to develop accurate and efficient numerical methods for solving them. Here we consider a general heat-type time-dependent PDE of the following form:

$$\begin{aligned} {\partial _t}{Y(\xi ,\eta ,t)}+\alpha Y(\xi ,\eta ,t)\nabla Y(\xi ,\eta ,t)+\gamma ~ Y(\xi ,\eta ,t)=\beta \Delta Y(\xi ,\eta ,t)+g(\xi ,\eta ,t), \quad \xi , \eta \in \Gamma , \quad t \in [0, T], \end{aligned}$$
(1)

with initial and boundary conditions

$$\begin{aligned} Y(\xi ,\eta ,0)=\phi _{1}(\xi ,\eta ), \quad \xi , \eta \in \Gamma \quad \text{ and }\quad Y(\xi ,\eta ,t)=\phi _2(\xi ,\eta ,t), \quad \xi , \eta \in \partial \Gamma , \end{aligned}$$

where g is a source term depending on space and time, \(\Gamma \) and \(\partial \Gamma \) are the spatial domain and its boundary, respectively, \(\alpha \) and \(\gamma \) are constants, \(\nabla =\partial _{\xi }+\partial _{\eta }\), and \(\Delta =\partial _{\xi \xi }+\partial _{\eta \eta }\) denotes the Laplacian operator. When \(\alpha =0\) and \(\beta =1\), Eq. (1) becomes the heat equation, which is used to model thermal conduction in a solid body. This equation also models other real-world phenomena: in the Black–Scholes model it is applied to price estimation, and in chemical processes it appears in the form of the diffusion equation [18]. In the form of an advection–diffusion equation it describes heat transfer in a solid body and the dissipation of salt in groundwater. In image processing it is used to handle images at different scales [28]. The solution of this equation plays a significant role in examining the behavior of such physical processes. Many researchers have applied different numerical methods to the heat equation. A comparative study between the classical finite difference and finite element methods was carried out by Benyam Mebrate [25]. B-spline and finite element methods have been used by Darbel et al. [12]. Hooshmandasl et al. [18] used wavelet methods to solve one-dimensional heat equations, while Dehghan [13] used a second-order finite difference scheme for two-dimensional heat equations. In [34], the authors applied the Haar wavelet method to two-dimensional fractional reaction–diffusion equations. In [2], the authors applied fifth-kind Chebyshev polynomials for the spectral solution of the convection–diffusion equation.

When \(\alpha =1\) and \(\beta =1/Re\) (where Re is the Reynolds number), Eq. (1) becomes the well-known Burgers equation, which arises in turbulence, shock wave theory, viscous fluid flow, gas dynamics, cosmology, traffic flow, quantum field theory, and heat conduction [27]. Low-kinematic-viscosity shocks and the relation between the cellular and large-scale structure of the universe have been described by one- and three-dimensional Burgers equations. When traffic is treated as a one-dimensional incompressible fluid, the density wave in traffic flow, which changes from a non-uniform to a uniform distribution, is described by the Burgers equation [10]. The equation was first introduced by Bateman in the context of viscous fluid flow and was later extended by Burgers in 1948 to examine turbulence phenomena, which is why it bears his name [27]. Due to its wide applications, many numerical methods have been implemented to study the behavior of the model. The one-dimensional Burgers equation has been solved using various techniques in [23, 35]. Mittal and Jiwari [27] implemented a differential quadrature method for Burgers-type equations. Similarly, El-sayed and Kaya [20] solved the two-dimensional Burgers equation using a decomposition method. Liao [24] used a fourth-order finite difference technique to study the two-dimensional Burgers equation. Oruc [33] applied a meshless pseudo-spectral method to modified Burgers equations. The same author in [30] studied the three-dimensional fractional convection–diffusion equation using a meshless method based on multiple-scale polynomials. In this work, we study the aforementioned models using a combination of Lucas and Fibonacci polynomials. These polynomials can be obtained directly as a special case of the work done in [4, 6]. They are non-orthogonal and require neither domain nor problem transformation, which is an important feature of the proposed scheme.
Moreover, higher-order derivatives of the unknown functions can be easily approximated via Lucas and Fibonacci polynomials. The method is also straightforward and produces better accuracy with a smaller number of nodal points. Many researchers have applied these polynomials to fractional differential equations (FDEs); for example, Abd-Elhameed and Youssri [3] applied Lucas polynomials in the Caputo sense to FDEs, and they also computed the solution of fractional pantograph differential equations using generalized Lucas polynomials [39]. Other polynomials applied to the approximation of FDEs are studied in [5, 7]. Cetin [11] used a Lucas polynomial approach to study a system of higher-order differential equations, whereas Bayku [9] applied a hybrid Taylor–Lucas collocation technique to delay differential equations. Mostefa [29] obtained the solution of an integro-differential equation using a Lucas series. Farshid et al. [26] applied Fibonacci polynomials to the solution of Volterra–Fredholm integral equations. Omer Oruc [31, 32] applied a combined Lucas and Fibonacci polynomial approach to the numerical solution of evolutionary equations for the first time. Recently, Ali et al. [8, 15, 17] applied Lucas polynomials coupled with finite differences and obtained accurate solutions for various classes of one- and two-dimensional PDEs. In this paper, we implement the proposed method for one- and two-dimensional Burgers and heat equations. The simulations are carried out in MATLAB 2013 on an Intel Core i7 machine with 4 GB RAM. The error bound of the scheme is also investigated. The paper is organized as follows: In Sect. 2 we give basic definitions and important concepts used in this work. In Sect. 3 the methodology of the proposed scheme is formulated. Error analysis of the method is described in Sect. 4. Numerical experiments are presented in Sect. 5, followed by the conclusion.

2 Basic concepts and definition

In this section, we describe some necessary definitions and concepts required for our subsequent development.

Definition 2.1

[22] Lucas polynomials are a generalization of the well-known Lucas numbers and are generated by the following linear recurrence:

$$\begin{aligned} L_{k}(\xi )=\xi L_{k-1}(\xi )+L_{k-2}(\xi ), \quad \text{ for }\quad k \ge 2, \end{aligned}$$
(2)

with \(L_{0}(\xi )=2\) and \(L_{1}(\xi )=\xi \). Setting \({\xi }=1\) in Eq. (2) gives the Lucas numbers.

Definition 2.2

[22] Fibonacci polynomials are generated by the following linear recurrence:

$$\begin{aligned} \mathcal {F}_{k}(\xi )=\xi \mathcal {F}_{k-1}(\xi )+\mathcal {F}_{k-2}(\xi ) \quad \text{ for } \quad k\ge {2}, \end{aligned}$$
(3)

where \(\mathcal {F}_{0}(\xi )=0\) and \( \mathcal {F}_{1}(\xi )=1\). Setting \({\xi }=1\) in Eq. (3) generates the Fibonacci numbers.
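Both recurrences can be evaluated with a short loop. A minimal Python sketch (the function names are ours, not from the paper):

```python
# Direct evaluation of L_k(xi) and F_k(xi) from the recurrences (2) and (3).
def lucas(k, xi):
    a, b = 2.0, xi  # L_0, L_1
    if k == 0:
        return a
    for _ in range(k - 1):
        a, b = b, xi * b + a
    return b

def fibonacci(k, xi):
    a, b = 0.0, 1.0  # F_0, F_1
    if k == 0:
        return a
    for _ in range(k - 1):
        a, b = b, xi * b + a
    return b
```

At \(\xi =1\) these loops reproduce the Lucas numbers 2, 1, 3, 4, 7, 11, ... and the Fibonacci numbers 0, 1, 1, 2, 3, 5, ...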

Function approximation. Let \(Y(\xi )\) be square integrable on (0, 1) and suppose that it can be expressed in terms of Lucas polynomials as follows:

$$\begin{aligned} Y({\xi })=\sum _{k=0}^{N}c_{k}L_k(\xi )=\mathbf {CL}(\xi ), \end{aligned}$$
(4)

where \(\mathbf{C} =\left[ c_{0},c_{1},\ldots ,c_{N}\right] ^{T}\) is the \((N+1)\times 1\) vector of unknown coefficients and \(\mathbf{L} (\xi )=[L_{0}(\xi ),L_{1}(\xi ),\ldots ,L_{N}(\xi )]\) is the row vector of Lucas polynomials. Similarly, the mth-order derivative of \(Y(\xi )\) can be approximated in terms of a finite Lucas series as follows:

$$\begin{aligned} Y^{(m)}(\xi )=\sum _{k=0}^{N}c_{k} L^{(m)}_{k}(\xi )= \mathbf {CL}^{(m)}(\xi ), \end{aligned}$$
(5)

in which \(\mathbf{L} ^{(m)}(\xi )=\left[ L_{0}^{(m)}(\xi ), L_{1}^{(m)}(\xi ),\ldots , L_{N}^{(m)}(\xi ) \right] \) is the corresponding row vector of mth-order derivatives.
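The approximations (4) and (5) can be checked numerically by collocating a smooth test function. The sketch below (our own construction, using numpy's polynomial utilities rather than the paper's notation) fits \(Y(\xi )=e^{\xi }\) on \(N+1\) equispaced nodes and then approximates \(Y'\) from the same coefficients:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Build power-basis coefficient arrays of L_0..L_N from the recurrence (2).
N = 8
L = [np.array([2.0]), np.array([0.0, 1.0])]
for _ in range(2, N + 1):
    L.append(P.polyadd(P.polymulx(L[-1]), L[-2]))

xi = np.linspace(0.0, 1.0, N + 1)
V  = np.array([[P.polyval(x, c) for c in L] for x in xi])             # L_k(xi_i)
V1 = np.array([[P.polyval(x, P.polyder(c)) for c in L] for x in xi])  # L_k'(xi_i)

c = np.linalg.solve(V, np.exp(xi))            # coefficients c_k in Eq. (4)
err_Y  = np.max(np.abs(V  @ c - np.exp(xi)))  # interpolation residual at nodes
err_dY = np.max(np.abs(V1 @ c - np.exp(xi)))  # derivative error via Eq. (5)
```

With \(N=8\) the nodal residual is at machine level and the derivative error is already small, illustrating that derivatives inherit the accuracy of the Lucas expansion.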

Corollary 2.3

[22] Let \(L^{(m)}_{k}(\xi )\) be the mth-order derivative of the Lucas polynomials. Then it can be expanded in terms of Fibonacci polynomials by the following relation:

$$\begin{aligned} L^{(m)}_k(\xi )=k(\mathcal {F}_{k}(\xi )\mathbf{D} ^{m-1}), \quad m\ge 1, \end{aligned}$$
(6)

where \(\mathbf {D}\) is \((N+1) \times (N+1)\) differentiation matrix such that:

$$\begin{aligned} \mathbf {D}= \begin{bmatrix} 0 &{} 0 &{} \dots &{} 0 \\ 0 \\ \vdots &{} \ &{} \mathbf {d}\\ 0 \end{bmatrix}, \end{aligned}$$

in which \(\mathbf {d}\) is a square matrix of order N defined as:

$$\begin{aligned} d_{m,k}={\left\{ \begin{array}{ll} m\sin \frac{(k-m)\pi }{2}, &{} \text {if } k>m,\\ 0, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

For example, if we choose \(N=3\), then we have

$$\begin{aligned} \mathbf {D}= \begin{bmatrix} 0 &{}0 &{} 0 &{} 0 \\ 0 &{}0 &{}1&{}0 \\ 0 &{} 0 &{}0 &{} 2\\ 0 &{} 0 &{}0&{}0 \end{bmatrix}, \end{aligned}$$

and for \(m=2\) the second-order derivative of \(\mathbf{L} (\xi )\) can be obtained using Eq. (6)

$$\begin{aligned} \mathbf{L} ^{''}(\xi )=k(\mathcal {F}(\xi )\mathbf{D} )=k[0,1,\xi ,\xi ^2+1]\begin{bmatrix} 0 &{}0 &{} 0 &{} 0 \\ 0 &{}0 &{}1&{}0 \\ 0 &{} 0 &{}0 &{} 2\\ 0 &{} 0 &{}0&{}0 \end{bmatrix}, \end{aligned}$$
(7)

where \(k=[0,1,2,3]\); after element-wise multiplication we get

$$\begin{aligned} \mathbf{L} ^{''}(\xi )=[0,0,2,6\xi ]. \end{aligned}$$
(8)
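The matrix \(\mathbf{D}\) and the worked \(N=3\) example can be reproduced numerically. The sketch below (our own naming) builds \(\mathbf{D}\) from the formula for \(d_{m,k}\) and checks the second derivative of the Lucas vector at a sample point:

```python
import numpy as np

def diff_matrix(N):
    # (N+1)x(N+1) matrix D of Corollary 2.3; the inner block d has
    # d[m,k] = m*sin((k-m)*pi/2) for k > m and 0 otherwise.
    D = np.zeros((N + 1, N + 1))
    for m in range(1, N + 1):
        for k in range(m + 1, N + 1):
            D[m, k] = m * np.sin((k - m) * np.pi / 2)
    return D

# Reproduce the N = 3 worked example: L''(x) = k * (F(x) @ D).
x = 0.5
F = np.array([0.0, 1.0, x, x**2 + 1.0])  # F_0..F_3
k = np.arange(4)
Lpp = k * (F @ diff_matrix(3))           # expected [0, 0, 2, 6x]
```

For \(N=3\) the routine reproduces the displayed matrix, and the result agrees with the exact derivatives \(L_2''=2\) and \(L_3''=6\xi \).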

Now assume that a two-dimensional continuous function \(Y(\xi ,\eta )\) can be written in terms of Lucas polynomials as follows:

$$\begin{aligned} Y(\xi ,\eta )\approx \sum _{k=0}^{N}\sum _{m=0}^{N} c_{km} L_{k}(\xi ) L_{m}(\eta )=\sum _{k=0}^{N}\sum _{m=0}^{N}c_{km}L_{km}(\xi ,\eta )=\mathbf {CL}(\xi ,\eta ), \end{aligned}$$
(9)

where \(\mathbf{C} =[c_{00},\ldots ,c_{0N},\ldots , c_{N0},\ldots ,c_{NN}]^{T}\) is the vector of unknown Lucas coefficients of order \((N+1)^{2}\times 1\), and \(\mathbf{L} (\xi ,\eta )=[L_{00}(\xi ,\eta ),\ldots ,L_{0N}(\xi ,\eta ),\ldots , L_{N0}(\xi ,\eta ),\ldots ,L_{NN}(\xi ,\eta )]\) is the corresponding row vector of the \((N+1)^2\) basis products.
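On a tensor-product grid, the two-dimensional collocation matrix of Eq. (9) is conveniently assembled as a Kronecker product of the two one-dimensional collocation matrices; a sketch under our own naming:

```python
import numpy as np

def lucas_vals(N, pts):
    # V[i, k] = L_k(pts[i]), built column-by-column from the recurrence (2).
    V = np.zeros((len(pts), N + 1))
    V[:, 0] = 2.0
    V[:, 1] = pts
    for k in range(2, N + 1):
        V[:, k] = pts * V[:, k - 1] + V[:, k - 2]
    return V

# Row (i, j), column (k, m) of the Kronecker product holds L_k(xi_i) * L_m(eta_j),
# matching the double sum in Eq. (9).
N = 4
xi = eta = np.linspace(0.0, 1.0, N + 1)
Vx, Vy = lucas_vals(N, xi), lucas_vals(N, eta)
L2d = np.kron(Vx, Vy)  # (N+1)^2 x (N+1)^2
```

The Kronecker structure means the 2D matrix never has to be assembled entry by entry, which keeps the implementation short.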

3 Solution methodology

Consider the following evolutionary equation:

$$\begin{aligned} \partial _{t}Y(\mathbf{x},t)+\pounds Y(\mathbf{x},t)=g(\mathbf{x},t) \quad \mathbf{x} \in \Gamma , \quad t \in [0,T], \end{aligned}$$
(10)

where \( \pounds \) is a differential operator and \(g(\mathbf{x},t)\) is a given smooth function. The initial and boundary conditions are as follows:

$$\begin{aligned} Y(\mathbf{x},0)=Y_{0}(\mathbf{x}), \quad \mathbf{x}\in \Gamma \quad \text{ and } \quad \mathfrak {B} Y(\mathbf{x},t)=f(\mathbf{x},t), \quad \mathbf{x}\in \partial \Gamma , \end{aligned}$$
(11)

where \( {\mathfrak {B}} \) is a boundary operator. To approximate Eq. (10), we first define the time discretization:

$$\begin{aligned} t^{n}=(n-1)\delta t, \quad n=1,2,\ldots ,M+1, \end{aligned}$$

where \(\delta t=T/M\) is the time step size and T is the final time. Applying a finite difference scheme to the temporal part and a \(\theta \)-weighted scheme to the spatial part of Eq. (10), one can write

$$\begin{aligned} \frac{1}{\delta t}{[Y^{n+1}(\mathbf{x})-Y^{n}(\mathbf{x})]}+\theta \pounds Y^{n+1}(\mathbf{x})+(1-\theta ) \pounds Y^{n}(\mathbf{x})= g^{n+1}(\mathbf{x}), \end{aligned}$$
(12)

where \(Y^{n+1}(\mathbf{x} )=Y(\mathbf{x} ,t^{n+1})\) and so forth. Setting \(\theta =0.5\) in Eq. (12) yields the Crank–Nicolson scheme, which is \(O(\delta t^2)\) accurate in time.
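The second-order accuracy at \(\theta =0.5\) is easy to verify on the scalar analogue of Eq. (12). A minimal sketch (names ours) for the model problem \(y'+\lambda y=g\):

```python
import math

# One theta-weighted step for the scalar analogue of Eq. (12):
#   (y_new - y_old)/dt + theta*lam*y_new + (1 - theta)*lam*y_old = g_new,
# solved for y_new.
def theta_step(y, lam, g_new, dt, theta=0.5):
    return ((1.0 - dt * (1.0 - theta) * lam) * y + dt * g_new) / (1.0 + dt * theta * lam)

# Global error at T = 1 for y' + y = 0, y(0) = 1; exact value is exp(-1).
def cn_error(dt):
    y = 1.0
    for _ in range(int(round(1.0 / dt))):
        y = theta_step(y, 1.0, 0.0, dt)
    return abs(y - math.exp(-1.0))
```

Halving \(\delta t\) should reduce the error at \(T=1\) by roughly a factor of four, consistent with \(O(\delta t^2)\).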

For discretization of the spatial domain \(\Gamma =[a,b]\) we use regular grid points defined as follows:

$$\begin{aligned} \mathbf{x} _{i}=a+\frac{b-a}{N}(i-1), \quad i=1,2,\ldots ,N+1, \quad \mathbf{x} _{i}\in \Gamma . \end{aligned}$$

At time level \((n+1)\) and collocation point \(\mathbf{x} =(\xi _{i},\eta _{j})\), Eq. (9) can be written as follows:

$$\begin{aligned} {Y}^{n+1}{(\xi _{i},\eta _{j})}=\sum _{k=0}^{N}\sum _{m=0}^{N}c_{km}^{n+1}L_k(\xi _{i})L_m(\eta _{j}),\quad i,j=0,1,\dots ,N. \end{aligned}$$
(13)

Using Eq. (13) in Eq. (12), we have

$$\begin{aligned} \frac{1}{\delta t}{\left[ \sum _{k=0}^{N}\sum _{m=0}^{N}c_{km}^{n+1}L_k(\xi _i)L_m(\eta _j) -\sum _{k=0}^{N}\sum _{m=0}^{N}c_{km}^{n}L_k(\xi _i)L_m(\eta _j)\right] } +{\theta }{\left[ \sum _{k=0}^{N}\sum _{m=0}^{N} c^{n+1}_{km} \pounds (L_{k}(\xi _i)L_{m}(\eta _j)) \right] }\\ +{(1-\theta )}{\left[ \sum _{k=0}^{N} \sum _{m=0}^{N} c^{n}_{km} \pounds (L_{k}(\xi _i)L_{m}(\eta _j))\right] } = g^{n+1}(\xi _i,\eta _j),\quad (\xi _i,\eta _j)\in \Gamma . \end{aligned}$$

The above equation can also be written as

$$\begin{aligned}&\sum _{k=0}^{N}\sum _{m=0}^{N} \big [L_k(\xi _i)L_m(\eta _j) +\theta ~\delta t ~\pounds (L_{k}(\xi _i)L_{m}(\eta _j) )\big ]c_{km}^{n+1}\nonumber \\&\quad =\sum _{k=0}^{N}\sum _{m=0}^{N} \big [L_k(\xi _i)L_m(\eta _j)- (1-\theta )~\delta t~ \pounds (L_{k}(\xi _i)L_{m}(\eta _j))\big ]c_{km}^{n} + \delta t~ g^{n+1}(\xi _i,\eta _j),\quad (\xi _i,\eta _j)\in \Gamma . \end{aligned}$$
(14)

The boundary conditions (11) transform to

$$\begin{aligned} \sum _{k=0}^{N}\sum _{m=0}^{N} c^{n+1}_{km} \mathfrak {B} (L_{k}(\xi _i)L_{m}(\eta _j))=f^{n+1}(\xi _i,\eta _j), \quad (\xi _i,\eta _j)\in \partial \Gamma . \end{aligned}$$
(15)

Matrix form of Eqs. (14) and (15) is given by

$$\begin{aligned} \mathbf{HC} ^{n+1}=\mathbf{GC} ^{n}+\mathbf{B} ^{n+1}. \end{aligned}$$
(16)

For \(k,m=0,\dots ,N\), the elements of the above matrices are given by:

$$\begin{aligned} \mathbf {H}= {\left\{ \begin{array}{ll} L_{k}(\xi _{i})L_{m}(\eta _{j})+\delta t \theta \pounds (L_{k}(\xi _{i})L_{m}(\eta _{j})), &{} (\xi _i,\eta _j)\in \Gamma ,\\ \mathfrak {B} (L_{k}(\xi _{i})L_{m}(\eta _{j})), &{} (\xi _i,\eta _j)\in \partial \Gamma , \end{array}\right. } \end{aligned}$$
(17)
$$\begin{aligned} \mathbf {G}= {\left\{ \begin{array}{ll} L_{k}(\xi _{i})L_{m}(\eta _{j})-\delta t~ (1-\theta )~ \pounds (L_{k}(\xi _{i})L_{m}(\eta _{j})), &{} (\xi _i,\eta _j)\in \Gamma , \\ 0, &{} (\xi _i,\eta _j)\in \partial \Gamma , \end{array}\right. } \end{aligned}$$
(18)
$$\begin{aligned} \mathbf{B} ^{n+1}= {\left\{ \begin{array}{ll} \delta t~ g^{n+1}(\xi _i,\eta _j), &{} (\xi _i,\eta _j)\in \Gamma , \\ f^{n+1}(\xi _i,\eta _j), &{} (\xi _i,\eta _j)\in \partial \Gamma , \end{array}\right. } \end{aligned}$$
(19)

where \(\mathbf {H}\) and \(\mathbf {G}\) are square matrices of order \((N+1)^{2}\) and \(\mathbf{B} ^{n+1}\) is a vector of length \((N+1)^{2}\). The unknown coefficient vector \(\mathbf {C}\) is obtained by solving Eq. (16). Once the unknown coefficients are computed, the solution of the problem under consideration is obtained from Eq. (9).

4 Truncation error estimate

To study the error estimate of the proposed scheme, we follow the approach of Abd-Elhameed and Youssri [3].

Theorem 4.1

Let \(Y(\xi ,\eta )\) and \(Y^{*}(\xi ,\eta )\) be the exact and approximate solutions of Eq. (1), and expand \(Y(\xi ,\eta )\) in terms of the Lucas sequence. Then the truncation error satisfies

$$\begin{aligned} |e|<\frac{4\exp ({2\kappa })\cosh ^{2}(2P)\kappa ^{2(N+1)}}{((N+1)!)^{2}}. \end{aligned}$$
(20)

Proof

Consider the absolute error between exact and approximate solution

$$\begin{aligned} |e|=|Y(\xi ,\eta )-Y^{*}(\xi ,\eta )| \end{aligned}$$
(21)

where \(Y(\xi ,\eta )=\sum _{k=0}^{\infty } \sum _{m=0}^{\infty } \lambda _{k}\lambda _{m}L_{k}(\xi )L_{m}(\eta )\) and \(Y^{*}(\xi ,\eta )=\sum _{k=0}^{N} \sum _{m=0}^{N} \lambda _{k}\lambda _{m}L_{k}(\xi )L_{m}(\eta )\). Then the truncated term is given as

$$\begin{aligned} |e|=\left| \sum _{k=N+1}^{\infty } \sum _{m=N+1}^{\infty } \lambda _{k}\lambda _{m}L_{k}(\xi )L_{m}(\eta )\right| . \end{aligned}$$
(22)

It is shown in [3] that

$$\begin{aligned} |L_{m}(\xi )|\le 2\vartheta ^{m}, \quad ~~ |\lambda _{m}|\le \frac{P^{m}\cosh (2P)}{m!}, \end{aligned}$$

where \(\vartheta \) is the well-known golden ratio. Therefore, Eq. (22) implies that

$$\begin{aligned} |e| \le 4\cosh ^{2}(2P)\sum _{k=N+1}^{\infty } \sum _{m=N+1}^{\infty } \frac{\kappa ^{k+m}}{k!m!}, \end{aligned}$$

where \(\kappa =P \vartheta \). The above inequality can also be written as

$$\begin{aligned} |e| \le 4 \exp {(2\kappa )}\cosh ^{2}(2P)\left[ 1- \frac{\Gamma (N+1,\kappa )}{\Gamma (N+1)} \right] ^{2}. \end{aligned}$$
(23)

Here, \(\Gamma (N+1,\kappa )\) is the incomplete gamma function and \(\Gamma (N+1)\) is the complete gamma function [36]. In integral form, Eq. (23) can be written as

$$\begin{aligned} |e| \le \frac{4\exp (2\kappa )\cosh ^{2}(2P)}{(N!)^2} \left[ \int _{0}^{\kappa } t^{N} \exp (-t)\,dt \right] ^{2}. \end{aligned}$$

Since \(\exp (-t)<1\) for all \(t>0\), we have

$$\begin{aligned} |e| < \frac{4\exp (2\kappa )\cosh ^{2}(2P)\kappa ^{2(N+1)}}{((N+1)!)^2}. \end{aligned}$$

This completes the proof. \(\square \)
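As a quick numerical illustration (not part of the proof), the bound (20) can be evaluated directly for a fixed P; the helper name is ours, and \(\kappa =P\vartheta \) with \(\vartheta \) the golden ratio:

```python
import math

# Evaluate the right-hand side of the bound (20) for given N and P.
def error_bound(N, P=1.0):
    phi = (1.0 + math.sqrt(5.0)) / 2.0  # golden ratio
    kappa = P * phi
    return (4.0 * math.exp(2.0 * kappa) * math.cosh(2.0 * P) ** 2
            * kappa ** (2 * (N + 1)) / math.factorial(N + 1) ** 2)
```

For \(P=1\) the bound decays super-exponentially, dropping below \(10^{-10}\) well before \(N=15\), consistent with the factorial in the denominator.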

5 Numerical examples

In this section, one- and two-dimensional Burgers’ and diffusion equations have been solved using the proposed method. Performance of the technique is examined by computing \(L_2\), \(L_\infty \), and root mean square (RMS) error norms for different collocation points N and time T. The obtained results are then compared with available results in the literature.

Example 5.1

When \(\alpha =0\), \(\beta =1\), \(\gamma =0\), and \(g(\xi ,t)=0\), Eq. (1) becomes the heat equation

$$\begin{aligned} \partial _{t}Y(\xi ,t)-\partial _{\xi \xi }Y(\xi ,t)=0, \end{aligned}$$
(24)

with initial and boundary conditions

$$\begin{aligned} Y(\xi ,0)=\sin (\xi ),\quad 0 \le {\xi } \le 1 \quad \text{ and }\quad \quad Y(0,t)=0, \quad Y(1,t)=\sin (1)e^{-t} \quad \text{ for } \quad t \ge 0, \end{aligned}$$

and exact solution [18]:

$$\begin{aligned} Y(\xi ,t)=\sin (\xi )e^{-t}. \end{aligned}$$

Comparing the above equations with Eqs. (10) and (11) gives

$$\begin{aligned} \pounds =-\partial _{\xi \xi },\quad g(\xi ,t)=0,\quad Y_{0}(\xi )=\sin (\xi ),\quad \text{ and } \quad f(\xi ,t)=\sin (\xi )e^{-t} \ \text{ on } \ \partial \Gamma . \end{aligned}$$

Applying the technique discussed in Sect. 3, Eq. (14) takes the following form:

$$\begin{aligned} \sum _{k=0}^{N} c^{n+1}_{k} L_{k}(\xi )-{\delta t}{\theta }\sum _{k=0}^{N} c^{n+1}_{k} L^{''}_{k}(\xi )=\sum _{k=0}^{N} c^{n}_{k} L_{k}(\xi )+{\delta t}~(1-\theta )\sum _{k=0}^{N} c^{n}_{k}L^{''}_{k}(\xi ), \end{aligned}$$
(25)

where \(L^{''}_{k}(\xi )\) represents the second-order derivative of the Lucas polynomials, which can be obtained using Eq. (6). The matrices in Eqs. (17)–(19) take the following form:

$$\begin{aligned} \mathbf {H}= {\left\{ \begin{array}{ll} L_{k}(\xi _{i})-\delta t~ \theta ~ k \mathcal {F}_{k}(\xi _{i})\mathbf{D} ,\quad &{} i=1,\ldots ,N-1,\\ L_{k}(\xi _{i}),\quad &{} i=0,N, \end{array}\right. } \end{aligned}$$
(26)
$$\begin{aligned} \mathbf {G}= {\left\{ \begin{array}{ll} L_{k}(\xi _{i})+\delta t~ (1-\theta )~ k \mathcal {F}_{k}(\xi _{i})\mathbf{D} ,\quad &{} i=1,\ldots ,N-1 , \\ 0,\quad &{} i=0,N, \end{array}\right. } \end{aligned}$$
(27)
$$\begin{aligned} \mathbf{B} ^{n+1}= {\left\{ \begin{array}{ll} 0, &{} i=1,2,\dots ,N-1 , \\ \sin (\xi _{i})e^{-t},&{}i=0, N. \end{array}\right. } \end{aligned}$$
(28)
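A compact numerical check of the scheme (26)–(28) for this example can be sketched in Python (rather than the paper's MATLAB; all names are ours). For simplicity we evaluate \(L_k\) and \(L_k''\) with numpy's polynomial utilities instead of the Fibonacci identity (6), fit the initial condition, and advance with the Crank–Nicolson system (16):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def lucas_coeffs(N):
    # Power-basis coefficient arrays of L_0..L_N from the recurrence (2).
    L = [np.array([2.0]), np.array([0.0, 1.0])]
    for _ in range(2, N + 1):
        L.append(P.polyadd(P.polymulx(L[-1]), L[-2]))
    return L[:N + 1]

def solve_heat(N=10, dt=1e-3, T=0.1, theta=0.5):
    xi = np.linspace(0.0, 1.0, N + 1)
    L = lucas_coeffs(N)
    V  = np.array([[P.polyval(x, c) for c in L] for x in xi])               # L_k(xi_i)
    V2 = np.array([[P.polyval(x, P.polyder(c, 2)) for c in L] for x in xi])  # L_k''(xi_i)
    H = V - dt * theta * V2             # interior rows of Eq. (26)
    G = V + dt * (1.0 - theta) * V2     # interior rows of Eq. (27)
    H[[0, N]] = V[[0, N]]               # boundary collocation rows
    G[[0, N]] = 0.0
    c = np.linalg.solve(V, np.sin(xi))  # fit Y(xi, 0) = sin(xi)
    for n in range(int(round(T / dt))):
        t1 = (n + 1) * dt
        b = np.zeros(N + 1)
        b[0], b[N] = 0.0, np.sin(1.0) * np.exp(-t1)   # Eq. (28)
        c = np.linalg.solve(H, G @ c + b)
    return xi, V @ c, np.sin(xi) * np.exp(-T)
```

With \(N=10\) and \(\delta t=0.001\) the maximum absolute error of this sketch at \(T=0.1\) is small (well below \(10^{-4}\)), in line with the behavior of the scheme.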

The problem has been solved for different values of the nodal points N and time T. The computed solutions are compared with the results of the Chebyshev wavelet method in terms of various error norms, as shown in Table 1. It is clear from the table that the proposed scheme gives better results than the cited work. Numerical convergence and CPU time are reported in Table 2 for various values of N. It can be observed that the solution converges as the number of nodal points N increases (i.e., as \(d\xi \) decreases). Error norms for large time levels are recorded in Table 3, and better results are observed than for small time levels. The solution profile and absolute errors are presented in Fig. 1, which shows that the exact and approximate solutions match well, confirming the efficiency of the proposed scheme.

Table 1 Error norms for \(N=16\), \(\delta t=0.001\) of Example 5.1
Table 2 Spatial convergence when \(T=1\), \(\delta t=0.001\) for Example 5.1
Table 3 Error norm for \(N=20\), \(\delta t=0.001\) of Example 5.1
Fig. 1
figure 1

Solution profile when \(T=0.1\), \(\delta t=0.001\), \(N=15\) of Example 5.1

Example 5.2

Consider Eq. (1) with \(\alpha =0\), \(\beta =1\), \(\gamma =2\), and \(g(\xi ,t)=0\), which gives the homogeneous heat-type equation

$$\begin{aligned} {\partial _{t}}Y{(\xi ,t)}={\partial _{\xi \xi }}Y{(\xi ,t)}-2Y(\xi ,t), \quad 0 \le \xi \le 1 , \quad 0\le t \le 1, \end{aligned}$$

with exact solution

$$\begin{aligned} Y(\xi ,t)=\sinh (\xi )e^{-t}. \end{aligned}$$

The initial and boundary conditions are extracted from the exact solution. The problem has been solved by adopting the procedure discussed in Sect. 3, and the results are computed for different values of N and T. RMS, \(L_2\), and \(L_\infty \) error norms have been calculated and compared with the available results in the literature [18] for different values of T with \(N=16\) and \(\delta t=0.001\), as shown in Table 4. It is evident from the table that the results achieved using the proposed method are better than those available in the literature, which shows the proficiency of the method. Spatial convergence results are reported in Table 5 for different values of \(d\xi \), showing that the solution converges as \(d\xi \) decreases. Graphs of the solutions along with absolute errors for \(T=1\) and \(N=15\) are plotted in Fig. 2. The figure shows that the exact and approximate solutions are closely matched.

Table 4 Error norms when \(N=16\), \(\delta t=0.001\) of Example 5.2
Table 5 Space convergence for \(T=1\) and \(\delta t=0.001\) of Example 5.2
Fig. 2
figure 2

Solution profile when \(T=1\), \(\delta t=0.001\), \(N=15\) of Example 5.2

Example 5.3

Setting \(\alpha =1\), \(\gamma =0\), and \(g(\mathbf{x},t)=0\) in Eq. (1) gives the following two-dimensional nonlinear Burgers equation:

$$\begin{aligned} {\partial _{t}}Y(\xi ,\eta ,t)+ Y(\xi ,\eta ,t)\big \{\partial _{\xi }Y(\xi ,\eta ,t) +\partial _{\eta }Y(\xi ,\eta ,t)\big \}= \beta \big \{\partial _{\xi \xi }Y(\xi ,\eta ,t) +\partial _{\eta \eta }Y(\xi ,\eta ,t)\big \}. \end{aligned}$$
(29)

There are two cases.

Case 1: In this case the exact solution is given as [16]

$$\begin{aligned} Y(\xi ,\eta ,t)=\frac{1}{1+e^{\frac{\xi +\eta -t}{2\beta }}},\quad 0\le \xi ,\eta \le 1. \end{aligned}$$
(30)

Initial and boundary conditions are extracted from the exact solution. The nonlinear term in Eq. (29) can be linearized by the following formula [8]:

$$\begin{aligned} (Y\partial _{\xi }Y)^{n+1}=Y^{n+1}\partial _{\xi }Y^{n}. \end{aligned}$$
(31)

Applying the technique discussed in Sect. 3, Eq. (14) takes the following form:

$$\begin{aligned}&\sum _{k=0}^{N}\sum _{m=0}^{N} \Big [L_k(\xi _i)L_m(\eta _j)-{\delta t}\theta \beta \big \{ (k\mathcal {F}_{k}(\xi _i)\mathbf{D} )L_m(\eta _j)+L_k(\xi _i)(m\mathcal {F}_{m}(\eta _j)\mathbf{D} )\big \}+ \delta t \theta L_k(\xi _i)L_m(\eta _j) \big \{\partial _{\xi }Y^{n}(\xi _{i},\eta _{j})+\partial _{\eta }Y^{n}(\xi _{i},\eta _{j})\big \} \Big ]c_{km}^{n+1}\\&\quad =\sum _{k=0}^{N}\sum _{m=0}^{N} \Big [L_k(\xi _i)L_m(\eta _j)+{\delta t}(1-\theta )\beta \big \{ (k\mathcal {F}_{k}(\xi _i)\mathbf{D} )L_m(\eta _j)+L_k(\xi _i)(m\mathcal {F}_{m}(\eta _j)\mathbf{D} )\big \}\\&\qquad -\delta t (1-\theta )L_{k}(\xi _{i})L_{m}(\eta _{j})\big \{\partial _{\xi }Y^{n}(\xi _{i},\eta _{j})+\partial _{\eta }Y^{n}(\xi _{i},\eta _{j})\big \} \Big ]c_{km}^{n}. \end{aligned}$$

The matrices \(\mathbf {H, ~G}\) and \(\mathbf {B}\) in Eq. (16) for \(k,m=0,1,\dots ,N,\) are given as follows:

$$\begin{aligned} \mathbf {H}= {\left\{ \begin{array}{ll} L_k(\xi _i)L_m(\eta _j)-{\delta t}\theta \beta \big \{ (k\mathcal {F}_{k}(\xi _i)\mathbf{D} )L_m(\eta _j)+L_k(\xi _i)(m\mathcal {F}_{m}(\eta _j)\mathbf{D} )\big \}\\ +\delta t \theta L_k(\xi _i)L_m(\eta _j) \big \{\partial _{\xi }Y^{n}(\xi _{i},\eta _{j})+\partial _{\eta }Y^{n}(\xi _{i},\eta _{j})\big \},&{} (\xi _i,\eta _j)\in \Gamma ,\\ L_{k}(\xi _{i})L_{m}(\eta _{j}), &{} (\xi _i,\eta _j)\in \partial \Gamma , \end{array}\right. } \end{aligned}$$
(32)
$$\begin{aligned} \mathbf {G}= {\left\{ \begin{array}{ll} L_k(\xi _i)L_m(\eta _j)+{\delta t} (1-\theta ) \beta \big \{ (k\mathcal {F}_{k}(\xi _i)\mathbf{D} )L_m(\eta _j)+L_k(\xi _i)(m\mathcal {F}_{m}(\eta _j)\mathbf{D} )\big \} &{} {}\\ -\delta t (1-\theta )L_k(\xi _i)L_m(\eta _j) \big \{\partial _{\xi }Y^{n}(\xi _{i},\eta _{j})+\partial _{\eta }Y^{n}(\xi _{i},\eta _{j})\big \}, &{} (\xi _i,\eta _j)\in \Gamma , \\ 0, &{} (\xi _i,\eta _j)\in \partial \Gamma , \end{array}\right. } \end{aligned}$$
(33)
$$\begin{aligned} \mathbf{B} ^{n+1}= {\left\{ \begin{array}{ll} 0, &{} (\xi _i,\eta _j)\in \Gamma , \\ \frac{1}{1+e^{\frac{\xi _i+\eta _j-t}{2\beta }}},&{} (\xi _i,\eta _j)\in \partial \Gamma . \end{array}\right. } \end{aligned}$$
(34)

The approximate solution has been computed for different values of the viscosity \(\beta \), time T, and nodal points N. The error norms \(L_2\), \(L_\infty \), and RMS have been calculated, and the obtained results are compared with the results of the Haar wavelet [16] and differential quadrature [19] methods in Tables 6 and 7. From these tables it is obvious that the proposed method performs considerably better than the methods available in the literature. The solution profile and error plots for \(T=2\), \(\delta t=0.001\), \(\beta =1\), and \([20 \times 20]\) nodal points are shown in Fig. 3, demonstrating the efficiency of the proposed technique.

Table 6 Error norms of Example 5.3 case 1 for \(\delta t=0.0025\)
Table 7 Error norms when \(N=15\), and \(\delta t=0.01\) for case 1 Example 5.3
Fig. 3
figure 3

Solution profile when \(T=1\), \(\delta t=0.01\), \(N=20\) for case 1 Example 5.3

Case 2:

In this case exact solution of Eq. (29) is given as [19]

$$\begin{aligned} Y(\xi ,\eta ,t)=0.5-\tanh \left( \frac{\xi +\eta -t}{2\beta }\right) , \quad -0.5 \le \xi ,\eta \le 0.5, \quad t\ge 0. \end{aligned}$$
(35)

Here also the initial and boundary conditions are extracted from the exact solution. The method discussed in the previous example is applied for different values of the viscosity \(\beta \). The error norms \(L_2\), \(L_\infty \), and RMS have been computed and compared with the results of the meshless collocation method [19] for different values of N. The obtained results are presented in Table 8, which shows that the present method gives better results than those available in the literature. Graphs of the numerical and exact solutions are shown in Fig. 4, demonstrating the efficiency of the current technique.

Table 8 Error norms for \(T=0.1\), \(\delta t=0.001\) of case 2 Example 5.3
Fig. 4
figure 4

Solution Profile when \(T=1\), \(\delta t=0.01\), \(\beta =0.01\), \(N=20\) of case 2 Example 5.3

Example 5.4

Finally, consider the case \(\alpha =0\), \(\beta =1\), \(\gamma =0\), and \(g(\xi ,\eta ,t)=0\); then Eq. (1) takes the following form:

$$\begin{aligned} {\partial _{t}}Y(\xi ,\eta ,t)={\partial _{\xi \xi }}Y(\xi ,\eta ,t)+{\partial _{\eta \eta }}Y(\xi ,\eta ,t). \end{aligned}$$
(36)

The initial and boundary conditions are extracted from exact solution [37]

$$\begin{aligned} Y(\xi ,\eta ,t)=\sin (\pi \xi )\sin (\pi \eta ) e^{-2\pi ^{2}t}. \end{aligned}$$

The problem has been solved in the domain \([0,1]\times [0,1]\) over the time interval [0, 1] for different values of T and N. Error norms are computed and compared with the RBF results from the literature in Tables 9 and 10. From these tables it is clear that the results obtained using the proposed technique are comparable with those of multiquadric (MQ) RBFs and better than those of Wendland (WL) RBFs [37]. The solution profile for \(T=0.2\) is plotted in Fig. 5, showing the efficiency of the suggested method.

Table 9 Error norms when \(T=0.2\), and \(\delta t=0.001\) for Example 5.4
Table 10 Error norms when \(N=64\) and \(\delta t=0.001\) of Example 5.4
Fig. 5
figure 5

Solution profile when \(T=0.2\), \(\delta t=0.001\), \(N=10\) of Example 5.4

6 Conclusion

In this paper, we studied a numerical technique based on Lucas and Fibonacci polynomials. First, the temporal part of the PDEs is discretized by a finite difference \(\theta \)-weighted scheme with \(\theta =1/2\) (the Crank–Nicolson method). Thereafter, the unknown functions are expanded in a Lucas series while their derivatives are expressed via Fibonacci polynomials. The performance and convergence of the method are investigated through several test problems, including one- and two-dimensional linear and nonlinear equations. The results are compared with exact solutions as well as with numerical results available in the literature. The comparison demonstrates the efficiency and applicability of the proposed methodology.