1 Introduction

Fractional differential equations (FDEs) play a crucial role in modeling different physical phenomena such as heat and mass transfer [1, 2], fluid mechanics [3], and financial theory [4, 5]. These equations often arise in the form of fractional partial differential equations (FPDEs). A well-known feature of FPDEs is their nonlocal property, which means that the next state of a dynamical system depends not only on the present state but also on all previous states. This is the reason why fractional calculus has received growing attention over the last few decades. One of the most important FPDE models is the time fractional diffusion equation (TFDE), which is used to describe anomalous diffusion phenomena in transport processes. This model is obtained from the classical diffusion model by replacing the first order time derivative with a fractional one. Analytical methods have limited ability to solve these problems; therefore, numerical techniques are essential.

In the literature several approaches have been proposed for the solution of TFDEs, for example, finite difference methods [6, 7], spectral methods [8], finite element methods [9], etc. Compared with standard finite difference methods, compact finite difference methods offer better accuracy [10, 11] and have been applied to 1D TFDEs in [12]. For 2D problems, explicit and implicit central difference formulas coupled with extrapolation, as well as kernel-based approximation with an adaptive technique, have been studied in [13, 14]. For multidimensional problems, another approach is the operator-splitting or alternating direction implicit (ADI) technique, which transforms a given problem into a sequence of 1D linear problems with low computational cost. For complete details of ADI, the interested reader is referred to [15, 16]. The idea of the ADI method was extended to FPDEs in [17, 18].

Among diverse numerical methods, wavelet-based methods have recently attracted much attention for the solution of FPDEs because of their ease of implementation. Another attractive feature of wavelet-based methods is that they are capable of detecting singularities and approximating a function at different resolution levels. There are several families of wavelets, such as Daubechies, Symlet, Coiflet, etc. All of them are equally interesting, but the Daubechies wavelets of order one, known as Haar wavelets (HW), deserve special attention. HW are piecewise constant functions which attain the values −1 and 1. These wavelets are convenient for approximation but have the drawback of discontinuity at the endpoints of their support intervals. This problem was discussed by Cattani [19, 20], who used interpolating splines to regularize these wavelets. Another approach was used by Chen and Hsiao [21], who approximated the highest order derivative, rather than the solution itself, by a truncated HW series. This approach was later used by Lepik [22, 23] to study different problems.

The above discussion about Haar wavelets is limited to integer order differential equations; however, the approach also works for fractional differential equations. Lepik [24] solved fractional integral equations with the help of Haar wavelets. Li [8] applied Haar wavelets to the solution of fractional ordinary differential equations (FODEs). Chen et al. [25] analyzed the errors of the Haar wavelet numerical method for FODEs. Ray and Patra [26] solved the nonlinear oscillatory van der Pol system using Haar wavelet operational matrices. Yi and Huang [27] implemented an operational matrix method for variable coefficient fractional differential equations. Saeed et al. [28] used a Haar wavelet Picard technique for the numerical solution of fractional order initial and boundary value problems. Saeed and Rehman [29] applied the same technique to the numerical solution of fractional partial differential equations. Majak et al. [30] studied a higher order Haar wavelet method for FGM structures. Recently, Alderremya et al. [31] implemented the spectral collocation method together with third order Chebyshev polynomials for certain new models of the multi-space-fractional Gardner equation. Zhang et al. [32] computed the general solution of impulsive differential equations with Riemann–Liouville fractional order \(q \in (1, 2)\). Agarwal and Singh [33] studied a mathematical model of the Nipah virus with a fractional order approach. Morales-Delgado et al. [34] discussed the fractional dynamics of oxygen diffusion through a capillary to tissues under the influence of external forces, considering the Liouville–Caputo and Caputo–Fabrizio fractional operators. Choi and Agarwal [35] established a fractional integration formula using the generalized multiindex Mittag-Leffler function. Agarwal et al. [36, 37] studied fixed point results, some new differential equation formulas for extended Mittag-Leffler-type functions, and an extension of the Caputo fractional derivative operator involving generalized hypergeometric-type functions. Amiri et al. [38] developed a numerical method based on cosine-trigonometric basis functions to solve Fredholm integral equations of the second kind. Moghadam et al. [39] applied the Bernoulli wavelet method to numerical solutions of advection dispersion equations. Farooq et al. [40] used the Chebyshev wavelet method for the solution of delay differential equations. Khalil et al. [41] introduced some new operational matrices of integration for fractional Poisson equations with integral type boundary conditions.

1.1 Motivation

Developing methods, whether analytical, semianalytical, or numerical, for the solution of TFDEs is an important task. The motivation of this work is to compute numerical solutions of 2D TFDEs using two-dimensional Haar wavelets and finite differences. The proposed scheme is hybrid and gives accurate results that are in close agreement with exact solutions. We also examine the computational stability of the proposed algorithm, which is governed by the spectral radius of the amplification matrix. The problem considered in this study is defined as follows:

$$ \frac{\partial ^{\gamma }C(X,t)}{\partial t^{\gamma }}=\Delta {C(X,t)}+ \mathbf{B(X,t)},\quad t>0, X \in \varPsi , $$
(1.1)

together with the initial condition

$$ C(X,0)=\beta (X), \quad X \in \varPsi , $$
(1.2)

and boundary conditions

$$ C(X,t)=\eta (X,t), \quad t\geq 0, X \in \partial \varPsi , $$
(1.3)

where \(X=X(x,y)\), Ψ and ∂Ψ denote the solution domain and its boundary, \(\mathbf{B}(X,t)\), \(\beta (X)\), \(\eta (X,t)\) are known functions, and \(C(X,t)\) is the unknown function to be computed. In Eq. (1.1), Δ is the two-dimensional Laplacian and \(\frac{\partial ^{\gamma }C(X,t)}{\partial t^{\gamma }}\) is the fractional derivative of C with respect to t in the Caputo sense, defined as:

$$ \frac{\partial ^{\gamma }C(X,t)}{\partial t^{\gamma }}= \textstyle\begin{cases} \frac{1}{\varGamma (1-\gamma )}\int _{0}^{t} \frac{\frac{\partial C(X,\mu )}{\partial \mu }}{ (t-\mu )^{\gamma }}\,d\mu ,&0< \gamma < 1, \\ \frac{\partial C(X,t)}{\partial t},& \gamma =1. \end{cases} $$
(1.4)

The rest of the paper is organized as follows: basic definitions of HW and their integrals are given in Sect. 2. The proposed methodology and the convergence and stability analysis are discussed in Sects. 3 and 4, respectively. Numerical results are given in Sect. 5, and finally the conclusion is summarized in Sect. 6.

2 Preliminaries

This section is devoted to some basic definitions required in this study. Consider \(\mathbb{M} = 2^{J}\), where J denotes the maximal resolution level. Next assume \(x \in [\mathbb{E},\mathbb{F} )\) and partition the interval into \(2\mathbb{M}\) subintervals of equal length \(\Delta x=\frac{\mathbb{F}-\mathbb{E}}{2\mathbb{M}}\). Further define the wavelet number \(i =m +\kappa + 1\), where \(m=2^{\lambda }\), \(\lambda = 0, \dots , J\), and \(\kappa = 0,\dots , 2^{\lambda } -1\). The Haar wavelets for \(i\geq 1\) are defined as follows:

$$\begin{aligned}& \mathbb{H}_{1}(x)= \textstyle\begin{cases} 1, & x\in [\mathbb{E}, \mathbb{F} ), \\ 0, & \text{otherwise}. \end{cases}\displaystyle \end{aligned}$$
(2.1)
$$\begin{aligned}& \mathbb{H}_{i}(x)= \textstyle\begin{cases} (-1 )^{j+1}, & x\in [\zeta _{j}(i),\zeta _{j+1}(i) ),j=1,2, \\ 0, & \text{otherwise}, \end{cases}\displaystyle \end{aligned}$$
(2.2)

where \(\zeta _{s+1}(i)= \mathbb{E}+ (2\kappa +s )\nu \Delta x\), \(s=0,1,2\), and \(\nu =2^{J-\lambda }\). To solve an nth order time fractional partial differential equation, one needs repeated integrals of the form

$$\begin{aligned} \mathbb{P}_{i,\alpha }(x)= \int _{\mathbb{E}}^{x} \int _{\mathbb{E}}^{x} \dots \int _{\mathbb{E}}^{x}\mathbb{H}_{i}(y)\,dy^{\alpha } = \frac{1}{ (\alpha -1 )!} \int _{\mathbb{E}}^{x} (x-y )^{ \alpha -1} \mathbb{H}_{i}(y)\,dy, \end{aligned}$$
(2.3)

where \(\alpha = 1, 2, \dots , n\), \(i = 1, 2, \dots , 2\mathbb{M}\). Using Eqs. (2.1), (2.2), and (2.3), the closed-form expressions of these integrals are given as [22]:

$$\begin{aligned}& \mathbb{P}_{1,\alpha }(x)= \frac{ (x-\mathbb{E} )^{\alpha }}{\alpha !}, \end{aligned}$$
(2.4)
$$\begin{aligned}& \mathbb{P}_{i,\alpha }(x)= \textstyle\begin{cases} 0, & x< \zeta _{1}(i), \\ \frac{1}{\alpha !} [x-\zeta _{1}(i) ]^{\alpha }, & x\in [\zeta _{1}(i),\zeta _{2}(i) ), \\ \frac{1}{\alpha !} [(x-\zeta _{1}(i))^{\alpha }-2(x-\zeta _{2}(i))^{\alpha } ], & x\in [\zeta _{2}(i),\zeta _{3}(i)), \\ \frac{1}{\alpha !} [(x-\zeta _{1}(i))^{\alpha }-2(x-\zeta _{2}(i))^{\alpha }+(x-\zeta _{3}(i))^{\alpha } ], & x\ge \zeta _{3}(i). \end{cases}\displaystyle \end{aligned}$$
(2.5)
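For reference, the following minimal NumPy sketch (our own illustration, not part of the original derivation) evaluates \(\mathbb{H}_{i}(x)\) and \(\mathbb{P}_{i,\alpha }(x)\) from Eqs. (2.1)–(2.5), assuming \(\mathbb{E}=0\), \(\mathbb{F}=1\); the helper names haar, P, and breakpoints are hypothetical.

```python
import numpy as np
from math import factorial, floor, log2

def breakpoints(i):
    """zeta_1, zeta_2, zeta_3 of the i-th Haar wavelet (i >= 2) on [0, 1)."""
    lam = floor(log2(i - 1))          # resolution level lambda
    kappa = i - 1 - 2**lam            # translation parameter kappa
    m = 2**lam
    return kappa / m, (kappa + 0.5) / m, (kappa + 1) / m

def haar(i, x):
    """H_i(x), Eqs. (2.1)-(2.2)."""
    x = np.asarray(x, dtype=float)
    if i == 1:
        return np.ones_like(x)
    z1, z2, z3 = breakpoints(i)
    return np.where((x >= z1) & (x < z2), 1.0,
                    np.where((x >= z2) & (x < z3), -1.0, 0.0))

def P(i, alpha, x):
    """alpha-fold integral P_{i,alpha}(x), Eqs. (2.4)-(2.5)."""
    x = np.asarray(x, dtype=float)
    if i == 1:
        return x**alpha / factorial(alpha)
    z1, z2, z3 = breakpoints(i)
    terms = (np.clip(x - z1, 0, None)**alpha
             - 2 * np.clip(x - z2, 0, None)**alpha
             + np.clip(x - z3, 0, None)**alpha)
    return terms / factorial(alpha)
```

For instance, haar(2, 0.25) returns 1 and P(2, 1, 1.0) returns 0, consistent with the fact that each wavelet with \(i\ge 2\) has zero mean.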

3 Proposed methodology

This section explains the proposed methodology for the above mentioned problem. The time fractional derivative in Eq. (1.4) can be approximated by the \(L_{1}\) approximation formula [1] as:

$$ \begin{aligned} \frac{\partial ^{\gamma }C(X,t^{\xi +1})}{\partial t^{\gamma }}&= \frac{1}{\varGamma (1-\gamma )} \int _{0}^{t^{\xi +1}} \frac{\partial C(X,\mu )}{\partial \mu } \bigl(t^{\xi +1}-\mu \bigr)^{- \gamma }\,d\mu \\ & =\frac{1}{\varGamma (1-\gamma )}\sum_{j=0}^{\xi } \int _{j\tau }^{ (j+1) \tau }\frac{\partial C(X,\mu )}{\partial \mu } \bigl(t^{\xi +1}-\mu \bigr)^{-\gamma }\,d\mu \\ & = \frac{1}{\varGamma (1-\gamma )}\sum_{j=0}^{\xi } \biggl[ \frac{C^{j+1}(X)-C^{j}(X)}{\tau }+O(\tau ) \biggr] \int _{j \tau }^{(j+1)\tau } \bigl((\xi +1)\tau -\mu \bigr)^{-\gamma }\,d\mu . \end{aligned} $$

Evaluating the integrals and rearranging the sum yields

$$\begin{aligned}& \frac{\partial ^{\gamma } C}{\partial t^{\gamma }} = \textstyle\begin{cases} A_{\gamma }\sum_{j=0}^{\xi } [C^{\xi -j+1}(X)-C^{\xi -j}(X) ] \varphi _{\gamma }(j)+O(\tau ^{2-\gamma }),& 0< \gamma < 1, \\ \frac{C^{\xi +1}(X)-C^{\xi }(X)}{\tau }+O(\tau ), & \gamma =1, \end{cases}\displaystyle \end{aligned}$$
(3.1)

where \(A_{\gamma }=\frac{\tau ^{-\gamma }}{\varGamma (2-\gamma )}\), \(\varphi _{ \gamma }(j)=(j+1)^{1-\gamma }-(j)^{1-\gamma }\). Applying a θ-weighted scheme to Eq. (1.1), we have

$$ \frac{\partial ^{\gamma } C}{\partial t^{\gamma }} =\theta \bigl[\Delta C(X)\bigr]^{ \xi +1}+(1- \theta )\bigl[\Delta C(X)\bigr]^{\xi }+\mathbf{B}^{\xi +1}(X). $$
(3.2)
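Before proceeding, the \(L_{1}\) discretization in Eq. (3.1) can be checked numerically. The sketch below (an illustration under our own assumptions, using SciPy's gamma function) applies the weights \(A_{\gamma }\) and \(\varphi _{\gamma }(j)\) to \(C(t)=t^{2}\), whose exact Caputo derivative is \(2t^{2-\gamma }/\varGamma (3-\gamma )\); the helper name caputo_l1 is hypothetical.

```python
import numpy as np
from scipy.special import gamma

def caputo_l1(c_hist, tau, gam):
    """L1 approximation of the Caputo derivative at the last time level,
    Eq. (3.1); c_hist = [C^0, C^1, ..., C^{xi+1}] at a fixed spatial point."""
    xi = len(c_hist) - 2
    A_gam = tau**(-gam) / gamma(2.0 - gam)
    phi = lambda j: (j + 1)**(1.0 - gam) - j**(1.0 - gam)
    return A_gam * sum((c_hist[xi - j + 1] - c_hist[xi - j]) * phi(j)
                       for j in range(xi + 1))

gam, tau = 0.5, 1.0e-3
t = np.arange(0.0, 1.0 + tau, tau)        # uniform grid on [0, 1]
approx = caputo_l1(t**2, tau, gam)
exact = 2.0 * 1.0**(2.0 - gam) / gamma(3.0 - gam)
print(abs(approx - exact))                # error is O(tau^{2 - gamma})
```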

Using Eq. (3.1) in Eq. (3.2), one can get

$$ \begin{aligned}[b] A_{\gamma }C^{\xi +1}(X)- \theta \bigl[\Delta C(X)\bigr]^{\xi +1}&=A_{\gamma }C^{ \xi }(X)-A_{\gamma } \sum_{j=1}^{\xi } \bigl[C^{\xi -j+1}(X)-C^{\xi -j}(X) \bigr] \varphi _{\gamma }(j) \\ &\quad{}+(1-\theta )\bigl[\Delta C(X)\bigr]^{\xi }+\mathbf{B}^{\xi +1}(X). \end{aligned} $$
(3.3)

In Eq. (3.3), \(\Delta =\frac{\partial ^{2}}{\partial x^{2}}+ \frac{\partial ^{2}}{\partial y^{2}}\); therefore we expand the mixed derivative \(C_{xxyy}\) in a truncated two-dimensional HW series as

$$ C^{\xi +1}_{xxyy}(x,y)=\sum _{i=1}^{2\mathbb{M}}\sum_{l=1}^{2 \mathbb{M}} \delta ^{\xi +1} _{i,l}\mathbb{H}_{i}(x) \mathbb{H}_{l}(y), $$
(3.4)

where \(\delta ^{\xi +1}_{i,l}\) are the wavelet coefficients to be determined. Integrating Eq. (3.4) with respect to y over \([0,y]\) leads to

$$ C^{\xi +1}_{xxy}(x,y)=\sum _{i=1}^{2\mathbb{M}}\sum_{l=1}^{2 \mathbb{M}} \delta ^{\xi +1} _{i,l}\mathbb{H}_{i}(x) \mathbb{P}_{l,1}(y)+C^{ \xi +1}_{xxy}(x,0). $$
(3.5)

Integrating Eq. (3.5) with respect to y from 0 to 1, the unknown term \(C^{\xi +1}_{xxy}(x,0)\) can be computed as

$$ C^{\xi +1}_{xxy}(x,0)=C^{\xi +1}_{xx}(x,1)-C_{xx}^{\xi +1}(x,0)- \sum_{i=1}^{2 \mathbb{M}} \sum _{l=1}^{2\mathbb{M}}\delta ^{\xi +1}_{i,l} \mathbb{H}_{i}(x) \mathbb{P}_{l,2}(1). $$
(3.6)

Substituting Eq. (3.6) into Eq. (3.5), we obtain

$$ C^{\xi +1}_{xxy}(x,y)=\sum _{i=1}^{2\mathbb{M}}\sum_{l=1}^{2 \mathbb{M}} \delta ^{\xi +1}_{i,l}\mathbb{H}_{i}(x) \bigl[ \mathbb{P}_{l,1}(y)- \mathbb{P}_{l,2}(1) \bigr]+C^{\xi +1}_{xx}(x,1)-C_{xx}^{\xi +1}(x,0). $$
(3.7)

Integrating Eq. (3.7) over \([0,y]\), we get

$$ C^{\xi +1}_{xx}(x,y)=\sum _{i=1}^{2\mathbb{M}}\sum_{l=1}^{2 \mathbb{M}} \delta ^{\xi +1}_{i,l}\mathbb{H}_{i}(x) \bigl[ \mathbb{P}_{l,2}(y)-y \mathbb{P}_{l,2}(1) \bigr]+yC^{\xi +1}_{xx}(x,1)+(1-y)C_{xx}^{\xi +1}(x,0). $$
(3.8)
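To make the role of Eq. (3.8) concrete, the following sketch (a hypothetical helper, reusing haar and P from the Sect. 2 sketch and assuming the boundary data are known) evaluates \(C^{\xi +1}_{xx}(x,y)\) from a given coefficient array.

```python
def cxx_from_coeffs(delta, x, y, cxx_y0, cxx_y1):
    """Evaluate Eq. (3.8): C_xx^{xi+1}(x, y) from the 2M x 2M coefficient
    array delta and the boundary values cxx_y0 = C_xx(x, 0), cxx_y1 = C_xx(x, 1).
    Uses haar(i, x) and P(i, alpha, x) from the Sect. 2 sketch."""
    N = delta.shape[0]                          # N = 2M
    s = 0.0
    for i in range(1, N + 1):
        for l in range(1, N + 1):
            s += delta[i - 1, l - 1] * haar(i, x) * (P(l, 2, y) - y * P(l, 2, 1.0))
    return s + y * cxx_y1 + (1.0 - y) * cxx_y0
```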

Repeating the same procedure, one can easily derive the following expressions:

$$\begin{aligned}& C_{yy}^{\xi +1}(x,y) =\sum _{i=1}^{2\mathbb{M}}\sum_{l=1}^{2 \mathbb{M}} \delta ^{\xi +1}_{i,l} \bigl[\mathbb{P}_{i,2}(x)- x \mathbb{P}_{i,2}(1) \bigr]\mathbb{H}_{l}(y)+xC_{yy}^{\xi +1}(1,y)+ (1-x )C_{yy}^{\xi +1}(0,y), \end{aligned}$$
(3.9)
$$\begin{aligned}& \begin{aligned} C_{x}^{\xi +1}(x,y)& =\sum_{i=1}^{2\mathbb{M}} \sum_{l=1}^{2 \mathbb{M}}\delta ^{\xi +1}_{i,l} \bigl[\mathbb{P}_{i,1}(x)-\mathbb{P}_{i,2}(1) \bigr] \bigl[ \mathbb{P}_{l,2}(y)-y \mathbb{P}_{l,2}(1) \bigr] +yC_{x}^{\xi +1}(x,1) \\ & \quad{} +(1-y)C_{x}^{\xi +1}(x,0)+C^{\xi +1}(1,y) -C^{\xi +1}(0,y)-yC^{ \xi +1}(1,1) +yC^{\xi +1}(0,1) \\ & \quad{} +(y-1)C^{\xi +1}(1,0)+(1-y)C^{\xi +1}(0,0), \end{aligned} \\& \begin{aligned} C_{y}^{\xi +1}(x,y)&=\sum_{i=1}^{2\mathbb{M}} \sum_{l=1}^{2 \mathbb{M}}\delta ^{\xi +1}_{i,l} \bigl[\mathbb{P}_{i,2}(x)-x \mathbb{P}_{i,2}(1) \bigr] \bigl[ \mathbb{P}_{l,1}(y)-\mathbb{P}_{l,2}(1) \bigr] +xC_{y}^{\xi +1}(1,y) \\ & \quad{} +(1-x)C_{y}^{\xi +1}(0,y)+C^{\xi +1}(x,1)-C^{\xi +1}(x,0)-xC^{ \xi +1}(1,1) +xC^{\xi +1}(1,0) \\ & \quad{} +(x-1)C^{\xi +1}(0,1)+(1-x)C^{\xi +1}(0,0), \end{aligned} \\& \begin{aligned}[b] C^{\xi +1}(x,y)&= \sum_{i=1}^{2\mathbb{M}} \sum_{l=1}^{2\mathbb{M}} \delta ^{\xi +1}_{i,l} \bigl[\mathbb{P}_{i,2}(x)-x\mathbb{P}_{i,2}(1) \bigr] \bigl[ \mathbb{P}_{l,2}(y)-y\mathbb{P}_{l,2}(1) \bigr] +yC^{\xi +1}(x,1) \\ & \quad{} -yC^{\xi +1}(0,1) +(1-y) \bigl[C^{\xi +1}(x,0)-C^{\xi +1}(0,0) \bigr] +xC^{\xi +1}(1,y) \\ & \quad{} -xC^{\xi +1}(0,y) -xy \bigl[C^{\xi +1}(1,1)-C^{\xi +1}(0,1) \bigr]+x (y-1 )C^{\xi +1}(1,0) \\ & \quad{} +x(1-y)C^{\xi +1}(0,0) +C^{\xi +1}(0,y). \end{aligned} \end{aligned}$$
(3.10)

As the proposed method is based on a collocation approach, we use the following collocation points:

$$ x_{\omega }=\frac{\omega -0.5}{2\mathbb{M}},\qquad y_{\varepsilon }= \frac{\varepsilon -0.5}{2\mathbb{M}}, \quad \text{where }\omega , \varepsilon =1,2,\dots ,2\mathbb{M}. $$

Substituting collocation points and values from Eqs. (3.8), (3.9), and (3.10) into Eq. (3.3) leads to the following system of algebraic equations:

$$ \sum_{i=1}^{2\mathbb{M}}\sum _{l=1}^{2\mathbb{M}}\delta ^{\xi +1}_{i,l} \bigl[A_{\gamma }\varOmega _{i,l}({\omega ,\varepsilon }) -\theta \varPhi _{i,l}({ \omega ,\varepsilon }) \bigr]=\varUpsilon ({\omega , \varepsilon }), $$
(3.11)

where

$$\begin{aligned}& \varOmega _{i,l}(\omega ,\varepsilon )= \bigl[ \mathbb{P}_{i,2}(x_{\omega })-x_{\omega }\mathbb{P}_{i,2}(1) \bigr] \bigl[\mathbb{P}_{l,2}(y_{\varepsilon })-y_{\varepsilon } \mathbb{P}_{l,2}(1) \bigr], \\& \varPhi _{i,l}(\omega ,\varepsilon )=\mathbb{H}_{i}(x_{\omega }) \bigl[ \mathbb{P}_{l,2}(y_{\varepsilon })-y_{\varepsilon } \mathbb{P}_{l,2}(1) \bigr]+ \bigl[\mathbb{P}_{i,2}(x_{\omega }) -x_{\omega }\mathbb{P}_{i,2}(1) \bigr]\mathbb{H}_{l}(y_{\varepsilon }), \\& \begin{aligned} \varUpsilon (\omega ,\varepsilon )&=A_{\gamma }C^{\xi }(x_{\omega },y_{ \varepsilon })-A_{\gamma } \sum_{j=1}^{\xi } \bigl[C^{\xi -j+1} (x_{\omega },y_{ \varepsilon })-C^{\xi -j}(x_{\omega },y_{\varepsilon }) \bigr] \varphi _{ \gamma }(j)\\ &\quad {}+(1-\theta )\bigl[\Delta C(x_{\omega },y_{\varepsilon }) \bigr]^{\xi }+\mathbf{B}^{\xi +1}(x_{\omega },y_{\varepsilon }) \\ &\quad{} -A_{\gamma } \bigl[y_{ \varepsilon }C^{\xi +1}(x_{\omega },1) -y_{\varepsilon }C^{\xi +1}(0,1) +(1-y_{ \varepsilon }) \bigl\{ C^{\xi +1}(x_{\omega },0)-C^{\xi +1}(0,0) \bigr\} \\ &\quad{} +x_{\omega }C^{\xi +1}(1,y_{\varepsilon })-x_{\omega }C^{\xi +1}(0,y_{ \varepsilon }) -x_{\omega }y_{\varepsilon } \bigl\{ C^{\xi +1}(1,1)-C^{\xi +1}(0,1) \bigr\} \\ &\quad {}+x_{\omega } (y_{\varepsilon }-1 )C^{\xi +1}(1,0)+x_{\omega }(1-y_{\varepsilon })C^{\xi +1}(0,0) +C^{\xi +1}(0,y_{ \varepsilon }) \bigr] \\ &\quad{}+\theta \bigl[y_{\varepsilon }C^{\xi +1}_{xx}(x_{\omega },1)+(1-y_{\varepsilon })C_{xx}^{\xi +1}(x_{\omega },0) \\ &\quad{}+x_{\omega }C_{yy}^{\xi +1}(1,y_{\varepsilon }) + (1-x_{\omega } )C_{yy}^{\xi +1}(0,y_{\varepsilon }) \bigr]. \end{aligned} \end{aligned}$$

Eq. (3.11) constitutes a system of \(2\mathbb{M}\times 2\mathbb{M}\) algebraic equations whose coefficient matrix has size \((2\mathbb{M})^{2}\times (2\mathbb{M})^{2}\). After solving the system, the approximate solution can be computed from Eq. (3.10) at an arbitrary time.
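For orientation, the sketch below (our own illustration, with hypothetical variable names, reusing haar and P from the Sect. 2 sketch) assembles the coefficient matrix \(A_{\gamma }\varOmega -\theta \varPhi \) of Eq. (3.11) for \(\theta =1\); building the right-hand side Υ from the history terms and boundary data follows the same pattern and is omitted here.

```python
import numpy as np
from scipy.special import gamma as Gamma

J, gam, tau, theta = 2, 0.5, 0.01, 1.0
N = 2**(J + 1)                                   # N = 2M collocation points per direction
xc = (np.arange(1, N + 1) - 0.5) / N             # x_omega (= y_epsilon, same grid)
A_gam = tau**(-gam) / Gamma(2.0 - gam)

def Q(i, s):
    """P_{i,2}(s) - s * P_{i,2}(1), the factor appearing in Omega and Phi."""
    return P(i, 2, s) - s * P(i, 2, 1.0)

LHS = np.zeros((N * N, N * N))
for w in range(N):                               # index omega
    for e in range(N):                           # index epsilon
        row = w * N + e
        for i in range(1, N + 1):
            for l in range(1, N + 1):
                Om = Q(i, xc[w]) * Q(l, xc[e])                                    # Omega_{i,l}
                Ph = haar(i, xc[w]) * Q(l, xc[e]) + Q(i, xc[w]) * haar(l, xc[e])  # Phi_{i,l}
                LHS[row, (i - 1) * N + (l - 1)] = A_gam * Om - theta * Ph
# at each time level: delta = np.linalg.solve(LHS, rhs), with rhs built from Upsilon
```

Since this matrix does not depend on the time level, it can be factorized once (for instance by an LU decomposition) and reused at every step, which keeps the per-step cost low.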

4 Convergence and stability

First we address the convergence of the proposed scheme. The following lemma is needed for the convergence theorem.

Lemma 1

([42])

If \(f ( x , y )\) satisfies a Lipschitz condition on \([0, 1] \times [0, 1]\), that is, there exists a positive constant L such that \(| f(x_{1} , y )-f(x_{2} , y ) | \leq L | x_{1}-x_{2} | \) for all \(( x_{1} , y ), ( x_{2} , y ) \in [0, 1] \times [0, 1]\), then

$$ \delta ^{2}_{i,l}\leq \frac{L^{2}}{2^{4\lambda +4}m^{2}}. $$

Theorem 1

([43])

Assume \(C(x,y)\) and \(C_{2\mathbb{M}}(x,y)\) are the exact and approximate solutions of Eq. (1.1), respectively; then

$$ \Vert E_{J} \Vert \leq \frac{L}{4\sqrt{255}} \frac{1}{2^{4J}}. $$
(4.1)

The above result indicates that the error norm decreases as the resolution level increases: each unit increase in J reduces the bound by a factor of \(2^{4}=16\).

Proof

To prove the theorem, consider the expansion corresponding to Eq. (3.10) over the full (untruncated) Haar basis:

$$ C^{\xi +1}(x,y)=\sum_{i=1}^{\infty } \sum_{l=1}^{\infty }\delta ^{\xi +1}_{i,l} \bigl[\mathbb{P}_{i,2}(x)-x\mathbb{P}_{i,2}(1) \bigr] \bigl[ \mathbb{P}_{l,2}(y)-y \mathbb{P}_{l,2}(1) \bigr]+\varLambda (x,y), $$
(4.2)

where

$$\begin{aligned} \varLambda (x,y)&=yC^{\xi +1}(x,1) -yC^{\xi +1}(0,1) +(1-y) \bigl[C^{\xi +1}(x,0)-C^{ \xi +1}(0,0) \bigr] \\ &\quad{}+xC^{\xi +1}(1,y) -xC^{\xi +1}(0,y) -xy \bigl[C^{\xi +1}(1,1)-C^{\xi +1}(0,1) \bigr] \\ &\quad {}+x (y-1 )C^{\xi +1}(1,0) +x(1-y)C^{\xi +1}(0,0) +C^{\xi +1}(0,y). \end{aligned}$$

From Eq. (4.2) at resolution level J, we can write

$$ \begin{aligned}[b] \vert E_{2\mathbb{M}} \vert &= \bigl\vert C(x,y)-C_{2\mathbb{M}}(x,y) \bigr\vert \\ &= \Biggl\vert \sum _{i=2\mathbb{M}+1}^{\infty }\sum_{l=2\mathbb{M}+1}^{\infty } \delta ^{\xi +1}_{i,l} \bigl[\mathbb{P}_{i,2}(x)-x \mathbb{P}_{i,2}(1) \bigr] \bigl[\mathbb{P}_{l,2}(y)-y \mathbb{P}_{l,2}(1) \bigr] \Biggr\vert . \end{aligned} $$
(4.3)

The \(L^{2}\)-norm is given by

$$\begin{aligned} \Vert E_{2\mathbb{M}} \Vert ^{2}& = \Biggl\vert \int _{0}^{1} \int _{0}^{1} \Biggl(\sum _{i=2\mathbb{M}+1}^{\infty }\sum_{l=2\mathbb{M}+1}^{ \infty } \delta ^{\xi +1}_{i,l} \bigl[\mathbb{P}_{i,2}(x)-x \mathbb{P}_{i,2}(1) \bigr] \bigl[\mathbb{P}_{l,2}(y)-y \mathbb{P}_{l,2}(1) \bigr] \Biggr)^{2}\,dx \,dy \Biggr\vert \\ & =\sum_{i,l=2\mathbb{M}+1}^{\infty }\sum _{\tilde{i},\tilde{l}=2 \mathbb{M}+1}^{\infty }\delta ^{\xi +1}_{i,l}\delta ^{\xi +1}_{ \tilde{i},\tilde{l}} \biggl\vert \int _{0}^{1} \int _{0}^{1} \bigl( \bigl[ \mathbb{P}_{i,2}(x)-x \mathbb{P}_{i,2}(1) \bigr] \bigl[\mathbb{P}_{l,2}(y)-y \mathbb{P}_{l,2}(1) \bigr] \bigr)^{2}\,dx \,dy \biggr\vert \\ & = \sum_{i,l=2\mathbb{M}+1}^{\infty }\delta ^{2}_{i,l} \biggl\vert \int _{0}^{1} \int _{0}^{1} \bigl( \bigl[\mathbb{P}_{i,2}(x)-x \mathbb{P}_{i,2}(1) \bigr] \bigl[\mathbb{P}_{l,2}(y)-y \mathbb{P}_{l,2}(1) \bigr] \bigr)^{2}\,dx \,dy \biggr\vert . \end{aligned}$$
(4.4)

To evaluate this integral, it is sufficient to estimate the maximum bounds of the Haar wavelet integrals, which can be obtained from the expression [30]:

$$\begin{aligned} \mathbb{P}_{i,n}(x)&=\frac{1}{n!} \bigl[ (x-\zeta _{1} )^{n}- 2 (x-\zeta _{2} )^{n}+ (x-\zeta _{3} )^{n} \bigr] \\ & =\frac{1}{n!}\sum_{\kappa =2}^{n} \binom{n}{\kappa } (x- \zeta _{2} )^{n-\kappa } \biggl[ \biggl( \frac{1}{2^{\lambda +1}} \biggr)^{\kappa } + \biggl(\frac{-1}{2^{\lambda +1}} \biggr)^{\kappa } \biggr] \end{aligned}$$
(4.5)
$$\begin{aligned} & \le \frac{1}{n!}\sum_{\kappa =2}^{n} \binom{n}{\kappa } (1- \zeta _{2} )^{n-\kappa } \biggl[ \biggl( \frac{1}{2^{\lambda +1}} \biggr)^{\kappa } + \biggl(\frac{-1}{2^{\lambda +1}} \biggr)^{\kappa } \biggr] \\ & = \mathbb{P}_{i,n}(1) \end{aligned}$$
(4.6)

From Eq. (4.5), one can deduce that

$$ \mathbb{P}_{i,2}(1)\le \frac{1}{ (2^{\lambda +1} )^{2}}. $$
(4.7)

Using Eq. (4.7) and Lemma 1 in Eq. (4.4), we obtain

$$\begin{aligned} \Vert E_{2\mathbb{M}} \Vert ^{2}&\leq \sum _{i, l=2\mathbb{M}+1}^{ \infty }\frac{16L^{2}}{2^{4\lambda +4}2^{2\lambda }} \int _{\zeta _{1}}^{1} \int _{\zeta _{1}}^{1}\frac{1}{2^{4\lambda +4}}\,dx \,dy \end{aligned}$$
(4.8)
$$\begin{aligned} &=\sum_{i, l=2\mathbb{M}+1}^{\infty }\frac{16L^{2}}{2^{10\lambda +8}} \int _{\zeta _{1}}^{1}(1-\zeta _{1})\,dy \\ &\leq \sum_{i, l=2\mathbb{M}+1}^{\infty } \frac{16L^{2}}{2^{10\lambda +8}} \int _{\zeta _{1}}^{1}\,dy \\ & \leq \sum_{i, l=2\mathbb{M}+1}^{\infty } \frac{16L^{2}}{2^{10\lambda +8}} \\ &=\sum_{\lambda =J+1}^{\infty } \Biggl\{ \sum _{i=0}^{2^{\lambda }-1} \sum_{l=0}^{2^{\lambda }-1} \frac{16L^{2}}{2^{10\lambda +8}} \Biggr\} \\ &=\sum_{\lambda =J+1}^{\infty }\frac{16L^{2}}{2^{8\lambda +8}}, \end{aligned}$$
(4.9)

which gives

$$ \Vert E_{2\mathbb{M}} \Vert \leq \frac{L}{4\sqrt{255}} \frac{1}{2^{4J}}. $$
(4.10)

This completes the proof of the theorem. □
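To see what the bound in Eq. (4.10) means in practice, the short sketch below (assuming, for illustration only, a Lipschitz constant \(L=1\)) tabulates the right-hand side for several resolution levels; each increment of J divides the bound by \(2^{4}=16\).

```python
import numpy as np

L = 1.0                                       # assumed Lipschitz constant (illustrative)
for J in range(1, 6):
    bound = L / (4.0 * np.sqrt(255.0) * 2.0**(4 * J))
    print(J, bound)                           # bound shrinks by a factor of 16 per level
```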

4.1 Stability

Here the stability analysis of the proposed scheme is presented. In matrix form, Eqs. (3.8), (3.9), and (3.10) can be written as

$$\begin{aligned}& C_{xx}^{\xi +1}=\mathcal{D}\delta ^{\xi +1}+\tilde{\mathcal{D}}^{\xi +1}, \end{aligned}$$
(4.11)
$$\begin{aligned}& C_{yy}^{\xi +1}=\mathcal{E}\delta ^{\xi +1}+\tilde{\mathcal{E}}^{\xi +1}, \end{aligned}$$
(4.12)
$$\begin{aligned}& C^{\xi +1}=\mathcal{F}\delta ^{\xi +1}+\tilde{ \mathcal{F}}^{\xi +1}, \end{aligned}$$
(4.13)

where \(\delta ^{\xi +1}=\delta ^{\xi +1}(i,l)\); \(\mathcal{D}\), \(\mathcal{E}\), \(\mathcal{F}\) and \(\tilde{\mathcal{D}}^{\xi +1}\), \(\tilde{\mathcal{E}}^{\xi +1}\), \(\tilde{\mathcal{F}}^{\xi +1}\) are the interpolation matrices of \(C^{\xi +1}_{xx}\), \(C^{\xi +1}_{yy}\), \(C^{\xi +1}\) at the collocation points and the corresponding boundary-term vectors, respectively. Now using Eqs. (4.11), (4.12), and (4.13) in Eq. (3.3), we get

$$ \bigl[A_{\gamma }\mathcal{F}-\theta (\mathcal{D}+\mathcal{E} ) \bigr]\delta ^{\xi +1}= \bigl[A_{\gamma }\mathcal{F}+(1-\theta ) ( \mathcal{D}+\mathcal{E} ) \bigr]\delta ^{\xi }+\mathcal{G}^{\xi +1}, $$
(4.14)

where

$$ \begin{aligned} \mathcal{G}^{\xi +1}&=-A_{\gamma }\tilde{ \mathcal{F}}^{ \xi +1}+\theta \bigl(\tilde{\mathcal{D}}^{\xi +1}+ \tilde{\mathcal{E}}^{ \xi +1} \bigr)+A_{\gamma }\tilde{ \mathcal{F}^{\xi }} +(1-\theta ) \bigl( \tilde{\mathcal{D}^{\xi }}+ \tilde{\mathcal{E}^{\xi }}\bigr) \\ &\quad{}+\mathbf{B}^{\xi +1}-A_{\gamma }\sum _{j=1}^{\xi } \bigl[C^{\xi -j+1}(X)-C^{ \xi -j}(X) \bigr] \varphi _{\gamma }(j). \end{aligned} $$

From Eq. (4.14) one can write

$$ \delta ^{\xi +1}=\mathcal{M}^{-1}\mathcal{N} \delta ^{\xi }+\mathcal{M}^{-1} \mathcal{G}^{\xi +1}, $$
(4.15)

where \(\mathcal{M}= [A_{\gamma }\mathcal{F}-\theta (\mathcal{D}+ \mathcal{E} ) ]\), \(\mathcal{N}= [A_{\gamma }\mathcal{F}+(1- \theta ) (\mathcal{D}+\mathcal{E} ) ]\). Putting Eq. (4.15) in Eq. (4.13), we get

$$ C^{\xi +1}=\mathcal{F}\mathcal{M}^{-1}\mathcal{N} \delta ^{\xi }+ \mathcal{F}\mathcal{M}^{-1}\mathcal{G}^{\xi +1} +\tilde{\mathcal{F}}^{ \xi +1}. $$
(4.16)

Using Eq. (4.13) in Eq. (4.16), we have

$$ C^{\xi +1}=\mathcal{F}\mathcal{M}^{-1}\mathcal{N} \mathcal{F}^{-1}C^{ \xi } -\mathcal{F}\mathcal{M}^{-1} \mathcal{N}\mathcal{F}^{-1} \tilde{\mathcal{F}}^{\xi } + \mathcal{F}\mathcal{M}^{-1}\mathcal{G}^{ \xi +1} +\tilde{ \mathcal{F}}^{\xi +1}. $$
(4.17)

The above equation is a recurrence relation of the fully discretized scheme, which allows refinement in time. If \(\tilde{C}^{\xi +1}\) denotes the perturbed numerical solution, then

$$ \tilde{C}^{\xi +1}=\mathcal{F}\mathcal{M}^{-1} \mathcal{N}\mathcal{F}^{-1} \tilde{C^{\xi }} -\mathcal{F} \mathcal{M}^{-1}\mathcal{N}\mathcal{F}^{-1} \tilde{ \mathcal{F}}^{\xi } +\mathcal{F}\mathcal{M}^{-1} \mathcal{G}^{ \xi +1} +\tilde{\mathcal{F}}^{\xi +1}. $$
(4.18)

Let \(e^{\xi }=C^{\xi }-\tilde{C}^{\xi }\) denote the error at the ξth time level. Subtracting Eq. (4.18) from Eq. (4.17) then gives

$$ e^{\xi +1}=\mathcal{T}e^{\xi }, $$

where \(\mathcal{T}=\mathcal{F}\mathcal{M}^{-1}\mathcal{N}\mathcal{F}^{-1}\) is the amplification matrix. According to the Lax–Richtmyer criterion, the scheme is stable if \(\|\mathcal{T}\|\leq 1\).
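In computations, this condition can be monitored directly. The sketch below (a hypothetical helper, not the paper's code) forms \(\mathcal{T}\) from the interpolation matrices and returns its spectral radius, which is the quantity reported alongside the error norms in Sect. 5.

```python
import numpy as np

def spectral_radius_T(D, E, F, A_gam, theta=1.0):
    """Spectral radius of T = F M^{-1} N F^{-1}, with
    M = A_gam*F - theta*(D + E) and N = A_gam*F + (1 - theta)*(D + E)."""
    M = A_gam * F - theta * (D + E)
    N = A_gam * F + (1.0 - theta) * (D + E)
    T = F @ np.linalg.solve(M, N) @ np.linalg.inv(F)
    return np.max(np.abs(np.linalg.eigvals(T)))
```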

5 Illustrative test problems

In this section, we solve some test problems to validate the proposed method. To assess accuracy, we compute the error norms \(E_{\infty }\) and \(E_{\mathrm{rms}}\), defined as follows:

$$ \begin{aligned} E_{\infty }=\max _{1\le i,j\le 2\mathbb{M}} \bigl\vert C_{i,j}^{\mathrm{ext}}-C_{i,j}^{\mathrm{app}} \bigr\vert ,\qquad E_{\mathrm{rms}} &= \sqrt{\frac{1}{2\mathbb{M}\times 2\mathbb{M}}{\sum _{i=1}^{2 \mathbb{M}}\sum_{j=1}^{2\mathbb{M}} \bigl(C_{i,j}^{\mathrm{ext}}-C_{i,j}^{\mathrm{app}} \bigr)^{2}}}. \end{aligned} $$
(5.1)
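These norms are straightforward to evaluate once the exact and approximate solutions are sampled on the collocation grid; a minimal sketch (hypothetical helper name) is given below.

```python
import numpy as np

def error_norms(C_exact, C_approx):
    """E_inf and E_rms of Eq. (5.1) on the 2M x 2M collocation grid."""
    err = np.abs(np.asarray(C_exact) - np.asarray(C_approx))
    return err.max(), np.sqrt(np.mean(err**2))
```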

Example 5.1

Consider Eq. (1.1) with source term \(\mathbf{B}(x,y,t)= [\frac{2t^{2-\gamma }}{\varGamma (3-\gamma )}+2t^{2} ] \sin (x)\sin (y)\). The initial and boundary conditions (Eqs. (1.2)–(1.3)) are extracted from the exact solution \(C(x,y,t)=t^{2}\sin (x)\sin (y)\). The problem has been solved for \(\tau =0.01\), \(J=4\), \(\gamma =0.1,0.3,0.5,0.7,0.9\), and the obtained results are tabulated in Table 1. Table 2 lists the calculated error norms for decreasing time step sizes, which show that the scheme is convergent in time. Tables 1 and 2 also report the spectral radius of the amplification matrix. It is obvious from the tables that the computed solutions agree with the exact ones and that the spectral radius lies in the stability domain. The solution profile and absolute errors are plotted in Fig. 1. From the figure it is clear that the computed solutions agree well with the exact solution.

Figure 1 Graphical behavior of Example 5.1 when \(t=0.4\), \(\gamma =0.5\)

Table 1 Error norms of Example 5.1 at \(J=4\), \(\tau = 0.01\), \(\theta = 1\)
Table 2 Error norms of Example 5.1 at \(\gamma =0.75\), \(\theta = 1\)

Example 5.2

Consider Eq. (1.1) with the exact solution \(C(x,y,t)=\exp (x+y)t^{1+\gamma }\). The initial and boundary conditions are taken from the exact solution, and the corresponding source term \(\mathbf{B}(x,y,t)\) is easy to derive. The problem has been solved for different times, and the obtained results are reported in Table 3. Next we fix \(t=0.2\), \(\gamma =0.9\) while varying the resolution level J; the achieved results are presented in Table 4. One can clearly see that the accuracy increases with the resolution level, which guarantees spatial convergence. The same table reveals that the spectral radius remains the same as the resolution level varies. To illustrate the spatial convergence further, the absolute errors \(E_{\mathrm{abs}}\) at different mesh points are given in Table 5. Graphical solutions together with absolute errors are shown in Fig. 2. From the figure it is obvious that the obtained results and the exact solution are in strong agreement.

Figure 2 Graphical behavior of Example 5.2 when \(t=0.2\), \(\gamma =0.5\)

Table 3 Error norms of Example 5.2 at \(J=4\), \(\tau = 0.001\), \(\theta = 1\)
Table 4 Error norms of Example 5.2 at \(\gamma =0.9\), \(\theta = 1\)
Table 5 Absolute errors at different points of Example 5.2 at \(t=0.2\), \(\gamma =0.9\)

Example 5.3

Consider Eq. (1.1) with the exact solution \(C(x,y,t)=t^{2}(x-x^{2})^{2}(y-y^{2})^{2}\). The source term is easy to compute from the exact solution. In Table 6, we record the computed error norms for the parameters \(t=0.5,0.1\), \(\gamma =0.2,0.4,0.6,0.8\), \(\tau =0.01\). In Fig. 3, exact versus approximate solutions and absolute errors are plotted. From the tabulated data and graphical solutions, it is clear that the computed solutions match the exact one well.

Figure 3 Graphical behavior of Example 5.3 when \(t=0.5\), \(\gamma =0.75\)

Table 6 Error norms of Example 5.3 at \(J=4\), \(\tau = 0.01\), \(\theta = 1\)

Example 5.4

Now we consider a nonlinear time fractional differential equation of the form

$$ \frac{\partial ^{\gamma }C(X,t)}{\partial t^{\gamma }}=\Delta C(X,t)+2C(X,t)-C^{2}(X,t)+ \mathbf{B(X,t)},\quad t>0, X \in \varPsi , $$
(5.2)

where

$$ \begin{aligned} \mathbf{B(X,t)}&=\frac{\sin (x)\sin (y)}{4} \biggl[ \frac{\varGamma (2.4)}{\varGamma (2.4-\gamma )}t^{(1.4-\gamma )}+ \frac{\varGamma (2.2)}{\varGamma (2.2-\gamma )}t^{(1.2-\gamma )}+ \frac{\varGamma (2)}{\varGamma (2-\gamma )}t^{1-\gamma } \biggr] \\ &\quad{}+\frac{\sin ^{2}(x)\sin ^{2}(y)}{16} \bigl(t^{1.4}+t^{1.2}+t+1 \bigr)^{2}. \end{aligned} $$

The associated initial and boundary conditions have been extracted from the exact solution

$$ C(x,y,t)=\frac{\sin (x)\sin (y)(t^{1.4}+t^{1.2}+t+1)}{4}. $$

In Table 7, we present the computed error norms for the nonlinear problem at different values of γ. From the table it is obvious that the current scheme also works for two-dimensional nonlinear time fractional problems. Graphical solutions together with absolute errors are shown in Fig. 4. From the figure one can observe that the computed solutions match the exact solution well.

Figure 4 Graphical behavior of Example 5.4 when \(t=0.1\), \(\gamma =0.9\)

Table 7 Error norms of Example 5.4 at \(J=3\), \(\theta = 1\)

Example 5.5

Finally, we consider a nonlinear time fractional differential equation of the form

$$ \frac{\partial ^{\gamma }C(X,t)}{\partial t^{\gamma }}=\Delta C(X,t)+C(X,t)-C^{2}(X,t)+ \mathbf{B(X,t)},\quad t>0, X \in \varPsi , $$
(5.3)

where

$$ \begin{aligned} \mathbf{B(X,t)}&= \biggl(\varGamma (1+\gamma )+ \frac{\varGamma (3)}{\varGamma (3-\gamma )}t^{(2-\gamma )} \biggr) x^{2}(1-x)^{2}y^{2}(1-y)^{2}\\ &\quad {}-2 \bigl(t^{\gamma }+t^{2}\bigr) \bigl[ \bigl((1-x)^{2}-4x(1-x)+x^{2} \bigr)y^{2}(1-y)^{2}+ \bigl((1-y)^{2}-4y(1-y)+y^{2} \bigr)x^{2}(1-x)^{2} \bigr] \\ &\quad {}- \bigl(t^{\gamma }+t^{2}\bigr) x^{2}(1-x)^{2}y^{2}(1-y)^{2} \bigl(1-\bigl(t^{\gamma }+t^{2}\bigr) x^{2}(1-x)^{2}y^{2}(1-y)^{2} \bigr). \end{aligned} $$

The exact solution of this problem is \(C(x,y,t)=(t^{\gamma }+t^{2})x^{2}(1-x)^{2}y^{2}(1-y)^{2}\). It is obvious that the first time derivative of the exact solution approaches infinity as \(t \rightarrow 0\), which shows that the solution has an initial singularity in time. Computed results are reported in Table 8 for \(J=3\). From the table it is clear that the suggested scheme works even when an initial singularity in time exists. The same table also shows the time convergence of the suggested scheme for this singular problem.

Table 8 Error norms of Example 5.5 at \(J=3\)

6 Concluding remarks

In this work, we proposed a numerical scheme based on two-dimensional Haar wavelets combined with finite differences for the solution of time fractional diffusion equations. The scheme has been applied to five test problems, and the achieved results are reported in tabular as well as graphical form. It has been observed that two-dimensional Haar wavelets work well for the solution of two-dimensional time fractional diffusion problems.